Greetings,
FYI, we noticed a -66.0% regression of fxmark.ssd_f2fs_dbench_client_4_bufferedio.works/sec due to commit:
commit: cb2ac2912a9ca7d3d26291c511939a41361d2d83 ("block: reduce kblockd_mod_delayed_work_on() CPU consumption")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: fxmark
on test machine: 24 threads, 1 socket, Intel Atom(R) P5362 processor with 64G memory
with the following parameters:
disk: 1SSD
media: ssd
test: dbench_client
fstype: f2fs
directio: bufferedio
cpufreq_governor: performance
ucode: 0x9c02000e
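For context on what the commit subject implies: kblockd_mod_delayed_work_on() wraps mod_delayed_work_on(), which re-arms a timer on every call, whereas a cheaper path can first check whether the work is already pending and skip the expensive re-arm. Below is a minimal Python sketch of that general pattern (hypothetical, for illustration only, not the actual kernel diff), which also shows why a skipped re-queue can leave work running later than the caller asked for:

```python
class Work:
    """Toy model of a delayed work item with a pending flag."""
    def __init__(self):
        self.pending = False
        self.run_at = None

    def mod_delayed(self, now, delay):
        # mod_delayed_work_on() semantics: always (re)arm the timer.
        # More CPU per call, but the work always runs at the most
        # recently requested time.
        self.pending = True
        self.run_at = now + delay

    def queue_if_idle(self, now, delay):
        # Cheaper pattern: skip entirely if the work is already pending.
        if self.pending:
            return False
        self.pending = True
        self.run_at = now + delay
        return True

w = Work()
w.queue_if_idle(now=0, delay=10)            # armed to run at t=10
requeued = w.queue_if_idle(now=5, delay=0)  # caller wants it now; skipped
# the cheap path leaves the work scheduled at t=10 instead of t=5
```

Under this (assumed) model, the saved CPU shows up as the kblockd_mod_delayed_work_on profile drop below, while delayed queue runs would be consistent with the large iowait increase.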
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <[email protected]>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.
=========================================================================================
compiler/cpufreq_governor/directio/disk/fstype/kconfig/media/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/bufferedio/1SSD/f2fs/x86_64-rhel-8.3/ssd/debian-10.4-x86_64-20200603.cgz/lkp-snr-a1/dbench_client/fxmark/0x9c02000e
commit:
edaa26334c ("iocost: Fix divide-by-zero on donation from low hweight cgroup")
cb2ac2912a ("block: reduce kblockd_mod_delayed_work_on() CPU consumption")
edaa26334c117a58 cb2ac2912a9ca7d3d26291c5119
---------------- ---------------------------
         %stddev     %change         %stddev
             \          |                \
0.05 ± 28% +26109.7% 13.54 ± 6% fxmark.ssd_f2fs_dbench_client_18_bufferedio.iowait_sec
0.09 ± 28% +26775.3% 23.05 ± 7% fxmark.ssd_f2fs_dbench_client_18_bufferedio.iowait_util
4.26 -20.9% 3.37 ± 15% fxmark.ssd_f2fs_dbench_client_18_bufferedio.irq_sec
7.07 -18.9% 5.74 ± 15% fxmark.ssd_f2fs_dbench_client_18_bufferedio.irq_util
1.07 -43.2% 0.61 ± 3% fxmark.ssd_f2fs_dbench_client_18_bufferedio.softirq_sec
1.77 -41.8% 1.03 ± 3% fxmark.ssd_f2fs_dbench_client_18_bufferedio.softirq_util
39.66 -25.7% 29.47 ± 2% fxmark.ssd_f2fs_dbench_client_18_bufferedio.sys_sec
65.84 -23.8% 50.16 ± 2% fxmark.ssd_f2fs_dbench_client_18_bufferedio.sys_util
15.10 -22.7% 11.67 fxmark.ssd_f2fs_dbench_client_18_bufferedio.user_sec
25.08 -20.8% 19.87 fxmark.ssd_f2fs_dbench_client_18_bufferedio.user_util
574.25 -18.5% 468.11 fxmark.ssd_f2fs_dbench_client_18_bufferedio.works/sec
7.78 ± 3% +458.1% 43.44 ± 6% fxmark.ssd_f2fs_dbench_client_2_bufferedio.iowait_sec
13.26 ± 3% +474.8% 76.21 ± 5% fxmark.ssd_f2fs_dbench_client_2_bufferedio.iowait_util
4.16 -27.3% 3.03 ± 17% fxmark.ssd_f2fs_dbench_client_2_bufferedio.irq_sec
7.10 -25.1% 5.32 ± 17% fxmark.ssd_f2fs_dbench_client_2_bufferedio.irq_util
0.91 -68.7% 0.28 ± 18% fxmark.ssd_f2fs_dbench_client_2_bufferedio.softirq_sec
1.55 -67.8% 0.50 ± 18% fxmark.ssd_f2fs_dbench_client_2_bufferedio.softirq_util
32.43 -77.8% 7.18 ± 22% fxmark.ssd_f2fs_dbench_client_2_bufferedio.sys_sec
55.25 -77.2% 12.61 ± 22% fxmark.ssd_f2fs_dbench_client_2_bufferedio.sys_util
13.31 -77.8% 2.96 ± 22% fxmark.ssd_f2fs_dbench_client_2_bufferedio.user_sec
22.67 -77.1% 5.19 ± 22% fxmark.ssd_f2fs_dbench_client_2_bufferedio.user_util
536.81 -72.8% 145.99 ± 19% fxmark.ssd_f2fs_dbench_client_2_bufferedio.works/sec
0.06 ± 19% +4041.0% 2.69 ± 10% fxmark.ssd_f2fs_dbench_client_36_bufferedio.iowait_sec
0.11 ± 19% +4061.6% 4.49 ± 10% fxmark.ssd_f2fs_dbench_client_36_bufferedio.iowait_util
4.19 -15.8% 3.53 ± 14% fxmark.ssd_f2fs_dbench_client_36_bufferedio.irq_sec
6.95 -15.4% 5.88 ± 14% fxmark.ssd_f2fs_dbench_client_36_bufferedio.irq_util
0.98 ± 4% -23.0% 0.76 ± 3% fxmark.ssd_f2fs_dbench_client_36_bufferedio.softirq_sec
1.63 ± 4% -22.6% 1.26 ± 3% fxmark.ssd_f2fs_dbench_client_36_bufferedio.softirq_util
3.17 ± 4% +1122.1% 38.74 fxmark.ssd_f2fs_dbench_client_4_bufferedio.iowait_sec
5.31 ± 4% +1189.4% 68.46 fxmark.ssd_f2fs_dbench_client_4_bufferedio.iowait_util
4.23 -26.6% 3.11 ± 16% fxmark.ssd_f2fs_dbench_client_4_bufferedio.irq_sec
7.09 -22.5% 5.49 ± 16% fxmark.ssd_f2fs_dbench_client_4_bufferedio.irq_util
1.04 ± 2% -62.0% 0.40 ± 13% fxmark.ssd_f2fs_dbench_client_4_bufferedio.softirq_sec
1.75 -59.9% 0.70 ± 13% fxmark.ssd_f2fs_dbench_client_4_bufferedio.softirq_util
36.84 -72.5% 10.13 fxmark.ssd_f2fs_dbench_client_4_bufferedio.sys_sec
61.70 -71.0% 17.90 fxmark.ssd_f2fs_dbench_client_4_bufferedio.sys_util
14.32 -71.2% 4.12 ± 2% fxmark.ssd_f2fs_dbench_client_4_bufferedio.user_sec
23.99 -69.6% 7.28 ± 2% fxmark.ssd_f2fs_dbench_client_4_bufferedio.user_util
573.42 -66.0% 194.68 fxmark.ssd_f2fs_dbench_client_4_bufferedio.works/sec
0.98 ± 2% -13.9% 0.85 ± 2% fxmark.ssd_f2fs_dbench_client_54_bufferedio.softirq_sec
1.63 ± 2% -13.9% 1.40 ± 2% fxmark.ssd_f2fs_dbench_client_54_bufferedio.softirq_util
0.93 ± 2% -12.4% 0.81 fxmark.ssd_f2fs_dbench_client_72_bufferedio.softirq_sec
1.53 ± 2% -12.4% 1.34 fxmark.ssd_f2fs_dbench_client_72_bufferedio.softirq_util
1.175e+08 -20.9% 92902262 fxmark.time.file_system_outputs
1890674 -57.6% 801366 ? 3% fxmark.time.involuntary_context_switches
61.00 -22.4% 47.33 fxmark.time.percent_of_cpu_this_job_got
252.35 -21.8% 197.33 fxmark.time.system_time
97.27 -22.3% 75.58 fxmark.time.user_time
1417786 -3.5% 1367858 fxmark.time.voluntary_context_switches
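For readers cross-checking the tables: the %change column is the relative change, (new - old) / old * 100, between the parent commit (left value) and cb2ac2912a (right value). For example, the headline works/sec drop recomputes as:

```python
def pct_change(old, new):
    """Relative change as printed in the %change column."""
    return (new - old) / old * 100

# fxmark.ssd_f2fs_dbench_client_4_bufferedio.works/sec: 573.42 -> 194.68
print(round(pct_change(573.42, 194.68), 1))  # -66.0
```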
1.298e+08 +68.8% 2.191e+08 ± 2% cpuidle..time
22.60 +72.8% 39.06 ± 2% iostat.cpu.iowait
55.58 -22.2% 43.24 iostat.cpu.system
21.69 -19.0% 17.57 iostat.cpu.user
22.68 +16.5 39.20 ± 2% mpstat.cpu.all.iowait%
1.22 -0.4 0.84 mpstat.cpu.all.soft%
47.15 -11.1 36.06 mpstat.cpu.all.sys%
21.50 -4.1 17.36 mpstat.cpu.all.usr%
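Note that the iostat rows above report relative %change while the mpstat rows report absolute percentage-point deltas, so the same iowait shift appears as +72.8% in one and +16.5 in the other. A quick consistency check on those two readings:

```python
# mpstat rows: absolute percentage-point delta
old_pp, new_pp = 22.68, 39.20          # mpstat.cpu.all.iowait%
points = new_pp - old_pp               # ~16.52, printed as +16.5

# iostat rows: relative change
old, new = 22.60, 39.06                # iostat.cpu.iowait
relative = (new - old) / old * 100     # ~72.8, printed as +72.8%

print(round(points, 1), round(relative, 1))
```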
829819 -43.9% 465519 softirqs.BLOCK
829819 -43.9% 465519 softirqs.CPU0.BLOCK
1416557 -20.6% 1124482 softirqs.CPU0.RCU
1416557 -20.6% 1124482 softirqs.RCU
1726 -19.2% 1394 turbostat.Avg_MHz
78.49 -15.1 63.38 turbostat.Busy%
3.90 ± 5% +6.7 10.61 ± 6% turbostat.C1%
203739 +42.9% 291057 ± 8% turbostat.C1E
17.53 +8.4 25.98 ± 6% turbostat.C1E%
3039624 -26.7% 2226778 ± 9% turbostat.IRQ
22.00 +75.0% 38.50 ± 2% vmstat.cpu.wa
26313 -20.7% 20853 vmstat.io.bo
0.00 +2e+102% 2.00 vmstat.procs.b
18.00 -13.0% 15.67 ? 3% vmstat.procs.r
13423 -34.5% 8795 vmstat.system.cs
5309 -26.9% 3880 ± 9% vmstat.system.in
14701469 -20.9% 11625477 proc-vmstat.nr_dirtied
9468 -0.8% 9395 proc-vmstat.nr_mapped
17652 -1.0% 17472 proc-vmstat.nr_slab_reclaimable
3770510 -20.5% 2995998 proc-vmstat.nr_written
14185527 -19.8% 11374067 proc-vmstat.numa_hit
14185527 -19.8% 11374067 proc-vmstat.numa_local
689034 -19.6% 553980 proc-vmstat.pgactivate
14193405 -19.8% 11381945 proc-vmstat.pgalloc_normal
758967 -21.5% 595725 proc-vmstat.pgdeactivate
909347 +1.1% 918936 proc-vmstat.pgfault
14213872 -19.8% 11402055 proc-vmstat.pgfree
15082048 -20.5% 11983998 proc-vmstat.pgpgout
783351 -22.4% 607984 proc-vmstat.pgrotated
1616179 ± 4% -55.7% 715381 ± 24% proc-vmstat.slabs_scanned
3.86 -1.1% 3.81 perf-stat.i.MPKI
3.895e+08 -20.3% 3.106e+08 perf-stat.i.branch-instructions
6018793 -20.4% 4791460 perf-stat.i.branch-misses
44.43 +1.2 45.67 perf-stat.i.cache-miss-rate%
5076824 -11.4% 4495612 perf-stat.i.cache-misses
8645920 -19.5% 6960538 perf-stat.i.cache-references
13446 -34.7% 8787 perf-stat.i.context-switches
1.718e+09 -19.3% 1.386e+09 perf-stat.i.cpu-cycles
414520 ± 5% -21.4% 325966 ± 6% perf-stat.i.dTLB-load-misses
4.994e+08 -20.9% 3.951e+08 perf-stat.i.dTLB-loads
30613 -26.8% 22395 ± 6% perf-stat.i.dTLB-store-misses
2.937e+08 -21.5% 2.306e+08 perf-stat.i.dTLB-stores
1.913e+09 -20.4% 1.523e+09 perf-stat.i.instructions
1.72 -19.3% 1.39 perf-stat.i.metric.GHz
60.16 -20.6% 47.79 ± 5% perf-stat.i.metric.K/sec
1191 -20.8% 943.22 perf-stat.i.metric.M/sec
58.73 +5.9 64.59 perf-stat.overall.cache-miss-rate%
338.44 -8.9% 308.37 ± 2% perf-stat.overall.cycles-between-cache-misses
3.892e+08 -20.3% 3.104e+08 perf-stat.ps.branch-instructions
6013633 -20.4% 4788028 perf-stat.ps.branch-misses
5072419 -11.5% 4491286 perf-stat.ps.cache-misses
8637647 -19.5% 6953483 perf-stat.ps.cache-references
13433 -34.6% 8778 perf-stat.ps.context-switches
1.717e+09 -19.3% 1.385e+09 perf-stat.ps.cpu-cycles
414159 ± 5% -21.4% 325648 ± 6% perf-stat.ps.dTLB-load-misses
4.99e+08 -20.9% 3.947e+08 perf-stat.ps.dTLB-loads
30589 -26.8% 22377 ± 6% perf-stat.ps.dTLB-store-misses
2.935e+08 -21.5% 2.303e+08 perf-stat.ps.dTLB-stores
1.911e+09 -20.4% 1.522e+09 perf-stat.ps.instructions
1.09e+12 -20.1% 8.705e+11 perf-stat.total.instructions
5.24 ± 6% -1.8 3.47 ± 17% perf-profile.calltrace.cycles-pp.ret_from_fork
5.24 ± 6% -1.8 3.47 ± 17% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
1.70 ± 15% -1.0 0.72 ± 56% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
3.35 ± 17% -0.6 2.70 ± 11% perf-profile.calltrace.cycles-pp.evict.do_unlinkat.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.70 ± 11% -0.3 0.36 ± 70% perf-profile.calltrace.cycles-pp.io_schedule.folio_wait_bit.folio_wait_writeback.__filemap_fdatawait_range.file_write_and_wait_range
0.70 ± 12% -0.3 0.36 ± 70% perf-profile.calltrace.cycles-pp.schedule.io_schedule.folio_wait_bit.folio_wait_writeback.__filemap_fdatawait_range
0.91 ± 10% -0.3 0.59 ± 45% perf-profile.calltrace.cycles-pp.issue_flush_thread.kthread.ret_from_fork
0.83 ± 11% -0.3 0.53 ± 44% perf-profile.calltrace.cycles-pp.__filemap_fdatawait_range.file_write_and_wait_range.f2fs_do_sync_file.do_fsync.__x64_sys_fsync
0.72 ± 10% -0.3 0.45 ± 45% perf-profile.calltrace.cycles-pp.folio_wait_bit.folio_wait_writeback.__filemap_fdatawait_range.file_write_and_wait_range.f2fs_do_sync_file
0.72 ± 10% -0.3 0.46 ± 45% perf-profile.calltrace.cycles-pp.folio_wait_writeback.__filemap_fdatawait_range.file_write_and_wait_range.f2fs_do_sync_file.do_fsync
1.17 ± 8% -0.2 0.94 ± 14% perf-profile.calltrace.cycles-pp.f2fs_convert_inline_inode.f2fs_preallocate_blocks.f2fs_file_write_iter.new_sync_write.vfs_write
1.19 ± 5% -0.2 0.96 ± 9% perf-profile.calltrace.cycles-pp.smpboot_thread_fn.kthread.ret_from_fork
1.42 ± 6% -0.2 1.22 ± 10% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__fxstat64
0.92 ± 6% -0.2 0.76 ± 11% perf-profile.calltrace.cycles-pp.run_ksoftirqd.smpboot_thread_fn.kthread.ret_from_fork
1.19 ± 10% -0.2 1.04 ± 12% perf-profile.calltrace.cycles-pp.__do_sys_newfstat.do_syscall_64.entry_SYSCALL_64_after_hwframe.__fxstat64
0.90 ± 6% -0.1 0.76 ± 11% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.run_ksoftirqd.smpboot_thread_fn.kthread.ret_from_fork
1.17 ± 14% +0.4 1.61 ± 16% perf-profile.calltrace.cycles-pp.proc_sys_call_handler.new_sync_write.vfs_write.ksys_write.do_syscall_64
4.12 ± 3% +0.4 4.56 ± 8% perf-profile.calltrace.cycles-pp.update_free_nid_bitmap.f2fs_build_node_manager.f2fs_fill_super.mount_bdev.legacy_get_tree
4.47 ± 2% +0.8 5.30 ± 10% perf-profile.calltrace.cycles-pp.f2fs_build_node_manager.f2fs_fill_super.mount_bdev.legacy_get_tree.vfs_get_tree
4.97 ± 2% +1.0 5.93 ± 9% perf-profile.calltrace.cycles-pp.f2fs_fill_super.mount_bdev.legacy_get_tree.vfs_get_tree.path_mount
5.11 ± 3% +1.3 6.44 ± 8% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.mount
5.11 ± 3% +1.3 6.44 ± 8% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.mount
5.11 ± 3% +1.3 6.44 ± 8% perf-profile.calltrace.cycles-pp.__x64_sys_mount.do_syscall_64.entry_SYSCALL_64_after_hwframe.mount
5.11 ± 3% +1.3 6.44 ± 8% perf-profile.calltrace.cycles-pp.do_mount.__x64_sys_mount.do_syscall_64.entry_SYSCALL_64_after_hwframe.mount
5.11 ± 3% +1.3 6.44 ± 8% perf-profile.calltrace.cycles-pp.path_mount.do_mount.__x64_sys_mount.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.11 ± 3% +1.3 6.44 ± 8% perf-profile.calltrace.cycles-pp.mount
5.11 ± 3% +1.3 6.44 ± 8% perf-profile.calltrace.cycles-pp.vfs_get_tree.path_mount.do_mount.__x64_sys_mount.do_syscall_64
5.11 ± 3% +1.3 6.44 ± 8% perf-profile.calltrace.cycles-pp.legacy_get_tree.vfs_get_tree.path_mount.do_mount.__x64_sys_mount
5.11 ± 3% +1.3 6.44 ± 8% perf-profile.calltrace.cycles-pp.mount_bdev.legacy_get_tree.vfs_get_tree.path_mount.do_mount
6.79 ± 10% -2.0 4.74 ± 11% perf-profile.children.cycles-pp.__schedule
5.24 ± 6% -1.8 3.47 ± 17% perf-profile.children.cycles-pp.kthread
5.25 ± 6% -1.8 3.49 ± 17% perf-profile.children.cycles-pp.ret_from_fork
5.35 ± 9% -1.3 4.06 ± 12% perf-profile.children.cycles-pp.schedule
1.58 ± 6% -1.1 0.51 ± 24% perf-profile.children.cycles-pp.__submit_merged_write_cond
2.00 ± 11% -0.9 1.09 ± 19% perf-profile.children.cycles-pp.__cond_resched
1.70 ± 15% -0.9 0.80 ± 34% perf-profile.children.cycles-pp.worker_thread
1.50 ± 14% -0.9 0.62 ± 30% perf-profile.children.cycles-pp.preempt_schedule_common
3.72 ± 4% -0.9 2.85 ± 18% perf-profile.children.cycles-pp.unwind_next_frame
1.12 ± 9% -0.8 0.31 ± 29% perf-profile.children.cycles-pp.kblockd_mod_delayed_work_on
3.80 ± 3% -0.8 3.04 ± 16% perf-profile.children.cycles-pp.try_to_wake_up
3.35 ± 17% -0.6 2.70 ± 11% perf-profile.children.cycles-pp.evict
1.02 ± 6% -0.6 0.39 ± 33% perf-profile.children.cycles-pp.__queue_work
0.90 ± 9% -0.6 0.33 ± 30% perf-profile.children.cycles-pp.__submit_merged_bio
0.65 ± 12% -0.6 0.10 ± 85% perf-profile.children.cycles-pp.blk_mq_sched_insert_request
2.12 ± 15% -0.5 1.59 ± 16% perf-profile.children.cycles-pp.perf_trace_sched_switch
1.94 ± 11% -0.5 1.42 ± 13% perf-profile.children.cycles-pp.dequeue_task_fair
0.81 ± 23% -0.5 0.32 ± 25% perf-profile.children.cycles-pp.down_read
1.12 ± 25% -0.5 0.63 ± 14% perf-profile.children.cycles-pp.finish_task_switch
1.21 ± 17% -0.5 0.74 ± 21% perf-profile.children.cycles-pp.handle_edge_irq
1.82 ± 11% -0.4 1.37 ± 15% perf-profile.children.cycles-pp.dequeue_entity
1.61 ± 9% -0.4 1.17 ± 20% perf-profile.children.cycles-pp.__orc_find
1.10 ± 16% -0.4 0.68 ± 22% perf-profile.children.cycles-pp.ahci_single_level_irq_intr
2.06 ± 5% -0.4 1.66 ± 15% perf-profile.children.cycles-pp.perf_trace_sched_wakeup_template
0.84 ± 14% -0.4 0.47 ± 30% perf-profile.children.cycles-pp.ahci_handle_port_intr
0.95 ± 14% -0.3 0.62 ± 18% perf-profile.children.cycles-pp.pick_next_task_fair
0.91 ± 10% -0.3 0.65 ± 21% perf-profile.children.cycles-pp.issue_flush_thread
0.58 ± 13% -0.2 0.34 ± 26% perf-profile.children.cycles-pp.ahci_handle_port_interrupt
0.80 ± 19% -0.2 0.56 ± 23% perf-profile.children.cycles-pp.submit_bio_wait
0.82 ± 19% -0.2 0.57 ± 24% perf-profile.children.cycles-pp.blkdev_issue_flush
0.57 ± 17% -0.2 0.33 ± 24% perf-profile.children.cycles-pp.put_prev_entity
0.82 ± 19% -0.2 0.58 ± 24% perf-profile.children.cycles-pp.__submit_flush_wait
0.37 ± 24% -0.2 0.13 ± 61% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
0.75 ± 15% -0.2 0.51 ± 12% perf-profile.children.cycles-pp.__write_node_page
1.19 ± 5% -0.2 0.96 ± 9% perf-profile.children.cycles-pp.smpboot_thread_fn
0.83 ± 11% -0.2 0.61 ± 11% perf-profile.children.cycles-pp.__filemap_fdatawait_range
0.39 ± 22% -0.2 0.17 ± 27% perf-profile.children.cycles-pp.blk_flush_complete_seq
1.21 ± 6% -0.2 0.99 ± 14% perf-profile.children.cycles-pp.f2fs_convert_inline_inode
0.63 ± 25% -0.2 0.41 ± 14% perf-profile.children.cycles-pp.unwind_get_return_address
0.41 ± 20% -0.2 0.20 ± 33% perf-profile.children.cycles-pp.flush_end_io
0.54 ± 25% -0.2 0.35 ± 14% perf-profile.children.cycles-pp.__kernel_text_address
0.30 ± 96% -0.2 0.12 ± 30% perf-profile.children.cycles-pp.common_file_perm
0.48 ± 17% -0.2 0.32 ± 21% perf-profile.children.cycles-pp.f2fs_submit_page_write
0.92 ± 6% -0.2 0.76 ± 11% perf-profile.children.cycles-pp.run_ksoftirqd
1.22 ± 8% -0.1 1.07 ± 10% perf-profile.children.cycles-pp.__do_sys_newfstat
0.43 ± 13% -0.1 0.28 ± 26% perf-profile.children.cycles-pp.f2fs_do_write_node_page
0.65 ± 11% -0.1 0.52 ± 11% perf-profile.children.cycles-pp.do_dentry_open
0.44 ± 15% -0.1 0.32 ± 24% perf-profile.children.cycles-pp.wait_for_completion_io_timeout
0.26 ± 20% -0.1 0.15 ± 25% perf-profile.children.cycles-pp.blk_mq_end_request
0.18 ± 21% -0.1 0.08 ± 81% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.16 ± 16% -0.1 0.09 ± 29% perf-profile.children.cycles-pp.__remove_mapping
0.17 ± 16% -0.1 0.09 ± 25% perf-profile.children.cycles-pp.remove_mapping
0.13 ± 15% -0.1 0.05 ± 46% perf-profile.children.cycles-pp.__delete_from_page_cache
0.13 ± 20% -0.1 0.06 ± 72% perf-profile.children.cycles-pp.unaccount_page_cache_page
0.31 ± 17% -0.1 0.25 ± 11% perf-profile.children.cycles-pp.vfs_getattr_nosec
0.33 ± 5% +0.1 0.41 ± 11% perf-profile.children.cycles-pp.build_sit_entries
0.47 ± 4% +0.1 0.57 ± 8% perf-profile.children.cycles-pp.f2fs_build_segment_manager
0.20 ± 33% +0.1 0.34 ± 24% perf-profile.children.cycles-pp.intel_idle
0.07 ± 41% +0.2 0.23 ± 90% perf-profile.children.cycles-pp.__do_munmap
0.01 ±223% +0.3 0.28 ± 37% perf-profile.children.cycles-pp.queue_work_on
0.69 ± 18% +0.3 0.98 ± 14% perf-profile.children.cycles-pp.folio_wake_bit
0.98 ± 15% +0.3 1.31 ± 11% perf-profile.children.cycles-pp.f2fs_write_end_io
0.96 ± 16% +0.4 1.31 ± 12% perf-profile.children.cycles-pp.folio_end_writeback
0.18 ±103% +0.4 0.57 ± 47% perf-profile.children.cycles-pp.drop_caches_sysctl_handler.cold
1.17 ± 14% +0.4 1.61 ± 16% perf-profile.children.cycles-pp.proc_sys_call_handler
4.19 ± 2% +0.5 4.68 ± 7% perf-profile.children.cycles-pp.update_free_nid_bitmap
0.14 ±146% +0.7 0.88 ± 67% perf-profile.children.cycles-pp.f2fs_printk
4.49 ± 2% +0.8 5.34 ± 10% perf-profile.children.cycles-pp.f2fs_build_node_manager
4.97 ± 2% +1.0 5.93 ± 9% perf-profile.children.cycles-pp.f2fs_fill_super
5.11 ± 3% +1.3 6.44 ± 8% perf-profile.children.cycles-pp.__x64_sys_mount
5.11 ± 3% +1.3 6.44 ± 8% perf-profile.children.cycles-pp.do_mount
5.11 ± 3% +1.3 6.44 ± 8% perf-profile.children.cycles-pp.path_mount
5.11 ± 3% +1.3 6.44 ± 8% perf-profile.children.cycles-pp.mount
5.11 ± 3% +1.3 6.44 ± 8% perf-profile.children.cycles-pp.vfs_get_tree
5.11 ± 3% +1.3 6.44 ± 8% perf-profile.children.cycles-pp.legacy_get_tree
5.11 ± 3% +1.3 6.44 ± 8% perf-profile.children.cycles-pp.mount_bdev
1.60 ± 9% -0.4 1.17 ± 20% perf-profile.self.cycles-pp.__orc_find
0.28 ± 9% -0.1 0.17 ± 20% perf-profile.self.cycles-pp.ahci_handle_port_interrupt
0.05 ± 75% +0.1 0.11 ± 21% perf-profile.self.cycles-pp.vfs_write
0.20 ± 33% +0.1 0.34 ± 24% perf-profile.self.cycles-pp.intel_idle
3.93 ± 3% +0.5 4.42 ± 7% perf-profile.self.cycles-pp.update_free_nid_bitmap
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
---
0DAY/LKP+ Test Infrastructure                   Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected]       Intel Corporation
Thanks,
Oliver Sang