2023-05-09 02:32:12

by kernel test robot

Subject: [linus:master] [xfs] 2edf06a50f: fsmark.files_per_sec -5.7% regression



Hello,

kernel test robot noticed a -5.7% regression of fsmark.files_per_sec on:


commit: 2edf06a50f5bbe664283f3c55c480fc013221d70 ("xfs: factor xfs_alloc_vextent_this_ag() for _iterate_ags()")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master

testcase: fsmark
test machine: 96 threads 2 sockets Intel(R) Xeon(R) Platinum 8260L CPU @ 2.40GHz (Cascade Lake) with 128G memory
parameters:

iterations: 8
disk: 1SSD
nr_threads: 32
fs: xfs
filesize: 8K
test_size: 50G
sync_method: fsyncBeforeClose
nr_directories: 16d
nr_files_per_directory: 256fpd
cpufreq_governor: performance

test-description: fsmark is a file system benchmark for testing synchronous write workloads, for example a mail server workload.
test-url: https://sourceforge.net/projects/fsmark/
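
The parameters above map roughly onto an fs_mark command line like the
sketch below (illustrative only: the flag mapping and mount point are
assumptions, and lkp computes the real file counts from job.yaml):

# 32 threads, 8 KiB files, 16 subdirs x 256 files per subdir,
# fsync-before-close (-S 1), 8 iterations (-L 8)
NFILES=$(( 50 * 1024 * 1024 / 8 / 32 ))  # test_size / filesize / nr_threads
fs_mark -d /fs/scratch -t 32 -s 8192 -S 1 -D 16 -N 256 -L 8 -n "$NFILES"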

In addition to that, the commit also has a significant impact on the following tests:

+------------------+----------------------------------------------------------+
| testcase: change | fsmark: fsmark.files_per_sec -3.7% regression |
| test machine | 224 threads 2 sockets (Sapphire Rapids) with 256G memory |
| test parameters | cpufreq_governor=performance |
| | disk=1SSD |
| | filesize=8K |
| | fs=xfs |
| | iterations=8 |
| | nr_directories=16d |
| | nr_files_per_directory=256fpd |
| | nr_threads=32 |
| | sync_method=fsyncBeforeClose |
| | test_size=50G |
+------------------+----------------------------------------------------------+


If you fix the issue, kindly add the following tags:
| Reported-by: kernel test robot <[email protected]>
| Link: https://lore.kernel.org/oe-lkp/[email protected]


Details are as below:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file

# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
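
# the clean-state reset amounts to (destructive; double-check the paths):
rm -rf ~/.lkp /lkp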

=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-11/performance/1SSD/8K/xfs/8/x86_64-rhel-8.3/16d/256fpd/32/debian-11.1-x86_64-20220510.cgz/fsyncBeforeClose/lkp-csl-2sp3/50G/fsmark

commit:
ecd788a924 ("xfs: rework xfs_alloc_vextent()")
2edf06a50f ("xfs: factor xfs_alloc_vextent_this_ag() for _iterate_ags()")

ecd788a92460eef4 2edf06a50f5bbe664283f3c55c4
---------------- ---------------------------
%stddev %change %stddev
\ | \
14349 -5.7% 13527 fsmark.files_per_sec
486.29 +5.8% 514.28 fsmark.time.elapsed_time
486.29 +5.8% 514.28 fsmark.time.elapsed_time.max
211.17 ± 2% +7.3% 226.50 fsmark.time.percent_of_cpu_this_job_got
969.01 ± 2% +14.7% 1111 fsmark.time.system_time
32491035 +10.9% 36033196 fsmark.time.voluntary_context_switches
4.63 +6.0% 4.91 iostat.cpu.iowait
427253 ± 6% +10.1% 470526 ± 4% sched_debug.cpu.nr_switches.avg
0.14 ± 2% +20.5% 0.17 ± 2% turbostat.IPC
176400 +3.0% 181615 vmstat.system.cs
4038 ± 2% +5.6% 4262 ± 2% proc-vmstat.nr_active_anon
4038 ± 2% +5.6% 4262 ± 2% proc-vmstat.nr_zone_active_anon
6254 ± 2% +4.3% 6524 proc-vmstat.numa_huge_pte_updates
184421 ± 4% +8.1% 199355 ± 2% proc-vmstat.pgactivate
1474801 +3.6% 1528229 proc-vmstat.pgfault
1.096e+08 +5.1% 1.153e+08 proc-vmstat.pgpgout
3718912 +5.5% 3922432 proc-vmstat.unevictable_pgs_scanned
8.916e+08 +21.2% 1.08e+09 perf-stat.i.branch-instructions
177602 +2.9% 182781 perf-stat.i.context-switches
2.12 ± 3% -17.9% 1.74 ± 3% perf-stat.i.cpi
744.83 ± 2% +4.3% 777.00 perf-stat.i.cycles-between-cache-misses
1.237e+09 +28.8% 1.594e+09 perf-stat.i.dTLB-loads
6.391e+08 +27.6% 8.152e+08 perf-stat.i.dTLB-stores
7441243 +3.6% 7712079 perf-stat.i.iTLB-loads
4.584e+09 +24.7% 5.718e+09 perf-stat.i.instructions
1701 ± 3% +21.6% 2068 perf-stat.i.instructions-per-iTLB-miss
0.48 ± 3% +20.5% 0.58 ± 3% perf-stat.i.ipc
29.02 +25.9% 36.53 perf-stat.i.metric.M/sec
2634 -1.5% 2595 perf-stat.i.minor-faults
2634 -1.5% 2595 perf-stat.i.page-faults
2.04 ± 3% -16.6% 1.70 ± 3% perf-stat.overall.cpi
716.67 +4.3% 747.31 ± 2% perf-stat.overall.cycles-between-cache-misses
1605 ± 3% +23.9% 1989 perf-stat.overall.instructions-per-iTLB-miss
0.49 ± 3% +20.0% 0.59 ± 3% perf-stat.overall.ipc
8.898e+08 +21.2% 1.078e+09 perf-stat.ps.branch-instructions
177212 +2.9% 182411 perf-stat.ps.context-switches
1.234e+09 +28.9% 1.591e+09 perf-stat.ps.dTLB-loads
6.378e+08 +27.6% 8.137e+08 perf-stat.ps.dTLB-stores
7425084 +3.7% 7696575 perf-stat.ps.iTLB-loads
4.574e+09 +24.8% 5.707e+09 perf-stat.ps.instructions
2629 -1.4% 2592 perf-stat.ps.minor-faults
2629 -1.4% 2592 perf-stat.ps.page-faults
2.227e+12 +31.9% 2.938e+12 perf-stat.total.instructions
1.18 ± 5% -0.2 0.97 ± 6% perf-profile.calltrace.cycles-pp.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range.file_write_and_wait_range.xfs_file_fsync
1.20 ± 5% -0.2 0.98 ± 7% perf-profile.calltrace.cycles-pp.folio_wait_writeback.__filemap_fdatawait_range.file_write_and_wait_range.xfs_file_fsync.__x64_sys_fsync
1.12 ± 5% -0.2 0.91 ± 7% perf-profile.calltrace.cycles-pp.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range.file_write_and_wait_range
1.26 ± 5% -0.2 1.05 ± 6% perf-profile.calltrace.cycles-pp.__filemap_fdatawait_range.file_write_and_wait_range.xfs_file_fsync.__x64_sys_fsync.do_syscall_64
1.11 ± 5% -0.2 0.91 ± 7% perf-profile.calltrace.cycles-pp.schedule.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
1.11 ± 4% -0.2 0.90 ± 7% perf-profile.calltrace.cycles-pp.__schedule.schedule.io_schedule.folio_wait_bit_common.folio_wait_writeback
1.60 ± 3% -0.2 1.42 ± 8% perf-profile.calltrace.cycles-pp.xfs_end_io.process_one_work.worker_thread.kthread.ret_from_fork
1.58 ± 3% -0.2 1.41 ± 7% perf-profile.calltrace.cycles-pp.xfs_end_ioend.xfs_end_io.process_one_work.worker_thread.kthread
1.77 ± 4% -0.2 1.62 ± 6% perf-profile.calltrace.cycles-pp.__wait_for_common.__flush_workqueue.xlog_cil_push_now.xlog_cil_force_seq.xfs_log_force_seq
0.66 -0.2 0.50 ± 44% perf-profile.calltrace.cycles-pp.folio_end_writeback.iomap_finish_ioend.iomap_finish_ioends.xfs_end_ioend.xfs_end_io
1.93 ± 5% -0.2 1.78 ± 2% perf-profile.calltrace.cycles-pp.xlog_cil_commit.__xfs_trans_commit.xfs_create.xfs_generic_create.lookup_open
1.70 ± 4% -0.2 1.56 ± 6% perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.__wait_for_common.__flush_workqueue.xlog_cil_push_now
1.71 ± 4% -0.2 1.56 ± 6% perf-profile.calltrace.cycles-pp.schedule_timeout.__wait_for_common.__flush_workqueue.xlog_cil_push_now.xlog_cil_force_seq
1.70 ± 3% -0.1 1.55 ± 6% perf-profile.calltrace.cycles-pp.__schedule.schedule.schedule_timeout.__wait_for_common.__flush_workqueue
0.72 ± 2% -0.1 0.63 ± 8% perf-profile.calltrace.cycles-pp.iomap_finish_ioend.iomap_finish_ioends.xfs_end_ioend.xfs_end_io.process_one_work
0.73 ± 2% -0.1 0.64 ± 8% perf-profile.calltrace.cycles-pp.iomap_finish_ioends.xfs_end_ioend.xfs_end_io.process_one_work.worker_thread
0.83 ± 6% -0.1 0.75 ± 10% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.schedule_timeout.__wait_for_common
0.98 ± 3% -0.1 0.90 ± 5% perf-profile.calltrace.cycles-pp.xfs_dialloc_ag.xfs_dialloc.xfs_create.xfs_generic_create.lookup_open
0.79 ± 4% +0.3 1.13 ± 6% perf-profile.calltrace.cycles-pp.xlog_cil_commit.__xfs_trans_commit.xfs_bmapi_convert_delalloc.xfs_map_blocks.iomap_writepage_map
0.84 ± 4% +0.4 1.20 ± 6% perf-profile.calltrace.cycles-pp.__xfs_trans_commit.xfs_bmapi_convert_delalloc.xfs_map_blocks.iomap_writepage_map.write_cache_pages
0.00 +0.7 0.66 ± 8% perf-profile.calltrace.cycles-pp.xfs_btree_increment.xfs_alloc_walk_iter.xfs_alloc_ag_vextent_lastblock.xfs_alloc_ag_vextent_near.xfs_alloc_ag_vextent
0.00 +0.7 0.75 ± 6% perf-profile.calltrace.cycles-pp.xfs_btree_get_rec.xfs_alloc_get_rec.xfs_alloc_cur_check.xfs_alloc_walk_iter.xfs_alloc_ag_vextent_lastblock
1.01 ± 6% +0.9 1.88 ± 7% perf-profile.calltrace.cycles-pp.__schedule.schedule.schedule_timeout.___down_common.__down
1.01 ± 7% +0.9 1.89 ± 6% perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.___down_common.__down.down
1.02 ± 7% +0.9 1.91 ± 6% perf-profile.calltrace.cycles-pp.schedule_timeout.___down_common.__down.down.xfs_buf_lock
1.05 ± 7% +0.9 1.95 ± 6% perf-profile.calltrace.cycles-pp.__down.down.xfs_buf_lock.xfs_buf_find_lock.xfs_buf_lookup
1.05 ± 7% +0.9 1.95 ± 6% perf-profile.calltrace.cycles-pp.___down_common.__down.down.xfs_buf_lock.xfs_buf_find_lock
1.08 ± 6% +0.9 1.99 ± 6% perf-profile.calltrace.cycles-pp.down.xfs_buf_lock.xfs_buf_find_lock.xfs_buf_lookup.xfs_buf_get_map
1.08 ± 6% +0.9 2.00 ± 6% perf-profile.calltrace.cycles-pp.xfs_buf_lock.xfs_buf_find_lock.xfs_buf_lookup.xfs_buf_get_map.xfs_buf_read_map
1.10 ± 6% +0.9 2.02 ± 6% perf-profile.calltrace.cycles-pp.xfs_buf_find_lock.xfs_buf_lookup.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map
0.00 +1.0 1.03 ± 4% perf-profile.calltrace.cycles-pp.xfs_alloc_get_rec.xfs_alloc_cur_check.xfs_alloc_walk_iter.xfs_alloc_ag_vextent_lastblock.xfs_alloc_ag_vextent_near
0.00 +1.0 1.04 ± 7% perf-profile.calltrace.cycles-pp.xfs_buf_lookup.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_read_agf
0.00 +1.1 1.08 ± 7% perf-profile.calltrace.cycles-pp.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_read_agf.xfs_alloc_read_agf
0.00 +1.1 1.09 ± 7% perf-profile.calltrace.cycles-pp.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_read_agf.xfs_alloc_read_agf.xfs_alloc_fix_freelist
0.00 +1.1 1.15 ± 8% perf-profile.calltrace.cycles-pp.xfs_trans_read_buf_map.xfs_read_agf.xfs_alloc_read_agf.xfs_alloc_fix_freelist.__xfs_alloc_vextent_this_ag
0.00 +1.2 1.16 ± 8% perf-profile.calltrace.cycles-pp.xfs_read_agf.xfs_alloc_read_agf.xfs_alloc_fix_freelist.__xfs_alloc_vextent_this_ag.xfs_alloc_vextent_iterate_ags
0.00 +1.2 1.18 ± 7% perf-profile.calltrace.cycles-pp.xfs_alloc_read_agf.xfs_alloc_fix_freelist.__xfs_alloc_vextent_this_ag.xfs_alloc_vextent_iterate_ags.xfs_alloc_vextent
0.00 +1.4 1.36 ± 6% perf-profile.calltrace.cycles-pp.xfs_alloc_fix_freelist.__xfs_alloc_vextent_this_ag.xfs_alloc_vextent_iterate_ags.xfs_alloc_vextent.xfs_bmap_btalloc
0.00 +1.4 1.36 ± 6% perf-profile.calltrace.cycles-pp.__xfs_alloc_vextent_this_ag.xfs_alloc_vextent_iterate_ags.xfs_alloc_vextent.xfs_bmap_btalloc.xfs_bmapi_allocate
0.00 +2.0 1.96 ± 4% perf-profile.calltrace.cycles-pp.xfs_alloc_cur_check.xfs_alloc_walk_iter.xfs_alloc_ag_vextent_lastblock.xfs_alloc_ag_vextent_near.xfs_alloc_ag_vextent
1.18 ± 7% +2.5 3.66 ± 5% perf-profile.calltrace.cycles-pp.xfs_alloc_ag_vextent.xfs_alloc_vextent_iterate_ags.xfs_alloc_vextent.xfs_bmap_btalloc.xfs_bmapi_allocate
1.05 ± 7% +2.6 3.61 ± 5% perf-profile.calltrace.cycles-pp.xfs_alloc_ag_vextent_near.xfs_alloc_ag_vextent.xfs_alloc_vextent_iterate_ags.xfs_alloc_vextent.xfs_bmap_btalloc
0.00 +2.7 2.68 ± 5% perf-profile.calltrace.cycles-pp.xfs_alloc_walk_iter.xfs_alloc_ag_vextent_lastblock.xfs_alloc_ag_vextent_near.xfs_alloc_ag_vextent.xfs_alloc_vextent_iterate_ags
0.00 +2.7 2.71 ± 5% perf-profile.calltrace.cycles-pp.xfs_alloc_ag_vextent_lastblock.xfs_alloc_ag_vextent_near.xfs_alloc_ag_vextent.xfs_alloc_vextent_iterate_ags.xfs_alloc_vextent
1.63 ± 6% +3.4 5.06 ± 5% perf-profile.calltrace.cycles-pp.xfs_alloc_vextent.xfs_bmap_btalloc.xfs_bmapi_allocate.xfs_bmapi_convert_delalloc.xfs_map_blocks
1.61 ± 6% +3.4 5.04 ± 5% perf-profile.calltrace.cycles-pp.xfs_alloc_vextent_iterate_ags.xfs_alloc_vextent.xfs_bmap_btalloc.xfs_bmapi_allocate.xfs_bmapi_convert_delalloc
1.85 ± 6% +3.5 5.30 ± 5% perf-profile.calltrace.cycles-pp.xfs_bmapi_allocate.xfs_bmapi_convert_delalloc.xfs_map_blocks.iomap_writepage_map.write_cache_pages
1.76 ± 6% +3.5 5.22 ± 5% perf-profile.calltrace.cycles-pp.xfs_bmap_btalloc.xfs_bmapi_allocate.xfs_bmapi_convert_delalloc.xfs_map_blocks.iomap_writepage_map
5.14 ± 3% +3.5 8.69 ± 5% perf-profile.calltrace.cycles-pp.file_write_and_wait_range.xfs_file_fsync.__x64_sys_fsync.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.86 ± 3% +3.8 7.62 ± 5% perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync.__x64_sys_fsync.do_syscall_64
3.85 ± 3% +3.8 7.62 ± 5% perf-profile.calltrace.cycles-pp.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync.__x64_sys_fsync
3.81 ± 3% +3.8 7.58 ± 5% perf-profile.calltrace.cycles-pp.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync
3.80 ± 3% +3.8 7.57 ± 5% perf-profile.calltrace.cycles-pp.xfs_vm_writepages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.file_write_and_wait_range
3.47 ± 3% +3.8 7.25 ± 5% perf-profile.calltrace.cycles-pp.write_cache_pages.iomap_writepages.xfs_vm_writepages.do_writepages.filemap_fdatawrite_wbc
3.47 ± 3% +3.8 7.26 ± 5% perf-profile.calltrace.cycles-pp.iomap_writepages.xfs_vm_writepages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range
2.92 ± 3% +3.8 6.71 ± 5% perf-profile.calltrace.cycles-pp.xfs_map_blocks.iomap_writepage_map.write_cache_pages.iomap_writepages.xfs_vm_writepages
2.88 ± 4% +3.8 6.67 ± 5% perf-profile.calltrace.cycles-pp.xfs_bmapi_convert_delalloc.xfs_map_blocks.iomap_writepage_map.write_cache_pages.iomap_writepages
3.21 ± 3% +3.8 7.01 ± 5% perf-profile.calltrace.cycles-pp.iomap_writepage_map.write_cache_pages.iomap_writepages.xfs_vm_writepages.do_writepages
1.20 ± 5% -0.2 0.98 ± 7% perf-profile.children.cycles-pp.folio_wait_writeback
1.18 ± 5% -0.2 0.97 ± 6% perf-profile.children.cycles-pp.folio_wait_bit_common
1.12 ± 5% -0.2 0.91 ± 7% perf-profile.children.cycles-pp.io_schedule
1.26 ± 5% -0.2 1.05 ± 6% perf-profile.children.cycles-pp.__filemap_fdatawait_range
1.60 ± 3% -0.2 1.42 ± 8% perf-profile.children.cycles-pp.xfs_end_io
1.58 ± 3% -0.2 1.41 ± 7% perf-profile.children.cycles-pp.xfs_end_ioend
0.50 ± 28% -0.1 0.35 ± 14% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.72 ± 2% -0.1 0.63 ± 8% perf-profile.children.cycles-pp.iomap_finish_ioend
0.73 ± 2% -0.1 0.64 ± 8% perf-profile.children.cycles-pp.iomap_finish_ioends
0.66 -0.1 0.58 ± 9% perf-profile.children.cycles-pp.folio_end_writeback
0.98 ± 4% -0.1 0.90 ± 5% perf-profile.children.cycles-pp.xfs_dialloc_ag
0.24 ± 10% -0.1 0.18 ± 11% perf-profile.children.cycles-pp.xfs_perag_get
0.40 ± 5% -0.1 0.35 ± 10% perf-profile.children.cycles-pp.folio_wake_bit
0.37 ± 4% -0.1 0.32 ± 9% perf-profile.children.cycles-pp.wake_page_function
0.32 ± 5% -0.0 0.28 ± 10% perf-profile.children.cycles-pp.lapic_next_deadline
0.18 ± 10% -0.0 0.14 ± 12% perf-profile.children.cycles-pp.xfs_inode_item_push
0.17 ± 10% -0.0 0.13 ± 11% perf-profile.children.cycles-pp.xfs_iflush_cluster
0.12 ± 9% -0.0 0.08 ± 7% perf-profile.children.cycles-pp.xfs_iflush
0.11 ± 9% -0.0 0.08 ± 10% perf-profile.children.cycles-pp.memcg_slab_post_alloc_hook
0.07 ± 18% -0.0 0.04 ± 45% perf-profile.children.cycles-pp.try_to_unlazy
0.15 ± 10% -0.0 0.12 ± 10% perf-profile.children.cycles-pp.d_alloc
0.11 ± 6% -0.0 0.09 ± 10% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.11 ± 4% -0.0 0.09 ± 7% perf-profile.children.cycles-pp.set_next_entity
0.17 ± 12% +0.0 0.21 ± 9% perf-profile.children.cycles-pp.xfs_alloc_lookup_eq
0.12 ± 17% +0.1 0.17 ± 6% perf-profile.children.cycles-pp.xfs_ialloc_ag_alloc
0.03 ±100% +0.1 0.09 ± 18% perf-profile.children.cycles-pp.xfs_dir2_node_add_datablk
0.00 +0.1 0.07 ± 16% perf-profile.children.cycles-pp.xfs_allocbt_get_maxrecs
0.02 ±141% +0.1 0.08 ± 14% perf-profile.children.cycles-pp.xfs_dir2_grow_inode
0.02 ±141% +0.1 0.08 ± 14% perf-profile.children.cycles-pp.xfs_da_grow_inode_int
0.02 ±141% +0.1 0.09 ± 13% perf-profile.children.cycles-pp.xfs_allocbt_init_key_from_rec
0.09 ± 10% +0.1 0.17 ± 13% perf-profile.children.cycles-pp.xfs_lookup_get_search_key
0.01 ±223% +0.1 0.09 ± 8% perf-profile.children.cycles-pp.xfs_btree_rec_offset
0.35 ± 10% +0.1 0.45 ± 10% perf-profile.children.cycles-pp.xfs_alloc_cur_finish
0.00 +0.1 0.10 ± 15% perf-profile.children.cycles-pp.xfs_btree_check_block
0.07 ± 24% +0.1 0.20 ± 12% perf-profile.children.cycles-pp.xfs_errortag_test
3.16 ± 3% +0.2 3.31 ± 2% perf-profile.children.cycles-pp.xlog_cil_commit
3.35 ± 2% +0.2 3.54 ± 2% perf-profile.children.cycles-pp.__xfs_trans_commit
0.00 +0.2 0.22 ± 10% perf-profile.children.cycles-pp.xfs_alloc_compute_diff
1.58 ± 4% +0.3 1.83 ± 6% perf-profile.children.cycles-pp.__orc_find
0.57 ± 4% +0.3 0.85 ± 7% perf-profile.children.cycles-pp.up
1.24 ± 4% +0.3 1.52 ± 6% perf-profile.children.cycles-pp.orc_find
0.59 ± 5% +0.3 0.88 ± 6% perf-profile.children.cycles-pp.xfs_buf_unlock
0.62 ± 5% +0.3 0.92 ± 7% perf-profile.children.cycles-pp.xfs_buf_item_release
0.02 ±141% +0.3 0.36 ± 7% perf-profile.children.cycles-pp.xfs_extent_busy_trim
0.06 ± 6% +0.4 0.46 ± 6% perf-profile.children.cycles-pp.xfs_alloc_compute_aligned
3.91 ± 3% +0.4 4.33 ± 4% perf-profile.children.cycles-pp.unwind_next_frame
6.16 ± 2% +0.6 6.73 ± 5% perf-profile.children.cycles-pp.perf_callchain
0.10 ± 31% +0.6 0.72 ± 7% perf-profile.children.cycles-pp.xfs_btree_increment
0.14 ± 9% +0.7 0.84 ± 5% perf-profile.children.cycles-pp.xfs_btree_get_rec
0.16 ± 11% +0.7 0.88 ± 6% perf-profile.children.cycles-pp.__xfs_btree_check_sblock
2.80 ± 3% +0.7 3.53 ± 5% perf-profile.children.cycles-pp.schedule_timeout
2.18 ± 5% +0.8 2.98 ± 6% perf-profile.children.cycles-pp.xfs_buf_get_map
2.20 ± 5% +0.8 3.00 ± 6% perf-profile.children.cycles-pp.xfs_buf_read_map
2.84 ± 5% +0.8 3.66 ± 6% perf-profile.children.cycles-pp.xfs_trans_read_buf_map
1.72 ± 4% +0.9 2.58 ± 7% perf-profile.children.cycles-pp.xfs_buf_lookup
1.32 ± 5% +0.9 2.21 ± 6% perf-profile.children.cycles-pp.xfs_buf_find_lock
1.10 ± 6% +0.9 1.99 ± 6% perf-profile.children.cycles-pp.___down_common
1.10 ± 6% +0.9 1.99 ± 6% perf-profile.children.cycles-pp.__down
1.24 ± 5% +0.9 2.14 ± 6% perf-profile.children.cycles-pp.xfs_buf_lock
1.19 ± 5% +0.9 2.09 ± 6% perf-profile.children.cycles-pp.down
0.21 ± 13% +0.9 1.14 ± 7% perf-profile.children.cycles-pp.xfs_btree_check_sblock
0.11 ± 10% +1.0 1.08 ± 3% perf-profile.children.cycles-pp.xfs_alloc_get_rec
0.41 ± 6% +1.0 1.41 ± 6% perf-profile.children.cycles-pp.xfs_alloc_fix_freelist
0.22 ± 8% +1.0 1.22 ± 7% perf-profile.children.cycles-pp.xfs_alloc_read_agf
0.19 ± 9% +1.0 1.20 ± 7% perf-profile.children.cycles-pp.xfs_read_agf
0.00 +1.4 1.41 ± 6% perf-profile.children.cycles-pp.__xfs_alloc_vextent_this_ag
0.22 ± 12% +1.9 2.07 ± 4% perf-profile.children.cycles-pp.xfs_alloc_cur_check
0.29 ± 12% +2.5 2.80 ± 5% perf-profile.children.cycles-pp.xfs_alloc_ag_vextent_lastblock
0.31 ± 10% +2.5 2.82 ± 5% perf-profile.children.cycles-pp.xfs_alloc_walk_iter
1.24 ± 7% +2.6 3.79 ± 5% perf-profile.children.cycles-pp.xfs_alloc_ag_vextent
1.09 ± 7% +2.6 3.74 ± 5% perf-profile.children.cycles-pp.xfs_alloc_ag_vextent_near
1.63 ± 6% +3.5 5.11 ± 5% perf-profile.children.cycles-pp.xfs_alloc_vextent_iterate_ags
1.89 ± 6% +3.5 5.38 ± 5% perf-profile.children.cycles-pp.xfs_bmapi_allocate
1.79 ± 6% +3.5 5.29 ± 5% perf-profile.children.cycles-pp.xfs_bmap_btalloc
5.14 ± 3% +3.6 8.70 ± 5% perf-profile.children.cycles-pp.file_write_and_wait_range
1.65 ± 6% +3.6 5.24 ± 5% perf-profile.children.cycles-pp.xfs_alloc_vextent
3.86 ± 3% +3.8 7.62 ± 5% perf-profile.children.cycles-pp.__filemap_fdatawrite_range
3.86 ± 3% +3.8 7.62 ± 5% perf-profile.children.cycles-pp.filemap_fdatawrite_wbc
3.90 ± 3% +3.8 7.66 ± 5% perf-profile.children.cycles-pp.do_writepages
3.86 ± 3% +3.8 7.63 ± 5% perf-profile.children.cycles-pp.xfs_vm_writepages
3.50 ± 3% +3.8 7.28 ± 5% perf-profile.children.cycles-pp.write_cache_pages
2.92 ± 3% +3.8 6.71 ± 5% perf-profile.children.cycles-pp.xfs_map_blocks
3.50 ± 3% +3.8 7.29 ± 5% perf-profile.children.cycles-pp.iomap_writepages
2.88 ± 4% +3.8 6.67 ± 5% perf-profile.children.cycles-pp.xfs_bmapi_convert_delalloc
3.21 ± 3% +3.8 7.02 ± 5% perf-profile.children.cycles-pp.iomap_writepage_map
0.21 ± 9% -0.1 0.15 ± 10% perf-profile.self.cycles-pp.xfs_perag_get
0.32 ± 5% -0.0 0.28 ± 10% perf-profile.self.cycles-pp.lapic_next_deadline
0.06 ± 7% -0.0 0.04 ± 71% perf-profile.self.cycles-pp.memcg_slab_post_alloc_hook
0.11 ± 6% -0.0 0.09 ± 10% perf-profile.self.cycles-pp.__intel_pmu_enable_all
0.12 ± 6% -0.0 0.10 ± 8% perf-profile.self.cycles-pp.down_read
0.01 ±223% +0.1 0.08 ± 10% perf-profile.self.cycles-pp.xfs_allocbt_init_key_from_rec
0.00 +0.1 0.07 ± 10% perf-profile.self.cycles-pp.xfs_btree_rec_offset
0.00 +0.1 0.09 ± 15% perf-profile.self.cycles-pp.xfs_extent_busy_trim
0.00 +0.1 0.09 ± 18% perf-profile.self.cycles-pp.xfs_btree_check_block
0.00 +0.1 0.10 ± 16% perf-profile.self.cycles-pp.xfs_alloc_compute_aligned
0.06 ± 51% +0.1 0.17 ± 12% perf-profile.self.cycles-pp.xfs_errortag_test
0.00 +0.1 0.12 ± 11% perf-profile.self.cycles-pp.xfs_btree_get_rec
0.00 +0.1 0.14 ± 15% perf-profile.self.cycles-pp.xfs_btree_check_sblock
0.00 +0.2 0.16 ± 6% perf-profile.self.cycles-pp.xfs_btree_increment
0.00 +0.2 0.20 ± 3% perf-profile.self.cycles-pp.xfs_alloc_compute_diff
1.10 ± 6% +0.2 1.33 ± 3% perf-profile.self.cycles-pp._raw_spin_lock
1.57 ± 4% +0.3 1.82 ± 6% perf-profile.self.cycles-pp.__orc_find
0.00 +0.3 0.27 ± 6% perf-profile.self.cycles-pp.xfs_alloc_get_rec
0.00 +0.3 0.28 ± 9% perf-profile.self.cycles-pp.xfs_alloc_cur_check
0.13 ± 12% +0.7 0.79 ± 7% perf-profile.self.cycles-pp.__xfs_btree_check_sblock


***************************************************************************************************
lkp-spr-r02: 224 threads 2 sockets (Sapphire Rapids) with 256G memory
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-11/performance/1SSD/8K/xfs/8/x86_64-rhel-8.3/16d/256fpd/32/debian-11.1-x86_64-20220510.cgz/fsyncBeforeClose/lkp-spr-r02/50G/fsmark

commit:
ecd788a924 ("xfs: rework xfs_alloc_vextent()")
2edf06a50f ("xfs: factor xfs_alloc_vextent_this_ag() for _iterate_ags()")

ecd788a92460eef4 2edf06a50f5bbe664283f3c55c4
---------------- ---------------------------
%stddev %change %stddev
\ | \
71096 -3.7% 68492 fsmark.files_per_sec
537.67 +5.8% 569.00 fsmark.time.percent_of_cpu_this_job_got
606.41 +9.1% 661.29 fsmark.time.system_time
33180376 +4.8% 34779903 fsmark.time.voluntary_context_switches
3.76 +3.2% 3.88 iostat.cpu.system
0.34 +21.6% 0.41 turbostat.IPC
809115 ± 17% +19.6% 968066 ± 15% turbostat.POLL
921206 +3.5% 953292 vmstat.io.bo
728873 +1.5% 739648 vmstat.system.cs
1178 -3.3% 1140 ± 2% proc-vmstat.direct_map_level2_splits
1118 +5.9% 1184 proc-vmstat.nr_active_anon
1118 +5.9% 1184 proc-vmstat.nr_zone_active_anon
5814 -6.9% 5411 proc-vmstat.numa_huge_pte_updates
3078897 -6.8% 2870651 proc-vmstat.numa_pte_updates
7925 -4.3% 7584 proc-vmstat.pgactivate
1.099e+08 +6.4% 1.169e+08 proc-vmstat.pgpgout
976896 +2.5% 1000960 proc-vmstat.unevictable_pgs_scanned
13064 ± 11% +32.7% 17334 ± 12% sched_debug.cfs_rq:/.min_vruntime.stddev
122.15 ± 15% -32.4% 82.52 ± 16% sched_debug.cfs_rq:/.runnable_avg.avg
1169 ± 10% -24.5% 883.06 ± 2% sched_debug.cfs_rq:/.runnable_avg.max
211.40 ± 12% -27.4% 153.52 ± 8% sched_debug.cfs_rq:/.runnable_avg.stddev
13115 ± 11% +32.2% 17337 ± 12% sched_debug.cfs_rq:/.spread0.stddev
121.88 ± 15% -32.5% 82.25 ± 16% sched_debug.cfs_rq:/.util_avg.avg
1168 ± 10% -25.8% 867.28 sched_debug.cfs_rq:/.util_avg.max
210.78 ± 13% -27.6% 152.65 ± 8% sched_debug.cfs_rq:/.util_avg.stddev
10.49 ± 19% -39.8% 6.31 ± 28% sched_debug.cfs_rq:/.util_est_enqueued.avg
69.83 ± 19% -35.0% 45.42 ± 22% sched_debug.cfs_rq:/.util_est_enqueued.stddev
72309 ± 19% +41.5% 102337 ± 13% sched_debug.cpu.clock.avg
72324 ± 19% +41.5% 102351 ± 13% sched_debug.cpu.clock.max
72292 ± 19% +41.5% 102321 ± 13% sched_debug.cpu.clock.min
71938 ± 19% +41.3% 101681 ± 13% sched_debug.cpu.clock_task.avg
72187 ± 19% +41.2% 101961 ± 13% sched_debug.cpu.clock_task.max
5843 ± 18% +33.2% 7781 ± 7% sched_debug.cpu.curr->pid.max
41950 ±137% +275.6% 157567 ± 27% sched_debug.cpu.nr_switches.avg
177180 ± 91% +190.6% 514800 ± 11% sched_debug.cpu.nr_switches.max
31279 ±121% +264.0% 113856 ± 16% sched_debug.cpu.nr_switches.stddev
72293 ± 19% +41.5% 102321 ± 13% sched_debug.cpu_clk
71040 ± 19% +42.3% 101061 ± 14% sched_debug.ktime
73365 ± 19% +40.9% 103391 ± 13% sched_debug.sched_clk
13.52 -16.6% 11.27 ± 3% perf-stat.i.MPKI
3.816e+09 +22.2% 4.663e+09 perf-stat.i.branch-instructions
0.75 -0.1 0.63 perf-stat.i.branch-miss-rate%
73259837 +6.2% 77781445 perf-stat.i.cache-misses
750409 +1.7% 763405 perf-stat.i.context-switches
1.76 -15.9% 1.48 perf-stat.i.cpi
3.367e+10 +2.5% 3.451e+10 perf-stat.i.cpu-cycles
5.662e+09 +27.5% 7.218e+09 perf-stat.i.dTLB-loads
220394 ± 4% +18.5% 261185 ± 9% perf-stat.i.dTLB-store-misses
2.753e+09 +28.0% 3.523e+09 perf-stat.i.dTLB-stores
2.019e+10 +24.8% 2.52e+10 perf-stat.i.instructions
0.59 +19.5% 0.70 perf-stat.i.ipc
0.15 +2.5% 0.15 perf-stat.i.metric.GHz
55.82 +25.5% 70.04 perf-stat.i.metric.M/sec
18252836 ± 2% +4.4% 19056718 perf-stat.i.node-load-misses
14.22 -16.9% 11.82 ± 4% perf-stat.overall.MPKI
0.81 -0.1 0.67 perf-stat.overall.branch-miss-rate%
1.67 -17.9% 1.37 perf-stat.overall.cpi
459.70 -3.5% 443.70 ± 2% perf-stat.overall.cycles-between-cache-misses
0.01 ± 4% -0.0 0.01 ± 7% perf-stat.overall.dTLB-store-miss-rate%
0.60 +21.8% 0.73 perf-stat.overall.ipc
3.783e+09 +22.1% 4.62e+09 perf-stat.ps.branch-instructions
72614040 +6.2% 77102710 perf-stat.ps.cache-misses
743776 +1.7% 756437 perf-stat.ps.context-switches
3.337e+10 +2.5% 3.42e+10 perf-stat.ps.cpu-cycles
5.612e+09 +27.5% 7.153e+09 ± 2% perf-stat.ps.dTLB-loads
218292 ± 4% +18.5% 258697 ± 9% perf-stat.ps.dTLB-store-misses
2.729e+09 +27.9% 3.491e+09 perf-stat.ps.dTLB-stores
2.002e+10 +24.8% 2.498e+10 perf-stat.ps.instructions
18090782 ± 2% +4.4% 18888511 perf-stat.ps.node-load-misses
2.339e+12 +28.0% 2.994e+12 perf-stat.total.instructions
22.79 -3.6 19.20 ± 7% perf-profile.calltrace.cycles-pp.evict_inodes.generic_shutdown_super.kill_block_super.deactivate_locked_super.cleanup_mnt
20.93 -3.3 17.65 ± 7% perf-profile.calltrace.cycles-pp.dispose_list.evict_inodes.generic_shutdown_super.kill_block_super.deactivate_locked_super
9.63 -2.5 7.10 ± 24% perf-profile.calltrace.cycles-pp.__do_softirq.run_ksoftirqd.smpboot_thread_fn.kthread.ret_from_fork
9.63 -2.5 7.10 ± 24% perf-profile.calltrace.cycles-pp.run_ksoftirqd.smpboot_thread_fn.kthread.ret_from_fork
9.79 -2.5 7.27 ± 23% perf-profile.calltrace.cycles-pp.smpboot_thread_fn.kthread.ret_from_fork
9.62 -2.5 7.10 ± 24% perf-profile.calltrace.cycles-pp.rcu_core.__do_softirq.run_ksoftirqd.smpboot_thread_fn.kthread
9.60 -2.5 7.08 ± 24% perf-profile.calltrace.cycles-pp.rcu_do_batch.rcu_core.__do_softirq.run_ksoftirqd.smpboot_thread_fn
13.98 -2.1 11.91 ± 6% perf-profile.calltrace.cycles-pp.evict.dispose_list.evict_inodes.generic_shutdown_super.kill_block_super
22.80 -2.1 20.73 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.umount2
22.80 -2.1 20.73 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.umount2
22.80 -2.1 20.73 perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.umount2
22.80 -2.1 20.73 perf-profile.calltrace.cycles-pp.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
22.80 -2.1 20.73 perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.umount2
22.80 -2.1 20.73 perf-profile.calltrace.cycles-pp.task_work_run.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
22.80 -2.1 20.73 perf-profile.calltrace.cycles-pp.umount2
22.80 -2.1 20.73 perf-profile.calltrace.cycles-pp.cleanup_mnt.task_work_run.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode
22.80 -2.1 20.73 perf-profile.calltrace.cycles-pp.deactivate_locked_super.cleanup_mnt.task_work_run.exit_to_user_mode_loop.exit_to_user_mode_prepare
22.80 -2.1 20.73 perf-profile.calltrace.cycles-pp.kill_block_super.deactivate_locked_super.cleanup_mnt.task_work_run.exit_to_user_mode_loop
22.80 -2.1 20.73 perf-profile.calltrace.cycles-pp.generic_shutdown_super.kill_block_super.deactivate_locked_super.cleanup_mnt.task_work_run
11.34 -1.6 9.74 ± 7% perf-profile.calltrace.cycles-pp.truncate_inode_pages_range.evict.dispose_list.evict_inodes.generic_shutdown_super
5.64 ± 3% -1.0 4.59 ± 8% perf-profile.calltrace.cycles-pp.destroy_inode.dispose_list.evict_inodes.generic_shutdown_super.kill_block_super
4.09 ± 4% -0.7 3.35 ± 19% perf-profile.calltrace.cycles-pp.xfs_icwalk_ag.xfs_icwalk.xfs_reclaim_worker.process_one_work.worker_thread
4.10 ± 4% -0.7 3.36 ± 19% perf-profile.calltrace.cycles-pp.xfs_reclaim_worker.process_one_work.worker_thread.kthread.ret_from_fork
4.10 ± 4% -0.7 3.36 ± 19% perf-profile.calltrace.cycles-pp.xfs_icwalk.xfs_reclaim_worker.process_one_work.worker_thread.kthread
1.81 -0.7 1.10 ± 43% perf-profile.calltrace.cycles-pp.xfs_inode_item_destroy.xfs_inode_free_callback.rcu_do_batch.rcu_core.__do_softirq
4.05 -0.6 3.42 ± 8% perf-profile.calltrace.cycles-pp.find_lock_entries.truncate_inode_pages_range.evict.dispose_list.evict_inodes
3.72 -0.5 3.27 ± 6% perf-profile.calltrace.cycles-pp.__pagevec_release.truncate_inode_pages_range.evict.dispose_list.evict_inodes
3.64 -0.4 3.19 ± 7% perf-profile.calltrace.cycles-pp.release_pages.__pagevec_release.truncate_inode_pages_range.evict.dispose_list
2.32 ± 4% -0.4 1.89 ± 16% perf-profile.calltrace.cycles-pp.xfs_reclaim_inode.xfs_icwalk_ag.xfs_icwalk.xfs_reclaim_worker.process_one_work
2.30 -0.4 1.88 ± 8% perf-profile.calltrace.cycles-pp.xfs_inode_mark_reclaimable.destroy_inode.dispose_list.evict_inodes.generic_shutdown_super
2.63 ± 3% -0.4 2.25 ± 6% perf-profile.calltrace.cycles-pp.delete_from_page_cache_batch.truncate_inode_pages_range.evict.dispose_list.evict_inodes
2.59 -0.4 2.24 ± 9% perf-profile.calltrace.cycles-pp.xas_find.find_lock_entries.truncate_inode_pages_range.evict.dispose_list
1.93 ± 2% -0.4 1.58 ± 8% perf-profile.calltrace.cycles-pp.xfs_can_free_eofblocks.xfs_inode_mark_reclaimable.destroy_inode.dispose_list.evict_inodes
1.73 ± 2% -0.3 1.39 ± 10% perf-profile.calltrace.cycles-pp.__destroy_inode.destroy_inode.dispose_list.evict_inodes.generic_shutdown_super
1.62 ± 2% -0.3 1.28 ± 9% perf-profile.calltrace.cycles-pp.fsnotify_destroy_marks.__destroy_inode.destroy_inode.dispose_list.evict_inodes
1.58 ± 3% -0.3 1.25 ± 9% perf-profile.calltrace.cycles-pp.fsnotify_grab_connector.fsnotify_destroy_marks.__destroy_inode.destroy_inode.dispose_list
3.79 ± 7% -0.3 3.46 ± 5% perf-profile.calltrace.cycles-pp.timekeeping_max_deferment.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.cpuidle_idle_call
1.61 ± 2% -0.3 1.34 ± 5% perf-profile.calltrace.cycles-pp.free_unref_page_list.release_pages.__pagevec_release.truncate_inode_pages_range.evict
1.31 -0.3 1.04 ± 9% perf-profile.calltrace.cycles-pp.kmem_cache_free.rcu_do_batch.rcu_core.__do_softirq.run_ksoftirqd
0.58 -0.2 0.36 ± 70% perf-profile.calltrace.cycles-pp.xas_store.delete_from_page_cache_batch.truncate_inode_pages_range.evict.dispose_list
0.91 ± 3% -0.2 0.73 ± 10% perf-profile.calltrace.cycles-pp.down_read.xfs_can_free_eofblocks.xfs_inode_mark_reclaimable.destroy_inode.dispose_list
0.95 ± 9% -0.2 0.77 ± 9% perf-profile.calltrace.cycles-pp.xfs_inodegc_set_reclaimable.destroy_inode.dispose_list.evict_inodes.generic_shutdown_super
0.89 -0.2 0.74 ± 3% perf-profile.calltrace.cycles-pp.free_pcppages_bulk.free_unref_page_list.release_pages.__pagevec_release.truncate_inode_pages_range
0.81 ± 5% -0.2 0.66 perf-profile.calltrace.cycles-pp.inode_io_list_del.evict.dispose_list.evict_inodes.generic_shutdown_super
0.73 ± 2% -0.1 0.60 ± 4% perf-profile.calltrace.cycles-pp.__free_one_page.free_pcppages_bulk.free_unref_page_list.release_pages.__pagevec_release
0.70 ± 4% -0.1 0.61 ± 6% perf-profile.calltrace.cycles-pp.xfs_bmapi_read.xfs_can_free_eofblocks.xfs_inode_mark_reclaimable.destroy_inode.dispose_list
0.63 ± 5% +0.1 0.68 ± 4% perf-profile.calltrace.cycles-pp.native_apic_msr_eoi_write.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.60 ± 4% +0.1 0.66 perf-profile.calltrace.cycles-pp.perf_mux_hrtimer_handler.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
0.86 ± 4% +0.1 0.93 perf-profile.calltrace.cycles-pp.lapic_next_deadline.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
0.74 ± 2% +0.1 0.88 ± 4% perf-profile.calltrace.cycles-pp.ktime_get_update_offsets_now.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.17 ±141% +0.4 0.53 ± 2% perf-profile.calltrace.cycles-pp.check_cpu_stall.rcu_pending.rcu_sched_clock_irq.update_process_times.tick_sched_handle
11.33 +0.8 12.17 ± 7% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
22.97 +1.2 24.14 ± 4% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
26.56 +1.2 27.80 ± 2% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
26.55 +1.2 27.79 ± 2% perf-profile.calltrace.cycles-pp.mwait_idle_with_hints.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
0.00 +1.6 1.58 ± 46% perf-profile.calltrace.cycles-pp.do_writepages.__writeback_single_inode.writeback_sb_inodes.wb_writeback.wb_do_writeback
0.00 +1.6 1.64 ± 47% perf-profile.calltrace.cycles-pp.__writeback_single_inode.writeback_sb_inodes.wb_writeback.wb_do_writeback.wb_workfn
60.97 +2.1 63.10 perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
60.68 +2.1 62.83 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
60.68 +2.1 62.83 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
60.66 +2.1 62.81 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
59.73 +2.2 61.88 perf-profile.calltrace.cycles-pp.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
50.48 +2.3 52.79 ± 3% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry
51.52 +2.4 53.92 ± 3% perf-profile.calltrace.cycles-pp.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary
4.18 ± 4% +2.6 6.73 ± 17% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
4.15 ± 4% +2.6 6.70 ± 17% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
0.00 +2.6 2.61 ± 47% perf-profile.calltrace.cycles-pp.writeback_sb_inodes.wb_writeback.wb_do_writeback.wb_workfn.process_one_work
0.00 +3.3 3.28 ± 51% perf-profile.calltrace.cycles-pp.wb_workfn.process_one_work.worker_thread.kthread.ret_from_fork
0.00 +3.3 3.28 ± 51% perf-profile.calltrace.cycles-pp.wb_do_writeback.wb_workfn.process_one_work.worker_thread.kthread
0.00 +3.3 3.28 ± 51% perf-profile.calltrace.cycles-pp.wb_writeback.wb_do_writeback.wb_workfn.process_one_work.worker_thread
22.80 -3.6 19.21 ± 7% perf-profile.children.cycles-pp.evict_inodes
20.95 -3.3 17.67 ± 7% perf-profile.children.cycles-pp.dispose_list
9.63 -2.5 7.10 ± 24% perf-profile.children.cycles-pp.run_ksoftirqd
9.79 -2.5 7.27 ± 23% perf-profile.children.cycles-pp.smpboot_thread_fn
11.02 -2.4 8.61 ± 19% perf-profile.children.cycles-pp.rcu_core
23.93 -2.1 21.80 perf-profile.children.cycles-pp.do_syscall_64
23.93 -2.1 21.81 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
14.02 -2.1 11.94 ± 6% perf-profile.children.cycles-pp.evict
22.87 -2.1 20.80 perf-profile.children.cycles-pp.exit_to_user_mode_prepare
22.86 -2.1 20.79 perf-profile.children.cycles-pp.exit_to_user_mode_loop
22.86 -2.1 20.79 perf-profile.children.cycles-pp.task_work_run
22.80 -2.1 20.73 perf-profile.children.cycles-pp.umount2
22.80 -2.1 20.73 perf-profile.children.cycles-pp.cleanup_mnt
22.80 -2.1 20.73 perf-profile.children.cycles-pp.deactivate_locked_super
22.80 -2.1 20.73 perf-profile.children.cycles-pp.kill_block_super
22.80 -2.1 20.73 perf-profile.children.cycles-pp.generic_shutdown_super
22.86 -2.1 20.80 perf-profile.children.cycles-pp.syscall_exit_to_user_mode
11.40 -1.6 9.77 ± 7% perf-profile.children.cycles-pp.truncate_inode_pages_range
5.66 ± 3% -1.0 4.61 ± 8% perf-profile.children.cycles-pp.destroy_inode
4.10 ± 4% -0.7 3.36 ± 19% perf-profile.children.cycles-pp.xfs_reclaim_worker
4.10 ± 4% -0.7 3.36 ± 19% perf-profile.children.cycles-pp.xfs_icwalk
4.10 ± 4% -0.7 3.36 ± 19% perf-profile.children.cycles-pp.xfs_icwalk_ag
1.82 -0.7 1.11 ± 43% perf-profile.children.cycles-pp.xfs_inode_item_destroy
4.08 -0.6 3.45 ± 8% perf-profile.children.cycles-pp.find_lock_entries
3.74 -0.5 3.27 ± 6% perf-profile.children.cycles-pp.__pagevec_release
3.70 -0.5 3.25 ± 6% perf-profile.children.cycles-pp.release_pages
2.35 ± 4% -0.4 1.92 ± 16% perf-profile.children.cycles-pp.xfs_reclaim_inode
2.31 -0.4 1.89 ± 8% perf-profile.children.cycles-pp.xfs_inode_mark_reclaimable
2.82 -0.4 2.42 ± 9% perf-profile.children.cycles-pp.xas_find
2.69 ± 3% -0.4 2.29 ± 6% perf-profile.children.cycles-pp.delete_from_page_cache_batch
1.96 ± 2% -0.4 1.61 ± 8% perf-profile.children.cycles-pp.xfs_can_free_eofblocks
1.76 ± 3% -0.4 1.41 ± 10% perf-profile.children.cycles-pp.__destroy_inode
1.63 ± 3% -0.3 1.29 ± 9% perf-profile.children.cycles-pp.fsnotify_destroy_marks
1.60 ± 2% -0.3 1.26 ± 8% perf-profile.children.cycles-pp.fsnotify_grab_connector
3.81 ± 7% -0.3 3.48 ± 5% perf-profile.children.cycles-pp.timekeeping_max_deferment
1.72 -0.3 1.43 ± 10% perf-profile.children.cycles-pp.kmem_cache_free
1.27 -0.3 0.99 ± 9% perf-profile.children.cycles-pp.free_pcppages_bulk
1.65 ± 2% -0.3 1.38 ± 5% perf-profile.children.cycles-pp.free_unref_page_list
1.20 ± 4% -0.2 1.01 ± 7% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.92 ± 3% -0.2 0.75 ± 9% perf-profile.children.cycles-pp.down_read
0.97 ± 9% -0.2 0.80 ± 8% perf-profile.children.cycles-pp.xfs_inodegc_set_reclaimable
0.43 ± 2% -0.2 0.27 ± 23% perf-profile.children.cycles-pp.free_unref_page
0.90 -0.2 0.74 ± 9% perf-profile.children.cycles-pp.__free_one_page
0.84 ± 5% -0.2 0.69 perf-profile.children.cycles-pp.inode_io_list_del
0.33 ± 10% -0.1 0.18 ± 18% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.72 ± 4% -0.1 0.63 ± 6% perf-profile.children.cycles-pp.xfs_bmapi_read
0.46 ± 6% -0.1 0.36 ± 10% perf-profile.children.cycles-pp.down_write_trylock
0.48 ± 5% -0.1 0.39 ± 7% perf-profile.children.cycles-pp.xfs_ilock_nowait
0.62 -0.1 0.53 ± 10% perf-profile.children.cycles-pp.xas_store
0.24 ± 9% -0.1 0.17 ± 10% perf-profile.children.cycles-pp.free_unref_page_commit
0.41 ± 10% -0.1 0.34 ± 4% perf-profile.children.cycles-pp.xfs_perag_set_inode_tag
0.31 ± 6% -0.1 0.24 ± 14% perf-profile.children.cycles-pp.read
0.18 ± 2% -0.1 0.12 ± 15% perf-profile.children.cycles-pp.clear_inode
0.17 ± 4% -0.1 0.11 ± 19% perf-profile.children.cycles-pp.get_slabinfo
0.17 ± 4% -0.1 0.11 ± 23% perf-profile.children.cycles-pp.slab_show
0.30 ± 2% -0.1 0.24 ± 13% perf-profile.children.cycles-pp.xfs_iunlock
0.41 ± 6% -0.1 0.36 ± 6% perf-profile.children.cycles-pp.filemap_unaccount_folio
0.14 ± 5% -0.1 0.09 ± 9% perf-profile.children.cycles-pp.find_get_entries
0.14 ± 8% -0.1 0.09 ± 5% perf-profile.children.cycles-pp.truncate_cleanup_folio
0.18 ± 7% -0.0 0.13 ± 15% perf-profile.children.cycles-pp.xfs_ilock
0.48 ± 5% -0.0 0.43 ± 5% perf-profile.children.cycles-pp.ksys_read
0.47 ± 5% -0.0 0.43 ± 3% perf-profile.children.cycles-pp.vfs_read
0.19 ± 15% -0.0 0.14 ± 20% perf-profile.children.cycles-pp.xfs_perag_put
0.16 ± 3% -0.0 0.11 ± 8% perf-profile.children.cycles-pp.obj_cgroup_uncharge_pages
0.38 ± 4% -0.0 0.34 ± 4% perf-profile.children.cycles-pp.seq_read_iter
0.33 ± 5% -0.0 0.29 ± 5% perf-profile.children.cycles-pp.seq_read
0.17 -0.0 0.14 ± 6% perf-profile.children.cycles-pp.filemap_free_folio
0.12 ± 8% -0.0 0.08 ± 20% perf-profile.children.cycles-pp.down_write
0.10 ± 9% -0.0 0.07 ± 11% perf-profile.children.cycles-pp.__srcu_read_lock
0.12 ± 6% -0.0 0.09 ± 10% perf-profile.children.cycles-pp.__mod_zone_page_state
0.18 ± 5% -0.0 0.15 ± 11% perf-profile.children.cycles-pp.radix_tree_node_rcu_free
0.07 ± 7% -0.0 0.04 ± 71% perf-profile.children.cycles-pp.__inode_wait_for_writeback
0.50 -0.0 0.48 perf-profile.children.cycles-pp.irq_enter_rcu
0.46 -0.0 0.43 perf-profile.children.cycles-pp.tick_irq_enter
0.12 ± 3% -0.0 0.10 ± 14% perf-profile.children.cycles-pp.cgroup_rstat_updated
0.09 ± 9% -0.0 0.07 ± 7% perf-profile.children.cycles-pp.up_read
0.08 ± 10% -0.0 0.06 ± 16% perf-profile.children.cycles-pp.node_tag_clear
0.30 ± 3% -0.0 0.28 ± 5% perf-profile.children.cycles-pp.radix_tree_tag_set
0.10 ± 12% -0.0 0.08 ± 12% perf-profile.children.cycles-pp.force_qs_rnp
0.07 ± 11% -0.0 0.05 perf-profile.children.cycles-pp.refill_stock
0.07 ± 17% -0.0 0.05 ± 8% perf-profile.children.cycles-pp.dyntick_save_progress_counter
0.14 ± 3% -0.0 0.12 ± 6% perf-profile.children.cycles-pp.timerqueue_del
0.10 ± 8% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.rcu_all_qs
0.08 ± 10% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.__count_memcg_events
0.08 ± 5% -0.0 0.07 perf-profile.children.cycles-pp.filename_lookup
0.08 ± 5% -0.0 0.07 perf-profile.children.cycles-pp.path_lookupat
0.28 ± 2% +0.0 0.31 ± 5% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.43 ± 2% +0.0 0.46 ± 3% perf-profile.children.cycles-pp.perf_rotate_context
0.49 ± 3% +0.0 0.53 ± 3% perf-profile.children.cycles-pp.perf_session__deliver_event
0.02 ±141% +0.0 0.06 ± 8% perf-profile.children.cycles-pp.get_cpu_device
0.31 ± 9% +0.0 0.35 ± 2% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.23 ± 6% +0.0 0.28 ± 4% perf-profile.children.cycles-pp.__hrtimer_next_event_base
0.63 ± 3% +0.0 0.68 perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.78 ± 5% +0.1 0.84 perf-profile.children.cycles-pp.native_apic_msr_eoi_write
0.91 ± 4% +0.1 0.98 perf-profile.children.cycles-pp.lapic_next_deadline
0.43 ± 14% +0.1 0.54 ± 2% perf-profile.children.cycles-pp.check_cpu_stall
0.77 +0.1 0.91 ± 3% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.00 +0.2 0.21 ± 50% perf-profile.children.cycles-pp.xas_find_marked
0.00 +0.2 0.22 ± 47% perf-profile.children.cycles-pp.tag_pages_for_writeback
0.00 +0.3 0.34 ± 49% perf-profile.children.cycles-pp.write_cache_pages
0.00 +0.3 0.34 ± 49% perf-profile.children.cycles-pp.iomap_writepages
0.00 +0.5 0.48 ± 47% perf-profile.children.cycles-pp.inode_cgwb_move_to_attached
1.90 ± 4% +0.8 2.73 ± 16% perf-profile.children.cycles-pp.__list_del_entry_valid
19.09 +1.0 20.14 ± 5% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
21.60 +1.2 22.81 ± 5% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.00 +1.2 1.21 ± 48% perf-profile.children.cycles-pp.xfs_vm_writepages
26.70 +1.2 27.92 ± 2% perf-profile.children.cycles-pp.intel_idle
26.68 +1.3 27.97 ± 2% perf-profile.children.cycles-pp.mwait_idle_with_hints
0.00 +1.6 1.58 ± 46% perf-profile.children.cycles-pp.do_writepages
0.00 +1.6 1.64 ± 47% perf-profile.children.cycles-pp.__writeback_single_inode
60.97 +2.1 63.10 perf-profile.children.cycles-pp.secondary_startup_64_no_verify
60.97 +2.1 63.10 perf-profile.children.cycles-pp.cpu_startup_entry
60.97 +2.1 63.10 perf-profile.children.cycles-pp.do_idle
60.05 +2.1 62.19 perf-profile.children.cycles-pp.cpuidle_idle_call
60.68 +2.1 62.83 perf-profile.children.cycles-pp.start_secondary
51.71 +2.4 54.09 ± 3% perf-profile.children.cycles-pp.cpuidle_enter_state
51.77 +2.4 54.16 ± 3% perf-profile.children.cycles-pp.cpuidle_enter
4.18 ± 4% +2.6 6.73 ± 17% perf-profile.children.cycles-pp.worker_thread
4.15 ± 4% +2.6 6.70 ± 17% perf-profile.children.cycles-pp.process_one_work
0.00 +2.6 2.64 ± 47% perf-profile.children.cycles-pp.writeback_sb_inodes
0.00 +3.3 3.28 ± 51% perf-profile.children.cycles-pp.wb_workfn
0.00 +3.3 3.28 ± 51% perf-profile.children.cycles-pp.wb_do_writeback
0.00 +3.3 3.28 ± 51% perf-profile.children.cycles-pp.wb_writeback
3.81 ± 7% -0.3 3.48 ± 5% perf-profile.self.cycles-pp.timekeeping_max_deferment
2.06 ± 2% -0.3 1.75 ± 8% perf-profile.self.cycles-pp.xas_find
1.36 ± 2% -0.3 1.09 ± 10% perf-profile.self.cycles-pp.fsnotify_grab_connector
1.41 -0.2 1.17 ± 5% perf-profile.self.cycles-pp.find_lock_entries
1.19 ± 5% -0.2 0.97 ± 6% perf-profile.self.cycles-pp.delete_from_page_cache_batch
0.82 ± 3% -0.2 0.63 ± 7% perf-profile.self.cycles-pp.down_read
0.88 ± 2% -0.2 0.72 ± 12% perf-profile.self.cycles-pp.kmem_cache_free
1.17 -0.2 1.01 ± 9% perf-profile.self.cycles-pp.evict_inodes
0.33 ± 10% -0.1 0.18 ± 18% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.65 ± 6% -0.1 0.53 ± 9% perf-profile.self.cycles-pp.evict
0.68 ± 2% -0.1 0.57 ± 8% perf-profile.self.cycles-pp.__free_one_page
0.97 ± 5% -0.1 0.88 ± 5% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.45 ± 6% -0.1 0.36 ± 11% perf-profile.self.cycles-pp.down_write_trylock
0.17 ± 4% -0.1 0.11 ± 19% perf-profile.self.cycles-pp.get_slabinfo
0.49 ± 3% -0.1 0.43 ± 6% perf-profile.self.cycles-pp.release_pages
0.20 ± 6% -0.1 0.14 ± 11% perf-profile.self.cycles-pp.free_unref_page_commit
0.14 ± 10% -0.1 0.09 ± 28% perf-profile.self.cycles-pp.xfs_inode_item_destroy
0.14 ± 3% -0.0 0.10 ± 24% perf-profile.self.cycles-pp.xfs_icwalk_ag
0.15 ± 8% -0.0 0.10 ± 12% perf-profile.self.cycles-pp.xfs_can_free_eofblocks
0.14 ± 6% -0.0 0.10 ± 25% perf-profile.self.cycles-pp.rcu_do_batch
0.18 ± 16% -0.0 0.14 ± 18% perf-profile.self.cycles-pp.xfs_perag_put
0.12 ± 10% -0.0 0.09 ± 5% perf-profile.self.cycles-pp.truncate_cleanup_folio
0.10 ± 14% -0.0 0.06 ± 19% perf-profile.self.cycles-pp.__srcu_read_lock
0.18 ± 4% -0.0 0.15 ± 8% perf-profile.self.cycles-pp.radix_tree_node_rcu_free
0.14 ± 5% -0.0 0.11 ± 4% perf-profile.self.cycles-pp.xfs_bmapi_read
0.16 ± 3% -0.0 0.13 ± 7% perf-profile.self.cycles-pp.filemap_free_folio
0.15 ± 3% -0.0 0.12 ± 3% perf-profile.self.cycles-pp.sched_clock_cpu
0.10 ± 12% -0.0 0.07 ± 6% perf-profile.self.cycles-pp.filemap_unaccount_folio
0.08 ± 5% -0.0 0.06 ± 13% perf-profile.self.cycles-pp.up_read
0.17 ± 7% -0.0 0.14 ± 3% perf-profile.self.cycles-pp.truncate_inode_pages_range
0.29 ± 4% -0.0 0.27 ± 3% perf-profile.self.cycles-pp.radix_tree_tag_set
0.07 ± 17% -0.0 0.05 ± 8% perf-profile.self.cycles-pp.dyntick_save_progress_counter
0.17 ± 5% -0.0 0.15 ± 8% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
0.10 ± 9% -0.0 0.08 ± 6% perf-profile.self.cycles-pp.__mod_zone_page_state
0.12 ± 8% -0.0 0.10 ± 4% perf-profile.self.cycles-pp.free_pcp_prepare
0.12 ± 4% -0.0 0.10 perf-profile.self.cycles-pp.__list_add_valid
0.06 ± 7% -0.0 0.05 perf-profile.self.cycles-pp.rcu_all_qs
0.07 ± 7% +0.0 0.08 perf-profile.self.cycles-pp.tick_sched_timer
0.28 ± 2% +0.0 0.31 ± 5% perf-profile.self.cycles-pp.__intel_pmu_enable_all
0.81 +0.0 0.85 ± 3% perf-profile.self.cycles-pp.read_tsc
0.77 ± 5% +0.1 0.83 perf-profile.self.cycles-pp.native_apic_msr_eoi_write
0.90 ± 4% +0.1 0.98 perf-profile.self.cycles-pp.lapic_next_deadline
0.43 ± 14% +0.1 0.54 ± 2% perf-profile.self.cycles-pp.check_cpu_stall
0.61 ± 3% +0.1 0.73 ± 3% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
3.20 ± 2% +0.2 3.38 ± 2% perf-profile.self.cycles-pp.cpuidle_enter_state
0.00 +0.2 0.20 ± 49% perf-profile.self.cycles-pp.xas_find_marked
0.00 +0.2 0.23 ± 50% perf-profile.self.cycles-pp.writeback_sb_inodes
0.00 +0.4 0.37 ± 42% perf-profile.self.cycles-pp.do_writepages
1.79 ± 3% +0.8 2.60 ± 16% perf-profile.self.cycles-pp.__list_del_entry_valid
26.67 +1.3 27.96 ± 2% perf-profile.self.cycles-pp.mwait_idle_with_hints





Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests



Attachments:
config-6.2.0-rc6-00029-g2edf06a50f5b (158.65 kB)
job-script (8.43 kB)
job.yaml (6.06 kB)
reproduce (1.11 kB)

2023-05-09 07:02:52

by Dave Chinner

Subject: Re: [linus:master] [xfs] 2edf06a50f: fsmark.files_per_sec -5.7% regression

On Tue, May 09, 2023 at 10:13:19AM +0800, kernel test robot wrote:
>
>
> Hello,
>
> kernel test robot noticed a -5.7% regression of fsmark.files_per_sec on:
>
>
> commit: 2edf06a50f5bbe664283f3c55c480fc013221d70 ("xfs: factor xfs_alloc_vextent_this_ag() for _iterate_ags()")
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master

This is just a refactoring patch and doesn't change any logic.
Hence I'm sceptical that it actually resulted in a performance
regression. Indeed, the profile indicates a significant change of
behaviour in the allocator and I can't see how the commit above
would cause anything like that.

Was this a result of a bisect? If so, what were the original kernel
versions where the regression was detected?

Cheers,

Dave.

--
Dave Chinner
[email protected]

2023-05-09 07:41:25

by Dave Chinner

Subject: Re: [linus:master] [xfs] 2edf06a50f: fsmark.files_per_sec -5.7% regression

On Tue, May 09, 2023 at 04:54:33PM +1000, Dave Chinner wrote:
> On Tue, May 09, 2023 at 10:13:19AM +0800, kernel test robot wrote:
> >
> >
> > Hello,
> >
> > kernel test robot noticed a -5.7% regression of fsmark.files_per_sec on:
> >
> >
> > commit: 2edf06a50f5bbe664283f3c55c480fc013221d70 ("xfs: factor xfs_alloc_vextent_this_ag() for _iterate_ags()")
> > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
>
> This is just a refactoring patch and doesn't change any logic.
> Hence I'm sceptical that it actually resulted in a performance
> regression. Indeed, the profile indicates a significant change of
> behaviour in the allocator and I can't see how the commit above
> would cause anything like that.
>
> Was this a result of a bisect? If so, what were the original kernel
> versions where the regression was detected?

Oh, CONFIG_XFS_DEBUG=y, which means:

static int
xfs_alloc_ag_vextent_lastblock(
	struct xfs_alloc_arg	*args,
	struct xfs_alloc_cur	*acur,
	xfs_agblock_t		*bno,
	xfs_extlen_t		*len,
	bool			*allocated)
{
	int			error;
	int			i;

#ifdef DEBUG
	/* Randomly don't execute the first algorithm. */
	if (get_random_u32_below(2))
		return 0;
#endif

We randomly choose a near block allocation strategy to improve
code coverage, not the optimal one for IO performance. Hence the CPU
usage and allocation patterns that impact IO performance are simply
not predictable or reproducible from run to run. So, yeah, trying to
bisect a minor difference in performance as a result of this
randomness will not be reliable....
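
For reference, one quick way to check whether a given kernel build has
these options enabled (assuming CONFIG_IKCONFIG_PROC=y for the /proc
path, or a distro-style config file under /boot) is:

# query the running kernel's build config
zgrep -E 'CONFIG_XFS_(DEBUG|WARN)' /proc/config.gz
# or the config file installed alongside the kernel
grep -E 'CONFIG_XFS_(DEBUG|WARN)' /boot/config-$(uname -r)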

Cheers,

Dave.

--
Dave Chinner
[email protected]

2023-05-12 07:53:56

by kernel test robot

Subject: Re: [linus:master] [xfs] 2edf06a50f: fsmark.files_per_sec -5.7% regression

hi, Dave Chinner,

On Tue, May 09, 2023 at 05:10:53PM +1000, Dave Chinner wrote:
> On Tue, May 09, 2023 at 04:54:33PM +1000, Dave Chinner wrote:
> > On Tue, May 09, 2023 at 10:13:19AM +0800, kernel test robot wrote:
> > >
> > >
> > > Hello,
> > >
> > > kernel test robot noticed a -5.7% regression of fsmark.files_per_sec on:
> > >
> > >
> > > commit: 2edf06a50f5bbe664283f3c55c480fc013221d70 ("xfs: factor xfs_alloc_vextent_this_ag() for _iterate_ags()")
> > > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
> >
> > This is just a refactoring patch and doesn't change any logic.
> > Hence I'm sceptical that it actually resulted in a performance
> > regression. Indeed, the profile indicates a significant change of
> > behaviour in the allocator and I can't see how the commit above
> > would cause anything like that.
> >
> > Was this a result of a bisect? If so, what were the original kernel
> > versions where the regression was detected?
>
> Oh, CONFIG_XFS_DEBUG=y, which means:
>
> static int
> xfs_alloc_ag_vextent_lastblock(
> 	struct xfs_alloc_arg	*args,
> 	struct xfs_alloc_cur	*acur,
> 	xfs_agblock_t		*bno,
> 	xfs_extlen_t		*len,
> 	bool			*allocated)
> {
> 	int			error;
> 	int			i;
>
> #ifdef DEBUG
> 	/* Randomly don't execute the first algorithm. */
> 	if (get_random_u32_below(2))
> 		return 0;
> #endif
>
> We randomly choose a near block allocation strategy to improve
> code coverage, not the optimal one for IO performance. Hence the CPU
> usage and allocation patterns that impact IO performance are simply
> not predictable or reproducible from run to run. So, yeah, trying to
> bisect a minor difference in performance as a result of this
> randomness will not be reliable....

Thanks a lot for the guidance!

We plan to disable XFS_DEBUG (as well as XFS_WARN) in our performance tests,
and want to consult with you on whether this is the correct thing to do.
And I guess we should still keep them enabled in functional tests, am I right?

BTW, regarding this case, we tested again with XFS_DEBUG (as well as
XFS_WARN) disabled; the kconfig is attached, and the only diff from last time is:
-CONFIG_XFS_DEBUG=y
-CONFIG_XFS_ASSERT_FATAL=y
+# CONFIG_XFS_WARN is not set
+# CONFIG_XFS_DEBUG is not set
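
One way to apply such a flip on top of an existing .config, assuming the
kernel tree's scripts/config helper, is:

scripts/config --disable XFS_WARN --disable XFS_DEBUG
make olddefconfig  # re-resolves dependent options such as XFS_ASSERT_FATAL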

but we still observed a similar regression:

ecd788a92460eef4 2edf06a50f5bbe664283f3c55c4
---------------- ---------------------------
%stddev %change %stddev
\ | \
8176057 ± 15% +4.7% 8558110 fsmark.app_overhead
14484 -6.3% 13568 fsmark.files_per_sec

The detailed comparison is as below [1].
We are not sure whether we still enable some unnecessary configs?

[1]
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-11/performance/1SSD/8K/xfs/8/x86_64-2edf06a50f-CONFIG_XFS_DEBUG-CONFIG_XFS_WARN/16d/256fpd/32/debian-11.1-x86_64-20220510.cgz/fsyncBeforeClose/lkp-csl-2sp3/50G/fsmark

ecd788a92460eef4 2edf06a50f5bbe664283f3c55c4
---------------- ---------------------------
%stddev %change %stddev
\ | \
8176057 ± 15% +4.7% 8558110 fsmark.app_overhead
14484 -6.3% 13568 fsmark.files_per_sec
482.08 +6.6% 513.68 fsmark.time.elapsed_time
482.08 +6.6% 513.68 fsmark.time.elapsed_time.max
164.00 +0.0% 164.00 fsmark.time.file_system_inputs
1.049e+08 +0.0% 1.049e+08 fsmark.time.file_system_outputs
4788 ± 7% +56.0% 7469 ± 33% fsmark.time.involuntary_context_switches
53790 +0.5% 54070 fsmark.time.maximum_resident_set_size
159905 +14.9% 183789 ± 26% fsmark.time.minor_page_faults
4096 +0.0% 4096 fsmark.time.page_size
197.83 ± 9% +12.0% 221.50 fsmark.time.percent_of_cpu_this_job_got
902.17 ± 10% +20.3% 1084 fsmark.time.system_time
54.12 ± 15% +3.2% 55.88 ± 2% fsmark.time.user_time
32688645 +12.3% 36701748 fsmark.time.voluntary_context_switches
4.425e+10 +6.1% 4.695e+10 cpuidle..time
1.325e+08 +8.4% 1.436e+08 cpuidle..usage
10.00 +0.0% 10.00 dmesg.bootstage:last
608.05 ± 10% +1.2% 615.25 ± 2% dmesg.timestamp:last
549.28 +5.8% 580.92 uptime.boot
47988 +5.2% 50465 uptime.idle
60.12 ± 3% -0.2% 60.00 ± 3% boot-time.boot
33.41 +0.8% 33.68 boot-time.dhcp
5184 ± 3% -0.3% 5169 ± 3% boot-time.idle
1.01 +0.1% 1.01 boot-time.smp_boot
91.45 -0.6% 90.87 iostat.cpu.idle
4.56 +6.9% 4.88 iostat.cpu.iowait
3.78 ± 12% +7.3% 4.06 iostat.cpu.system
0.21 ± 9% -4.7% 0.20 ± 2% iostat.cpu.user
91.43 -0.6 90.84 mpstat.cpu.all.idle%
4.58 +0.3 4.89 mpstat.cpu.all.iowait%
1.35 ± 16% +0.0 1.39 mpstat.cpu.all.irq%
0.13 ± 14% +0.0 0.13 ± 2% mpstat.cpu.all.soft%
2.31 ± 9% +0.2 2.54 mpstat.cpu.all.sys%
0.21 ± 8% -0.0 0.20 ± 2% mpstat.cpu.all.usr%
0.00 -100.0% 0.00 numa-numastat.node0.interleave_hit
8570232 ± 7% -11.2% 7606395 ± 12% numa-numastat.node0.local_node
8614749 ± 7% -11.3% 7642912 ± 12% numa-numastat.node0.numa_hit
44588 ± 80% -18.5% 36355 ± 74% numa-numastat.node0.other_node
0.00 -100.0% 0.00 numa-numastat.node1.interleave_hit
7709816 ± 7% +13.7% 8762734 ± 10% numa-numastat.node1.local_node
7752272 ± 7% +13.7% 8813548 ± 10% numa-numastat.node1.numa_hit
42460 ± 84% +19.5% 50730 ± 53% numa-numastat.node1.other_node
482.08 +6.6% 513.68 time.elapsed_time
482.08 +6.6% 513.68 time.elapsed_time.max
164.00 +0.0% 164.00 time.file_system_inputs
1.049e+08 +0.0% 1.049e+08 time.file_system_outputs
4788 ± 7% +56.0% 7469 ± 33% time.involuntary_context_switches
53790 +0.5% 54070 time.maximum_resident_set_size
159905 +14.9% 183789 ± 26% time.minor_page_faults
4096 +0.0% 4096 time.page_size
197.83 ± 9% +12.0% 221.50 time.percent_of_cpu_this_job_got
902.17 ± 10% +20.3% 1084 time.system_time
54.12 ± 15% +3.2% 55.88 ± 2% time.user_time
32688645 +12.3% 36701748 time.voluntary_context_switches
91.17 -1.3% 90.00 vmstat.cpu.id
2.83 ? 13% +5.9% 3.00 vmstat.cpu.sy
0.00 -100.0% 0.00 vmstat.cpu.us
4.00 +0.0% 4.00 vmstat.cpu.wa
0.00 -100.0% 0.00 vmstat.io.bi
225488 -1.2% 222787 vmstat.io.bo
4.00 +0.0% 4.00 vmstat.memory.buff
36031366 +0.2% 36093165 vmstat.memory.cache
88939420 -0.1% 88822523 vmstat.memory.free
4.00 +4.2% 4.17 ? 8% vmstat.procs.b
1.83 ? 20% +45.5% 2.67 ? 17% vmstat.procs.r
178703 +3.1% 184315 vmstat.system.cs
211078 -0.2% 210736 vmstat.system.in
100.50 ? 5% +0.3% 100.83 turbostat.Avg_MHz
5.54 ? 11% +0.3 5.82 turbostat.Busy%
1863 ? 19% -6.9% 1733 turbostat.Bzy_MHz
13318557 ?120% -54.0% 6126899 ? 5% turbostat.C1
2.77 ?140% -1.8 0.96 ? 15% turbostat.C1%
1.068e+08 ? 25% +22.3% 1.306e+08 ? 6% turbostat.C1E
73.30 ? 34% +13.8 87.10 ? 13% turbostat.C1E%
11070416 ?117% -47.9% 5770062 ?183% turbostat.C6
19.14 ?122% -12.4 6.78 ?179% turbostat.C6%
93.04 ? 2% +1.1% 94.09 turbostat.CPU%c1
1.42 ?152% -93.8% 0.09 ? 39% turbostat.CPU%c6
41.17 ? 2% +0.0% 41.17 turbostat.CoreTmp
0.14 ? 5% +25.9% 0.17 turbostat.IPC
1.024e+08 +6.3% 1.088e+08 turbostat.IRQ
1313271 ? 59% -17.1% 1088098 turbostat.POLL
0.06 ? 65% -0.0 0.04 turbostat.POLL%
0.19 ?141% -100.0% 0.00 turbostat.Pkg%pc2
0.00 ?223% -100.0% 0.00 turbostat.Pkg%pc6
41.17 ? 2% +0.0% 41.17 turbostat.PkgTmp
123.16 ? 8% -1.8% 120.91 turbostat.PkgWatt
34.23 -0.9% 33.92 turbostat.RAMWatt
672.00 +0.0% 672.00 turbostat.SMI
2394 +0.0% 2394 turbostat.TSC_MHz
16295 ? 3% +7.1% 17447 ? 3% meminfo.Active
16180 ? 3% +7.1% 17327 ? 3% meminfo.Active(anon)
114.67 ? 9% +4.7% 120.00 ? 6% meminfo.Active(file)
933864 ? 2% -1.8% 917134 meminfo.AnonHugePages
1133278 -0.2% 1130832 meminfo.AnonPages
4.00 +0.0% 4.00 meminfo.Buffers
29171319 +0.2% 29215684 meminfo.Cached
65841941 +0.0% 65841942 meminfo.CommitLimit
2100335 +0.2% 2103946 meminfo.Committed_AS
6.571e+08 -0.1% 6.568e+08 meminfo.DirectMap1G
7300096 ? 6% +4.8% 7649621 ? 11% meminfo.DirectMap2M
1126497 ? 15% +0.0% 1126497 ? 13% meminfo.DirectMap4k
1.50 ? 33% +166.7% 4.00 ? 14% meminfo.Dirty
2048 +0.0% 2048 meminfo.Hugepagesize
27524631 +0.1% 27565040 meminfo.Inactive
1153255 -0.2% 1150827 meminfo.Inactive(anon)
26371376 +0.2% 26414211 meminfo.Inactive(file)
6938222 +0.1% 6947904 meminfo.KReclaimable
17485 -0.2% 17456 meminfo.KernelStack
41511 ? 2% +0.8% 41844 ? 2% meminfo.Mapped
1.211e+08 -0.0% 1.21e+08 meminfo.MemAvailable
88846138 -0.1% 88738313 meminfo.MemFree
1.317e+08 +0.0% 1.317e+08 meminfo.MemTotal
42837744 +0.3% 42945572 meminfo.Memused
854.00 +11.5% 951.83 meminfo.Mlocked
6653 -0.0% 6651 meminfo.PageTables
52857 +0.0% 52857 meminfo.Percpu
6938222 +0.1% 6947904 meminfo.SReclaimable
3023422 +2.2% 3089285 meminfo.SUnreclaim
36259 ? 3% +4.2% 37771 ? 4% meminfo.Shmem
9961645 +0.8% 10037190 meminfo.Slab
2763687 +0.0% 2763698 meminfo.Unevictable
3.436e+10 +0.0% 3.436e+10 meminfo.VmallocTotal
185092 -0.0% 185050 meminfo.VmallocUsed
48.17 ? 26% -48.8% 24.67 ? 58% meminfo.Writeback
44505996 +0.0% 44512572 meminfo.max_used_kB
3429 ? 14% +22.9% 4214 ? 13% numa-meminfo.node0.Active
3373 ? 14% +24.1% 4187 ? 13% numa-meminfo.node0.Active(anon)
56.00 ?101% -52.4% 26.67 ?175% numa-meminfo.node0.Active(file)
508198 ? 13% -12.7% 443664 ? 14% numa-meminfo.node0.AnonHugePages
611605 ? 10% -10.4% 547928 ? 14% numa-meminfo.node0.AnonPages
1260772 ? 10% -8.3% 1156210 ? 8% numa-meminfo.node0.AnonPages.max
0.83 ? 44% +120.0% 1.83 ? 20% numa-meminfo.node0.Dirty
16253358 ? 5% -9.8% 14663400 ? 15% numa-meminfo.node0.FilePages
14579504 ? 9% -11.4% 12910480 ? 12% numa-meminfo.node0.Inactive
616360 ? 10% -10.1% 554067 ? 14% numa-meminfo.node0.Inactive(anon)
13963143 ? 9% -11.5% 12356412 ? 12% numa-meminfo.node0.Inactive(file)
3685579 ? 8% -11.3% 3267717 ? 12% numa-meminfo.node0.KReclaimable
8899 ? 5% +1.0% 8984 ? 4% numa-meminfo.node0.KernelStack
28461 ? 32% +1.7% 28956 ? 36% numa-meminfo.node0.Mapped
42160970 ? 3% +5.6% 44508009 ? 6% numa-meminfo.node0.MemFree
65675508 +0.0% 65675508 numa-meminfo.node0.MemTotal
23514536 ? 6% -10.0% 21167497 ? 14% numa-meminfo.node0.MemUsed
729.83 ? 37% +11.6% 814.33 ? 36% numa-meminfo.node0.Mlocked
3541 ? 16% -16.2% 2966 ? 21% numa-meminfo.node0.PageTables
3685579 ? 8% -11.3% 3267717 ? 12% numa-meminfo.node0.SReclaimable
1611979 ? 9% -9.2% 1464088 ? 11% numa-meminfo.node0.SUnreclaim
8393 ? 22% +26.7% 10634 ? 31% numa-meminfo.node0.Shmem
5297559 ? 8% -10.7% 4731806 ? 12% numa-meminfo.node0.Slab
2281881 ? 40% +0.6% 2296450 ? 41% numa-meminfo.node0.Unevictable
92.67 ? 34% -9.5% 83.83 ? 26% numa-meminfo.node0.Writeback
12887 ? 5% +0.5% 12945 ? 4% numa-meminfo.node1.Active
12829 ? 6% +0.2% 12852 ? 4% numa-meminfo.node1.Active(anon)
58.67 ?100% +59.1% 93.33 ? 47% numa-meminfo.node1.Active(file)
424903 ? 13% +11.3% 472832 ? 14% numa-meminfo.node1.AnonHugePages
520841 ? 12% +11.8% 582132 ? 13% numa-meminfo.node1.AnonPages
1132273 ? 7% +9.2% 1236266 ? 9% numa-meminfo.node1.AnonPages.max
0.17 ?223% +1100.0% 2.00 numa-meminfo.node1.Dirty
12911198 ? 7% +12.6% 14536604 ? 15% numa-meminfo.node1.FilePages
12937503 ? 10% +13.1% 14638386 ? 10% numa-meminfo.node1.Inactive
536142 ? 11% +11.2% 596059 ? 13% numa-meminfo.node1.Inactive(anon)
12401360 ? 10% +13.2% 14042326 ? 10% numa-meminfo.node1.Inactive(file)
3250622 ? 9% +13.1% 3676292 ? 10% numa-meminfo.node1.KReclaimable
8580 ? 5% -1.3% 8469 ? 5% numa-meminfo.node1.KernelStack
13202 ? 77% -1.2% 13041 ? 75% numa-meminfo.node1.Mapped
46697463 ? 3% -5.2% 44253565 ? 6% numa-meminfo.node1.MemFree
66008375 +0.0% 66008378 numa-meminfo.node1.MemTotal
19310910 ? 7% +12.7% 21754812 ? 13% numa-meminfo.node1.MemUsed
125.50 ?223% +10.1% 138.17 ?223% numa-meminfo.node1.Mlocked
3108 ? 19% +18.5% 3682 ? 18% numa-meminfo.node1.PageTables
3250622 ? 9% +13.1% 3676292 ? 10% numa-meminfo.node1.SReclaimable
1410450 ? 9% +15.1% 1623451 ? 10% numa-meminfo.node1.SUnreclaim
27969 ? 2% -3.7% 26921 ? 13% numa-meminfo.node1.Shmem
4661072 ? 9% +13.7% 5299743 ? 10% numa-meminfo.node1.Slab
481806 ?192% -3.0% 467247 ?202% numa-meminfo.node1.Unevictable
86.17 ? 24% -39.5% 52.17 ? 43% numa-meminfo.node1.Writeback
529.00 ? 16% -0.1% 528.50 ? 14% proc-vmstat.direct_map_level2_splits
3.17 ? 11% +5.3% 3.33 ? 22% proc-vmstat.direct_map_level3_splits
4046 ? 3% +5.9% 4283 ? 2% proc-vmstat.nr_active_anon
28.67 ? 9% +4.7% 30.00 ? 6% proc-vmstat.nr_active_file
283271 -0.2% 282606 proc-vmstat.nr_anon_pages
455.50 ? 2% -1.9% 447.00 proc-vmstat.nr_anon_transparent_hugepages
13107200 +0.0% 13107200 proc-vmstat.nr_dirtied
0.00 +8.3e+101% 0.83 ? 44% proc-vmstat.nr_dirty
2860915 -0.1% 2859290 proc-vmstat.nr_dirty_background_threshold
5728827 -0.1% 5725572 proc-vmstat.nr_dirty_threshold
7291747 +0.2% 7302802 proc-vmstat.nr_file_pages
22213523 -0.1% 22186520 proc-vmstat.nr_free_pages
288270 -0.2% 287611 proc-vmstat.nr_inactive_anon
6591754 +0.2% 6602477 proc-vmstat.nr_inactive_file
0.00 -100.0% 0.00 proc-vmstat.nr_isolated_anon
17483 -0.2% 17456 proc-vmstat.nr_kernel_stack
10388 ? 2% +0.8% 10470 ? 2% proc-vmstat.nr_mapped
213.17 +11.6% 237.83 proc-vmstat.nr_mlock
1662 -0.0% 1662 proc-vmstat.nr_page_table_pages
9071 ? 3% +3.6% 9398 ? 4% proc-vmstat.nr_shmem
1734251 +0.1% 1736707 proc-vmstat.nr_slab_reclaimable
755741 +2.2% 772205 proc-vmstat.nr_slab_unreclaimable
690921 +0.0% 690924 proc-vmstat.nr_unevictable
10.33 ? 38% -46.8% 5.50 ? 47% proc-vmstat.nr_writeback
13107200 +0.0% 13107200 proc-vmstat.nr_written
4046 ? 3% +5.9% 4283 ? 2% proc-vmstat.nr_zone_active_anon
28.67 ? 9% +4.7% 30.00 ? 6% proc-vmstat.nr_zone_active_file
288270 -0.2% 287611 proc-vmstat.nr_zone_inactive_anon
6591754 +0.2% 6602477 proc-vmstat.nr_zone_inactive_file
690921 +0.0% 690924 proc-vmstat.nr_zone_unevictable
10.17 ? 39% -47.5% 5.33 ? 53% proc-vmstat.nr_zone_write_pending
32506 ? 17% +17.1% 38050 ? 13% proc-vmstat.numa_hint_faults
16094 ? 20% +13.9% 18324 ? 40% proc-vmstat.numa_hint_faults_local
16368461 +0.5% 16457850 proc-vmstat.numa_hit
5897 ? 6% +11.0% 6547 ? 2% proc-vmstat.numa_huge_pte_updates
0.00 -100.0% 0.00 proc-vmstat.numa_interleave
16281479 +0.5% 16370518 proc-vmstat.numa_local
87048 +0.0% 87085 proc-vmstat.numa_other
108881 ? 11% +29.3% 140739 ? 12% proc-vmstat.numa_pages_migrated
3215676 ? 6% +11.5% 3584341 ? 2% proc-vmstat.numa_pte_updates
187209 ? 2% +7.6% 201389 ? 3% proc-vmstat.pgactivate
23690780 +0.6% 23832573 proc-vmstat.pgalloc_normal
1463764 +6.8% 1562761 ? 3% proc-vmstat.pgfault
23621244 +0.7% 23785769 proc-vmstat.pgfree
108881 ? 11% +29.3% 140739 ? 12% proc-vmstat.pgmigrate_success
82.00 +0.0% 82.00 proc-vmstat.pgpgin
1.094e+08 +5.2% 1.151e+08 proc-vmstat.pgpgout
61969 ? 3% +9.1% 67613 ? 4% proc-vmstat.pgreuse
74.67 ? 13% -12.1% 65.67 ? 7% proc-vmstat.thp_collapse_alloc
0.17 ?223% +0.0% 0.17 ?223% proc-vmstat.thp_deferred_split_page
6412 -0.7% 6369 proc-vmstat.thp_fault_alloc
184.67 ? 11% +32.9% 245.33 ? 12% proc-vmstat.thp_migration_success
0.17 ?223% +100.0% 0.33 ?223% proc-vmstat.thp_split_pmd
0.00 -100.0% 0.00 proc-vmstat.thp_zero_page_alloc
364.17 +0.4% 365.50 proc-vmstat.unevictable_pgs_culled
607.50 +0.1% 607.83 proc-vmstat.unevictable_pgs_mlocked
3.50 ? 14% +9.5% 3.83 ? 17% proc-vmstat.unevictable_pgs_munlocked
3.50 ? 14% +9.5% 3.83 ? 17% proc-vmstat.unevictable_pgs_rescued
3688704 +6.3% 3920384 proc-vmstat.unevictable_pgs_scanned
843.17 ? 15% +24.2% 1047 ? 13% numa-vmstat.node0.nr_active_anon
14.00 ?101% -52.4% 6.67 ?175% numa-vmstat.node0.nr_active_file
152905 ? 10% -10.4% 136947 ? 14% numa-vmstat.node0.nr_anon_pages
247.50 ? 13% -12.8% 215.83 ? 14% numa-vmstat.node0.nr_anon_transparent_hugepages
7032437 ? 8% -13.5% 6085836 ? 14% numa-vmstat.node0.nr_dirtied
0.00 -100.0% 0.00 numa-vmstat.node0.nr_dirty
4062724 ? 5% -9.8% 3665049 ? 15% numa-vmstat.node0.nr_file_pages
10541235 ? 3% +5.6% 11128287 ? 6% numa-vmstat.node0.nr_free_pages
154095 ? 10% -10.1% 138489 ? 14% numa-vmstat.node0.nr_inactive_anon
3490169 ? 9% -11.5% 3088299 ? 12% numa-vmstat.node0.nr_inactive_file
0.17 ?223% -100.0% 0.00 numa-vmstat.node0.nr_isolated_anon
8900 ? 5% +0.9% 8984 ? 4% numa-vmstat.node0.nr_kernel_stack
7117 ? 32% +1.8% 7244 ? 36% numa-vmstat.node0.nr_mapped
182.17 ? 38% +11.6% 203.33 ? 37% numa-vmstat.node0.nr_mlock
885.00 ? 16% -16.3% 741.17 ? 21% numa-vmstat.node0.nr_page_table_pages
2099 ? 22% +26.8% 2660 ? 32% numa-vmstat.node0.nr_shmem
921217 ? 8% -11.3% 816697 ? 12% numa-vmstat.node0.nr_slab_reclaimable
402922 ? 9% -9.2% 365912 ? 11% numa-vmstat.node0.nr_slab_unreclaimable
570470 ? 40% +0.6% 574112 ? 41% numa-vmstat.node0.nr_unevictable
20.67 ? 30% +8.9% 22.50 ? 12% numa-vmstat.node0.nr_writeback
7032437 ? 8% -13.5% 6085836 ? 14% numa-vmstat.node0.nr_written
843.17 ? 15% +24.2% 1047 ? 13% numa-vmstat.node0.nr_zone_active_anon
14.00 ?101% -52.4% 6.67 ?175% numa-vmstat.node0.nr_zone_active_file
154095 ? 10% -10.1% 138489 ? 14% numa-vmstat.node0.nr_zone_inactive_anon
3490169 ? 9% -11.5% 3088299 ? 12% numa-vmstat.node0.nr_zone_inactive_file
570470 ? 40% +0.6% 574112 ? 41% numa-vmstat.node0.nr_zone_unevictable
20.83 ? 29% +7.2% 22.33 ? 13% numa-vmstat.node0.nr_zone_write_pending
8614901 ? 7% -11.3% 7642926 ? 12% numa-vmstat.node0.numa_hit
0.00 -100.0% 0.00 numa-vmstat.node0.numa_interleave
8570374 ? 7% -11.2% 7606409 ? 12% numa-vmstat.node0.numa_local
44588 ? 80% -18.5% 36355 ? 74% numa-vmstat.node0.numa_other
3209 ? 6% +0.1% 3214 ? 4% numa-vmstat.node1.nr_active_anon
14.67 ?100% +59.1% 23.33 ? 47% numa-vmstat.node1.nr_active_file
130189 ? 12% +11.8% 145506 ? 13% numa-vmstat.node1.nr_anon_pages
207.00 ? 13% +11.4% 230.50 ? 14% numa-vmstat.node1.nr_anon_transparent_hugepages
6074762 ? 9% +15.6% 7021363 ? 12% numa-vmstat.node1.nr_dirtied
0.00 -100.0% 0.00 numa-vmstat.node1.nr_dirty
3227189 ? 7% +12.6% 3633215 ? 15% numa-vmstat.node1.nr_file_pages
11675460 ? 3% -5.2% 11064889 ? 6% numa-vmstat.node1.nr_free_pages
134022 ? 11% +11.2% 148997 ? 13% numa-vmstat.node1.nr_inactive_anon
3099719 ? 10% +13.2% 3509635 ? 10% numa-vmstat.node1.nr_inactive_file
8583 ? 5% -1.3% 8469 ? 5% numa-vmstat.node1.nr_kernel_stack
3312 ? 76% -1.2% 3273 ? 74% numa-vmstat.node1.nr_mapped
31.33 ?223% +10.6% 34.67 ?223% numa-vmstat.node1.nr_mlock
776.83 ? 19% +18.4% 919.83 ? 18% numa-vmstat.node1.nr_page_table_pages
7002 ? 2% -3.8% 6740 ? 13% numa-vmstat.node1.nr_shmem
812484 ? 9% +13.1% 918801 ? 10% numa-vmstat.node1.nr_slab_reclaimable
352545 ? 9% +15.1% 405734 ? 10% numa-vmstat.node1.nr_slab_unreclaimable
120451 ?192% -3.0% 116811 ?202% numa-vmstat.node1.nr_unevictable
16.67 ? 18% -29.0% 11.83 ? 57% numa-vmstat.node1.nr_writeback
6074762 ? 9% +15.6% 7021363 ? 12% numa-vmstat.node1.nr_written
3209 ? 6% +0.1% 3214 ? 4% numa-vmstat.node1.nr_zone_active_anon
14.67 ?100% +59.1% 23.33 ? 47% numa-vmstat.node1.nr_zone_active_file
134022 ? 11% +11.2% 148996 ? 13% numa-vmstat.node1.nr_zone_inactive_anon
3099719 ? 10% +13.2% 3509635 ? 10% numa-vmstat.node1.nr_zone_inactive_file
120451 ?192% -3.0% 116811 ?202% numa-vmstat.node1.nr_zone_unevictable
16.67 ? 18% -28.0% 12.00 ? 55% numa-vmstat.node1.nr_zone_write_pending
7752385 ? 7% +13.7% 8813351 ? 10% numa-vmstat.node1.numa_hit
0.00 -100.0% 0.00 numa-vmstat.node1.numa_interleave
7709929 ? 7% +13.7% 8762537 ? 10% numa-vmstat.node1.numa_local
42460 ? 84% +19.5% 50730 ? 53% numa-vmstat.node1.numa_other
16.89 ? 23% -35.2% 10.94 perf-stat.i.MPKI
8.488e+08 +22.8% 1.043e+09 perf-stat.i.branch-instructions
1.90 ? 19% -0.6 1.34 perf-stat.i.branch-miss-rate%
17055381 ? 16% -11.0% 15176777 perf-stat.i.branch-misses
19.79 ? 15% +2.4 22.16 ? 2% perf-stat.i.cache-miss-rate%
12682047 ? 2% -1.4% 12509329 ? 2% perf-stat.i.cache-misses
69981715 ? 21% -14.9% 59520868 ? 2% perf-stat.i.cache-references
179965 +3.1% 185547 perf-stat.i.context-switches
2.22 ? 4% -22.7% 1.72 perf-stat.i.cpi
96009 -0.0% 96001 perf-stat.i.cpu-clock
9.339e+09 ? 5% +0.1% 9.35e+09 perf-stat.i.cpu-cycles
137.66 ? 3% +5.9% 145.72 ? 5% perf-stat.i.cpu-migrations
765.10 ? 3% +2.2% 781.69 ? 2% perf-stat.i.cycles-between-cache-misses
0.12 ? 25% -0.0 0.08 ? 9% perf-stat.i.dTLB-load-miss-rate%
1342146 ? 24% -13.8% 1156830 ? 9% perf-stat.i.dTLB-load-misses
1.179e+09 +32.2% 1.558e+09 perf-stat.i.dTLB-loads
0.02 ? 27% -0.0 0.01 ? 10% perf-stat.i.dTLB-store-miss-rate%
127241 ? 26% -10.6% 113722 ? 10% perf-stat.i.dTLB-store-misses
6.124e+08 +35.6% 8.305e+08 perf-stat.i.dTLB-stores
29.20 ? 2% -1.4 27.83 perf-stat.i.iTLB-load-miss-rate%
2760483 ? 2% -0.1% 2757788 perf-stat.i.iTLB-load-misses
6740620 +6.8% 7196964 perf-stat.i.iTLB-loads
4.364e+09 +27.3% 5.553e+09 perf-stat.i.instructions
1675 ? 2% +24.7% 2089 perf-stat.i.instructions-per-iTLB-miss
0.46 ? 4% +27.5% 0.59 perf-stat.i.ipc
0.14 ? 36% -27.7% 0.10 ? 44% perf-stat.i.major-faults
0.10 ? 5% +0.1% 0.10 perf-stat.i.metric.GHz
696.40 ? 30% +8.7% 757.02 perf-stat.i.metric.K/sec
27.67 +29.2% 35.75 perf-stat.i.metric.M/sec
2639 +1.0% 2665 ? 3% perf-stat.i.minor-faults
87.04 +0.3 87.35 perf-stat.i.node-load-miss-rate%
3509882 ? 3% -0.6% 3487922 ? 3% perf-stat.i.node-load-misses
623292 ? 2% -4.8% 593074 ? 4% perf-stat.i.node-loads
73.20 +1.9 75.09 perf-stat.i.node-store-miss-rate%
1881398 ? 3% -2.1% 1841853 ? 3% perf-stat.i.node-store-misses
703862 -11.7% 621333 ? 4% perf-stat.i.node-stores
2640 +1.0% 2665 ? 3% perf-stat.i.page-faults
96009 -0.0% 96001 perf-stat.i.task-clock
16.04 ? 21% -33.2% 10.72 perf-stat.overall.MPKI
2.01 ? 16% -0.6 1.46 perf-stat.overall.branch-miss-rate%
18.74 ? 15% +2.3 21.03 ? 2% perf-stat.overall.cache-miss-rate%
2.14 ? 4% -21.3% 1.68 perf-stat.overall.cpi
736.02 ? 3% +1.6% 747.51 perf-stat.overall.cycles-between-cache-misses
0.11 ? 25% -0.0 0.07 ? 9% perf-stat.overall.dTLB-load-miss-rate%
0.02 ? 26% -0.0 0.01 ? 9% perf-stat.overall.dTLB-store-miss-rate%
29.06 ? 2% -1.4 27.70 perf-stat.overall.iTLB-load-miss-rate%
1582 ? 2% +27.3% 2014 perf-stat.overall.instructions-per-iTLB-miss
0.47 ? 4% +26.8% 0.59 perf-stat.overall.ipc
84.90 +0.6 85.45 perf-stat.overall.node-load-miss-rate%
72.76 +2.0 74.75 perf-stat.overall.node-store-miss-rate%
8.471e+08 +22.9% 1.041e+09 perf-stat.ps.branch-instructions
17022588 ? 16% -11.0% 15150727 perf-stat.ps.branch-misses
12655694 ? 2% -1.3% 12487159 ? 2% perf-stat.ps.cache-misses
69826950 ? 21% -14.9% 59408840 ? 2% perf-stat.ps.cache-references
179546 +3.1% 185166 perf-stat.ps.context-switches
95793 +0.0% 95813 perf-stat.ps.cpu-clock
9.319e+09 ? 5% +0.1% 9.332e+09 perf-stat.ps.cpu-cycles
137.38 ? 3% +5.9% 145.46 ? 5% perf-stat.ps.cpu-migrations
1339043 ? 24% -13.8% 1154794 ? 9% perf-stat.ps.dTLB-load-misses
1.176e+09 +32.2% 1.555e+09 perf-stat.ps.dTLB-loads
126964 ? 26% -10.6% 113513 ? 10% perf-stat.ps.dTLB-store-misses
6.111e+08 +35.6% 8.289e+08 perf-stat.ps.dTLB-stores
2754174 ? 2% -0.1% 2752232 perf-stat.ps.iTLB-load-misses
6725013 +6.8% 7182318 perf-stat.ps.iTLB-loads
4.355e+09 +27.3% 5.543e+09 perf-stat.ps.instructions
0.14 ? 36% -27.7% 0.10 ? 44% perf-stat.ps.major-faults
2636 +0.9% 2661 ? 3% perf-stat.ps.minor-faults
3502327 ? 3% -0.6% 3481483 ? 3% perf-stat.ps.node-load-misses
622410 ? 2% -4.8% 592386 ? 4% perf-stat.ps.node-loads
1877163 ? 3% -2.1% 1838328 ? 3% perf-stat.ps.node-store-misses
702432 -11.7% 620245 ? 4% perf-stat.ps.node-stores
2636 +0.9% 2661 ? 3% perf-stat.ps.page-faults
95793 +0.0% 95813 perf-stat.ps.task-clock
2.102e+12 +35.6% 2.849e+12 perf-stat.total.instructions
232.25 ? 98% -54.2% 106.42 ? 81% sched_debug.cfs_rq:/.MIN_vruntime.avg
11623 ? 84% -35.8% 7462 ? 56% sched_debug.cfs_rq:/.MIN_vruntime.max
0.00 +0.0% 0.00 sched_debug.cfs_rq:/.MIN_vruntime.min
1539 ? 87% -45.2% 843.05 ? 62% sched_debug.cfs_rq:/.MIN_vruntime.stddev
0.06 ? 19% +0.4% 0.06 ? 9% sched_debug.cfs_rq:/.h_nr_running.avg
1.08 ? 5% -2.1% 1.06 ? 8% sched_debug.cfs_rq:/.h_nr_running.max
0.23 ? 9% -0.5% 0.23 ? 5% sched_debug.cfs_rq:/.h_nr_running.stddev
147300 ? 69% -64.7% 51944 ? 93% sched_debug.cfs_rq:/.load.avg
9345951 ? 75% -58.8% 3849138 ?122% sched_debug.cfs_rq:/.load.max
1126104 ? 71% -63.0% 416308 ?114% sched_debug.cfs_rq:/.load.stddev
77.40 ? 13% -3.1% 74.96 ? 9% sched_debug.cfs_rq:/.load_avg.avg
875.18 ? 11% +1.2% 885.81 ? 10% sched_debug.cfs_rq:/.load_avg.max
172.61 ? 14% -5.7% 162.83 ? 8% sched_debug.cfs_rq:/.load_avg.stddev
232.25 ? 98% -54.2% 106.42 ? 81% sched_debug.cfs_rq:/.max_vruntime.avg
11623 ? 84% -35.8% 7462 ? 56% sched_debug.cfs_rq:/.max_vruntime.max
0.00 +0.0% 0.00 sched_debug.cfs_rq:/.max_vruntime.min
1539 ? 87% -45.2% 843.05 ? 62% sched_debug.cfs_rq:/.max_vruntime.stddev
51805 ? 17% +1.5% 52590 ? 10% sched_debug.cfs_rq:/.min_vruntime.avg
92816 ? 15% +9.7% 101859 ? 15% sched_debug.cfs_rq:/.min_vruntime.max
30832 ? 26% -15.8% 25952 ? 12% sched_debug.cfs_rq:/.min_vruntime.min
12208 ? 5% +19.2% 14550 ? 9% sched_debug.cfs_rq:/.min_vruntime.stddev
0.06 ? 19% +0.4% 0.06 ? 9% sched_debug.cfs_rq:/.nr_running.avg
1.08 ? 5% -2.1% 1.06 ? 8% sched_debug.cfs_rq:/.nr_running.max
0.23 ? 9% -0.4% 0.23 ? 5% sched_debug.cfs_rq:/.nr_running.stddev
8.03 ? 26% +19.0% 9.56 ? 34% sched_debug.cfs_rq:/.removed.load_avg.avg
246.46 ? 5% -20.2% 196.74 ? 64% sched_debug.cfs_rq:/.removed.load_avg.max
40.83 ? 10% -1.9% 40.04 ? 44% sched_debug.cfs_rq:/.removed.load_avg.stddev
3.37 ? 26% +23.3% 4.16 ? 27% sched_debug.cfs_rq:/.removed.runnable_avg.avg
124.54 ? 5% -21.1% 98.32 ? 61% sched_debug.cfs_rq:/.removed.runnable_avg.max
18.37 ? 11% -0.5% 18.28 ? 41% sched_debug.cfs_rq:/.removed.runnable_avg.stddev
3.37 ? 26% +23.3% 4.16 ? 27% sched_debug.cfs_rq:/.removed.util_avg.avg
124.54 ? 5% -21.1% 98.32 ? 61% sched_debug.cfs_rq:/.removed.util_avg.max
18.37 ? 11% -0.5% 18.28 ? 41% sched_debug.cfs_rq:/.removed.util_avg.stddev
61.22 ? 6% -3.1% 59.32 ? 5% sched_debug.cfs_rq:/.runnable_avg.avg
705.72 ? 12% -4.5% 674.01 ? 9% sched_debug.cfs_rq:/.runnable_avg.max
110.12 ? 17% -7.3% 102.05 ? 4% sched_debug.cfs_rq:/.runnable_avg.stddev
-6685 +57.2% -10508 sched_debug.cfs_rq:/.spread0.avg
34326 ? 21% +12.9% 38762 ? 40% sched_debug.cfs_rq:/.spread0.max
-27658 +34.3% -37151 sched_debug.cfs_rq:/.spread0.min
12208 ? 5% +19.2% 14552 ? 9% sched_debug.cfs_rq:/.spread0.stddev
60.09 ? 4% -1.5% 59.18 ? 5% sched_debug.cfs_rq:/.util_avg.avg
666.17 ? 7% +0.4% 668.93 ? 9% sched_debug.cfs_rq:/.util_avg.max
105.30 ? 12% -3.5% 101.59 ? 4% sched_debug.cfs_rq:/.util_avg.stddev
7.34 ? 56% -20.9% 5.81 ? 14% sched_debug.cfs_rq:/.util_est_enqueued.avg
219.47 ? 25% -14.1% 188.59 ? 11% sched_debug.cfs_rq:/.util_est_enqueued.max
32.57 ? 36% -18.6% 26.53 ? 9% sched_debug.cfs_rq:/.util_est_enqueued.stddev
777337 ? 3% -4.5% 742049 sched_debug.cpu.avg_idle.avg
1059023 ? 3% -0.5% 1053731 sched_debug.cpu.avg_idle.max
97795 ? 36% -17.7% 80512 ? 12% sched_debug.cpu.avg_idle.min
252849 ? 7% +14.2% 288829 ? 5% sched_debug.cpu.avg_idle.stddev
286042 ? 4% +3.5% 296148 ? 4% sched_debug.cpu.clock.avg
286045 ? 4% +3.5% 296152 ? 4% sched_debug.cpu.clock.max
286037 ? 4% +3.5% 296144 ? 4% sched_debug.cpu.clock.min
2.32 ? 2% +2.5% 2.37 ? 3% sched_debug.cpu.clock.stddev
282473 ? 4% +3.5% 292332 ? 4% sched_debug.cpu.clock_task.avg
283053 ? 4% +3.5% 292917 ? 4% sched_debug.cpu.clock_task.max
267328 ? 5% +3.7% 277163 ? 5% sched_debug.cpu.clock_task.min
1584 -0.1% 1583 sched_debug.cpu.clock_task.stddev
340.71 ? 17% +20.2% 409.64 ? 12% sched_debug.cpu.curr->pid.avg
10482 ? 3% +4.0% 10903 ? 3% sched_debug.cpu.curr->pid.max
1690 ? 11% +10.4% 1866 ? 8% sched_debug.cpu.curr->pid.stddev
501554 -0.0% 501333 sched_debug.cpu.max_idle_balance_cost.avg
537336 ? 2% -1.0% 531909 sched_debug.cpu.max_idle_balance_cost.max
500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.min
5864 ? 48% -11.0% 5221 ? 30% sched_debug.cpu.max_idle_balance_cost.stddev
4294 +0.0% 4294 sched_debug.cpu.next_balance.avg
4294 +0.0% 4294 sched_debug.cpu.next_balance.max
4294 +0.0% 4294 sched_debug.cpu.next_balance.min
0.00 ? 23% -9.6% 0.00 ? 21% sched_debug.cpu.next_balance.stddev
0.05 ? 14% +13.1% 0.05 ? 9% sched_debug.cpu.nr_running.avg
1.10 ? 7% -3.8% 1.06 ? 8% sched_debug.cpu.nr_running.max
0.20 ? 11% +5.3% 0.21 ? 6% sched_debug.cpu.nr_running.stddev
433539 ? 5% +7.9% 467704 ? 5% sched_debug.cpu.nr_switches.avg
762394 ? 7% +13.1% 862282 ? 7% sched_debug.cpu.nr_switches.max
89087 ? 38% +8.4% 96553 ? 62% sched_debug.cpu.nr_switches.min
172936 ? 9% +11.6% 192989 ? 11% sched_debug.cpu.nr_switches.stddev
2.107e+09 ? 4% -2.7% 2.051e+09 ? 8% sched_debug.cpu.nr_uninterruptible.avg
4.295e+09 +0.0% 4.295e+09 sched_debug.cpu.nr_uninterruptible.max
2.141e+09 -0.4% 2.134e+09 sched_debug.cpu.nr_uninterruptible.stddev
286038 ? 4% +3.5% 296145 ? 4% sched_debug.cpu_clk
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.avg
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.max
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.min
4.295e+09 +0.0% 4.295e+09 sched_debug.jiffies
285431 ? 5% +3.5% 295538 ? 4% sched_debug.ktime
950.00 +0.0% 950.00 sched_debug.rt_rq:.rt_runtime.avg
950.00 +0.0% 950.00 sched_debug.rt_rq:.rt_runtime.max
950.00 +0.0% 950.00 sched_debug.rt_rq:.rt_runtime.min
286733 ? 4% +3.5% 296839 ? 4% sched_debug.sched_clk
1.00 +0.0% 1.00 sched_debug.sched_clock_stable()
58611259 +0.0% 58611259 sched_debug.sysctl_sched.sysctl_sched_features
0.75 +0.0% 0.75 sched_debug.sysctl_sched.sysctl_sched_idle_min_granularity
24.00 +0.0% 24.00 sched_debug.sysctl_sched.sysctl_sched_latency
3.00 +0.0% 3.00 sched_debug.sysctl_sched.sysctl_sched_min_granularity
1.00 +0.0% 1.00 sched_debug.sysctl_sched.sysctl_sched_tunable_scaling
4.00 +0.0% 4.00 sched_debug.sysctl_sched.sysctl_sched_wakeup_granularity
200518 -0.2% 200215 slabinfo.Acpi-Operand.active_objs
3585 -0.2% 3578 slabinfo.Acpi-Operand.active_slabs
200769 -0.2% 200411 slabinfo.Acpi-Operand.num_objs
3585 -0.2% 3578 slabinfo.Acpi-Operand.num_slabs
2680 ? 8% +3.4% 2771 ? 7% slabinfo.Acpi-Parse.active_objs
36.17 ? 8% +4.6% 37.83 ? 7% slabinfo.Acpi-Parse.active_slabs
2680 ? 8% +3.4% 2771 ? 7% slabinfo.Acpi-Parse.num_objs
36.17 ? 8% +4.6% 37.83 ? 7% slabinfo.Acpi-Parse.num_slabs
1899 ? 6% +3.2% 1961 ? 7% slabinfo.Acpi-State.active_objs
37.00 ? 6% +3.2% 38.17 ? 8% slabinfo.Acpi-State.active_slabs
1899 ? 6% +3.2% 1961 ? 7% slabinfo.Acpi-State.num_objs
37.00 ? 6% +3.2% 38.17 ? 8% slabinfo.Acpi-State.num_slabs
36.00 +0.0% 36.00 slabinfo.DCCP.active_objs
2.00 +0.0% 2.00 slabinfo.DCCP.active_slabs
36.00 +0.0% 36.00 slabinfo.DCCP.num_objs
2.00 +0.0% 2.00 slabinfo.DCCP.num_slabs
34.00 +0.0% 34.00 slabinfo.DCCPv6.active_objs
2.00 +0.0% 2.00 slabinfo.DCCPv6.active_slabs
34.00 +0.0% 34.00 slabinfo.DCCPv6.num_objs
2.00 +0.0% 2.00 slabinfo.DCCPv6.num_slabs
192.00 +0.0% 192.00 slabinfo.RAW.active_objs
6.00 +0.0% 6.00 slabinfo.RAW.active_slabs
192.00 +0.0% 192.00 slabinfo.RAW.num_objs
6.00 +0.0% 6.00 slabinfo.RAW.num_slabs
147.33 ? 8% -2.9% 143.00 ? 9% slabinfo.RAWv6.active_objs
5.67 ? 8% -2.9% 5.50 ? 9% slabinfo.RAWv6.active_slabs
147.33 ? 8% -2.9% 143.00 ? 9% slabinfo.RAWv6.num_objs
5.67 ? 8% -2.9% 5.50 ? 9% slabinfo.RAWv6.num_slabs
53.67 ? 9% +4.3% 56.00 slabinfo.TCP.active_objs
3.83 ? 9% +4.3% 4.00 slabinfo.TCP.active_slabs
53.67 ? 9% +4.3% 56.00 slabinfo.TCP.num_objs
3.83 ? 9% +4.3% 4.00 slabinfo.TCP.num_slabs
36.83 ? 13% +5.9% 39.00 slabinfo.TCPv6.active_objs
2.83 ? 13% +5.9% 3.00 slabinfo.TCPv6.active_slabs
36.83 ? 13% +5.9% 39.00 slabinfo.TCPv6.num_objs
2.83 ? 13% +5.9% 3.00 slabinfo.TCPv6.num_slabs
112.00 ? 10% +7.1% 120.00 slabinfo.UDPv6.active_objs
4.67 ? 10% +7.1% 5.00 slabinfo.UDPv6.active_slabs
112.00 ? 10% +7.1% 120.00 slabinfo.UDPv6.num_objs
4.67 ? 10% +7.1% 5.00 slabinfo.UDPv6.num_slabs
1838 ? 10% -0.2% 1835 ? 10% slabinfo.UNIX.active_objs
60.67 ? 11% +0.3% 60.83 ? 10% slabinfo.UNIX.active_slabs
1838 ? 10% -0.2% 1835 ? 10% slabinfo.UNIX.num_objs
60.67 ? 11% +0.3% 60.83 ? 10% slabinfo.UNIX.num_slabs
17427 ? 2% -2.4% 17008 ? 2% slabinfo.anon_vma.active_objs
450.00 ? 3% -2.3% 439.50 ? 2% slabinfo.anon_vma.active_slabs
17568 ? 3% -2.4% 17151 ? 2% slabinfo.anon_vma.num_objs
450.00 ? 3% -2.3% 439.50 ? 2% slabinfo.anon_vma.num_slabs
21718 ? 2% -2.2% 21244 ? 2% slabinfo.anon_vma_chain.active_objs
344.17 ? 2% -2.6% 335.17 ? 2% slabinfo.anon_vma_chain.active_slabs
22042 ? 2% -2.5% 21483 ? 2% slabinfo.anon_vma_chain.num_objs
344.17 ? 2% -2.6% 335.17 ? 2% slabinfo.anon_vma_chain.num_slabs
213.33 ? 16% -15.6% 180.00 ? 16% slabinfo.bdev_cache.active_objs
10.67 ? 16% -15.6% 9.00 ? 16% slabinfo.bdev_cache.active_slabs
213.33 ? 16% -15.6% 180.00 ? 16% slabinfo.bdev_cache.num_objs
10.67 ? 16% -15.6% 9.00 ? 16% slabinfo.bdev_cache.num_slabs
3323 +0.2% 3330 slabinfo.bfq_io_cq.active_objs
94.50 +0.2% 94.67 slabinfo.bfq_io_cq.active_slabs
3323 +0.2% 3330 slabinfo.bfq_io_cq.num_objs
94.50 +0.2% 94.67 slabinfo.bfq_io_cq.num_slabs
3684 ? 6% -5.7% 3473 ? 4% slabinfo.bio-120.active_objs
114.83 ? 6% -6.0% 108.00 ? 4% slabinfo.bio-120.active_slabs
3687 ? 6% -5.8% 3473 ? 4% slabinfo.bio-120.num_objs
114.83 ? 6% -6.0% 108.00 ? 4% slabinfo.bio-120.num_slabs
6294 ? 5% -4.2% 6027 ? 9% slabinfo.bio-184.active_objs
155.00 ? 5% -4.5% 148.00 ? 9% slabinfo.bio-184.active_slabs
6519 ? 5% -4.3% 6241 ? 9% slabinfo.bio-184.num_objs
155.00 ? 5% -4.5% 148.00 ? 9% slabinfo.bio-184.num_slabs
32.00 +0.0% 32.00 slabinfo.bio-216.active_objs
1.00 +0.0% 1.00 slabinfo.bio-216.active_slabs
32.00 +0.0% 32.00 slabinfo.bio-216.num_objs
1.00 +0.0% 1.00 slabinfo.bio-216.num_slabs
3053 +0.4% 3065 slabinfo.bio-248.active_objs
94.83 +0.9% 95.67 slabinfo.bio-248.active_slabs
3053 +0.4% 3065 slabinfo.bio-248.num_objs
94.83 +0.9% 95.67 slabinfo.bio-248.num_slabs
126.00 ? 19% -5.6% 119.00 ? 24% slabinfo.bio-344.active_objs
3.00 ? 19% -5.6% 2.83 ? 24% slabinfo.bio-344.active_slabs
126.00 ? 19% -5.6% 119.00 ? 24% slabinfo.bio-344.num_objs
3.00 ? 19% -5.6% 2.83 ? 24% slabinfo.bio-344.num_slabs
170.00 +0.0% 170.00 slabinfo.bio_post_read_ctx.active_objs
2.00 +0.0% 2.00 slabinfo.bio_post_read_ctx.active_slabs
170.00 +0.0% 170.00 slabinfo.bio_post_read_ctx.num_objs
2.00 +0.0% 2.00 slabinfo.bio_post_read_ctx.num_slabs
125.33 ? 34% -12.8% 109.33 ? 15% slabinfo.biovec-128.active_objs
7.83 ? 34% -12.8% 6.83 ? 15% slabinfo.biovec-128.active_slabs
125.33 ? 34% -12.8% 109.33 ? 15% slabinfo.biovec-128.num_objs
7.83 ? 34% -12.8% 6.83 ? 15% slabinfo.biovec-128.num_slabs
560.00 ? 12% +1.9% 570.67 ? 14% slabinfo.biovec-64.active_objs
17.50 ? 12% +1.9% 17.83 ? 14% slabinfo.biovec-64.active_slabs
560.00 ? 12% +1.9% 570.67 ? 14% slabinfo.biovec-64.num_objs
17.50 ? 12% +1.9% 17.83 ? 14% slabinfo.biovec-64.num_slabs
824.17 +0.6% 829.17 slabinfo.biovec-max.active_objs
102.50 +1.0% 103.50 slabinfo.biovec-max.active_slabs
824.17 +0.6% 829.50 slabinfo.biovec-max.num_objs
102.50 +1.0% 103.50 slabinfo.biovec-max.num_slabs
88.67 ? 11% +0.0% 88.67 ? 21% slabinfo.btrfs_inode.active_objs
3.17 ? 11% +0.0% 3.17 ? 21% slabinfo.btrfs_inode.active_slabs
88.67 ? 11% +0.0% 88.67 ? 21% slabinfo.btrfs_inode.num_objs
3.17 ? 11% +0.0% 3.17 ? 21% slabinfo.btrfs_inode.num_slabs
364.00 ? 7% +1.8% 370.50 ? 8% slabinfo.buffer_head.active_objs
9.33 ? 7% +1.8% 9.50 ? 8% slabinfo.buffer_head.active_slabs
364.00 ? 7% +1.8% 370.50 ? 8% slabinfo.buffer_head.num_objs
9.33 ? 7% +1.8% 9.50 ? 8% slabinfo.buffer_head.num_slabs
6973 ? 2% +4.1% 7258 ? 2% slabinfo.cred_jar.active_objs
165.33 ? 2% +4.0% 172.00 ? 2% slabinfo.cred_jar.active_slabs
6973 ? 2% +4.1% 7258 ? 2% slabinfo.cred_jar.num_objs
165.33 ? 2% +4.0% 172.00 ? 2% slabinfo.cred_jar.num_slabs
117.00 +0.0% 117.00 slabinfo.dax_cache.active_objs
3.00 +0.0% 3.00 slabinfo.dax_cache.active_slabs
117.00 +0.0% 117.00 slabinfo.dax_cache.num_objs
3.00 +0.0% 3.00 slabinfo.dax_cache.num_slabs
3253295 +0.4% 3267079 slabinfo.dentry.active_objs
77485 +0.4% 77814 slabinfo.dentry.active_slabs
3254391 +0.4% 3268212 slabinfo.dentry.num_objs
77485 +0.4% 77814 slabinfo.dentry.num_slabs
30.00 +0.0% 30.00 slabinfo.dmaengine-unmap-128.active_objs
1.00 +0.0% 1.00 slabinfo.dmaengine-unmap-128.active_slabs
30.00 +0.0% 30.00 slabinfo.dmaengine-unmap-128.num_objs
1.00 +0.0% 1.00 slabinfo.dmaengine-unmap-128.num_slabs
475.83 ? 6% +9.6% 521.33 ? 7% slabinfo.dmaengine-unmap-16.active_objs
11.17 ? 8% +9.0% 12.17 ? 5% slabinfo.dmaengine-unmap-16.active_slabs
475.83 ? 6% +9.6% 521.33 ? 7% slabinfo.dmaengine-unmap-16.num_objs
11.17 ? 8% +9.0% 12.17 ? 5% slabinfo.dmaengine-unmap-16.num_slabs
15.00 +0.0% 15.00 slabinfo.dmaengine-unmap-256.active_objs
1.00 +0.0% 1.00 slabinfo.dmaengine-unmap-256.active_slabs
15.00 +0.0% 15.00 slabinfo.dmaengine-unmap-256.num_objs
1.00 +0.0% 1.00 slabinfo.dmaengine-unmap-256.num_slabs
16343 ? 7% +4.7% 17106 ? 3% slabinfo.ep_head.active_objs
63.33 ? 7% +5.3% 66.67 ? 3% slabinfo.ep_head.active_slabs
16343 ? 7% +4.7% 17106 ? 3% slabinfo.ep_head.num_objs
63.33 ? 7% +5.3% 66.67 ? 3% slabinfo.ep_head.num_slabs
3511 +0.2% 3518 slabinfo.file_lock_cache.active_objs
94.33 +0.4% 94.67 slabinfo.file_lock_cache.active_slabs
3511 +0.2% 3518 slabinfo.file_lock_cache.num_objs
94.33 +0.4% 94.67 slabinfo.file_lock_cache.num_slabs
4571 -0.1% 4568 slabinfo.files_cache.active_objs
98.67 +0.2% 98.83 slabinfo.files_cache.active_slabs
4571 -0.1% 4568 slabinfo.files_cache.num_objs
98.67 +0.2% 98.83 slabinfo.files_cache.num_slabs
12549 -0.9% 12434 ? 2% slabinfo.filp.active_objs
410.33 ? 2% -1.2% 405.33 ? 2% slabinfo.filp.active_slabs
13141 ? 2% -1.2% 12985 slabinfo.filp.num_objs
410.33 ? 2% -1.2% 405.33 ? 2% slabinfo.filp.num_slabs
2733 ? 6% +3.4% 2827 ? 12% slabinfo.fsnotify_mark_connector.active_objs
21.00 ? 5% +2.4% 21.50 ? 12% slabinfo.fsnotify_mark_connector.active_slabs
2733 ? 6% +3.4% 2827 ? 12% slabinfo.fsnotify_mark_connector.num_objs
21.00 ? 5% +2.4% 21.50 ? 12% slabinfo.fsnotify_mark_connector.num_slabs
39667 +0.1% 39699 slabinfo.ftrace_event_field.active_objs
466.33 +0.1% 466.67 slabinfo.ftrace_event_field.active_slabs
39667 +0.1% 39699 slabinfo.ftrace_event_field.num_objs
466.33 +0.1% 466.67 slabinfo.ftrace_event_field.num_slabs
56.00 +0.0% 56.00 slabinfo.fuse_request.active_objs
1.00 +0.0% 1.00 slabinfo.fuse_request.active_slabs
56.00 +0.0% 56.00 slabinfo.fuse_request.num_objs
1.00 +0.0% 1.00 slabinfo.fuse_request.num_slabs
98.00 +0.0% 98.00 slabinfo.hugetlbfs_inode_cache.active_objs
2.00 +0.0% 2.00 slabinfo.hugetlbfs_inode_cache.active_slabs
98.00 +0.0% 98.00 slabinfo.hugetlbfs_inode_cache.num_objs
2.00 +0.0% 2.00 slabinfo.hugetlbfs_inode_cache.num_slabs
98568 +0.1% 98618 slabinfo.inode_cache.active_objs
1936 +0.0% 1937 slabinfo.inode_cache.active_slabs
98785 +0.0% 98825 slabinfo.inode_cache.num_objs
1936 +0.0% 1937 slabinfo.inode_cache.num_slabs
3780 +0.3% 3793 slabinfo.ip4-frags.active_objs
93.83 +0.9% 94.67 slabinfo.ip4-frags.active_slabs
3780 +0.3% 3793 slabinfo.ip4-frags.num_objs
93.83 +0.9% 94.67 slabinfo.ip4-frags.num_slabs
4082 +0.6% 4108 slabinfo.ip6-frags.active_objs
92.50 +0.4% 92.83 slabinfo.ip6-frags.active_slabs
4082 +0.6% 4108 slabinfo.ip6-frags.num_objs
92.50 +0.4% 92.83 slabinfo.ip6-frags.num_slabs
170.33 ? 20% +21.4% 206.83 ? 13% slabinfo.ip_fib_alias.active_objs
2.33 ? 20% +21.4% 2.83 ? 13% slabinfo.ip_fib_alias.active_slabs
170.33 ? 20% +21.4% 206.83 ? 13% slabinfo.ip_fib_alias.num_objs
2.33 ? 20% +21.4% 2.83 ? 13% slabinfo.ip_fib_alias.num_slabs
198.33 ? 20% +21.4% 240.83 ? 13% slabinfo.ip_fib_trie.active_objs
2.33 ? 20% +21.4% 2.83 ? 13% slabinfo.ip_fib_trie.active_slabs
198.33 ? 20% +21.4% 240.83 ? 13% slabinfo.ip_fib_trie.num_objs
2.33 ? 20% +21.4% 2.83 ? 13% slabinfo.ip_fib_trie.num_slabs
85873 -0.2% 85680 slabinfo.kernfs_node_cache.active_objs
2683 -0.2% 2677 slabinfo.kernfs_node_cache.active_slabs
85873 -0.2% 85680 slabinfo.kernfs_node_cache.num_objs
2683 -0.2% 2677 slabinfo.kernfs_node_cache.num_slabs
2918 ? 3% -4.1% 2799 ? 4% slabinfo.khugepaged_mm_slot.active_objs
80.67 ? 3% -4.3% 77.17 ? 5% slabinfo.khugepaged_mm_slot.active_slabs
2918 ? 3% -4.1% 2799 ? 4% slabinfo.khugepaged_mm_slot.num_objs
80.67 ? 3% -4.3% 77.17 ? 5% slabinfo.khugepaged_mm_slot.num_slabs
5585 +0.3% 5599 ? 3% slabinfo.kmalloc-128.active_objs
175.83 +0.3% 176.33 ? 4% slabinfo.kmalloc-128.active_slabs
5640 +0.3% 5659 ? 4% slabinfo.kmalloc-128.num_objs
175.83 +0.3% 176.33 ? 4% slabinfo.kmalloc-128.num_slabs
3383563 +0.1% 3386226 slabinfo.kmalloc-16.active_objs
13256 +0.1% 13266 slabinfo.kmalloc-16.active_slabs
3393816 +0.1% 3396273 slabinfo.kmalloc-16.num_objs
13256 +0.1% 13266 slabinfo.kmalloc-16.num_slabs
5911 +0.8% 5957 slabinfo.kmalloc-192.active_objs
139.83 +1.0% 141.17 slabinfo.kmalloc-192.active_slabs
5911 +0.9% 5965 slabinfo.kmalloc-192.num_objs
139.83 +1.0% 141.17 slabinfo.kmalloc-192.num_slabs
2631045 +2.5% 2697164 slabinfo.kmalloc-1k.active_objs
82405 +2.5% 84463 slabinfo.kmalloc-1k.active_slabs
2636989 +2.5% 2702854 slabinfo.kmalloc-1k.num_objs
82405 +2.5% 84463 slabinfo.kmalloc-1k.num_slabs
119154 -0.7% 118370 slabinfo.kmalloc-256.active_objs
3784 -0.8% 3753 slabinfo.kmalloc-256.active_slabs
121109 -0.8% 120139 slabinfo.kmalloc-256.num_objs
3784 -0.8% 3753 slabinfo.kmalloc-256.num_slabs
3571 +0.4% 3585 slabinfo.kmalloc-2k.active_objs
228.00 +0.4% 228.83 slabinfo.kmalloc-2k.active_slabs
3654 +0.4% 3668 slabinfo.kmalloc-2k.num_objs
228.00 +0.4% 228.83 slabinfo.kmalloc-2k.num_slabs
153363 -0.1% 153278 slabinfo.kmalloc-32.active_objs
1198 -0.1% 1197 slabinfo.kmalloc-32.active_slabs
153403 -0.1% 153316 slabinfo.kmalloc-32.num_objs
1198 -0.1% 1197 slabinfo.kmalloc-32.num_slabs
2109 ? 2% +7.8% 2273 slabinfo.kmalloc-4k.active_objs
272.17 ? 2% +6.9% 290.83 slabinfo.kmalloc-4k.active_slabs
2179 ? 2% +6.9% 2329 slabinfo.kmalloc-4k.num_objs
272.17 ? 2% +6.9% 290.83 slabinfo.kmalloc-4k.num_slabs
195873 +0.3% 196460 slabinfo.kmalloc-512.active_objs
6424 +0.0% 6427 slabinfo.kmalloc-512.active_slabs
205584 +0.0% 205681 slabinfo.kmalloc-512.num_objs
6424 +0.0% 6427 slabinfo.kmalloc-512.num_slabs
53041 +0.4% 53275 slabinfo.kmalloc-64.active_objs
829.17 +0.5% 833.50 slabinfo.kmalloc-64.active_slabs
53110 +0.5% 53373 slabinfo.kmalloc-64.num_objs
829.17 +0.5% 833.50 slabinfo.kmalloc-64.num_slabs
58032 -1.3% 57252 slabinfo.kmalloc-8.active_objs
123.17 ? 3% -3.5% 118.83 ? 2% slabinfo.kmalloc-8.active_slabs
63379 ? 2% -3.6% 61086 ? 2% slabinfo.kmalloc-8.num_objs
123.17 ? 3% -3.5% 118.83 ? 2% slabinfo.kmalloc-8.num_slabs
1189 -1.1% 1175 slabinfo.kmalloc-8k.active_objs
301.50 -1.3% 297.50 slabinfo.kmalloc-8k.active_slabs
1207 -1.3% 1191 slabinfo.kmalloc-8k.num_objs
301.50 -1.3% 297.50 slabinfo.kmalloc-8k.num_slabs
12409 ? 2% +1.7% 12620 ? 4% slabinfo.kmalloc-96.active_objs
314.17 ? 2% +2.1% 320.83 ? 4% slabinfo.kmalloc-96.active_slabs
13219 ? 2% +2.0% 13488 ? 4% slabinfo.kmalloc-96.num_objs
314.17 ? 2% +2.1% 320.83 ? 4% slabinfo.kmalloc-96.num_slabs
794.67 ? 7% +7.4% 853.33 ? 9% slabinfo.kmalloc-cg-128.active_objs
24.83 ? 7% +7.4% 26.67 ? 9% slabinfo.kmalloc-cg-128.active_slabs
794.67 ? 7% +7.4% 853.33 ? 9% slabinfo.kmalloc-cg-128.num_objs
24.83 ? 7% +7.4% 26.67 ? 9% slabinfo.kmalloc-cg-128.num_slabs
4945 ? 7% -11.9% 4357 ? 9% slabinfo.kmalloc-cg-16.active_objs
19.00 ? 6% -13.2% 16.50 ? 9% slabinfo.kmalloc-cg-16.active_slabs
4945 ? 7% -11.9% 4357 ? 9% slabinfo.kmalloc-cg-16.num_objs
19.00 ? 6% -13.2% 16.50 ? 9% slabinfo.kmalloc-cg-16.num_slabs
4345 -0.6% 4319 slabinfo.kmalloc-cg-192.active_objs
102.83 -0.5% 102.33 slabinfo.kmalloc-cg-192.active_slabs
4345 -0.6% 4319 slabinfo.kmalloc-cg-192.num_objs
102.83 -0.5% 102.33 slabinfo.kmalloc-cg-192.num_slabs
3461 ? 4% +0.6% 3481 ? 2% slabinfo.kmalloc-cg-1k.active_objs
107.50 ? 4% +1.1% 108.67 ? 3% slabinfo.kmalloc-cg-1k.active_slabs
3461 ? 4% +0.6% 3482 ? 2% slabinfo.kmalloc-cg-1k.num_objs
107.50 ? 4% +1.1% 108.67 ? 3% slabinfo.kmalloc-cg-1k.num_slabs
538.67 ? 4% +2.0% 549.33 ? 5% slabinfo.kmalloc-cg-256.active_objs
16.83 ? 4% +2.0% 17.17 ? 5% slabinfo.kmalloc-cg-256.active_slabs
538.67 ? 4% +2.0% 549.33 ? 5% slabinfo.kmalloc-cg-256.num_objs
16.83 ? 4% +2.0% 17.17 ? 5% slabinfo.kmalloc-cg-256.num_slabs
532.50 ? 7% +5.7% 563.00 ? 5% slabinfo.kmalloc-cg-2k.active_objs
32.83 ? 7% +5.6% 34.67 ? 5% slabinfo.kmalloc-cg-2k.active_slabs
532.50 ? 7% +5.7% 563.00 ? 5% slabinfo.kmalloc-cg-2k.num_objs
32.83 ? 7% +5.6% 34.67 ? 5% slabinfo.kmalloc-cg-2k.num_slabs
12462 +0.0% 12468 slabinfo.kmalloc-cg-32.active_objs
96.83 +0.0% 96.83 slabinfo.kmalloc-cg-32.active_slabs
12462 +0.0% 12468 slabinfo.kmalloc-cg-32.num_objs
96.83 +0.0% 96.83 slabinfo.kmalloc-cg-32.num_slabs
1022 -0.8% 1014 slabinfo.kmalloc-cg-4k.active_objs
134.83 ? 2% +1.2% 136.50 slabinfo.kmalloc-cg-4k.active_slabs
1082 +1.4% 1097 slabinfo.kmalloc-cg-4k.num_objs
134.83 ? 2% +1.2% 136.50 slabinfo.kmalloc-cg-4k.num_slabs
3141 +0.7% 3162 slabinfo.kmalloc-cg-512.active_objs
98.17 +0.7% 98.83 slabinfo.kmalloc-cg-512.active_slabs
3141 +0.7% 3162 slabinfo.kmalloc-cg-512.num_objs
98.17 +0.7% 98.83 slabinfo.kmalloc-cg-512.num_slabs
1971 ? 8% -1.2% 1947 ? 10% slabinfo.kmalloc-cg-64.active_objs
30.17 ? 7% -1.7% 29.67 ? 10% slabinfo.kmalloc-cg-64.active_slabs
1971 ? 8% -1.2% 1947 ? 10% slabinfo.kmalloc-cg-64.num_objs
30.17 ? 7% -1.7% 29.67 ? 10% slabinfo.kmalloc-cg-64.num_slabs
49749 -0.2% 49664 slabinfo.kmalloc-cg-8.active_objs
97.17 -0.2% 97.00 slabinfo.kmalloc-cg-8.active_slabs
49749 -0.2% 49664 slabinfo.kmalloc-cg-8.num_objs
97.17 -0.2% 97.00 slabinfo.kmalloc-cg-8.num_slabs
51.67 ? 2% -1.0% 51.17 slabinfo.kmalloc-cg-8k.active_objs
12.67 ? 3% -1.3% 12.50 ? 4% slabinfo.kmalloc-cg-8k.active_slabs
51.67 ? 2% -1.0% 51.17 slabinfo.kmalloc-cg-8k.num_objs
12.67 ? 3% -1.3% 12.50 ? 4% slabinfo.kmalloc-cg-8k.num_slabs
846.00 ? 5% +0.8% 853.00 ? 3% slabinfo.kmalloc-cg-96.active_objs
19.17 ? 5% +0.9% 19.33 ? 3% slabinfo.kmalloc-cg-96.active_slabs
846.00 ? 5% +0.8% 853.00 ? 3% slabinfo.kmalloc-cg-96.num_objs
19.17 ? 5% +0.9% 19.33 ? 3% slabinfo.kmalloc-cg-96.num_slabs
165.33 ? 17% +6.5% 176.00 ? 17% slabinfo.kmalloc-rcl-128.active_objs
5.17 ? 17% +6.5% 5.50 ? 17% slabinfo.kmalloc-rcl-128.active_slabs
165.33 ? 17% +6.5% 176.00 ? 17% slabinfo.kmalloc-rcl-128.num_objs
5.17 ? 17% +6.5% 5.50 ? 17% slabinfo.kmalloc-rcl-128.num_slabs
154.00 ? 25% +4.5% 161.00 ? 23% slabinfo.kmalloc-rcl-192.active_objs
3.67 ? 25% +4.5% 3.83 ? 23% slabinfo.kmalloc-rcl-192.active_slabs
154.00 ? 25% +4.5% 161.00 ? 23% slabinfo.kmalloc-rcl-192.num_objs
3.67 ? 25% +4.5% 3.83 ? 23% slabinfo.kmalloc-rcl-192.num_slabs
3144877 +0.4% 3158806 slabinfo.kmalloc-rcl-64.active_objs
49154 +0.4% 49373 slabinfo.kmalloc-rcl-64.active_slabs
3145909 +0.4% 3159922 slabinfo.kmalloc-rcl-64.num_objs
49154 +0.4% 49373 slabinfo.kmalloc-rcl-64.num_slabs
1894 ? 5% +2.0% 1933 ? 9% slabinfo.kmalloc-rcl-96.active_objs
45.00 ? 6% +1.9% 45.83 ? 9% slabinfo.kmalloc-rcl-96.active_slabs
1894 ? 5% +2.0% 1933 ? 9% slabinfo.kmalloc-rcl-96.num_objs
45.00 ? 6% +1.9% 45.83 ? 9% slabinfo.kmalloc-rcl-96.num_slabs
661.33 ? 9% +7.3% 709.33 ? 4% slabinfo.kmem_cache.active_objs
20.67 ? 9% +7.3% 22.17 ? 4% slabinfo.kmem_cache.active_slabs
661.33 ? 9% +7.3% 709.33 ? 4% slabinfo.kmem_cache.num_objs
20.67 ? 9% +7.3% 22.17 ? 4% slabinfo.kmem_cache.num_slabs
1387 ? 8% +6.9% 1483 ? 4% slabinfo.kmem_cache_node.active_objs
22.67 ? 8% +6.6% 24.17 ? 4% slabinfo.kmem_cache_node.active_slabs
1450 ? 8% +6.6% 1546 ? 4% slabinfo.kmem_cache_node.num_objs
22.67 ? 8% +6.6% 24.17 ? 4% slabinfo.kmem_cache_node.num_slabs
18853 ? 2% -1.6% 18551 slabinfo.lsm_file_cache.active_objs
113.50 ? 2% -1.2% 112.17 slabinfo.lsm_file_cache.active_slabs
19423 ? 2% -1.4% 19156 slabinfo.lsm_file_cache.num_objs
113.50 ? 2% -1.2% 112.17 slabinfo.lsm_file_cache.num_slabs
7146 ? 3% -2.6% 6960 ? 2% slabinfo.maple_node.active_objs
223.67 ? 3% -2.5% 218.17 ? 3% slabinfo.maple_node.active_slabs
7174 ? 3% -2.5% 6992 ? 3% slabinfo.maple_node.num_objs
223.67 ? 3% -2.5% 218.17 ? 3% slabinfo.maple_node.num_slabs
2759 -0.8% 2738 slabinfo.mm_struct.active_objs
105.67 -0.8% 104.83 slabinfo.mm_struct.active_slabs
2760 -0.8% 2738 slabinfo.mm_struct.num_objs
105.67 -0.8% 104.83 slabinfo.mm_struct.num_slabs
1241 -4.8% 1181 ? 10% slabinfo.mnt_cache.active_objs
24.33 -4.8% 23.17 ? 10% slabinfo.mnt_cache.active_slabs
1241 -4.8% 1181 ? 10% slabinfo.mnt_cache.num_objs
24.33 -4.8% 23.17 ? 10% slabinfo.mnt_cache.num_slabs
34.00 +0.0% 34.00 slabinfo.mqueue_inode_cache.active_objs
1.00 +0.0% 1.00 slabinfo.mqueue_inode_cache.active_slabs
34.00 +0.0% 34.00 slabinfo.mqueue_inode_cache.num_objs
1.00 +0.0% 1.00 slabinfo.mqueue_inode_cache.num_slabs
788.00 ? 4% -2.4% 769.33 slabinfo.names_cache.active_objs
98.83 ? 5% -2.7% 96.17 slabinfo.names_cache.active_slabs
790.67 ? 5% -2.7% 769.33 slabinfo.names_cache.num_objs
98.83 ? 5% -2.7% 96.17 slabinfo.names_cache.num_slabs
7.00 +0.0% 7.00 slabinfo.net_namespace.active_objs
1.00 +0.0% 1.00 slabinfo.net_namespace.active_slabs
7.00 +0.0% 7.00 slabinfo.net_namespace.num_objs
1.00 +0.0% 1.00 slabinfo.net_namespace.num_slabs
46.00 +0.0% 46.00 slabinfo.nfs_commit_data.active_objs
1.00 +0.0% 1.00 slabinfo.nfs_commit_data.active_slabs
46.00 +0.0% 46.00 slabinfo.nfs_commit_data.num_objs
1.00 +0.0% 1.00 slabinfo.nfs_commit_data.num_slabs
36.00 +0.0% 36.00 slabinfo.nfs_read_data.active_objs
1.00 +0.0% 1.00 slabinfo.nfs_read_data.active_slabs
36.00 +0.0% 36.00 slabinfo.nfs_read_data.num_objs
1.00 +0.0% 1.00 slabinfo.nfs_read_data.num_slabs
290.50 ? 16% +12.4% 326.67 ? 10% slabinfo.nsproxy.active_objs
4.50 ? 21% +14.8% 5.17 ? 13% slabinfo.nsproxy.active_slabs
290.50 ? 16% +12.4% 326.67 ? 10% slabinfo.nsproxy.num_objs
4.50 ? 21% +14.8% 5.17 ? 13% slabinfo.nsproxy.num_slabs
12994 -2.0% 12730 ? 2% slabinfo.numa_policy.active_objs
223.83 ? 2% -2.4% 218.50 slabinfo.numa_policy.active_slabs
13463 -2.4% 13146 slabinfo.numa_policy.num_objs
223.83 ? 2% -2.4% 218.50 slabinfo.numa_policy.num_slabs
9717 +0.2% 9733 slabinfo.pde_opener.active_objs
94.50 +0.4% 94.83 slabinfo.pde_opener.active_slabs
9717 +0.2% 9733 slabinfo.pde_opener.num_objs
94.50 +0.4% 94.83 slabinfo.pde_opener.num_slabs
3649 -1.6% 3590 slabinfo.perf_event.active_objs
145.83 -1.6% 143.50 slabinfo.perf_event.active_slabs
3807 -1.5% 3748 slabinfo.perf_event.num_objs
145.83 -1.6% 143.50 slabinfo.perf_event.num_slabs
5038 +1.3% 5104 ? 4% slabinfo.pid.active_objs
157.00 +1.6% 159.50 ? 3% slabinfo.pid.active_slabs
5038 +1.6% 5116 ? 3% slabinfo.pid.num_objs
157.00 +1.6% 159.50 ? 3% slabinfo.pid.num_slabs
4234 -1.2% 4184 ? 3% slabinfo.proc_dir_entry.active_objs
100.83 -1.2% 99.67 ? 3% slabinfo.proc_dir_entry.active_slabs
4235 -1.2% 4186 ? 3% slabinfo.proc_dir_entry.num_objs
100.83 -1.2% 99.67 ? 3% slabinfo.proc_dir_entry.num_slabs
11561 -0.1% 11547 slabinfo.proc_inode_cache.active_objs
255.00 -0.4% 254.00 slabinfo.proc_inode_cache.active_slabs
11753 -0.5% 11699 slabinfo.proc_inode_cache.num_objs
255.00 -0.4% 254.00 slabinfo.proc_inode_cache.num_slabs
3393485 +0.2% 3400050 slabinfo.radix_tree_node.active_objs
61218 +0.1% 61307 slabinfo.radix_tree_node.active_slabs
3428248 +0.1% 3433270 slabinfo.radix_tree_node.num_objs
61218 +0.1% 61307 slabinfo.radix_tree_node.num_slabs
271.33 ? 6% +2.3% 277.50 ? 10% slabinfo.request_queue.active_objs
7.33 ? 6% +2.3% 7.50 ? 10% slabinfo.request_queue.active_slabs
271.33 ? 6% +2.3% 277.50 ? 10% slabinfo.request_queue.num_objs
7.33 ? 6% +2.3% 7.50 ? 10% slabinfo.request_queue.num_slabs
46.00 +0.0% 46.00 slabinfo.rpc_inode_cache.active_objs
1.00 +0.0% 1.00 slabinfo.rpc_inode_cache.active_slabs
46.00 +0.0% 46.00 slabinfo.rpc_inode_cache.num_objs
1.00 +0.0% 1.00 slabinfo.rpc_inode_cache.num_slabs
922.67 +0.0% 922.67 slabinfo.scsi_sense_cache.active_objs
28.83 +0.0% 28.83 slabinfo.scsi_sense_cache.active_slabs
922.67 +0.0% 922.67 slabinfo.scsi_sense_cache.num_objs
28.83 +0.0% 28.83 slabinfo.scsi_sense_cache.num_slabs
3433 -0.2% 3427 slabinfo.seq_file.active_objs
100.00 -0.2% 99.83 slabinfo.seq_file.active_slabs
3433 -0.2% 3427 slabinfo.seq_file.num_objs
100.00 -0.2% 99.83 slabinfo.seq_file.num_slabs
5128 +0.6% 5161 slabinfo.shmem_inode_cache.active_objs
121.67 +0.8% 122.67 slabinfo.shmem_inode_cache.active_slabs
5128 +0.6% 5161 slabinfo.shmem_inode_cache.num_objs
121.67 +0.8% 122.67 slabinfo.shmem_inode_cache.num_slabs
2473 +0.1% 2475 slabinfo.sighand_cache.active_objs
164.17 +0.5% 165.00 slabinfo.sighand_cache.active_slabs
2473 +0.3% 2481 slabinfo.sighand_cache.num_objs
164.17 +0.5% 165.00 slabinfo.sighand_cache.num_slabs
4437 ? 2% +0.9% 4479 ? 3% slabinfo.signal_cache.active_objs
159.33 ? 2% +1.0% 161.00 ? 3% slabinfo.signal_cache.active_slabs
4474 ? 2% +1.1% 4522 ? 3% slabinfo.signal_cache.num_objs
159.33 ? 2% +1.0% 161.00 ? 3% slabinfo.signal_cache.num_slabs
5608 -1.3% 5537 ? 2% slabinfo.sigqueue.active_objs
109.17 -1.2% 107.83 ? 2% slabinfo.sigqueue.active_slabs
5608 -1.3% 5537 ? 2% slabinfo.sigqueue.num_objs
109.17 -1.2% 107.83 ? 2% slabinfo.sigqueue.num_slabs
336.00 ? 18% -14.3% 288.00 ? 15% slabinfo.skbuff_fclone_cache.active_objs
10.50 ? 18% -14.3% 9.00 ? 15% slabinfo.skbuff_fclone_cache.active_slabs
336.00 ? 18% -14.3% 288.00 ? 15% slabinfo.skbuff_fclone_cache.num_objs
10.50 ? 18% -14.3% 9.00 ? 15% slabinfo.skbuff_fclone_cache.num_slabs
4341 ? 6% -0.4% 4325 ? 3% slabinfo.skbuff_head_cache.active_objs
137.83 ? 7% -0.5% 137.17 ? 3% slabinfo.skbuff_head_cache.active_slabs
4410 ? 7% -0.5% 4389 ? 3% slabinfo.skbuff_head_cache.num_objs
137.83 ? 7% -0.5% 137.17 ? 3% slabinfo.skbuff_head_cache.num_slabs
3216 ? 5% -1.6% 3166 ? 7% slabinfo.sock_inode_cache.active_objs
82.00 ? 5% -1.4% 80.83 ? 7% slabinfo.sock_inode_cache.active_slabs
3216 ? 5% -1.6% 3166 ? 7% slabinfo.sock_inode_cache.num_objs
82.00 ? 5% -1.4% 80.83 ? 7% slabinfo.sock_inode_cache.num_slabs
1467 ? 4% -12.3% 1286 ? 11% slabinfo.task_group.active_objs
28.00 ? 4% -11.3% 24.83 ? 12% slabinfo.task_group.active_slabs
1467 ? 4% -12.3% 1286 ? 11% slabinfo.task_group.num_objs
28.00 ? 4% -11.3% 24.83 ? 12% slabinfo.task_group.num_slabs
1778 +0.2% 1781 slabinfo.task_struct.active_objs
1780 +0.2% 1783 slabinfo.task_struct.active_slabs
1780 +0.2% 1783 slabinfo.task_struct.num_objs
1780 +0.2% 1783 slabinfo.task_struct.num_slabs
346.33 ? 5% -0.1% 346.00 ? 10% slabinfo.taskstats.active_objs
8.33 ? 5% +0.0% 8.33 ? 11% slabinfo.taskstats.active_slabs
346.33 ? 5% -0.1% 346.00 ? 10% slabinfo.taskstats.num_objs
8.33 ? 5% +0.0% 8.33 ? 11% slabinfo.taskstats.num_slabs
2875 +0.3% 2882 slabinfo.trace_event_file.active_objs
62.50 +0.3% 62.67 slabinfo.trace_event_file.active_slabs
2875 +0.3% 2882 slabinfo.trace_event_file.num_objs
62.50 +0.3% 62.67 slabinfo.trace_event_file.num_slabs
60.00 +0.0% 60.00 slabinfo.tw_sock_TCP.active_objs
1.00 +0.0% 1.00 slabinfo.tw_sock_TCP.active_slabs
60.00 +0.0% 60.00 slabinfo.tw_sock_TCP.num_objs
1.00 +0.0% 1.00 slabinfo.tw_sock_TCP.num_slabs
67.83 ? 20% +9.1% 74.00 slabinfo.uts_namespace.active_objs
1.83 ? 20% +9.1% 2.00 slabinfo.uts_namespace.active_slabs
67.83 ? 20% +9.1% 74.00 slabinfo.uts_namespace.num_objs
1.83 ? 20% +9.1% 2.00 slabinfo.uts_namespace.num_slabs
21786 -0.1% 21765 ? 2% slabinfo.vm_area_struct.active_objs
415.00 -0.4% 413.33 ? 2% slabinfo.vm_area_struct.active_slabs
22007 -0.3% 21931 ? 2% slabinfo.vm_area_struct.num_objs
415.00 -0.4% 413.33 ? 2% slabinfo.vm_area_struct.num_slabs
10943 ? 2% +0.9% 11043 ? 2% slabinfo.vmap_area.active_objs
175.00 ? 2% +0.7% 176.17 ? 2% slabinfo.vmap_area.active_slabs
11226 ? 2% +0.7% 11303 ? 2% slabinfo.vmap_area.num_objs
175.00 ? 2% +0.7% 176.17 ? 2% slabinfo.vmap_area.num_slabs
4134 +0.2% 4144 slabinfo.xfs_bmbt_cur.active_objs
83.67 +0.8% 84.33 slabinfo.xfs_bmbt_cur.active_slabs
4134 +0.2% 4144 slabinfo.xfs_bmbt_cur.num_objs
83.67 +0.8% 84.33 slabinfo.xfs_bmbt_cur.num_slabs
168642 -0.1% 168439 slabinfo.xfs_buf.active_objs
4014 -0.1% 4010 slabinfo.xfs_buf.active_slabs
168643 -0.1% 168440 slabinfo.xfs_buf.num_objs
4014 -0.1% 4010 slabinfo.xfs_buf.num_slabs
3105 +0.9% 3134 slabinfo.xfs_da_state.active_objs
91.00 +0.9% 91.83 slabinfo.xfs_da_state.active_slabs
3105 +0.9% 3134 slabinfo.xfs_da_state.num_objs
91.00 +0.9% 91.83 slabinfo.xfs_da_state.num_slabs
3356447 +0.1% 3359177 slabinfo.xfs_ili.active_objs
83954 +0.1% 84023 slabinfo.xfs_ili.active_slabs
3358213 +0.1% 3360952 slabinfo.xfs_ili.num_objs
83954 +0.1% 84023 slabinfo.xfs_ili.num_slabs
3356044 +0.1% 3358777 slabinfo.xfs_inode.active_objs
104920 +0.1% 105005 slabinfo.xfs_inode.active_slabs
3357459 +0.1% 3360203 slabinfo.xfs_inode.num_objs
104920 +0.1% 105005 slabinfo.xfs_inode.num_slabs
2.58 ?188% -2.1 0.49 ? 49% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.mwait_idle_with_hints.intel_idle_irq.cpuidle_enter_state.cpuidle_enter
2.78 ?113% -1.6 1.14 ? 15% perf-profile.calltrace.cycles-pp.intel_idle_irq.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
14.01 ? 3% -1.4 12.65 ? 2% perf-profile.calltrace.cycles-pp.xfs_log_force_seq.xfs_file_fsync.__x64_sys_fsync.do_syscall_64.entry_SYSCALL_64_after_hwframe
55.37 -1.1 54.29 perf-profile.calltrace.cycles-pp.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
58.43 -1.0 57.39 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
58.42 -1.0 57.39 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
58.39 -1.0 57.36 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
53.38 ? 2% -1.0 52.42 perf-profile.calltrace.cycles-pp.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary
9.48 ? 4% -0.7 8.77 ? 2% perf-profile.calltrace.cycles-pp.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync.__x64_sys_fsync.do_syscall_64
58.80 -0.7 58.15 perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
1.44 ? 64% -0.6 0.78 ? 21% perf-profile.calltrace.cycles-pp.mwait_idle_with_hints.intel_idle_irq.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
4.22 ? 7% -0.6 3.57 ? 5% perf-profile.calltrace.cycles-pp.xlog_wait_on_iclog.xfs_log_force_seq.xfs_file_fsync.__x64_sys_fsync.do_syscall_64
7.47 ? 4% -0.6 6.83 ? 3% perf-profile.calltrace.cycles-pp.open64
7.42 ? 4% -0.6 6.78 ? 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
7.40 ? 4% -0.6 6.76 ? 3% perf-profile.calltrace.cycles-pp.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
7.42 ? 4% -0.6 6.79 ? 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.open64
7.39 ? 4% -0.6 6.76 ? 3% perf-profile.calltrace.cycles-pp.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
7.32 ? 4% -0.6 6.70 ? 3% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.30 ? 4% -0.6 6.69 ? 3% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.__x64_sys_openat.do_syscall_64
6.98 ? 4% -0.6 6.38 ? 3% perf-profile.calltrace.cycles-pp.open_last_lookups.path_openat.do_filp_open.do_sys_openat2.__x64_sys_openat
0.59 ? 8% -0.6 0.00 perf-profile.calltrace.cycles-pp.dequeue_entity.dequeue_task_fair.__schedule.schedule.io_schedule
6.90 ? 4% -0.6 6.31 ? 3% perf-profile.calltrace.cycles-pp.lookup_open.open_last_lookups.path_openat.do_filp_open.do_sys_openat2
1.62 ? 27% -0.6 1.05 ? 25% perf-profile.calltrace.cycles-pp.update_curr.dequeue_entity.dequeue_task_fair.__schedule.schedule
3.30 ? 8% -0.5 2.77 ? 7% perf-profile.calltrace.cycles-pp.remove_wait_queue.xlog_wait_on_iclog.xfs_log_force_seq.xfs_file_fsync.__x64_sys_fsync
3.28 ? 8% -0.5 2.74 ? 7% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.remove_wait_queue.xlog_wait_on_iclog.xfs_log_force_seq.xfs_file_fsync
3.25 ? 8% -0.5 2.72 ? 7% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.remove_wait_queue.xlog_wait_on_iclog.xfs_log_force_seq
7.83 ? 4% -0.5 7.34 ? 2% perf-profile.calltrace.cycles-pp.xlog_cil_push_now.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync.__x64_sys_fsync
6.10 ? 4% -0.5 5.63 ? 3% perf-profile.calltrace.cycles-pp.xfs_generic_create.lookup_open.open_last_lookups.path_openat.do_filp_open
7.54 ? 4% -0.5 7.07 ? 2% perf-profile.calltrace.cycles-pp.__flush_workqueue.xlog_cil_push_now.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync
52.87 ? 2% -0.5 52.41 ? 2% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry
5.98 ? 4% -0.5 5.52 ? 4% perf-profile.calltrace.cycles-pp.xfs_create.xfs_generic_create.lookup_open.open_last_lookups.path_openat
0.61 ? 8% -0.4 0.17 ?141% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.io_schedule.folio_wait_bit_common
1.19 ? 33% -0.4 0.76 ? 24% perf-profile.calltrace.cycles-pp.perf_trace_sched_stat_runtime.update_curr.dequeue_entity.dequeue_task_fair.__schedule
1.17 ? 34% -0.4 0.74 ? 24% perf-profile.calltrace.cycles-pp.perf_tp_event.perf_trace_sched_stat_runtime.update_curr.dequeue_entity.dequeue_task_fair
0.48 ? 45% -0.4 0.09 ?223% perf-profile.calltrace.cycles-pp.xfs_vn_lookup.lookup_open.open_last_lookups.path_openat.do_filp_open
3.34 ? 9% -0.4 2.97 ? 9% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.37 ? 71% -0.4 0.00 perf-profile.calltrace.cycles-pp.xfs_lookup.xfs_vn_lookup.lookup_open.open_last_lookups.path_openat
3.25 ? 9% -0.4 2.88 ? 9% perf-profile.calltrace.cycles-pp.xfs_file_buffered_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.45 ? 45% -0.4 0.08 ?223% perf-profile.calltrace.cycles-pp.xfs_buf_unlock.xfs_buf_item_release.xlog_cil_commit.__xfs_trans_commit.xfs_create
3.33 ? 9% -0.4 2.96 ? 8% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
3.34 ? 9% -0.4 2.98 ? 8% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
3.36 ? 9% -0.4 3.00 ? 8% perf-profile.calltrace.cycles-pp.write
3.31 ? 9% -0.4 2.95 ? 8% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
3.16 ? 9% -0.4 2.80 ? 9% perf-profile.calltrace.cycles-pp.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write.ksys_write.do_syscall_64
2.97 ? 10% -0.3 2.63 ? 9% perf-profile.calltrace.cycles-pp.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write.ksys_write
5.54 ? 10% -0.3 5.20 ? 5% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
0.57 ? 9% -0.3 0.26 ?100% perf-profile.calltrace.cycles-pp.xfs_buf_item_release.xlog_cil_commit.__xfs_trans_commit.xfs_create.xfs_generic_create
5.78 ? 3% -0.3 5.48 ? 2% perf-profile.calltrace.cycles-pp.ret_from_fork
5.78 ? 3% -0.3 5.48 ? 2% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.47 ? 45% -0.3 0.17 ?141% perf-profile.calltrace.cycles-pp.xfs_end_bio.blk_mq_end_request_batch.nvme_irq.__handle_irq_event_percpu.handle_irq_event
0.57 ? 10% -0.3 0.28 ?100% perf-profile.calltrace.cycles-pp.perf_tp_event.perf_trace_sched_switch.__schedule.schedule.schedule_timeout
0.28 ?100% -0.3 0.00 perf-profile.calltrace.cycles-pp.queue_work_on.xfs_end_bio.blk_mq_end_request_batch.nvme_irq.__handle_irq_event_percpu
0.47 ? 82% -0.3 0.20 ?141% perf-profile.calltrace.cycles-pp.perf_session__process_user_event.reader__read_event.perf_session__process_events.record__finish_output.__cmd_record
5.52 ? 2% -0.3 5.25 ? 2% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
0.99 ? 43% -0.3 0.73 ? 25% perf-profile.calltrace.cycles-pp.__perf_event_overflow.perf_tp_event.perf_trace_sched_stat_runtime.update_curr.dequeue_entity
0.26 ?223% -0.3 0.00 perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.mwait_idle_with_hints.intel_idle_irq.cpuidle_enter_state
0.98 ? 43% -0.3 0.73 ? 25% perf-profile.calltrace.cycles-pp.perf_event_output_forward.__perf_event_overflow.perf_tp_event.perf_trace_sched_stat_runtime.update_curr
1.87 ? 6% -0.3 1.61 ? 3% perf-profile.calltrace.cycles-pp.__wait_for_common.__flush_workqueue.xlog_cil_push_now.xlog_cil_force_seq.xfs_log_force_seq
1.81 ? 7% -0.3 1.56 ? 4% perf-profile.calltrace.cycles-pp.schedule_timeout.__wait_for_common.__flush_workqueue.xlog_cil_push_now.xlog_cil_force_seq
1.47 ? 18% -0.2 1.22 ? 15% perf-profile.calltrace.cycles-pp.iomap_write_end.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write
1.34 ? 20% -0.2 1.10 ? 17% perf-profile.calltrace.cycles-pp.filemap_dirty_folio.iomap_write_end.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write
1.79 ? 7% -0.2 1.55 ? 4% perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.__wait_for_common.__flush_workqueue.xlog_cil_push_now
1.78 ? 7% -0.2 1.54 ? 4% perf-profile.calltrace.cycles-pp.__schedule.schedule.schedule_timeout.__wait_for_common.__flush_workqueue
1.48 ? 18% -0.2 1.25 ? 17% perf-profile.calltrace.cycles-pp.perf_callchain.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow.perf_tp_event
1.00 ? 64% -0.2 0.77 ? 11% perf-profile.calltrace.cycles-pp.process_simple.reader__read_event.perf_session__process_events.record__finish_output.__cmd_record
1.47 ? 18% -0.2 1.25 ? 16% perf-profile.calltrace.cycles-pp.get_perf_callchain.perf_callchain.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow
1.19 ? 23% -0.2 0.96 ? 19% perf-profile.calltrace.cycles-pp._raw_spin_lock.locked_inode_to_wb_and_lock_list.__mark_inode_dirty.filemap_dirty_folio.iomap_write_end
1.17 ? 24% -0.2 0.94 ? 19% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.locked_inode_to_wb_and_lock_list.__mark_inode_dirty.filemap_dirty_folio
1.19 ? 23% -0.2 0.97 ? 18% perf-profile.calltrace.cycles-pp.locked_inode_to_wb_and_lock_list.__mark_inode_dirty.filemap_dirty_folio.iomap_write_end.iomap_write_iter
0.22 ?223% -0.2 0.00 perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
2.05 ? 4% -0.2 1.82 ? 3% perf-profile.calltrace.cycles-pp.__xfs_trans_commit.xfs_create.xfs_generic_create.lookup_open.open_last_lookups
1.22 ? 22% -0.2 1.00 ? 18% perf-profile.calltrace.cycles-pp.__mark_inode_dirty.filemap_dirty_folio.iomap_write_end.iomap_write_iter.iomap_file_buffered_write
1.93 ? 4% -0.2 1.72 ? 4% perf-profile.calltrace.cycles-pp.xlog_cil_commit.__xfs_trans_commit.xfs_create.xfs_generic_create.lookup_open
0.95 ? 63% -0.2 0.74 ? 12% perf-profile.calltrace.cycles-pp.ordered_events__queue.process_simple.reader__read_event.perf_session__process_events.record__finish_output
0.48 ? 46% -0.2 0.27 ?100% perf-profile.calltrace.cycles-pp.__perf_event_overflow.perf_tp_event.perf_trace_sched_switch.__schedule.schedule
0.94 ? 63% -0.2 0.74 ? 12% perf-profile.calltrace.cycles-pp.queue_event.ordered_events__queue.process_simple.reader__read_event.perf_session__process_events
1.28 ? 5% -0.2 1.08 ? 3% perf-profile.calltrace.cycles-pp.__filemap_fdatawait_range.file_write_and_wait_range.xfs_file_fsync.__x64_sys_fsync.do_syscall_64
0.20 ?141% -0.2 0.00 perf-profile.calltrace.cycles-pp.xfs_dir_lookup.xfs_lookup.xfs_vn_lookup.lookup_open.open_last_lookups
1.20 ? 5% -0.2 1.00 ? 3% perf-profile.calltrace.cycles-pp.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range.file_write_and_wait_range.xfs_file_fsync
0.19 ?141% -0.2 0.00 perf-profile.calltrace.cycles-pp.__queue_work.queue_work_on.xfs_end_bio.blk_mq_end_request_batch.nvme_irq
0.36 ? 71% -0.2 0.17 ?141% perf-profile.calltrace.cycles-pp.up.xfs_buf_unlock.xfs_buf_item_release.xlog_cil_commit.__xfs_trans_commit
0.39 ?110% -0.2 0.20 ?141% perf-profile.calltrace.cycles-pp.__ordered_events__flush.perf_session__process_user_event.reader__read_event.perf_session__process_events.record__finish_output
0.19 ?141% -0.2 0.00 perf-profile.calltrace.cycles-pp.try_to_wake_up.__queue_work.queue_work_on.xfs_end_bio.blk_mq_end_request_batch
1.21 ? 5% -0.2 1.02 ? 3% perf-profile.calltrace.cycles-pp.folio_wait_writeback.__filemap_fdatawait_range.file_write_and_wait_range.xfs_file_fsync.__x64_sys_fsync
1.13 ? 5% -0.2 0.94 ? 3% perf-profile.calltrace.cycles-pp.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range.file_write_and_wait_range
1.12 ? 6% -0.2 0.94 ? 3% perf-profile.calltrace.cycles-pp.schedule.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
1.12 ? 6% -0.2 0.94 ? 3% perf-profile.calltrace.cycles-pp.__schedule.schedule.io_schedule.folio_wait_bit_common.folio_wait_writeback
0.18 ?141% -0.2 0.00 perf-profile.calltrace.cycles-pp.iomap_write_begin.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write
0.18 ?223% -0.2 0.00 perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.mwait_idle_with_hints.intel_idle_irq
0.56 ? 8% -0.2 0.39 ? 70% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
4.63 ? 3% -0.2 4.46 ? 2% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
0.17 ?223% -0.2 0.00 perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.mwait_idle_with_hints
0.17 ?141% -0.2 0.00 perf-profile.calltrace.cycles-pp.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.cpuidle_idle_call.do_idle
1.38 ? 28% -0.2 1.22 ? 25% perf-profile.calltrace.cycles-pp.reader__read_event.perf_session__process_events.record__finish_output.__cmd_record
4.86 ? 7% -0.2 4.70 ? 5% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
1.58 ? 16% -0.2 1.42 ? 18% perf-profile.calltrace.cycles-pp.perf_event_output_forward.__perf_event_overflow.perf_tp_event.perf_trace_sched_switch.__schedule
1.38 ? 28% -0.2 1.22 ? 25% perf-profile.calltrace.cycles-pp.__cmd_record
1.38 ? 28% -0.2 1.22 ? 25% perf-profile.calltrace.cycles-pp.record__finish_output.__cmd_record
1.38 ? 28% -0.2 1.22 ? 25% perf-profile.calltrace.cycles-pp.perf_session__process_events.record__finish_output.__cmd_record
2.26 ? 5% -0.1 2.12 ? 5% perf-profile.calltrace.cycles-pp.xfs_dialloc.xfs_create.xfs_generic_create.lookup_open.open_last_lookups
1.10 ? 25% -0.1 0.95 ? 17% perf-profile.calltrace.cycles-pp.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow.perf_tp_event.perf_trace_sched_switch
1.06 ? 8% -0.1 0.91 ? 6% perf-profile.calltrace.cycles-pp.remove_wait_queue.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync.__x64_sys_fsync
1.05 ? 8% -0.1 0.90 ? 6% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.remove_wait_queue.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync
1.03 ? 8% -0.1 0.88 ? 6% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.remove_wait_queue.xlog_cil_force_seq.xfs_log_force_seq
0.58 ? 10% -0.1 0.44 ? 45% perf-profile.calltrace.cycles-pp.perf_trace_sched_switch.__schedule.schedule.schedule_timeout.__wait_for_common
4.56 ? 6% -0.1 4.43 ? 2% perf-profile.calltrace.cycles-pp.__mutex_lock.__flush_workqueue.xlog_cil_push_now.xlog_cil_force_seq.xfs_log_force_seq
0.96 ? 28% -0.1 0.83 ? 28% perf-profile.calltrace.cycles-pp.reader__read_event.perf_session__process_events.record__finish_output.__cmd_record.cmd_record
0.96 ? 28% -0.1 0.84 ? 27% perf-profile.calltrace.cycles-pp.record__finish_output.__cmd_record.cmd_record.cmd_sched.run_builtin
0.96 ? 28% -0.1 0.84 ? 27% perf-profile.calltrace.cycles-pp.perf_session__process_events.record__finish_output.__cmd_record.cmd_record.cmd_sched
0.89 ? 3% -0.1 0.78 ? 4% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.schedule_timeout.__wait_for_common
1.48 ? 7% -0.1 1.38 ? 5% perf-profile.calltrace.cycles-pp.xfs_end_io.process_one_work.worker_thread.kthread.ret_from_fork
1.47 ? 7% -0.1 1.37 ? 5% perf-profile.calltrace.cycles-pp.xfs_end_ioend.xfs_end_io.process_one_work.worker_thread.kthread
1.19 ? 6% -0.1 1.09 ? 3% perf-profile.calltrace.cycles-pp.xlog_ioend_work.process_one_work.worker_thread.kthread.ret_from_fork
0.81 ? 5% -0.1 0.71 ? 2% perf-profile.calltrace.cycles-pp.__schedule.schedule.worker_thread.kthread.ret_from_fork
0.85 ? 4% -0.1 0.75 ? 7% perf-profile.calltrace.cycles-pp.xfs_dialloc_ag.xfs_dialloc.xfs_create.xfs_generic_create.lookup_open
0.28 ?147% -0.1 0.18 ?142% perf-profile.calltrace.cycles-pp.perf_session__deliver_event.__ordered_events__flush.perf_session__process_user_event.reader__read_event.perf_session__process_events
0.10 ?223% -0.1 0.00 perf-profile.calltrace.cycles-pp.xfs_trans_committed_bulk.xlog_cil_committed.xlog_cil_process_committed.xlog_state_do_iclog_callbacks.xlog_state_do_callback
0.82 ? 5% -0.1 0.73 ? 2% perf-profile.calltrace.cycles-pp.schedule.worker_thread.kthread.ret_from_fork
0.18 ?141% -0.1 0.08 ?223% perf-profile.calltrace.cycles-pp.complete.flush_workqueue_prep_pwqs.__flush_workqueue.xlog_cil_push_now.xlog_cil_force_seq
0.82 ? 7% -0.1 0.73 ? 5% perf-profile.calltrace.cycles-pp.mutex_spin_on_owner.__mutex_lock.__flush_workqueue.xlog_cil_push_now.xlog_cil_force_seq
1.64 ? 4% -0.1 1.55 ? 8% perf-profile.calltrace.cycles-pp.menu_select.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary
0.82 ? 6% -0.1 0.73 ? 5% perf-profile.calltrace.cycles-pp.__do_softirq.__irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.95 ? 7% -0.1 0.86 ? 5% perf-profile.calltrace.cycles-pp.__irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.84 ? 6% -0.1 0.75 ? 3% perf-profile.calltrace.cycles-pp.schedule.xlog_wait_on_iclog.xfs_log_force_seq.xfs_file_fsync.__x64_sys_fsync
1.10 ? 6% -0.1 1.02 ? 2% perf-profile.calltrace.cycles-pp.xlog_state_do_iclog_callbacks.xlog_state_do_callback.xlog_ioend_work.process_one_work.worker_thread
0.09 ?223% -0.1 0.00 perf-profile.calltrace.cycles-pp.schedule.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync.__x64_sys_fsync
0.09 ?223% -0.1 0.00 perf-profile.calltrace.cycles-pp.try_to_wake_up.up.xfs_buf_unlock.xfs_buf_item_release.xlog_cil_commit
0.09 ?223% -0.1 0.00 perf-profile.calltrace.cycles-pp.xfs_dir2_node_lookup.xfs_dir_lookup.xfs_lookup.xfs_vn_lookup.lookup_open
1.11 ? 6% -0.1 1.02 ? 3% perf-profile.calltrace.cycles-pp.xlog_state_do_callback.xlog_ioend_work.process_one_work.worker_thread.kthread
0.26 ?100% -0.1 0.17 ?141% perf-profile.calltrace.cycles-pp.mem_cgroup_css_rstat_flush.cgroup_rstat_flush_locked.cgroup_rstat_flush_irqsafe.__mem_cgroup_flush_stats.mem_cgroup_wb_stats
0.08 ?223% -0.1 0.00 perf-profile.calltrace.cycles-pp.try_to_wake_up.complete.flush_workqueue_prep_pwqs.__flush_workqueue.xlog_cil_push_now
0.08 ?223% -0.1 0.00 perf-profile.calltrace.cycles-pp.__filemap_get_folio.iomap_write_begin.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write
0.08 ?223% -0.1 0.00 perf-profile.calltrace.cycles-pp.__schedule.schedule.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync
1.24 ? 7% -0.1 1.16 ? 6% perf-profile.calltrace.cycles-pp.xfs_ialloc_read_agi.xfs_dialloc.xfs_create.xfs_generic_create.lookup_open
1.23 ? 7% -0.1 1.15 ? 6% perf-profile.calltrace.cycles-pp.xfs_read_agi.xfs_ialloc_read_agi.xfs_dialloc.xfs_create.xfs_generic_create
0.82 ? 6% -0.1 0.74 ? 3% perf-profile.calltrace.cycles-pp.__schedule.schedule.xlog_wait_on_iclog.xfs_log_force_seq.xfs_file_fsync
1.17 ? 7% -0.1 1.09 ? 6% perf-profile.calltrace.cycles-pp.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_read_agi.xfs_ialloc_read_agi.xfs_dialloc
1.13 ? 7% -0.1 1.05 ? 6% perf-profile.calltrace.cycles-pp.xfs_buf_lookup.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_read_agi
1.16 ? 7% -0.1 1.08 ? 6% perf-profile.calltrace.cycles-pp.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_read_agi.xfs_ialloc_read_agi
0.65 ? 4% -0.1 0.58 ? 6% perf-profile.calltrace.cycles-pp.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow.perf_tp_event.perf_trace_sched_stat_runtime
0.76 ? 9% -0.1 0.69 ? 6% perf-profile.calltrace.cycles-pp.blk_mq_end_request_batch.nvme_irq.__handle_irq_event_percpu.handle_irq_event.handle_edge_irq
0.93 ? 9% -0.1 0.86 ? 5% perf-profile.calltrace.cycles-pp.common_interrupt.asm_common_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
1.22 ? 7% -0.1 1.14 ? 6% perf-profile.calltrace.cycles-pp.xfs_trans_read_buf_map.xfs_read_agi.xfs_ialloc_read_agi.xfs_dialloc.xfs_create
0.94 ? 9% -0.1 0.86 ? 5% perf-profile.calltrace.cycles-pp.asm_common_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
1.28 ? 5% -0.1 1.20 ? 6% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
0.89 ? 9% -0.1 0.82 ? 5% perf-profile.calltrace.cycles-pp.handle_edge_irq.__common_interrupt.common_interrupt.asm_common_interrupt.cpuidle_enter_state
0.84 ? 9% -0.1 0.77 ? 5% perf-profile.calltrace.cycles-pp.__handle_irq_event_percpu.handle_irq_event.handle_edge_irq.__common_interrupt.common_interrupt
0.84 ? 8% -0.1 0.77 ? 5% perf-profile.calltrace.cycles-pp.nvme_irq.__handle_irq_event_percpu.handle_irq_event.handle_edge_irq.__common_interrupt
0.75 ? 3% -0.1 0.68 ? 6% perf-profile.calltrace.cycles-pp.xlog_cil_insert_items.xlog_cil_commit.__xfs_trans_commit.xfs_create.xfs_generic_create
0.86 ? 9% -0.1 0.79 ? 5% perf-profile.calltrace.cycles-pp.handle_irq_event.handle_edge_irq.__common_interrupt.common_interrupt.asm_common_interrupt
0.89 ? 9% -0.1 0.82 ? 5% perf-profile.calltrace.cycles-pp.__common_interrupt.common_interrupt.asm_common_interrupt.cpuidle_enter_state.cpuidle_enter
0.74 ? 9% -0.1 0.67 ? 8% perf-profile.calltrace.cycles-pp.xfs_iomap_write_unwritten.xfs_end_ioend.xfs_end_io.process_one_work.worker_thread
0.80 ? 28% -0.1 0.74 ? 3% perf-profile.calltrace.cycles-pp.perf_callchain_kernel.get_perf_callchain.perf_callchain.perf_prepare_sample.perf_event_output_forward
3.30 ? 8% -0.1 3.25 ? 5% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.99 ? 7% -0.0 0.95 ? 6% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
3.26 ? 8% -0.0 3.22 ? 5% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.67 ? 8% -0.0 0.63 ? 3% perf-profile.calltrace.cycles-pp.xlog_cil_process_committed.xlog_state_do_iclog_callbacks.xlog_state_do_callback.xlog_ioend_work.process_one_work
0.67 ? 8% -0.0 0.63 ? 3% perf-profile.calltrace.cycles-pp.xlog_cil_committed.xlog_cil_process_committed.xlog_state_do_iclog_callbacks.xlog_state_do_callback.xlog_ioend_work
0.65 ? 4% -0.0 0.61 ? 6% perf-profile.calltrace.cycles-pp.flush_workqueue_prep_pwqs.__flush_workqueue.xlog_cil_push_now.xlog_cil_force_seq.xfs_log_force_seq
0.96 ? 7% -0.0 0.92 ? 7% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.84 ? 7% -0.0 0.80 ? 3% perf-profile.calltrace.cycles-pp.xfs_dir_createname.xfs_create.xfs_generic_create.lookup_open.open_last_lookups
0.50 ? 45% -0.0 0.46 ? 45% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.cpuidle_idle_call.do_idle.cpu_startup_entry
0.71 ? 7% -0.0 0.67 ? 4% perf-profile.calltrace.cycles-pp.iomap_finish_ioend.iomap_finish_ioends.xfs_end_ioend.xfs_end_io.process_one_work
0.62 ? 5% -0.0 0.58 ? 3% perf-profile.calltrace.cycles-pp.xfs_init_new_inode.xfs_create.xfs_generic_create.lookup_open.open_last_lookups
3.66 ? 7% -0.0 3.62 ? 3% perf-profile.calltrace.cycles-pp.osq_lock.__mutex_lock.__flush_workqueue.xlog_cil_push_now.xlog_cil_force_seq
1.67 ? 8% -0.0 1.64 ? 5% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.77 ? 4% -0.0 0.74 ? 5% perf-profile.calltrace.cycles-pp.balance_dirty_pages_ratelimited_flags.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write
0.75 ? 7% -0.0 0.72 ? 4% perf-profile.calltrace.cycles-pp.xfs_dir2_node_addname.xfs_dir_createname.xfs_create.xfs_generic_create.lookup_open
1.74 ? 12% -0.0 1.71 ? 16% perf-profile.calltrace.cycles-pp.__libc_start_main
1.74 ? 12% -0.0 1.71 ? 16% perf-profile.calltrace.cycles-pp.main.__libc_start_main
1.74 ? 12% -0.0 1.71 ? 16% perf-profile.calltrace.cycles-pp.run_builtin.main.__libc_start_main
0.73 ? 4% -0.0 0.70 ? 6% perf-profile.calltrace.cycles-pp.balance_dirty_pages.balance_dirty_pages_ratelimited_flags.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write
0.72 ? 7% -0.0 0.68 ? 4% perf-profile.calltrace.cycles-pp.iomap_finish_ioends.xfs_end_ioend.xfs_end_io.process_one_work.worker_thread
0.72 ? 4% -0.0 0.69 ? 6% perf-profile.calltrace.cycles-pp.mem_cgroup_wb_stats.balance_dirty_pages.balance_dirty_pages_ratelimited_flags.iomap_write_iter.iomap_file_buffered_write
1.69 ? 12% -0.0 1.66 ? 17% perf-profile.calltrace.cycles-pp.__cmd_record.cmd_record.cmd_sched.run_builtin.main
1.69 ? 12% -0.0 1.66 ? 17% perf-profile.calltrace.cycles-pp.cmd_record.cmd_sched.run_builtin.main.__libc_start_main
1.69 ? 12% -0.0 1.66 ? 17% perf-profile.calltrace.cycles-pp.cmd_sched.run_builtin.main.__libc_start_main
0.71 ? 4% -0.0 0.68 ? 5% perf-profile.calltrace.cycles-pp.cgroup_rstat_flush_irqsafe.__mem_cgroup_flush_stats.mem_cgroup_wb_stats.balance_dirty_pages.balance_dirty_pages_ratelimited_flags
0.71 ? 4% -0.0 0.68 ? 5% perf-profile.calltrace.cycles-pp.cgroup_rstat_flush_locked.cgroup_rstat_flush_irqsafe.__mem_cgroup_flush_stats.mem_cgroup_wb_stats.balance_dirty_pages
0.64 ? 9% -0.0 0.62 ? 5% perf-profile.calltrace.cycles-pp.folio_end_writeback.iomap_finish_ioend.iomap_finish_ioends.xfs_end_ioend.xfs_end_io
0.71 ? 4% -0.0 0.68 ? 6% perf-profile.calltrace.cycles-pp.__mem_cgroup_flush_stats.mem_cgroup_wb_stats.balance_dirty_pages.balance_dirty_pages_ratelimited_flags.iomap_write_iter
0.95 ? 11% -0.0 0.93 ? 9% perf-profile.calltrace.cycles-pp.ktime_get.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
0.48 ? 45% -0.0 0.46 ? 45% perf-profile.calltrace.cycles-pp.sched_ttwu_pending.flush_smp_call_function_queue.do_idle.cpu_startup_entry.start_secondary
1.23 ? 9% -0.0 1.22 ? 6% perf-profile.calltrace.cycles-pp.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.79 ? 10% +0.0 0.79 ? 4% perf-profile.calltrace.cycles-pp.flush_smp_call_function_queue.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.85 ? 8% +0.0 0.89 ? 6% perf-profile.calltrace.cycles-pp.xlog_write.xlog_cil_push_work.process_one_work.worker_thread.kthread
1.22 ? 9% +0.0 1.26 ? 2% perf-profile.calltrace.cycles-pp.perf_trace_sched_switch.__schedule.schedule_idle.do_idle.cpu_startup_entry
1.15 ? 8% +0.0 1.20 ? 2% perf-profile.calltrace.cycles-pp.perf_tp_event.perf_trace_sched_switch.__schedule.schedule_idle.do_idle
0.52 ? 46% +0.1 0.57 ? 44% perf-profile.calltrace.cycles-pp.record__mmap_read_evlist.__cmd_record.cmd_record.cmd_sched.run_builtin
0.46 ? 45% +0.1 0.51 ? 45% perf-profile.calltrace.cycles-pp.__generic_file_write_iter.generic_file_write_iter.vfs_write.ksys_write.do_syscall_64
0.46 ? 45% +0.1 0.51 ? 45% perf-profile.calltrace.cycles-pp.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.vfs_write.ksys_write
1.11 ? 8% +0.1 1.16 ? 2% perf-profile.calltrace.cycles-pp.__perf_event_overflow.perf_tp_event.perf_trace_sched_switch.__schedule.schedule_idle
0.47 ? 45% +0.1 0.52 ? 45% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write.writen.record__pushfn
0.46 ? 45% +0.1 0.52 ? 45% perf-profile.calltrace.cycles-pp.generic_file_write_iter.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.47 ? 45% +0.1 0.53 ? 45% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_write.writen.record__pushfn.perf_mmap__push
0.51 ? 46% +0.1 0.56 ? 45% perf-profile.calltrace.cycles-pp.perf_mmap__push.record__mmap_read_evlist.__cmd_record.cmd_record.cmd_sched
0.48 ? 45% +0.1 0.54 ? 45% perf-profile.calltrace.cycles-pp.record__pushfn.perf_mmap__push.record__mmap_read_evlist.__cmd_record.cmd_record
0.48 ? 45% +0.1 0.53 ? 45% perf-profile.calltrace.cycles-pp.__libc_write.writen.record__pushfn.perf_mmap__push.record__mmap_read_evlist
0.46 ? 45% +0.1 0.52 ? 45% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
0.48 ? 45% +0.1 0.54 ? 45% perf-profile.calltrace.cycles-pp.writen.record__pushfn.perf_mmap__push.record__mmap_read_evlist.__cmd_record
0.47 ? 45% +0.1 0.52 ? 45% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write.writen
1.84 ? 8% +0.1 1.90 perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
1.82 ? 7% +0.1 1.88 perf-profile.calltrace.cycles-pp.__schedule.schedule_idle.do_idle.cpu_startup_entry.start_secondary
0.00 +0.1 0.09 ?223% perf-profile.calltrace.cycles-pp.memcpy_erms.xlog_write.xlog_cil_push_work.process_one_work.worker_thread
1.48 ? 7% +0.1 1.58 ? 5% perf-profile.calltrace.cycles-pp.xlog_cil_push_work.process_one_work.worker_thread.kthread.ret_from_fork
1.34 ? 17% +0.1 1.47 ? 28% perf-profile.calltrace.cycles-pp.dequeue_entity.dequeue_task_fair.__schedule.schedule.schedule_timeout
0.00 +0.2 0.17 ?141% perf-profile.calltrace.cycles-pp.xfs_buf_unlock.xfs_buf_item_release.xlog_cil_commit.__xfs_trans_commit.xfs_bmapi_convert_delalloc
0.47 ? 44% +0.2 0.71 ? 57% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.schedule_timeout.___down_common
0.00 +0.3 0.26 ?100% perf-profile.calltrace.cycles-pp.xfs_buf_item_release.xlog_cil_commit.__xfs_trans_commit.xfs_bmapi_convert_delalloc.xfs_map_blocks
0.00 +0.4 0.36 ? 70% perf-profile.calltrace.cycles-pp._raw_spin_lock.xfs_extent_busy_trim.xfs_alloc_compute_aligned.xfs_alloc_cur_check.xfs_alloc_walk_iter
0.75 ? 6% +0.4 1.18 ? 4% perf-profile.calltrace.cycles-pp.xlog_cil_commit.__xfs_trans_commit.xfs_bmapi_convert_delalloc.xfs_map_blocks.iomap_writepage_map
0.80 ? 6% +0.4 1.24 ? 4% perf-profile.calltrace.cycles-pp.__xfs_trans_commit.xfs_bmapi_convert_delalloc.xfs_map_blocks.iomap_writepage_map.write_cache_pages
0.00 +0.5 0.49 ?117% perf-profile.calltrace.cycles-pp.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry.rest_init
0.00 +0.5 0.50 ?116% perf-profile.calltrace.cycles-pp.cpuidle_idle_call.do_idle.cpu_startup_entry.rest_init.arch_call_rest_init
0.08 ?223% +0.5 0.61 ? 90% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.rest_init.arch_call_rest_init.start_kernel
0.08 ?223% +0.5 0.61 ? 90% perf-profile.calltrace.cycles-pp.cpu_startup_entry.rest_init.arch_call_rest_init.start_kernel.secondary_startup_64_no_verify
0.08 ?223% +0.5 0.61 ? 90% perf-profile.calltrace.cycles-pp.start_kernel.secondary_startup_64_no_verify
0.08 ?223% +0.5 0.61 ? 90% perf-profile.calltrace.cycles-pp.arch_call_rest_init.start_kernel.secondary_startup_64_no_verify
0.08 ?223% +0.5 0.61 ? 90% perf-profile.calltrace.cycles-pp.rest_init.arch_call_rest_init.start_kernel.secondary_startup_64_no_verify
0.00 +0.6 0.63 ? 5% perf-profile.calltrace.cycles-pp.xfs_alloc_get_rec.xfs_alloc_cur_check.xfs_alloc_walk_iter.xfs_alloc_ag_vextent_near.xfs_alloc_ag_vextent
0.00 +0.8 0.76 ? 8% perf-profile.calltrace.cycles-pp.xfs_extent_busy_trim.xfs_alloc_compute_aligned.xfs_alloc_cur_check.xfs_alloc_walk_iter.xfs_alloc_ag_vextent_near
1.02 ? 7% +0.9 1.96 ? 4% perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.___down_common.__down.down
1.02 ? 8% +0.9 1.96 ? 4% perf-profile.calltrace.cycles-pp.__schedule.schedule.schedule_timeout.___down_common.__down
1.02 ? 8% +0.9 1.97 ? 4% perf-profile.calltrace.cycles-pp.schedule_timeout.___down_common.__down.down.xfs_buf_lock
0.00 +1.0 0.96 ? 6% perf-profile.calltrace.cycles-pp.xfs_alloc_compute_aligned.xfs_alloc_cur_check.xfs_alloc_walk_iter.xfs_alloc_ag_vextent_near.xfs_alloc_ag_vextent
1.06 ? 7% +1.0 2.02 ? 4% perf-profile.calltrace.cycles-pp.__down.down.xfs_buf_lock.xfs_buf_find_lock.xfs_buf_lookup
1.06 ? 7% +1.0 2.02 ? 4% perf-profile.calltrace.cycles-pp.___down_common.__down.down.xfs_buf_lock.xfs_buf_find_lock
1.08 ? 7% +1.0 2.07 ? 4% perf-profile.calltrace.cycles-pp.down.xfs_buf_lock.xfs_buf_find_lock.xfs_buf_lookup.xfs_buf_get_map
1.08 ? 7% +1.0 2.08 ? 4% perf-profile.calltrace.cycles-pp.xfs_buf_lock.xfs_buf_find_lock.xfs_buf_lookup.xfs_buf_get_map.xfs_buf_read_map
1.10 ? 7% +1.0 2.10 ? 4% perf-profile.calltrace.cycles-pp.xfs_buf_find_lock.xfs_buf_lookup.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map
0.00 +1.1 1.12 ? 4% perf-profile.calltrace.cycles-pp.xfs_buf_lookup.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_read_agf
0.00 +1.2 1.15 ? 4% perf-profile.calltrace.cycles-pp.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_read_agf.xfs_alloc_read_agf
0.00 +1.2 1.16 ? 4% perf-profile.calltrace.cycles-pp.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_read_agf.xfs_alloc_read_agf.xfs_alloc_fix_freelist
0.00 +1.2 1.20 ? 4% perf-profile.calltrace.cycles-pp.xfs_trans_read_buf_map.xfs_read_agf.xfs_alloc_read_agf.xfs_alloc_fix_freelist.__xfs_alloc_vextent_this_ag
0.00 +1.2 1.21 ? 4% perf-profile.calltrace.cycles-pp.xfs_read_agf.xfs_alloc_read_agf.xfs_alloc_fix_freelist.__xfs_alloc_vextent_this_ag.xfs_alloc_vextent_iterate_ags
0.00 +1.2 1.23 ? 4% perf-profile.calltrace.cycles-pp.xfs_alloc_read_agf.xfs_alloc_fix_freelist.__xfs_alloc_vextent_this_ag.xfs_alloc_vextent_iterate_ags.xfs_alloc_vextent
0.00 +1.4 1.42 ? 4% perf-profile.calltrace.cycles-pp.xfs_alloc_fix_freelist.__xfs_alloc_vextent_this_ag.xfs_alloc_vextent_iterate_ags.xfs_alloc_vextent.xfs_bmap_btalloc
0.00 +1.4 1.42 ? 4% perf-profile.calltrace.cycles-pp.__xfs_alloc_vextent_this_ag.xfs_alloc_vextent_iterate_ags.xfs_alloc_vextent.xfs_bmap_btalloc.xfs_bmapi_allocate
42.77 ? 7% +1.5 44.26 ? 2% perf-profile.calltrace.cycles-pp.mwait_idle_with_hints.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
42.78 ? 7% +1.5 44.28 ? 2% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
19.36 ? 4% +2.3 21.68 ? 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.fsync
19.34 ? 4% +2.3 21.67 ? 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.fsync
19.49 ? 4% +2.3 21.82 ? 2% perf-profile.calltrace.cycles-pp.fsync
19.33 ? 4% +2.3 21.66 ? 2% perf-profile.calltrace.cycles-pp.__x64_sys_fsync.do_syscall_64.entry_SYSCALL_64_after_hwframe.fsync
19.31 ? 4% +2.3 21.64 ? 2% perf-profile.calltrace.cycles-pp.xfs_file_fsync.__x64_sys_fsync.do_syscall_64.entry_SYSCALL_64_after_hwframe.fsync
0.00 +2.4 2.41 ? 3% perf-profile.calltrace.cycles-pp.xfs_alloc_cur_check.xfs_alloc_walk_iter.xfs_alloc_ag_vextent_near.xfs_alloc_ag_vextent.xfs_alloc_vextent_iterate_ags
1.17 ? 8% +2.5 3.66 ? 4% perf-profile.calltrace.cycles-pp.xfs_alloc_ag_vextent.xfs_alloc_vextent_iterate_ags.xfs_alloc_vextent.xfs_bmap_btalloc.xfs_bmapi_allocate
1.02 ? 8% +2.6 3.61 ? 4% perf-profile.calltrace.cycles-pp.xfs_alloc_ag_vextent_near.xfs_alloc_ag_vextent.xfs_alloc_vextent_iterate_ags.xfs_alloc_vextent.xfs_bmap_btalloc
0.00 +2.8 2.85 ? 4% perf-profile.calltrace.cycles-pp.xfs_alloc_walk_iter.xfs_alloc_ag_vextent_near.xfs_alloc_ag_vextent.xfs_alloc_vextent_iterate_ags.xfs_alloc_vextent
1.77 ? 6% +3.5 5.29 ? 3% perf-profile.calltrace.cycles-pp.xfs_bmapi_allocate.xfs_bmapi_convert_delalloc.xfs_map_blocks.iomap_writepage_map.write_cache_pages
1.70 ? 7% +3.5 5.23 ? 3% perf-profile.calltrace.cycles-pp.xfs_bmap_btalloc.xfs_bmapi_allocate.xfs_bmapi_convert_delalloc.xfs_map_blocks.iomap_writepage_map
1.56 ? 8% +3.5 5.10 ? 3% perf-profile.calltrace.cycles-pp.xfs_alloc_vextent.xfs_bmap_btalloc.xfs_bmapi_allocate.xfs_bmapi_convert_delalloc.xfs_map_blocks
1.55 ? 8% +3.5 5.09 ? 3% perf-profile.calltrace.cycles-pp.xfs_alloc_vextent_iterate_ags.xfs_alloc_vextent.xfs_bmap_btalloc.xfs_bmapi_allocate.xfs_bmapi_convert_delalloc
5.04 ? 4% +3.7 8.74 ? 3% perf-profile.calltrace.cycles-pp.file_write_and_wait_range.xfs_file_fsync.__x64_sys_fsync.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.73 ? 5% +3.9 7.64 ? 3% perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync.__x64_sys_fsync.do_syscall_64
3.72 ? 5% +3.9 7.64 ? 3% perf-profile.calltrace.cycles-pp.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync.__x64_sys_fsync
3.67 ? 5% +3.9 7.59 ? 3% perf-profile.calltrace.cycles-pp.xfs_vm_writepages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.file_write_and_wait_range
3.68 ? 5% +3.9 7.61 ? 3% perf-profile.calltrace.cycles-pp.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync
3.33 ? 5% +4.0 7.29 ? 3% perf-profile.calltrace.cycles-pp.write_cache_pages.iomap_writepages.xfs_vm_writepages.do_writepages.filemap_fdatawrite_wbc
3.34 ? 5% +4.0 7.29 ? 3% perf-profile.calltrace.cycles-pp.iomap_writepages.xfs_vm_writepages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range
2.74 ? 5% +4.0 6.70 ? 3% perf-profile.calltrace.cycles-pp.xfs_bmapi_convert_delalloc.xfs_map_blocks.iomap_writepage_map.write_cache_pages.iomap_writepages
3.06 ? 5% +4.0 7.02 ? 3% perf-profile.calltrace.cycles-pp.iomap_writepage_map.write_cache_pages.iomap_writepages.xfs_vm_writepages.do_writepages
2.77 ? 5% +4.0 6.74 ? 3% perf-profile.calltrace.cycles-pp.xfs_map_blocks.iomap_writepage_map.write_cache_pages.iomap_writepages.xfs_vm_writepages
2.82 ?112% -1.6 1.17 ? 15% perf-profile.children.cycles-pp.intel_idle_irq
14.01 ? 3% -1.4 12.65 ? 2% perf-profile.children.cycles-pp.xfs_log_force_seq
7.74 ? 27% -1.4 6.38 ? 4% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
58.43 -1.0 57.39 perf-profile.children.cycles-pp.start_secondary
6.34 ? 7% -0.9 5.40 ? 7% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
5.33 ? 6% -0.7 4.61 ? 4% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
9.48 ? 4% -0.7 8.77 ? 2% perf-profile.children.cycles-pp.xlog_cil_force_seq
55.71 -0.7 55.02 perf-profile.children.cycles-pp.cpuidle_idle_call
4.46 ? 7% -0.7 3.80 ? 5% perf-profile.children.cycles-pp.remove_wait_queue
58.80 -0.7 58.15 perf-profile.children.cycles-pp.secondary_startup_64_no_verify
58.80 -0.7 58.15 perf-profile.children.cycles-pp.cpu_startup_entry
58.79 -0.7 58.14 perf-profile.children.cycles-pp.do_idle
4.36 ? 6% -0.6 3.72 ? 5% perf-profile.children.cycles-pp.xlog_wait_on_iclog
7.47 ? 4% -0.6 6.83 ? 3% perf-profile.children.cycles-pp.open64
7.42 ? 4% -0.6 6.78 ? 3% perf-profile.children.cycles-pp.__x64_sys_openat
7.42 ? 4% -0.6 6.78 ? 3% perf-profile.children.cycles-pp.do_sys_openat2
7.34 ? 4% -0.6 6.71 ? 3% perf-profile.children.cycles-pp.do_filp_open
7.33 ? 4% -0.6 6.71 ? 3% perf-profile.children.cycles-pp.path_openat
6.99 ? 4% -0.6 6.38 ? 3% perf-profile.children.cycles-pp.open_last_lookups
53.70 ? 2% -0.6 53.11 perf-profile.children.cycles-pp.cpuidle_enter
6.90 ? 4% -0.6 6.31 ? 3% perf-profile.children.cycles-pp.lookup_open
53.67 ? 2% -0.6 53.09 perf-profile.children.cycles-pp.cpuidle_enter_state
7.83 ? 4% -0.5 7.34 ? 2% perf-profile.children.cycles-pp.xlog_cil_push_now
6.11 ? 4% -0.5 5.63 ? 3% perf-profile.children.cycles-pp.xfs_generic_create
7.54 ? 4% -0.5 7.07 ? 2% perf-profile.children.cycles-pp.__flush_workqueue
5.98 ? 4% -0.5 5.52 ? 4% perf-profile.children.cycles-pp.xfs_create
5.88 ? 2% -0.4 5.47 ? 4% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
3.36 ? 9% -0.4 3.00 ? 8% perf-profile.children.cycles-pp.write
3.25 ? 9% -0.4 2.88 ? 9% perf-profile.children.cycles-pp.xfs_file_buffered_write
3.16 ? 9% -0.4 2.81 ? 9% perf-profile.children.cycles-pp.iomap_file_buffered_write
2.97 ? 10% -0.3 2.63 ? 9% perf-profile.children.cycles-pp.iomap_write_iter
5.78 ? 3% -0.3 5.48 ? 2% perf-profile.children.cycles-pp.kthread
5.78 ? 3% -0.3 5.48 ? 2% perf-profile.children.cycles-pp.ret_from_fork
3.84 ? 8% -0.3 3.55 ? 6% perf-profile.children.cycles-pp.ksys_write
2.34 ? 26% -0.3 2.05 ? 5% perf-profile.children.cycles-pp.record__finish_output
2.34 ? 26% -0.3 2.05 ? 5% perf-profile.children.cycles-pp.perf_session__process_events
2.34 ? 26% -0.3 2.05 ? 5% perf-profile.children.cycles-pp.reader__read_event
3.82 ? 9% -0.3 3.53 ? 6% perf-profile.children.cycles-pp.vfs_write
5.52 ? 2% -0.3 5.25 ? 2% perf-profile.children.cycles-pp.worker_thread
1.89 ? 7% -0.3 1.64 ? 3% perf-profile.children.cycles-pp.__wait_for_common
1.47 ? 18% -0.2 1.22 ? 15% perf-profile.children.cycles-pp.iomap_write_end
1.34 ? 20% -0.2 1.10 ? 17% perf-profile.children.cycles-pp.filemap_dirty_folio
1.20 ? 23% -0.2 0.97 ? 19% perf-profile.children.cycles-pp.locked_inode_to_wb_and_lock_list
1.22 ? 22% -0.2 1.00 ? 18% perf-profile.children.cycles-pp.__mark_inode_dirty
1.34 ? 29% -0.2 1.12 ? 9% perf-profile.children.cycles-pp.process_simple
1.27 ? 29% -0.2 1.06 ? 9% perf-profile.children.cycles-pp.ordered_events__queue
45.78 ? 2% -0.2 45.57 ? 2% perf-profile.children.cycles-pp.mwait_idle_with_hints
1.25 ? 29% -0.2 1.04 ? 9% perf-profile.children.cycles-pp.queue_event
1.28 ? 5% -0.2 1.08 ? 3% perf-profile.children.cycles-pp.__filemap_fdatawait_range
3.08 ? 15% -0.2 2.88 ? 7% perf-profile.children.cycles-pp.__cmd_record
4.05 ? 2% -0.2 3.86 ? 5% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
1.20 ? 5% -0.2 1.00 ? 3% perf-profile.children.cycles-pp.folio_wait_bit_common
1.21 ? 5% -0.2 1.02 ? 3% perf-profile.children.cycles-pp.folio_wait_writeback
4.00 ? 2% -0.2 3.82 ? 4% perf-profile.children.cycles-pp.hrtimer_interrupt
1.13 ? 5% -0.2 0.94 ? 3% perf-profile.children.cycles-pp.io_schedule
4.63 ? 3% -0.2 4.46 ? 2% perf-profile.children.cycles-pp.process_one_work
1.14 ? 2% -0.2 0.98 ? 6% perf-profile.children.cycles-pp.__irq_exit_rcu
2.26 ? 5% -0.1 2.12 ? 5% perf-profile.children.cycles-pp.xfs_dialloc
0.97 ? 2% -0.1 0.83 ? 5% perf-profile.children.cycles-pp.__do_softirq
0.55 ? 66% -0.1 0.41 ? 7% perf-profile.children.cycles-pp.poll_idle
0.70 ? 6% -0.1 0.57 ? 4% perf-profile.children.cycles-pp.xfs_da_read_buf
0.61 ? 10% -0.1 0.48 ? 6% perf-profile.children.cycles-pp.xfs_da3_node_lookup_int
4.56 ? 6% -0.1 4.43 ? 2% perf-profile.children.cycles-pp.__mutex_lock
1.68 ? 11% -0.1 1.56 ? 7% perf-profile.children.cycles-pp.ktime_get
1.48 ? 7% -0.1 1.38 ? 5% perf-profile.children.cycles-pp.xfs_end_io
1.47 ? 7% -0.1 1.37 ? 5% perf-profile.children.cycles-pp.xfs_end_ioend
0.85 ? 5% -0.1 0.75 ? 7% perf-profile.children.cycles-pp.xfs_dialloc_ag
1.19 ? 6% -0.1 1.09 ? 3% perf-profile.children.cycles-pp.xlog_ioend_work
0.52 ? 10% -0.1 0.43 ? 8% perf-profile.children.cycles-pp.xfs_lookup
0.52 ? 11% -0.1 0.42 ? 9% perf-profile.children.cycles-pp.xfs_dir_lookup
0.82 ? 7% -0.1 0.73 ? 5% perf-profile.children.cycles-pp.mutex_spin_on_owner
0.56 ? 9% -0.1 0.47 ? 7% perf-profile.children.cycles-pp.xfs_vn_lookup
2.17 ? 2% -0.1 2.08 ? 5% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.42 ? 16% -0.1 0.33 ? 7% perf-profile.children.cycles-pp.xfs_dir2_node_lookup
1.10 ? 6% -0.1 1.02 ? 2% perf-profile.children.cycles-pp.xlog_state_do_iclog_callbacks
1.11 ? 6% -0.1 1.02 ? 3% perf-profile.children.cycles-pp.xlog_state_do_callback
0.08 ? 21% -0.1 0.00 perf-profile.children.cycles-pp.xfs_alloc_ag_vextent_size
1.66 ? 4% -0.1 1.58 ? 8% perf-profile.children.cycles-pp.menu_select
1.67 ? 4% -0.1 1.59 ? 6% perf-profile.children.cycles-pp.tick_sched_timer
1.24 ? 7% -0.1 1.16 ? 6% perf-profile.children.cycles-pp.xfs_ialloc_read_agi
1.23 ? 7% -0.1 1.15 ? 6% perf-profile.children.cycles-pp.xfs_read_agi
0.99 ? 8% -0.1 0.90 ? 5% perf-profile.children.cycles-pp.common_interrupt
0.94 ? 8% -0.1 0.86 ? 5% perf-profile.children.cycles-pp.handle_edge_irq
1.00 ? 8% -0.1 0.92 ? 5% perf-profile.children.cycles-pp.asm_common_interrupt
0.80 ? 8% -0.1 0.72 ? 6% perf-profile.children.cycles-pp.blk_mq_end_request_batch
0.89 ? 7% -0.1 0.81 ? 5% perf-profile.children.cycles-pp.__handle_irq_event_percpu
0.88 ? 7% -0.1 0.81 ? 6% perf-profile.children.cycles-pp.nvme_irq
0.94 ? 8% -0.1 0.87 ? 5% perf-profile.children.cycles-pp.__common_interrupt
0.91 ? 8% -0.1 0.83 ? 5% perf-profile.children.cycles-pp.handle_irq_event
0.70 ? 10% -0.1 0.62 ? 7% perf-profile.children.cycles-pp.load_balance
0.71 ? 8% -0.1 0.63 ? 6% perf-profile.children.cycles-pp.__queue_work
0.56 ? 8% -0.1 0.49 ? 3% perf-profile.children.cycles-pp.xfs_end_bio
0.60 ? 11% -0.1 0.53 ? 6% perf-profile.children.cycles-pp.find_busiest_group
0.71 ? 8% -0.1 0.64 ? 6% perf-profile.children.cycles-pp.queue_work_on
1.24 ? 3% -0.1 1.18 ? 4% perf-profile.children.cycles-pp.xlog_cil_insert_items
0.58 ? 11% -0.1 0.51 ? 5% perf-profile.children.cycles-pp.update_sd_lb_stats
0.74 ? 9% -0.1 0.67 ? 8% perf-profile.children.cycles-pp.xfs_iomap_write_unwritten
0.20 ? 12% -0.1 0.14 ? 9% perf-profile.children.cycles-pp.xfs_perag_get
0.88 ? 23% -0.1 0.82 ? 5% perf-profile.children.cycles-pp.__ordered_events__flush
0.77 ? 14% -0.1 0.71 ? 5% perf-profile.children.cycles-pp.pick_next_task_fair
0.88 ? 23% -0.1 0.82 ? 5% perf-profile.children.cycles-pp.perf_session__process_user_event
1.38 ? 8% -0.1 1.32 ? 6% perf-profile.children.cycles-pp.clockevents_program_event
0.86 ? 23% -0.1 0.80 ? 5% perf-profile.children.cycles-pp.perf_session__deliver_event
0.59 ? 8% -0.1 0.54 ? 5% perf-profile.children.cycles-pp.xfs_btree_read_buf_block
0.52 ? 5% -0.1 0.46 ? 4% perf-profile.children.cycles-pp.xlog_cil_alloc_shadow_bufs
0.58 ? 15% -0.1 0.52 ? 8% perf-profile.children.cycles-pp.newidle_balance
0.34 ? 23% -0.1 0.28 ? 5% perf-profile.children.cycles-pp.dso__load
1.36 ? 4% -0.1 1.30 ? 7% perf-profile.children.cycles-pp.tick_sched_handle
1.71 ? 4% -0.1 1.66 ? 3% perf-profile.children.cycles-pp.__unwind_start
1.32 ? 4% -0.1 1.26 ? 7% perf-profile.children.cycles-pp.update_process_times
0.34 ? 23% -0.1 0.28 ? 5% perf-profile.children.cycles-pp.__dso__load_kallsyms
0.30 ? 5% -0.1 0.24 ? 4% perf-profile.children.cycles-pp.rcu_core
0.34 ? 23% -0.1 0.29 ? 5% perf-profile.children.cycles-pp.map__load
0.50 ? 12% -0.0 0.45 ? 6% perf-profile.children.cycles-pp.update_sg_lb_stats
0.77 ? 5% -0.0 0.72 ? 5% perf-profile.children.cycles-pp.complete
0.69 ? 7% -0.0 0.64 ? 6% perf-profile.children.cycles-pp.xfs_btree_lookup_get_block
0.50 ? 4% -0.0 0.45 ? 5% perf-profile.children.cycles-pp.iomap_write_begin
0.37 ? 5% -0.0 0.32 ? 7% perf-profile.children.cycles-pp.rebalance_domains
0.41 ? 8% -0.0 0.36 ? 5% perf-profile.children.cycles-pp.xlog_state_clean_iclog
0.28 ? 22% -0.0 0.24 ? 5% perf-profile.children.cycles-pp.kallsyms__parse
0.04 ? 45% -0.0 0.00 perf-profile.children.cycles-pp.__d_lookup
0.11 ? 69% -0.0 0.06 ? 14% perf-profile.children.cycles-pp.__irqentry_text_start
0.48 ? 5% -0.0 0.44 ? 5% perf-profile.children.cycles-pp.__filemap_get_folio
0.36 ? 11% -0.0 0.32 ? 10% perf-profile.children.cycles-pp.xfs_dialloc_ag_update_inobt
0.22 ? 24% -0.0 0.18 ? 15% perf-profile.children.cycles-pp.tick_irq_enter
0.24 ? 24% -0.0 0.19 ? 14% perf-profile.children.cycles-pp.irq_enter_rcu
0.67 ? 8% -0.0 0.63 ? 3% perf-profile.children.cycles-pp.xlog_cil_process_committed
0.67 ? 8% -0.0 0.63 ? 3% perf-profile.children.cycles-pp.xlog_cil_committed
0.36 ? 22% -0.0 0.32 ? 4% perf-profile.children.cycles-pp.thread__find_map
0.84 ? 7% -0.0 0.80 ? 3% perf-profile.children.cycles-pp.xfs_dir_createname
0.65 ? 4% -0.0 0.61 ? 6% perf-profile.children.cycles-pp.flush_workqueue_prep_pwqs
0.21 ? 18% -0.0 0.17 ? 11% perf-profile.children.cycles-pp.wb_workfn
0.21 ? 18% -0.0 0.17 ? 11% perf-profile.children.cycles-pp.wb_do_writeback
0.21 ? 18% -0.0 0.17 ? 11% perf-profile.children.cycles-pp.wb_writeback
0.98 ? 5% -0.0 0.94 ? 6% perf-profile.children.cycles-pp.xfs_btree_lookup
0.17 ? 13% -0.0 0.13 ? 8% perf-profile.children.cycles-pp.xfs_trans_log_buf
0.06 ? 47% -0.0 0.02 ?141% perf-profile.children.cycles-pp.xfs_buf_offset
0.05 ? 82% -0.0 0.01 ?223% perf-profile.children.cycles-pp.xfs_trans_buf_set_type
0.62 ? 5% -0.0 0.58 ? 3% perf-profile.children.cycles-pp.xfs_init_new_inode
0.59 ? 9% -0.0 0.55 ? 7% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.33 ? 14% -0.0 0.29 ? 5% perf-profile.children.cycles-pp.iomap_submit_ioend
0.71 ? 7% -0.0 0.67 ? 4% perf-profile.children.cycles-pp.iomap_finish_ioend
0.41 ? 22% -0.0 0.38 ? 5% perf-profile.children.cycles-pp.build_id__mark_dso_hit
0.22 ? 9% -0.0 0.19 ? 14% perf-profile.children.cycles-pp.dput
0.22 ? 13% -0.0 0.18 ? 7% perf-profile.children.cycles-pp.xfs_inode_item_format
0.16 ? 14% -0.0 0.13 ? 8% perf-profile.children.cycles-pp.xfs_trans_dirty_buf
0.05 ? 45% -0.0 0.01 ?223% perf-profile.children.cycles-pp.tick_nohz_tick_stopped
0.06 ? 8% -0.0 0.02 ?141% perf-profile.children.cycles-pp.xfs_file_write_checks
0.48 ? 10% -0.0 0.45 ? 4% perf-profile.children.cycles-pp.xfs_trans_committed_bulk
0.07 ? 15% -0.0 0.04 ?104% perf-profile.children.cycles-pp.try_to_unlazy
0.35 ? 5% -0.0 0.32 ? 6% perf-profile.children.cycles-pp.mkdir
1.74 ? 12% -0.0 1.71 ? 16% perf-profile.children.cycles-pp.__libc_start_main
1.74 ? 12% -0.0 1.71 ? 16% perf-profile.children.cycles-pp.main
1.74 ? 12% -0.0 1.71 ? 16% perf-profile.children.cycles-pp.run_builtin
0.73 ? 4% -0.0 0.70 ? 6% perf-profile.children.cycles-pp.balance_dirty_pages
0.72 ? 7% -0.0 0.68 ? 4% perf-profile.children.cycles-pp.iomap_finish_ioends
0.41 ? 8% -0.0 0.38 ? 9% perf-profile.children.cycles-pp.__close
0.38 ? 8% -0.0 0.35 ? 8% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.28 ? 7% -0.0 0.25 ? 13% perf-profile.children.cycles-pp.xfs_dialloc_ag_finobt_near
0.16 ? 19% -0.0 0.13 ? 9% perf-profile.children.cycles-pp.__writeback_inodes_wb
0.16 ? 19% -0.0 0.13 ? 9% perf-profile.children.cycles-pp.writeback_sb_inodes
1.69 ? 12% -0.0 1.66 ? 17% perf-profile.children.cycles-pp.cmd_record
0.75 ? 7% -0.0 0.72 ? 4% perf-profile.children.cycles-pp.xfs_dir2_node_addname
0.72 ? 4% -0.0 0.69 ? 6% perf-profile.children.cycles-pp.mem_cgroup_wb_stats
1.69 ? 12% -0.0 1.66 ? 17% perf-profile.children.cycles-pp.cmd_sched
0.67 ? 4% -0.0 0.64 ? 3% perf-profile.children.cycles-pp.__wake_up_common_lock
0.36 ? 9% -0.0 0.32 ? 9% perf-profile.children.cycles-pp.exit_to_user_mode_loop
0.71 ? 4% -0.0 0.68 ? 5% perf-profile.children.cycles-pp.cgroup_rstat_flush_irqsafe
0.71 ? 4% -0.0 0.68 ? 5% perf-profile.children.cycles-pp.cgroup_rstat_flush_locked
0.32 ? 6% -0.0 0.29 ? 6% perf-profile.children.cycles-pp.__x64_sys_mkdir
0.78 ? 4% -0.0 0.74 ? 5% perf-profile.children.cycles-pp.balance_dirty_pages_ratelimited_flags
0.39 ? 8% -0.0 0.36 ? 7% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.23 ? 11% -0.0 0.20 ? 11% perf-profile.children.cycles-pp.rwsem_down_read_slowpath
0.22 ? 14% -0.0 0.19 ? 8% perf-profile.children.cycles-pp.ksys_read
0.22 ? 14% -0.0 0.19 ? 8% perf-profile.children.cycles-pp.vfs_read
0.15 ? 20% -0.0 0.12 ? 13% perf-profile.children.cycles-pp.__libc_read
0.03 ?101% -0.0 0.00 perf-profile.children.cycles-pp._xfs_buf_ioapply
0.14 ? 15% -0.0 0.11 ? 18% perf-profile.children.cycles-pp.list_lru_add
0.44 ? 6% -0.0 0.41 ? 6% perf-profile.children.cycles-pp.exc_page_fault
3.66 ? 7% -0.0 3.63 ? 3% perf-profile.children.cycles-pp.osq_lock
0.34 ? 10% -0.0 0.31 ? 9% perf-profile.children.cycles-pp.task_work_run
0.14 ? 16% -0.0 0.12 ? 8% perf-profile.children.cycles-pp.xfs_trans_buf_item_match
0.14 ? 14% -0.0 0.12 ? 19% perf-profile.children.cycles-pp.d_lru_add
0.13 ? 7% -0.0 0.10 ? 7% perf-profile.children.cycles-pp.note_gp_changes
0.08 ? 17% -0.0 0.05 ? 47% perf-profile.children.cycles-pp.xlog_state_done_syncing
0.06 ? 13% -0.0 0.03 ?102% perf-profile.children.cycles-pp.__x64_sys_execve
0.06 ? 13% -0.0 0.03 ?102% perf-profile.children.cycles-pp.do_execveat_common
0.03 ?100% -0.0 0.00 perf-profile.children.cycles-pp.xfs_mod_freecounter
0.64 ? 9% -0.0 0.62 ? 5% perf-profile.children.cycles-pp.folio_end_writeback
0.71 ? 4% -0.0 0.68 ? 6% perf-profile.children.cycles-pp.__mem_cgroup_flush_stats
0.45 ? 11% -0.0 0.42 ? 9% perf-profile.children.cycles-pp.tick_nohz_next_event
0.20 ? 36% -0.0 0.17 ? 22% perf-profile.children.cycles-pp.xfsaild_push
0.20 ? 36% -0.0 0.17 ? 22% perf-profile.children.cycles-pp.xfsaild
0.27 ? 10% -0.0 0.24 ? 6% perf-profile.children.cycles-pp.__list_del_entry_valid
0.24 ? 11% -0.0 0.21 ? 7% perf-profile.children.cycles-pp.__folio_end_writeback
0.23 ? 7% -0.0 0.20 ? 16% perf-profile.children.cycles-pp.do_user_addr_fault
0.33 ? 3% -0.0 0.30 ? 11% perf-profile.children.cycles-pp.update_load_avg
0.17 ? 25% -0.0 0.14 ? 6% perf-profile.children.cycles-pp.io__get_hex
0.04 ?101% -0.0 0.01 ?223% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.03 ?100% -0.0 0.00 perf-profile.children.cycles-pp.get_stack_info
0.04 ? 45% -0.0 0.02 ?141% perf-profile.children.cycles-pp.submit_bio_wait
0.04 ? 71% -0.0 0.01 ?223% perf-profile.children.cycles-pp.ttwu_do_wakeup
0.32 ? 21% -0.0 0.30 ? 11% perf-profile.children.cycles-pp.tick_nohz_irq_exit
0.48 ? 6% -0.0 0.45 ? 5% perf-profile.children.cycles-pp.xfs_buf_item_format_segment
0.13 ? 4% -0.0 0.10 ? 16% perf-profile.children.cycles-pp.kmem_cache_free
0.12 ? 22% -0.0 0.10 ? 11% perf-profile.children.cycles-pp.proc_reg_read
0.13 ? 16% -0.0 0.10 ? 15% perf-profile.children.cycles-pp.random
0.02 ? 99% -0.0 0.00 perf-profile.children.cycles-pp.pick_next_task_idle
0.15 ? 12% -0.0 0.12 ? 15% perf-profile.children.cycles-pp.__might_resched
0.13 ? 10% -0.0 0.11 ? 3% perf-profile.children.cycles-pp.xfs_dir3_data_read
0.51 ? 4% -0.0 0.48 ? 3% perf-profile.children.cycles-pp.xfs_buf_item_format
0.19 ? 13% -0.0 0.16 ? 13% perf-profile.children.cycles-pp.iomap_iter
0.15 ? 10% -0.0 0.13 ? 11% perf-profile.children.cycles-pp.xfs_inode_to_log_dinode
0.06 ? 46% -0.0 0.03 ?106% perf-profile.children.cycles-pp.__legitimize_path
0.06 ? 13% -0.0 0.03 ? 70% perf-profile.children.cycles-pp.d_lookup
0.08 ? 13% -0.0 0.06 ? 46% perf-profile.children.cycles-pp.drm_atomic_helper_commit_tail_rpm
0.08 ? 13% -0.0 0.06 ? 46% perf-profile.children.cycles-pp.drm_atomic_helper_commit_planes
0.08 ? 13% -0.0 0.06 ? 46% perf-profile.children.cycles-pp.ast_primary_plane_helper_atomic_update
0.08 ? 13% -0.0 0.06 ? 46% perf-profile.children.cycles-pp.drm_fb_memcpy
0.07 ? 10% -0.0 0.05 ? 47% perf-profile.children.cycles-pp.rcu_report_qs_rdp
0.36 ? 8% -0.0 0.33 ? 5% perf-profile.children.cycles-pp.submit_bio_noacct_nocheck
0.29 ? 10% -0.0 0.27 ? 10% perf-profile.children.cycles-pp.__fput
0.28 ? 8% -0.0 0.25 ? 7% perf-profile.children.cycles-pp.__filemap_add_folio
0.19 ? 9% -0.0 0.17 ? 16% perf-profile.children.cycles-pp.handle_mm_fault
0.20 ? 19% -0.0 0.18 ? 23% perf-profile.children.cycles-pp.timekeeping_max_deferment
0.86 ? 5% -0.0 0.84 ? 7% perf-profile.children.cycles-pp.scheduler_tick
0.24 ? 13% -0.0 0.22 ? 4% perf-profile.children.cycles-pp.xfs_iget_cache_miss
0.18 ? 7% -0.0 0.16 ? 12% perf-profile.children.cycles-pp.__handle_mm_fault
0.14 ? 21% -0.0 0.12 ? 13% perf-profile.children.cycles-pp.seq_read
0.17 ? 7% -0.0 0.15 ? 10% perf-profile.children.cycles-pp.link_path_walk
0.16 ? 18% -0.0 0.14 ? 8% perf-profile.children.cycles-pp.seq_read_iter
0.09 ? 18% -0.0 0.07 ? 20% perf-profile.children.cycles-pp.pagevec_lookup_range_tag
0.09 ? 18% -0.0 0.06 ? 17% perf-profile.children.cycles-pp.find_get_pages_range_tag
0.03 ?101% -0.0 0.01 ?223% perf-profile.children.cycles-pp.__xfs_buf_submit
0.05 ? 47% -0.0 0.02 ? 99% perf-profile.children.cycles-pp.map__process_kallsym_symbol
0.22 ? 22% -0.0 0.19 ? 23% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.59 ? 6% -0.0 0.56 ? 4% perf-profile.children.cycles-pp.perf_output_copy
0.35 ? 9% -0.0 0.33 ? 10% perf-profile.children.cycles-pp.kmem_cache_alloc
0.26 ? 14% -0.0 0.24 ? 7% perf-profile.children.cycles-pp.read_tsc
0.18 ? 9% -0.0 0.16 ? 10% perf-profile.children.cycles-pp.__mod_lruvec_page_state
0.11 ? 6% -0.0 0.09 ? 15% perf-profile.children.cycles-pp.xfs_btree_update
0.08 ? 11% -0.0 0.06 ? 19% perf-profile.children.cycles-pp.xfs_inobt_update
0.06 ? 11% -0.0 0.04 ? 72% perf-profile.children.cycles-pp.terminate_walk
0.06 ? 7% -0.0 0.04 ? 45% perf-profile.children.cycles-pp.xas_find_marked
0.05 ? 46% -0.0 0.03 ?103% perf-profile.children.cycles-pp.xfs_trans_precommit_sort
0.05 ? 46% -0.0 0.03 ?100% perf-profile.children.cycles-pp.xlog_state_get_iclog_space
0.06 ? 13% -0.0 0.04 ? 73% perf-profile.children.cycles-pp.execve
0.29 ? 7% -0.0 0.27 ? 8% perf-profile.children.cycles-pp.filemap_add_folio
0.25 ? 5% -0.0 0.23 ? 8% perf-profile.children.cycles-pp.do_mkdirat
0.76 ? 5% -0.0 0.74 ? 3% perf-profile.children.cycles-pp.__output_copy
0.35 ? 10% -0.0 0.33 ? 6% perf-profile.children.cycles-pp.blk_mq_submit_bio
0.16 ? 19% -0.0 0.14 ? 9% perf-profile.children.cycles-pp.xfs_perag_put
0.19 ? 9% -0.0 0.17 ? 6% perf-profile.children.cycles-pp.xfs_trans_alloc_inode
0.15 ? 12% -0.0 0.13 ? 10% perf-profile.children.cycles-pp.path_parentat
0.15 ? 10% -0.0 0.13 ? 5% perf-profile.children.cycles-pp.xfs_bmapi_read
0.15 ? 21% -0.0 0.13 ? 8% perf-profile.children.cycles-pp.folio_alloc
0.11 ? 17% -0.0 0.09 ? 12% perf-profile.children.cycles-pp.__writeback_single_inode
0.05 ? 46% -0.0 0.03 ?101% perf-profile.children.cycles-pp.xfs_bmap_add_extent_hole_delay
0.04 ? 71% -0.0 0.02 ?141% perf-profile.children.cycles-pp.xfs_dir2_leaf_search_hash
0.05 ? 46% -0.0 0.04 ? 71% perf-profile.children.cycles-pp.xfs_btree_insrec
0.03 ?100% -0.0 0.01 ?223% perf-profile.children.cycles-pp.queue_io
0.03 ?100% -0.0 0.01 ?223% perf-profile.children.cycles-pp.move_expired_inodes
0.10 ? 22% -0.0 0.08 ? 21% perf-profile.children.cycles-pp.drm_fb_helper_damage_work
0.10 ? 22% -0.0 0.08 ? 21% perf-profile.children.cycles-pp.drm_fbdev_fb_dirty
0.42 ? 6% -0.0 0.41 ? 5% perf-profile.children.cycles-pp.sched_clock_cpu
0.23 ? 5% -0.0 0.21 ? 6% perf-profile.children.cycles-pp.filename_create
0.16 ? 11% -0.0 0.14 ? 8% perf-profile.children.cycles-pp.xfs_dabuf_map
0.30 ? 8% -0.0 0.28 ? 8% perf-profile.children.cycles-pp.xfs_trans_reserve
0.26 ? 8% -0.0 0.25 ? 7% perf-profile.children.cycles-pp.kmem_cache_alloc_lru
0.13 ? 37% -0.0 0.12 ? 20% perf-profile.children.cycles-pp.xfs_inode_item_push
0.07 ? 16% -0.0 0.06 ? 47% perf-profile.children.cycles-pp.xfs_buf_item_size_segment
0.04 ? 71% -0.0 0.02 ?142% perf-profile.children.cycles-pp._find_next_and_bit
0.07 ? 9% -0.0 0.06 ? 48% perf-profile.children.cycles-pp.charge_memcg
0.07 ? 17% -0.0 0.06 ? 8% perf-profile.children.cycles-pp.xfs_perag_grab
0.02 ? 99% -0.0 0.01 ?223% perf-profile.children.cycles-pp.sb_mark_inode_writeback
0.05 ? 45% -0.0 0.04 ? 71% perf-profile.children.cycles-pp.blkdev_issue_flush
0.02 ?141% -0.0 0.00 perf-profile.children.cycles-pp.xfs_trans_get_buf_map
0.02 ?141% -0.0 0.00 perf-profile.children.cycles-pp.xfs_btree_log_recs
0.02 ?141% -0.0 0.00 perf-profile.children.cycles-pp.bad_get_user
0.02 ?141% -0.0 0.00 perf-profile.children.cycles-pp.security_file_open
0.02 ?223% -0.0 0.00 perf-profile.children.cycles-pp.xfs_alloc_update_counters
0.02 ?223% -0.0 0.00 perf-profile.children.cycles-pp.xfs_alloc_log_agf
0.32 ? 13% -0.0 0.30 ? 2% perf-profile.children.cycles-pp.xfs_iget
0.32 ? 12% -0.0 0.30 ? 6% perf-profile.children.cycles-pp.lapic_next_deadline
0.15 ? 13% -0.0 0.13 ? 9% perf-profile.children.cycles-pp.xfs_buf_item_unpin
0.13 ? 14% -0.0 0.11 ? 20% perf-profile.children.cycles-pp.setup_file_name
0.11 ? 12% -0.0 0.10 ? 17% perf-profile.children.cycles-pp.__update_load_avg_se
0.08 ? 10% -0.0 0.06 ? 11% perf-profile.children.cycles-pp.xfs_btree_insert
0.04 ? 73% -0.0 0.02 ? 99% perf-profile.children.cycles-pp.xfs_buf_delwri_submit_buffers
0.14 ? 14% -0.0 0.12 ? 11% perf-profile.children.cycles-pp.xfs_trans_ail_update_bulk
0.92 ? 6% -0.0 0.90 ? 4% perf-profile.children.cycles-pp.__wake_up_common
0.37 ? 7% -0.0 0.36 ? 9% perf-profile.children.cycles-pp.xfs_trans_alloc
0.25 ? 8% -0.0 0.24 ? 7% perf-profile.children.cycles-pp.__kmem_cache_alloc_node
0.10 ? 18% -0.0 0.08 ? 10% perf-profile.children.cycles-pp.trigger_load_balance
0.06 ? 11% -0.0 0.05 ? 46% perf-profile.children.cycles-pp.kmem_alloc
0.14 ? 9% -0.0 0.12 ? 19% perf-profile.children.cycles-pp.down_read
0.09 ? 37% -0.0 0.08 ? 17% perf-profile.children.cycles-pp.rmqueue
0.09 ? 15% -0.0 0.07 ? 14% perf-profile.children.cycles-pp.xfs_bmap_btalloc_select_lengths
0.08 ? 10% -0.0 0.06 ? 46% perf-profile.children.cycles-pp.folio_account_dirtied
0.19 ? 15% -0.0 0.17 ? 17% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.28 ? 14% -0.0 0.26 ? 5% perf-profile.children.cycles-pp.xfs_trans_log_inode
0.36 ? 7% -0.0 0.35 ? 5% perf-profile.children.cycles-pp.native_sched_clock
0.11 ? 8% -0.0 0.10 ? 13% perf-profile.children.cycles-pp.memcg_slab_post_alloc_hook
0.18 ? 22% -0.0 0.17 ? 9% perf-profile.children.cycles-pp.__alloc_pages
0.14 ? 10% -0.0 0.13 ? 12% perf-profile.children.cycles-pp.xfs_trans_alloc_icreate
0.14 ? 7% -0.0 0.13 ? 10% perf-profile.children.cycles-pp.d_alloc
0.15 ? 10% -0.0 0.14 ? 4% perf-profile.children.cycles-pp._raw_spin_trylock
0.13 ? 14% -0.0 0.11 ? 14% perf-profile.children.cycles-pp.xfs_imap_to_bp
0.10 ? 8% -0.0 0.09 ? 15% perf-profile.children.cycles-pp.__mem_cgroup_charge
0.13 ? 38% -0.0 0.11 ? 19% perf-profile.children.cycles-pp.xfs_iflush_cluster
0.08 ? 24% -0.0 0.06 ? 11% perf-profile.children.cycles-pp.s_show
0.08 ? 13% -0.0 0.07 ? 14% perf-profile.children.cycles-pp.do_dentry_open
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.irqentry_enter
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.inode_init_always
0.12 ? 13% -0.0 0.11 ? 9% perf-profile.children.cycles-pp.cgroup_rstat_updated
0.10 ? 15% -0.0 0.09 ? 17% perf-profile.children.cycles-pp.xfs_buf_item_size
0.08 ? 8% -0.0 0.07 ? 14% perf-profile.children.cycles-pp.xfs_btree_del_cursor
0.08 ? 13% -0.0 0.06 ? 19% perf-profile.children.cycles-pp.commit_tail
0.08 ? 13% -0.0 0.06 ? 19% perf-profile.children.cycles-pp.ast_mode_config_helper_atomic_commit_tail
0.16 ? 12% -0.0 0.15 ? 9% perf-profile.children.cycles-pp.filename_parentat
0.11 ? 14% -0.0 0.10 ? 13% perf-profile.children.cycles-pp.do_open
0.08 ? 20% -0.0 0.07 ? 11% perf-profile.children.cycles-pp.xfs_btree_delete
0.03 ?102% -0.0 0.02 ?141% perf-profile.children.cycles-pp._atomic_dec_and_lock
0.03 ?102% -0.0 0.02 ?141% perf-profile.children.cycles-pp.xfs_bmapi_convert_unwritten
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.s_next
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.ct_nmi_enter
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.perf_env__arch
0.07 ? 10% -0.0 0.06 ? 14% perf-profile.children.cycles-pp.bio_alloc_bioset
0.07 ? 18% -0.0 0.06 ? 47% perf-profile.children.cycles-pp.rcu_do_batch
0.06 ? 19% -0.0 0.05 ? 45% perf-profile.children.cycles-pp.walk_component
0.04 ? 71% -0.0 0.02 ? 99% perf-profile.children.cycles-pp.mod_objcg_state
0.29 ? 6% -0.0 0.28 ? 7% perf-profile.children.cycles-pp.xfs_cil_prepare_item
0.16 ? 14% -0.0 0.14 ? 15% perf-profile.children.cycles-pp.xfs_buffered_write_iomap_begin
0.10 ? 23% -0.0 0.09 ? 20% perf-profile.children.cycles-pp.delay_tsc
0.10 ? 12% -0.0 0.09 ? 15% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.08 ? 20% -0.0 0.07 ? 12% perf-profile.children.cycles-pp.drm_atomic_helper_dirtyfb
0.08 ? 20% -0.0 0.07 ? 12% perf-profile.children.cycles-pp.drm_atomic_commit
0.08 ? 20% -0.0 0.07 ? 12% perf-profile.children.cycles-pp.drm_atomic_helper_commit
0.08 ? 11% -0.0 0.07 ? 46% perf-profile.children.cycles-pp.xlog_ticket_alloc
0.07 ? 10% -0.0 0.06 ? 25% perf-profile.children.cycles-pp.strncpy_from_user
0.27 ? 8% -0.0 0.26 ? 9% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.18 ? 15% -0.0 0.16 ? 6% perf-profile.children.cycles-pp.xfs_inode_alloc
0.18 ? 3% -0.0 0.17 ? 10% perf-profile.children.cycles-pp.___slab_alloc
0.16 ? 11% -0.0 0.15 ? 5% perf-profile.children.cycles-pp.d_alloc_parallel
0.12 ? 12% -0.0 0.12 ? 15% perf-profile.children.cycles-pp.blk_mq_get_new_requests
0.09 ? 15% -0.0 0.08 ? 14% perf-profile.children.cycles-pp.xfs_iext_lookup_extent
0.09 ? 5% -0.0 0.08 ? 13% perf-profile.children.cycles-pp.__might_sleep
0.07 ? 9% -0.0 0.06 ? 14% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.03 ?100% -0.0 0.02 ?141% perf-profile.children.cycles-pp.folio_clear_dirty_for_io
0.02 ?144% -0.0 0.01 ?223% perf-profile.children.cycles-pp.xfs_btree_dup_cursor
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.apparmor_file_alloc_security
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.pipe_read
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.map_id_range_down
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.make_kuid
0.04 ?105% -0.0 0.02 ? 99% perf-profile.children.cycles-pp.machines__deliver_event
0.04 ? 71% -0.0 0.03 ?100% perf-profile.children.cycles-pp.menu_reflect
0.04 ? 71% -0.0 0.02 ? 99% perf-profile.children.cycles-pp.rb_next
0.11 ? 16% -0.0 0.10 ? 12% perf-profile.children.cycles-pp.memset_erms
0.08 ? 11% -0.0 0.07 ? 15% perf-profile.children.cycles-pp.clear_page_erms
0.08 ? 20% -0.0 0.07 ? 20% perf-profile.children.cycles-pp.xfs_bmapi_reserve_delalloc
0.07 ? 16% -0.0 0.06 ? 13% perf-profile.children.cycles-pp.submit_bio_noacct
0.10 ? 10% -0.0 0.09 ? 14% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.26 ? 7% -0.0 0.25 ? 8% perf-profile.children.cycles-pp.xfs_log_reserve
0.18 ? 8% -0.0 0.17 ? 9% perf-profile.children.cycles-pp.perf_rotate_context
0.12 ? 10% -0.0 0.12 ? 13% perf-profile.children.cycles-pp.___perf_sw_event
0.10 ? 16% -0.0 0.10 ? 13% perf-profile.children.cycles-pp.getname_flags
0.10 ? 10% -0.0 0.09 ? 14% perf-profile.children.cycles-pp.iomap_add_to_ioend
0.06 ? 21% -0.0 0.06 ? 9% perf-profile.children.cycles-pp.lru_add_fn
0.02 ? 99% -0.0 0.02 ?141% perf-profile.children.cycles-pp.do_read_fault
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.__entry_text_start
0.02 ?141% -0.0 0.01 ?223% perf-profile.children.cycles-pp.xfs_ialloc_inode_init
0.02 ?142% -0.0 0.01 ?223% perf-profile.children.cycles-pp.__wrgsbase_inactive
0.02 ?141% -0.0 0.01 ?223% perf-profile.children.cycles-pp.xfs_dir2_leafn_lookup_for_entry
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.send_call_function_single_ipi
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.perf_event_task_tick
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.__mod_lruvec_state
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.irqtime_account_process_tick
0.02 ?141% -0.0 0.01 ?223% perf-profile.children.cycles-pp.xlog_prepare_iovec
0.02 ?141% -0.0 0.01 ?223% perf-profile.children.cycles-pp.release_pages
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.xfs_bmap_add_extent_unwritten_real
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.xfs_bmap_longest_free_extent
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.tick_check_oneshot_broadcast_this_cpu
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.__do_sys_newstat
0.02 ?141% -0.0 0.01 ?223% perf-profile.children.cycles-pp.shmem_add_to_page_cache
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.__d_lookup_rcu
0.02 ?141% -0.0 0.01 ?223% perf-profile.children.cycles-pp.__count_memcg_events
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp._find_next_bit
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.xfs_dir2_data_use_free
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.perf_output_end
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.inode_sb_list_add
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.xfs_setup_inode
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.xfs_inobt_init_key_from_rec
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.do_anonymous_page
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.tracing_gen_ctx_irq_test
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.xfs_inode_item_format_data_fork
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.rcu_gp_kthread
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.random@plt
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.xfs_alloc_vextent_this_ag
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.apparmor_file_open
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.step_into
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.__bio_split_to_limits
0.01 ?223% -0.0 0.00 perf-profile.children.cycles-pp.xfs_da_state_alloc
0.16 ? 11% -0.0 0.15 ? 12% perf-profile.children.cycles-pp.__pagevec_release
0.14 ? 12% -0.0 0.13 ? 10% perf-profile.children.cycles-pp.__list_add_valid
0.14 ? 7% -0.0 0.13 ? 11% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.14 ? 14% -0.0 0.13 ? 10% perf-profile.children.cycles-pp.crc32c_pcl_intel_update
0.15 ? 9% -0.0 0.14 ? 11% perf-profile.children.cycles-pp.__slab_free
0.13 ? 10% -0.0 0.12 ? 8% perf-profile.children.cycles-pp.xfs_dir2_node_find_freeblk
0.13 ? 14% -0.0 0.12 ? 13% perf-profile.children.cycles-pp.idle_cpu
0.11 ? 4% -0.0 0.10 ? 3% perf-profile.children.cycles-pp.__hrtimer_next_event_base
0.08 ? 16% -0.0 0.07 ? 15% perf-profile.children.cycles-pp.xas_alloc
0.06 ? 48% -0.0 0.05 ? 49% perf-profile.children.cycles-pp.xfs_buf_inode_iodone
0.08 ? 11% -0.0 0.08 ? 17% perf-profile.children.cycles-pp.error_entry
0.08 ? 17% -0.0 0.07 ? 12% perf-profile.children.cycles-pp.xfs_log_ticket_ungrant
0.08 ? 27% -0.0 0.07 ? 9% perf-profile.children.cycles-pp.vsnprintf
0.08 ? 24% -0.0 0.07 ? 11% perf-profile.children.cycles-pp.seq_printf
0.06 ? 48% -0.0 0.05 ? 72% perf-profile.children.cycles-pp.xfs_buf_item_put
0.04 ? 44% -0.0 0.03 ? 70% perf-profile.children.cycles-pp.read_counters
0.06 ? 8% -0.0 0.05 ? 46% perf-profile.children.cycles-pp.mempool_alloc
0.12 ? 10% -0.0 0.11 ? 10% perf-profile.children.cycles-pp.set_next_entity
0.92 ? 5% -0.0 0.92 ? 2% perf-profile.children.cycles-pp.perf_output_sample
0.48 ? 7% -0.0 0.48 ? 7% perf-profile.children.cycles-pp.mem_cgroup_css_rstat_flush
0.32 ? 25% -0.0 0.32 ? 12% perf-profile.children.cycles-pp.evlist__parse_sample
0.20 ? 9% -0.0 0.19 ? 6% perf-profile.children.cycles-pp.irqtime_account_irq
0.23 ? 5% -0.0 0.22 ? 4% perf-profile.children.cycles-pp.xfs_buf_rele
0.14 ? 5% -0.0 0.14 ? 13% perf-profile.children.cycles-pp.xlog_sync
0.14 ? 28% -0.0 0.13 ? 12% perf-profile.children.cycles-pp.get_page_from_freelist
0.08 ? 9% -0.0 0.07 ? 14% perf-profile.children.cycles-pp.clear_huge_page
0.07 ? 46% -0.0 0.06 ? 17% perf-profile.children.cycles-pp.xfs_btree_delrec
0.06 ? 11% -0.0 0.06 ? 11% perf-profile.children.cycles-pp._IO_default_xsputn
0.05 -0.0 0.04 ? 45% perf-profile.children.cycles-pp.xfs_bmap_add_extent_delay_real
0.02 ? 99% -0.0 0.02 ?142% perf-profile.children.cycles-pp.lookup_fast
0.10 ? 18% -0.0 0.09 ? 12% perf-profile.children.cycles-pp.update_irq_load_avg
0.08 ? 14% -0.0 0.07 ? 23% perf-profile.children.cycles-pp.check_cpu_stall
0.07 ? 15% -0.0 0.07 ? 16% perf-profile.children.cycles-pp.__cgroup_account_cputime
0.07 ? 9% -0.0 0.06 ? 11% perf-profile.children.cycles-pp.xfs_release
0.05 ? 8% -0.0 0.05 ? 45% perf-profile.children.cycles-pp.xfs_fs_inode_init_once
0.05 ? 47% -0.0 0.04 ? 45% perf-profile.children.cycles-pp.down_write
0.02 ?141% -0.0 0.01 ?223% perf-profile.children.cycles-pp.get_obj_cgroup_from_current
0.02 ?141% -0.0 0.01 ?223% perf-profile.children.cycles-pp.__check_object_size
0.02 ?142% -0.0 0.01 ?223% perf-profile.children.cycles-pp.xfs_alloc_update
0.02 ?141% -0.0 0.01 ?223% perf-profile.children.cycles-pp.xfs_iunlock
0.02 ?141% -0.0 0.01 ?223% perf-profile.children.cycles-pp.lockref_get_not_dead
0.27 ? 9% -0.0 0.26 ? 3% perf-profile.children.cycles-pp.update_rq_clock
0.30 ? 14% -0.0 0.29 ? 13% perf-profile.children.cycles-pp.irq_work_run_list
0.12 ? 13% -0.0 0.11 ? 14% perf-profile.children.cycles-pp.xas_store
0.10 ? 13% -0.0 0.09 ? 12% perf-profile.children.cycles-pp.hrtimer_update_next_event
0.24 ? 6% -0.0 0.23 ? 9% perf-profile.children.cycles-pp.xfs_buf_item_pin
0.42 ? 11% -0.0 0.41 ? 3% perf-profile.children.cycles-pp._xfs_trans_bjoin
0.32 ? 6% -0.0 0.32 ? 15% perf-profile.children.cycles-pp.xlog_force_lsn
0.29 ? 15% -0.0 0.28 ? 13% perf-profile.children.cycles-pp.asm_sysvec_irq_work
0.29 ? 15% -0.0 0.28 ? 13% perf-profile.children.cycles-pp.sysvec_irq_work
0.29 ? 15% -0.0 0.28 ? 13% perf-profile.children.cycles-pp.__sysvec_irq_work
0.29 ? 15% -0.0 0.28 ? 13% perf-profile.children.cycles-pp.irq_work_single
0.29 ? 15% -0.0 0.28 ? 13% perf-profile.children.cycles-pp.irq_work_run
0.29 ? 4% -0.0 0.28 ? 7% perf-profile.children.cycles-pp.__kmalloc
0.17 ? 12% -0.0 0.16 ? 12% perf-profile.children.cycles-pp.update_blocked_averages
0.20 ? 34% -0.0 0.20 ? 25% perf-profile.children.cycles-pp.tick_sched_do_timer
0.14 ? 11% -0.0 0.14 ? 9% perf-profile.children.cycles-pp.crc32c
0.12 ? 7% -0.0 0.12 ? 16% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.07 ? 48% -0.0 0.06 ? 21% perf-profile.children.cycles-pp.xfs_buf_ioend
0.08 ? 22% -0.0 0.07 ? 12% perf-profile.children.cycles-pp.__update_blocked_fair
0.08 ? 17% -0.0 0.07 ? 12% perf-profile.children.cycles-pp.xfs_bmap_last_offset
0.09 ? 9% -0.0 0.08 ? 20% perf-profile.children.cycles-pp.__folio_mark_dirty
0.08 ? 9% -0.0 0.07 ? 12% perf-profile.children.cycles-pp.__do_huge_pmd_anonymous_page
0.02 ? 99% -0.0 0.02 ?144% perf-profile.children.cycles-pp.lockref_put_return
0.02 ? 99% -0.0 0.02 ?141% perf-profile.children.cycles-pp.do_fault
0.03 ?100% -0.0 0.03 ?100% perf-profile.children.cycles-pp.security_file_alloc
0.06 ? 9% -0.0 0.06 ? 47% perf-profile.children.cycles-pp.xfs_inobt_init_common
0.04 ? 44% -0.0 0.04 ? 71% perf-profile.children.cycles-pp.cmd_stat
0.04 ? 44% -0.0 0.04 ? 71% perf-profile.children.cycles-pp.dispatch_events
0.04 ? 44% -0.0 0.04 ? 71% perf-profile.children.cycles-pp.process_interval
0.08 ? 19% -0.0 0.08 ? 14% perf-profile.children.cycles-pp.slab_pre_alloc_hook
0.27 ? 16% -0.0 0.26 ? 13% perf-profile.children.cycles-pp.wait_for_lsr
0.58 ? 10% -0.0 0.58 ? 5% perf-profile.children.cycles-pp.sched_ttwu_pending
0.29 ? 15% -0.0 0.28 ? 13% perf-profile.children.cycles-pp._printk
0.29 ? 15% -0.0 0.28 ? 13% perf-profile.children.cycles-pp.vprintk_emit
0.29 ? 15% -0.0 0.28 ? 13% perf-profile.children.cycles-pp.console_unlock
0.29 ? 15% -0.0 0.28 ? 13% perf-profile.children.cycles-pp.console_flush_all
0.29 ? 15% -0.0 0.28 ? 13% perf-profile.children.cycles-pp.console_emit_next_record
0.24 ? 11% -0.0 0.23 ? 13% perf-profile.children.cycles-pp.task_tick_fair
0.11 ? 12% -0.0 0.10 ? 10% perf-profile.children.cycles-pp.__d_alloc
0.11 ? 15% -0.0 0.11 ? 10% perf-profile.children.cycles-pp.__xfs_dir3_free_read
0.07 ? 19% -0.0 0.06 ? 14% perf-profile.children.cycles-pp.xfs_bmap_last_extent
0.03 ?103% -0.0 0.03 ?100% perf-profile.children.cycles-pp.__rq_qos_throttle
0.03 ?100% -0.0 0.03 ?100% perf-profile.children.cycles-pp.timerqueue_del
0.17 ? 9% -0.0 0.17 ? 14% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.11 ? 4% -0.0 0.11 ? 10% perf-profile.children.cycles-pp.allocate_slab
0.09 ? 17% -0.0 0.09 ? 16% perf-profile.children.cycles-pp.xas_expand
0.09 ? 17% -0.0 0.08 ? 14% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.07 ? 22% -0.0 0.06 ? 14% perf-profile.children.cycles-pp.xlog_grant_push_threshold
0.08 ? 10% -0.0 0.08 ? 11% perf-profile.children.cycles-pp.xlog_grant_add_space
0.08 ? 32% -0.0 0.08 ? 11% perf-profile.children.cycles-pp.read
0.05 ? 49% -0.0 0.05 ? 73% perf-profile.children.cycles-pp.xfs_inobt_init_cursor
0.28 ? 16% -0.0 0.27 ? 12% perf-profile.children.cycles-pp.serial8250_console_write
0.13 ? 16% -0.0 0.13 ? 11% perf-profile.children.cycles-pp.blk_mq_try_issue_directly
0.11 ? 12% -0.0 0.11 ? 16% perf-profile.children.cycles-pp.update_cfs_group
0.06 ? 58% -0.0 0.06 ? 23% perf-profile.children.cycles-pp.xfs_iflush
0.03 ?102% -0.0 0.03 ?100% perf-profile.children.cycles-pp.io__get_char
0.02 ?142% -0.0 0.02 ?141% perf-profile.children.cycles-pp.xfs_iext_last
0.01 ?223% -0.0 0.01 ?223% perf-profile.children.cycles-pp.rq_qos_wait
0.10 ? 15% -0.0 0.10 ? 13% perf-profile.children.cycles-pp.xas_create
0.09 ? 7% -0.0 0.09 ? 16% perf-profile.children.cycles-pp.shuffle_freelist
0.08 ? 20% -0.0 0.07 ? 12% perf-profile.children.cycles-pp.xlog_grant_push_ail
0.06 ? 7% -0.0 0.06 ? 11% perf-profile.children.cycles-pp.timerqueue_add
0.07 ? 8% -0.0 0.07 ? 15% perf-profile.children.cycles-pp.random_r
0.20 ? 10% -0.0 0.20 ? 6% perf-profile.children.cycles-pp.__perf_event_header__init_id
0.16 ? 16% -0.0 0.15 ? 15% perf-profile.children.cycles-pp.run_rebalance_domains
0.14 ? 17% -0.0 0.14 ? 16% perf-profile.children.cycles-pp.mutex_lock
0.46 ? 9% -0.0 0.46 ? 4% perf-profile.children.cycles-pp.xfs_alloc_fixup_trees
0.38 ? 11% +0.0 0.38 ? 6% perf-profile.children.cycles-pp.folio_wake_bit
0.35 ? 11% +0.0 0.35 ? 7% perf-profile.children.cycles-pp.wake_page_function
0.20 ? 26% +0.0 0.20 ? 12% perf-profile.children.cycles-pp.evsel__parse_sample
0.18 ? 4% +0.0 0.18 ? 10% perf-profile.children.cycles-pp.select_task_rq
0.13 ? 15% +0.0 0.13 ? 11% perf-profile.children.cycles-pp.__blk_mq_try_issue_directly
0.12 ? 14% +0.0 0.12 ? 10% perf-profile.children.cycles-pp.bsearch
0.08 ? 20% +0.0 0.08 ? 15% perf-profile.children.cycles-pp.xfs_allocbt_init_cursor
0.09 ? 14% +0.0 0.09 ? 20% perf-profile.children.cycles-pp.cpuidle_governor_latency_req
0.08 ? 13% +0.0 0.08 ? 11% perf-profile.children.cycles-pp.tsc_verify_tsc_adjust
0.08 ? 22% +0.0 0.08 perf-profile.children.cycles-pp.wake_up_q
0.04 ? 71% +0.0 0.04 ? 71% perf-profile.children.cycles-pp.perf_trace_buf_alloc
0.02 ?141% +0.0 0.02 ?141% perf-profile.children.cycles-pp.cpuacct_charge
0.01 ?223% +0.0 0.01 ?223% perf-profile.children.cycles-pp.xfs_trans_roll
0.02 ?141% +0.0 0.02 ?141% perf-profile.children.cycles-pp.xfs_bmapi_finish
0.03 ? 70% +0.0 0.03 ? 70% perf-profile.children.cycles-pp.xas_load
0.01 ?223% +0.0 0.01 ?223% perf-profile.children.cycles-pp.irqentry_exit
0.01 ?223% +0.0 0.01 ?223% perf-profile.children.cycles-pp.up_read
0.01 ?223% +0.0 0.01 ?223% perf-profile.children.cycles-pp.obj_cgroup_charge
0.01 ?223% +0.0 0.01 ?223% perf-profile.children.cycles-pp.nr_iowait_cpu
0.01 ?223% +0.0 0.01 ?223% perf-profile.children.cycles-pp.folio_add_lru
0.01 ?223% +0.0 0.01 ?223% perf-profile.children.cycles-pp.xfs_trans_ijoin
0.01 ?223% +0.0 0.01 ?223% perf-profile.children.cycles-pp.xfs_iext_insert
0.09 ? 31% +0.0 0.09 ? 13% perf-profile.children.cycles-pp.ftrace_graph_ret_addr
0.11 ? 12% +0.0 0.12 ? 12% perf-profile.children.cycles-pp.xlog_cksum
0.07 ? 10% +0.0 0.08 ? 14% perf-profile.children.cycles-pp.enqueue_hrtimer
0.07 ? 10% +0.0 0.07 ? 21% perf-profile.children.cycles-pp.kfree
0.08 ? 13% +0.0 0.09 ? 8% perf-profile.children.cycles-pp.arch_cpu_idle_enter
0.04 ? 72% +0.0 0.04 ? 72% perf-profile.children.cycles-pp.xfs_inode_item_committed
0.16 ? 12% +0.0 0.16 ? 9% perf-profile.children.cycles-pp.fixup_exception
0.13 ? 10% +0.0 0.14 ? 19% perf-profile.children.cycles-pp.calc_global_load_tick
0.12 ? 14% +0.0 0.12 ? 11% perf-profile.children.cycles-pp.__switch_to
0.91 ? 8% +0.0 0.91 ? 4% perf-profile.children.cycles-pp.asm_exc_page_fault
0.28 ? 8% +0.0 0.28 ? 5% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.17 ? 14% +0.0 0.17 ? 9% perf-profile.children.cycles-pp.list_sort
0.18 ? 12% +0.0 0.18 ? 8% perf-profile.children.cycles-pp.kernelmode_fixup_or_oops
0.14 ? 7% +0.0 0.14 ? 16% perf-profile.children.cycles-pp.rcu_pending
0.13 ? 12% +0.0 0.13 ? 8% perf-profile.children.cycles-pp.search_exception_tables
0.08 ? 6% +0.0 0.08 ? 13% perf-profile.children.cycles-pp.setup_object
0.12 ? 45% +0.0 0.12 ? 31% perf-profile.children.cycles-pp.fault_in_readable
0.12 ? 16% +0.0 0.12 ? 11% perf-profile.children.cycles-pp.xfs_alloc_read_agfl
0.10 ? 14% +0.0 0.10 ? 9% perf-profile.children.cycles-pp.__radix_tree_lookup
0.12 ? 17% +0.0 0.12 ? 10% perf-profile.children.cycles-pp.nvme_queue_rq
0.11 ? 8% +0.0 0.11 ? 12% perf-profile.children.cycles-pp.__alloc_file
0.10 ? 16% +0.0 0.10 ? 9% perf-profile.children.cycles-pp.xfs_trans_brelse
0.43 ? 7% +0.0 0.44 ? 2% perf-profile.children.cycles-pp.stack_access_ok
0.18 ? 14% +0.0 0.18 ? 9% perf-profile.children.cycles-pp.prepare_task_switch
0.14 ? 11% +0.0 0.14 ? 11% perf-profile.children.cycles-pp.folio_batch_move_lru
0.13 ? 13% +0.0 0.13 ? 12% perf-profile.children.cycles-pp.lru_add_drain_cpu
0.14 ? 5% +0.0 0.14 ? 22% perf-profile.children.cycles-pp.perf_output_begin_forward
0.07 ? 25% +0.0 0.08 ? 13% perf-profile.children.cycles-pp.cmp_ex_search
0.07 ? 28% +0.0 0.08 ? 16% perf-profile.children.cycles-pp.nvme_prep_rq
0.05 ? 46% +0.0 0.05 ? 8% perf-profile.children.cycles-pp.rb_insert_color
0.06 ? 20% +0.0 0.06 ? 11% perf-profile.children.cycles-pp.tick_nohz_idle_enter
0.02 ?142% +0.0 0.03 ?100% perf-profile.children.cycles-pp.tick_nohz_idle_exit
0.02 ?142% +0.0 0.03 ?100% perf-profile.children.cycles-pp.wbt_wait
0.12 ? 45% +0.0 0.12 ? 30% perf-profile.children.cycles-pp.fault_in_iov_iter_readable
0.14 ? 7% +0.0 0.14 ? 13% perf-profile.children.cycles-pp.select_task_rq_fair
0.14 ? 14% +0.0 0.14 ? 13% perf-profile.children.cycles-pp.__switch_to_asm
0.08 ? 7% +0.0 0.08 ? 16% perf-profile.children.cycles-pp.available_idle_cpu
0.06 ? 11% +0.0 0.07 ? 13% perf-profile.children.cycles-pp.ct_kernel_enter
0.07 ? 16% +0.0 0.08 ? 14% perf-profile.children.cycles-pp.ct_kernel_exit_state
0.08 ? 13% +0.0 0.08 ? 5% perf-profile.children.cycles-pp.__cond_resched
0.12 ? 12% +0.0 0.13 ? 9% perf-profile.children.cycles-pp.search_extable
0.46 ? 10% +0.0 0.46 ? 8% perf-profile.children.cycles-pp.native_irq_return_iret
0.44 ? 11% +0.0 0.45 ? 2% perf-profile.children.cycles-pp.ttwu_do_activate
0.16 ? 19% +0.0 0.17 ? 6% perf-profile.children.cycles-pp.xfs_buf_item_init
0.13 ? 18% +0.0 0.14 ? 6% perf-profile.children.cycles-pp.core_kernel_text
0.01 ?223% +0.0 0.02 ?141% perf-profile.children.cycles-pp.hrtimer_get_next_event
0.08 ? 11% +0.0 0.09 ? 11% perf-profile.children.cycles-pp.wake_affine
0.07 ? 47% +0.0 0.08 ? 10% perf-profile.children.cycles-pp.llist_reverse_order
0.08 ? 24% +0.0 0.09 ? 6% perf-profile.children.cycles-pp.__task_pid_nr_ns
0.80 ? 9% +0.0 0.81 ? 4% perf-profile.children.cycles-pp.flush_smp_call_function_queue
0.26 ? 11% +0.0 0.27 ? 13% perf-profile.children.cycles-pp.arch_scale_freq_tick
0.36 ? 9% +0.0 0.37 ? 3% perf-profile.children.cycles-pp.enqueue_entity
0.30 ? 5% +0.0 0.31 ? 11% perf-profile.children.cycles-pp.xlog_state_release_iclog
0.16 ? 17% +0.0 0.17 ? 13% perf-profile.children.cycles-pp.io_serial_in
0.11 ? 7% +0.0 0.12 ? 12% perf-profile.children.cycles-pp.alloc_empty_file
0.04 ? 45% +0.0 0.05 ? 13% perf-profile.children.cycles-pp.kvm_guest_state
0.08 ? 21% +0.0 0.08 ? 8% perf-profile.children.cycles-pp.rwsem_wake
0.04 ? 72% +0.0 0.05 ? 47% perf-profile.children.cycles-pp.inode_permission
0.01 ?223% +0.0 0.02 ?141% perf-profile.children.cycles-pp.d_splice_alias
0.01 ?223% +0.0 0.02 ?141% perf-profile.children.cycles-pp.inode_init_once
0.00 +0.0 0.01 ?223% perf-profile.children.cycles-pp.memcpy
0.00 +0.0 0.01 ?223% perf-profile.children.cycles-pp.asm_sysvec_call_function_single
0.00 +0.0 0.01 ?223% perf-profile.children.cycles-pp.xlog_calc_unit_res
0.00 +0.0 0.01 ?223% perf-profile.children.cycles-pp.__x64_sys_exit_group
0.00 +0.0 0.01 ?223% perf-profile.children.cycles-pp.do_group_exit
0.00 +0.0 0.01 ?223% perf-profile.children.cycles-pp.do_exit
0.00 +0.0 0.01 ?223% perf-profile.children.cycles-pp.__strncat_chk
0.00 +0.0 0.01 ?223% perf-profile.children.cycles-pp.put_prev_task_fair
0.00 +0.0 0.01 ?223% perf-profile.children.cycles-pp.__mmput
0.00 +0.0 0.01 ?223% perf-profile.children.cycles-pp.exit_mmap
0.00 +0.0 0.01 ?223% perf-profile.children.cycles-pp.rmqueue_bulk
0.00 +0.0 0.01 ?223% perf-profile.children.cycles-pp.__up
0.00 +0.0 0.01 ?223% perf-profile.children.cycles-pp.apparmor_file_free_security
0.00 +0.0 0.01 ?223% perf-profile.children.cycles-pp.in_gate_area_no_mm
0.04 ? 71% +0.0 0.04 ? 47% perf-profile.children.cycles-pp.perf_trace_buf_update
0.14 ? 14% +0.0 0.15 ? 10% perf-profile.children.cycles-pp.__flush_smp_call_function_queue
0.01 ?223% +0.0 0.02 ?142% perf-profile.children.cycles-pp.ct_kernel_exit
0.00 +0.0 0.01 ?223% perf-profile.children.cycles-pp.mutex_unlock
0.00 +0.0 0.01 ?223% perf-profile.children.cycles-pp.get_cpu_device
0.06 ? 50% +0.0 0.06 ? 7% perf-profile.children.cycles-pp.xlog_space_left
0.02 ?141% +0.0 0.03 ?100% perf-profile.children.cycles-pp.rb_erase
0.02 ?141% +0.0 0.03 ?100% perf-profile.children.cycles-pp.memmove
0.07 ? 7% +0.0 0.08 ? 14% perf-profile.children.cycles-pp.xfs_btree_ptr_to_daddr
0.09 ? 4% +0.0 0.10 ? 14% perf-profile.children.cycles-pp.ct_idle_exit
0.05 ? 47% +0.0 0.06 ? 17% perf-profile.children.cycles-pp.xfs_trans_del_item
0.06 ? 47% +0.0 0.07 ? 18% perf-profile.children.cycles-pp.shmem_write_end
0.00 +0.0 0.01 ?223% perf-profile.children.cycles-pp.__ctype_b_loc
0.21 ? 25% +0.0 0.22 ? 14% perf-profile.children.cycles-pp.finish_task_switch
0.12 ? 23% +0.0 0.13 ? 26% perf-profile.children.cycles-pp.perf_poll
0.25 ? 8% +0.0 0.26 ? 9% perf-profile.children.cycles-pp.xlog_write_get_more_iclog_space
0.03 ?100% +0.0 0.04 ? 72% perf-profile.children.cycles-pp.nvme_poll_cq
0.03 ?101% +0.0 0.04 ? 45% perf-profile.children.cycles-pp.xfs_dir2_isblock
0.02 ?142% +0.0 0.04 ? 71% perf-profile.children.cycles-pp.nvme_map_data
0.42 ? 10% +0.0 0.43 ? 2% perf-profile.children.cycles-pp.enqueue_task_fair
0.02 ? 99% +0.0 0.04 ? 75% perf-profile.children.cycles-pp.shmem_alloc_and_acct_folio
0.28 ? 11% +0.0 0.30 ? 8% perf-profile.children.cycles-pp.xlog_write_partial
0.17 ? 25% +0.0 0.18 ? 21% perf-profile.children.cycles-pp.__x64_sys_poll
0.17 ? 25% +0.0 0.18 ? 21% perf-profile.children.cycles-pp.do_sys_poll
0.39 ? 9% +0.0 0.41 ? 5% perf-profile.children.cycles-pp.xfs_dir2_node_addname_int
0.16 ? 11% +0.0 0.18 ? 16% perf-profile.children.cycles-pp.__folio_start_writeback
0.13 ? 21% +0.0 0.14 ? 15% perf-profile.children.cycles-pp.shmem_write_begin
0.13 ? 21% +0.0 0.15 ? 16% perf-profile.children.cycles-pp.shmem_get_folio_gfp
0.14 ? 24% +0.0 0.15 ? 23% perf-profile.children.cycles-pp.do_poll
0.04 ? 73% +0.0 0.06 ? 19% perf-profile.children.cycles-pp.xfs_allocbt_init_common
0.00 +0.0 0.02 ?141% perf-profile.children.cycles-pp.pwq_dec_nr_in_flight
0.00 +0.0 0.02 ?141% perf-profile.children.cycles-pp.__blk_mq_alloc_requests
0.00 +0.0 0.02 ?141% perf-profile.children.cycles-pp.blk_mq_get_tag
0.00 +0.0 0.02 ?141% perf-profile.children.cycles-pp.perf_misc_flags
0.00 +0.0 0.02 ?141% perf-profile.children.cycles-pp.security_file_free
0.02 ?142% +0.0 0.04 ? 71% perf-profile.children.cycles-pp.xfs_iext_get_extent
0.02 ?142% +0.0 0.04 ? 71% perf-profile.children.cycles-pp.sched_setaffinity
0.01 ?223% +0.0 0.02 ? 99% perf-profile.children.cycles-pp.d_instantiate
0.17 ? 25% +0.0 0.18 ? 22% perf-profile.children.cycles-pp.__poll
0.16 ? 13% +0.0 0.18 ? 5% perf-profile.children.cycles-pp.xlog_cil_set_ctx_write_state
0.12 ? 23% +0.0 0.14 ? 13% perf-profile.children.cycles-pp.xfs_lookup_get_search_key
0.00 +0.0 0.02 ?142% perf-profile.children.cycles-pp.xfs_dir3_leaf_find_entry
0.00 +0.0 0.02 ?142% perf-profile.children.cycles-pp.shmem_alloc_folio
0.00 +0.0 0.02 ?142% perf-profile.children.cycles-pp.__folio_alloc
0.04 ? 71% +0.0 0.06 ? 54% perf-profile.children.cycles-pp.xfs_dir2_leafn_add
0.00 +0.0 0.02 ?141% perf-profile.children.cycles-pp.vma_alloc_folio
0.11 ? 12% +0.0 0.13 ? 11% perf-profile.children.cycles-pp.__smp_call_single_queue
0.11 ? 12% +0.0 0.13 ? 11% perf-profile.children.cycles-pp.llist_add_batch
0.16 ? 10% +0.0 0.19 ? 5% perf-profile.children.cycles-pp.xlog_cil_write_commit_record
1.38 ? 2% +0.0 1.41 ? 3% perf-profile.children.cycles-pp.memcpy_erms
0.23 ? 5% +0.0 0.26 ? 6% perf-profile.children.cycles-pp.ttwu_queue_wakelist
0.38 ? 17% +0.0 0.40 ? 8% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.02 ?142% +0.0 0.05 ? 45% perf-profile.children.cycles-pp.get_callchain_entry
0.39 ? 16% +0.0 0.42 ? 9% perf-profile.children.cycles-pp.copy_page_from_iter_atomic
0.26 ? 10% +0.0 0.29 ? 4% perf-profile.children.cycles-pp.xfs_alloc_lookup_eq
0.38 ? 16% +0.0 0.41 ? 9% perf-profile.children.cycles-pp.copyin
0.00 +0.0 0.04 ? 71% perf-profile.children.cycles-pp.rwsem_optimistic_spin
0.03 ?100% +0.0 0.06 ? 14% perf-profile.children.cycles-pp.rcu_is_watching
0.03 ?100% +0.0 0.07 ? 11% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
0.89 ? 9% +0.0 0.93 ? 4% perf-profile.children.cycles-pp.__get_user_nocheck_8
0.16 ? 11% +0.0 0.20 ? 10% perf-profile.children.cycles-pp.xfs_bmapi_write
0.09 ? 14% +0.0 0.14 ? 10% perf-profile.children.cycles-pp.xfs_ialloc_ag_alloc
0.02 ?223% +0.0 0.06 ? 11% perf-profile.children.cycles-pp.xfs_allocbt_init_key_from_rec
0.92 ? 8% +0.0 0.97 ? 4% perf-profile.children.cycles-pp.perf_callchain_user
0.42 ? 8% +0.1 0.47 ? 4% perf-profile.children.cycles-pp.xfs_alloc_cur_finish
1.88 ? 7% +0.1 1.94 perf-profile.children.cycles-pp.schedule_idle
1.02 ? 7% +0.1 1.07 ? 5% perf-profile.children.cycles-pp.xlog_write
0.04 ? 73% +0.1 0.10 ? 10% perf-profile.children.cycles-pp.xfs_dir2_node_add_datablk
0.00 +0.1 0.06 ? 51% perf-profile.children.cycles-pp.xfs_alloc_fix_len
0.03 ?100% +0.1 0.09 ? 11% perf-profile.children.cycles-pp.xfs_dir2_grow_inode
0.03 ?100% +0.1 0.09 ? 11% perf-profile.children.cycles-pp.xfs_da_grow_inode_int
0.50 ? 26% +0.1 0.56 ? 19% perf-profile.children.cycles-pp.generic_perform_write
0.55 ? 26% +0.1 0.62 ? 20% perf-profile.children.cycles-pp.perf_mmap__push
0.56 ? 26% +0.1 0.63 ? 19% perf-profile.children.cycles-pp.record__mmap_read_evlist
0.52 ? 26% +0.1 0.59 ? 19% perf-profile.children.cycles-pp.__libc_write
0.50 ? 26% +0.1 0.57 ? 19% perf-profile.children.cycles-pp.__generic_file_write_iter
0.50 ? 26% +0.1 0.57 ? 19% perf-profile.children.cycles-pp.generic_file_write_iter
0.52 ? 26% +0.1 0.59 ? 19% perf-profile.children.cycles-pp.record__pushfn
0.52 ? 26% +0.1 0.59 ? 19% perf-profile.children.cycles-pp.writen
0.35 ? 17% +0.1 0.43 ? 18% perf-profile.children.cycles-pp.mod_find
3.20 ? 4% +0.1 3.29 ? 3% perf-profile.children.cycles-pp.dequeue_task_fair
0.33 ? 12% +0.1 0.42 ? 17% perf-profile.children.cycles-pp.is_module_text_address
3.14 ? 4% +0.1 3.23 ? 3% perf-profile.children.cycles-pp.dequeue_entity
0.00 +0.1 0.09 ? 18% perf-profile.children.cycles-pp.xfs_btree_readahead
1.48 ? 7% +0.1 1.59 ? 5% perf-profile.children.cycles-pp.xlog_cil_push_work
3.13 ? 5% +0.1 3.24 ? 3% perf-profile.children.cycles-pp.update_curr
0.01 ?223% +0.1 0.12 ? 10% perf-profile.children.cycles-pp.xfs_btree_rec_offset
0.43 ? 16% +0.1 0.54 ? 14% perf-profile.children.cycles-pp.__module_address
2.99 ? 9% +0.1 3.11 ? 7% perf-profile.children.cycles-pp._raw_spin_lock
0.65 ? 10% +0.1 0.78 ? 10% perf-profile.children.cycles-pp.kernel_text_address
0.74 ? 8% +0.1 0.88 ? 9% perf-profile.children.cycles-pp.__kernel_text_address
2.90 ? 5% +0.1 3.04 ? 3% perf-profile.children.cycles-pp.perf_trace_sched_stat_runtime
0.81 ? 8% +0.2 0.96 ? 8% perf-profile.children.cycles-pp.unwind_get_return_address
2.15 ? 6% +0.2 2.31 ? 2% perf-profile.children.cycles-pp.perf_trace_sched_wakeup_template
3.07 ? 3% +0.2 3.23 ? 3% perf-profile.children.cycles-pp.xlog_cil_commit
3.29 ? 2% +0.2 3.45 ? 3% perf-profile.children.cycles-pp.__xfs_trans_commit
0.00 +0.2 0.21 ? 18% perf-profile.children.cycles-pp.xfs_alloc_compute_diff
2.84 ? 4% +0.2 3.08 perf-profile.children.cycles-pp.try_to_wake_up
1.60 ? 6% +0.2 1.85 ? 3% perf-profile.children.cycles-pp.__orc_find
6.40 ? 5% +0.3 6.66 ? 3% perf-profile.children.cycles-pp.schedule
0.00 +0.3 0.26 ? 6% perf-profile.children.cycles-pp.xfs_btree_get_rec
3.32 ? 5% +0.3 3.60 ? 2% perf-profile.children.cycles-pp.perf_trace_sched_switch
1.28 ? 9% +0.3 1.58 ? 6% perf-profile.children.cycles-pp.orc_find
0.02 ?141% +0.3 0.32 ? 11% perf-profile.children.cycles-pp.xfs_btree_increment
8.22 ? 5% +0.3 8.55 ? 2% perf-profile.children.cycles-pp.__schedule
0.63 ? 11% +0.4 0.99 ? 7% perf-profile.children.cycles-pp.xfs_buf_item_release
0.62 ? 10% +0.4 0.98 ? 7% perf-profile.children.cycles-pp.xfs_buf_unlock
0.59 ? 6% +0.4 0.96 ? 6% perf-profile.children.cycles-pp.up
4.12 ? 5% +0.4 4.49 ? 2% perf-profile.children.cycles-pp.unwind_next_frame
0.38 ? 26% +0.4 0.76 ? 53% perf-profile.children.cycles-pp.start_kernel
0.38 ? 26% +0.4 0.76 ? 53% perf-profile.children.cycles-pp.arch_call_rest_init
0.38 ? 26% +0.4 0.76 ? 53% perf-profile.children.cycles-pp.rest_init
5.35 ? 5% +0.5 5.89 ? 3% perf-profile.children.cycles-pp.perf_callchain_kernel
6.42 ? 5% +0.6 7.00 ? 3% perf-profile.children.cycles-pp.perf_callchain
8.17 ? 5% +0.6 8.76 ? 2% perf-profile.children.cycles-pp.perf_tp_event
6.40 ? 5% +0.6 6.99 ? 3% perf-profile.children.cycles-pp.get_perf_callchain
0.08 ? 21% +0.6 0.67 ? 4% perf-profile.children.cycles-pp.xfs_alloc_get_rec
6.76 ? 5% +0.6 7.36 ? 3% perf-profile.children.cycles-pp.perf_prepare_sample
7.88 ? 5% +0.6 8.48 ? 2% perf-profile.children.cycles-pp.perf_event_output_forward
7.93 ? 5% +0.6 8.53 ? 2% perf-profile.children.cycles-pp.__perf_event_overflow
2.89 ? 6% +0.7 3.58 ? 3% perf-profile.children.cycles-pp.schedule_timeout
0.08 ? 14% +0.7 0.80 ? 9% perf-profile.children.cycles-pp.xfs_extent_busy_trim
2.75 ? 6% +0.8 3.56 ? 4% perf-profile.children.cycles-pp.xfs_trans_read_buf_map
2.14 ? 6% +0.8 2.99 ? 4% perf-profile.children.cycles-pp.xfs_buf_get_map
2.15 ? 6% +0.8 3.00 ? 4% perf-profile.children.cycles-pp.xfs_buf_read_map
0.10 ? 11% +0.9 1.00 ? 6% perf-profile.children.cycles-pp.xfs_alloc_compute_aligned
1.72 ? 7% +0.9 2.66 ? 4% perf-profile.children.cycles-pp.xfs_buf_lookup
1.10 ? 7% +1.0 2.06 ? 4% perf-profile.children.cycles-pp.___down_common
1.10 ? 7% +1.0 2.06 ? 4% perf-profile.children.cycles-pp.__down
1.32 ? 7% +1.0 2.28 ? 3% perf-profile.children.cycles-pp.xfs_buf_find_lock
1.19 ? 7% +1.0 2.18 ? 4% perf-profile.children.cycles-pp.down
1.24 ? 7% +1.0 2.23 ? 4% perf-profile.children.cycles-pp.xfs_buf_lock
0.19 ? 9% +1.1 1.25 ? 4% perf-profile.children.cycles-pp.xfs_read_agf
0.20 ? 10% +1.1 1.26 ? 4% perf-profile.children.cycles-pp.xfs_alloc_read_agf
0.38 ? 11% +1.1 1.46 ? 4% perf-profile.children.cycles-pp.xfs_alloc_fix_freelist
31.98 ? 4% +1.3 33.30 ? 2% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
31.95 ? 4% +1.3 33.28 ? 2% perf-profile.children.cycles-pp.do_syscall_64
42.99 ? 7% +1.4 44.41 ? 2% perf-profile.children.cycles-pp.intel_idle
0.00 +1.5 1.46 ? 4% perf-profile.children.cycles-pp.__xfs_alloc_vextent_this_ag
0.29 ? 14% +2.3 2.54 ? 3% perf-profile.children.cycles-pp.xfs_alloc_cur_check
19.33 ? 4% +2.3 21.66 ? 2% perf-profile.children.cycles-pp.__x64_sys_fsync
19.31 ? 4% +2.3 21.64 ? 2% perf-profile.children.cycles-pp.xfs_file_fsync
19.49 ? 4% +2.3 21.82 ? 2% perf-profile.children.cycles-pp.fsync
1.21 ? 8% +2.6 3.78 ? 4% perf-profile.children.cycles-pp.xfs_alloc_ag_vextent
0.35 ? 11% +2.6 2.96 ? 3% perf-profile.children.cycles-pp.xfs_alloc_walk_iter
1.05 ? 8% +2.7 3.73 ? 4% perf-profile.children.cycles-pp.xfs_alloc_ag_vextent_near
1.81 ? 6% +3.6 5.38 ? 3% perf-profile.children.cycles-pp.xfs_bmapi_allocate
1.73 ? 7% +3.6 5.31 ? 3% perf-profile.children.cycles-pp.xfs_bmap_btalloc
1.58 ? 8% +3.6 5.18 ? 3% perf-profile.children.cycles-pp.xfs_alloc_vextent_iterate_ags
1.60 ? 8% +3.7 5.27 ? 3% perf-profile.children.cycles-pp.xfs_alloc_vextent
5.04 ? 4% +3.7 8.75 ? 3% perf-profile.children.cycles-pp.file_write_and_wait_range
3.75 ? 5% +3.9 7.65 ? 3% perf-profile.children.cycles-pp.xfs_vm_writepages
3.79 ? 5% +3.9 7.69 ? 3% perf-profile.children.cycles-pp.do_writepages
3.72 ? 5% +3.9 7.64 ? 3% perf-profile.children.cycles-pp.filemap_fdatawrite_wbc
3.73 ? 5% +3.9 7.64 ? 3% perf-profile.children.cycles-pp.__filemap_fdatawrite_range
3.37 ? 5% +3.9 7.32 ? 3% perf-profile.children.cycles-pp.iomap_writepages
3.37 ? 5% +3.9 7.31 ? 3% perf-profile.children.cycles-pp.write_cache_pages
2.74 ? 5% +4.0 6.70 ? 3% perf-profile.children.cycles-pp.xfs_bmapi_convert_delalloc
3.06 ? 5% +4.0 7.02 ? 3% perf-profile.children.cycles-pp.iomap_writepage_map
2.77 ? 5% +4.0 6.74 ? 3% perf-profile.children.cycles-pp.xfs_map_blocks
6.32 ? 7% -0.9 5.38 ? 7% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
1.23 ? 29% -0.2 1.02 ? 9% perf-profile.self.cycles-pp.queue_event
0.51 ? 67% -0.1 0.37 ? 7% perf-profile.self.cycles-pp.poll_idle
1.45 ? 13% -0.1 1.36 ? 8% perf-profile.self.cycles-pp.ktime_get
0.81 ? 8% -0.1 0.72 ? 5% perf-profile.self.cycles-pp.mutex_spin_on_owner
0.89 ? 5% -0.1 0.82 ? 5% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.17 ? 11% -0.1 0.11 ? 19% perf-profile.self.cycles-pp.xfs_perag_get
3.61 ? 6% -0.1 3.54 ? 3% perf-profile.self.cycles-pp.osq_lock
0.38 ? 12% -0.0 0.34 ? 7% perf-profile.self.cycles-pp.xfs_buf_lookup
0.94 ? 9% -0.0 0.90 ? 10% perf-profile.self.cycles-pp.menu_select
0.05 ? 46% -0.0 0.02 ?141% perf-profile.self.cycles-pp.xfs_buf_offset
0.38 ? 10% -0.0 0.34 ? 7% perf-profile.self.cycles-pp.update_sg_lb_stats
0.16 ? 13% -0.0 0.13 ? 8% perf-profile.self.cycles-pp.xfs_trans_dirty_buf
0.15 ? 12% -0.0 0.12 ? 16% perf-profile.self.cycles-pp.__might_resched
0.04 ? 75% -0.0 0.01 ?223% perf-profile.self.cycles-pp.xfs_da3_node_lookup_int
0.12 ? 18% -0.0 0.09 ? 15% perf-profile.self.cycles-pp.xfs_buf_item_format_segment
0.14 ? 17% -0.0 0.11 ? 10% perf-profile.self.cycles-pp.xfs_trans_buf_item_match
0.03 ?102% -0.0 0.00 perf-profile.self.cycles-pp.__ordered_events__flush
0.03 ?100% -0.0 0.00 perf-profile.self.cycles-pp.__d_lookup
0.06 ? 6% -0.0 0.04 ? 71% perf-profile.self.cycles-pp.xas_find_marked
0.74 ? 18% -0.0 0.72 ? 4% perf-profile.self.cycles-pp.cpuidle_enter_state
0.02 ? 99% -0.0 0.00 perf-profile.self.cycles-pp.link_path_walk
0.02 ? 99% -0.0 0.00 perf-profile.self.cycles-pp.schedule
0.03 ?101% -0.0 0.01 ?223% perf-profile.self.cycles-pp.switch_mm_irqs_off
0.04 ? 71% -0.0 0.01 ?223% perf-profile.self.cycles-pp._find_next_and_bit
0.14 ? 6% -0.0 0.12 ? 13% perf-profile.self.cycles-pp.xlog_cil_alloc_shadow_bufs
0.05 ? 49% -0.0 0.03 ?100% perf-profile.self.cycles-pp.update_sd_lb_stats
0.06 ? 19% -0.0 0.04 ? 45% perf-profile.self.cycles-pp.xfs_perag_grab
0.13 ? 15% -0.0 0.11 ? 21% perf-profile.self.cycles-pp.setup_file_name
0.12 ? 21% -0.0 0.10 ? 16% perf-profile.self.cycles-pp.random
0.03 ?102% -0.0 0.01 ?223% perf-profile.self.cycles-pp.xfs_trans_buf_set_type
0.04 ? 71% -0.0 0.02 ?141% perf-profile.self.cycles-pp.rebalance_domains
0.08 ? 10% -0.0 0.06 ? 45% perf-profile.self.cycles-pp.cgroup_rstat_updated
0.25 ? 3% -0.0 0.23 ? 7% perf-profile.self.cycles-pp.__output_copy
0.20 ? 22% -0.0 0.18 ? 23% perf-profile.self.cycles-pp.timekeeping_max_deferment
0.11 ? 9% -0.0 0.09 ? 14% perf-profile.self.cycles-pp.kmem_cache_free
0.14 ? 10% -0.0 0.12 ? 8% perf-profile.self.cycles-pp.cgroup_rstat_flush_locked
0.12 ? 12% -0.0 0.10 ? 10% perf-profile.self.cycles-pp.do_idle
0.04 ? 71% -0.0 0.02 ?141% perf-profile.self.cycles-pp.xfs_dir2_leaf_search_hash
0.06 ? 11% -0.0 0.05 ? 50% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.06 ? 11% -0.0 0.05 ? 45% perf-profile.self.cycles-pp.xfs_bmapi_read
0.05 ? 45% -0.0 0.03 ?101% perf-profile.self.cycles-pp.xfs_trans_precommit_sort
0.03 ?100% -0.0 0.01 ?223% perf-profile.self.cycles-pp._atomic_dec_and_lock
0.03 ?100% -0.0 0.01 ?223% perf-profile.self.cycles-pp.tick_nohz_tick_stopped
0.02 ?142% -0.0 0.00 perf-profile.self.cycles-pp.find_get_pages_range_tag
0.02 ?142% -0.0 0.00 perf-profile.self.cycles-pp.submit_bio_noacct
0.25 ? 12% -0.0 0.23 ? 7% perf-profile.self.cycles-pp.read_tsc
0.18 ? 12% -0.0 0.16 ? 4% perf-profile.self.cycles-pp.perf_tp_event
0.04 ? 47% -0.0 0.03 ?100% perf-profile.self.cycles-pp.finish_task_switch
0.02 ? 99% -0.0 0.01 ?223% perf-profile.self.cycles-pp.balance_dirty_pages_ratelimited_flags
0.02 ? 99% -0.0 0.01 ?223% perf-profile.self.cycles-pp.__mod_lruvec_page_state
0.04 ? 45% -0.0 0.03 ?100% perf-profile.self.cycles-pp.kvm_guest_state
0.02 ?141% -0.0 0.00 perf-profile.self.cycles-pp.xfs_iext_last
0.02 ?141% -0.0 0.00 perf-profile.self.cycles-pp.xlog_cil_force_seq
0.32 ? 12% -0.0 0.30 ? 6% perf-profile.self.cycles-pp.lapic_next_deadline
0.16 ? 14% -0.0 0.15 ? 6% perf-profile.self.cycles-pp.kmem_cache_alloc
0.11 ? 16% -0.0 0.09 ? 18% perf-profile.self.cycles-pp.__update_load_avg_se
0.09 ? 17% -0.0 0.08 ? 6% perf-profile.self.cycles-pp.trigger_load_balance
0.25 ? 7% -0.0 0.23 ? 7% perf-profile.self.cycles-pp.__list_del_entry_valid
0.15 ? 15% -0.0 0.14 ? 16% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
0.12 ? 15% -0.0 0.10 ? 13% perf-profile.self.cycles-pp.___perf_sw_event
0.11 ? 16% -0.0 0.09 ? 13% perf-profile.self.cycles-pp.memset_erms
0.09 ? 4% -0.0 0.08 ? 8% perf-profile.self.cycles-pp.__hrtimer_next_event_base
0.06 ? 16% -0.0 0.05 ? 47% perf-profile.self.cycles-pp.dequeue_entity
0.06 ? 47% -0.0 0.04 ? 72% perf-profile.self.cycles-pp.__do_softirq
0.06 ? 46% -0.0 0.04 ? 47% perf-profile.self.cycles-pp.note_gp_changes
0.35 ? 7% -0.0 0.34 ? 6% perf-profile.self.cycles-pp.native_sched_clock
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.kallsyms__parse
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.perf_env__arch
0.04 ? 71% -0.0 0.02 ? 99% perf-profile.self.cycles-pp.xfs_trans_read_buf_map
0.15 ? 11% -0.0 0.14 ? 4% perf-profile.self.cycles-pp._raw_spin_trylock
0.10 ? 23% -0.0 0.09 ? 20% perf-profile.self.cycles-pp.delay_tsc
0.10 ? 19% -0.0 0.09 ? 12% perf-profile.self.cycles-pp.xfs_trans_ail_update_bulk
0.13 ? 12% -0.0 0.12 ? 12% perf-profile.self.cycles-pp.__list_add_valid
0.07 ? 19% -0.0 0.06 ? 11% perf-profile.self.cycles-pp.__flush_workqueue
0.14 ? 10% -0.0 0.14 ? 11% perf-profile.self.cycles-pp.__kmem_cache_alloc_node
0.08 ? 14% -0.0 0.08 ? 6% perf-profile.self.cycles-pp.hrtimer_interrupt
0.03 ?100% -0.0 0.02 ?141% perf-profile.self.cycles-pp.scheduler_tick
0.02 ?142% -0.0 0.01 ?223% perf-profile.self.cycles-pp.__cond_resched
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.ct_nmi_enter
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.apparmor_file_alloc_security
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.xfs_buf_item_size_segment
0.10 ? 20% -0.0 0.09 ? 27% perf-profile.self.cycles-pp.reader__read_event
0.09 ? 16% -0.0 0.08 ? 11% perf-profile.self.cycles-pp.xfs_iext_lookup_extent
0.06 ? 11% -0.0 0.05 ? 46% perf-profile.self.cycles-pp.memcg_slab_post_alloc_hook
0.10 ? 21% -0.0 0.10 ? 10% perf-profile.self.cycles-pp.xfs_trans_committed_bulk
0.10 ? 10% -0.0 0.09 ? 14% perf-profile.self.cycles-pp.__intel_pmu_enable_all
0.11 ? 8% -0.0 0.10 ? 20% perf-profile.self.cycles-pp.down_read
0.10 ? 15% -0.0 0.09 ? 10% perf-profile.self.cycles-pp.update_irq_load_avg
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.tick_irq_enter
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.rcu_sched_clock_irq
0.02 ?142% -0.0 0.01 ?223% perf-profile.self.cycles-pp.__wrgsbase_inactive
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.send_call_function_single_ipi
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.irqtime_account_process_tick
0.02 ?141% -0.0 0.01 ?223% perf-profile.self.cycles-pp.cpuacct_charge
0.02 ?142% -0.0 0.01 ?223% perf-profile.self.cycles-pp.io__get_char
0.02 ?141% -0.0 0.01 ?223% perf-profile.self.cycles-pp.bsearch
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.down_write
0.02 ?142% -0.0 0.01 ?223% perf-profile.self.cycles-pp.__folio_end_writeback
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.__d_lookup_rcu
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp._find_next_bit
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.__sysvec_apic_timer_interrupt
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.list_lru_add
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.inode_permission
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.perf_output_copy
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.xfs_buffered_write_iomap_begin
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.timerqueue_add
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.rq_qos_wait
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.xfs_inobt_init_key_from_rec
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.flush_workqueue_prep_pwqs
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.__alloc_pages
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.xfs_allocbt_init_common
0.01 ?223% -0.0 0.00 perf-profile.self.cycles-pp.apparmor_file_open
0.18 ? 41% -0.0 0.17 ? 28% perf-profile.self.cycles-pp.tick_sched_do_timer
0.14 ? 6% -0.0 0.14 ? 11% perf-profile.self.cycles-pp.xfs_perag_put
0.08 ? 13% -0.0 0.07 ? 15% perf-profile.self.cycles-pp.clear_page_erms
0.07 ? 15% -0.0 0.06 ? 11% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.03 ? 70% -0.0 0.02 ? 99% perf-profile.self.cycles-pp.rb_next
0.12 ? 4% -0.0 0.11 ? 10% perf-profile.self.cycles-pp.iomap_write_end
0.23 ? 8% -0.0 0.22 ? 7% perf-profile.self.cycles-pp._xfs_trans_bjoin
0.15 ? 10% -0.0 0.14 ? 10% perf-profile.self.cycles-pp.__slab_free
0.12 ? 13% -0.0 0.12 ? 13% perf-profile.self.cycles-pp.idle_cpu
0.08 ? 14% -0.0 0.07 ? 18% perf-profile.self.cycles-pp.update_curr
0.08 ? 14% -0.0 0.07 ? 23% perf-profile.self.cycles-pp.check_cpu_stall
0.08 ? 16% -0.0 0.08 ? 16% perf-profile.self.cycles-pp.perf_trace_sched_switch
0.08 ? 11% -0.0 0.08 ? 18% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
0.04 ? 72% -0.0 0.03 ? 70% perf-profile.self.cycles-pp.xfs_buf_lock
0.06 ? 7% -0.0 0.06 ? 13% perf-profile.self.cycles-pp._IO_default_xsputn
0.02 ?141% -0.0 0.01 ?223% perf-profile.self.cycles-pp.lockref_get_not_dead
0.03 ?103% -0.0 0.03 ?100% perf-profile.self.cycles-pp.xfs_iflush_cluster
0.12 ? 8% -0.0 0.11 ? 9% perf-profile.self.cycles-pp.update_load_avg
0.12 ? 17% -0.0 0.11 ? 14% perf-profile.self.cycles-pp.mutex_lock
0.12 ? 29% -0.0 0.12 ? 10% perf-profile.self.cycles-pp.evlist__parse_sample
0.07 ? 18% -0.0 0.07 ? 14% perf-profile.self.cycles-pp.xfs_log_ticket_ungrant
0.02 ? 99% -0.0 0.02 ?144% perf-profile.self.cycles-pp.lockref_put_return
0.06 ? 6% -0.0 0.06 ? 13% perf-profile.self.cycles-pp.__mutex_lock
0.12 ? 13% -0.0 0.11 ? 14% perf-profile.self.cycles-pp.xfs_btree_lookup
0.10 ? 8% -0.0 0.10 ? 18% perf-profile.self.cycles-pp.update_process_times
0.45 ? 8% -0.0 0.45 ? 8% perf-profile.self.cycles-pp.mem_cgroup_css_rstat_flush
0.11 ? 10% -0.0 0.11 ? 13% perf-profile.self.cycles-pp.update_rq_clock
0.06 ? 17% -0.0 0.06 ? 13% perf-profile.self.cycles-pp.xfs_buf_item_unpin
0.08 ? 12% -0.0 0.07 ? 18% perf-profile.self.cycles-pp.error_entry
0.07 ? 15% -0.0 0.07 ? 14% perf-profile.self.cycles-pp.__hrtimer_run_queues
0.03 ?102% -0.0 0.02 ? 99% perf-profile.self.cycles-pp.perf_rotate_context
0.23 ? 8% -0.0 0.23 ? 9% perf-profile.self.cycles-pp.xfs_buf_item_pin
0.08 -0.0 0.08 ? 19% perf-profile.self.cycles-pp.__might_sleep
0.05 ? 46% -0.0 0.05 ? 46% perf-profile.self.cycles-pp.xfs_buf_find_lock
0.05 ? 45% -0.0 0.04 ? 47% perf-profile.self.cycles-pp.___slab_alloc
0.17 ? 9% -0.0 0.17 ? 14% perf-profile.self.cycles-pp.xlog_cil_push_work
0.11 ? 17% -0.0 0.10 ? 11% perf-profile.self.cycles-pp.enqueue_entity
0.12 ? 22% -0.0 0.12 ? 12% perf-profile.self.cycles-pp.xfs_buf_item_init
0.12 ? 10% -0.0 0.12 ? 16% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.01 ?223% -0.0 0.01 ?223% perf-profile.self.cycles-pp.hrtimer_update_next_event
0.01 ?223% -0.0 0.01 ?223% perf-profile.self.cycles-pp.__cgroup_account_cputime
0.11 ? 13% -0.0 0.11 ? 18% perf-profile.self.cycles-pp.update_cfs_group
0.08 ? 8% -0.0 0.08 ? 14% perf-profile.self.cycles-pp.xlog_grant_add_space
0.07 ? 9% -0.0 0.07 ? 14% perf-profile.self.cycles-pp.xlog_cil_commit
0.05 ? 47% -0.0 0.04 ? 45% perf-profile.self.cycles-pp.__perf_event_header__init_id
0.03 ?100% -0.0 0.02 ? 99% perf-profile.self.cycles-pp.__update_blocked_fair
0.12 ? 14% -0.0 0.11 ? 12% perf-profile.self.cycles-pp.__switch_to
0.20 ? 6% +0.0 0.20 ? 3% perf-profile.self.cycles-pp.cpuidle_idle_call
0.07 ? 18% +0.0 0.07 ? 12% perf-profile.self.cycles-pp.xfs_trans_log_inode
0.07 ? 15% +0.0 0.07 ? 15% perf-profile.self.cycles-pp.irqtime_account_irq
0.07 ? 7% +0.0 0.07 ? 22% perf-profile.self.cycles-pp.kfree
0.06 ? 18% +0.0 0.06 ? 11% perf-profile.self.cycles-pp.asm_sysvec_apic_timer_interrupt
0.06 ? 17% +0.0 0.06 ? 11% perf-profile.self.cycles-pp.newidle_balance
0.02 ?141% +0.0 0.02 ?141% perf-profile.self.cycles-pp.xfs_iext_get_extent
0.02 ?141% +0.0 0.02 ?141% perf-profile.self.cycles-pp.pick_next_task_fair
0.01 ?223% +0.0 0.01 ?223% perf-profile.self.cycles-pp.select_task_rq_fair
0.01 ?223% +0.0 0.01 ?223% perf-profile.self.cycles-pp.xlog_prepare_iovec
0.01 ?223% +0.0 0.01 ?223% perf-profile.self.cycles-pp.ct_kernel_enter
0.01 ?223% +0.0 0.01 ?223% perf-profile.self.cycles-pp.inode_init_once
0.08 ? 20% +0.0 0.08 ? 27% perf-profile.self.cycles-pp.xfs_buf_get_map
0.08 ? 13% +0.0 0.08 ? 17% perf-profile.self.cycles-pp.tsc_verify_tsc_adjust
0.25 ? 8% +0.0 0.25 ? 4% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.14 ? 8% +0.0 0.14 ? 10% perf-profile.self.cycles-pp.__unwind_start
0.10 ? 12% +0.0 0.10 ? 12% perf-profile.self.cycles-pp.__radix_tree_lookup
0.08 ? 28% +0.0 0.09 ? 5% perf-profile.self.cycles-pp.core_kernel_text
0.08 ? 29% +0.0 0.08 ? 16% perf-profile.self.cycles-pp.ftrace_graph_ret_addr
0.07 ? 25% +0.0 0.08 ? 14% perf-profile.self.cycles-pp.cmp_ex_search
0.03 ?100% +0.0 0.03 ?100% perf-profile.self.cycles-pp.xfs_inode_to_log_dinode
0.02 ?141% +0.0 0.02 ?142% perf-profile.self.cycles-pp.memmove
0.01 ?223% +0.0 0.01 ?223% perf-profile.self.cycles-pp.xfs_buf_read_map
0.06 ? 6% +0.0 0.06 ? 11% perf-profile.self.cycles-pp.dequeue_task_fair
0.06 ? 9% +0.0 0.06 ? 17% perf-profile.self.cycles-pp.random_r
0.04 ? 45% +0.0 0.04 ? 45% perf-profile.self.cycles-pp.perf_event_output_forward
0.20 ? 26% +0.0 0.20 ? 12% perf-profile.self.cycles-pp.evsel__parse_sample
0.07 ? 45% +0.0 0.07 ? 45% perf-profile.self.cycles-pp.fault_in_readable
0.13 ? 9% +0.0 0.13 ? 19% perf-profile.self.cycles-pp.calc_global_load_tick
0.08 ? 11% +0.0 0.08 ? 10% perf-profile.self.cycles-pp.get_perf_callchain
0.04 ? 45% +0.0 0.05 ? 49% perf-profile.self.cycles-pp.xfs_trans_del_item
0.11 ? 11% +0.0 0.11 ? 20% perf-profile.self.cycles-pp.tick_nohz_next_event
0.07 ? 17% +0.0 0.08 ? 14% perf-profile.self.cycles-pp.__kernel_text_address
0.22 ? 8% +0.0 0.23 ? 6% perf-profile.self.cycles-pp.xlog_cil_insert_items
0.46 ? 10% +0.0 0.46 ? 8% perf-profile.self.cycles-pp.native_irq_return_iret
0.06 ? 50% +0.0 0.06 ? 9% perf-profile.self.cycles-pp.xlog_space_left
0.01 ?223% +0.0 0.02 ?141% perf-profile.self.cycles-pp.__irq_exit_rcu
0.41 ? 8% +0.0 0.42 ? 3% perf-profile.self.cycles-pp.stack_access_ok
0.14 ? 10% +0.0 0.15 ? 10% perf-profile.self.cycles-pp.perf_output_sample
0.08 ? 19% +0.0 0.09 ? 14% perf-profile.self.cycles-pp.list_sort
0.08 ? 7% +0.0 0.08 ? 16% perf-profile.self.cycles-pp.available_idle_cpu
0.13 ? 5% +0.0 0.14 ? 22% perf-profile.self.cycles-pp.perf_output_begin_forward
0.07 ? 13% +0.0 0.08 ? 16% perf-profile.self.cycles-pp.xlog_force_lsn
0.07 ? 17% +0.0 0.08 ? 14% perf-profile.self.cycles-pp.ct_kernel_exit_state
0.07 ? 47% +0.0 0.08 ? 10% perf-profile.self.cycles-pp.llist_reverse_order
0.04 ? 72% +0.0 0.05 ? 45% perf-profile.self.cycles-pp.sched_clock_cpu
0.05 ? 46% +0.0 0.06 ? 13% perf-profile.self.cycles-pp.xlog_write
0.13 ? 15% +0.0 0.14 ? 13% perf-profile.self.cycles-pp.__switch_to_asm
0.26 ? 11% +0.0 0.27 ? 13% perf-profile.self.cycles-pp.arch_scale_freq_tick
0.09 ? 12% +0.0 0.10 ? 15% perf-profile.self.cycles-pp.flush_smp_call_function_queue
0.16 ? 17% +0.0 0.17 ? 13% perf-profile.self.cycles-pp.io_serial_in
0.01 ?223% +0.0 0.02 ?141% perf-profile.self.cycles-pp.perf_trace_sched_stat_runtime
0.00 +0.0 0.01 ?223% perf-profile.self.cycles-pp.cpuidle_governor_latency_req
0.00 +0.0 0.01 ?223% perf-profile.self.cycles-pp.do_writepages
0.00 +0.0 0.01 ?223% perf-profile.self.cycles-pp.xlog_calc_unit_res
0.00 +0.0 0.01 ?223% perf-profile.self.cycles-pp.up_read
0.00 +0.0 0.01 ?223% perf-profile.self.cycles-pp.sysvec_apic_timer_interrupt
0.00 +0.0 0.01 ?223% perf-profile.self.cycles-pp.xfs_btree_lookup_get_block
0.00 +0.0 0.01 ?223% perf-profile.self.cycles-pp.xfs_lookup_get_search_key
0.00 +0.0 0.01 ?223% perf-profile.self.cycles-pp.xfs_buf_item_format
0.00 +0.0 0.01 ?223% perf-profile.self.cycles-pp.apparmor_file_free_security
0.02 ?142% +0.0 0.03 ?100% perf-profile.self.cycles-pp.load_balance
0.02 ?141% +0.0 0.02 ? 99% perf-profile.self.cycles-pp.sched_ttwu_pending
0.09 ? 10% +0.0 0.10 ? 15% perf-profile.self.cycles-pp.perf_prepare_sample
0.08 ? 24% +0.0 0.09 ? 6% perf-profile.self.cycles-pp.__task_pid_nr_ns
0.06 ? 13% +0.0 0.07 ? 16% perf-profile.self.cycles-pp.__flush_smp_call_function_queue
0.01 ?223% +0.0 0.02 ?142% perf-profile.self.cycles-pp.folio_batch_move_lru
0.00 +0.0 0.01 ?223% perf-profile.self.cycles-pp.mutex_unlock
0.00 +0.0 0.01 ?223% perf-profile.self.cycles-pp.get_cpu_device
0.04 ? 72% +0.0 0.06 ? 47% perf-profile.self.cycles-pp.enqueue_task_fair
0.06 ? 7% +0.0 0.08 ? 14% perf-profile.self.cycles-pp.xfs_btree_ptr_to_daddr
0.04 ? 71% +0.0 0.05 ? 46% perf-profile.self.cycles-pp.shmem_write_end
0.04 ? 72% +0.0 0.05 ? 8% perf-profile.self.cycles-pp.rb_insert_color
0.02 ?142% +0.0 0.03 ? 70% perf-profile.self.cycles-pp.xfs_buf_rele
0.05 ? 46% +0.0 0.07 ? 8% perf-profile.self.cycles-pp.prepare_task_switch
0.07 ? 8% +0.0 0.09 ? 7% perf-profile.self.cycles-pp.ttwu_queue_wakelist
0.04 ? 73% +0.0 0.06 ? 13% perf-profile.self.cycles-pp.rcu_pending
0.01 ?223% +0.0 0.03 ?100% perf-profile.self.cycles-pp.rb_erase
0.17 ? 12% +0.0 0.19 ? 8% perf-profile.self.cycles-pp.__schedule
0.02 ?141% +0.0 0.04 ? 73% perf-profile.self.cycles-pp.nvme_poll_cq
0.47 ? 4% +0.0 0.49 perf-profile.self.cycles-pp.orc_find
0.11 ? 12% +0.0 0.13 ? 11% perf-profile.self.cycles-pp.llist_add_batch
0.06 ? 11% +0.0 0.09 ? 15% perf-profile.self.cycles-pp.is_module_text_address
0.12 ? 7% +0.0 0.15 ? 12% perf-profile.self.cycles-pp.kernel_text_address
0.07 ? 7% +0.0 0.09 ? 14% perf-profile.self.cycles-pp.unwind_get_return_address
0.28 ? 9% +0.0 0.31 ? 11% perf-profile.self.cycles-pp.perf_callchain_kernel
1.36 ? 2% +0.0 1.39 ? 3% perf-profile.self.cycles-pp.memcpy_erms
0.46 ? 11% +0.0 0.48 ? 6% perf-profile.self.cycles-pp.__get_user_nocheck_8
0.37 ? 16% +0.0 0.40 ? 9% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.02 ?141% +0.0 0.04 ? 45% perf-profile.self.cycles-pp.get_callchain_entry
0.02 ?142% +0.0 0.05 ? 45% perf-profile.self.cycles-pp.__folio_start_writeback
0.02 ?141% +0.0 0.05 ? 47% perf-profile.self.cycles-pp.rcu_is_watching
0.00 +0.0 0.04 ? 73% perf-profile.self.cycles-pp.xfs_alloc_fix_len
0.08 ? 25% +0.0 0.12 ? 9% perf-profile.self.cycles-pp.__module_address
1.38 ? 4% +0.0 1.42 ? 4% perf-profile.self.cycles-pp.unwind_next_frame
0.00 +0.1 0.06 ? 16% perf-profile.self.cycles-pp.xfs_allocbt_init_key_from_rec
0.03 ?100% +0.1 0.09 ? 13% perf-profile.self.cycles-pp.try_to_wake_up
45.23 ? 2% +0.1 45.30 ? 2% perf-profile.self.cycles-pp.mwait_idle_with_hints
0.34 ? 18% +0.1 0.42 ? 18% perf-profile.self.cycles-pp.mod_find
0.00 +0.1 0.08 ? 14% perf-profile.self.cycles-pp.xfs_btree_readahead
0.00 +0.1 0.10 ? 13% perf-profile.self.cycles-pp.xfs_btree_rec_offset
0.00 +0.1 0.12 ? 9% perf-profile.self.cycles-pp.xfs_alloc_walk_iter
0.00 +0.2 0.17 ? 2% perf-profile.self.cycles-pp.xfs_alloc_compute_aligned
0.00 +0.2 0.17 ? 10% perf-profile.self.cycles-pp.xfs_btree_get_rec
0.00 +0.2 0.20 ? 18% perf-profile.self.cycles-pp.xfs_alloc_compute_diff
0.00 +0.2 0.23 ? 11% perf-profile.self.cycles-pp.xfs_btree_increment
1.60 ? 6% +0.2 1.84 ? 3% perf-profile.self.cycles-pp.__orc_find
0.00 +0.3 0.26 ? 25% perf-profile.self.cycles-pp.xfs_extent_busy_trim
0.04 ? 75% +0.4 0.41 ? 7% perf-profile.self.cycles-pp.xfs_alloc_get_rec
1.08 ? 3% +0.4 1.48 ? 5% perf-profile.self.cycles-pp._raw_spin_lock
0.08 ? 17% +0.5 0.62 ? 8% perf-profile.self.cycles-pp.xfs_alloc_cur_check




Attachments:
(No filename) (214.98 kB)
config-6.2.0-rc6-00029-g2edf06a50f5b-CONFIG_XFS_DEBUG-CONFIG_XFS_WARN (158.66 kB)

2023-05-12 23:31:53

by Dave Chinner

[permalink] [raw]
Subject: Re: [linus:master] [xfs] 2edf06a50f: fsmark.files_per_sec -5.7% regression

On Fri, May 12, 2023 at 03:44:29PM +0800, Oliver Sang wrote:
> hi, Dave Chinner,
>
> On Tue, May 09, 2023 at 05:10:53PM +1000, Dave Chinner wrote:
> > On Tue, May 09, 2023 at 04:54:33PM +1000, Dave Chinner wrote:
> > > On Tue, May 09, 2023 at 10:13:19AM +0800, kernel test robot wrote:
> > > >
> > > >
> > > > Hello,
> > > >
> > > > kernel test robot noticed a -5.7% regression of fsmark.files_per_sec on:
> > > >
> > > >
> > > > commit: 2edf06a50f5bbe664283f3c55c480fc013221d70 ("xfs: factor xfs_alloc_vextent_this_ag() for _iterate_ags()")
> > > > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
> > >
> > > This is just a refactoring patch and doesn't change any logic.
> > > Hence I'm sceptical that it actually resulted in a performance
> > > regression. Indeed, the profile indicates a significant change of
> > > behaviour in the allocator and I can't see how the commit above
> > > would cause anything like that.
> > >
> > > Was this a result of a bisect? If so, what were the original kernel
> > > versions where the regression was detected?
> >
> > Oh, CONFIG_XFS_DEBUG=y, which means:
> >
> > static int
> > xfs_alloc_ag_vextent_lastblock(
> >         struct xfs_alloc_arg    *args,
> >         struct xfs_alloc_cur    *acur,
> >         xfs_agblock_t           *bno,
> >         xfs_extlen_t            *len,
> >         bool                    *allocated)
> > {
> >         int                     error;
> >         int                     i;
> >
> > #ifdef DEBUG
> >         /* Randomly don't execute the first algorithm. */
> >         if (get_random_u32_below(2))
> >                 return 0;
> > #endif
> >
> > We randomly choose a near block allocation strategy to use to improve
> > code coverage, not the optimal one for IO performance. Hence the CPU
> > usage and allocation patterns that impact IO performance are simply
> > not predictable or reproducible from run to run. So, yeah, trying to
> > bisect a minor difference in performance as a result of this
> > randomness will not be reliable....
>
> Thanks a lot for guidance!
>
> we plan to disable XFS_DEBUG (as well as XFS_WARN) in our performance tests.
> want to consult with you if this is the correct thing to do?

You can use XFS_WARN=y with performance tests - that elides all the
debug specific code that changes behaviour but leaves all the
ASSERT-based correctness checks in the code.
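
In kconfig terms that would be something like (both symbols as in
mainline; XFS_WARN depends on !XFS_DEBUG):

        CONFIG_XFS_WARN=y
        # CONFIG_XFS_DEBUG is not set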

> and I guess we should still keep them in functional tests, am I right?

Yes.

> BTW, regarding this case, we tested again with disabling XFS_DEBUG (as well as
> XFS_WARN), kconfig is attached, only diff with last time is:
> -CONFIG_XFS_DEBUG=y
> -CONFIG_XFS_ASSERT_FATAL=y
> +# CONFIG_XFS_WARN is not set
> +# CONFIG_XFS_DEBUG is not set
>
> but we still observed similar regression:
>
> ecd788a92460eef4 2edf06a50f5bbe664283f3c55c4
> ---------------- ---------------------------
> %stddev %change %stddev
> \ | \
> 8176057 ? 15% +4.7% 8558110 fsmark.app_overhead
> 14484 -6.3% 13568 fsmark.files_per_sec

So the application spent 5% more CPU time in userspace, and the rate
the kernel processed IO went down by 6%. Seems to me like
everything is running slower, not just the kernel code....

> 100.50 ? 5% +0.3% 100.83 turbostat.Avg_MHz
> 5.54 ? 11% +0.3 5.82 turbostat.Busy%
> 1863 ? 19% -6.9% 1733 turbostat.Bzy_MHz

Evidence that the CPU is running at a 7% lower clock rate when the
results are 6% slower is a bit suspicious to me. Shouldn't the CPU
clock rate be fixed to the same value for A-B performance regression
testing?
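
Something like:

        cpupower frequency-set -g performance
        cpupower frequency-set --min 2400MHz --max 2400MHz

(exact values are machine-dependent) would at least take frequency
scaling out of the A-B comparison.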

Cheers,

Dave.
--
Dave Chinner
[email protected]

2023-05-14 15:28:52

by Feng Tang

[permalink] [raw]
Subject: Re: [linus:master] [xfs] 2edf06a50f: fsmark.files_per_sec -5.7% regression

Hi Dave,

On Sat, May 13, 2023 at 09:05:04AM +1000, Dave Chinner wrote:
> On Fri, May 12, 2023 at 03:44:29PM +0800, Oliver Sang wrote:
[...]
> > Thanks a lot for guidance!
> >
> > we plan to disable XFS_DEBUG (as well as XFS_WARN) in our performance tests.
> > want to consult with you if this is the correct thing to do?
>
> You can use XFS_WARN=y with performance tests - that elides all the
> debug specific code that changes behaviour but leaves all the
> ASSERT-based correctness checks in the code.
>
> > and I guess we should still keep them in functional tests, am I right?
>
> Yes.
>
> > BTW, regarding this case, we tested again with disabling XFS_DEBUG (as well as
> > XFS_WARN), kconfig is attached, only diff with last time is:
> > -CONFIG_XFS_DEBUG=y
> > -CONFIG_XFS_ASSERT_FATAL=y
> > +# CONFIG_XFS_WARN is not set
> > +# CONFIG_XFS_DEBUG is not set
> >
> > but we still observed similar regression:
> >
> > ecd788a92460eef4 2edf06a50f5bbe664283f3c55c4
> > ---------------- ---------------------------
> > %stddev %change %stddev
> > \ | \
> > 8176057 ± 15% +4.7% 8558110 fsmark.app_overhead
> > 14484 -6.3% 13568 fsmark.files_per_sec
>
> So the application spent 5% more CPU time in userspace, and the rate
> the kernel processed IO went down by 6%. Seems to me like
> everything is running slower, not just the kernel code....
>
> > 100.50 ± 5% +0.3% 100.83 turbostat.Avg_MHz
> > 5.54 ± 11% +0.3 5.82 turbostat.Busy%
> > 1863 ± 19% -6.9% 1733 turbostat.Bzy_MHz
>
> Evidence that the CPU is running at a 7% lower clock rate when the
> results are 6% slower is a bit suspicious to me. Shouldn't the CPU
> clock rate be fixed to the same value for A-B performance regression
> testing?

Commit 2edf06a50f5 seems to have changed the semantics slightly in
how 'flags' is passed to xfs_alloc_fix_freelist(). With the debug
patch below, the performance is restored.


ecd788a92460eef4 2edf06a50f5bbe664283f3c55c4 68721405630744da1c07c9c1c3c
---------------- --------------------------- ---------------------------

14349 -5.7% 13527 +0.6% 14437 fsmark.files_per_sec
486.29 +5.8% 514.28 -0.5% 483.70 fsmark.time.elapsed_time

Please help review whether the debug patch misses anything, as I don't
know the internals of xfs. Thanks.

---
diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
index 98defd19e09e..8c85cc68c5f4 100644
--- a/fs/xfs/libxfs/xfs_alloc.c
+++ b/fs/xfs/libxfs/xfs_alloc.c
@@ -3246,12 +3246,12 @@ xfs_alloc_vextent_set_fsbno(
*/
static int
__xfs_alloc_vextent_this_ag(
- struct xfs_alloc_arg *args)
+ struct xfs_alloc_arg *args, int flag)
{
struct xfs_mount *mp = args->mp;
int error;

- error = xfs_alloc_fix_freelist(args, 0);
+ error = xfs_alloc_fix_freelist(args, flag);
if (error) {
trace_xfs_alloc_vextent_nofix(args);
return error;
@@ -3289,7 +3289,7 @@ xfs_alloc_vextent_this_ag(
}

args->pag = xfs_perag_get(mp, args->agno);
- error = __xfs_alloc_vextent_this_ag(args);
+ error = __xfs_alloc_vextent_this_ag(args, 0);

xfs_alloc_vextent_set_fsbno(args, minimum_agno);
xfs_perag_put(args->pag);
@@ -3329,7 +3329,7 @@ xfs_alloc_vextent_iterate_ags(
args->agno = start_agno;
for (;;) {
args->pag = xfs_perag_get(mp, args->agno);
- error = __xfs_alloc_vextent_this_ag(args);
+ error = __xfs_alloc_vextent_this_ag(args, flags);
if (error) {
args->agbno = NULLAGBLOCK;
break;


Also, for the turbostat.Bzy_MHz diff: IIUC, 0Day always uses the
'performance' cpufreq governor. And as the test case runs 32 threads
on a platform with 96 CPUs, many CPUs are idle on average, and I
suspect Bzy_MHz may be calculated taking those cpufreq and cpuidle
factors into account.

Thanks,
Feng

>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> [email protected]

2023-05-15 17:34:26

by Darrick J. Wong

[permalink] [raw]
Subject: Re: [linus:master] [xfs] 2edf06a50f: fsmark.files_per_sec -5.7% regression

On Sun, May 14, 2023 at 10:36:48PM +0800, Feng Tang wrote:
> Hi Dave,
>
> On Sat, May 13, 2023 at 09:05:04AM +1000, Dave Chinner wrote:
> > On Fri, May 12, 2023 at 03:44:29PM +0800, Oliver Sang wrote:
> [...]
> > > Thanks a lot for the guidance!
> > >
> > > We plan to disable XFS_DEBUG (as well as XFS_WARN) in our performance tests.
> > > We want to consult with you on whether this is the correct thing to do.
> >
> > You can use XFS_WARN=y with performance tests - that elides all the
> > debug-specific code that changes behaviour but leaves all the
> > ASSERT-based correctness checks in the code.
> >
> > > and I guess we should still keep them in functional tests, am I right?
> >
> > Yes.
> >
> > > BTW, regarding this case, we tested again with XFS_DEBUG (as well as
> > > XFS_WARN) disabled; the kconfig is attached, and the only diff from last time is:
> > > -CONFIG_XFS_DEBUG=y
> > > -CONFIG_XFS_ASSERT_FATAL=y
> > > +# CONFIG_XFS_WARN is not set
> > > +# CONFIG_XFS_DEBUG is not set
> > >
> > > but we still observed a similar regression:
> > >
> > > ecd788a92460eef4 2edf06a50f5bbe664283f3c55c4
> > > ---------------- ---------------------------
> > > %stddev %change %stddev
> > > \ | \
> > > 8176057 ± 15% +4.7% 8558110 fsmark.app_overhead
> > > 14484 -6.3% 13568 fsmark.files_per_sec
> >
> > So the application spent 5% more CPU time in userspace, and the rate
> > the kernel processed IO went down by 6%. Seems to me like
> > everything is running slower, not just the kernel code....
> >
> > > 100.50 ± 5% +0.3% 100.83 turbostat.Avg_MHz
> > > 5.54 ± 11% +0.3 5.82 turbostat.Busy%
> > > 1863 ± 19% -6.9% 1733 turbostat.Bzy_MHz
> >
> > Evidence that the CPU is running at a 7% lower clock rate when the
> > results are 6% slower is a bit suspicious to me. Shouldn't the CPU
> > clock rate be fixed to the same value for A-B performance regression
> > testing?
>
> Commit 2edf06a50f5 seems to change the semantics a little regarding
> the handling of 'flags' for xfs_alloc_fix_freelist(). With the debug
> patch below, the performance is restored.
>
>
> ecd788a92460eef4 2edf06a50f5bbe664283f3c55c4 68721405630744da1c07c9c1c3c
> ---------------- --------------------------- ---------------------------
>
> 14349 -5.7% 13527 +0.6% 14437 fsmark.files_per_sec
> 486.29 +5.8% 514.28 -0.5% 483.70 fsmark.time.elapsed_time
>
> Please help review whether the debug patch misses anything, as I don't
> know the internals of xfs. Thanks.
>
> ---
> diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
> index 98defd19e09e..8c85cc68c5f4 100644
> --- a/fs/xfs/libxfs/xfs_alloc.c
> +++ b/fs/xfs/libxfs/xfs_alloc.c
> @@ -3246,12 +3246,12 @@ xfs_alloc_vextent_set_fsbno(
> */
> static int
> __xfs_alloc_vextent_this_ag(

Patches against upstream head only, please. This does not apply to
6.4-rc2 without modification, and we cannot go backwards in time. Do
you mean to pass XFS_ALLOC_FLAG_TRYLOCK from
xfs_alloc_vextent_iterate_ags into xfs_alloc_fix_freelist by way of
adding an alloc_flags argument to xfs_alloc_vextent_prepare_ag?

--D

> - struct xfs_alloc_arg *args)
> + struct xfs_alloc_arg *args, int flag)
> {
> struct xfs_mount *mp = args->mp;
> int error;
>
> - error = xfs_alloc_fix_freelist(args, 0);
> + error = xfs_alloc_fix_freelist(args, flag);
> if (error) {
> trace_xfs_alloc_vextent_nofix(args);
> return error;
> @@ -3289,7 +3289,7 @@ xfs_alloc_vextent_this_ag(
> }
>
> args->pag = xfs_perag_get(mp, args->agno);
> - error = __xfs_alloc_vextent_this_ag(args);
> + error = __xfs_alloc_vextent_this_ag(args, 0);
>
> xfs_alloc_vextent_set_fsbno(args, minimum_agno);
> xfs_perag_put(args->pag);
> @@ -3329,7 +3329,7 @@ xfs_alloc_vextent_iterate_ags(
> args->agno = start_agno;
> for (;;) {
> args->pag = xfs_perag_get(mp, args->agno);
> - error = __xfs_alloc_vextent_this_ag(args);
> + error = __xfs_alloc_vextent_this_ag(args, flags);
> if (error) {
> args->agbno = NULLAGBLOCK;
> break;
>
>
> Also, for the turbostat.Bzy_MHz diff: IIUC, 0Day always uses the
> 'performance' cpufreq governor. And as the test case runs 32 threads
> on a platform with 96 CPUs, many CPUs are idle on average, and I
> suspect Bzy_MHz may be calculated taking those cpufreq and cpuidle
> factors into account.
>
> Thanks,
> Feng
>
> >
> > Cheers,
> >
> > Dave.
> > --
> > Dave Chinner
> > [email protected]

2023-05-15 22:53:53

by Dave Chinner

[permalink] [raw]
Subject: Re: [linus:master] [xfs] 2edf06a50f: fsmark.files_per_sec -5.7% regression

On Sun, May 14, 2023 at 10:36:48PM +0800, Feng Tang wrote:
> Hi Dave,
>
> On Sat, May 13, 2023 at 09:05:04AM +1000, Dave Chinner wrote:
> > On Fri, May 12, 2023 at 03:44:29PM +0800, Oliver Sang wrote:
> [...]
> > > Thanks a lot for the guidance!
> > >
> > > We plan to disable XFS_DEBUG (as well as XFS_WARN) in our performance tests.
> > > We want to consult with you on whether this is the correct thing to do.
> >
> > You can use XFS_WARN=y with performance tests - that elides all the
> > debug-specific code that changes behaviour but leaves all the
> > ASSERT-based correctness checks in the code.
> >
> > > and I guess we should still keep them in functional tests, am I right?
> >
> > Yes.
> >
> > > BTW, regarding this case, we tested again with XFS_DEBUG (as well as
> > > XFS_WARN) disabled; the kconfig is attached, and the only diff from last time is:
> > > -CONFIG_XFS_DEBUG=y
> > > -CONFIG_XFS_ASSERT_FATAL=y
> > > +# CONFIG_XFS_WARN is not set
> > > +# CONFIG_XFS_DEBUG is not set
> > >
> > > but we still observed a similar regression:
> > >
> > > ecd788a92460eef4 2edf06a50f5bbe664283f3c55c4
> > > ---------------- ---------------------------
> > > %stddev %change %stddev
> > > \ | \
> > > 8176057 ± 15% +4.7% 8558110 fsmark.app_overhead
> > > 14484 -6.3% 13568 fsmark.files_per_sec
> >
> > So the application spent 5% more CPU time in userspace, and the rate
> > the kernel processed IO went down by 6%. Seems to me like
> > everything is running slower, not just the kernel code....
> >
> > > 100.50 ± 5% +0.3% 100.83 turbostat.Avg_MHz
> > > 5.54 ± 11% +0.3 5.82 turbostat.Busy%
> > > 1863 ± 19% -6.9% 1733 turbostat.Bzy_MHz
> >
> > Evidence that the CPU is running at a 7% lower clock rate when the
> > results are 6% slower is a bit suspicious to me. Shouldn't the CPU
> > clock rate be fixed to the same value for A-B performance regression
> > testing?
>
> Commit 2edf06a50f5 seems to change the semantics a little regarding
> the handling of 'flags' for xfs_alloc_fix_freelist(). With the debug
> patch below, the performance is restored.
>
>
> ecd788a92460eef4 2edf06a50f5bbe664283f3c55c4 68721405630744da1c07c9c1c3c
> ---------------- --------------------------- ---------------------------
>
> 14349 -5.7% 13527 +0.6% 14437 fsmark.files_per_sec
> 486.29 +5.8% 514.28 -0.5% 483.70 fsmark.time.elapsed_time
>
> Please help review whether the debug patch misses anything, as I don't
> know the internals of xfs. Thanks.

Well spotted. :)

The relevant commit dropped the trylock flag, so the perf regression
and change of allocator behaviour are due to it blocking on a busy AG
instead of skipping to the next uncontended one, and so all
allocations came from extents in the last block of the free space
btree in that AG.
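
To illustrate the difference (a userspace analogy only - not the XFS
code - with a mutex per AG standing in for the real AGF locking):

#include <pthread.h>
#include <stdio.h>

#define NUM_AGS	4

static pthread_mutex_t ag_lock[NUM_AGS] = {
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
};

/* Returns the AG we allocated from, or -1 if every AG was busy. */
static int alloc_from_some_ag(int start_agno, int use_trylock)
{
	for (int i = 0; i < NUM_AGS; i++) {
		int agno = (start_agno + i) % NUM_AGS;

		if (use_trylock) {
			/* busy AG? skip to the next, uncontended one */
			if (pthread_mutex_trylock(&ag_lock[agno]))
				continue;
		} else {
			/* no trylock: every caller blocks on the busy AG */
			pthread_mutex_lock(&ag_lock[agno]);
		}
		/* ... allocate an extent from this AG ... */
		pthread_mutex_unlock(&ag_lock[agno]);
		return agno;
	}
	return -1;
}

int main(void)
{
	printf("allocated from AG %d\n", alloc_from_some_ag(0, 1));
	return 0;
}

Dropping the trylock flag turns the first branch into the second, so
allocating threads pile up on one contended AG instead of spreading
out across the filesystem.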

>
> ---
> diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
> index 98defd19e09e..8c85cc68c5f4 100644
> --- a/fs/xfs/libxfs/xfs_alloc.c
> +++ b/fs/xfs/libxfs/xfs_alloc.c
> @@ -3246,12 +3246,12 @@ xfs_alloc_vextent_set_fsbno(
> */
> static int
> __xfs_alloc_vextent_this_ag(
> - struct xfs_alloc_arg *args)
> + struct xfs_alloc_arg *args, int flag)
> {
> struct xfs_mount *mp = args->mp;
> int error;
>
> - error = xfs_alloc_fix_freelist(args, 0);
> + error = xfs_alloc_fix_freelist(args, flag);
> if (error) {
> trace_xfs_alloc_vextent_nofix(args);
> return error;
> @@ -3289,7 +3289,7 @@ xfs_alloc_vextent_this_ag(
> }
>
> args->pag = xfs_perag_get(mp, args->agno);
> - error = __xfs_alloc_vextent_this_ag(args);
> + error = __xfs_alloc_vextent_this_ag(args, 0);
>
> xfs_alloc_vextent_set_fsbno(args, minimum_agno);
> xfs_perag_put(args->pag);
> @@ -3329,7 +3329,7 @@ xfs_alloc_vextent_iterate_ags(
> args->agno = start_agno;
> for (;;) {
> args->pag = xfs_perag_get(mp, args->agno);
> - error = __xfs_alloc_vextent_this_ag(args);
> + error = __xfs_alloc_vextent_this_ag(args, flags);
> if (error) {
> args->agbno = NULLAGBLOCK;
> break;

I don't think this is the right way to fix this. The code is -very-
different at the end of the series that this is in the middle of,
and I need to check what callers now use the trylock behaviour to
determine how the trylock flag should be passed around...

> Also, for the turbostat.Bzy_MHz diff: IIUC, 0Day always uses the
> 'performance' cpufreq governor. And as the test case runs 32 threads
> on a platform with 96 CPUs, many CPUs are idle on average, and I
> suspect Bzy_MHz may be calculated taking those cpufreq and cpuidle
> factors into account.

If "busy MHz" includes the speed of idle CPUs, then it's not really
a measure of the speed of "busy" CPUs. If what you say is true, then
it is, at best, badly named - it would just be the "average MHz",
right?

-Dave.

--
Dave Chinner
[email protected]

2023-05-16 02:56:29

by Feng Tang

[permalink] [raw]
Subject: Re: [linus:master] [xfs] 2edf06a50f: fsmark.files_per_sec -5.7% regression

On Tue, May 16, 2023 at 08:20:34AM +1000, Dave Chinner wrote:
[...]
> >
> > Commit 2edf06a50f5 seems to change the semantics a little regarding
> > the handling of 'flags' for xfs_alloc_fix_freelist(). With the debug
> > patch below, the performance is restored.
> >
> >
> > ecd788a92460eef4 2edf06a50f5bbe664283f3c55c4 68721405630744da1c07c9c1c3c
> > ---------------- --------------------------- ---------------------------
> >
> > 14349 -5.7% 13527 +0.6% 14437 fsmark.files_per_sec
> > 486.29 +5.8% 514.28 -0.5% 483.70 fsmark.time.elapsed_time
> >
> > Please help review whether the debug patch misses anything, as I don't
> > know the internals of xfs. Thanks.
>
> Well spotted. :)
>
> The relevant commit dropped the trylock flag, so the perf regression
> and change of allocator behaviour are due to it blocking on a busy AG
> instead of skipping to the next uncontended one, and so all
> allocations came from extents in the last block of the free space
> btree in that AG.

Thanks for the confirmation and analysis!

Late yesterday, I added a trace printk in xfs_alloc_vextent_iterate_ags(),
and it did show the flags have the XFS_ALLOC_FLAG_TRYLOCK bit set.

fs_mark-28005 [016] ..... 14993.945487: xfs_alloc_vextent_iterate_ags: flags = 0x1
fs_mark-28004 [002] ..... 14993.945487: xfs_alloc_vextent_iterate_ags: flags = 0x1
fs_mark-27986 [006] ..... 14993.945497: xfs_alloc_vextent_iterate_ags: flags = 0x1
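
For reference, the hack was just a one-line trace_printk() of this
kind (a sketch of my debug change, not anything upstream; the exact
placement inside xfs_alloc_vextent_iterate_ags() is an assumption):

/* debug hack: dump the allocation flags on each AG iteration */
trace_printk("flags = 0x%x\n", flags);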

> >
> > ---
> > diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
> > index 98defd19e09e..8c85cc68c5f4 100644
> > --- a/fs/xfs/libxfs/xfs_alloc.c
> > +++ b/fs/xfs/libxfs/xfs_alloc.c
> > @@ -3246,12 +3246,12 @@ xfs_alloc_vextent_set_fsbno(
> > */
> > static int
> > __xfs_alloc_vextent_this_ag(
> > - struct xfs_alloc_arg *args)
> > + struct xfs_alloc_arg *args, int flag)
> > {
> > struct xfs_mount *mp = args->mp;
> > int error;
> >
> > - error = xfs_alloc_fix_freelist(args, 0);
> > + error = xfs_alloc_fix_freelist(args, flag);
> > if (error) {
> > trace_xfs_alloc_vextent_nofix(args);
> > return error;
> > @@ -3289,7 +3289,7 @@ xfs_alloc_vextent_this_ag(
> > }
> >
> > args->pag = xfs_perag_get(mp, args->agno);
> > - error = __xfs_alloc_vextent_this_ag(args);
> > + error = __xfs_alloc_vextent_this_ag(args, 0);
> >
> > xfs_alloc_vextent_set_fsbno(args, minimum_agno);
> > xfs_perag_put(args->pag);
> > @@ -3329,7 +3329,7 @@ xfs_alloc_vextent_iterate_ags(
> > args->agno = start_agno;
> > for (;;) {
> > args->pag = xfs_perag_get(mp, args->agno);
> > - error = __xfs_alloc_vextent_this_ag(args);
> > + error = __xfs_alloc_vextent_this_ag(args, flags);
> > if (error) {
> > args->agbno = NULLAGBLOCK;
> > break;
>
> I don't think this is the right way to fix this.

I see. I understand this commit is in the middle of a series.

I should have stated clearly that the debug hack patch was intended
mainly for debugging the regression.

> The code is -very-
> different at the end of the series that this is in the middle of,
> and I need to check what callers now use the trylock behaviour to
> determine how the trylock flag should be passed around...

Sure. When the patch is ready, we can test it.


> > Also, for the turbostat.Bzy_MHz diff: IIUC, 0Day always uses the
> > 'performance' cpufreq governor. And as the test case runs 32 threads
> > on a platform with 96 CPUs, many CPUs are idle on average, and I
> > suspect Bzy_MHz may be calculated taking those cpufreq and cpuidle
> > factors into account.
>
> If "busy MHz" includes the speed of idle CPUs, then it's not really
> a measure of the speed of "busy" CPUs. If what you say is true, then
> it is, at best, badly named - it would just be the "average MHz",
> right?


I looked at turbostat.c in the kernel tree, tools/power/x86/turbostat/:

	if (DO_BIC(BIC_Bzy_MHz)) {
		if (has_base_hz)
			outp += sprintf(outp, "%s%.0f", (printed++ ? delim : ""),
					base_hz / units * t->aperf / t->mperf);
		else
			outp += sprintf(outp, "%s%.0f", (printed++ ? delim : ""),
					tsc / units * t->aperf / t->mperf / interval_float);
	}

Rui Zhang told me that 'aperf' counts the actual CPU cycles of a CPU
over a period of time; it only counts while the CPU is in C0 state,
and stops counting when the CPU is in an idle power state. For
example, in a one-second interval, if the CPU spends 500 ms running
at 1000 MHz and the other 500 ms idle, then Bzy_MHz will show 500 MHz.

Thanks,
Feng

> -Dave.
>
> --
> Dave Chinner
> [email protected]

2023-05-16 03:35:17

by Zhang, Rui

[permalink] [raw]
Subject: Re: [linus:master] [xfs] 2edf06a50f: fsmark.files_per_sec -5.7% regression


> > > Also, for the turbostat.Bzy_MHz diff: IIUC, 0Day always uses the
> > > 'performance' cpufreq governor. And as the test case runs 32 threads
> > > on a platform with 96 CPUs, many CPUs are idle on average, and I
> > > suspect Bzy_MHz may be calculated taking those cpufreq and cpuidle
> > > factors into account.
> >
> > If "busy MHz" includes the speed of idle CPUs, then it's not really
> > a measure of the speed of "busy" CPUs. If what you say is true, then
> > it is, at best, badly named - it would just be the "average MHz",
> > right?
>
> I looked at turbostat.c in the kernel tree, tools/power/x86/turbostat/:
>
> 	if (DO_BIC(BIC_Bzy_MHz)) {
> 		if (has_base_hz)
> 			outp += sprintf(outp, "%s%.0f", (printed++ ? delim : ""),
> 					base_hz / units * t->aperf / t->mperf);
> 		else
> 			outp += sprintf(outp, "%s%.0f", (printed++ ? delim : ""),
> 					tsc / units * t->aperf / t->mperf / interval_float);
> 	}
>
> Rui Zhang told me that 'aperf' counts the actual CPU cycles of a CPU
> over a period of time; it only counts while the CPU is in C0 state,
> and stops counting when the CPU is in an idle power state. For
> example, in a one-second interval, if the CPU spends 500 ms running
> at 1000 MHz and the other 500 ms idle, then Bzy_MHz will show 500 MHz.

Bzy_MHz will show 1000 MHz because it is the actual frequency while
the CPU is in C0.
Avg_MHz will show 500 MHz because it is the average frequency,
including the CPU idle time.
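
A tiny standalone sketch of that arithmetic (the counter deltas below
are invented to match the 500 ms @ 1000 MHz example, not read from
real MSRs):

#include <stdio.h>

int main(void)
{
	double interval = 1.0;	/* measurement interval, seconds */
	double tsc = 2.4e9;	/* TSC delta: 2400 MHz base clock */
	double aperf = 5.0e8;	/* cycles executed in C0: 500 ms @ 1000 MHz */
	double mperf = 1.2e9;	/* TSC-rate ticks accumulated while in C0 */

	/* average frequency over the whole interval, idle time included */
	double avg_mhz = aperf / 1e6 / interval;		/* 500 */
	/* actual frequency while busy: TSC rate scaled by aperf/mperf */
	double bzy_mhz = tsc / 1e6 * aperf / mperf / interval;	/* 1000 */

	printf("Avg_MHz = %.0f\nBzy_MHz = %.0f\n", avg_mhz, bzy_mhz);
	return 0;
}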

thanks,
rui