2023-10-31 05:51:31

by kernel test robot

Subject: [linus:master] [xfs] 62334fab47: stress-ng.rename.ops_per_sec 4.5% improvement



Hello,

kernel test robot noticed a 4.5% improvement of stress-ng.rename.ops_per_sec on:


commit: 62334fab47621dd91ab30dd5bb6c43d78a8ec279 ("xfs: use per-mount cpumask to track nonempty percpu inodegc lists")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master

testcase: stress-ng
test machine: 36 threads, 1 socket Intel(R) Core(TM) i9-9980XE CPU @ 3.00GHz (Skylake) with 32G memory
parameters:

nr_threads: 10%
disk: 1SSD
testtime: 60s
fs: xfs
class: filesystem
test: rename
cpufreq_governor: performance
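Assuming stress-ng's standard command-line interface, the parameters above map roughly onto an invocation like the following (a hypothetical approximation: the lkp harness constructs the actual command, and /mnt/xfs stands in for the test SSD's xfs mount):

```shell
# nr_threads=10% of 36 hardware threads, i.e. ~4 workers (assumption)
stress-ng --rename 4 --temp-path /mnt/xfs --timeout 60s --metrics-brief
```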


In addition to that, the commit also has a significant impact on the following tests:

+------------------+------------------------------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.symlink.ops_per_sec 2.1% improvement |
| test machine | 36 threads, 1 socket Intel(R) Core(TM) i9-9980XE CPU @ 3.00GHz (Skylake) with 32G memory |
| test parameters | class=filesystem |
| | cpufreq_governor=performance |
| | disk=1SSD |
| | fs=xfs |
| | nr_threads=10% |
| | test=symlink |
| | testtime=60s |
+------------------+------------------------------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.link.ops_per_sec 3.2% improvement |
| test machine | 36 threads, 1 socket Intel(R) Core(TM) i9-9980XE CPU @ 3.00GHz (Skylake) with 32G memory |
| test parameters | class=filesystem |
| | cpufreq_governor=performance |
| | disk=1SSD |
| | fs=xfs |
| | nr_threads=10% |
| | test=link |
| | testtime=60s |
+------------------+------------------------------------------------------------------------------------------+




Details are as below:
-------------------------------------------------------------------------------------------------->


The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20231031/[email protected]

=========================================================================================
class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime:
filesystem/gcc-12/performance/1SSD/xfs/x86_64-rhel-8.3/10%/debian-11.1-x86_64-20220510.cgz/lkp-skl-d08/rename/stress-ng/60s

commit:
ecd49f7a36 ("xfs: fix per-cpu CIL structure aggregation racing with dying cpus")
62334fab47 ("xfs: use per-mount cpumask to track nonempty percpu inodegc lists")

ecd49f7a36fbccc8 62334fab47621dd91ab30dd5bb6
---------------- ---------------------------
%stddev %change %stddev
\ | \
3065 ± 17% +21.5% 3724 ± 8% perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
3065 ± 17% +21.5% 3724 ± 8% perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
13731223 +4.5% 14352233 stress-ng.rename.ops
228654 +4.5% 238992 stress-ng.rename.ops_per_sec
208138 +4.1% 216733 ± 3% stress-ng.time.voluntary_context_switches
1.14 +0.0 1.17 perf-stat.i.branch-miss-rate%
0.97 +1.5% 0.99 perf-stat.i.cpi
0.03 +0.0 0.03 perf-stat.i.dTLB-load-miss-rate%
798812 +3.5% 827138 perf-stat.i.dTLB-load-misses
0.00 ± 11% -0.0 0.00 ± 17% perf-stat.i.dTLB-store-miss-rate%
1.321e+09 +2.8% 1.359e+09 perf-stat.i.dTLB-stores
1.03 -1.4% 1.02 perf-stat.i.ipc
263342 +4.0% 273811 ± 2% perf-stat.i.node-loads
0.12 ± 2% +3.6% 0.12 perf-stat.overall.MPKI
0.97 +1.4% 0.99 perf-stat.overall.cpi
0.03 +0.0 0.03 perf-stat.overall.dTLB-load-miss-rate%
1.03 -1.4% 1.01 perf-stat.overall.ipc
786211 +3.5% 814068 perf-stat.ps.dTLB-load-misses
1.3e+09 +2.8% 1.337e+09 perf-stat.ps.dTLB-stores
259165 +4.0% 269495 ± 2% perf-stat.ps.node-loads
2.99 ± 4% -2.3 0.66 ± 10% perf-profile.calltrace.cycles-pp.xfs_inodegc_queue_all.xfs_fs_statfs.statfs_by_dentry.user_statfs.__do_sys_statfs
0.62 ± 6% +0.1 0.71 ± 8% perf-profile.calltrace.cycles-pp.lookup_dcache.lookup_one_qstr_excl.do_renameat2.__x64_sys_rename.do_syscall_64
0.60 ± 5% +0.1 0.68 ± 9% perf-profile.calltrace.cycles-pp.d_lookup.lookup_dcache.lookup_one_qstr_excl.do_renameat2.__x64_sys_rename
0.82 ± 6% +0.1 0.94 ± 7% perf-profile.calltrace.cycles-pp.d_move.vfs_rename.do_renameat2.__x64_sys_rename.do_syscall_64
0.68 ± 6% +0.1 0.81 ± 7% perf-profile.calltrace.cycles-pp.kmem_cache_alloc_lru.__d_alloc.d_alloc.lookup_one_qstr_excl.do_renameat2
1.58 ± 3% +0.2 1.74 ± 6% perf-profile.calltrace.cycles-pp.xfs_trans_reserve.xfs_trans_alloc.xfs_rename.xfs_vn_rename.vfs_rename
0.98 ± 4% +0.2 1.15 ± 7% perf-profile.calltrace.cycles-pp.xfs_dir_removename.xfs_rename.xfs_vn_rename.vfs_rename.do_renameat2
0.96 ± 11% +0.2 1.20 ± 8% perf-profile.calltrace.cycles-pp.mutex_spin_on_owner.__mutex_lock.do_renameat2.__x64_sys_rename.do_syscall_64
1.53 ± 3% +0.2 1.78 ± 7% perf-profile.calltrace.cycles-pp.xfs_inode_item_format.xlog_cil_insert_format_items.xlog_cil_insert_items.xlog_cil_commit.__xfs_trans_commit
1.94 ± 3% +0.3 2.19 ± 6% perf-profile.calltrace.cycles-pp.xfs_trans_alloc.xfs_rename.xfs_vn_rename.vfs_rename.do_renameat2
1.19 ± 10% +0.3 1.45 ± 9% perf-profile.calltrace.cycles-pp.__mutex_lock.do_renameat2.__x64_sys_rename.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.78 ± 3% +0.3 2.05 ± 7% perf-profile.calltrace.cycles-pp.xlog_cil_insert_format_items.xlog_cil_insert_items.xlog_cil_commit.__xfs_trans_commit.xfs_rename
2.24 ± 4% +0.3 2.53 ± 8% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.__statfs
2.61 ± 3% +0.3 2.94 ± 6% perf-profile.calltrace.cycles-pp.xlog_cil_insert_items.xlog_cil_commit.__xfs_trans_commit.xfs_rename.xfs_vn_rename
2.41 ± 4% +0.4 2.82 ± 10% perf-profile.calltrace.cycles-pp.path_parentat.__filename_parentat.do_renameat2.__x64_sys_rename.do_syscall_64
2.25 ± 4% +0.4 2.69 ± 6% perf-profile.calltrace.cycles-pp._find_next_or_bit.__percpu_counter_sum.xfs_fs_statfs.statfs_by_dentry.user_statfs
2.74 ± 4% +0.5 3.23 ± 9% perf-profile.calltrace.cycles-pp.__filename_parentat.do_renameat2.__x64_sys_rename.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.08 ±223% +0.5 0.58 ± 5% perf-profile.calltrace.cycles-pp.path_init.path_parentat.__filename_parentat.do_renameat2.__x64_sys_rename
4.69 ± 2% +0.6 5.32 ± 8% perf-profile.calltrace.cycles-pp.xlog_cil_commit.__xfs_trans_commit.xfs_rename.xfs_vn_rename.vfs_rename
8.39 ± 4% +1.2 9.54 ± 9% perf-profile.calltrace.cycles-pp.__percpu_counter_sum.xfs_fs_statfs.statfs_by_dentry.user_statfs.__do_sys_statfs
29.06 ± 2% +3.9 32.91 ± 7% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.rename
30.82 ± 2% +4.1 34.94 ± 7% perf-profile.calltrace.cycles-pp.rename
3.00 ± 4% -2.3 0.66 ± 10% perf-profile.children.cycles-pp.xfs_inodegc_queue_all
0.68 ± 7% -0.5 0.13 ± 16% perf-profile.children.cycles-pp._find_next_bit
0.10 ± 12% +0.0 0.13 ± 11% perf-profile.children.cycles-pp.kfree
0.08 ± 16% +0.0 0.12 ± 12% perf-profile.children.cycles-pp.__x64_sys_statfs
0.31 ± 8% +0.0 0.36 ± 8% perf-profile.children.cycles-pp.entry_SYSCALL_64_safe_stack
0.33 ± 7% +0.1 0.40 ± 9% perf-profile.children.cycles-pp.slab_pre_alloc_hook
0.11 ± 11% +0.1 0.18 ± 28% perf-profile.children.cycles-pp.__d_lookup_rcu
0.40 ± 7% +0.1 0.47 ± 10% perf-profile.children.cycles-pp.xfs_trans_del_item
0.14 ± 12% +0.1 0.22 ± 22% perf-profile.children.cycles-pp.lookup_fast
0.62 ± 5% +0.1 0.71 ± 8% perf-profile.children.cycles-pp.d_lookup
0.48 ± 7% +0.1 0.57 ± 7% perf-profile.children.cycles-pp.__cond_resched
0.64 ± 6% +0.1 0.74 ± 7% perf-profile.children.cycles-pp.lookup_dcache
0.50 ± 3% +0.1 0.60 ± 6% perf-profile.children.cycles-pp.xfs_dir2_sf_removename
0.48 ± 6% +0.1 0.58 ± 11% perf-profile.children.cycles-pp.xlog_cil_alloc_shadow_bufs
0.49 ± 8% +0.1 0.61 ± 9% perf-profile.children.cycles-pp.__d_move
0.82 ± 5% +0.1 0.96 ± 7% perf-profile.children.cycles-pp.d_move
0.82 ± 4% +0.1 0.96 ± 4% perf-profile.children.cycles-pp.__d_alloc
0.71 ± 5% +0.1 0.86 ± 5% perf-profile.children.cycles-pp.kmem_cache_alloc_lru
0.48 ± 11% +0.1 0.62 ± 14% perf-profile.children.cycles-pp.up_write
0.97 ± 5% +0.2 1.13 ± 5% perf-profile.children.cycles-pp.d_alloc
1.88 ± 2% +0.2 2.04 ± 9% perf-profile.children.cycles-pp.strncpy_from_user
1.00 ± 4% +0.2 1.18 ± 7% perf-profile.children.cycles-pp.xfs_dir_removename
0.36 ± 24% +0.2 0.58 ± 19% perf-profile.children.cycles-pp.__legitimize_mnt
0.96 ± 11% +0.2 1.20 ± 8% perf-profile.children.cycles-pp.mutex_spin_on_owner
0.77 ± 11% +0.2 1.02 ± 6% perf-profile.children.cycles-pp.path_init
1.58 ± 3% +0.3 1.83 ± 6% perf-profile.children.cycles-pp.xfs_inode_item_format
2.02 ± 3% +0.3 2.28 ± 6% perf-profile.children.cycles-pp.xfs_trans_alloc
2.23 ± 5% +0.3 2.49 ± 4% perf-profile.children.cycles-pp._raw_spin_lock
1.19 ± 10% +0.3 1.45 ± 9% perf-profile.children.cycles-pp.__mutex_lock
1.89 ± 2% +0.3 2.17 ± 6% perf-profile.children.cycles-pp.xlog_cil_insert_format_items
2.73 ± 3% +0.3 3.06 ± 6% perf-profile.children.cycles-pp.xlog_cil_insert_items
2.51 ± 4% +0.4 2.92 ± 9% perf-profile.children.cycles-pp.path_parentat
2.27 ± 3% +0.4 2.70 ± 6% perf-profile.children.cycles-pp._find_next_or_bit
3.61 ± 3% +0.5 4.07 ± 6% perf-profile.children.cycles-pp.syscall_return_via_sysret
2.83 ± 4% +0.5 3.31 ± 8% perf-profile.children.cycles-pp.__filename_parentat
4.97 ± 2% +0.6 5.60 ± 7% perf-profile.children.cycles-pp.xlog_cil_commit
7.91 ± 2% +0.9 8.81 ± 7% perf-profile.children.cycles-pp.__xfs_trans_commit
8.76 ± 3% +1.2 9.99 ± 9% perf-profile.children.cycles-pp.__percpu_counter_sum
26.27 ± 2% +3.5 29.73 ± 7% perf-profile.children.cycles-pp.do_renameat2
30.90 ± 2% +4.1 35.03 ± 7% perf-profile.children.cycles-pp.rename
1.93 ± 4% -1.8 0.11 ± 21% perf-profile.self.cycles-pp.xfs_inodegc_queue_all
0.66 ± 7% -0.5 0.13 ± 20% perf-profile.self.cycles-pp._find_next_bit
0.08 ± 17% +0.0 0.12 ± 11% perf-profile.self.cycles-pp.kfree
0.08 ± 18% +0.0 0.12 ± 15% perf-profile.self.cycles-pp.__x64_sys_statfs
0.12 ± 18% +0.0 0.17 ± 13% perf-profile.self.cycles-pp.__percpu_counter_compare
0.11 ± 11% +0.1 0.17 ± 31% perf-profile.self.cycles-pp.__d_lookup_rcu
0.17 ± 13% +0.1 0.24 ± 9% perf-profile.self.cycles-pp.xfs_inode_item_format_data_fork
0.17 ± 10% +0.1 0.24 ± 16% perf-profile.self.cycles-pp.xlog_cil_alloc_shadow_bufs
0.28 ± 5% +0.1 0.35 ± 9% perf-profile.self.cycles-pp.__cond_resched
0.32 ± 5% +0.1 0.39 ± 5% perf-profile.self.cycles-pp.__filename_parentat
0.36 ± 23% +0.2 0.57 ± 19% perf-profile.self.cycles-pp.__legitimize_mnt
1.52 ± 5% +0.2 1.74 ± 8% perf-profile.self.cycles-pp.__entry_text_start
0.96 ± 11% +0.2 1.20 ± 8% perf-profile.self.cycles-pp.mutex_spin_on_owner
2.09 ± 5% +0.2 2.34 ± 4% perf-profile.self.cycles-pp._raw_spin_lock
0.75 ± 11% +0.3 1.01 ± 6% perf-profile.self.cycles-pp.path_init
1.93 ± 5% +0.4 2.30 ± 7% perf-profile.self.cycles-pp._find_next_or_bit
3.60 ± 3% +0.5 4.06 ± 6% perf-profile.self.cycles-pp.syscall_return_via_sysret
5.79 ± 3% +0.8 6.64 ± 10% perf-profile.self.cycles-pp.__percpu_counter_sum


***************************************************************************************************
lkp-skl-d08: 36 threads, 1 socket Intel(R) Core(TM) i9-9980XE CPU @ 3.00GHz (Skylake) with 32G memory
=========================================================================================
class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime:
filesystem/gcc-12/performance/1SSD/xfs/x86_64-rhel-8.3/10%/debian-11.1-x86_64-20220510.cgz/lkp-skl-d08/symlink/stress-ng/60s

commit:
ecd49f7a36 ("xfs: fix per-cpu CIL structure aggregation racing with dying cpus")
62334fab47 ("xfs: use per-mount cpumask to track nonempty percpu inodegc lists")

ecd49f7a36fbccc8 62334fab47621dd91ab30dd5bb6
---------------- ---------------------------
%stddev %change %stddev
\ | \
117708 +1.9% 119936 vmstat.system.cs
1092777 +1.7% 1110814 proc-vmstat.numa_hit
1092614 +2.3% 1117304 proc-vmstat.numa_local
474.50 +2.1% 484.50 stress-ng.symlink.ops
7.90 +2.1% 8.06 stress-ng.symlink.ops_per_sec
0.02 ±131% +5345.1% 1.29 ±130% perf-sched.sch_delay.max.ms.__cond_resched.down_read.xlog_cil_commit.__xfs_trans_commit.xfs_inactive_ifree
0.01 ± 85% -62.4% 0.01 ± 48% perf-sched.sch_delay.max.ms.__cond_resched.kmem_cache_alloc.__xfs_free_extent_later.xfs_bmap_del_extent_real.__xfs_bunmapi
0.02 ± 51% +422.5% 0.10 ±146% perf-sched.sch_delay.max.ms.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_trans_alloc_dir.xfs_remove
0.01 ± 24% +392.7% 0.07 ±147% perf-sched.wait_time.avg.ms.__cond_resched.down_read.xlog_cil_commit.__xfs_trans_commit.xfs_trans_roll
0.77 ±145% +129.3% 1.77 ± 90% perf-sched.wait_time.max.ms.__cond_resched.mutex_lock.perf_event_ctx_lock_nested.constprop.0
16.85 -0.5 16.33 ± 2% perf-stat.i.cache-miss-rate%
123504 +1.6% 125516 perf-stat.i.context-switches
0.99 +1.0% 1.00 perf-stat.i.cpi
16.87 -0.6 16.26 ± 2% perf-stat.overall.cache-miss-rate%
0.99 +1.0% 1.00 perf-stat.overall.cpi
121555 +1.6% 123540 perf-stat.ps.context-switches
3.17 ± 2% -1.1 2.02 ± 2% perf-profile.calltrace.cycles-pp.xfs_inodegc_queue_all.xfs_fs_statfs.statfs_by_dentry.user_statfs.__do_sys_statfs
12.49 -1.0 11.44 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__statfs
8.02 ± 2% -1.0 6.98 ± 2% perf-profile.calltrace.cycles-pp.xfs_fs_statfs.statfs_by_dentry.user_statfs.__do_sys_statfs.do_syscall_64
11.88 -1.0 10.84 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__statfs
8.14 ± 2% -1.0 7.10 ± 2% perf-profile.calltrace.cycles-pp.statfs_by_dentry.user_statfs.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.51 -1.0 9.48 ± 2% perf-profile.calltrace.cycles-pp.user_statfs.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe.__statfs
10.76 -1.0 9.74 ± 2% perf-profile.calltrace.cycles-pp.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe.__statfs
14.25 -1.0 13.23 ± 2% perf-profile.calltrace.cycles-pp.__statfs
3.19 ± 2% -1.2 2.03 ± 2% perf-profile.children.cycles-pp.xfs_inodegc_queue_all
8.14 ± 2% -1.0 7.10 ± 2% perf-profile.children.cycles-pp.statfs_by_dentry
8.05 ± 2% -1.0 7.00 ± 2% perf-profile.children.cycles-pp.xfs_fs_statfs
14.34 -1.0 13.31 ± 2% perf-profile.children.cycles-pp.__statfs
10.51 -1.0 9.49 ± 2% perf-profile.children.cycles-pp.user_statfs
10.76 -1.0 9.74 ± 2% perf-profile.children.cycles-pp.__do_sys_statfs
0.33 ± 10% -0.2 0.13 ± 8% perf-profile.children.cycles-pp._find_next_bit
0.08 ± 9% +0.0 0.10 ± 6% perf-profile.children.cycles-pp.xfs_btree_ptr_to_daddr
0.34 ± 6% +0.1 0.39 ± 7% perf-profile.children.cycles-pp.xlog_ticket_alloc
0.65 ± 4% +0.1 0.73 ± 5% perf-profile.children.cycles-pp.__filename_parentat
1.12 -0.9 0.22 ± 7% perf-profile.self.cycles-pp.xfs_inodegc_queue_all
0.32 ± 11% -0.2 0.13 ± 8% perf-profile.self.cycles-pp._find_next_bit
0.35 ± 5% -0.0 0.30 ± 4% perf-profile.self.cycles-pp.__list_del_entry_valid_or_report
0.08 ± 9% +0.0 0.10 ± 7% perf-profile.self.cycles-pp.xfs_btree_ptr_to_daddr



***************************************************************************************************
lkp-skl-d08: 36 threads, 1 socket Intel(R) Core(TM) i9-9980XE CPU @ 3.00GHz (Skylake) with 32G memory
=========================================================================================
class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime:
filesystem/gcc-12/performance/1SSD/xfs/x86_64-rhel-8.3/10%/debian-11.1-x86_64-20220510.cgz/lkp-skl-d08/link/stress-ng/60s

commit:
ecd49f7a36 ("xfs: fix per-cpu CIL structure aggregation racing with dying cpus")
62334fab47 ("xfs: use per-mount cpumask to track nonempty percpu inodegc lists")

ecd49f7a36fbccc8 62334fab47621dd91ab30dd5bb6
---------------- ---------------------------
%stddev %change %stddev
\ | \
0.10 ± 27% -37.0% 0.06 ± 20% perf-sched.wait_time.avg.ms.__cond_resched.down.xfs_buf_lock.xfs_buf_find_lock.xfs_buf_lookup
862.00 +3.2% 889.50 stress-ng.link.ops
14.35 +3.2% 14.81 stress-ng.link.ops_per_sec
2.423e+09 -1.3% 2.39e+09 perf-stat.i.branch-instructions
49572825 ± 2% +5.6% 52346561 ± 3% perf-stat.i.cache-references
0.03 +0.0 0.03 perf-stat.i.dTLB-load-miss-rate%
1021147 +1.8% 1039653 perf-stat.i.dTLB-load-misses
2.028e+09 +2.3% 2.074e+09 perf-stat.i.dTLB-stores
0.03 +0.0 0.03 perf-stat.overall.dTLB-load-miss-rate%
2.385e+09 -1.3% 2.353e+09 perf-stat.ps.branch-instructions
48787960 ± 2% +5.6% 51523373 ± 3% perf-stat.ps.cache-references
1004980 +1.8% 1023298 perf-stat.ps.dTLB-load-misses
1.996e+09 +2.3% 2.041e+09 perf-stat.ps.dTLB-stores
46299 ± 56% +97.1% 91275 ± 5% sched_debug.cfs_rq:/.avg_vruntime.max
9861 ± 64% +101.9% 19910 ± 3% sched_debug.cfs_rq:/.avg_vruntime.stddev
46299 ± 56% +97.1% 91275 ± 5% sched_debug.cfs_rq:/.min_vruntime.max
9861 ± 64% +101.9% 19910 ± 3% sched_debug.cfs_rq:/.min_vruntime.stddev
38310 ± 36% +52.4% 58379 sched_debug.cpu.clock.avg
38311 ± 36% +52.4% 58381 sched_debug.cpu.clock.max
38308 ± 36% +52.4% 58378 sched_debug.cpu.clock.min
38210 ± 36% +52.1% 58133 sched_debug.cpu.clock_task.avg
38238 ± 36% +52.1% 58173 sched_debug.cpu.clock_task.max
37820 ± 36% +52.7% 57762 sched_debug.cpu.clock_task.min
3535 ± 8% +12.4% 3975 sched_debug.cpu.curr->pid.max
14219 ± 93% +132.7% 33087 sched_debug.cpu.nr_switches.avg
31.08 ± 92% +153.1% 78.67 ± 8% sched_debug.cpu.nr_uninterruptible.max
-23.83 +161.5% -62.33 sched_debug.cpu.nr_uninterruptible.min
10.35 ± 78% +127.4% 23.54 ± 4% sched_debug.cpu.nr_uninterruptible.stddev
38309 ± 36% +52.4% 58378 sched_debug.cpu_clk
38230 ± 36% +52.5% 58299 sched_debug.ktime
38330 ± 36% +52.4% 58401 sched_debug.sched_clk
16.79 ± 2% -2.1 14.74 ± 3% perf-profile.calltrace.cycles-pp.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe.__statfs
16.32 ± 2% -2.0 14.28 ± 3% perf-profile.calltrace.cycles-pp.user_statfs.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe.__statfs
18.83 ± 2% -2.0 16.84 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__statfs
19.91 ± 2% -2.0 17.95 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__statfs
12.10 ± 2% -1.9 10.18 ± 3% perf-profile.calltrace.cycles-pp.xfs_fs_statfs.statfs_by_dentry.user_statfs.__do_sys_statfs.do_syscall_64
12.29 ± 2% -1.9 10.36 ± 3% perf-profile.calltrace.cycles-pp.statfs_by_dentry.user_statfs.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe
22.92 -1.7 21.20 ± 3% perf-profile.calltrace.cycles-pp.__statfs
0.74 ± 4% +0.1 0.85 perf-profile.calltrace.cycles-pp.xfs_trans_read_buf_map.xfs_da_read_buf.xfs_da3_node_lookup_int.xfs_dir2_node_addname.xfs_dir_createname
1.32 ± 5% +0.1 1.44 ± 3% perf-profile.calltrace.cycles-pp.xfs_da_read_buf.xfs_da3_node_lookup_int.xfs_dir2_node_addname.xfs_dir_createname.xfs_link
1.88 ± 3% +0.2 2.07 ± 2% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.__statfs
2.34 ± 4% -2.2 0.16 ± 16% perf-profile.children.cycles-pp.xfs_inodegc_queue_all
16.80 ± 2% -2.1 14.74 ± 3% perf-profile.children.cycles-pp.__do_sys_statfs
16.33 ± 2% -2.0 14.29 ± 3% perf-profile.children.cycles-pp.user_statfs
12.14 ± 2% -1.9 10.21 ± 3% perf-profile.children.cycles-pp.xfs_fs_statfs
12.29 ± 2% -1.9 10.36 ± 3% perf-profile.children.cycles-pp.statfs_by_dentry
23.06 -1.7 21.34 ± 3% perf-profile.children.cycles-pp.__statfs
0.58 ± 8% -0.4 0.13 ± 14% perf-profile.children.cycles-pp._find_next_bit
0.08 ± 16% +0.0 0.10 ± 13% perf-profile.children.cycles-pp.xfs_qm_need_dqattach
0.14 ± 18% +0.0 0.18 ± 8% perf-profile.children.cycles-pp.xfs_vn_getattr
3.96 ± 3% +0.2 4.19 ± 2% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.57 ± 9% -0.4 0.13 ± 15% perf-profile.self.cycles-pp._find_next_bit
0.06 ± 13% +0.0 0.08 ± 18% perf-profile.self.cycles-pp.xfs_dir2_node_find_freeblk
0.07 ± 18% +0.0 0.10 ± 9% perf-profile.self.cycles-pp.xfs_mod_freecounter
0.07 ± 16% +0.0 0.10 ± 13% perf-profile.self.cycles-pp.xfs_qm_need_dqattach
0.17 ± 11% +0.0 0.21 ± 7% perf-profile.self.cycles-pp.xfs_link
1.71 ± 5% +0.1 1.84 ± 4% perf-profile.self.cycles-pp._find_next_or_bit
3.95 ± 3% +0.2 4.19 ± 2% perf-profile.self.cycles-pp.syscall_return_via_sysret



Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki