Greetings,
FYI, we noticed a -6.5% regression of fsmark.files_per_sec due to commit:
commit: c6138abff45e497e14575b6ac321b17899a5944c ("[PATCH v2 1/1] NFSD: sleeping function called from invalid context at kernel/locking/rwsem.c")
url: https://github.com/intel-lab-lkp/linux/commits/Dai-Ngo/NFSD-sleeping-function-called-from-invalid-context-at-kernel-locking-rwsem-c/20220521-045428
base: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git 3b5e1590a26713a8c76896f0f1b99f52ec24e72f
patch link: https://lore.kernel.org/linux-nfs/[email protected]
in testcase: fsmark
on test machine: 144 threads, 4 sockets, Intel(R) Xeon(R) Gold 5318H CPU @ 2.50GHz, with 128GB of memory
with the following parameters:
iterations: 1x
disk: 1SSD
nr_threads: 6t
fs: xfs
filesize: 8K
test_size: 60G
sync_method: NoSync
cpufreq_governor: performance
ucode: 0x7002402
fs2: nfsv4
test-description: fsmark is a file system benchmark for testing synchronous write workloads, such as those of mail servers.
test-url: https://sourceforge.net/projects/fsmark/
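For reference, an fs_mark invocation roughly matching these parameters is sketched below. This is illustrative only: the actual command line is generated from the LKP job file, and the mount point and per-thread file count here are assumptions (60G of 8K files split across 6 threads).
# illustrative sketch, not the LKP-generated command line
# -s 8192: filesize 8K; -t 6: nr_threads 6t; -S 0: NoSync; -L 1: 1x iteration
# -n 1310720: ~60G total / 8K per file / 6 threads; /mnt/nfs/fsmark is an assumed nfsv4 mount
fs_mark -d /mnt/nfs/fsmark -s 8192 -n 1310720 -t 6 -S 0 -L 1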
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <[email protected]>
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.
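As an optional extra step (not part of the standard procedure above), the tested tree can be recreated in a local torvalds/linux clone by applying the patch from the lore link on top of the base commit; the branch name below and the use of lore's raw-message URL are assumptions of this sketch.
# optional sketch: recreate the tested tree locally
git checkout -b nfsd-file-lru-test 3b5e1590a26713a8c76896f0f1b99f52ec24e72f
curl -s '<patch link above>/raw' | git am    # append /raw to the lore message URL to fetch the raw patch mail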
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs2/fs/iterations/kconfig/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase/ucode:
gcc-11/performance/1SSD/8K/nfsv4/xfs/1x/x86_64-rhel-8.3/6t/debian-10.4-x86_64-20200603.cgz/NoSync/lkp-cpl-4sp1/60G/fsmark/0x7002402
commit:
3b5e1590a2 ("Merge tag 'gpio-fixes-for-v5.18' of git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux")
c6138abff4 ("NFSD: sleeping function called from invalid context at kernel/locking/rwsem.c")
3b5e1590a26713a8 c6138abff45e497e14575b6ac32
---------------- ---------------------------
%stddev     %change         %stddev
3861 -6.5% 3612 fsmark.files_per_sec
1554 +6.9% 1661 fsmark.time.elapsed_time
1554 +6.9% 1661 fsmark.time.elapsed_time.max
90439 -6.5% 84537 vmstat.io.bo
330422 -6.0% 310581 vmstat.system.cs
1.309e+09 -6.5% 1.224e+09 ± 2% perf-stat.i.branch-instructions
333399 -6.3% 312399 perf-stat.i.context-switches
419.54 -5.5% 396.45 perf-stat.i.cpu-migrations
1.728e+09 -6.3% 1.619e+09 ± 2% perf-stat.i.dTLB-loads
8.609e+08 -6.3% 8.066e+08 ± 2% perf-stat.i.dTLB-stores
6.143e+09 -6.4% 5.748e+09 ± 3% perf-stat.i.instructions
28.45 -6.3% 26.66 ± 2% perf-stat.i.metric.M/sec
1.306e+09 -6.2% 1.224e+09 ± 3% perf-stat.ps.branch-instructions
330583 -6.0% 310704 perf-stat.ps.context-switches
415.90 -5.3% 393.96 perf-stat.ps.cpu-migrations
1.726e+09 -6.1% 1.62e+09 ± 2% perf-stat.ps.dTLB-loads
8.575e+08 -6.1% 8.056e+08 ± 2% perf-stat.ps.dTLB-stores
6.126e+09 -6.2% 5.747e+09 ± 3% perf-stat.ps.instructions
245690 -1.8% 241299 proc-vmstat.nr_anon_pages
13392029 -2.5% 13055008 proc-vmstat.nr_file_pages
14449706 +3.1% 14903918 proc-vmstat.nr_free_pages
252964 -2.0% 247951 proc-vmstat.nr_inactive_anon
12791936 -2.6% 12455581 proc-vmstat.nr_inactive_file
3110850 -2.6% 3031433 proc-vmstat.nr_slab_reclaimable
1086429 -2.0% 1064320 proc-vmstat.nr_slab_unreclaimable
252965 -2.0% 247951 proc-vmstat.nr_zone_inactive_anon
12791941 -2.6% 12455588 proc-vmstat.nr_zone_inactive_file
36318 ± 25% -41.2% 21359 ± 25% proc-vmstat.numa_hint_faults
357725 ± 4% -20.2% 285481 ± 4% proc-vmstat.pgactivate
6109087 +6.7% 6516788 proc-vmstat.pgfault
484834 +7.6% 521446 proc-vmstat.pgreuse
1.60 ± 14% -0.7 0.92 ± 22% perf-profile.calltrace.cycles-pp.perf_callchain_kernel.get_perf_callchain.perf_callchain.perf_prepare_sample.perf_event_output_forward
1.75 ± 17% -0.6 1.13 ± 25% perf-profile.calltrace.cycles-pp.perf_callchain.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow.perf_tp_event
1.74 ± 18% -0.6 1.12 ± 25% perf-profile.calltrace.cycles-pp.get_perf_callchain.perf_callchain.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow
1.88 ± 19% -0.6 1.28 ± 21% perf-profile.calltrace.cycles-pp.find_busiest_group.load_balance.newidle_balance.pick_next_task_fair.__schedule
1.86 ± 19% -0.6 1.27 ± 21% perf-profile.calltrace.cycles-pp.update_sd_lb_stats.find_busiest_group.load_balance.newidle_balance.pick_next_task_fair
3.74 ± 5% -0.5 3.19 ± 5% perf-profile.calltrace.cycles-pp.schedule.worker_thread.kthread.ret_from_fork
3.71 ± 5% -0.5 3.17 ± 5% perf-profile.calltrace.cycles-pp.__schedule.schedule.worker_thread.kthread.ret_from_fork
1.36 ± 24% -0.4 0.94 ± 4% perf-profile.calltrace.cycles-pp.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow.perf_tp_event.perf_trace_sched_switch
1.67 ± 5% -0.3 1.37 ± 5% perf-profile.calltrace.cycles-pp.__close
1.65 ± 5% -0.3 1.35 ± 5% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
1.65 ± 5% -0.3 1.35 ± 5% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__close
2.12 ± 6% -0.3 1.85 ± 5% perf-profile.calltrace.cycles-pp.perf_event_output_forward.__perf_event_overflow.perf_tp_event.perf_trace_sched_switch.__schedule
1.57 ± 6% -0.3 1.31 ± 8% perf-profile.calltrace.cycles-pp.newidle_balance.pick_next_task_fair.__schedule.schedule.worker_thread
1.60 ± 6% -0.3 1.33 ± 8% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.worker_thread.kthread
2.75 ± 6% -0.2 2.53 ± 6% perf-profile.calltrace.cycles-pp.do_nfsd_create.do_open_lookup.nfsd4_open.nfsd4_proc_compound.nfsd_dispatch
2.80 ± 6% -0.2 2.58 ± 6% perf-profile.calltrace.cycles-pp.do_open_lookup.nfsd4_open.nfsd4_proc_compound.nfsd_dispatch.svc_process_common
1.30 ± 6% -0.2 1.09 ± 8% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.open64
1.30 ± 6% -0.2 1.09 ± 8% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
1.30 ± 6% -0.2 1.09 ± 8% perf-profile.calltrace.cycles-pp.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
1.31 ± 6% -0.2 1.10 ± 8% perf-profile.calltrace.cycles-pp.open64
0.68 ± 5% -0.2 0.47 ± 44% perf-profile.calltrace.cycles-pp.nfs4_file_flush.filp_close.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.30 ± 6% -0.2 1.09 ± 8% perf-profile.calltrace.cycles-pp.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
0.66 ± 5% -0.2 0.46 ± 45% perf-profile.calltrace.cycles-pp.filemap_write_and_wait_range.nfs_wb_all.nfs4_file_flush.filp_close.__x64_sys_close
0.67 ± 5% -0.2 0.46 ± 45% perf-profile.calltrace.cycles-pp.nfs_wb_all.nfs4_file_flush.filp_close.__x64_sys_close.do_syscall_64
1.27 ± 6% -0.2 1.07 ± 8% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.__x64_sys_openat.do_syscall_64
1.28 ± 6% -0.2 1.07 ± 8% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.48 ± 5% -0.2 1.29 ± 5% perf-profile.calltrace.cycles-pp.perf_tp_event.perf_trace_sched_switch.__schedule.schedule_idle.do_idle
1.44 ± 5% -0.2 1.25 ± 5% perf-profile.calltrace.cycles-pp.__perf_event_overflow.perf_tp_event.perf_trace_sched_switch.__schedule.schedule_idle
1.19 ± 6% -0.2 1.00 ± 8% perf-profile.calltrace.cycles-pp.open_last_lookups.path_openat.do_filp_open.do_sys_openat2.__x64_sys_openat
1.62 ± 5% -0.2 1.44 ± 4% perf-profile.calltrace.cycles-pp.perf_trace_sched_switch.__schedule.schedule_idle.do_idle.cpu_startup_entry
1.17 ± 6% -0.2 0.98 ± 9% perf-profile.calltrace.cycles-pp.lookup_open.open_last_lookups.path_openat.do_filp_open.do_sys_openat2
0.92 ± 7% -0.2 0.74 ± 6% perf-profile.calltrace.cycles-pp.nfs_file_release.__fput.task_work_run.exit_to_user_mode_loop.exit_to_user_mode_prepare
0.91 ± 7% -0.2 0.73 ± 6% perf-profile.calltrace.cycles-pp.__put_nfs_open_context.nfs_file_release.__fput.task_work_run.exit_to_user_mode_loop
0.96 ± 7% -0.2 0.78 ± 6% perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
1.12 ± 7% -0.2 0.94 ± 8% perf-profile.calltrace.cycles-pp.nfs_atomic_open.lookup_open.open_last_lookups.path_openat.do_filp_open
0.96 ± 7% -0.2 0.78 ± 6% perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
0.88 ± 8% -0.2 0.71 ± 7% perf-profile.calltrace.cycles-pp.nfs4_do_close.__put_nfs_open_context.nfs_file_release.__fput.task_work_run
0.95 ± 7% -0.2 0.78 ± 6% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
1.02 ± 7% -0.2 0.84 ± 9% perf-profile.calltrace.cycles-pp._nfs4_open_and_get_state._nfs4_do_open.nfs4_do_open.nfs4_atomic_open.nfs_atomic_open
0.94 ± 7% -0.2 0.77 ± 5% perf-profile.calltrace.cycles-pp.__fput.task_work_run.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode
1.08 ± 7% -0.2 0.91 ± 8% perf-profile.calltrace.cycles-pp.nfs4_atomic_open.nfs_atomic_open.lookup_open.open_last_lookups.path_openat
1.08 ± 6% -0.2 0.90 ± 8% perf-profile.calltrace.cycles-pp.nfs4_do_open.nfs4_atomic_open.nfs_atomic_open.lookup_open.open_last_lookups
1.08 ± 6% -0.2 0.90 ± 8% perf-profile.calltrace.cycles-pp._nfs4_do_open.nfs4_do_open.nfs4_atomic_open.nfs_atomic_open.lookup_open
0.95 ± 7% -0.2 0.78 ± 6% perf-profile.calltrace.cycles-pp.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.04 ± 5% -0.2 0.88 ± 6% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.worker_thread.kthread
0.62 ± 6% -0.2 0.45 ± 45% perf-profile.calltrace.cycles-pp.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow.perf_tp_event.perf_trace_sched_stat_runtime
1.01 ± 5% -0.2 0.85 ± 6% perf-profile.calltrace.cycles-pp.dequeue_entity.dequeue_task_fair.__schedule.schedule.worker_thread
1.83 ± 5% -0.2 1.67 ± 5% perf-profile.calltrace.cycles-pp.__local_bh_enable_ip.ip6_finish_output2.ip6_xmit.inet6_csk_xmit.__tcp_transmit_skb
1.82 ± 5% -0.2 1.65 ± 5% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.do_softirq.__local_bh_enable_ip.ip6_finish_output2.ip6_xmit
1.82 ± 5% -0.2 1.67 ± 5% perf-profile.calltrace.cycles-pp.do_softirq.__local_bh_enable_ip.ip6_finish_output2.ip6_xmit.inet6_csk_xmit
1.76 ± 5% -0.2 1.61 ± 5% perf-profile.calltrace.cycles-pp.net_rx_action.__softirqentry_text_start.do_softirq.__local_bh_enable_ip.ip6_finish_output2
1.74 ± 5% -0.1 1.60 ± 5% perf-profile.calltrace.cycles-pp.process_backlog.__napi_poll.net_rx_action.__softirqentry_text_start.do_softirq
0.92 ± 4% -0.1 0.78 ± 6% perf-profile.calltrace.cycles-pp.update_curr.dequeue_entity.dequeue_task_fair.__schedule.schedule
0.89 ± 4% -0.1 0.75 ± 6% perf-profile.calltrace.cycles-pp.perf_trace_sched_stat_runtime.update_curr.dequeue_entity.dequeue_task_fair.__schedule
0.84 ± 5% -0.1 0.70 ± 6% perf-profile.calltrace.cycles-pp.perf_tp_event.perf_trace_sched_stat_runtime.update_curr.dequeue_entity.dequeue_task_fair
0.69 ± 5% -0.1 0.56 ± 5% perf-profile.calltrace.cycles-pp.filp_close.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
0.69 ± 5% -0.1 0.56 ± 4% perf-profile.calltrace.cycles-pp.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
0.80 ± 4% -0.1 0.68 ± 7% perf-profile.calltrace.cycles-pp.__perf_event_overflow.perf_tp_event.perf_trace_sched_stat_runtime.update_curr.dequeue_entity
0.80 ± 4% -0.1 0.68 ± 6% perf-profile.calltrace.cycles-pp.perf_event_output_forward.__perf_event_overflow.perf_tp_event.perf_trace_sched_stat_runtime.update_curr
0.77 ± 9% -0.1 0.66 ± 7% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
1.12 ± 4% -0.1 1.04 ± 3% perf-profile.calltrace.cycles-pp.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write.do_iter_readv_writev.do_iter_write
0.67 ± 13% +0.2 0.82 ± 10% perf-profile.calltrace.cycles-pp.nfsd_file_lru_cb.__list_lru_walk_one.list_lru_walk_node.nfsd_file_lru_walk_list.nfsd_file_acquire
11.69 ± 6% +1.7 13.37 ± 5% perf-profile.calltrace.cycles-pp.svc_process_common.svc_process.nfsd.kthread.ret_from_fork
11.70 ± 6% +1.7 13.38 ± 5% perf-profile.calltrace.cycles-pp.svc_process.nfsd.kthread.ret_from_fork
11.18 ± 6% +1.7 12.89 ± 5% perf-profile.calltrace.cycles-pp.nfsd_dispatch.svc_process_common.svc_process.nfsd.kthread
10.84 ± 6% +1.7 12.58 ± 5% perf-profile.calltrace.cycles-pp.nfsd4_proc_compound.nfsd_dispatch.svc_process_common.svc_process.nfsd
6.58 ± 8% +2.0 8.60 ± 6% perf-profile.calltrace.cycles-pp.nfsd4_open.nfsd4_proc_compound.nfsd_dispatch.svc_process_common.svc_process
3.70 ± 10% +2.2 5.92 ± 7% perf-profile.calltrace.cycles-pp.nfsd4_process_open2.nfsd4_open.nfsd4_proc_compound.nfsd_dispatch.svc_process_common
3.47 ± 11% +2.2 5.69 ± 7% perf-profile.calltrace.cycles-pp.nfsd_file_acquire.nfs4_get_vfs_file.nfsd4_process_open2.nfsd4_open.nfsd4_proc_compound
3.48 ± 11% +2.2 5.70 ± 7% perf-profile.calltrace.cycles-pp.nfs4_get_vfs_file.nfsd4_process_open2.nfsd4_open.nfsd4_proc_compound.nfsd_dispatch
3.21 ± 11% +2.3 5.47 ± 7% perf-profile.calltrace.cycles-pp.nfsd_file_lru_walk_list.nfsd_file_acquire.nfs4_get_vfs_file.nfsd4_process_open2.nfsd4_open
3.00 ± 10% +2.3 5.32 ± 6% perf-profile.calltrace.cycles-pp.list_lru_walk_node.nfsd_file_lru_walk_list.nfsd_file_acquire.nfs4_get_vfs_file.nfsd4_process_open2
2.96 ± 10% +2.3 5.30 ± 6% perf-profile.calltrace.cycles-pp.__list_lru_walk_one.list_lru_walk_node.nfsd_file_lru_walk_list.nfsd_file_acquire.nfs4_get_vfs_file
12.12 ± 7% -1.6 10.48 ± 5% perf-profile.children.cycles-pp.__schedule
9.30 ± 7% -1.3 8.03 ± 5% perf-profile.children.cycles-pp.schedule
9.74 ± 6% -1.0 8.72 ± 4% perf-profile.children.cycles-pp.perf_tp_event
9.37 ± 6% -1.0 8.38 ± 5% perf-profile.children.cycles-pp.__perf_event_overflow
9.33 ± 6% -1.0 8.33 ± 5% perf-profile.children.cycles-pp.perf_event_output_forward
7.75 ± 6% -0.9 6.88 ± 5% perf-profile.children.cycles-pp.perf_prepare_sample
7.35 ± 6% -0.8 6.54 ± 5% perf-profile.children.cycles-pp.perf_callchain
7.32 ± 6% -0.8 6.52 ± 5% perf-profile.children.cycles-pp.get_perf_callchain
6.85 ± 6% -0.8 6.09 ± 5% perf-profile.children.cycles-pp.perf_callchain_kernel
3.58 ± 8% -0.7 2.88 ± 7% perf-profile.children.cycles-pp.pick_next_task_fair
3.29 ± 8% -0.7 2.63 ± 7% perf-profile.children.cycles-pp.newidle_balance
6.50 ± 6% -0.6 5.89 ± 5% perf-profile.children.cycles-pp.do_syscall_64
6.51 ± 6% -0.6 5.90 ± 5% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
3.28 ± 7% -0.6 2.67 ± 7% perf-profile.children.cycles-pp.load_balance
5.49 ± 6% -0.6 4.90 ± 5% perf-profile.children.cycles-pp.unwind_next_frame
3.14 ± 7% -0.6 2.56 ± 7% perf-profile.children.cycles-pp.find_busiest_group
3.10 ± 6% -0.6 2.53 ± 7% perf-profile.children.cycles-pp.update_sd_lb_stats
2.96 ± 7% -0.5 2.42 ± 7% perf-profile.children.cycles-pp.update_sg_lb_stats
3.12 ± 5% -0.4 2.76 ± 5% perf-profile.children.cycles-pp.update_curr
2.96 ± 5% -0.3 2.63 ± 5% perf-profile.children.cycles-pp.perf_trace_sched_stat_runtime
3.01 ± 4% -0.3 2.69 ± 4% perf-profile.children.cycles-pp.dequeue_task_fair
2.94 ± 4% -0.3 2.62 ± 4% perf-profile.children.cycles-pp.dequeue_entity
1.67 ± 5% -0.3 1.37 ± 5% perf-profile.children.cycles-pp.__close
3.20 ± 5% -0.3 2.90 ± 4% perf-profile.children.cycles-pp.perf_trace_sched_wakeup_template
2.42 ± 7% -0.3 2.12 ± 4% perf-profile.children.cycles-pp.__unwind_start
1.21 ± 9% -0.3 0.95 ± 11% perf-profile.children.cycles-pp.rpc_wait_bit_killable
1.27 ± 8% -0.3 1.01 ± 9% perf-profile.children.cycles-pp.__wait_on_bit
2.76 ± 3% -0.3 2.50 ± 5% perf-profile.children.cycles-pp.__queue_work
1.27 ± 8% -0.3 1.01 ± 9% perf-profile.children.cycles-pp.out_of_line_wait_on_bit
2.70 ± 2% -0.2 2.47 ± 4% perf-profile.children.cycles-pp.queue_work_on
2.75 ± 6% -0.2 2.53 ± 6% perf-profile.children.cycles-pp.do_nfsd_create
2.80 ± 6% -0.2 2.58 ± 6% perf-profile.children.cycles-pp.do_open_lookup
1.31 ± 6% -0.2 1.10 ± 8% perf-profile.children.cycles-pp.open64
1.34 ± 6% -0.2 1.13 ± 8% perf-profile.children.cycles-pp.__x64_sys_openat
2.28 ± 7% -0.2 2.07 ± 5% perf-profile.children.cycles-pp.xfs_log_force_seq
1.34 ± 6% -0.2 1.13 ± 8% perf-profile.children.cycles-pp.do_sys_openat2
1.31 ± 6% -0.2 1.11 ± 7% perf-profile.children.cycles-pp.do_filp_open
2.03 ± 6% -0.2 1.83 ± 7% perf-profile.children.cycles-pp.__orc_find
1.30 ± 6% -0.2 1.11 ± 7% perf-profile.children.cycles-pp.path_openat
1.17 ± 6% -0.2 0.98 ± 9% perf-profile.children.cycles-pp.lookup_open
1.20 ± 6% -0.2 1.01 ± 8% perf-profile.children.cycles-pp.open_last_lookups
0.92 ± 7% -0.2 0.74 ± 6% perf-profile.children.cycles-pp.nfs_file_release
0.98 ± 7% -0.2 0.80 ± 6% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
1.12 ± 7% -0.2 0.94 ± 8% perf-profile.children.cycles-pp.nfs_atomic_open
0.91 ± 8% -0.2 0.74 ± 7% perf-profile.children.cycles-pp.__put_nfs_open_context
0.88 ± 8% -0.2 0.71 ± 7% perf-profile.children.cycles-pp.nfs4_do_close
0.98 ± 7% -0.2 0.80 ± 6% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.97 ± 7% -0.2 0.80 ± 6% perf-profile.children.cycles-pp.exit_to_user_mode_loop
1.02 ± 7% -0.2 0.84 ± 9% perf-profile.children.cycles-pp._nfs4_open_and_get_state
1.08 ± 7% -0.2 0.91 ± 8% perf-profile.children.cycles-pp.nfs4_atomic_open
1.08 ± 6% -0.2 0.90 ± 8% perf-profile.children.cycles-pp.nfs4_do_open
1.08 ± 6% -0.2 0.90 ± 8% perf-profile.children.cycles-pp._nfs4_do_open
0.99 ± 7% -0.2 0.82 ± 6% perf-profile.children.cycles-pp.__fput
0.96 ± 7% -0.2 0.79 ± 6% perf-profile.children.cycles-pp.task_work_run
1.33 ± 8% -0.2 1.17 ± 7% perf-profile.children.cycles-pp.orc_find
2.31 ± 3% -0.2 2.15 ± 4% perf-profile.children.cycles-pp.__local_bh_enable_ip
2.29 ± 3% -0.2 2.13 ± 4% perf-profile.children.cycles-pp.do_softirq
0.69 ± 5% -0.1 0.57 ± 5% perf-profile.children.cycles-pp.__x64_sys_close
0.68 ± 5% -0.1 0.55 ± 5% perf-profile.children.cycles-pp.nfs4_file_flush
0.65 ± 9% -0.1 0.53 ± 7% perf-profile.children.cycles-pp.__filemap_fdatawait_range
0.69 ± 5% -0.1 0.57 ± 5% perf-profile.children.cycles-pp.filp_close
1.06 ± 5% -0.1 0.94 ± 5% perf-profile.children.cycles-pp.filemap_fdatawrite_wbc
0.66 ± 5% -0.1 0.54 ± 6% perf-profile.children.cycles-pp.filemap_write_and_wait_range
0.67 ± 5% -0.1 0.55 ± 5% perf-profile.children.cycles-pp.nfs_wb_all
0.64 ± 8% -0.1 0.56 ± 6% perf-profile.children.cycles-pp.stack_access_ok
0.22 ± 30% -0.1 0.14 ± 35% perf-profile.children.cycles-pp.process_srcu
0.23 ± 21% -0.1 0.16 ± 18% perf-profile.children.cycles-pp.flush_workqueue_prep_pwqs
0.23 ± 13% -0.1 0.18 ± 23% perf-profile.children.cycles-pp.__cond_resched
0.22 ± 17% -0.0 0.17 ± 6% perf-profile.children.cycles-pp.__filemap_get_folio
0.14 ± 15% -0.0 0.09 ± 30% perf-profile.children.cycles-pp.nfsd_file_mark_find_or_create
0.25 ± 6% -0.0 0.22 ± 10% perf-profile.children.cycles-pp.nfs_end_delegation_return
0.25 ± 6% -0.0 0.22 ± 10% perf-profile.children.cycles-pp.nfs_do_return_delegation
0.15 ± 5% -0.0 0.12 ± 7% perf-profile.children.cycles-pp.___slab_alloc
0.29 ± 7% -0.0 0.26 ± 6% perf-profile.children.cycles-pp.xlog_write
0.12 ± 9% -0.0 0.09 ± 11% perf-profile.children.cycles-pp.pagecache_get_page
0.12 ± 7% -0.0 0.10 ± 6% perf-profile.children.cycles-pp.kmem_cache_alloc_lru
0.08 ± 5% -0.0 0.07 ± 11% perf-profile.children.cycles-pp.d_alloc_parallel
0.42 ± 12% +0.2 0.66 ± 31% perf-profile.children.cycles-pp.queue_event
0.44 ± 12% +0.2 0.68 ± 29% perf-profile.children.cycles-pp.ordered_events__queue
11.69 ± 6% +1.7 13.37 ± 5% perf-profile.children.cycles-pp.svc_process_common
11.70 ± 6% +1.7 13.38 ± 5% perf-profile.children.cycles-pp.svc_process
11.18 ± 6% +1.7 12.90 ± 5% perf-profile.children.cycles-pp.nfsd_dispatch
10.84 ± 6% +1.7 12.58 ± 5% perf-profile.children.cycles-pp.nfsd4_proc_compound
6.58 ± 8% +2.0 8.60 ± 6% perf-profile.children.cycles-pp.nfsd4_open
3.70 ± 10% +2.2 5.92 ± 7% perf-profile.children.cycles-pp.nfsd4_process_open2
3.47 ± 11% +2.2 5.69 ± 7% perf-profile.children.cycles-pp.nfsd_file_acquire
3.48 ± 11% +2.2 5.70 ± 7% perf-profile.children.cycles-pp.nfs4_get_vfs_file
3.21 ± 11% +2.3 5.47 ± 7% perf-profile.children.cycles-pp.nfsd_file_lru_walk_list
2.99 ± 10% +2.3 5.31 ± 6% perf-profile.children.cycles-pp.__list_lru_walk_one
3.00 ± 10% +2.3 5.32 ± 6% perf-profile.children.cycles-pp.list_lru_walk_node
2.22 ± 6% -0.4 1.80 ± 5% perf-profile.self.cycles-pp.update_sg_lb_stats
1.91 ± 7% -0.2 1.70 ± 5% perf-profile.self.cycles-pp.unwind_next_frame
2.01 ± 6% -0.2 1.81 ± 7% perf-profile.self.cycles-pp.__orc_find
0.61 ± 9% -0.1 0.52 ± 7% perf-profile.self.cycles-pp.stack_access_ok
0.38 ± 6% -0.1 0.32 ± 8% perf-profile.self.cycles-pp.__schedule
0.25 ± 7% -0.0 0.22 ± 11% perf-profile.self.cycles-pp.perf_output_sample
0.10 ± 10% -0.0 0.07 ± 16% perf-profile.self.cycles-pp.core_kernel_text
0.11 ± 6% -0.0 0.08 ± 11% perf-profile.self.cycles-pp.finish_task_switch
0.69 ± 11% +0.1 0.83 ± 9% perf-profile.self.cycles-pp.nfsd_file_lru_cb
0.41 ± 12% +0.2 0.64 ± 30% perf-profile.self.cycles-pp.queue_event
2.26 ± 12% +2.1 4.38 ± 5% perf-profile.self.cycles-pp.__list_lru_walk_one
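For completeness, roughly comparable counters and call-graph profiles can also be collected by hand while the workload runs. The commands below are a sketch only: the perf-stat and perf-profile monitors in lkp-tests use their own event lists, sampling setup and post-processing, and the event subset, 99 Hz sampling rate and 60-second window here are assumptions.
# sketch only; not the monitors' actual configuration
perf stat -a -e instructions,branch-instructions,context-switches,cpu-migrations,dTLB-loads,dTLB-stores -I 1000 -- sleep 60
perf record -a -g -F 99 -- sleep 60
perf report --stdio --children --percent-limit 0.5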
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://01.org/lkp