Greetings,
FYI, we noticed a 39.7% improvement in fsmark.files_per_sec due to commit:
commit: 4a0e73e635e3f36b616ad5c943e3d23debe4632f ("NFSD: Leave open files out of the filecache LRU")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: fsmark
on test machine: 144 threads 4 sockets Intel(R) Xeon(R) Gold 5318H CPU @ 2.50GHz with 128G memory
with following parameters:
iterations: 1x
disk: 1SSD
nr_threads: 6t
fs: ext4
filesize: 8K
test_size: 60G
sync_method: fsyncBeforeClose
cpufreq_governor: performance
ucode: 0x7002501
fs2: nfsv4
test-description: fs_mark is a file system benchmark that tests synchronous write workloads, such as those of mail servers.
test-url: https://sourceforge.net/projects/fsmark/
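For reference, the parameters above roughly map to an fs_mark command line of the following shape. This is a sketch, not the exact lkp job: the flag names follow fs_mark's usage text, /mnt/nfs is a placeholder for the NFSv4 mount backed by the ext4-formatted SSD, and -S 1 is assumed to be fs_mark's fsync-before-close sync method (the NoSync run further below would use -S 0).

```shell
FILESIZE=$((8 * 1024))                  # filesize: 8K
THREADS=6                               # nr_threads: 6t
TOTAL=$((60 * 1024 * 1024 * 1024))      # test_size: 60G
NFILES=$((TOTAL / FILESIZE / THREADS))  # files per thread
echo "fs_mark -d /mnt/nfs -s $FILESIZE -n $NFILES -t $THREADS -S 1"
```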
In addition, the commit also has a significant impact on the following test:
+------------------+----------------------------------------------------------------------------------+
| testcase: change | fsmark: fsmark.files_per_sec 42.6% improvement |
| test machine | 144 threads 4 sockets Intel(R) Xeon(R) Gold 5318H CPU @ 2.50GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | disk=1SSD |
| | filesize=8K |
| | fs2=nfsv4 |
| | fs=btrfs |
| | iterations=1x |
| | nr_threads=6t |
| | sync_method=NoSync |
| | test_size=60G |
| | ucode=0x7002501 |
+------------------+----------------------------------------------------------------------------------+
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# If you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs2/fs/iterations/kconfig/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase/ucode:
gcc-11/performance/1SSD/8K/nfsv4/ext4/1x/x86_64-rhel-8.3/6t/debian-11.1-x86_64-20220510.cgz/fsyncBeforeClose/lkp-cpl-4sp1/60G/fsmark/0x7002501
commit:
c46203acdd ("NFSD: Trace filecache LRU activity")
4a0e73e635 ("NFSD: Leave open files out of the filecache LRU")
c46203acddd9b920 4a0e73e635e3f36b616ad5c943e
---------------- ---------------------------
%stddev %change %stddev
\ | \
3448 +39.7% 4818 fsmark.files_per_sec
1740 -28.4% 1245 fsmark.time.elapsed_time
1740 -28.4% 1245 fsmark.time.elapsed_time.max
20.83 +36.8% 28.50 fsmark.time.percent_of_cpu_this_job_got
2.81 ± 2% +4.8% 2.94 iostat.cpu.system
2.428e+11 -28.6% 1.734e+11 cpuidle..time
7.354e+08 ± 3% -17.8% 6.049e+08 cpuidle..usage
1800 -27.5% 1304 uptime.boot
249778 -27.9% 180062 uptime.idle
0.45 ± 2% +0.2 0.68 ± 2% mpstat.cpu.all.iowait%
0.17 ± 3% +0.0 0.22 mpstat.cpu.all.soft%
0.02 +0.0 0.02 ± 2% mpstat.cpu.all.usr%
695.33 ± 12% +45.3% 1010 ± 17% numa-meminfo.node1.Writeback
707.83 ± 26% +87.9% 1330 ± 15% numa-meminfo.node2.Writeback
894.00 ± 16% +57.0% 1403 ± 7% numa-meminfo.node3.Writeback
117.50 ± 2% +12.3% 132.00 turbostat.Avg_MHz
5.395e+08 ± 4% -24.7% 4.062e+08 turbostat.IRQ
0.19 ± 9% +0.1 0.28 ± 2% turbostat.POLL%
210500 +39.8% 294319 vmstat.io.bo
0.00 +1e+102% 1.00 vmstat.procs.b
312567 +39.8% 437052 vmstat.system.cs
30060 -14.0% 25861 ± 3% meminfo.Active(anon)
135881 ± 2% +16.6% 158421 ± 3% meminfo.Dirty
31672 +12.7% 35699 ± 2% meminfo.Mapped
2731 ± 16% +68.4% 4601 ± 12% meminfo.Writeback
173.83 ± 14% +44.5% 251.17 ± 17% numa-vmstat.node1.nr_writeback
5569 ± 6% -9.7% 5028 ± 2% numa-vmstat.node2.nr_kernel_stack
178.67 ± 27% +88.7% 337.17 ± 15% numa-vmstat.node2.nr_writeback
225.67 ± 15% +55.9% 351.83 ± 9% numa-vmstat.node3.nr_writeback
9.00 ± 6% -13.3% 7.80 ± 3% sched_debug.cfs_rq:/.util_est_enqueued.avg
58.16 ± 3% -12.8% 50.74 ± 3% sched_debug.cfs_rq:/.util_est_enqueued.stddev
25501 ± 5% -23.4% 19532 ± 4% sched_debug.cpu.avg_idle.min
908566 -27.7% 657327 ± 2% sched_debug.cpu.clock.avg
908573 -27.7% 657335 ± 2% sched_debug.cpu.clock.max
908559 -27.7% 657319 ± 2% sched_debug.cpu.clock.min
898886 -27.6% 650626 ± 2% sched_debug.cpu.clock_task.avg
901474 -27.7% 652118 ± 2% sched_debug.cpu.clock_task.max
890317 -27.8% 642509 ± 2% sched_debug.cpu.clock_task.min
2072 ± 47% -46.0% 1119 ± 16% sched_debug.cpu.clock_task.stddev
432.77 ± 4% -15.1% 367.42 ± 3% sched_debug.cpu.curr->pid.avg
32453 -23.6% 24787 ± 2% sched_debug.cpu.curr->pid.max
3132 ± 2% -20.4% 2493 ± 2% sched_debug.cpu.curr->pid.stddev
908559 -27.7% 657319 ± 2% sched_debug.cpu_clk
907687 -27.7% 656447 ± 2% sched_debug.ktime
909001 -27.6% 658253 ± 2% sched_debug.sched_clk
7515 -14.0% 6466 ± 3% proc-vmstat.nr_active_anon
515453 -1.8% 506394 proc-vmstat.nr_active_file
225258 -2.4% 219861 proc-vmstat.nr_anon_pages
26516210 -1.7% 26068107 proc-vmstat.nr_dirtied
33929 ± 2% +16.6% 39562 ± 3% proc-vmstat.nr_dirty
13926966 -3.2% 13476142 proc-vmstat.nr_file_pages
14835760 +3.9% 15410703 proc-vmstat.nr_free_pages
228208 -1.8% 224112 proc-vmstat.nr_inactive_anon
12709320 -3.5% 12267056 proc-vmstat.nr_inactive_file
7928 +12.8% 8941 ± 2% proc-vmstat.nr_mapped
3349101 -3.4% 3233596 proc-vmstat.nr_slab_reclaimable
404594 -2.4% 395044 proc-vmstat.nr_slab_unreclaimable
682.67 ± 16% +64.7% 1124 ± 12% proc-vmstat.nr_writeback
26471840 -1.8% 25998960 proc-vmstat.nr_written
7515 -14.0% 6466 ± 3% proc-vmstat.nr_zone_active_anon
515454 -1.8% 506394 proc-vmstat.nr_zone_active_file
228208 -1.8% 224112 proc-vmstat.nr_zone_inactive_anon
12709325 -3.5% 12267061 proc-vmstat.nr_zone_inactive_file
34600 ± 2% +17.6% 40693 ± 3% proc-vmstat.nr_zone_write_pending
339155 ± 3% +17.5% 398440 ± 2% proc-vmstat.pgactivate
66257629 -1.1% 65543970 proc-vmstat.pgalloc_normal
4740808 -25.6% 3526474 proc-vmstat.pgfault
40585822 -1.8% 39863950 proc-vmstat.pgfree
180347 -26.8% 132006 proc-vmstat.pgreuse
28.33 ± 5% -22.1% 22.07 ± 2% perf-stat.i.MPKI
1.236e+09 +11.1% 1.373e+09 perf-stat.i.branch-instructions
29.67 ± 5% +6.3 35.96 ± 2% perf-stat.i.cache-miss-rate%
48409275 ± 5% +10.8% 53618347 perf-stat.i.cache-misses
1.614e+08 ± 4% -9.8% 1.456e+08 perf-stat.i.cache-references
314921 +39.3% 438729 perf-stat.i.context-switches
1.619e+10 ± 2% +12.9% 1.828e+10 perf-stat.i.cpu-cycles
325.18 +24.4% 404.64 ± 3% perf-stat.i.cpu-migrations
0.10 ± 11% -0.0 0.07 ± 14% perf-stat.i.dTLB-load-miss-rate%
1.618e+09 +20.4% 1.948e+09 perf-stat.i.dTLB-loads
7.959e+08 +22.0% 9.714e+08 perf-stat.i.dTLB-stores
3201950 ± 2% +25.8% 4028734 ± 2% perf-stat.i.iTLB-load-misses
5751845 ± 2% +21.0% 6962257 perf-stat.i.iTLB-loads
5.714e+09 +15.0% 6.573e+09 perf-stat.i.instructions
1787 -8.8% 1630 perf-stat.i.instructions-per-iTLB-miss
0.11 ± 2% +12.9% 0.13 perf-stat.i.metric.GHz
326.56 ± 15% +50.2% 490.62 ± 2% perf-stat.i.metric.K/sec
26.31 +16.0% 30.52 perf-stat.i.metric.M/sec
2606 +3.2% 2689 perf-stat.i.minor-faults
13721668 ± 3% +17.4% 16113273 ± 5% perf-stat.i.node-load-misses
3116210 ± 2% +39.5% 4347292 ± 4% perf-stat.i.node-store-misses
455438 ± 11% +33.0% 605838 ± 7% perf-stat.i.node-stores
2607 +3.2% 2689 perf-stat.i.page-faults
28.44 ± 5% -21.4% 22.35 perf-stat.overall.MPKI
30.21 ± 5% +6.7 36.90 ± 2% perf-stat.overall.cache-miss-rate%
0.10 ± 10% -0.0 0.07 ± 13% perf-stat.overall.dTLB-load-miss-rate%
1791 -8.6% 1637 perf-stat.overall.instructions-per-iTLB-miss
1.236e+09 +11.3% 1.375e+09 perf-stat.ps.branch-instructions
12985527 ± 11% +16.3% 15103417 ± 4% perf-stat.ps.branch-misses
48987860 ± 5% +10.8% 54259351 perf-stat.ps.cache-misses
1.624e+08 ± 4% -9.4% 1.471e+08 perf-stat.ps.cache-references
312811 +39.9% 437721 perf-stat.ps.context-switches
1.623e+10 ± 2% +13.0% 1.833e+10 perf-stat.ps.cpu-cycles
323.20 +24.8% 403.27 ± 3% perf-stat.ps.cpu-migrations
1.619e+09 +20.6% 1.952e+09 perf-stat.ps.dTLB-loads
7.942e+08 +22.3% 9.716e+08 perf-stat.ps.dTLB-stores
3188981 ± 2% +26.1% 4021804 ± 2% perf-stat.ps.iTLB-load-misses
5725778 ± 2% +21.2% 6940028 perf-stat.ps.iTLB-loads
5.711e+09 +15.3% 6.583e+09 perf-stat.ps.instructions
2590 +3.0% 2667 perf-stat.ps.minor-faults
13891577 ± 3% +17.3% 16296450 ± 5% perf-stat.ps.node-load-misses
3123127 ± 2% +39.6% 4360952 ± 4% perf-stat.ps.node-store-misses
449355 ± 11% +34.0% 602159 ± 7% perf-stat.ps.node-stores
2591 +3.0% 2667 perf-stat.ps.page-faults
9.948e+12 -17.5% 8.21e+12 perf-stat.total.instructions
5.25 ± 6% -2.6 2.69 ± 6% perf-profile.calltrace.cycles-pp.nfsd4_open.nfsd4_proc_compound.nfsd_dispatch.svc_process_common.svc_process
3.13 ± 7% -2.5 0.61 ± 4% perf-profile.calltrace.cycles-pp.nfsd4_process_open2.nfsd4_open.nfsd4_proc_compound.nfsd_dispatch.svc_process_common
8.40 ± 5% -2.3 6.10 ± 4% perf-profile.calltrace.cycles-pp.nfsd4_proc_compound.nfsd_dispatch.svc_process_common.svc_process.nfsd
8.68 ± 4% -2.3 6.41 ± 5% perf-profile.calltrace.cycles-pp.nfsd_dispatch.svc_process_common.svc_process.nfsd.kthread
9.17 ± 5% -2.2 6.95 ± 5% perf-profile.calltrace.cycles-pp.svc_process_common.svc_process.nfsd.kthread.ret_from_fork
9.18 ± 5% -2.2 6.96 ± 5% perf-profile.calltrace.cycles-pp.svc_process.nfsd.kthread.ret_from_fork
15.41 ± 3% -1.7 13.67 ± 5% perf-profile.calltrace.cycles-pp.nfsd.kthread.ret_from_fork
0.66 ± 4% +0.1 0.73 ± 5% perf-profile.calltrace.cycles-pp.inet6_recvmsg.svc_tcp_read_marker.svc_tcp_recvfrom.svc_handle_xprt.svc_recv
0.64 ± 6% +0.1 0.72 ± 5% perf-profile.calltrace.cycles-pp.tcp_recvmsg.inet6_recvmsg.svc_tcp_read_marker.svc_tcp_recvfrom.svc_handle_xprt
0.56 ± 6% +0.1 0.64 ± 6% perf-profile.calltrace.cycles-pp.perf_tp_event.perf_trace_sched_stat_runtime.update_curr.dequeue_entity.dequeue_task_fair
0.46 ± 44% +0.2 0.61 ± 6% perf-profile.calltrace.cycles-pp.__perf_event_overflow.perf_tp_event.perf_trace_sched_stat_runtime.update_curr.dequeue_entity
0.86 ± 6% +0.2 1.01 ± 4% perf-profile.calltrace.cycles-pp.__tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_sock_set_cork.xs_tcp_send_request
0.45 ± 44% +0.2 0.60 ± 6% perf-profile.calltrace.cycles-pp.perf_event_output_forward.__perf_event_overflow.perf_tp_event.perf_trace_sched_stat_runtime.update_curr
0.92 ± 6% +0.2 1.08 ± 5% perf-profile.calltrace.cycles-pp.__tcp_push_pending_frames.tcp_sock_set_cork.xs_tcp_send_request.xprt_request_transmit.xprt_transmit
0.92 ± 6% +0.2 1.08 ± 5% perf-profile.calltrace.cycles-pp.tcp_write_xmit.__tcp_push_pending_frames.tcp_sock_set_cork.xs_tcp_send_request.xprt_request_transmit
0.99 ± 6% +0.2 1.16 ± 5% perf-profile.calltrace.cycles-pp.tcp_sock_set_cork.xs_tcp_send_request.xprt_request_transmit.xprt_transmit.call_transmit
1.52 ± 3% +0.2 1.70 ± 6% perf-profile.calltrace.cycles-pp.process_backlog.__napi_poll.net_rx_action.__softirqentry_text_start.do_softirq
1.41 ± 2% +0.2 1.60 ± 6% perf-profile.calltrace.cycles-pp.tcp_v6_rcv.ip6_protocol_deliver_rcu.ip6_input_finish.__netif_receive_skb_one_core.process_backlog
1.52 ± 3% +0.2 1.71 ± 6% perf-profile.calltrace.cycles-pp.__napi_poll.net_rx_action.__softirqentry_text_start.do_softirq.__local_bh_enable_ip
1.48 ± 3% +0.2 1.68 ± 5% perf-profile.calltrace.cycles-pp.__netif_receive_skb_one_core.process_backlog.__napi_poll.net_rx_action.__softirqentry_text_start
1.43 ± 3% +0.2 1.63 ± 6% perf-profile.calltrace.cycles-pp.ip6_protocol_deliver_rcu.ip6_input_finish.__netif_receive_skb_one_core.process_backlog.__napi_poll
1.44 ± 3% +0.2 1.63 ± 6% perf-profile.calltrace.cycles-pp.ip6_input_finish.__netif_receive_skb_one_core.process_backlog.__napi_poll.net_rx_action
1.98 ± 3% +0.2 2.18 ± 6% perf-profile.calltrace.cycles-pp.inet6_csk_xmit.__tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_sock_set_cork
1.88 ± 3% +0.2 2.08 ± 6% perf-profile.calltrace.cycles-pp.ip6_finish_output2.ip6_xmit.inet6_csk_xmit.__tcp_transmit_skb.tcp_write_xmit
1.93 ± 3% +0.2 2.14 ± 6% perf-profile.calltrace.cycles-pp.ip6_xmit.inet6_csk_xmit.__tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames
1.67 ± 3% +0.2 1.88 ± 6% perf-profile.calltrace.cycles-pp.net_rx_action.__softirqentry_text_start.do_softirq.__local_bh_enable_ip.ip6_finish_output2
1.72 ± 3% +0.2 1.93 ± 6% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.do_softirq.__local_bh_enable_ip.ip6_finish_output2.ip6_xmit
1.73 ± 3% +0.2 1.95 ± 6% perf-profile.calltrace.cycles-pp.__local_bh_enable_ip.ip6_finish_output2.ip6_xmit.inet6_csk_xmit.__tcp_transmit_skb
1.73 ± 3% +0.2 1.95 ± 6% perf-profile.calltrace.cycles-pp.do_softirq.__local_bh_enable_ip.ip6_finish_output2.ip6_xmit.inet6_csk_xmit
1.43 ± 7% +0.3 1.68 ± 6% perf-profile.calltrace.cycles-pp.xs_tcp_send_request.xprt_request_transmit.xprt_transmit.call_transmit.__rpc_execute
1.58 ± 7% +0.3 1.86 ± 6% perf-profile.calltrace.cycles-pp.xprt_transmit.call_transmit.__rpc_execute.rpc_async_schedule.process_one_work
1.61 ± 7% +0.3 1.89 ± 6% perf-profile.calltrace.cycles-pp.call_transmit.__rpc_execute.rpc_async_schedule.process_one_work.worker_thread
1.56 ± 7% +0.3 1.84 ± 6% perf-profile.calltrace.cycles-pp.xprt_request_transmit.xprt_transmit.call_transmit.__rpc_execute.rpc_async_schedule
0.19 ±141% +0.5 0.66 ± 9% perf-profile.calltrace.cycles-pp.flush_smp_call_function_queue.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
3.28 ± 6% +0.5 3.79 ± 7% perf-profile.calltrace.cycles-pp.rpc_async_schedule.process_one_work.worker_thread.kthread.ret_from_fork
3.27 ± 6% +0.5 3.78 ± 7% perf-profile.calltrace.cycles-pp.__rpc_execute.rpc_async_schedule.process_one_work.worker_thread.kthread
6.26 ± 4% +0.7 6.98 ± 4% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
5.25 ± 6% -2.6 2.69 ± 6% perf-profile.children.cycles-pp.nfsd4_open
2.57 ± 6% -2.5 0.04 ± 45% perf-profile.children.cycles-pp.list_lru_walk_node
3.13 ± 7% -2.5 0.61 ± 4% perf-profile.children.cycles-pp.nfsd4_process_open2
2.91 ± 7% -2.5 0.40 ± 11% perf-profile.children.cycles-pp.nfs4_get_vfs_file
2.91 ± 7% -2.5 0.40 ± 10% perf-profile.children.cycles-pp.nfsd_do_file_acquire
2.73 ± 7% -2.5 0.22 ± 9% perf-profile.children.cycles-pp.nfsd_file_gc
8.40 ± 5% -2.3 6.10 ± 4% perf-profile.children.cycles-pp.nfsd4_proc_compound
8.68 ± 4% -2.3 6.41 ± 5% perf-profile.children.cycles-pp.nfsd_dispatch
9.18 ± 5% -2.2 6.96 ± 5% perf-profile.children.cycles-pp.svc_process
9.17 ± 5% -2.2 6.95 ± 5% perf-profile.children.cycles-pp.svc_process_common
15.41 ± 3% -1.7 13.67 ± 5% perf-profile.children.cycles-pp.nfsd
0.15 ± 23% -0.1 0.08 ± 40% perf-profile.children.cycles-pp.io_serial_in
0.09 ± 24% -0.1 0.03 ±100% perf-profile.children.cycles-pp.read_counters
0.09 ± 24% -0.1 0.03 ±100% perf-profile.children.cycles-pp.cmd_stat
0.09 ± 24% -0.1 0.03 ±100% perf-profile.children.cycles-pp.dispatch_events
0.09 ± 24% -0.1 0.03 ±100% perf-profile.children.cycles-pp.process_interval
0.16 ± 4% -0.0 0.14 ± 9% perf-profile.children.cycles-pp.__dev_queue_xmit
0.06 ± 16% +0.0 0.08 ± 20% perf-profile.children.cycles-pp.__folio_end_writeback
0.07 ± 8% +0.0 0.10 ± 13% perf-profile.children.cycles-pp.__handle_mm_fault
0.08 ± 10% +0.0 0.10 ± 13% perf-profile.children.cycles-pp.handle_mm_fault
0.09 ± 12% +0.0 0.12 ± 10% perf-profile.children.cycles-pp.llist_reverse_order
0.12 ± 11% +0.0 0.16 ± 7% perf-profile.children.cycles-pp.exc_page_fault
0.25 ± 5% +0.0 0.28 ± 6% perf-profile.children.cycles-pp.rpc_async_release
0.09 ± 15% +0.0 0.12 ± 10% perf-profile.children.cycles-pp.shmem_write_end
0.25 ± 6% +0.0 0.28 ± 6% perf-profile.children.cycles-pp.nfs_write_completion
0.08 ± 13% +0.0 0.12 ± 10% perf-profile.children.cycles-pp.do_user_addr_fault
0.27 ± 4% +0.0 0.31 ± 5% perf-profile.children.cycles-pp.asm_exc_page_fault
0.27 ± 9% +0.0 0.32 ± 8% perf-profile.children.cycles-pp.__skb_datagram_iter
0.14 ± 11% +0.0 0.19 ± 8% perf-profile.children.cycles-pp.__flush_smp_call_function_queue
0.23 ± 9% +0.0 0.27 ± 8% perf-profile.children.cycles-pp.call_decode
0.27 ± 9% +0.0 0.32 ± 8% perf-profile.children.cycles-pp.skb_copy_datagram_iter
0.28 ± 8% +0.1 0.34 ± 5% perf-profile.children.cycles-pp.nfsd_setuser
0.27 ± 9% +0.1 0.32 ± 9% perf-profile.children.cycles-pp.call_encode
0.30 ± 9% +0.1 0.35 ± 6% perf-profile.children.cycles-pp.nfsd_setuser_and_check_port
0.14 ± 14% +0.1 0.20 ± 15% perf-profile.children.cycles-pp.__smp_call_single_queue
0.23 ± 8% +0.1 0.29 ± 10% perf-profile.children.cycles-pp.kmem_cache_free
0.14 ± 14% +0.1 0.20 ± 13% perf-profile.children.cycles-pp.llist_add_batch
0.33 ± 12% +0.1 0.39 ± 10% perf-profile.children.cycles-pp.generic_write_end
0.24 ± 8% +0.1 0.30 ± 12% perf-profile.children.cycles-pp.finish_task_switch
0.10 ± 10% +0.1 0.16 ± 14% perf-profile.children.cycles-pp.ext4_mb_mark_diskspace_used
0.16 ± 12% +0.1 0.23 ± 10% perf-profile.children.cycles-pp.__slab_free
0.13 ± 9% +0.1 0.20 ± 10% perf-profile.children.cycles-pp.ext4_mb_new_blocks
0.09 ± 23% +0.1 0.16 ± 20% perf-profile.children.cycles-pp.run_ksoftirqd
0.30 ± 9% +0.1 0.38 ± 11% perf-profile.children.cycles-pp.ext4_ext_map_blocks
0.19 ± 5% +0.1 0.27 ± 9% perf-profile.children.cycles-pp.mpage_map_one_extent
0.27 ± 3% +0.1 0.35 ± 9% perf-profile.children.cycles-pp.mpage_map_and_submit_extent
0.62 ± 5% +0.1 0.70 ± 9% perf-profile.children.cycles-pp.__lock_sock
0.29 ± 8% +0.1 0.38 ± 7% perf-profile.children.cycles-pp.ext4_dirty_inode
0.38 ± 6% +0.1 0.46 ± 11% perf-profile.children.cycles-pp._raw_spin_lock_bh
0.27 ± 10% +0.1 0.36 ± 5% perf-profile.children.cycles-pp.ttwu_queue_wakelist
0.44 ± 8% +0.1 0.54 ± 9% perf-profile.children.cycles-pp.ext4_writepages
0.44 ± 4% +0.1 0.54 ± 9% perf-profile.children.cycles-pp.ext4_map_blocks
0.52 ± 11% +0.1 0.63 ± 8% perf-profile.children.cycles-pp.rcu_do_batch
0.29 ± 16% +0.1 0.40 ± 11% perf-profile.children.cycles-pp.sched_ttwu_pending
0.76 ± 3% +0.1 0.87 ± 6% perf-profile.children.cycles-pp.__filemap_fdatawrite_range
0.75 ± 3% +0.1 0.87 ± 6% perf-profile.children.cycles-pp.filemap_fdatawrite_wbc
0.28 ± 10% +0.1 0.41 ± 13% perf-profile.children.cycles-pp.crc32c_pcl_intel_update
0.84 ± 4% +0.1 0.96 ± 9% perf-profile.children.cycles-pp.lock_sock_nested
0.79 ± 3% +0.1 0.92 ± 6% perf-profile.children.cycles-pp.do_writepages
1.12 ± 4% +0.2 1.29 ± 8% perf-profile.children.cycles-pp.new_sync_write
0.48 ± 14% +0.2 0.67 ± 10% perf-profile.children.cycles-pp.flush_smp_call_function_queue
1.86 ± 8% +0.3 2.15 ± 10% perf-profile.children.cycles-pp.xprt_request_transmit
3.28 ± 6% +0.5 3.79 ± 7% perf-profile.children.cycles-pp.rpc_async_schedule
4.28 ± 5% +0.6 4.87 ± 8% perf-profile.children.cycles-pp.__rpc_execute
6.27 ± 4% +0.7 6.98 ± 4% perf-profile.children.cycles-pp.process_one_work
0.15 ± 23% -0.1 0.08 ± 40% perf-profile.self.cycles-pp.io_serial_in
0.06 ± 9% +0.0 0.08 ± 10% perf-profile.self.cycles-pp.jbd2_journal_add_journal_head
0.08 ± 14% +0.0 0.11 ± 6% perf-profile.self.cycles-pp.ttwu_queue_wakelist
0.09 ± 12% +0.0 0.12 ± 10% perf-profile.self.cycles-pp.llist_reverse_order
0.03 ±100% +0.0 0.07 ± 13% perf-profile.self.cycles-pp.svc_xprt_release
0.51 ± 2% +0.1 0.57 ± 4% perf-profile.self.cycles-pp.stack_access_ok
0.26 ± 6% +0.1 0.32 ± 11% perf-profile.self.cycles-pp._raw_spin_lock_bh
0.14 ± 14% +0.1 0.20 ± 15% perf-profile.self.cycles-pp.llist_add_batch
0.16 ± 12% +0.1 0.23 ± 10% perf-profile.self.cycles-pp.__slab_free
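The %change column in the table above is ordinary percent change between the two commit columns. A quick sketch (assuming the conventional (new - old) / old definition) reproduces the headline rows:

```shell
# (new - old) / old * 100, matching the %change column convention.
pct_change() { awk -v o="$1" -v n="$2" 'BEGIN { printf "%.1f\n", (n - o) / o * 100 }'; }
pct_change 3448 4818   # fsmark.files_per_sec  -> 39.7
pct_change 1740 1245   # fsmark.time.elapsed_time -> -28.4
```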
***************************************************************************************************
lkp-cpl-4sp1: 144 threads 4 sockets Intel(R) Xeon(R) Gold 5318H CPU @ 2.50GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs2/fs/iterations/kconfig/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase/ucode:
gcc-11/performance/1SSD/8K/nfsv4/btrfs/1x/x86_64-rhel-8.3/6t/debian-11.1-x86_64-20220510.cgz/NoSync/lkp-cpl-4sp1/60G/fsmark/0x7002501
commit:
c46203acdd ("NFSD: Trace filecache LRU activity")
4a0e73e635 ("NFSD: Leave open files out of the filecache LRU")
c46203acddd9b920 4a0e73e635e3f36b616ad5c943e
---------------- ---------------------------
%stddev %change %stddev
\ | \
4.143e+08 +3.1% 4.271e+08 fsmark.app_overhead
4134 +42.6% 5898 fsmark.files_per_sec
1451 -29.9% 1018 fsmark.time.elapsed_time
1451 -29.9% 1018 fsmark.time.elapsed_time.max
25.00 +40.0% 35.00 fsmark.time.percent_of_cpu_this_job_got
2.01e+11 -30.2% 1.402e+11 cpuidle..time
6.322e+08 -19.0% 5.124e+08 cpuidle..usage
1509 -28.7% 1076 uptime.boot
206498 -29.7% 145153 uptime.idle
95.40 -1.3% 94.15 iostat.cpu.idle
1.06 +61.6% 1.71 ± 2% iostat.cpu.iowait
3.52 +16.7% 4.11 iostat.cpu.system
376665 +32.1% 497638 vmstat.io.bo
1.00 +125.0% 2.25 ± 19% vmstat.procs.b
329374 +45.7% 480056 vmstat.system.cs
33570 ± 2% +18.4% 39756 ± 2% meminfo.Mapped
1839 -11.7% 1623 meminfo.Mlocked
43642 +18.6% 51769 ± 3% meminfo.Shmem
7223 ± 3% +51.8% 10965 ± 8% meminfo.Writeback
1.06 +0.7 1.71 ± 2% mpstat.cpu.all.iowait%
0.22 ± 2% +0.1 0.31 mpstat.cpu.all.soft%
2.41 +0.5 2.93 mpstat.cpu.all.sys%
0.02 +0.0 0.03 ? 2% mpstat.cpu.all.usr%
2298104 ±114% +116.1% 4965958 ± 50% numa-numastat.node1.numa_miss
2356463 ±110% +113.2% 5023281 ± 49% numa-numastat.node1.other_node
12840205 ± 13% +24.9% 16032617 ± 6% numa-numastat.node3.local_node
12892669 ± 12% +24.8% 16096191 ± 5% numa-numastat.node3.numa_hit
4781033 ± 34% -75.3% 1181721 ± 99% numa-numastat.node3.numa_miss
4833694 ± 34% -74.2% 1245220 ± 94% numa-numastat.node3.other_node
145.33 +19.7% 174.00 turbostat.Avg_MHz
4.83 +0.7 5.57 turbostat.Busy%
3014 +3.9% 3132 turbostat.Bzy_MHz
2.42 ± 6% +4.6 7.07 ±101% turbostat.C1%
4.573e+08 -28.3% 3.28e+08 ± 2% turbostat.IRQ
0.17 +0.1 0.26 ± 7% turbostat.POLL%
337.62 +2.4% 345.56 turbostat.PkgWatt
9.34 +3.4% 9.66 turbostat.RAMWatt
66.15 ± 22% +106.3% 136.47 ± 37% sched_debug.cfs_rq:/.MIN_vruntime.avg
7110 ± 15% +91.8% 13638 ± 38% sched_debug.cfs_rq:/.MIN_vruntime.max
677.23 ± 18% +94.2% 1314 ± 37% sched_debug.cfs_rq:/.MIN_vruntime.stddev
0.05 ± 3% +16.5% 0.06 ± 4% sched_debug.cfs_rq:/.h_nr_running.avg
66.15 ± 22% +106.3% 136.47 ± 37% sched_debug.cfs_rq:/.max_vruntime.avg
7110 ± 15% +91.8% 13638 ± 38% sched_debug.cfs_rq:/.max_vruntime.max
677.23 ± 18% +94.2% 1314 ± 37% sched_debug.cfs_rq:/.max_vruntime.stddev
0.05 ± 3% +17.1% 0.06 ± 4% sched_debug.cfs_rq:/.nr_running.avg
49.04 ± 2% +12.9% 55.36 ± 5% sched_debug.cfs_rq:/.runnable_avg.avg
-28763 -51.4% -13976 sched_debug.cfs_rq:/.spread0.avg
47.61 ± 3% +13.0% 53.79 ± 5% sched_debug.cfs_rq:/.util_avg.avg
13.36 ± 5% +12.9% 15.09 ± 8% sched_debug.cfs_rq:/.util_est_enqueued.avg
593.37 -9.0% 540.01 ± 2% sched_debug.cfs_rq:/.util_est_enqueued.max
323584 +8.5% 351191 ± 2% sched_debug.cpu.avg_idle.stddev
766299 -28.1% 550821 ± 3% sched_debug.cpu.clock.avg
766307 -28.1% 550829 ± 3% sched_debug.cpu.clock.max
766291 -28.1% 550813 ± 3% sched_debug.cpu.clock.min
757983 -28.1% 544726 ± 3% sched_debug.cpu.clock_task.avg
759900 -28.1% 546136 ± 3% sched_debug.cpu.clock_task.max
749089 -28.4% 536679 ± 3% sched_debug.cpu.clock_task.min
1540 ± 18% -20.9% 1218 ± 21% sched_debug.cpu.clock_task.stddev
28054 ± 2% -23.0% 21602 ± 2% sched_debug.cpu.curr->pid.max
2806 ± 2% -19.6% 2257 ± 2% sched_debug.cpu.curr->pid.stddev
0.04 ± 2% +23.9% 0.05 ± 8% sched_debug.cpu.nr_running.avg
0.21 ± 2% +12.2% 0.23 ± 7% sched_debug.cpu.nr_running.stddev
766292 -28.1% 550813 ± 3% sched_debug.cpu_clk
765418 -28.2% 549939 ± 3% sched_debug.ktime
766762 -28.1% 551270 ± 3% sched_debug.sched_clk
267479 ± 31% +49.5% 399750 ± 12% numa-meminfo.node0.AnonHugePages
301161 ± 27% +43.2% 431320 ± 12% numa-meminfo.node0.AnonPages
411982 ± 16% +34.7% 554848 ± 14% numa-meminfo.node0.AnonPages.max
10917572 ± 23% +32.1% 14423003 ± 9% numa-meminfo.node0.Inactive
302326 ± 26% +43.7% 434486 ± 12% numa-meminfo.node0.Inactive(anon)
10615245 ± 23% +31.8% 13988516 ± 10% numa-meminfo.node0.Inactive(file)
788566 ± 16% +21.4% 957508 ± 13% numa-meminfo.node0.SUnreclaim
1554 ± 15% +67.7% 2607 ± 21% numa-meminfo.node0.Writeback
196067 ± 22% -56.8% 84725 ± 28% numa-meminfo.node1.AnonHugePages
204926 ± 19% -54.1% 93983 ± 25% numa-meminfo.node1.AnonPages
372886 ± 6% -40.2% 223159 ± 8% numa-meminfo.node1.AnonPages.max
208375 ± 19% -53.2% 97432 ± 27% numa-meminfo.node1.Inactive(anon)
1884 ± 2% +35.7% 2556 ± 13% numa-meminfo.node1.Writeback
4311047 ± 5% -59.5% 1744442 ± 43% numa-meminfo.node2.Active
4309173 ± 5% -59.6% 1741161 ± 44% numa-meminfo.node2.Active(file)
238350 ± 17% -46.5% 127457 ± 8% numa-meminfo.node2.AnonHugePages
250313 ± 17% -44.0% 140123 ± 10% numa-meminfo.node2.AnonPages
98382 ± 26% -40.9% 58131 ± 19% numa-meminfo.node2.Dirty
19919717 ± 7% -32.8% 13391104 ± 8% numa-meminfo.node2.FilePages
14971231 ± 4% -26.1% 11066751 ± 13% numa-meminfo.node2.Inactive
252112 ± 16% -42.1% 146022 ± 15% numa-meminfo.node2.Inactive(anon)
14719118 ± 4% -25.8% 10920729 ± 13% numa-meminfo.node2.Inactive(file)
3835961 ± 8% -26.7% 2813345 ± 5% numa-meminfo.node2.KReclaimable
7870409 ± 16% +100.1% 15745825 ± 7% numa-meminfo.node2.MemFree
25151769 ± 5% -31.3% 17276353 ± 7% numa-meminfo.node2.MemUsed
3835961 ± 8% -26.7% 2813345 ± 5% numa-meminfo.node2.SReclaimable
985935 -23.7% 751841 ± 11% numa-meminfo.node2.SUnreclaim
4821896 ± 6% -26.1% 3565187 ± 5% numa-meminfo.node2.Slab
1869 ± 10% +57.0% 2936 ± 4% numa-meminfo.node2.Writeback
3294 ± 10% +73.3% 5709 ± 29% numa-meminfo.node3.Mapped
864.67 ± 10% +33.1% 1151 ± 19% numa-meminfo.node3.PageTables
769276 ± 10% +15.1% 885450 ± 11% numa-meminfo.node3.SUnreclaim
2054 ± 3% +37.2% 2819 ± 9% numa-meminfo.node3.Writeback
44099 ± 36% +434.5% 235724 ± 56% proc-vmstat.compact_daemon_free_scanned
604.33 ± 6% +77.6% 1073 ± 17% proc-vmstat.compact_daemon_wake
27690816 ± 25% -28.4% 19815672 ± 11% proc-vmstat.compact_free_scanned
189.67 ± 2% +267.1% 696.25 ± 32% proc-vmstat.kswapd_low_wmark_hit_quickly
3037734 -4.3% 2908587 proc-vmstat.nr_active_file
227523 -2.5% 221728 proc-vmstat.nr_anon_pages
1.433e+08 -6.6% 1.339e+08 proc-vmstat.nr_dirtied
72534 +4.7% 75956 proc-vmstat.nr_dirty
16470705 -3.5% 15892452 proc-vmstat.nr_file_pages
12005289 +5.8% 12703834 proc-vmstat.nr_free_pages
12729417 -3.5% 12278550 proc-vmstat.nr_inactive_file
8401 ± 2% +18.4% 9951 ± 2% proc-vmstat.nr_mapped
459.67 -11.8% 405.25 proc-vmstat.nr_mlock
10917 +18.6% 12953 ± 3% proc-vmstat.nr_shmem
3162367 -3.4% 3055913 proc-vmstat.nr_slab_reclaimable
870083 -1.9% 853258 proc-vmstat.nr_slab_unreclaimable
1844 +46.5% 2701 ± 9% proc-vmstat.nr_writeback
1.431e+08 -6.6% 1.338e+08 proc-vmstat.nr_written
3037734 -4.3% 2908591 proc-vmstat.nr_zone_active_file
12729444 -3.5% 12278598 proc-vmstat.nr_zone_inactive_file
74465 +5.8% 78752 proc-vmstat.nr_zone_write_pending
9959 ± 7% +120.2% 21927 ± 14% proc-vmstat.numa_hint_faults_local
736.67 ± 7% +58.8% 1170 ± 14% proc-vmstat.pageoutrun
2963172 +8.5% 3214287 proc-vmstat.pgactivate
4055119 -26.2% 2994679 proc-vmstat.pgfault
984.33 ± 21% +79.6% 1767 ± 37% proc-vmstat.pgmigrate_fail
5.482e+08 -7.2% 5.086e+08 proc-vmstat.pgpgout
153472 -27.6% 111098 proc-vmstat.pgreuse
3027210 +10.8% 3354204 proc-vmstat.pgscan_file
3026026 +10.8% 3353563 proc-vmstat.pgscan_kswapd
3027171 +10.8% 3354164 proc-vmstat.pgsteal_file
3025987 +10.8% 3353523 proc-vmstat.pgsteal_kswapd
5516800 +4.1% 5742944 proc-vmstat.slabs_scanned
92355 +7.7% 99441 ± 2% proc-vmstat.workingset_nodes
75289 ± 27% +43.2% 107836 ± 12% numa-vmstat.node0.nr_anon_pages
130.00 ± 32% +49.8% 194.75 ± 13% numa-vmstat.node0.nr_anon_transparent_hugepages
75581 ± 26% +43.7% 108629 ± 12% numa-vmstat.node0.nr_inactive_anon
2654078 ± 23% +31.8% 3497062 ± 10% numa-vmstat.node0.nr_inactive_file
197152 ± 16% +21.4% 239363 ± 13% numa-vmstat.node0.nr_slab_unreclaimable
383.33 ± 14% +70.4% 653.25 ± 22% numa-vmstat.node0.nr_writeback
75581 ± 26% +43.7% 108629 ± 12% numa-vmstat.node0.nr_zone_inactive_anon
2654085 ± 23% +31.8% 3497082 ± 10% numa-vmstat.node0.nr_zone_inactive_file
51222 ± 19% -54.1% 23503 ± 25% numa-vmstat.node1.nr_anon_pages
95.33 ± 22% -57.3% 40.75 ± 28% numa-vmstat.node1.nr_anon_transparent_hugepages
52099 ± 19% -53.2% 24372 ± 27% numa-vmstat.node1.nr_inactive_anon
475.00 ± 2% +33.6% 634.75 ± 14% numa-vmstat.node1.nr_writeback
52099 ± 19% -53.2% 24372 ± 27% numa-vmstat.node1.nr_zone_inactive_anon
2298104 ±114% +116.1% 4965958 ± 50% numa-vmstat.node1.numa_miss
2356463 ±110% +113.2% 5023281 ± 49% numa-vmstat.node1.numa_other
1077297 ± 5% -59.6% 435264 ± 44% numa-vmstat.node2.nr_active_file
62561 ± 17% -44.0% 35042 ± 10% numa-vmstat.node2.nr_anon_pages
48104444 ± 6% -54.3% 21967588 ± 33% numa-vmstat.node2.nr_dirtied
24587 ± 26% -40.8% 14546 ± 19% numa-vmstat.node2.nr_dirty
4979931 ± 7% -32.8% 3347355 ± 8% numa-vmstat.node2.nr_file_pages
1967602 ± 16% +100.1% 3937001 ± 7% numa-vmstat.node2.nr_free_pages
63016 ± 16% -42.0% 36528 ± 15% numa-vmstat.node2.nr_inactive_anon
3679775 ± 4% -25.8% 2729774 ± 13% numa-vmstat.node2.nr_inactive_file
958970 ± 8% -26.7% 703232 ± 5% numa-vmstat.node2.nr_slab_reclaimable
246490 -23.8% 187934 ± 11% numa-vmstat.node2.nr_slab_unreclaimable
459.67 ± 11% +58.8% 730.00 ± 5% numa-vmstat.node2.nr_writeback
48044016 ± 6% -54.3% 21947898 ± 33% numa-vmstat.node2.nr_written
1077297 ± 5% -59.6% 435265 ± 44% numa-vmstat.node2.nr_zone_active_file
63016 ± 16% -42.0% 36528 ± 15% numa-vmstat.node2.nr_zone_inactive_anon
3679781 ± 4% -25.8% 2729783 ± 13% numa-vmstat.node2.nr_zone_inactive_file
25072 ± 25% -39.0% 15295 ± 18% numa-vmstat.node2.nr_zone_write_pending
839.00 ± 10% +72.6% 1448 ± 30% numa-vmstat.node3.nr_mapped
216.00 ± 10% +33.1% 287.50 ± 19% numa-vmstat.node3.nr_page_table_pages
192311 ± 10% +15.1% 221327 ± 11% numa-vmstat.node3.nr_slab_unreclaimable
513.67 ± 3% +35.9% 698.00 ± 10% numa-vmstat.node3.nr_writeback
12892313 ± 12% +24.8% 16095872 ± 5% numa-vmstat.node3.numa_hit
12839849 ± 13% +24.9% 16032298 ± 6% numa-vmstat.node3.numa_local
4781033 ± 34% -75.3% 1181721 ± 99% numa-vmstat.node3.numa_miss
4833694 ± 34% -74.2% 1245220 ± 94% numa-vmstat.node3.numa_other
24.77 ± 2% -16.3% 20.72 ± 2% perf-stat.i.MPKI
1.754e+09 +16.5% 2.043e+09 perf-stat.i.branch-instructions
1.04 ± 6% +0.1 1.14 ± 5% perf-stat.i.branch-miss-rate%
18436526 ± 6% +27.6% 23521123 ± 5% perf-stat.i.branch-misses
27.26 +6.2 33.49 ± 3% perf-stat.i.cache-miss-rate%
57023945 ± 2% +24.2% 70804771 perf-stat.i.cache-misses
333141 +45.1% 483235 perf-stat.i.context-switches
2.012e+10 +20.8% 2.43e+10 perf-stat.i.cpu-cycles
334.80 +29.1% 432.35 perf-stat.i.cpu-migrations
0.12 ± 3% -0.0 0.10 ± 8% perf-stat.i.dTLB-load-miss-rate%
2.339e+09 +23.5% 2.89e+09 perf-stat.i.dTLB-loads
1.16e+09 +24.7% 1.446e+09 perf-stat.i.dTLB-stores
3432298 +27.7% 4384238 ± 2% perf-stat.i.iTLB-load-misses
7994406 +30.9% 10462000 perf-stat.i.iTLB-loads
8.552e+09 +19.8% 1.024e+10 perf-stat.i.instructions
2516 -5.4% 2380 perf-stat.i.instructions-per-iTLB-miss
0.27 ± 8% +35.7% 0.36 ± 6% perf-stat.i.major-faults
0.14 +20.9% 0.17 perf-stat.i.metric.GHz
259.32 ± 2% +40.8% 365.25 ± 7% perf-stat.i.metric.K/sec
37.86 +20.7% 45.68 perf-stat.i.metric.M/sec
2675 +4.4% 2793 perf-stat.i.minor-faults
18165804 +28.7% 23374912 perf-stat.i.node-load-misses
2624237 ± 23% +37.2% 3599336 ± 7% perf-stat.i.node-loads
4740291 +47.1% 6974820 perf-stat.i.node-store-misses
670283 ± 3% +35.6% 908806 perf-stat.i.node-stores
2675 +4.4% 2794 perf-stat.i.page-faults
24.77 ± 2% -15.8% 20.86 ± 2% perf-stat.overall.MPKI
1.05 ± 6% +0.1 1.15 ± 4% perf-stat.overall.branch-miss-rate%
27.21 ± 2% +6.3 33.48 ± 2% perf-stat.overall.cache-miss-rate%
0.12 ± 3% -0.0 0.10 ± 7% perf-stat.overall.dTLB-load-miss-rate%
2498 -6.3% 2341 perf-stat.overall.instructions-per-iTLB-miss
87.72 +0.8 88.55 perf-stat.overall.node-store-miss-rate%
1.747e+09 +16.8% 2.041e+09 perf-stat.ps.branch-instructions
18281097 ± 6% +28.0% 23408559 ± 5% perf-stat.ps.branch-misses
57413570 ± 2% +24.4% 71407605 perf-stat.ps.cache-misses
329744 +45.9% 480983 perf-stat.ps.context-switches
2.01e+10 +21.0% 2.433e+10 perf-stat.ps.cpu-cycles
332.44 +29.6% 430.99 perf-stat.ps.cpu-migrations
2.332e+09 +23.8% 2.888e+09 perf-stat.ps.dTLB-loads
1.155e+09 +25.0% 1.443e+09 perf-stat.ps.dTLB-stores
3411098 +28.1% 4370844 ± 2% perf-stat.ps.iTLB-load-misses
7931724 +31.4% 10419759 perf-stat.ps.iTLB-loads
8.52e+09 +20.1% 1.023e+10 perf-stat.ps.instructions
0.24 ? 8% +37.7% 0.34 ? 6% perf-stat.ps.major-faults
2651 +4.3% 2766 perf-stat.ps.minor-faults
18280474 +28.8% 23538582 perf-stat.ps.node-load-misses
2647359 ? 23% +38.0% 3654167 ? 7% perf-stat.ps.node-loads
4737626 +47.6% 6991866 perf-stat.ps.node-store-misses
663157 ? 3% +36.4% 904383 perf-stat.ps.node-stores
2652 +4.3% 2766 perf-stat.ps.page-faults
1.238e+13 -15.7% 1.044e+13 perf-stat.total.instructions
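Two of the `perf-stat.overall` rates reported above can be cross-checked directly from the per-second (`perf-stat.ps`) counters. A quick sketch using the baseline-column values, purely for illustration:

```python
# Cross-check two derived perf-stat.overall rates from the perf-stat.ps
# (per-second) counters listed above; all values are from the baseline column.
instructions      = 8.52e9     # perf-stat.ps.instructions
itlb_load_misses  = 3411098    # perf-stat.ps.iTLB-load-misses
node_store_misses = 4737626    # perf-stat.ps.node-store-misses
node_stores       = 663157     # perf-stat.ps.node-stores

# instructions retired per iTLB load miss
insns_per_itlb_miss = instructions / itlb_load_misses

# fraction of node store accesses that missed, as a percentage
node_store_miss_rate = 100 * node_store_misses / (node_store_misses + node_stores)

print(round(insns_per_itlb_miss))      # prints 2498, matching perf-stat.overall.instructions-per-iTLB-miss
print(round(node_store_miss_rate, 2))  # prints 87.72, matching perf-stat.overall.node-store-miss-rate%
```

The same arithmetic on the patched column reproduces the 2341 and 88.55% figures, which confirms the "overall" rows are simple ratios of the raw counters.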
3.11 ? 5% -2.5 0.65 ? 3% perf-profile.calltrace.cycles-pp.nfsd4_process_open2.nfsd4_open.nfsd4_proc_compound.nfsd_dispatch.svc_process_common
4.80 ? 3% -2.3 2.49 ? 2% perf-profile.calltrace.cycles-pp.nfsd4_open.nfsd4_proc_compound.nfsd_dispatch.svc_process_common.svc_process
1.33 ? 6% -0.2 1.14 ? 11% perf-profile.calltrace.cycles-pp.__irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.63 ? 4% +0.0 0.66 ? 3% perf-profile.calltrace.cycles-pp.svc_xprt_received.svc_tcp_recvfrom.svc_handle_xprt.svc_recv.nfsd
1.43 ? 2% +0.1 1.48 ? 2% perf-profile.calltrace.cycles-pp.__tcp_push_pending_frames.tcp_sock_set_cork.svc_tcp_sendto.svc_send.nfsd
0.85 ? 2% +0.1 0.90 ? 4% perf-profile.calltrace.cycles-pp.update_sg_lb_stats.update_sd_lb_stats.find_busiest_group.load_balance.newidle_balance
1.43 ? 2% +0.1 1.48 ? 2% perf-profile.calltrace.cycles-pp.tcp_write_xmit.__tcp_push_pending_frames.tcp_sock_set_cork.svc_tcp_sendto.svc_send
1.01 ? 3% +0.1 1.06 perf-profile.calltrace.cycles-pp.btrfs_create_new_inode.btrfs_create_common.vfs_create.dentry_create.nfsd4_create_file
0.88 ? 3% +0.1 0.94 ? 5% perf-profile.calltrace.cycles-pp.find_busiest_group.load_balance.newidle_balance.pick_next_task_fair.__schedule
0.87 ? 3% +0.1 0.93 ? 5% perf-profile.calltrace.cycles-pp.update_sd_lb_stats.find_busiest_group.load_balance.newidle_balance.pick_next_task_fair
0.55 ? 3% +0.1 0.61 ? 8% perf-profile.calltrace.cycles-pp.filp_close.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
0.90 ? 3% +0.1 0.96 ? 4% perf-profile.calltrace.cycles-pp.load_balance.newidle_balance.pick_next_task_fair.__schedule.schedule
0.55 ? 3% +0.1 0.62 ? 8% perf-profile.calltrace.cycles-pp.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
0.73 ? 6% +0.1 0.82 ? 4% perf-profile.calltrace.cycles-pp.tcp_recvmsg.inet6_recvmsg.svc_tcp_read_marker.svc_tcp_recvfrom.svc_handle_xprt
0.74 ? 7% +0.1 0.82 ? 3% perf-profile.calltrace.cycles-pp.inet6_recvmsg.svc_tcp_read_marker.svc_tcp_recvfrom.svc_handle_xprt.svc_recv
0.97 ? 3% +0.1 1.06 ? 4% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.worker_thread.kthread
1.13 ? 2% +0.1 1.22 perf-profile.calltrace.cycles-pp.btrfs_create_common.vfs_create.dentry_create.nfsd4_create_file.do_open_lookup
1.56 ? 5% +0.1 1.65 ? 3% perf-profile.calltrace.cycles-pp.tcp_v6_rcv.ip6_protocol_deliver_rcu.ip6_input_finish.__netif_receive_skb_one_core.process_backlog
1.11 +0.1 1.20 perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.svc_get_next_xprt.svc_recv.nfsd
0.95 ? 3% +0.1 1.04 ? 4% perf-profile.calltrace.cycles-pp.newidle_balance.pick_next_task_fair.__schedule.schedule.worker_thread
1.10 +0.1 1.19 perf-profile.calltrace.cycles-pp.__schedule.schedule.schedule_timeout.svc_get_next_xprt.svc_recv
0.80 ? 6% +0.1 0.89 ? 2% perf-profile.calltrace.cycles-pp.svc_tcp_read_marker.svc_tcp_recvfrom.svc_handle_xprt.svc_recv.nfsd
1.64 ? 4% +0.1 1.74 ? 3% perf-profile.calltrace.cycles-pp.__napi_poll.net_rx_action.__softirqentry_text_start.do_softirq.__local_bh_enable_ip
1.64 ? 4% +0.1 1.74 ? 3% perf-profile.calltrace.cycles-pp.process_backlog.__napi_poll.net_rx_action.__softirqentry_text_start.do_softirq
1.61 ? 4% +0.1 1.71 ? 3% perf-profile.calltrace.cycles-pp.__netif_receive_skb_one_core.process_backlog.__napi_poll.net_rx_action.__softirqentry_text_start
1.57 ? 5% +0.1 1.67 ? 3% perf-profile.calltrace.cycles-pp.ip6_input_finish.__netif_receive_skb_one_core.process_backlog.__napi_poll.net_rx_action
1.63 ? 2% +0.1 1.73 ? 3% perf-profile.calltrace.cycles-pp.tcp_sock_set_cork.svc_tcp_sendto.svc_send.nfsd.kthread
1.18 ? 2% +0.1 1.28 perf-profile.calltrace.cycles-pp.vfs_create.dentry_create.nfsd4_create_file.do_open_lookup.nfsd4_open
1.15 +0.1 1.26 perf-profile.calltrace.cycles-pp.schedule_timeout.svc_get_next_xprt.svc_recv.nfsd.kthread
1.22 +0.1 1.33 perf-profile.calltrace.cycles-pp.dentry_create.nfsd4_create_file.do_open_lookup.nfsd4_open.nfsd4_proc_compound
1.89 ? 5% +0.1 2.02 ? 4% perf-profile.calltrace.cycles-pp.__local_bh_enable_ip.ip6_finish_output2.ip6_xmit.inet6_csk_xmit.__tcp_transmit_skb
1.55 +0.1 1.68 perf-profile.calltrace.cycles-pp.nfsd4_create_file.do_open_lookup.nfsd4_open.nfsd4_proc_compound.nfsd_dispatch
1.89 ? 5% +0.1 2.02 ? 4% perf-profile.calltrace.cycles-pp.do_softirq.__local_bh_enable_ip.ip6_finish_output2.ip6_xmit.inet6_csk_xmit
2.08 ? 5% +0.1 2.21 ? 3% perf-profile.calltrace.cycles-pp.ip6_xmit.inet6_csk_xmit.__tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames
2.13 ? 5% +0.1 2.27 ? 3% perf-profile.calltrace.cycles-pp.inet6_csk_xmit.__tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_sock_set_cork
1.87 ? 5% +0.1 2.01 ? 3% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.do_softirq.__local_bh_enable_ip.ip6_finish_output2.ip6_xmit
1.59 +0.1 1.72 perf-profile.calltrace.cycles-pp.do_open_lookup.nfsd4_open.nfsd4_proc_compound.nfsd_dispatch.svc_process_common
1.81 ? 5% +0.1 1.95 ? 3% perf-profile.calltrace.cycles-pp.net_rx_action.__softirqentry_text_start.do_softirq.__local_bh_enable_ip.ip6_finish_output2
1.15 ? 2% +0.1 1.30 ? 9% perf-profile.calltrace.cycles-pp.do_mkdirat.__x64_sys_mkdir.do_syscall_64.entry_SYSCALL_64_after_hwframe.mkdir
1.34 +0.1 1.48 perf-profile.calltrace.cycles-pp.svc_get_next_xprt.svc_recv.nfsd.kthread.ret_from_fork
1.15 ? 3% +0.1 1.29 ? 9% perf-profile.calltrace.cycles-pp.filename_create.do_mkdirat.__x64_sys_mkdir.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.18 +0.1 1.32 ? 9% perf-profile.calltrace.cycles-pp.mkdir
0.79 ? 6% +0.1 0.93 ? 14% perf-profile.calltrace.cycles-pp._nfs4_open_and_get_state._nfs4_do_open.nfs4_do_open.nfs4_atomic_open.nfs_atomic_open
1.16 ? 2% +0.1 1.31 ? 8% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.mkdir
1.16 ? 2% +0.1 1.31 ? 8% perf-profile.calltrace.cycles-pp.__x64_sys_mkdir.do_syscall_64.entry_SYSCALL_64_after_hwframe.mkdir
2.03 ? 5% +0.1 2.18 ? 3% perf-profile.calltrace.cycles-pp.ip6_finish_output2.ip6_xmit.inet6_csk_xmit.__tcp_transmit_skb.tcp_write_xmit
0.85 ? 6% +0.1 1.00 ? 14% perf-profile.calltrace.cycles-pp.nfs4_atomic_open.nfs_atomic_open.lookup_open.open_last_lookups.path_openat
0.85 ? 6% +0.1 1.00 ? 14% perf-profile.calltrace.cycles-pp._nfs4_do_open.nfs4_do_open.nfs4_atomic_open.nfs_atomic_open.lookup_open
1.16 ? 2% +0.1 1.31 ? 9% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.mkdir
0.85 ? 6% +0.1 1.00 ? 14% perf-profile.calltrace.cycles-pp.nfs4_do_open.nfs4_atomic_open.nfs_atomic_open.lookup_open.open_last_lookups
0.90 ? 6% +0.2 1.05 ? 15% perf-profile.calltrace.cycles-pp.nfs_atomic_open.lookup_open.open_last_lookups.path_openat.do_filp_open
0.94 ? 5% +0.2 1.10 ? 15% perf-profile.calltrace.cycles-pp.lookup_open.open_last_lookups.path_openat.do_filp_open.do_sys_openat2
0.96 ? 5% +0.2 1.12 ? 14% perf-profile.calltrace.cycles-pp.open_last_lookups.path_openat.do_filp_open.do_sys_openat2.__x64_sys_openat
1.07 ? 4% +0.2 1.24 ? 14% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.open64
1.03 ? 4% +0.2 1.22 ? 14% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.03 ? 4% +0.2 1.21 ? 14% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.__x64_sys_openat.do_syscall_64
1.05 ? 4% +0.2 1.24 ? 14% perf-profile.calltrace.cycles-pp.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
1.07 ? 4% +0.2 1.26 ? 13% perf-profile.calltrace.cycles-pp.open64
1.05 ? 4% +0.2 1.24 ? 14% perf-profile.calltrace.cycles-pp.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
1.06 ? 5% +0.2 1.24 ? 14% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
0.35 ? 70% +0.2 0.54 ? 2% perf-profile.calltrace.cycles-pp.perf_trace_sched_switch.__schedule.schedule.worker_thread.kthread
2.45 ? 6% +0.2 2.67 ? 3% perf-profile.calltrace.cycles-pp.__schedule.schedule.worker_thread.kthread.ret_from_fork
2.47 ? 6% +0.2 2.68 ? 3% perf-profile.calltrace.cycles-pp.schedule.worker_thread.kthread.ret_from_fork
1.29 ? 9% +0.2 1.51 ? 7% perf-profile.calltrace.cycles-pp.xs_stream_data_receive_workfn.process_one_work.worker_thread.kthread.ret_from_fork
1.25 ? 8% +0.2 1.48 ? 7% perf-profile.calltrace.cycles-pp.xs_read_stream.xs_stream_data_receive_workfn.process_one_work.worker_thread.kthread
1.17 ? 21% +0.2 1.39 ? 4% perf-profile.calltrace.cycles-pp.tcp_rcv_established.tcp_v6_do_rcv.tcp_v6_rcv.ip6_protocol_deliver_rcu.ip6_input_finish
1.18 ? 21% +0.2 1.40 ? 4% perf-profile.calltrace.cycles-pp.tcp_v6_do_rcv.tcp_v6_rcv.ip6_protocol_deliver_rcu.ip6_input_finish.__netif_receive_skb_one_core
1.58 ? 12% +0.2 1.82 ? 4% perf-profile.calltrace.cycles-pp.xs_tcp_send_request.xprt_request_transmit.xprt_transmit.call_transmit.__rpc_execute
0.87 ? 25% +0.2 1.11 ? 11% perf-profile.calltrace.cycles-pp.btrfs_work_helper.process_one_work.worker_thread.kthread.ret_from_fork
1.41 ? 18% +0.3 1.68 ? 11% perf-profile.calltrace.cycles-pp.__btrfs_cow_block.btrfs_cow_block.btrfs_search_slot.btrfs_lookup_csum.btrfs_csum_file_blocks
1.41 ? 18% +0.3 1.68 ? 11% perf-profile.calltrace.cycles-pp.btrfs_cow_block.btrfs_search_slot.btrfs_lookup_csum.btrfs_csum_file_blocks.log_csums
1.74 ? 13% +0.3 2.02 ? 4% perf-profile.calltrace.cycles-pp.xprt_transmit.call_transmit.__rpc_execute.rpc_async_schedule.process_one_work
1.49 ? 18% +0.3 1.77 ? 12% perf-profile.calltrace.cycles-pp.btrfs_lookup_csum.btrfs_csum_file_blocks.log_csums.copy_items.copy_inode_items_to_log
1.76 ? 13% +0.3 2.04 ? 4% perf-profile.calltrace.cycles-pp.call_transmit.__rpc_execute.rpc_async_schedule.process_one_work.worker_thread
1.72 ? 13% +0.3 2.00 ? 4% perf-profile.calltrace.cycles-pp.xprt_request_transmit.xprt_transmit.call_transmit.__rpc_execute.rpc_async_schedule
1.48 ? 18% +0.3 1.77 ? 11% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_lookup_csum.btrfs_csum_file_blocks.log_csums.copy_items
2.79 ? 2% +0.3 3.08 ? 3% perf-profile.calltrace.cycles-pp.svc_tcp_sendto.svc_send.nfsd.kthread.ret_from_fork
1.59 ? 18% +0.3 1.90 ? 11% perf-profile.calltrace.cycles-pp.btrfs_csum_file_blocks.log_csums.copy_items.copy_inode_items_to_log.btrfs_log_inode
1.59 ? 18% +0.3 1.90 ? 11% perf-profile.calltrace.cycles-pp.log_csums.copy_items.copy_inode_items_to_log.btrfs_log_inode.btrfs_log_inode_parent
2.86 ? 2% +0.3 3.17 ? 3% perf-profile.calltrace.cycles-pp.svc_send.nfsd.kthread.ret_from_fork
3.56 ? 3% +0.3 3.90 perf-profile.calltrace.cycles-pp.svc_recv.nfsd.kthread.ret_from_fork
0.17 ?141% +0.4 0.52 ? 2% perf-profile.calltrace.cycles-pp.perf_tp_event.perf_trace_sched_switch.__schedule.schedule.worker_thread
2.28 ? 17% +0.4 2.64 ? 10% perf-profile.calltrace.cycles-pp.copy_items.copy_inode_items_to_log.btrfs_log_inode.btrfs_log_inode_parent.btrfs_log_dentry_safe
2.83 ? 18% +0.4 3.26 ? 8% perf-profile.calltrace.cycles-pp.btrfs_log_inode.btrfs_log_inode_parent.btrfs_log_dentry_safe.btrfs_sync_file.btrfs_do_write_iter
3.48 ? 10% +0.6 4.06 ? 4% perf-profile.calltrace.cycles-pp.__rpc_execute.rpc_async_schedule.process_one_work.worker_thread.kthread
0.00 +0.6 0.58 ? 5% perf-profile.calltrace.cycles-pp.flush_smp_call_function_queue.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
3.48 ? 10% +0.6 4.06 ? 4% perf-profile.calltrace.cycles-pp.rpc_async_schedule.process_one_work.worker_thread.kthread.ret_from_fork
2.97 ? 18% +0.9 3.84 ? 8% perf-profile.calltrace.cycles-pp.btrfs_log_dentry_safe.btrfs_sync_file.btrfs_do_write_iter.do_iter_readv_writev.do_iter_write
2.96 ? 18% +0.9 3.84 ? 8% perf-profile.calltrace.cycles-pp.btrfs_log_inode_parent.btrfs_log_dentry_safe.btrfs_sync_file.btrfs_do_write_iter.do_iter_readv_writev
6.86 ? 5% +1.1 7.96 ? 5% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
9.49 ? 6% +1.3 10.82 ? 4% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
2.87 ? 5% -2.5 0.42 ? 6% perf-profile.children.cycles-pp.nfs4_get_vfs_file
3.11 ? 5% -2.5 0.65 ? 3% perf-profile.children.cycles-pp.nfsd4_process_open2
2.68 ? 5% -2.5 0.23 ? 9% perf-profile.children.cycles-pp.nfsd_file_gc
2.86 ? 5% -2.5 0.41 ? 5% perf-profile.children.cycles-pp.nfsd_do_file_acquire
2.48 ? 6% -2.4 0.03 ?100% perf-profile.children.cycles-pp.list_lru_walk_node
4.80 ? 3% -2.3 2.49 ? 2% perf-profile.children.cycles-pp.nfsd4_open
0.59 ? 6% -0.1 0.48 ? 18% perf-profile.children.cycles-pp.process_simple
0.26 ? 11% -0.1 0.15 ? 9% perf-profile.children.cycles-pp.serial8250_console_write
0.25 ? 9% -0.1 0.15 ? 10% perf-profile.children.cycles-pp.wait_for_lsr
0.26 ? 11% -0.1 0.16 ? 8% perf-profile.children.cycles-pp.vprintk_emit
0.26 ? 11% -0.1 0.16 ? 8% perf-profile.children.cycles-pp.console_unlock
0.26 ? 11% -0.1 0.16 ? 8% perf-profile.children.cycles-pp.console_emit_next_record
0.26 ? 13% -0.1 0.16 ? 9% perf-profile.children.cycles-pp.irq_work_run_list
0.25 ? 10% -0.1 0.16 ? 8% perf-profile.children.cycles-pp.asm_sysvec_irq_work
0.25 ? 10% -0.1 0.16 ? 8% perf-profile.children.cycles-pp.sysvec_irq_work
0.25 ? 10% -0.1 0.16 ? 8% perf-profile.children.cycles-pp.__sysvec_irq_work
0.25 ? 10% -0.1 0.16 ? 8% perf-profile.children.cycles-pp.irq_work_single
0.25 ? 11% -0.1 0.16 ? 8% perf-profile.children.cycles-pp.irq_work_run
0.25 ? 12% -0.1 0.16 ? 8% perf-profile.children.cycles-pp._printk
0.16 ? 10% -0.1 0.10 ? 13% perf-profile.children.cycles-pp.io_serial_in
0.40 ? 4% -0.1 0.35 ? 7% perf-profile.children.cycles-pp.native_sched_clock
0.40 ? 3% -0.0 0.35 ? 7% perf-profile.children.cycles-pp.insert_with_overflow
0.18 ? 11% -0.0 0.13 ? 10% perf-profile.children.cycles-pp.check_extent_data_item
0.10 ? 17% -0.0 0.05 ? 62% perf-profile.children.cycles-pp.cpuidle_governor_latency_req
0.39 ? 5% -0.0 0.35 ? 4% perf-profile.children.cycles-pp.native_irq_return_iret
0.27 ? 6% -0.0 0.24 ? 5% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.15 ? 8% -0.0 0.12 ? 13% perf-profile.children.cycles-pp.ksys_read
0.15 ? 11% -0.0 0.12 ? 5% perf-profile.children.cycles-pp.check_cpu_stall
0.07 ? 7% -0.0 0.04 ? 58% perf-profile.children.cycles-pp.vsnprintf
0.11 ? 11% -0.0 0.09 ? 7% perf-profile.children.cycles-pp.seq_read_iter
0.09 ? 9% -0.0 0.07 ? 12% perf-profile.children.cycles-pp.nfsd_file_free
0.08 ? 5% -0.0 0.06 ? 13% perf-profile.children.cycles-pp.call_rcu
0.07 ? 6% -0.0 0.06 ? 7% perf-profile.children.cycles-pp.read
0.06 ? 7% +0.0 0.08 ? 10% perf-profile.children.cycles-pp.queue_delayed_work_on
0.06 ? 13% +0.0 0.08 ? 6% perf-profile.children.cycles-pp.xprt_complete_rqst
0.06 ? 7% +0.0 0.08 ? 8% perf-profile.children.cycles-pp.svc_xprt_release
0.08 ? 5% +0.0 0.10 ? 7% perf-profile.children.cycles-pp.llist_reverse_order
0.19 ? 4% +0.0 0.21 ? 3% perf-profile.children.cycles-pp.__switch_to_asm
0.05 +0.0 0.07 ? 10% perf-profile.children.cycles-pp.nfs4_close_prepare
0.11 ? 7% +0.0 0.13 ? 9% perf-profile.children.cycles-pp.svc_alloc_arg
0.06 ? 13% +0.0 0.08 perf-profile.children.cycles-pp.xprt_request_wait_receive
0.15 ? 5% +0.0 0.17 ? 4% perf-profile.children.cycles-pp.__flush_smp_call_function_queue
0.13 ? 7% +0.0 0.15 ? 2% perf-profile.children.cycles-pp.rpc_xdr_encode
0.07 ? 14% +0.0 0.09 ? 12% perf-profile.children.cycles-pp.rpc_task_set_transport
0.07 ? 7% +0.0 0.09 ? 9% perf-profile.children.cycles-pp.xprt_free_slot
0.06 ? 8% +0.0 0.08 ? 5% perf-profile.children.cycles-pp.xprt_lookup_rqst
0.09 ? 15% +0.0 0.11 ? 13% perf-profile.children.cycles-pp.asm_sysvec_call_function_single
0.06 ? 23% +0.0 0.08 ? 17% perf-profile.children.cycles-pp.sysvec_call_function_single
0.29 ? 4% +0.0 0.31 ? 2% perf-profile.children.cycles-pp.kmem_cache_free
0.07 ? 18% +0.0 0.09 ? 7% perf-profile.children.cycles-pp.__writeback_single_inode
0.09 ? 5% +0.0 0.11 ? 7% perf-profile.children.cycles-pp.nfsd4_close
0.63 ? 4% +0.0 0.66 ? 3% perf-profile.children.cycles-pp.svc_xprt_received
0.18 ? 4% +0.0 0.21 ? 7% perf-profile.children.cycles-pp.__switch_to
0.06 ? 13% +0.0 0.09 ? 14% perf-profile.children.cycles-pp.nfs_fhget
0.09 ? 22% +0.0 0.12 ? 10% perf-profile.children.cycles-pp.__writeback_inodes_wb
0.09 ? 22% +0.0 0.12 ? 10% perf-profile.children.cycles-pp.writeback_sb_inodes
0.06 ? 14% +0.0 0.09 ? 4% perf-profile.children.cycles-pp.destroy_unhashed_deleg
0.15 ? 3% +0.0 0.18 ? 10% perf-profile.children.cycles-pp.refcount_dec_not_one
0.09 ? 15% +0.0 0.12 ? 8% perf-profile.children.cycles-pp.__task_pid_nr_ns
0.14 ? 15% +0.0 0.17 ? 7% perf-profile.children.cycles-pp.btrfs_release_path
0.09 ? 13% +0.0 0.12 ? 6% perf-profile.children.cycles-pp.xprt_reserve
0.11 ? 7% +0.0 0.14 ? 11% perf-profile.children.cycles-pp.security_cred_free
0.06 ? 8% +0.0 0.09 ? 7% perf-profile.children.cycles-pp.xas_find_marked
0.10 ? 9% +0.0 0.13 ? 16% perf-profile.children.cycles-pp.refcount_dec_and_lock_irqsave
0.18 ? 11% +0.0 0.22 ? 7% perf-profile.children.cycles-pp.wake_up_bit
0.08 ? 14% +0.0 0.12 ? 7% perf-profile.children.cycles-pp.xprt_alloc_slot
0.10 ? 16% +0.0 0.14 ? 7% perf-profile.children.cycles-pp.nfs4_close_done
0.20 ? 14% +0.0 0.24 ? 7% perf-profile.children.cycles-pp.nfs_updatepage
0.10 ? 9% +0.0 0.13 ? 14% perf-profile.children.cycles-pp.free_uid
0.19 ? 8% +0.0 0.23 ? 3% perf-profile.children.cycles-pp.__list_del_entry_valid
0.11 ? 11% +0.0 0.14 ? 3% perf-profile.children.cycles-pp.__release_sock
0.20 ? 9% +0.0 0.24 ? 3% perf-profile.children.cycles-pp.tcp_clean_rtx_queue
0.04 ? 71% +0.0 0.08 ? 5% perf-profile.children.cycles-pp.btrfs_log_all_xattrs
0.02 ?141% +0.0 0.06 ? 7% perf-profile.children.cycles-pp.nfs4_alloc_stid
0.02 ?141% +0.0 0.06 ? 7% perf-profile.children.cycles-pp.unlock_up
0.99 ? 4% +0.0 1.03 ? 3% perf-profile.children.cycles-pp.svc_xprt_enqueue
0.02 ?141% +0.0 0.06 perf-profile.children.cycles-pp.nfs4_put_stid
0.02 ?141% +0.0 0.06 perf-profile.children.cycles-pp.generic_setlease
0.08 ? 5% +0.0 0.13 ? 3% perf-profile.children.cycles-pp.pagevec_lookup_range_tag
0.08 ? 5% +0.0 0.13 ? 3% perf-profile.children.cycles-pp.find_get_pages_range_tag
0.02 ?141% +0.0 0.06 ? 17% perf-profile.children.cycles-pp.nfs4_alloc_slot
0.05 ? 72% +0.0 0.09 ? 4% perf-profile.children.cycles-pp._atomic_dec_and_lock_irqsave
0.21 ? 9% +0.0 0.25 ? 8% perf-profile.children.cycles-pp.rpc_release_resources_task
0.06 ? 73% +0.0 0.10 ? 24% perf-profile.children.cycles-pp.add_delayed_ref_head
0.02 ?141% +0.0 0.06 ? 17% perf-profile.children.cycles-pp.nfs_wait_on_sequence
0.22 ? 13% +0.0 0.27 ? 4% perf-profile.children.cycles-pp.nfs_write_end
0.10 ? 17% +0.0 0.14 ? 7% perf-profile.children.cycles-pp.wake_up_q
0.27 ? 8% +0.1 0.32 ? 4% perf-profile.children.cycles-pp.tcp_ack
0.30 ? 9% +0.1 0.36 ? 7% perf-profile.children.cycles-pp.wake_bit_function
0.64 ? 4% +0.1 0.70 ? 5% perf-profile.children.cycles-pp.tcp_sendmsg_locked
0.49 ? 5% +0.1 0.54 ? 4% perf-profile.children.cycles-pp.svc_tcp_sendmsg
0.02 ?141% +0.1 0.07 ? 11% perf-profile.children.cycles-pp.nfsd_file_put
0.14 ? 20% +0.1 0.20 ? 3% perf-profile.children.cycles-pp.__folio_mark_dirty
0.14 ? 5% +0.1 0.20 ? 13% perf-profile.children.cycles-pp.tcp_data_queue
0.09 ? 15% +0.1 0.15 ? 10% perf-profile.children.cycles-pp.rwsem_wake
1.01 ? 3% +0.1 1.06 perf-profile.children.cycles-pp.btrfs_create_new_inode
0.66 ? 7% +0.1 0.72 ? 5% perf-profile.children.cycles-pp.rpc_wake_up_queued_task
0.56 ? 3% +0.1 0.62 ? 8% perf-profile.children.cycles-pp.filp_close
0.00 +0.1 0.06 ? 13% perf-profile.children.cycles-pp.delayed_fput
0.60 ? 2% +0.1 0.66 ? 5% perf-profile.children.cycles-pp.rpc_free_task
0.20 ? 14% +0.1 0.26 ? 5% perf-profile.children.cycles-pp.__folio_end_writeback
0.14 ? 6% +0.1 0.20 ? 8% perf-profile.children.cycles-pp.rwsem_optimistic_spin
0.55 ? 3% +0.1 0.62 ? 8% perf-profile.children.cycles-pp.__x64_sys_close
0.30 ? 3% +0.1 0.37 ? 3% perf-profile.children.cycles-pp.call_encode
0.10 ? 29% +0.1 0.17 ? 7% perf-profile.children.cycles-pp.btrfs_lookup_file_extent
0.54 ? 6% +0.1 0.60 ? 7% perf-profile.children.cycles-pp.__kernel_text_address
0.43 ? 7% +0.1 0.50 ? 4% perf-profile.children.cycles-pp.write
0.43 ? 8% +0.1 0.50 ? 7% perf-profile.children.cycles-pp.rpc_release_task
0.41 ? 5% +0.1 0.48 ? 8% perf-profile.children.cycles-pp._raw_spin_lock_bh
0.28 ? 5% +0.1 0.36 ? 4% perf-profile.children.cycles-pp.sched_ttwu_pending
0.40 ? 11% +0.1 0.47 ? 3% perf-profile.children.cycles-pp.btrfs_wait_ordered_range
0.39 ? 5% +0.1 0.46 ? 4% perf-profile.children.cycles-pp.nfs_file_write
0.15 ? 21% +0.1 0.23 ? 6% perf-profile.children.cycles-pp.btrfs_drop_extents
0.61 ? 5% +0.1 0.69 ? 7% perf-profile.children.cycles-pp.unwind_get_return_address
1.03 ? 3% +0.1 1.11 ? 4% perf-profile.children.cycles-pp.tcp_recvmsg_locked
0.21 ? 5% +0.1 0.30 ? 6% perf-profile.children.cycles-pp.__btrfs_tree_lock
1.25 +0.1 1.34 perf-profile.children.cycles-pp.schedule_timeout
1.83 ? 3% +0.1 1.92 ? 3% perf-profile.children.cycles-pp.__unwind_start
0.04 ? 70% +0.1 0.13 ? 10% perf-profile.children.cycles-pp.btrfs_use_block_rsv
0.23 ? 6% +0.1 0.32 ? 4% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
1.13 ? 2% +0.1 1.22 perf-profile.children.cycles-pp.btrfs_create_common
1.63 ? 3% +0.1 1.73 ? 3% perf-profile.children.cycles-pp.__orc_find
0.20 ? 24% +0.1 0.30 ? 17% perf-profile.children.cycles-pp.set_extent_buffer_dirty
0.41 ? 9% +0.1 0.51 ? 17% perf-profile.children.cycles-pp.nfs4_run_open_task
0.00 +0.1 0.10 ? 20% perf-profile.children.cycles-pp.work_busy
0.80 ? 6% +0.1 0.90 ? 3% perf-profile.children.cycles-pp.svc_tcp_read_marker
1.18 ? 2% +0.1 1.28 perf-profile.children.cycles-pp.vfs_create
0.82 ? 5% +0.1 0.92 ? 3% perf-profile.children.cycles-pp.sock_sendmsg
0.28 ? 14% +0.1 0.39 ? 6% perf-profile.children.cycles-pp.end_bio_extent_buffer_writepage
0.21 ? 24% +0.1 0.32 ? 15% perf-profile.children.cycles-pp.btrfs_mark_buffer_dirty
1.96 ? 3% +0.1 2.06 ? 3% perf-profile.children.cycles-pp.__netif_receive_skb_one_core
0.72 ? 6% +0.1 0.83 ? 6% perf-profile.children.cycles-pp.folio_wait_writeback
0.09 ? 30% +0.1 0.20 ? 16% perf-profile.children.cycles-pp.__reserve_bytes
1.98 ? 3% +0.1 2.09 ? 3% perf-profile.children.cycles-pp.__napi_poll
1.98 ? 3% +0.1 2.09 ? 3% perf-profile.children.cycles-pp.process_backlog
0.78 ? 5% +0.1 0.89 ? 3% perf-profile.children.cycles-pp.tcp_sendmsg
0.09 ? 28% +0.1 0.20 ? 14% perf-profile.children.cycles-pp.btrfs_reserve_metadata_bytes
1.22 +0.1 1.33 perf-profile.children.cycles-pp.dentry_create
0.47 ? 4% +0.1 0.58 ? 5% perf-profile.children.cycles-pp.flush_smp_call_function_queue
0.20 ? 2% +0.1 0.32 ? 10% perf-profile.children.cycles-pp.finish_task_switch
0.56 ? 10% +0.1 0.68 ? 2% perf-profile.children.cycles-pp.folio_end_writeback
1.03 +0.1 1.14 ? 9% perf-profile.children.cycles-pp.nfs4_lookup_revalidate
0.41 ? 8% +0.1 0.53 ? 7% perf-profile.children.cycles-pp.put_cred_rcu
1.87 ? 3% +0.1 1.99 ? 3% perf-profile.children.cycles-pp.tcp_v6_rcv
1.89 ? 3% +0.1 2.01 ? 3% perf-profile.children.cycles-pp.ip6_input_finish
1.89 ? 3% +0.1 2.01 ? 3% perf-profile.children.cycles-pp.ip6_protocol_deliver_rcu
0.29 ? 22% +0.1 0.42 ? 9% perf-profile.children.cycles-pp.insert_reserved_file_extent
1.68 ? 3% +0.1 1.80 ? 4% perf-profile.children.cycles-pp.tcp_v6_do_rcv
1.67 ? 3% +0.1 1.79 ? 4% perf-profile.children.cycles-pp.tcp_rcv_established
1.55 +0.1 1.68 perf-profile.children.cycles-pp.nfsd4_create_file
0.78 ? 5% +0.1 0.92 ? 6% perf-profile.children.cycles-pp.__filemap_fdatawait_range
0.68 ? 7% +0.1 0.82 ? 5% perf-profile.children.cycles-pp.rcu_do_batch
0.00 +0.1 0.14 ? 6% perf-profile.children.cycles-pp.wait_log_commit
1.59 +0.1 1.72 perf-profile.children.cycles-pp.do_open_lookup
1.34 +0.1 1.48 perf-profile.children.cycles-pp.svc_get_next_xprt
1.15 ? 3% +0.1 1.29 ? 9% perf-profile.children.cycles-pp.filename_create
0.79 ? 6% +0.1 0.93 ? 14% perf-profile.children.cycles-pp._nfs4_open_and_get_state
1.15 ? 2% +0.1 1.30 ? 9% perf-profile.children.cycles-pp.do_mkdirat
1.16 ? 2% +0.1 1.31 ? 8% perf-profile.children.cycles-pp.__x64_sys_mkdir
0.59 ? 14% +0.1 0.74 ? 4% perf-profile.children.cycles-pp.btrfs_end_bio
0.85 ? 6% +0.1 1.00 ? 14% perf-profile.children.cycles-pp._nfs4_do_open
0.84 ? 6% +0.1 0.99 ? 3% perf-profile.children.cycles-pp.rcu_core
1.18 +0.1 1.33 ? 8% perf-profile.children.cycles-pp.mkdir
0.85 ? 6% +0.1 1.00 ? 14% perf-profile.children.cycles-pp.nfs4_atomic_open
0.85 ? 6% +0.1 1.00 ? 14% perf-profile.children.cycles-pp.nfs4_do_open
2.51 ? 3% +0.2 2.66 ? 3% perf-profile.children.cycles-pp.ip6_finish_output2
0.90 ? 6% +0.2 1.05 ? 14% perf-profile.children.cycles-pp.nfs_atomic_open
2.56 ? 3% +0.2 2.72 ? 3% perf-profile.children.cycles-pp.ip6_xmit
0.67 ? 6% +0.2 0.83 ? 7% perf-profile.children.cycles-pp.__lock_sock
0.94 ? 5% +0.2 1.10 ? 15% perf-profile.children.cycles-pp.lookup_open
2.23 ? 3% +0.2 2.39 ? 3% perf-profile.children.cycles-pp.net_rx_action
0.49 ? 3% +0.2 0.66 ? 5% perf-profile.children.cycles-pp.autoremove_wake_function
2.31 ? 3% +0.2 2.47 ? 4% perf-profile.children.cycles-pp.do_softirq
0.96 ? 4% +0.2 1.12 ? 14% perf-profile.children.cycles-pp.open_last_lookups
3.21 ? 3% +0.2 3.37 ? 4% perf-profile.children.cycles-pp.perf_trace_sched_switch
2.33 ? 4% +0.2 2.49 ? 3% perf-profile.children.cycles-pp.__local_bh_enable_ip
0.59 ? 8% +0.2 0.76 ? 8% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.79 ? 6% +0.2 0.96 ? 4% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
1.05 ? 4% +0.2 1.23 ? 14% perf-profile.children.cycles-pp.path_openat
1.07 ? 4% +0.2 1.25 ? 13% perf-profile.children.cycles-pp.do_sys_openat2
1.08 ? 4% +0.2 1.26 ? 13% perf-profile.children.cycles-pp.__x64_sys_openat
1.05 ? 4% +0.2 1.23 ? 13% perf-profile.children.cycles-pp.do_filp_open
0.67 +0.2 0.86 ? 4% perf-profile.children.cycles-pp.release_sock
1.07 ? 5% +0.2 1.26 ? 13% perf-profile.children.cycles-pp.open64
2.84 ? 3% +0.2 3.02 ? 3% perf-profile.children.cycles-pp.__tcp_transmit_skb
0.91 ? 6% +0.2 1.09 ? 7% perf-profile.children.cycles-pp.lock_sock_nested
2.17 ? 5% +0.2 2.36 ? 4% perf-profile.children.cycles-pp.schedule_idle
2.18 ? 3% +0.2 2.37 perf-profile.children.cycles-pp.perf_trace_sched_stat_runtime
2.28 ? 3% +0.2 2.47 perf-profile.children.cycles-pp.update_curr
0.74 +0.2 0.95 ? 5% perf-profile.children.cycles-pp.__wake_up_common_lock
2.23 ? 3% +0.2 2.44 perf-profile.children.cycles-pp.dequeue_entity
2.63 ? 5% +0.2 2.85 ? 3% perf-profile.children.cycles-pp.tcp_write_xmit
2.64 ? 5% +0.2 2.85 ? 3% perf-profile.children.cycles-pp.__tcp_push_pending_frames
1.29 ? 9% +0.2 1.51 ? 7% perf-profile.children.cycles-pp.xs_stream_data_receive_workfn
2.29 ? 3% +0.2 2.51 perf-profile.children.cycles-pp.dequeue_task_fair
1.25 ? 8% +0.2 1.48 ? 7% perf-profile.children.cycles-pp.xs_read_stream
4.00 ? 3% +0.2 4.23 ? 2% perf-profile.children.cycles-pp.__softirqentry_text_start
2.49 +0.2 2.72 ? 2% perf-profile.children.cycles-pp.perf_trace_sched_wakeup_template
1.86 ? 2% +0.2 2.10 ? 5% perf-profile.children.cycles-pp.inet6_recvmsg
1.85 +0.2 2.08 ? 5% perf-profile.children.cycles-pp.tcp_recvmsg
0.87 ? 25% +0.2 1.11 ? 11% perf-profile.children.cycles-pp.btrfs_work_helper
1.26 ? 3% +0.3 1.51 ? 3% perf-profile.children.cycles-pp.__wake_up_common
1.78 ? 11% +0.3 2.04 ? 5% perf-profile.children.cycles-pp.xs_tcp_send_request
4.30 ? 2% +0.3 4.57 ? 2% perf-profile.children.cycles-pp.unwind_next_frame
2.85 ? 5% +0.3 3.12 ? 4% perf-profile.children.cycles-pp.tcp_sock_set_cork
1.54 ? 19% +0.3 1.82 ? 11% perf-profile.children.cycles-pp.btrfs_lookup_csum
2.79 ? 2% +0.3 3.08 ? 3% perf-profile.children.cycles-pp.svc_tcp_sendto
1.97 ? 12% +0.3 2.26 ? 4% perf-profile.children.cycles-pp.xprt_transmit
1.54 ? 3% +0.3 1.84 ? 2% perf-profile.children.cycles-pp.update_sg_lb_stats
1.93 ? 12% +0.3 2.24 ? 4% perf-profile.children.cycles-pp.xprt_request_transmit
1.59 ? 18% +0.3 1.90 ? 11% perf-profile.children.cycles-pp.log_csums
1.63 ? 3% +0.3 1.94 ? 3% perf-profile.children.cycles-pp.find_busiest_group
1.61 ? 3% +0.3 1.92 ? 3% perf-profile.children.cycles-pp.update_sd_lb_stats
1.99 ? 12% +0.3 2.30 ? 4% perf-profile.children.cycles-pp.call_transmit
1.72 ? 3% +0.3 2.03 ? 3% perf-profile.children.cycles-pp.load_balance
2.86 ? 2% +0.3 3.17 ? 3% perf-profile.children.cycles-pp.svc_send
2.35 ? 5% +0.3 2.67 ? 4% perf-profile.children.cycles-pp._raw_spin_lock
4.01 +0.3 4.35 perf-profile.children.cycles-pp.try_to_wake_up
3.56 ? 3% +0.3 3.90 perf-profile.children.cycles-pp.svc_recv
1.70 ? 19% +0.3 2.05 ? 10% perf-profile.children.cycles-pp.btrfs_csum_file_blocks
2.28 ? 17% +0.4 2.64 ? 10% perf-profile.children.cycles-pp.copy_items
5.30 ? 2% +0.4 5.70 ? 2% perf-profile.children.cycles-pp.perf_callchain_kernel
1.65 ? 3% +0.4 2.05 ? 2% perf-profile.children.cycles-pp.newidle_balance
1.85 ? 2% +0.4 2.27 ? 2% perf-profile.children.cycles-pp.pick_next_task_fair
5.65 ? 2% +0.4 6.07 ? 2% perf-profile.children.cycles-pp.get_perf_callchain
2.83 ? 18% +0.4 3.26 ? 8% perf-profile.children.cycles-pp.btrfs_log_inode
5.66 ? 2% +0.4 6.10 ? 2% perf-profile.children.cycles-pp.perf_callchain
0.10 ? 19% +0.4 0.55 ? 9% perf-profile.children.cycles-pp.start_log_trans
5.95 ? 2% +0.5 6.42 ? 2% perf-profile.children.cycles-pp.perf_prepare_sample
0.31 ? 16% +0.5 0.80 ? 5% perf-profile.children.cycles-pp.mutex_spin_on_owner
0.40 ? 14% +0.5 0.90 ? 6% perf-profile.children.cycles-pp.__mutex_lock
7.18 ? 2% +0.5 7.72 ? 2% perf-profile.children.cycles-pp.perf_event_output_forward
7.22 ? 2% +0.5 7.76 ? 2% perf-profile.children.cycles-pp.__perf_event_overflow
3.48 ? 10% +0.6 4.06 ? 4% perf-profile.children.cycles-pp.rpc_async_schedule
7.56 ? 2% +0.6 8.15 ? 2% perf-profile.children.cycles-pp.perf_tp_event
4.43 ? 8% +0.7 5.10 ? 4% perf-profile.children.cycles-pp.__rpc_execute
6.40 +0.8 7.21 perf-profile.children.cycles-pp.schedule
2.97 ± 18% +0.9 3.84 ± 8% perf-profile.children.cycles-pp.btrfs_log_dentry_safe
2.96 ± 18% +0.9 3.84 ± 8% perf-profile.children.cycles-pp.btrfs_log_inode_parent
8.61 ± 2% +1.0 9.59 ± 2% perf-profile.children.cycles-pp.__schedule
6.86 ± 5% +1.1 7.96 ± 5% perf-profile.children.cycles-pp.process_one_work
9.49 ± 6% +1.3 10.82 ± 4% perf-profile.children.cycles-pp.worker_thread
0.96 ± 7% -0.2 0.72 ± 6% perf-profile.self.cycles-pp.menu_select
0.81 ± 6% -0.2 0.60 ± 13% perf-profile.self.cycles-pp.cpuidle_enter_state
0.16 ± 10% -0.1 0.10 ± 13% perf-profile.self.cycles-pp.io_serial_in
0.38 ± 4% -0.0 0.34 ± 8% perf-profile.self.cycles-pp.native_sched_clock
0.39 ± 4% -0.0 0.35 ± 5% perf-profile.self.cycles-pp.native_irq_return_iret
0.11 ± 12% -0.0 0.08 ± 5% perf-profile.self.cycles-pp.irqtime_account_irq
0.14 ± 8% -0.0 0.12 ± 5% perf-profile.self.cycles-pp.check_cpu_stall
0.05 ± 8% +0.0 0.06 ± 7% perf-profile.self.cycles-pp.finish_task_switch
0.08 ± 12% +0.0 0.09 ± 8% perf-profile.self.cycles-pp.net_rx_action
0.07 ± 14% +0.0 0.08 ± 15% perf-profile.self.cycles-pp.___slab_alloc
0.08 ± 5% +0.0 0.10 ± 7% perf-profile.self.cycles-pp.llist_reverse_order
0.07 ± 11% +0.0 0.09 ± 9% perf-profile.self.cycles-pp.xprt_request_enqueue_receive
0.19 ± 2% +0.0 0.20 ± 2% perf-profile.self.cycles-pp.__switch_to_asm
0.06 ± 19% +0.0 0.08 ± 10% perf-profile.self.cycles-pp.core_kernel_text
0.05 ± 8% +0.0 0.07 ± 5% perf-profile.self.cycles-pp.xprt_lookup_rqst
0.12 ± 10% +0.0 0.14 ± 5% perf-profile.self.cycles-pp.enqueue_entity
0.08 ± 10% +0.0 0.10 ± 7% perf-profile.self.cycles-pp.svc_get_next_xprt
0.07 ± 11% +0.0 0.09 ± 4% perf-profile.self.cycles-pp.apparmor_cred_free
0.06 ± 16% +0.0 0.08 ± 8% perf-profile.self.cycles-pp.tcp_clean_rtx_queue
0.18 ± 4% +0.0 0.20 ± 7% perf-profile.self.cycles-pp.__switch_to
0.09 ± 15% +0.0 0.12 ± 8% perf-profile.self.cycles-pp.__task_pid_nr_ns
0.19 ± 4% +0.0 0.22 ± 6% perf-profile.self.cycles-pp.perf_output_sample
0.15 ± 3% +0.0 0.18 ± 10% perf-profile.self.cycles-pp.refcount_dec_not_one
0.04 ± 71% +0.0 0.07 ± 10% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
0.46 ± 4% +0.0 0.50 perf-profile.self.cycles-pp.stack_access_ok
0.18 ± 11% +0.0 0.22 ± 5% perf-profile.self.cycles-pp.__list_del_entry_valid
0.14 ± 17% +0.0 0.18 ± 10% perf-profile.self.cycles-pp.__list_add_valid
0.07 +0.0 0.11 ± 7% perf-profile.self.cycles-pp.newidle_balance
0.08 ± 5% +0.0 0.12 ± 12% perf-profile.self.cycles-pp.flush_smp_call_function_queue
0.02 ±141% +0.0 0.06 ± 7% perf-profile.self.cycles-pp.tcp_recvmsg
0.02 ±141% +0.0 0.06 ± 7% perf-profile.self.cycles-pp.tcp_ack
0.25 ± 8% +0.0 0.29 ± 5% perf-profile.self.cycles-pp.perf_tp_event
0.05 ± 72% +0.0 0.09 ± 23% perf-profile.self.cycles-pp.add_delayed_ref_head
0.23 ± 3% +0.0 0.28 ± 7% perf-profile.self.cycles-pp.perf_callchain_kernel
0.05 ± 72% +0.0 0.09 ± 4% perf-profile.self.cycles-pp._atomic_dec_and_lock_irqsave
0.28 ± 5% +0.0 0.33 ± 5% perf-profile.self.cycles-pp._raw_spin_lock_bh
0.04 ± 70% +0.0 0.09 ± 12% perf-profile.self.cycles-pp.xas_find_marked
0.24 ± 8% +0.1 0.30 ± 3% perf-profile.self.cycles-pp.update_rq_clock
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.__folio_start_writeback
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.find_get_pages_range_tag
0.09 ± 5% +0.1 0.15 ± 10% perf-profile.self.cycles-pp.rwsem_optimistic_spin
0.00 +0.1 0.06 ± 9% perf-profile.self.cycles-pp.queue_delayed_work_on
0.00 +0.1 0.06 ± 13% perf-profile.self.cycles-pp.update_sd_lb_stats
1.62 ± 3% +0.1 1.71 ± 3% perf-profile.self.cycles-pp.__orc_find
0.78 ± 6% +0.1 0.89 ± 5% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.58 ± 8% +0.2 0.75 ± 9% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
1.94 ± 5% +0.2 2.16 ± 3% perf-profile.self.cycles-pp._raw_spin_lock
1.22 ± 3% +0.2 1.46 ± 3% perf-profile.self.cycles-pp.update_sg_lb_stats
0.30 ± 15% +0.5 0.79 ± 5% perf-profile.self.cycles-pp.mutex_spin_on_owner
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware, software
design, or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://01.org/lkp