Date: 2021-07-12 14:41:47
From: kernel test robot

Subject: [xfs] a79b28c284: fsmark.files_per_sec -4.6% regression



Greetings,

FYI, we noticed a -4.6% regression of fsmark.files_per_sec due to commit:


commit: a79b28c284fd910bb291dbf307a26f4d432e88f3 ("xfs: separate CIL commit record IO")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master


in testcase: fsmark
on test machine: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 512G memory
with the following parameters:

iterations: 1x
nr_threads: 32t
disk: 1SSD
fs: xfs
filesize: 8K
test_size: 400M
sync_method: fsyncBeforeClose
nr_directories: 16d
nr_files_per_directory: 256fpd
cpufreq_governor: performance
ucode: 0x5003006

test-description: fsmark is a file system benchmark for testing synchronous write workloads, such as a mail server workload.
test-url: https://sourceforge.net/projects/fsmark/
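
The parameters above correspond roughly to the fs_mark invocation sketched below. This is only an approximation: the actual command line is generated by the lkp wrapper, and the mount point and the per-thread file count (-n) are assumptions derived from disk, test_size, filesize and nr_threads:

fs_mark -d /fs/sda1 -t 32 -s 8192 -n 1600 -D 16 -N 256 -S 1 -L 1

Here -S 1 selects the fsyncBeforeClose sync method, and 32 threads x 1600 files x 8K per file works out to the 400M test_size.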

In addition, the commit also has a significant impact on the following test:

+------------------+---------------------------------------------------------------------------------+
| testcase: change | aim7: aim7.jobs-per-min 22.0% improvement |
| test machine | 88 threads 2 sockets Intel(R) Xeon(R) Gold 6238M CPU @ 2.10GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | disk=4BRD_12G |
| | fs=xfs |
| | load=300 |
| | md=RAID0 |
| | test=sync_disk_rw |
| | ucode=0x5003006 |
+------------------+---------------------------------------------------------------------------------+


If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <[email protected]>
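
For example, place it alongside the other tags in the fix's changelog (the Fixes: line below is only illustrative; point it at whichever commit the fix actually amends):

Fixes: a79b28c284fd ("xfs: separate CIL commit record IO")
Reported-by: kernel test robot <[email protected]>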


Details are as follows:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
bin/lkp run generated-yaml-file # run the generated job

=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase/ucode:
gcc-9/performance/1SSD/8K/xfs/1x/x86_64-rhel-8.3/16d/256fpd/32t/debian-10.4-x86_64-20200603.cgz/fsyncBeforeClose/lkp-csl-2sp7/400M/fsmark/0x5003006

commit:
18842e0a4f ("xfs: Fix 64-bit division on 32-bit in xlog_state_switch_iclogs()")
a79b28c284 ("xfs: separate CIL commit record IO")

18842e0a4f48564b a79b28c284fd910bb291dbf307a
---------------- ---------------------------
%stddev %change %stddev
\ | \
16388 -4.6% 15631 ± 2% fsmark.files_per_sec
19379 ± 6% -31.7% 13238 ± 3% fsmark.time.involuntary_context_switches
294578 +11.5% 328546 fsmark.time.voluntary_context_switches
11335 ± 11% +67.3% 18968 ± 56% cpuidle.POLL.usage
2860 ±199% -98.4% 45.67 ±127% softirqs.CPU72.TIMER
114218 ± 7% -11.2% 101430 vmstat.io.bo
23503 ± 12% +19.7% 28140 ± 9% numa-vmstat.node0.nr_slab_unreclaimable
588.67 ± 39% -44.8% 325.17 ± 50% numa-vmstat.node1.nr_page_table_pages
94014 ± 12% +19.7% 112564 ± 9% numa-meminfo.node0.SUnreclaim
164603 ± 67% -70.4% 48754 ± 74% numa-meminfo.node1.Inactive
2357 ± 39% -44.8% 1301 ± 50% numa-meminfo.node1.PageTables
70708 +0.7% 71212 proc-vmstat.nr_inactive_anon
18040 -3.2% 17455 proc-vmstat.nr_kernel_stack
70708 +0.7% 71212 proc-vmstat.nr_zone_inactive_anon
370332 +2.0% 377771 proc-vmstat.pgalloc_normal
157090 ± 41% +34.6% 211411 proc-vmstat.pgfree
3271411 ± 3% -8.3% 3001095 ± 3% perf-stat.i.iTLB-load-misses
2245484 ± 53% -64.3% 802425 ± 66% perf-stat.i.node-load-misses
56.94 ± 24% -22.2 34.72 ± 29% perf-stat.i.node-store-miss-rate%
1090824 ± 57% -65.7% 374199 ± 70% perf-stat.i.node-store-misses
59.80 ± 26% -23.7 36.06 ± 32% perf-stat.overall.node-store-miss-rate%
0.99 ± 45% -48.8% 0.51 ± 31% perf-stat.ps.major-faults
1716083 ± 54% -64.0% 618497 ± 70% perf-stat.ps.node-load-misses
834091 ± 58% -65.4% 288523 ± 74% perf-stat.ps.node-store-misses
487.67 ± 17% -35.0% 317.17 ± 12% slabinfo.biovec-max.active_objs
487.67 ± 17% -35.0% 317.17 ± 12% slabinfo.biovec-max.num_objs
8026 ± 5% +60.8% 12901 ± 3% slabinfo.kmalloc-1k.active_objs
252.33 ± 5% +61.1% 406.50 ± 3% slabinfo.kmalloc-1k.active_slabs
8086 ± 5% +61.0% 13017 ± 3% slabinfo.kmalloc-1k.num_objs
252.33 ± 5% +61.1% 406.50 ± 3% slabinfo.kmalloc-1k.num_slabs
2465 ± 6% -21.0% 1946 ± 14% slabinfo.pool_workqueue.active_objs
2475 ± 6% -20.9% 1958 ± 14% slabinfo.pool_workqueue.num_objs
18532 ± 7% -12.6% 16189 slabinfo.xfs_ili.active_objs
18570 ± 7% -12.6% 16222 slabinfo.xfs_ili.num_objs
57483 ± 5% -10.4% 51530 ± 3% interrupts.CAL:Function_call_interrupts
818.17 ± 45% -38.5% 503.00 interrupts.CPU11.CAL:Function_call_interrupts
572.83 ± 10% -12.6% 500.67 interrupts.CPU15.CAL:Function_call_interrupts
667.17 ± 20% -29.4% 470.83 ± 14% interrupts.CPU17.CAL:Function_call_interrupts
623.17 ± 12% -18.1% 510.50 ± 3% interrupts.CPU18.CAL:Function_call_interrupts
588.83 ± 2% -14.2% 505.00 interrupts.CPU19.CAL:Function_call_interrupts
606.33 ± 7% -17.3% 501.17 interrupts.CPU21.CAL:Function_call_interrupts
907.00 ± 30% -32.8% 609.67 ± 17% interrupts.CPU25.CAL:Function_call_interrupts
588.67 ± 5% -12.2% 516.67 ± 3% interrupts.CPU3.CAL:Function_call_interrupts
604.00 ± 13% -16.0% 507.50 ± 5% interrupts.CPU31.CAL:Function_call_interrupts
573.50 ± 3% -16.0% 481.67 ± 15% interrupts.CPU4.CAL:Function_call_interrupts
617.17 ± 15% -17.7% 507.83 ± 4% interrupts.CPU44.CAL:Function_call_interrupts
595.00 ± 4% -13.4% 515.33 ± 4% interrupts.CPU49.CAL:Function_call_interrupts
572.17 ± 4% -8.5% 523.67 ± 5% interrupts.CPU52.CAL:Function_call_interrupts
581.83 ± 5% -11.6% 514.17 ± 4% interrupts.CPU53.CAL:Function_call_interrupts
578.67 -10.2% 519.50 ± 4% interrupts.CPU54.CAL:Function_call_interrupts
581.00 ± 3% -12.4% 508.67 interrupts.CPU56.CAL:Function_call_interrupts
582.17 ± 3% -11.8% 513.67 interrupts.CPU57.CAL:Function_call_interrupts
581.67 ± 4% -13.2% 504.83 interrupts.CPU59.CAL:Function_call_interrupts
630.83 ± 18% -20.5% 501.50 interrupts.CPU61.CAL:Function_call_interrupts
633.00 ± 26% -19.5% 509.67 ± 3% interrupts.CPU64.CAL:Function_call_interrupts
604.00 ± 14% -16.6% 503.83 ± 4% interrupts.CPU75.CAL:Function_call_interrupts
603.67 ± 12% -16.0% 507.33 ± 4% interrupts.CPU78.CAL:Function_call_interrupts
602.17 ± 13% -16.0% 506.00 ± 4% interrupts.CPU80.CAL:Function_call_interrupts
618.33 ± 13% -14.3% 530.17 ± 7% interrupts.CPU90.CAL:Function_call_interrupts
616.00 ± 13% -15.2% 522.67 ± 5% interrupts.CPU91.CAL:Function_call_interrupts
57.10 ± 15% -16.4 40.71 ± 26% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
57.10 ± 15% -16.4 40.71 ± 26% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
57.10 ± 15% -16.4 40.71 ± 26% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
57.10 ± 15% -16.4 40.71 ± 26% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
14.67 ± 60% -10.1 4.57 ±148% perf-profile.calltrace.cycles-pp.seq_read_iter.proc_reg_read_iter.new_sync_read.vfs_read.ksys_read
14.67 ± 60% -10.1 4.57 ±148% perf-profile.calltrace.cycles-pp.proc_reg_read_iter.new_sync_read.vfs_read.ksys_read.do_syscall_64
14.67 ± 60% -8.7 6.02 ±161% perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
12.00 ± 86% -7.4 4.56 ±148% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
12.00 ± 86% -7.4 4.56 ±148% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
12.00 ± 86% -7.4 4.56 ±148% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
12.00 ± 86% -7.4 4.56 ±148% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
12.00 ± 86% -7.4 4.56 ±148% perf-profile.calltrace.cycles-pp.read
5.45 ±104% -4.7 0.72 ±223% perf-profile.calltrace.cycles-pp.arch_show_interrupts.seq_read_iter.proc_reg_read_iter.new_sync_read.vfs_read
6.14 ±108% -4.7 1.45 ±223% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.14 ±108% -4.7 1.45 ±223% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.75 ±104% -3.6 1.19 ±223% perf-profile.calltrace.cycles-pp.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.75 ±104% -3.6 1.19 ±223% perf-profile.calltrace.cycles-pp.iterate_dir.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.55 ±100% -2.7 3.84 ±143% perf-profile.calltrace.cycles-pp.show_interrupts.seq_read_iter.proc_reg_read_iter.new_sync_read.vfs_read
57.10 ± 15% -16.4 40.71 ± 26% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
57.10 ± 15% -16.4 40.71 ± 26% perf-profile.children.cycles-pp.start_secondary
57.10 ± 15% -16.4 40.71 ± 26% perf-profile.children.cycles-pp.cpu_startup_entry
57.10 ± 15% -16.4 40.71 ± 26% perf-profile.children.cycles-pp.do_idle
18.14 ± 62% -12.8 5.29 ±155% perf-profile.children.cycles-pp.seq_read_iter
18.14 ± 62% -12.1 6.02 ±161% perf-profile.children.cycles-pp.ksys_read
18.14 ± 62% -12.1 6.02 ±161% perf-profile.children.cycles-pp.vfs_read
14.67 ± 60% -10.1 4.56 ±148% perf-profile.children.cycles-pp.proc_reg_read_iter
14.67 ± 60% -8.7 6.02 ±161% perf-profile.children.cycles-pp.new_sync_read
12.00 ± 86% -6.7 5.29 ±155% perf-profile.children.cycles-pp.read
5.45 ±104% -4.7 0.72 ±223% perf-profile.children.cycles-pp.arch_show_interrupts
5.75 ±105% -4.3 1.45 ±223% perf-profile.children.cycles-pp.vsnprintf
5.75 ±105% -3.6 2.17 ±223% perf-profile.children.cycles-pp.seq_vprintf
5.75 ±105% -3.6 2.17 ±223% perf-profile.children.cycles-pp.seq_printf
4.75 ±104% -3.6 1.19 ±223% perf-profile.children.cycles-pp.__x64_sys_getdents64
4.75 ±104% -3.6 1.19 ±223% perf-profile.children.cycles-pp.iterate_dir
6.55 ±100% -2.7 3.84 ±143% perf-profile.children.cycles-pp.show_interrupts
5.45 ±104% -5.4 0.00 perf-profile.self.cycles-pp.arch_show_interrupts



fsmark.files_per_sec

17000 +-------------------------------------------------------------------+
|+.+++ + :++.++ + +.++++.++++.+ +.+ + .++++. + .++ +.+ + .+ + |
16500 |-+ + + ++.+ :: :: + ++ + + + ::+. |
16000 |-+ O O OO O OO+O OOO OO O O+O O O + +|
| O O O O O O OOO O O OO OO O O|
15500 |O+ O O O O |
15000 |-+ O O O O O O O |
| O |
14500 |-+ |
14000 |-+ O |
| O |
13500 |-+ O |
13000 |-+ |
| O |
12500 +-------------------------------------------------------------------+


fsmark.time.voluntary_context_switches

350000 +------------------------------------------------------------------+
| O O |
340000 |-+ |
| |
| O O O O O |
330000 |OO O OO OOO O O OOOO OOOO OOOO OOOOO OOOO OOO OO OO OO OO|
| O O |
320000 |-+ O |
| |
310000 |-+ |
| |
| |
300000 |-+ |
|++.++++.++++.++++.++++.++++.++++.+++++.++++.++++.++++.++++.++++.++|
290000 +------------------------------------------------------------------+


[*] bisect-good sample
[O] bisect-bad sample

***************************************************************************************************
lkp-csl-2sp9: 88 threads 2 sockets Intel(R) Xeon(R) Gold 6238M CPU @ 2.10GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/md/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/4BRD_12G/xfs/x86_64-rhel-8.3/300/RAID0/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp9/sync_disk_rw/aim7/0x5003006

commit:
18842e0a4f ("xfs: Fix 64-bit division on 32-bit in xlog_state_switch_iclogs()")
a79b28c284 ("xfs: separate CIL commit record IO")

18842e0a4f48564b a79b28c284fd910bb291dbf307a
---------------- ---------------------------
%stddev %change %stddev
\ | \
13879 +22.0% 16929 aim7.jobs-per-min
129.73 -18.0% 106.37 aim7.time.elapsed_time
129.73 -18.0% 106.37 aim7.time.elapsed_time.max
1647556 +16.5% 1919576 aim7.time.involuntary_context_switches
41390 ± 3% -11.2% 36759 aim7.time.minor_page_faults
5461 -17.9% 4483 aim7.time.system_time
72997986 -10.5% 65359678 aim7.time.voluntary_context_switches
0.01 +0.0 0.02 ± 9% mpstat.cpu.all.iowait%
0.54 -0.1 0.48 ± 2% mpstat.cpu.all.usr%
2982166 -9.3% 2704857 numa-numastat.node1.local_node
3008831 -9.0% 2737175 numa-numastat.node1.numa_hit
58057 +10.9% 64387 ± 5% slabinfo.anon_vma_chain.active_objs
58142 +10.7% 64387 ± 5% slabinfo.anon_vma_chain.num_objs
168.11 -13.9% 144.77 uptime.boot
8419 -11.9% 7419 uptime.idle
558231 +21.8% 679782 vmstat.io.bo
3562019 -20.1% 2846094 vmstat.memory.cache
1098857 +12.1% 1231460 vmstat.system.cs
9.824e+08 +11.9% 1.099e+09 ± 2% cpuidle.C1.time
21821055 +11.0% 24216149 ± 2% cpuidle.C1.usage
3.872e+09 ± 10% -29.1% 2.745e+09 ± 15% cpuidle.C1E.time
46682234 -20.9% 36905264 ± 3% cpuidle.C1E.usage
1122822 +12.3% 1260895 cpuidle.POLL.usage
8.51 +3.1 11.59 turbostat.C1%
46681814 -20.9% 36904474 ± 3% turbostat.C1E
29377272 -16.7% 24460328 ± 3% turbostat.IRQ
64.33 -5.7% 60.67 turbostat.PkgTmp
55.96 +1.3% 56.68 turbostat.RAMWatt
335419 ± 2% -69.3% 102817 ± 8% meminfo.Active
335163 ± 2% -69.4% 102561 ± 8% meminfo.Active(anon)
131608 -13.7% 113590 meminfo.AnonHugePages
3403302 -20.7% 2700472 meminfo.Cached
1483499 -47.5% 778943 ± 2% meminfo.Committed_AS
977192 -48.9% 499429 ± 2% meminfo.Inactive
831379 -57.0% 357444 ± 3% meminfo.Inactive(anon)
364174 -83.5% 60025 ± 3% meminfo.Mapped
5288714 -13.5% 4573105 meminfo.Memused
888801 -78.6% 189797 ± 10% meminfo.Shmem
5365908 -13.0% 4670485 meminfo.max_used_kB
91803 ± 6% -78.3% 19879 ± 32% numa-meminfo.node0.Active
91632 ± 6% -78.5% 19709 ± 32% numa-meminfo.node0.Active(anon)
144688 ± 8% -70.1% 43249 ± 9% numa-meminfo.node0.Mapped
278561 ± 5% -79.3% 57643 ± 37% numa-meminfo.node0.Shmem
244509 -65.6% 84084 ± 2% numa-meminfo.node1.Active
244424 -65.6% 83999 ± 2% numa-meminfo.node1.Active(anon)
1475164 ± 64% -75.5% 361168 ± 29% numa-meminfo.node1.FilePages
555525 ± 16% -65.2% 193567 ± 32% numa-meminfo.node1.Inactive
482733 ± 19% -74.4% 123553 ± 50% numa-meminfo.node1.Inactive(anon)
221734 ± 5% -92.4% 16893 ± 11% numa-meminfo.node1.Mapped
2341429 ± 42% -51.0% 1147446 ± 25% numa-meminfo.node1.MemUsed
612619 ± 2% -78.0% 134759 ± 4% numa-meminfo.node1.Shmem
22915 ± 6% -78.5% 4927 ± 32% numa-vmstat.node0.nr_active_anon
36199 ± 8% -70.1% 10813 ± 9% numa-vmstat.node0.nr_mapped
69661 ± 5% -79.3% 14415 ± 36% numa-vmstat.node0.nr_shmem
22915 ± 6% -78.5% 4927 ± 32% numa-vmstat.node0.nr_zone_active_anon
7370 ± 5% -6.5% 6890 ± 3% numa-vmstat.node0.nr_zone_write_pending
61128 -65.6% 21003 ± 2% numa-vmstat.node1.nr_active_anon
368867 ± 64% -75.5% 90307 ± 29% numa-vmstat.node1.nr_file_pages
120723 ± 19% -74.4% 30895 ± 50% numa-vmstat.node1.nr_inactive_anon
55470 ± 6% -92.4% 4222 ± 11% numa-vmstat.node1.nr_mapped
153215 ± 2% -78.0% 33695 ± 4% numa-vmstat.node1.nr_shmem
61128 -65.6% 21002 ± 2% numa-vmstat.node1.nr_zone_active_anon
120723 ± 19% -74.4% 30894 ± 50% numa-vmstat.node1.nr_zone_inactive_anon
83793 ± 2% -69.4% 25645 ± 8% proc-vmstat.nr_active_anon
69383 -2.5% 67666 proc-vmstat.nr_anon_pages
850838 -20.7% 675125 proc-vmstat.nr_file_pages
207856 -57.0% 89363 ± 3% proc-vmstat.nr_inactive_anon
36450 -2.6% 35495 proc-vmstat.nr_inactive_file
91055 -83.5% 15006 ± 3% proc-vmstat.nr_mapped
222214 -78.6% 47456 ± 11% proc-vmstat.nr_shmem
83793 ± 2% -69.4% 25645 ± 8% proc-vmstat.nr_zone_active_anon
207856 -57.0% 89363 ± 3% proc-vmstat.nr_zone_inactive_anon
36450 -2.6% 35495 proc-vmstat.nr_zone_inactive_file
15021 ± 2% -5.8% 14149 proc-vmstat.nr_zone_write_pending
266167 -35.6% 171458 ± 11% proc-vmstat.numa_hint_faults
140448 ± 3% -37.3% 88003 ± 10% proc-vmstat.numa_hint_faults_local
5650544 -6.4% 5290494 ± 2% proc-vmstat.numa_hit
5570801 -6.5% 5210756 ± 2% proc-vmstat.numa_local
380199 -50.1% 189829 ± 8% proc-vmstat.numa_pte_updates
5717786 -5.7% 5391070 proc-vmstat.pgalloc_normal
808432 -25.4% 602788 ± 3% proc-vmstat.pgfault
29460 -13.0% 25626 ± 2% proc-vmstat.pgreuse
4.307e+09 +3.6% 4.464e+09 perf-stat.i.branch-instructions
39861920 +11.3% 44384422 perf-stat.i.branch-misses
23.30 +0.8 24.14 perf-stat.i.cache-miss-rate%
52855610 +17.7% 62197817 perf-stat.i.cache-misses
2.166e+08 +12.0% 2.426e+08 perf-stat.i.cache-references
1118434 +12.8% 1261487 perf-stat.i.context-switches
6.75 -3.9% 6.48 perf-stat.i.cpi
1.308e+11 +1.1% 1.322e+11 perf-stat.i.cpu-cycles
178676 +17.9% 210712 perf-stat.i.cpu-migrations
2593 ± 4% -12.7% 2265 ± 7% perf-stat.i.cycles-between-cache-misses
5.075e+09 +5.3% 5.342e+09 perf-stat.i.dTLB-loads
512902 ± 7% +17.5% 602445 ± 5% perf-stat.i.dTLB-store-misses
1.715e+09 +10.7% 1.899e+09 perf-stat.i.dTLB-stores
8968888 +12.4% 10078328 perf-stat.i.iTLB-load-misses
16507011 +15.0% 18977719 ± 3% perf-stat.i.iTLB-loads
1.925e+10 +4.6% 2.014e+10 perf-stat.i.instructions
2247 -4.8% 2139 perf-stat.i.instructions-per-iTLB-miss
1.49 +1.1% 1.50 perf-stat.i.metric.GHz
610.38 ± 2% +19.5% 729.31 ± 5% perf-stat.i.metric.K/sec
128.52 +5.6% 135.69 perf-stat.i.metric.M/sec
5939 -9.8% 5356 ± 2% perf-stat.i.minor-faults
19010898 +18.2% 22479012 perf-stat.i.node-load-misses
2079877 +11.3% 2314010 perf-stat.i.node-loads
8988994 +17.4% 10554292 perf-stat.i.node-store-misses
1813880 +13.9% 2065595 perf-stat.i.node-stores
5941 -9.8% 5359 ± 2% perf-stat.i.page-faults
11.25 +7.1% 12.05 perf-stat.overall.MPKI
0.93 +0.1 0.99 perf-stat.overall.branch-miss-rate%
24.41 +1.2 25.64 perf-stat.overall.cache-miss-rate%
6.79 -3.3% 6.57 perf-stat.overall.cpi
2474 -14.1% 2125 perf-stat.overall.cycles-between-cache-misses
2146 -6.9% 1998 perf-stat.overall.instructions-per-iTLB-miss
0.15 +3.4% 0.15 perf-stat.overall.ipc
4.274e+09 +3.5% 4.423e+09 perf-stat.ps.branch-instructions
39550311 +11.2% 43964111 perf-stat.ps.branch-misses
52450879 +17.5% 61628414 perf-stat.ps.cache-misses
2.149e+08 +11.9% 2.404e+08 perf-stat.ps.cache-references
1109849 +12.6% 1249872 perf-stat.ps.context-switches
177305 +17.8% 208787 perf-stat.ps.cpu-migrations
5.037e+09 +5.1% 5.293e+09 perf-stat.ps.dTLB-loads
508884 ± 7% +17.3% 597002 ± 5% perf-stat.ps.dTLB-store-misses
1.702e+09 +10.5% 1.881e+09 perf-stat.ps.dTLB-stores
8901107 +12.2% 9985139 perf-stat.ps.iTLB-load-misses
16380357 +14.8% 18802343 ± 3% perf-stat.ps.iTLB-loads
1.911e+10 +4.4% 1.995e+10 perf-stat.ps.instructions
5893 -10.0% 5301 ± 2% perf-stat.ps.minor-faults
18864869 +18.1% 22272948 perf-stat.ps.node-load-misses
2063954 +11.1% 2293240 perf-stat.ps.node-loads
8920023 +17.2% 10457333 perf-stat.ps.node-store-misses
1800221 +13.7% 2046553 perf-stat.ps.node-stores
5895 -10.0% 5304 ± 2% perf-stat.ps.page-faults
2.486e+12 -14.2% 2.134e+12 perf-stat.total.instructions
5157270 -6.3% 4834857 interrupts.CAL:Function_call_interrupts
258813 -19.8% 207639 ± 4% interrupts.CPU0.LOC:Local_timer_interrupts
258857 -20.0% 207130 ± 4% interrupts.CPU1.LOC:Local_timer_interrupts
258867 -19.9% 207436 ± 4% interrupts.CPU10.LOC:Local_timer_interrupts
258829 -19.8% 207511 ± 4% interrupts.CPU11.LOC:Local_timer_interrupts
258768 -19.8% 207624 ± 4% interrupts.CPU12.LOC:Local_timer_interrupts
258820 -19.8% 207594 ± 4% interrupts.CPU13.LOC:Local_timer_interrupts
258806 -19.8% 207625 ± 4% interrupts.CPU14.LOC:Local_timer_interrupts
258664 -19.6% 207905 ± 4% interrupts.CPU15.LOC:Local_timer_interrupts
6059 ± 2% -8.9% 5518 ± 2% interrupts.CPU15.RES:Rescheduling_interrupts
258859 -19.8% 207619 ± 4% interrupts.CPU16.LOC:Local_timer_interrupts
6175 ± 4% -10.3% 5540 ± 3% interrupts.CPU16.RES:Rescheduling_interrupts
258722 -19.7% 207647 ± 4% interrupts.CPU17.LOC:Local_timer_interrupts
258646 -19.7% 207580 ± 4% interrupts.CPU18.LOC:Local_timer_interrupts
258799 -19.8% 207564 ± 4% interrupts.CPU19.LOC:Local_timer_interrupts
258801 -19.8% 207563 ± 4% interrupts.CPU2.LOC:Local_timer_interrupts
258723 -19.8% 207565 ± 4% interrupts.CPU20.LOC:Local_timer_interrupts
258804 -19.8% 207615 ± 4% interrupts.CPU21.LOC:Local_timer_interrupts
57725 ± 2% -7.5% 53408 ± 4% interrupts.CPU22.CAL:Function_call_interrupts
258842 -20.3% 206199 ± 5% interrupts.CPU22.LOC:Local_timer_interrupts
57984 ± 2% -7.6% 53551 ± 3% interrupts.CPU23.CAL:Function_call_interrupts
258760 -20.3% 206156 ± 5% interrupts.CPU23.LOC:Local_timer_interrupts
58472 -8.1% 53738 ± 3% interrupts.CPU24.CAL:Function_call_interrupts
258969 -20.4% 206187 ± 5% interrupts.CPU24.LOC:Local_timer_interrupts
59121 -8.9% 53835 ± 3% interrupts.CPU25.CAL:Function_call_interrupts
258770 -20.3% 206220 ± 5% interrupts.CPU25.LOC:Local_timer_interrupts
59203 -8.8% 53976 ± 4% interrupts.CPU26.CAL:Function_call_interrupts
258771 -20.3% 206166 ± 5% interrupts.CPU26.LOC:Local_timer_interrupts
57731 ± 2% -7.9% 53145 ± 3% interrupts.CPU27.CAL:Function_call_interrupts
258765 -20.3% 206110 ± 5% interrupts.CPU27.LOC:Local_timer_interrupts
59022 -9.1% 53645 ± 4% interrupts.CPU28.CAL:Function_call_interrupts
259049 -20.5% 206059 ± 5% interrupts.CPU28.LOC:Local_timer_interrupts
58573 -8.2% 53783 ± 4% interrupts.CPU29.CAL:Function_call_interrupts
258649 -20.3% 206190 ± 5% interrupts.CPU29.LOC:Local_timer_interrupts
258909 -19.9% 207403 ± 4% interrupts.CPU3.LOC:Local_timer_interrupts
58830 -8.8% 53648 ± 3% interrupts.CPU30.CAL:Function_call_interrupts
258775 -20.3% 206235 ± 5% interrupts.CPU30.LOC:Local_timer_interrupts
58760 -8.4% 53810 ± 3% interrupts.CPU31.CAL:Function_call_interrupts
258780 -20.4% 206118 ± 5% interrupts.CPU31.LOC:Local_timer_interrupts
58551 -8.7% 53477 ± 3% interrupts.CPU32.CAL:Function_call_interrupts
258680 -20.3% 206191 ± 5% interrupts.CPU32.LOC:Local_timer_interrupts
58657 ± 2% -8.8% 53517 ± 4% interrupts.CPU33.CAL:Function_call_interrupts
258752 -20.4% 206003 ± 5% interrupts.CPU33.LOC:Local_timer_interrupts
58161 -7.4% 53868 ± 4% interrupts.CPU34.CAL:Function_call_interrupts
258778 -20.4% 206106 ± 5% interrupts.CPU34.LOC:Local_timer_interrupts
58619 -8.4% 53676 ± 4% interrupts.CPU35.CAL:Function_call_interrupts
258721 -20.3% 206175 ± 5% interrupts.CPU35.LOC:Local_timer_interrupts
58349 -7.6% 53907 ± 4% interrupts.CPU36.CAL:Function_call_interrupts
258753 -20.3% 206154 ± 5% interrupts.CPU36.LOC:Local_timer_interrupts
58714 -9.2% 53323 ± 4% interrupts.CPU37.CAL:Function_call_interrupts
258694 -20.2% 206499 ± 5% interrupts.CPU37.LOC:Local_timer_interrupts
58291 -7.7% 53799 ± 4% interrupts.CPU38.CAL:Function_call_interrupts
259023 -20.4% 206168 ± 5% interrupts.CPU38.LOC:Local_timer_interrupts
58924 -7.8% 54338 ± 4% interrupts.CPU39.CAL:Function_call_interrupts
258920 -20.4% 206187 ± 5% interrupts.CPU39.LOC:Local_timer_interrupts
258817 -19.8% 207532 ± 4% interrupts.CPU4.LOC:Local_timer_interrupts
5957 ± 3% -8.7% 5441 ± 3% interrupts.CPU4.RES:Rescheduling_interrupts
258718 -20.3% 206124 ± 5% interrupts.CPU40.LOC:Local_timer_interrupts
57580 ± 2% -7.3% 53377 ± 4% interrupts.CPU41.CAL:Function_call_interrupts
258739 -20.3% 206212 ± 5% interrupts.CPU41.LOC:Local_timer_interrupts
57848 ± 2% -7.7% 53401 ± 3% interrupts.CPU42.CAL:Function_call_interrupts
258729 -20.3% 206208 ± 5% interrupts.CPU42.LOC:Local_timer_interrupts
58375 -7.0% 54263 ± 4% interrupts.CPU43.CAL:Function_call_interrupts
258771 -20.3% 206256 ± 5% interrupts.CPU43.LOC:Local_timer_interrupts
258781 -19.8% 207611 ± 4% interrupts.CPU44.LOC:Local_timer_interrupts
258798 -19.8% 207429 ± 4% interrupts.CPU45.LOC:Local_timer_interrupts
258765 -19.8% 207525 ± 4% interrupts.CPU46.LOC:Local_timer_interrupts
5658 ± 2% -8.3% 5188 ± 2% interrupts.CPU46.RES:Rescheduling_interrupts
258787 -19.8% 207521 ± 4% interrupts.CPU47.LOC:Local_timer_interrupts
5631 -9.1% 5116 ± 2% interrupts.CPU47.RES:Rescheduling_interrupts
258813 -19.8% 207610 ± 4% interrupts.CPU48.LOC:Local_timer_interrupts
258805 -19.8% 207627 ± 4% interrupts.CPU49.LOC:Local_timer_interrupts
258833 -19.6% 207990 ± 4% interrupts.CPU5.LOC:Local_timer_interrupts
258740 -19.8% 207549 ± 4% interrupts.CPU50.LOC:Local_timer_interrupts
258775 -19.8% 207483 ± 4% interrupts.CPU51.LOC:Local_timer_interrupts
258790 -19.8% 207538 ± 4% interrupts.CPU52.LOC:Local_timer_interrupts
258802 -19.8% 207564 ± 4% interrupts.CPU53.LOC:Local_timer_interrupts
258827 -19.8% 207583 ± 4% interrupts.CPU54.LOC:Local_timer_interrupts
258601 -19.8% 207522 ± 4% interrupts.CPU55.LOC:Local_timer_interrupts
5690 ± 2% -7.7% 5251 ± 2% interrupts.CPU55.RES:Rescheduling_interrupts
258790 -19.8% 207540 ± 4% interrupts.CPU56.LOC:Local_timer_interrupts
258822 -19.8% 207524 ± 4% interrupts.CPU57.LOC:Local_timer_interrupts
258775 -19.8% 207552 ± 4% interrupts.CPU58.LOC:Local_timer_interrupts
258818 -19.8% 207591 ± 4% interrupts.CPU59.LOC:Local_timer_interrupts
258798 -19.8% 207591 ± 4% interrupts.CPU6.LOC:Local_timer_interrupts
258785 -19.8% 207634 ± 4% interrupts.CPU60.LOC:Local_timer_interrupts
258835 -19.8% 207540 ± 4% interrupts.CPU61.LOC:Local_timer_interrupts
5474 -8.1% 5032 ± 3% interrupts.CPU61.RES:Rescheduling_interrupts
258841 -19.8% 207593 ± 4% interrupts.CPU62.LOC:Local_timer_interrupts
258828 -19.8% 207511 ± 4% interrupts.CPU63.LOC:Local_timer_interrupts
258799 -19.8% 207562 ± 4% interrupts.CPU64.LOC:Local_timer_interrupts
258810 -19.8% 207574 ± 4% interrupts.CPU65.LOC:Local_timer_interrupts
58711 -8.7% 53620 ± 4% interrupts.CPU66.CAL:Function_call_interrupts
258759 -20.3% 206149 ± 5% interrupts.CPU66.LOC:Local_timer_interrupts
58760 -8.7% 53648 ± 4% interrupts.CPU67.CAL:Function_call_interrupts
258798 -20.3% 206243 ± 5% interrupts.CPU67.LOC:Local_timer_interrupts
5834 -8.3% 5348 ± 4% interrupts.CPU67.RES:Rescheduling_interrupts
58749 -8.8% 53569 ± 4% interrupts.CPU68.CAL:Function_call_interrupts
258762 -20.3% 206156 ± 5% interrupts.CPU68.LOC:Local_timer_interrupts
58503 -8.5% 53511 ± 4% interrupts.CPU69.CAL:Function_call_interrupts
258766 -20.3% 206179 ± 5% interrupts.CPU69.LOC:Local_timer_interrupts
258815 -19.8% 207607 ± 4% interrupts.CPU7.LOC:Local_timer_interrupts
58243 -9.0% 53008 ± 4% interrupts.CPU70.CAL:Function_call_interrupts
258760 -20.3% 206133 ± 5% interrupts.CPU70.LOC:Local_timer_interrupts
58155 -8.6% 53157 ± 4% interrupts.CPU71.CAL:Function_call_interrupts
258816 -20.3% 206150 ± 5% interrupts.CPU71.LOC:Local_timer_interrupts
58617 -8.4% 53666 ± 4% interrupts.CPU72.CAL:Function_call_interrupts
258800 -20.3% 206195 ± 5% interrupts.CPU72.LOC:Local_timer_interrupts
56202 ± 3% -5.3% 53235 ± 4% interrupts.CPU73.CAL:Function_call_interrupts
258813 -20.4% 206117 ± 5% interrupts.CPU73.LOC:Local_timer_interrupts
58430 -8.8% 53264 ± 4% interrupts.CPU74.CAL:Function_call_interrupts
258763 -20.3% 206118 ± 5% interrupts.CPU74.LOC:Local_timer_interrupts
58240 -8.7% 53145 ± 4% interrupts.CPU75.CAL:Function_call_interrupts
258774 -20.3% 206145 ± 5% interrupts.CPU75.LOC:Local_timer_interrupts
58193 -7.9% 53585 ± 4% interrupts.CPU76.CAL:Function_call_interrupts
258747 -20.3% 206179 ± 5% interrupts.CPU76.LOC:Local_timer_interrupts
6221 -14.0% 5352 ± 2% interrupts.CPU76.RES:Rescheduling_interrupts
59053 -9.8% 53280 ± 4% interrupts.CPU77.CAL:Function_call_interrupts
258776 -20.3% 206177 ± 5% interrupts.CPU77.LOC:Local_timer_interrupts
6068 -15.4% 5132 interrupts.CPU77.RES:Rescheduling_interrupts
57716 -8.0% 53124 ± 4% interrupts.CPU78.CAL:Function_call_interrupts
258754 -20.3% 206213 ± 5% interrupts.CPU78.LOC:Local_timer_interrupts
5830 -13.3% 5056 interrupts.CPU78.RES:Rescheduling_interrupts
58692 -8.6% 53667 ± 4% interrupts.CPU79.CAL:Function_call_interrupts
258794 -20.3% 206167 ± 5% interrupts.CPU79.LOC:Local_timer_interrupts
6035 -14.1% 5186 interrupts.CPU79.RES:Rescheduling_interrupts
258767 -19.8% 207598 ± 4% interrupts.CPU8.LOC:Local_timer_interrupts
58102 -8.1% 53421 ± 4% interrupts.CPU80.CAL:Function_call_interrupts
258638 -20.3% 206237 ± 5% interrupts.CPU80.LOC:Local_timer_interrupts
5924 ± 2% -11.9% 5217 ± 2% interrupts.CPU80.RES:Rescheduling_interrupts
58216 -9.3% 52787 ± 3% interrupts.CPU81.CAL:Function_call_interrupts
258674 -20.3% 206164 ± 5% interrupts.CPU81.LOC:Local_timer_interrupts
6053 -11.9% 5335 interrupts.CPU81.RES:Rescheduling_interrupts
58210 -8.7% 53122 ± 4% interrupts.CPU82.CAL:Function_call_interrupts
258680 -20.3% 206108 ± 5% interrupts.CPU82.LOC:Local_timer_interrupts
6617 ± 5% -18.7% 5379 interrupts.CPU82.RES:Rescheduling_interrupts
58532 -9.1% 53179 ± 4% interrupts.CPU83.CAL:Function_call_interrupts
258721 -20.3% 206164 ± 5% interrupts.CPU83.LOC:Local_timer_interrupts
5855 ± 2% -14.3% 5015 ± 2% interrupts.CPU83.RES:Rescheduling_interrupts
58049 -8.3% 53240 ± 4% interrupts.CPU84.CAL:Function_call_interrupts
258762 -20.4% 206103 ± 5% interrupts.CPU84.LOC:Local_timer_interrupts
58529 -8.7% 53457 ± 4% interrupts.CPU85.CAL:Function_call_interrupts
258746 -20.3% 206226 ± 5% interrupts.CPU85.LOC:Local_timer_interrupts
58118 -8.2% 53328 ± 4% interrupts.CPU86.CAL:Function_call_interrupts
258769 -20.4% 206092 ± 5% interrupts.CPU86.LOC:Local_timer_interrupts
258862 -20.3% 206240 ± 5% interrupts.CPU87.LOC:Local_timer_interrupts
258817 -19.8% 207486 ± 4% interrupts.CPU9.LOC:Local_timer_interrupts
5855 -9.1% 5321 interrupts.CPU9.RES:Rescheduling_interrupts
2099 -23.1% 1613 ± 8% interrupts.IWI:IRQ_work_interrupts
22773085 -20.1% 18204614 ± 4% interrupts.LOC:Local_timer_interrupts
18980 -11.0% 16886 ± 3% softirqs.CPU0.RCU
33504 -15.1% 28441 softirqs.CPU0.SCHED
17793 ± 7% -13.5% 15389 ± 5% softirqs.CPU1.RCU
31237 ± 4% -13.8% 26937 ± 6% softirqs.CPU1.SCHED
16570 ± 2% -12.4% 14507 ± 4% softirqs.CPU10.RCU
29665 -14.1% 25496 ± 2% softirqs.CPU10.SCHED
30185 -17.4% 24946 ± 2% softirqs.CPU11.SCHED
30314 ± 2% -15.8% 25525 ± 2% softirqs.CPU12.SCHED
16626 ± 2% -12.0% 14628 ± 4% softirqs.CPU13.RCU
29804 -15.7% 25129 softirqs.CPU13.SCHED
29843 -15.9% 25104 softirqs.CPU14.SCHED
29723 -14.0% 25574 ± 2% softirqs.CPU15.SCHED
16869 ± 2% -12.7% 14725 ± 4% softirqs.CPU16.RCU
30037 -15.9% 25247 ± 2% softirqs.CPU16.SCHED
16731 ± 3% -10.3% 15002 ± 5% softirqs.CPU17.RCU
29838 -16.1% 25032 softirqs.CPU17.SCHED
17111 ± 4% -13.0% 14891 ± 2% softirqs.CPU18.RCU
30271 -15.8% 25493 softirqs.CPU18.SCHED
29760 -14.1% 25557 ± 4% softirqs.CPU19.SCHED
31874 ± 2% -16.9% 26476 ± 2% softirqs.CPU2.SCHED
18355 ± 8% -18.4% 14976 ± 6% softirqs.CPU20.RCU
29939 -15.5% 25295 ± 2% softirqs.CPU20.SCHED
29891 -15.7% 25204 ± 2% softirqs.CPU21.SCHED
16564 -11.1% 14727 softirqs.CPU22.RCU
29167 ± 2% -13.1% 25349 softirqs.CPU22.SCHED
16200 ± 2% -10.0% 14581 ± 2% softirqs.CPU23.RCU
29737 ± 2% -15.4% 25156 softirqs.CPU23.SCHED
29658 -15.3% 25110 softirqs.CPU24.SCHED
16316 -11.0% 14527 ± 2% softirqs.CPU25.RCU
29893 -16.3% 25012 softirqs.CPU25.SCHED
16615 -11.4% 14715 softirqs.CPU26.RCU
29791 -16.5% 24880 softirqs.CPU26.SCHED
16479 -9.0% 14999 ± 5% softirqs.CPU27.RCU
29569 -14.1% 25401 softirqs.CPU27.SCHED
16763 ± 3% -11.5% 14836 softirqs.CPU28.RCU
30093 -16.1% 25253 softirqs.CPU28.SCHED
16407 -11.7% 14482 ± 2% softirqs.CPU29.RCU
29681 -15.3% 25135 softirqs.CPU29.SCHED
17138 ± 4% -13.0% 14913 ± 5% softirqs.CPU3.RCU
29927 -13.4% 25914 ± 2% softirqs.CPU3.SCHED
16520 -11.6% 14611 ± 2% softirqs.CPU30.RCU
29770 -16.3% 24908 softirqs.CPU30.SCHED
16676 ± 2% -12.4% 14606 ± 4% softirqs.CPU31.RCU
29802 -15.8% 25086 softirqs.CPU31.SCHED
16998 ± 3% -13.3% 14735 ± 2% softirqs.CPU32.RCU
30551 ± 4% -17.1% 25319 softirqs.CPU32.SCHED
16477 -10.7% 14720 softirqs.CPU33.RCU
29801 ± 2% -16.0% 25025 softirqs.CPU33.SCHED
16949 ± 3% -10.6% 15156 ± 4% softirqs.CPU34.RCU
29803 -15.6% 25147 softirqs.CPU34.SCHED
16807 ± 3% -11.8% 14821 softirqs.CPU35.RCU
30197 -16.3% 25273 softirqs.CPU35.SCHED
29782 -15.8% 25072 softirqs.CPU36.SCHED
16477 -10.3% 14772 softirqs.CPU37.RCU
29601 -13.9% 25498 softirqs.CPU37.SCHED
16904 ± 2% -13.0% 14701 ± 2% softirqs.CPU38.RCU
29795 ± 2% -16.6% 24842 softirqs.CPU38.SCHED
16908 ± 3% -14.5% 14453 ± 2% softirqs.CPU39.RCU
29966 -16.5% 25032 softirqs.CPU39.SCHED
29868 -14.2% 25620 softirqs.CPU4.SCHED
17225 ± 6% -16.1% 14454 ± 4% softirqs.CPU40.RCU
29566 -15.3% 25057 softirqs.CPU40.SCHED
16558 -13.1% 14393 ± 4% softirqs.CPU41.RCU
29721 -14.5% 25411 softirqs.CPU41.SCHED
16780 -13.3% 14543 ± 3% softirqs.CPU42.RCU
29813 -15.9% 25065 softirqs.CPU42.SCHED
16691 -11.4% 14793 ± 2% softirqs.CPU43.RCU
29058 ± 2% -15.9% 24424 softirqs.CPU43.SCHED
29016 -13.3% 25153 ± 2% softirqs.CPU44.SCHED
16236 -11.5% 14361 ± 4% softirqs.CPU45.RCU
29485 -13.4% 25525 softirqs.CPU45.SCHED
29643 -16.0% 24890 ± 2% softirqs.CPU46.SCHED
29747 -14.7% 25386 softirqs.CPU47.SCHED
29573 -14.2% 25365 softirqs.CPU48.SCHED
16929 -14.7% 14446 ± 4% softirqs.CPU49.RCU
29866 -15.5% 25238 ± 2% softirqs.CPU49.SCHED
30110 -15.5% 25444 ± 2% softirqs.CPU5.SCHED
16731 ± 4% -14.9% 14246 ± 4% softirqs.CPU50.RCU
29804 -14.9% 25359 ± 2% softirqs.CPU50.SCHED
16865 ± 5% -14.2% 14467 ± 6% softirqs.CPU51.RCU
29674 -15.9% 24950 ± 2% softirqs.CPU51.SCHED
16599 -11.1% 14751 ± 5% softirqs.CPU52.RCU
29968 -15.0% 25467 softirqs.CPU52.SCHED
29848 -15.2% 25326 ± 2% softirqs.CPU53.SCHED
16584 ± 3% -10.5% 14841 ± 7% softirqs.CPU54.RCU
29831 -14.9% 25379 ± 2% softirqs.CPU54.SCHED
16716 ± 4% -12.2% 14669 ± 5% softirqs.CPU55.RCU
29856 -15.6% 25207 ± 2% softirqs.CPU55.SCHED
16415 ± 2% -10.8% 14642 ± 4% softirqs.CPU56.RCU
29554 -14.9% 25148 softirqs.CPU56.SCHED
16554 ± 3% -12.6% 14467 ± 5% softirqs.CPU57.RCU
30048 -16.5% 25096 ± 2% softirqs.CPU57.SCHED
16407 ± 2% -9.8% 14803 ± 8% softirqs.CPU58.RCU
29805 -15.5% 25191 softirqs.CPU58.SCHED
16966 ± 6% -14.2% 14563 ± 4% softirqs.CPU59.RCU
29788 -14.6% 25446 ± 2% softirqs.CPU59.SCHED
17270 ± 5% -14.1% 14842 ± 5% softirqs.CPU6.RCU
29950 -14.1% 25724 ± 2% softirqs.CPU6.SCHED
16423 ± 4% -10.4% 14707 ± 6% softirqs.CPU60.RCU
29734 -13.7% 25658 ± 2% softirqs.CPU60.SCHED
16520 ± 2% -11.5% 14614 ± 5% softirqs.CPU61.RCU
29831 -15.3% 25259 ± 2% softirqs.CPU61.SCHED
29301 -13.6% 25312 softirqs.CPU62.SCHED
16495 ± 2% -12.0% 14520 ± 4% softirqs.CPU63.RCU
29708 -14.3% 25458 softirqs.CPU63.SCHED
16599 ± 2% -13.4% 14369 ± 3% softirqs.CPU64.RCU
29984 -16.0% 25189 softirqs.CPU64.SCHED
16851 ± 3% -10.8% 15031 ± 8% softirqs.CPU65.RCU
29863 -15.0% 25384 ± 2% softirqs.CPU65.SCHED
18595 ± 15% -20.2% 14841 ± 2% softirqs.CPU66.RCU
29553 -14.8% 25187 softirqs.CPU66.SCHED
30196 -15.7% 25465 ± 2% softirqs.CPU67.SCHED
16409 -9.6% 14827 softirqs.CPU68.RCU
30172 -16.8% 25097 softirqs.CPU68.SCHED
30371 ± 3% -15.4% 25695 ± 3% softirqs.CPU69.SCHED
16520 ± 2% -11.0% 14710 ± 5% softirqs.CPU7.RCU
29770 -15.4% 25176 ± 2% softirqs.CPU7.SCHED
16658 ± 2% -10.7% 14868 ± 4% softirqs.CPU70.RCU
29774 -15.6% 25123 softirqs.CPU70.SCHED
16370 -11.6% 14475 ± 4% softirqs.CPU71.RCU
29799 -16.1% 24998 softirqs.CPU71.SCHED
30299 -17.7% 24938 softirqs.CPU72.SCHED
29062 ± 3% -14.0% 24986 softirqs.CPU73.SCHED
16933 ± 5% -11.7% 14945 softirqs.CPU74.RCU
30213 ± 2% -16.7% 25171 softirqs.CPU74.SCHED
15997 -11.2% 14199 ± 4% softirqs.CPU75.RCU
29767 -15.3% 25220 softirqs.CPU75.SCHED
16015 -11.2% 14225 ± 3% softirqs.CPU76.RCU
29681 -15.3% 25132 softirqs.CPU76.SCHED
29659 -15.6% 25036 softirqs.CPU77.SCHED
15990 -12.1% 14048 ± 3% softirqs.CPU78.RCU
29814 -16.1% 25013 softirqs.CPU78.SCHED
16210 ± 2% -13.0% 14101 ± 3% softirqs.CPU79.RCU
29738 -15.8% 25042 softirqs.CPU79.SCHED
16905 ± 5% -14.1% 14527 ± 4% softirqs.CPU8.RCU
29852 -15.5% 25239 ± 2% softirqs.CPU8.SCHED
16792 ± 6% -14.9% 14293 ± 3% softirqs.CPU80.RCU
30105 -17.0% 24988 softirqs.CPU80.SCHED
16428 ± 2% -13.3% 14236 ± 3% softirqs.CPU81.RCU
29787 -16.0% 25019 softirqs.CPU81.SCHED
29496 -14.4% 25253 softirqs.CPU82.SCHED
29706 -14.9% 25293 softirqs.CPU83.SCHED
15801 -10.1% 14210 softirqs.CPU84.RCU
29653 -15.1% 25181 softirqs.CPU84.SCHED
16095 -12.9% 14020 softirqs.CPU85.RCU
29799 -15.0% 25323 softirqs.CPU85.SCHED
16123 -10.7% 14391 ± 2% softirqs.CPU86.RCU
29644 -14.0% 25502 ± 2% softirqs.CPU86.SCHED
16822 ± 2% -13.1% 14626 softirqs.CPU87.RCU
28356 -13.3% 24594 softirqs.CPU87.SCHED
17085 ± 3% -13.2% 14838 ± 6% softirqs.CPU9.RCU
29917 -14.3% 25653 ± 3% softirqs.CPU9.SCHED
1467534 -11.5% 1298751 ± 3% softirqs.RCU
2628721 -15.3% 2226374 softirqs.SCHED
20535 -9.4% 18603 softirqs.TIMER
11.00 -10.2 0.77 ± 4% perf-profile.calltrace.cycles-pp.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write.new_sync_write.vfs_write
7.10 -7.1 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write.new_sync_write
7.08 -7.1 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
68.11 -1.6 66.52 perf-profile.calltrace.cycles-pp.xfs_file_fsync.xfs_file_buffered_write.new_sync_write.vfs_write.ksys_write
70.73 -1.5 69.20 perf-profile.calltrace.cycles-pp.xfs_file_buffered_write.new_sync_write.vfs_write.ksys_write.do_syscall_64
70.75 -1.5 69.22 perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
70.83 -1.5 69.32 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
70.84 -1.5 69.33 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
70.93 -1.5 69.42 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
70.92 -1.5 69.41 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
71.09 -1.5 69.61 perf-profile.calltrace.cycles-pp.write
0.82 -0.1 0.71 perf-profile.calltrace.cycles-pp.xlog_ioend_work.process_one_work.worker_thread.kthread.ret_from_fork
1.01 -0.1 0.94 ± 2% perf-profile.calltrace.cycles-pp.xlog_cil_push_work.process_one_work.worker_thread.kthread.ret_from_fork
0.54 +0.0 0.57 ± 2% perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.52 +0.0 0.55 ± 2% perf-profile.calltrace.cycles-pp.__schedule.schedule_idle.do_idle.cpu_startup_entry.start_secondary
0.62 +0.0 0.66 ± 2% perf-profile.calltrace.cycles-pp.xlog_state_do_callback.xlog_ioend_work.process_one_work.worker_thread.kthread
2.79 +0.1 2.87 ± 2% perf-profile.calltrace.cycles-pp.__flush_work.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
0.52 ± 2% +0.1 0.61 perf-profile.calltrace.cycles-pp.wait_for_completion.__flush_work.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync
0.56 +0.1 0.66 perf-profile.calltrace.cycles-pp.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write.vfs_write
0.56 +0.1 0.66 perf-profile.calltrace.cycles-pp.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write.vfs_write.ksys_write
0.95 ± 4% +0.1 1.07 ± 5% perf-profile.calltrace.cycles-pp.brd_submit_bio.submit_bio_noacct.submit_bio.iomap_submit_ioend.xfs_vm_writepages
0.54 +0.1 0.66 perf-profile.calltrace.cycles-pp.complete.process_one_work.worker_thread.kthread.ret_from_fork
0.70 +0.2 0.85 perf-profile.calltrace.cycles-pp.md_submit_flush_data.process_one_work.worker_thread.kthread.ret_from_fork
1.16 ± 4% +0.2 1.32 ± 4% perf-profile.calltrace.cycles-pp.submit_bio.iomap_submit_ioend.xfs_vm_writepages.do_writepages.__filemap_fdatawrite_range
1.13 ± 4% +0.2 1.28 ± 4% perf-profile.calltrace.cycles-pp.submit_bio_noacct.submit_bio.iomap_submit_ioend.xfs_vm_writepages.do_writepages
1.17 ± 3% +0.2 1.33 ± 4% perf-profile.calltrace.cycles-pp.iomap_submit_ioend.xfs_vm_writepages.do_writepages.__filemap_fdatawrite_range.file_write_and_wait_range
4.28 +0.2 4.45 perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
4.56 +0.2 4.75 perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
4.56 +0.2 4.76 perf-profile.calltrace.cycles-pp.ret_from_fork
4.56 +0.2 4.76 perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
1.96 ± 5% +0.3 2.22 ± 5% perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync.xfs_file_buffered_write.new_sync_write
1.95 ± 5% +0.3 2.21 ± 5% perf-profile.calltrace.cycles-pp.do_writepages.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync.xfs_file_buffered_write
1.94 ± 5% +0.3 2.21 ± 5% perf-profile.calltrace.cycles-pp.xfs_vm_writepages.do_writepages.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync
2.09 ± 5% +0.3 2.36 ± 5% perf-profile.calltrace.cycles-pp.file_write_and_wait_range.xfs_file_fsync.xfs_file_buffered_write.new_sync_write.vfs_write
8.93 +0.4 9.33 perf-profile.calltrace.cycles-pp.remove_wait_queue.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
8.90 +0.4 9.30 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.remove_wait_queue.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync
8.86 +0.4 9.27 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.remove_wait_queue.xlog_cil_force_lsn.xfs_log_force_lsn
0.17 ±141% +0.4 0.58 perf-profile.calltrace.cycles-pp.prepare_to_wait_event.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio
0.00 +0.5 0.52 perf-profile.calltrace.cycles-pp.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write
0.00 +0.6 0.58 perf-profile.calltrace.cycles-pp.try_to_wake_up.swake_up_locked.complete.process_one_work.worker_thread
0.00 +0.6 0.61 perf-profile.calltrace.cycles-pp.swake_up_locked.complete.process_one_work.worker_thread.kthread
9.75 +0.8 10.59 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.remove_wait_queue.xlog_wait_on_iclog.__xfs_log_force_lsn
21.50 +1.1 22.59 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
22.08 +1.1 23.19 perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
22.05 +1.1 23.16 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
23.93 +1.2 25.15 perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
23.63 +1.3 24.88 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
23.64 +1.3 24.90 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
23.64 +1.3 24.90 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
21.00 +1.9 22.92 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.md_flush_request.raid0_make_request.md_handle_request
21.09 +1.9 23.03 perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio
13.72 +2.0 15.74 ± 2% perf-profile.calltrace.cycles-pp.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write.new_sync_write
22.13 +2.1 24.24 perf-profile.calltrace.cycles-pp.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio.submit_bio_noacct
22.16 +2.1 24.27 perf-profile.calltrace.cycles-pp.raid0_make_request.md_handle_request.md_submit_bio.submit_bio_noacct.submit_bio
22.22 +2.1 24.34 perf-profile.calltrace.cycles-pp.md_handle_request.md_submit_bio.submit_bio_noacct.submit_bio.submit_bio_wait
22.28 +2.1 24.41 perf-profile.calltrace.cycles-pp.md_submit_bio.submit_bio_noacct.submit_bio.submit_bio_wait.blkdev_issue_flush
22.34 +2.1 24.48 perf-profile.calltrace.cycles-pp.submit_bio.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync.xfs_file_buffered_write
22.37 +2.1 24.51 perf-profile.calltrace.cycles-pp.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync.xfs_file_buffered_write.new_sync_write
22.38 +2.1 24.53 perf-profile.calltrace.cycles-pp.blkdev_issue_flush.xfs_file_fsync.xfs_file_buffered_write.new_sync_write.vfs_write
22.33 +2.1 24.48 perf-profile.calltrace.cycles-pp.submit_bio_noacct.submit_bio.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync
6.70 ± 2% +3.7 10.35 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync
6.80 ± 2% +3.7 10.46 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
18.82 +4.2 23.01 perf-profile.calltrace.cycles-pp.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write.new_sync_write
6.05 +4.6 10.62 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.remove_wait_queue.xlog_wait_on_iclog.__xfs_log_force_lsn.xfs_log_force_lsn
6.06 +4.6 10.65 perf-profile.calltrace.cycles-pp.remove_wait_queue.xlog_wait_on_iclog.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync
6.25 +4.7 10.97 perf-profile.calltrace.cycles-pp.xlog_wait_on_iclog.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
32.55 +6.2 38.76 perf-profile.calltrace.cycles-pp.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write.new_sync_write.vfs_write
24.73 -8.2 16.50 ± 2% perf-profile.children.cycles-pp.__xfs_log_force_lsn
20.13 -3.4 16.69 perf-profile.children.cycles-pp._raw_spin_lock
66.06 -2.2 63.83 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
68.11 -1.6 66.52 perf-profile.children.cycles-pp.xfs_file_fsync
70.73 -1.5 69.20 perf-profile.children.cycles-pp.xfs_file_buffered_write
70.76 -1.5 69.23 perf-profile.children.cycles-pp.new_sync_write
70.84 -1.5 69.32 perf-profile.children.cycles-pp.vfs_write
70.85 -1.5 69.33 perf-profile.children.cycles-pp.ksys_write
71.12 -1.5 69.62 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
71.10 -1.5 69.61 perf-profile.children.cycles-pp.do_syscall_64
71.12 -1.5 69.64 perf-profile.children.cycles-pp.write
21.61 -0.9 20.73 perf-profile.children.cycles-pp.remove_wait_queue
23.44 -0.7 22.73 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.62 -0.2 0.43 ± 2% perf-profile.children.cycles-pp.xlog_write
0.20 ± 6% -0.1 0.05 perf-profile.children.cycles-pp.xlog_state_done_syncing
0.82 -0.1 0.71 perf-profile.children.cycles-pp.xlog_ioend_work
0.39 -0.1 0.29 ± 2% perf-profile.children.cycles-pp.xlog_state_release_iclog
1.01 -0.1 0.94 ± 2% perf-profile.children.cycles-pp.xlog_cil_push_work
0.23 ± 11% -0.1 0.16 ± 24% perf-profile.children.cycles-pp.xlog_grant_add_space
0.24 ± 17% -0.1 0.17 ± 19% perf-profile.children.cycles-pp.xfs_log_ticket_ungrant
0.13 -0.1 0.07 perf-profile.children.cycles-pp.xlog_state_get_iclog_space
0.68 -0.0 0.64 ± 2% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.43 -0.0 0.40 ± 3% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.60 -0.0 0.57 ± 2% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.42 -0.0 0.39 ± 2% perf-profile.children.cycles-pp.hrtimer_interrupt
0.05 +0.0 0.06 perf-profile.children.cycles-pp.__radix_tree_lookup
0.12 +0.0 0.13 perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.08 +0.0 0.09 perf-profile.children.cycles-pp.iomap_set_page_dirty
0.08 +0.0 0.09 perf-profile.children.cycles-pp.__list_add_valid
0.06 +0.0 0.07 perf-profile.children.cycles-pp.ttwu_do_wakeup
0.09 +0.0 0.10 ± 4% perf-profile.children.cycles-pp.iomap_set_range_uptodate
0.07 ± 6% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.copyin
0.21 ± 2% +0.0 0.22 perf-profile.children.cycles-pp.ttwu_queue_wakelist
0.09 ± 5% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.xfs_btree_lookup
0.08 +0.0 0.09 ± 5% perf-profile.children.cycles-pp.iov_iter_copy_from_user_atomic
0.09 ± 5% +0.0 0.10 perf-profile.children.cycles-pp.llseek
0.10 ± 4% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.queue_work_on
0.16 ± 5% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.update_rq_clock
0.13 ± 3% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.09 ± 5% +0.0 0.10 ± 4% perf-profile.children.cycles-pp.__queue_work
0.05 ± 8% +0.0 0.07 ± 11% perf-profile.children.cycles-pp.xfs_map_blocks
0.12 ± 3% +0.0 0.14 ± 5% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.11 ± 4% +0.0 0.13 perf-profile.children.cycles-pp.pagecache_get_page
0.11 +0.0 0.13 ± 3% perf-profile.children.cycles-pp.set_task_cpu
0.19 ± 4% +0.0 0.21 perf-profile.children.cycles-pp.__list_del_entry_valid
0.07 ± 6% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.insert_work
0.08 ± 6% +0.0 0.10 ± 4% perf-profile.children.cycles-pp.migrate_task_rq_fair
0.17 ± 2% +0.0 0.19 ± 2% perf-profile.children.cycles-pp.iomap_write_begin
0.16 ± 3% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.perf_trace_sched_wakeup_template
0.12 +0.0 0.14 perf-profile.children.cycles-pp.grab_cache_page_write_begin
0.28 +0.0 0.30 ± 2% perf-profile.children.cycles-pp.pick_next_task_fair
0.11 ± 4% +0.0 0.13 perf-profile.children.cycles-pp.xfs_trans_committed_bulk
0.09 ± 5% +0.0 0.11 perf-profile.children.cycles-pp.xfs_buffered_write_iomap_begin
0.16 +0.0 0.19 ± 6% perf-profile.children.cycles-pp.update_cfs_group
0.14 ± 3% +0.0 0.17 ± 4% perf-profile.children.cycles-pp.__switch_to_asm
0.17 ± 4% +0.0 0.20 ± 7% perf-profile.children.cycles-pp.xfs_inode_item_format
0.15 +0.0 0.18 ± 4% perf-profile.children.cycles-pp.iomap_write_end
0.30 +0.0 0.33 perf-profile.children.cycles-pp.select_idle_cpu
0.19 ± 4% +0.0 0.22 ± 2% perf-profile.children.cycles-pp.xfs_bmap_add_extent_unwritten_real
0.17 ± 4% +0.0 0.20 ± 2% perf-profile.children.cycles-pp.xlog_cil_process_committed
0.17 ± 4% +0.0 0.20 ± 2% perf-profile.children.cycles-pp.xlog_cil_committed
1.25 +0.0 1.29 perf-profile.children.cycles-pp.__wake_up_common_lock
0.26 +0.0 0.30 perf-profile.children.cycles-pp.available_idle_cpu
0.19 ± 4% +0.0 0.23 ± 3% perf-profile.children.cycles-pp.xfs_bmapi_convert_unwritten
0.48 +0.0 0.51 ± 2% perf-profile.children.cycles-pp.dequeue_entity
0.02 ±141% +0.0 0.06 ± 8% perf-profile.children.cycles-pp.xfs_iext_lookup_extent
0.15 ± 6% +0.0 0.19 ± 4% perf-profile.children.cycles-pp.poll_idle
0.11 ± 4% +0.0 0.15 ± 12% perf-profile.children.cycles-pp.submit_bio_checks
0.62 +0.0 0.66 ± 2% perf-profile.children.cycles-pp.xlog_state_do_callback
0.54 +0.0 0.58 ± 2% perf-profile.children.cycles-pp.schedule_idle
0.23 ± 3% +0.0 0.27 perf-profile.children.cycles-pp.xfs_bmapi_write
0.38 +0.0 0.42 perf-profile.children.cycles-pp.xlog_state_clean_iclog
0.37 ± 3% +0.0 0.42 perf-profile.children.cycles-pp.sched_ttwu_pending
0.40 +0.0 0.45 perf-profile.children.cycles-pp.select_idle_sibling
0.59 +0.0 0.64 perf-profile.children.cycles-pp.dequeue_task_fair
0.53 +0.0 0.58 ± 2% perf-profile.children.cycles-pp.select_task_rq_fair
0.00 +0.1 0.05 perf-profile.children.cycles-pp.__pagevec_release
0.56 +0.1 0.61 ± 3% perf-profile.children.cycles-pp.update_load_avg
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.kfree
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.submit_flushes
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.bio_alloc_bioset
0.41 +0.1 0.46 perf-profile.children.cycles-pp.brd_do_bvec
1.19 +0.1 1.24 perf-profile.children.cycles-pp.__wake_up_common
0.47 ± 2% +0.1 0.52 perf-profile.children.cycles-pp.enqueue_entity
0.59 +0.1 0.64 perf-profile.children.cycles-pp.enqueue_task_fair
0.61 ± 2% +0.1 0.68 perf-profile.children.cycles-pp.ttwu_do_activate
0.00 +0.1 0.07 ± 35% perf-profile.children.cycles-pp.blk_throtl_bio
0.44 +0.1 0.51 perf-profile.children.cycles-pp.flush_smp_call_function_from_idle
0.45 +0.1 0.52 perf-profile.children.cycles-pp.iomap_write_actor
0.44 +0.1 0.51 perf-profile.children.cycles-pp.schedule_timeout
2.79 +0.1 2.87 ± 2% perf-profile.children.cycles-pp.__flush_work
0.52 ± 2% +0.1 0.62 perf-profile.children.cycles-pp.wait_for_completion
0.39 ± 2% +0.1 0.49 perf-profile.children.cycles-pp.autoremove_wake_function
0.49 ± 3% +0.1 0.58 perf-profile.children.cycles-pp.prepare_to_wait_event
0.56 +0.1 0.66 perf-profile.children.cycles-pp.iomap_apply
0.56 +0.1 0.66 perf-profile.children.cycles-pp.iomap_file_buffered_write
1.32 +0.1 1.43 perf-profile.children.cycles-pp.schedule
0.49 +0.1 0.61 perf-profile.children.cycles-pp.swake_up_locked
0.54 +0.1 0.67 perf-profile.children.cycles-pp.complete
1.00 ± 4% +0.1 1.13 ± 5% perf-profile.children.cycles-pp.brd_submit_bio
0.71 +0.1 0.85 perf-profile.children.cycles-pp.md_submit_flush_data
1.83 +0.2 1.98 perf-profile.children.cycles-pp.__schedule
4.29 +0.2 4.45 perf-profile.children.cycles-pp.process_one_work
1.17 ± 3% +0.2 1.33 ± 4% perf-profile.children.cycles-pp.iomap_submit_ioend
1.80 +0.2 1.99 perf-profile.children.cycles-pp.try_to_wake_up
4.56 +0.2 4.76 perf-profile.children.cycles-pp.ret_from_fork
4.56 +0.2 4.76 perf-profile.children.cycles-pp.kthread
4.56 +0.2 4.76 perf-profile.children.cycles-pp.worker_thread
1.96 ± 5% +0.3 2.22 ± 5% perf-profile.children.cycles-pp.__filemap_fdatawrite_range
1.95 ± 5% +0.3 2.21 ± 5% perf-profile.children.cycles-pp.do_writepages
1.95 ± 5% +0.3 2.21 ± 5% perf-profile.children.cycles-pp.xfs_vm_writepages
2.09 ± 5% +0.3 2.36 ± 5% perf-profile.children.cycles-pp.file_write_and_wait_range
21.76 +1.1 22.82 perf-profile.children.cycles-pp.intel_idle
22.34 +1.1 23.43 perf-profile.children.cycles-pp.cpuidle_enter
22.34 +1.1 23.43 perf-profile.children.cycles-pp.cpuidle_enter_state
23.93 +1.2 25.15 perf-profile.children.cycles-pp.secondary_startup_64_no_verify
23.93 +1.2 25.15 perf-profile.children.cycles-pp.cpu_startup_entry
23.92 +1.2 25.15 perf-profile.children.cycles-pp.do_idle
23.64 +1.3 24.90 perf-profile.children.cycles-pp.start_secondary
10.12 +1.3 11.45 perf-profile.children.cycles-pp.xlog_wait_on_iclog
23.85 +2.0 25.87 perf-profile.children.cycles-pp._raw_spin_lock_irq
22.33 +2.1 24.47 perf-profile.children.cycles-pp.md_flush_request
22.37 +2.1 24.51 perf-profile.children.cycles-pp.submit_bio_wait
22.38 +2.1 24.53 perf-profile.children.cycles-pp.blkdev_issue_flush
22.43 +2.2 24.59 perf-profile.children.cycles-pp.raid0_make_request
22.53 +2.2 24.70 perf-profile.children.cycles-pp.md_handle_request
22.62 +2.2 24.80 perf-profile.children.cycles-pp.md_submit_bio
23.73 +2.3 26.07 perf-profile.children.cycles-pp.submit_bio
23.75 +2.3 26.09 perf-profile.children.cycles-pp.submit_bio_noacct
18.82 +4.2 23.01 perf-profile.children.cycles-pp.xlog_cil_force_lsn
32.56 +6.2 38.77 perf-profile.children.cycles-pp.xfs_log_force_lsn
65.90 -2.2 63.69 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.22 ± 9% -0.1 0.16 ± 24% perf-profile.self.cycles-pp.xlog_grant_add_space
0.21 ± 13% -0.1 0.15 ± 16% perf-profile.self.cycles-pp.xfs_log_ticket_ungrant
0.08 +0.0 0.09 perf-profile.self.cycles-pp.__list_add_valid
0.06 +0.0 0.07 perf-profile.self.cycles-pp.write
0.20 ± 2% +0.0 0.21 perf-profile.self.cycles-pp.menu_select
0.11 ± 4% +0.0 0.12 perf-profile.self.cycles-pp.xfs_log_commit_cil
0.09 +0.0 0.10 ± 4% perf-profile.self.cycles-pp.iomap_set_range_uptodate
0.07 ± 6% +0.0 0.09 ± 5% perf-profile.self.cycles-pp.insert_work
0.07 +0.0 0.08 ± 5% perf-profile.self.cycles-pp.memcpy_erms
0.07 ± 7% +0.0 0.08 perf-profile.self.cycles-pp.perf_trace_sched_wakeup_template
0.06 +0.0 0.07 ± 6% perf-profile.self.cycles-pp.flush_smp_call_function_from_idle
0.11 ± 4% +0.0 0.12 ± 3% perf-profile.self.cycles-pp.try_to_wake_up
0.09 ± 5% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.__switch_to
0.08 ± 12% +0.0 0.09 ± 10% perf-profile.self.cycles-pp.xfs_inode_item_format
0.07 ± 6% +0.0 0.09 perf-profile.self.cycles-pp.xlog_cil_force_lsn
0.06 ± 8% +0.0 0.07 ± 6% perf-profile.self.cycles-pp.prepare_to_wait_event
0.16 +0.0 0.18 ± 5% perf-profile.self.cycles-pp.update_cfs_group
0.18 ± 2% +0.0 0.21 perf-profile.self.cycles-pp.__list_del_entry_valid
0.14 ± 3% +0.0 0.17 ± 4% perf-profile.self.cycles-pp.__switch_to_asm
0.36 ± 2% +0.0 0.40 perf-profile.self.cycles-pp.__schedule
0.33 +0.0 0.37 perf-profile.self.cycles-pp.brd_do_bvec
0.30 +0.0 0.33 ± 5% perf-profile.self.cycles-pp.update_load_avg
0.26 ± 3% +0.0 0.30 perf-profile.self.cycles-pp.available_idle_cpu
0.14 ± 6% +0.0 0.18 ± 5% perf-profile.self.cycles-pp.poll_idle
0.57 ± 3% +0.0 0.61 ± 2% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.29 +0.0 0.34 ± 3% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.00 +0.1 0.05 perf-profile.self.cycles-pp.kmem_cache_alloc
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.kfree
0.00 +0.1 0.06 ± 8% perf-profile.self.cycles-pp.migrate_task_rq_fair
0.00 +0.1 0.06 ± 16% perf-profile.self.cycles-pp.percpu_counter_add_batch
21.76 +1.1 22.82 perf-profile.self.cycles-pp.intel_idle
0.04 ± 3% -13.2% 0.04 ± 4% perf-sched.sch_delay.avg.ms.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
0.02 ± 17% -41.4% 0.01 ± 14% perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
0.24 ± 27% -62.7% 0.09 ± 64% perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
0.11 ± 12% -59.5% 0.04 ± 16% perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.09 -40.1% 0.05 ± 3% perf-sched.sch_delay.avg.ms.io_schedule.wait_on_page_bit.wait_on_page_writeback.__filemap_fdatawait_range
0.08 ± 2% -19.0% 0.07 perf-sched.sch_delay.avg.ms.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio
0.01 ± 35% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__alloc_pages.pagecache_get_page.grab_cache_page_write_begin
0.01 ± 5% -23.8% 0.01 ± 4% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__flush_work.xlog_cil_force_lsn.xfs_log_force_lsn
0.09 ± 26% -69.2% 0.03 ±100% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.kmem_alloc_large
0.13 ± 68% -74.1% 0.03 ±102% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.07 ± 22% -41.1% 0.04 ± 36% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_read.xfs_file_fsync.xfs_file_buffered_write
0.04 ± 15% -41.0% 0.03 ± 15% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_buffered_write_iomap_begin
0.11 ± 58% -76.4% 0.03 ±107% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_file_buffered_write
0.04 ± 28% -63.3% 0.01 ± 17% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_vn_update_time
0.00 ± 28% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff
0.26 -40.4% 0.15 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.process_one_work.worker_thread.kthread
0.08 ± 26% -30.9% 0.06 ± 22% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.migrate_task_to.task_numa_migrate
0.14 ± 16% -83.5% 0.02 ±101% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.submit_bio_checks.submit_bio_noacct.raid0_make_request
0.03 ± 5% -19.0% 0.03 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.submit_bio_checks.submit_bio_noacct.submit_bio
0.11 ± 3% -63.9% 0.04 ± 4% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.wait_for_completion.__flush_work.xlog_cil_force_lsn
0.07 ± 6% -64.7% 0.02 ± 39% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.wait_for_completion.stop_two_cpus.migrate_swap
0.37 ± 2% -19.9% 0.29 ± 2% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.wait_for_completion_io_timeout.submit_bio_wait.blkdev_issue_flush
0.09 ± 29% -58.2% 0.04 ± 36% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.write_cache_pages.iomap_writepages.xfs_vm_writepages
0.03 ± 21% -41.4% 0.02 ± 17% perf-sched.sch_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork
0.03 ± 28% +80.9% 0.05 ± 17% perf-sched.sch_delay.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_free_eofblocks
0.19 ± 20% -84.8% 0.03 ±110% perf-sched.sch_delay.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_inactive_ifree
0.13 ± 13% -46.4% 0.07 ± 44% perf-sched.sch_delay.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_remove
0.07 ± 13% -61.3% 0.03 ± 37% perf-sched.sch_delay.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_trans_roll
0.19 ± 3% -36.5% 0.12 ± 7% perf-sched.sch_delay.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time
0.12 ± 15% -74.7% 0.03 ±117% perf-sched.sch_delay.avg.ms.rwsem_down_write_slowpath.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.05 ± 11% -46.9% 0.03 ± 4% perf-sched.sch_delay.avg.ms.rwsem_down_write_slowpath.xlog_cil_push_work.process_one_work.worker_thread
0.00 ± 10% +28.6% 0.01 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
0.05 ± 5% -52.9% 0.02 ± 4% perf-sched.sch_delay.avg.ms.schedule_timeout.__down.down.xfs_buf_lock
0.04 ± 57% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.__down.down.xlog_write_iclog
0.03 ± 31% -57.3% 0.01 ± 36% perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
0.03 ± 8% -25.5% 0.02 ± 10% perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
0.06 -25.7% 0.04 perf-sched.sch_delay.avg.ms.schedule_timeout.wait_for_completion.__flush_work.xlog_cil_force_lsn
0.03 ? 7% -15.6% 0.02 ? 5% perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
0.02 -16.7% 0.01 ? 3% perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork
0.02 ? 4% -27.5% 0.01 ? 3% perf-sched.sch_delay.avg.ms.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
0.00 -100.0% 0.00 perf-sched.sch_delay.avg.ms.xlog_state_get_iclog_space.xlog_write.xlog_cil_push_work.process_one_work
0.11 ? 2% -60.7% 0.04 ? 2% perf-sched.sch_delay.avg.ms.xlog_wait_on_iclog.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
0.06 -11.5% 0.05 perf-sched.sch_delay.avg.ms.xlog_wait_on_iclog.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync
5.52 ? 28% -51.9% 2.66 ? 47% perf-sched.sch_delay.max.ms.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
2.27 ? 6% -47.7% 1.19 ? 56% perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
2.36 ? 6% -42.5% 1.36 ? 9% perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.01 ? 56% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__alloc_pages.pagecache_get_page.grab_cache_page_write_begin
0.22 ? 67% -88.5% 0.03 ? 12% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__filemap_fdatawait_range.file_write_and_wait_range.xfs_file_fsync
2.00 ? 14% -35.0% 1.30 ? 30% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__flush_work.xlog_cil_force_lsn.xfs_log_force_lsn
2.24 ? 20% -75.6% 0.55 ? 73% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.kmem_alloc_large
0.74 ? 75% -85.2% 0.11 ? 82% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
2.18 ? 8% -43.3% 1.24 ? 35% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_read.xfs_file_fsync.xfs_file_buffered_write
1.72 ? 39% -71.1% 0.50 ? 72% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_read.xfs_log_commit_cil.__xfs_trans_commit
2.17 ? 3% -42.6% 1.25 ? 9% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_buffered_write_iomap_begin
1.60 ? 62% -86.3% 0.22 ?106% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_file_buffered_write
1.38 ? 37% -47.5% 0.72 ? 14% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_vn_update_time
0.00 ? 28% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff
1.97 ? 10% -30.2% 1.38 ? 5% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.mempool_alloc.md_submit_bio.submit_bio_noacct
1.85 ? 7% -31.2% 1.27 ? 2% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.process_one_work.worker_thread.kthread
1.59 ? 25% -89.9% 0.16 ? 93% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.submit_bio_checks.submit_bio_noacct.raid0_make_request
2.56 ? 9% -28.7% 1.82 ? 8% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.wait_for_completion.__flush_work.xlog_cil_force_lsn
2.31 ? 11% -62.5% 0.87 ? 15% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.write_cache_pages.iomap_writepages.xfs_vm_writepages
1.06 ? 20% -59.5% 0.43 ? 48% perf-sched.sch_delay.max.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_create
0.94 ? 40% -77.1% 0.22 ?129% perf-sched.sch_delay.max.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_inactive_ifree
1.61 ? 62% -51.7% 0.78 ? 15% perf-sched.sch_delay.max.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_trans_roll
0.62 ? 24% -75.1% 0.15 ?133% perf-sched.sch_delay.max.ms.rwsem_down_write_slowpath.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.11 ? 9% -61.9% 0.42 ? 8% perf-sched.sch_delay.max.ms.rwsem_down_write_slowpath.xlog_cil_push_work.process_one_work.worker_thread
0.01 ? 8% +135.3% 0.01 ? 7% perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
0.18 ? 28% -89.7% 0.02 ? 13% perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
2.54 ? 42% -46.1% 1.37 ? 26% perf-sched.sch_delay.max.ms.schedule_timeout.__down.down.xfs_buf_lock
0.33 ? 78% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.__down.down.xlog_write_iclog
0.70 ? 51% -78.2% 0.15 ? 65% perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
0.01 ? 14% -100.0% 0.00 perf-sched.sch_delay.max.ms.xlog_state_get_iclog_space.xlog_write.xlog_cil_push_work.process_one_work
4.73 ? 12% -46.7% 2.52 ? 61% perf-sched.sch_delay.max.ms.xlog_wait_on_iclog.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
5.49 ? 20% +24.0% 6.80 ? 3% perf-sched.sch_delay.max.ms.xlog_wait_on_iclog.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync
0.06 -23.4% 0.05 perf-sched.total_sch_delay.average.ms
1.45 -11.8% 1.28 perf-sched.total_wait_and_delay.average.ms
4218600 -22.8% 3256636 ? 9% perf-sched.total_wait_and_delay.count.ms
8577 ? 4% -20.4% 6826 ? 10% perf-sched.total_wait_and_delay.max.ms
1.39 -11.3% 1.23 perf-sched.total_wait_time.average.ms
8577 ? 4% -20.4% 6826 ? 10% perf-sched.total_wait_time.max.ms
0.78 +36.9% 1.07 ? 2% perf-sched.wait_and_delay.avg.ms.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
0.49 ?141% +1.3e+05% 618.51 ? 16% perf-sched.wait_and_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
788.02 ? 5% -32.2% 533.91 ? 4% perf-sched.wait_and_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
0.50 ?141% +1.2e+05% 618.53 ? 16% perf-sched.wait_and_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
272.43 -61.4% 105.13 ? 6% perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.34 -44.4% 0.19 ? 2% perf-sched.wait_and_delay.avg.ms.io_schedule.wait_on_page_bit.wait_on_page_writeback.__filemap_fdatawait_range
0.83 +13.0% 0.94 perf-sched.wait_and_delay.avg.ms.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio
127.64 ? 22% +59.7% 203.80 ? 15% perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
20.10 ?101% -87.9% 2.43 ? 95% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.__alloc_pages.alloc_pages_vma.shmem_alloc_page
4.68 ? 14% -79.3% 0.97 ?141% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_write_begin
5.34 ? 29% -42.3% 3.08 ? 10% perf-sched.wait_and_delay.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_iomap_write_unwritten
3.68 -47.5% 1.93 ? 2% perf-sched.wait_and_delay.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_trans_roll
213.02 ? 3% +33.6% 284.56 ? 21% perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
7.00 ? 2% -12.6% 6.12 ? 7% perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
0.54 -42.8% 0.31 perf-sched.wait_and_delay.avg.ms.schedule_timeout.wait_for_completion.__flush_work.xlog_cil_force_lsn
487.35 ? 3% -17.6% 401.38 ? 4% perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
4.28 ? 2% -23.3% 3.28 perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork
0.92 -12.7% 0.80 perf-sched.wait_and_delay.avg.ms.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
0.97 -71.8% 0.27 perf-sched.wait_and_delay.avg.ms.xlog_wait_on_iclog.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
20.00 -26.7% 14.67 ? 12% perf-sched.wait_and_delay.count.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
245148 -86.0% 34284 ? 10% perf-sched.wait_and_delay.count.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
0.67 ?141% +1600.0% 11.33 ? 8% perf-sched.wait_and_delay.count.devkmsg_read.vfs_read.ksys_read.do_syscall_64
0.67 ?141% +1600.0% 11.33 ? 8% perf-sched.wait_and_delay.count.do_syslog.part.0.kmsg_read.vfs_read
247.00 +119.8% 543.00 ? 3% perf-sched.wait_and_delay.count.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
168.00 ? 70% +178.4% 467.67 ? 4% perf-sched.wait_and_delay.count.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
251.67 ? 12% -73.8% 66.00 ?141% perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
201465 -21.1% 158976 ? 9% perf-sched.wait_and_delay.count.io_schedule.wait_on_page_bit.wait_on_page_writeback.__filemap_fdatawait_range
808550 -16.0% 679238 ? 9% perf-sched.wait_and_delay.count.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio
1320 ? 21% -56.4% 576.33 ? 20% perf-sched.wait_and_delay.count.pipe_read.new_sync_read.vfs_read.ksys_read
27.67 ? 14% -43.4% 15.67 ? 23% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
8.33 ? 11% -88.0% 1.00 ?141% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_write_begin
7471 -53.7% 3455 ? 10% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.process_one_work.worker_thread.kthread
969.67 -20.1% 774.33 ? 12% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr
246.33 ? 7% -21.7% 193.00 ? 15% perf-sched.wait_and_delay.count.rcu_gp_kthread.kthread.ret_from_fork
118.33 ? 8% +18.9% 140.67 ? 4% perf-sched.wait_and_delay.count.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_trans_roll
1035 -37.9% 643.67 ? 10% perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
39.33 ? 2% -27.1% 28.67 ? 11% perf-sched.wait_and_delay.count.schedule_timeout.kcompactd.kthread.ret_from_fork
1217352 -15.4% 1030088 ? 9% perf-sched.wait_and_delay.count.schedule_timeout.wait_for_completion.__flush_work.xlog_cil_force_lsn
206.67 -25.2% 154.67 ? 9% perf-sched.wait_and_delay.count.schedule_timeout.xfsaild.kthread.ret_from_fork
1994 ? 3% -13.6% 1722 ? 13% perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork
470869 -13.6% 406938 ? 9% perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork
517416 -27.7% 374206 ? 9% perf-sched.wait_and_delay.count.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
241356 -86.0% 33853 ? 10% perf-sched.wait_and_delay.count.xlog_wait_on_iclog.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
8.75 ? 23% -44.0% 4.90 ? 16% perf-sched.wait_and_delay.max.ms.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
0.98 ?141% +7e+05% 6824 ? 10% perf-sched.wait_and_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
0.99 ?141% +6.9e+05% 6824 ? 10% perf-sched.wait_and_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read
999.86 -66.7% 333.06 ?141% perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
13.31 ? 16% -39.9% 8.00 ? 16% perf-sched.wait_and_delay.max.ms.io_schedule.wait_on_page_bit.wait_on_page_writeback.__filemap_fdatawait_range
90.59 ? 72% -72.0% 25.40 ? 12% perf-sched.wait_and_delay.max.ms.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio
1013 +573.9% 6826 ? 10% perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
85.23 ?123% -94.7% 4.51 ? 88% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.__alloc_pages.alloc_pages_vma.shmem_alloc_page
174.15 ? 55% -78.9% 36.75 ? 95% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
13.38 ? 19% -84.0% 2.14 ?141% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_write_begin
63.82 ? 16% +3174.1% 2089 ?132% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.process_one_work.worker_thread.kthread
6.62 ? 11% -20.9% 5.24 ? 2% perf-sched.wait_and_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork
500.62 +1181.3% 6414 ? 11% perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
8033 ? 10% -40.4% 4787 ? 22% perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
8134 ? 4% -34.5% 5325 ? 7% perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork
0.74 +39.6% 1.04 ? 2% perf-sched.wait_time.avg.ms.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
1.32 ? 8% +46932.4% 618.48 ? 16% perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
787.97 ? 5% -32.2% 533.88 ? 4% perf-sched.wait_time.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
1.33 ? 8% +46567.6% 618.50 ? 16% perf-sched.wait_time.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
272.41 -61.4% 105.12 ? 6% perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.13 ? 24% -41.9% 0.08 ? 45% perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown]
0.25 -45.6% 0.14 perf-sched.wait_time.avg.ms.io_schedule.wait_on_page_bit.wait_on_page_writeback.__filemap_fdatawait_range
0.75 +16.4% 0.88 perf-sched.wait_time.avg.ms.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio
127.63 ? 22% +59.7% 203.79 ? 15% perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
20.10 ?101% -87.5% 2.51 ? 89% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__alloc_pages.alloc_pages_vma.shmem_alloc_page
0.05 ?109% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__alloc_pages.pagecache_get_page.grab_cache_page_write_begin
0.04 ? 3% -33.6% 0.03 ? 8% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__filemap_fdatawait_range.file_write_and_wait_range.xfs_file_fsync
0.70 -38.4% 0.43 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__flush_work.xlog_cil_force_lsn.xfs_log_force_lsn
1.40 ? 41% -66.0% 0.48 ? 86% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.16 ? 12% -35.7% 0.10 ? 18% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_read.xfs_file_fsync.xfs_file_buffered_write
0.20 ? 37% -66.6% 0.07 ? 37% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_read.xfs_log_commit_cil.__xfs_trans_commit
0.15 ? 15% -39.6% 0.09 ? 6% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_buffered_write_iomap_begin
0.27 ? 45% -67.2% 0.09 ? 39% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_file_buffered_write
0.19 ? 16% -58.3% 0.08 ? 29% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_vn_update_time
0.22 ? 24% -35.1% 0.14 ? 33% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.iomap_write_actor.iomap_apply.iomap_file_buffered_write
0.23 ? 16% -21.8% 0.18 ? 6% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_vn_update_time
0.08 ? 96% +345.4% 0.34 ? 66% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mempool_alloc.bio_alloc_bioset.submit_flushes
0.52 ? 2% +42.1% 0.73 ? 3% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mempool_alloc.md_submit_bio.submit_bio_noacct
0.00 ?141% +14412.5% 0.39 ?113% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mnt_want_write.do_unlinkat.do_syscall_64
4.68 ? 14% -76.7% 1.09 ?118% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_write_begin
0.81 ? 15% -70.7% 0.24 ? 15% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.migrate_task_to.task_numa_migrate
0.25 ? 10% -62.7% 0.09 ? 61% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.submit_bio_checks.submit_bio_noacct.raid0_make_request
0.33 ? 6% -39.7% 0.20 ? 4% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.wait_for_completion.__flush_work.xlog_cil_force_lsn
0.75 ? 17% -61.3% 0.29 ? 22% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.wait_for_completion.stop_two_cpus.migrate_swap
1.07 +11.9% 1.19 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.wait_for_completion_io_timeout.submit_bio_wait.blkdev_issue_flush
0.18 ? 25% -49.3% 0.09 ? 32% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.write_cache_pages.iomap_writepages.xfs_vm_writepages
0.22 ? 8% -31.9% 0.15 ? 25% perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_bmapi_convert_delalloc
0.22 ? 15% -46.3% 0.12 ? 21% perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_create
0.14 ? 8% -21.6% 0.11 ? 3% perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_free_eofblocks
0.31 ? 15% -76.0% 0.08 ? 42% perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_inactive_ifree
5.33 ? 29% -42.4% 3.07 ? 10% perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_iomap_write_unwritten
0.34 ? 14% -33.2% 0.23 ? 23% perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_remove
3.61 -47.3% 1.90 ? 2% perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_trans_roll
0.50 ? 4% -42.0% 0.29 ? 7% perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time
213.00 ? 3% +33.6% 284.53 ? 21% perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
0.65 ? 9% -28.3% 0.46 ? 8% perf-sched.wait_time.avg.ms.schedule_timeout.__down.down.xfs_buf_lock
0.33 ? 40% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.__down.down.xlog_write_iclog
6.97 ? 2% -12.6% 6.09 ? 7% perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
0.48 -44.9% 0.26 perf-sched.wait_time.avg.ms.schedule_timeout.wait_for_completion.__flush_work.xlog_cil_force_lsn
0.10 ? 20% -64.3% 0.04 ? 75% perf-sched.wait_time.avg.ms.schedule_timeout.wait_for_completion.stop_two_cpus.migrate_swap
487.33 ? 3% -17.6% 401.36 ? 4% perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
4.26 ? 2% -23.4% 3.27 perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork
0.90 -12.5% 0.79 perf-sched.wait_time.avg.ms.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
0.39 ?128% -100.0% 0.00 perf-sched.wait_time.avg.ms.xlog_state_get_iclog_space.xlog_write.xlog_cil_push_work.process_one_work
0.87 -73.2% 0.23 perf-sched.wait_time.avg.ms.xlog_wait_on_iclog.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
5.18 ? 8% -35.3% 3.35 ? 2% perf-sched.wait_time.max.ms.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
2.63 ? 8% +2.6e+05% 6824 ? 10% perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
2.65 ? 8% +2.6e+05% 6824 ? 10% perf-sched.wait_time.max.ms.do_syslog.part.0.kmsg_read.vfs_read
3.16 ? 38% -49.3% 1.60 ? 72% perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown]
999.83 -66.2% 337.99 ?138% perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.37 ? 16% -41.0% 4.35 ? 5% perf-sched.wait_time.max.ms.io_schedule.wait_on_page_bit.wait_on_page_writeback.__filemap_fdatawait_range
90.51 ? 72% -71.9% 25.40 ? 12% perf-sched.wait_time.max.ms.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio
1013 +573.9% 6826 ? 10% perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
85.23 ?123% -94.6% 4.59 ? 85% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__alloc_pages.alloc_pages_vma.shmem_alloc_page
0.08 ?121% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__alloc_pages.pagecache_get_page.grab_cache_page_write_begin
0.34 ? 24% -84.0% 0.05 ? 15% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__filemap_fdatawait_range.file_write_and_wait_range.xfs_file_fsync
4.02 ? 9% -44.5% 2.23 ? 36% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__flush_work.xlog_cil_force_lsn.xfs_log_force_lsn
6.69 ? 65% -66.8% 2.22 ? 86% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
2.84 -50.4% 1.41 ? 29% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_read.xfs_file_fsync.xfs_file_buffered_write
2.16 ? 34% -74.2% 0.56 ? 59% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_read.xfs_log_commit_cil.__xfs_trans_commit
3.30 ? 13% -42.0% 1.91 ? 23% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_buffered_write_iomap_begin
2.18 ? 19% -46.4% 1.17 ? 11% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_vn_update_time
174.15 ? 55% -78.9% 36.75 ? 95% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
2.15 ? 22% -36.2% 1.37 ? 10% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_vn_update_time
1.62 ? 31% -55.2% 0.72 ? 57% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mempool_alloc.bio_alloc_bioset.iomap_writepage_map
0.08 ? 96% +345.4% 0.34 ? 66% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mempool_alloc.bio_alloc_bioset.submit_flushes
0.00 ?141% +14412.5% 0.39 ?113% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mnt_want_write.do_unlinkat.do_syscall_64
13.38 ? 19% -81.9% 2.43 ?116% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_write_begin
63.48 ? 17% +3191.3% 2089 ?132% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.process_one_work.worker_thread.kthread
21.36 ? 34% -63.4% 7.81 ? 26% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.migrate_task_to.task_numa_migrate
0.85 ? 47% -49.3% 0.43 ? 69% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve
1.98 ? 29% -53.5% 0.92 ? 66% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.submit_bio_checks.submit_bio_noacct.raid0_make_request
3.17 ? 7% -25.2% 2.37 ? 14% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.wait_for_completion.__flush_work.xlog_cil_force_lsn
25.72 ? 7% -45.8% 13.95 ? 8% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.wait_for_completion.stop_two_cpus.migrate_swap
3.07 ? 8% -56.4% 1.34 ? 21% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.write_cache_pages.iomap_writepages.xfs_vm_writepages
1.62 ? 49% -68.9% 0.50 ? 69% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.xfs_trans_alloc.xfs_vn_update_time.file_update_time
1.23 ? 17% -25.0% 0.92 ? 26% perf-sched.wait_time.max.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_bmapi_convert_delalloc
1.40 ? 28% -54.1% 0.64 ? 14% perf-sched.wait_time.max.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_create
1.14 ? 34% -67.1% 0.37 ? 50% perf-sched.wait_time.max.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_inactive_ifree
0.40 ? 71% -70.6% 0.12 ? 95% perf-sched.wait_time.max.ms.rwsem_down_write_slowpath.path_openat.do_filp_open.do_sys_openat2
500.24 +1182.3% 6414 ? 11% perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
48.99 ? 43% -54.0% 22.53 ? 17% perf-sched.wait_time.max.ms.schedule_timeout.__down.down.xfs_buf_lock
1.29 ? 19% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.__down.down.xlog_write_iclog
0.10 ? 20% -64.3% 0.04 ? 75% perf-sched.wait_time.max.ms.schedule_timeout.wait_for_completion.stop_two_cpus.migrate_swap
8033 ? 10% -40.4% 4787 ? 22% perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork
8134 ? 4% -34.5% 5325 ? 7% perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork
1.88 ?135% -100.0% 0.00 perf-sched.wait_time.max.ms.xlog_state_get_iclog_space.xlog_write.xlog_cil_push_work.process_one_work

Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


---
0DAY/LKP+ Test Infrastructure                   Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected]             Intel Corporation

Thanks,
Oliver Sang


Attachments:
config-5.13.0-rc4-00087-ga79b28c284fd (176.94 kB)
job-script (8.61 kB)
job.yaml (5.85 kB)
reproduce (934.00 B)