Greetings,
FYI, we noticed a -56.3% regression of aim7.jobs-per-min due to commit:
commit: 7c60766161cfaf557cd6b4c5670bba8cae7ca15e ("xfs: parallelize inode inactivation")
https://git.kernel.org/cgit/linux/kernel/git/djwong/xfs-linux.git deferred-inactivation-5.13
in testcase: aim7
on test machine: 88 threads Intel(R) Xeon(R) Gold 6238M CPU @ 2.10GHz with 128G memory
with the following parameters:
disk: 4BRD_12G
md: RAID0
fs: xfs
test: disk_src
load: 3000
cpufreq_governor: performance
ucode: 0x5003006
test-description: AIM7 is a traditional UNIX system-level benchmark suite used to test and measure the performance of multiuser systems.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/
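For reference, the storage stack described by the disk/md/fs parameters above can be recreated by hand without the lkp harness. This is only a minimal sketch: it assumes "4BRD_12G" means four 12G brd ramdisks, and the device names and the /mnt/aim7 mount point are illustrative rather than taken from the job file.
modprobe brd rd_nr=4 rd_size=$((12 * 1024 * 1024))   # rd_size is in KiB, i.e. 12G per ramdisk
mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/ram0 /dev/ram1 /dev/ram2 /dev/ram3
mkfs.xfs -f /dev/md0
mkdir -p /mnt/aim7
mount /dev/md0 /mnt/aim7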
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <[email protected]>
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # the job file is attached to this email
bin/lkp split-job --compatible job.yaml
bin/lkp run compatible-job.yaml
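The profile further down shows most cycles in osq_lock under rwsem_down_write_slowpath, reached from both creat64 and unlink, i.e. writers contending on the parent directory lock while the deferred inactivation workers (xfs_inode_walk_ag -> xfs_inactive_inode) run in the background. A stand-alone sketch of that access pattern is below; it is not the actual AIM7 disk_src source, and the worker count, file count and /mnt/aim7 path are made up for illustration.
DIR=/mnt/aim7/shared
mkdir -p "$DIR"
for w in $(seq 1 32); do
    (
        for i in $(seq 1 10000); do
            : > "$DIR/f.$w.$i"             # create: takes the parent directory lock for write
            stat "$DIR/f.$w.$i" > /dev/null
            rm "$DIR/f.$w.$i"              # unlink: takes the same parent directory lock again
        done
    ) &
done
wait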
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/md/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/4BRD_12G/xfs/x86_64-rhel-8.3/3000/RAID0/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp9/disk_src/aim7/0x5003006
commit:
ebb422f9fa ("xfs: force inode garbage collection before fallocate when space is low")
7c60766161 ("xfs: parallelize inode inactivation")
ebb422f9fa8f8a37 7c60766161cfaf557cd6b4c5670
---------------- ---------------------------
%stddev %change %stddev
\ | \
103868 -56.3% 45381 aim7.jobs-per-min
241.43 +65.9% 400.54 aim7.time.elapsed_time
241.43 +65.9% 400.54 aim7.time.elapsed_time.max
4124 ? 2% +1073.4% 48399 aim7.time.involuntary_context_switches
177044 +46.9% 260081 ? 2% aim7.time.minor_page_faults
2151 ? 2% +416.6% 11113 aim7.time.system_time
31.61 +7.0% 33.83 aim7.time.user_time
6980799 -34.4% 4579992 aim7.time.voluntary_context_switches
87.82 -24.5% 66.27 iostat.cpu.idle
11.97 +180.7% 33.59 iostat.cpu.system
272.45 +58.2% 431.07 uptime.boot
20954 +21.8% 25530 uptime.idle
2245124 ? 6% -38.6% 1379177 ? 9% numa-numastat.node0.local_node
2286679 ? 6% -38.7% 1400987 ? 11% numa-numastat.node0.numa_hit
2332655 ? 6% -28.4% 1670783 ? 7% numa-numastat.node1.local_node
2370607 ? 5% -27.1% 1728538 ? 8% numa-numastat.node1.numa_hit
88.29 -21.3 66.97 mpstat.cpu.all.idle%
0.86 ? 4% +0.1 0.99 mpstat.cpu.all.irq%
0.12 -0.0 0.10 mpstat.cpu.all.soft%
10.51 ? 2% +21.3 31.79 mpstat.cpu.all.sys%
0.23 -0.1 0.15 mpstat.cpu.all.usr%
3.117e+08 ?145% -77.4% 70319812 cpuidle.C1.time
6145751 ? 88% -68.0% 1966383 cpuidle.C1.usage
1.298e+10 ? 22% +57.6% 2.046e+10 ? 14% cpuidle.C1E.time
34481454 ? 20% +69.2% 58331548 ? 7% cpuidle.C1E.usage
935608 +78.3% 1668616 cpuidle.POLL.time
86525 ? 5% +46.4% 126681 ? 4% cpuidle.POLL.usage
87.33 -24.4% 66.00 vmstat.cpu.id
65157 -36.6% 41341 vmstat.io.bo
8066544 -73.8% 2117128 vmstat.memory.cache
1.148e+08 +11.0% 1.274e+08 vmstat.memory.free
9.67 ? 4% +205.2% 29.50 ? 5% vmstat.procs.r
94281 -3.3% 91165 vmstat.system.cs
180241 -1.5% 177616 vmstat.system.in
14321328 +11.3% 15933124 numa-vmstat.node0.nr_free_pages
6422 ? 7% -31.1% 4425 ? 14% numa-vmstat.node0.nr_mapped
846522 ? 6% -90.9% 76934 ? 2% numa-vmstat.node0.nr_slab_reclaimable
309310 ? 3% -83.7% 50333 ? 18% numa-vmstat.node0.nr_slab_unreclaimable
2056566 ? 3% -29.4% 1452854 ? 10% numa-vmstat.node0.numa_hit
2012375 ? 4% -31.2% 1384720 ? 11% numa-vmstat.node0.numa_local
14375891 +10.8% 15922291 numa-vmstat.node1.nr_free_pages
807319 ? 7% -90.7% 75387 ? 2% numa-vmstat.node1.nr_slab_reclaimable
310169 ? 3% -79.0% 65094 ? 14% numa-vmstat.node1.nr_slab_unreclaimable
2090249 ? 3% -21.7% 1636530 ? 8% numa-vmstat.node1.numa_hit
1902519 ? 5% -22.6% 1472650 ? 9% numa-vmstat.node1.numa_local
3387504 ? 6% -90.9% 307740 ? 2% numa-meminfo.node0.KReclaimable
25410 ? 7% -31.0% 17526 ? 14% numa-meminfo.node0.Mapped
57282380 +11.3% 63732312 numa-meminfo.node0.MemFree
8377326 ? 3% -77.0% 1927394 ? 11% numa-meminfo.node0.MemUsed
3387504 ? 6% -90.9% 307740 ? 2% numa-meminfo.node0.SReclaimable
1237091 ? 3% -83.7% 201330 ? 18% numa-meminfo.node0.SUnreclaim
4624596 ? 5% -89.0% 509071 ? 5% numa-meminfo.node0.Slab
3230602 ? 7% -90.7% 301545 ? 2% numa-meminfo.node1.KReclaimable
27912 ? 3% -17.4% 23042 ? 9% numa-meminfo.node1.Mapped
57500718 +10.8% 63689085 numa-meminfo.node1.MemFree
8513696 ? 3% -72.7% 2325329 ? 10% numa-meminfo.node1.MemUsed
3230602 ? 7% -90.7% 301545 ? 2% numa-meminfo.node1.SReclaimable
1240587 ? 3% -79.0% 260337 ? 14% numa-meminfo.node1.SUnreclaim
4471190 ? 6% -87.4% 561884 ? 5% numa-meminfo.node1.Slab
390475 +13.5% 443059 meminfo.Active
390323 +13.5% 442904 meminfo.Active(anon)
152449 +11.1% 169441 meminfo.AnonHugePages
390765 +14.6% 447876 meminfo.AnonPages
2454699 +25.9% 3089894 meminfo.Committed_AS
422163 +11.0% 468525 meminfo.Inactive
421286 +10.9% 467364 meminfo.Inactive(anon)
876.83 +32.3% 1160 meminfo.Inactive(file)
6614139 -90.8% 609309 meminfo.KReclaimable
49922 +22.6% 61197 meminfo.KernelStack
52957 ? 3% -23.9% 40326 meminfo.Mapped
1.148e+08 +11.0% 1.274e+08 meminfo.MemFree
16885322 -74.8% 4253077 meminfo.Memused
78130 ? 7% +42.5% 111359 ? 7% meminfo.PageTables
6614139 -90.8% 609309 meminfo.SReclaimable
2475426 -81.3% 461849 meminfo.SUnreclaim
421677 +9.9% 463502 meminfo.Shmem
9089566 -88.2% 1071159 meminfo.Slab
261955 -18.9% 212529 meminfo.VmallocUsed
17858400 -74.1% 4626435 meminfo.max_used_kB
97433 +13.6% 110699 proc-vmstat.nr_active_anon
97681 +14.6% 111920 proc-vmstat.nr_anon_pages
2849980 +11.1% 3165453 proc-vmstat.nr_dirty_background_threshold
5706930 +11.1% 6338648 proc-vmstat.nr_dirty_threshold
366334 +2.9% 376884 proc-vmstat.nr_file_pages
28695994 +11.0% 31855282 proc-vmstat.nr_free_pages
105403 +10.8% 116838 proc-vmstat.nr_inactive_anon
219.33 +31.8% 289.17 proc-vmstat.nr_inactive_file
49920 +22.5% 61159 proc-vmstat.nr_kernel_stack
13526 ? 3% -24.0% 10282 proc-vmstat.nr_mapped
19521 ? 7% +42.5% 27815 ? 7% proc-vmstat.nr_page_table_pages
105363 +10.0% 115894 proc-vmstat.nr_shmem
1654156 -90.8% 152315 proc-vmstat.nr_slab_reclaimable
619721 -81.4% 115426 proc-vmstat.nr_slab_unreclaimable
97433 +13.6% 110699 proc-vmstat.nr_zone_active_anon
105403 +10.8% 116838 proc-vmstat.nr_zone_inactive_anon
219.33 +31.8% 289.17 proc-vmstat.nr_zone_inactive_file
11573 ? 15% +655.2% 87401 ? 2% proc-vmstat.numa_hint_faults
4906 ? 19% +891.6% 48654 ? 3% proc-vmstat.numa_hint_faults_local
4694695 -32.6% 3165637 proc-vmstat.numa_hit
4615144 -33.1% 3086051 proc-vmstat.numa_local
12829 ? 60% +291.5% 50231 ? 26% proc-vmstat.numa_pages_migrated
145877 ? 12% +34.0% 195546 ? 5% proc-vmstat.numa_pte_updates
153269 -12.2% 134517 proc-vmstat.pgactivate
8492113 -44.3% 4727932 proc-vmstat.pgalloc_normal
1015325 +48.8% 1511305 proc-vmstat.pgfault
5991453 ? 10% -27.6% 4335465 proc-vmstat.pgfree
12829 ? 60% +291.5% 50231 ? 26% proc-vmstat.pgmigrate_success
15951508 +4.5% 16674034 proc-vmstat.pgpgout
47020 ? 2% +60.9% 75646 proc-vmstat.pgreuse
1811 ? 2% +2.6% 1858 proc-vmstat.unevictable_pgs_culled
317.34 ? 4% -41.4% 186.03 ? 8% sched_debug.cfs_rq:/.load_avg.avg
5745 ? 13% -40.8% 3400 ? 21% sched_debug.cfs_rq:/.load_avg.max
1062 ? 7% -41.9% 616.66 ? 11% sched_debug.cfs_rq:/.load_avg.stddev
569999 ? 4% +500.9% 3425306 sched_debug.cfs_rq:/.min_vruntime.avg
665110 ? 2% +439.5% 3588206 sched_debug.cfs_rq:/.min_vruntime.max
538264 ? 5% +504.4% 3253074 sched_debug.cfs_rq:/.min_vruntime.min
17948 ? 15% +515.7% 110507 ? 2% sched_debug.cfs_rq:/.min_vruntime.stddev
243.30 ? 7% -46.3% 130.76 ? 46% sched_debug.cfs_rq:/.removed.load_avg.max
172.21 ? 8% -18.7% 140.09 ? 9% sched_debug.cfs_rq:/.runnable_avg.stddev
17970 ? 15% +515.3% 110579 ? 2% sched_debug.cfs_rq:/.spread0.stddev
163.92 ? 7% -16.1% 137.57 ? 8% sched_debug.cfs_rq:/.util_avg.stddev
631.80 ? 19% -48.9% 322.90 ? 17% sched_debug.cfs_rq:/.util_est_enqueued.max
103.15 ? 27% -37.8% 64.16 ? 20% sched_debug.cfs_rq:/.util_est_enqueued.stddev
177765 ? 6% +34.2% 238536 ? 9% sched_debug.cpu.avg_idle.stddev
124729 ? 8% +68.1% 209635 sched_debug.cpu.clock.avg
124748 ? 8% +68.1% 209663 sched_debug.cpu.clock.max
124710 ? 8% +68.1% 209606 sched_debug.cpu.clock.min
11.06 ? 9% +48.7% 16.44 ? 5% sched_debug.cpu.clock.stddev
123662 ? 8% +67.8% 207488 sched_debug.cpu.clock_task.avg
123868 ? 8% +67.9% 207979 sched_debug.cpu.clock_task.max
118850 ? 9% +70.4% 202543 sched_debug.cpu.clock_task.min
609.19 ? 4% +17.5% 715.72 ? 4% sched_debug.cpu.clock_task.stddev
7318 ? 4% +32.9% 9728 sched_debug.cpu.curr->pid.max
1160 ? 4% +20.8% 1402 ? 7% sched_debug.cpu.curr->pid.stddev
129324 ? 8% +41.7% 183301 sched_debug.cpu.nr_switches.avg
88553 ? 9% -38.4% 54583 sched_debug.cpu.nr_switches.min
49447 ? 7% +151.3% 124272 sched_debug.cpu.nr_switches.stddev
16.30 ? 7% +74.5% 28.46 sched_debug.cpu.nr_uninterruptible.avg
75.42 ? 13% +589.5% 520.07 ? 11% sched_debug.cpu.nr_uninterruptible.max
-32.64 +1063.7% -379.86 sched_debug.cpu.nr_uninterruptible.min
21.71 ? 11% +1284.1% 300.50 ? 2% sched_debug.cpu.nr_uninterruptible.stddev
124711 ? 8% +68.1% 209605 sched_debug.cpu_clk
124217 ? 8% +68.3% 209111 sched_debug.ktime
125069 ? 8% +67.9% 209964 sched_debug.sched_clk
5922 ? 4% -34.6% 3875 ? 4% slabinfo.btrfs_ordered_extent.active_objs
5931 ? 4% -34.7% 3875 ? 4% slabinfo.btrfs_ordered_extent.num_objs
26425 ? 2% -27.3% 19209 ? 2% slabinfo.kmalloc-128.active_objs
826.50 ? 2% -27.3% 600.67 ? 2% slabinfo.kmalloc-128.active_slabs
26459 ? 2% -27.3% 19233 ? 2% slabinfo.kmalloc-128.num_objs
826.50 ? 2% -27.3% 600.67 ? 2% slabinfo.kmalloc-128.num_slabs
4128207 -93.4% 273094 slabinfo.kmalloc-16.active_objs
20515 -93.7% 1292 slabinfo.kmalloc-16.active_slabs
5252025 -93.7% 330905 slabinfo.kmalloc-16.num_objs
20515 -93.7% 1292 slabinfo.kmalloc-16.num_slabs
210283 -50.6% 103829 slabinfo.kmalloc-32.active_objs
1830 -55.7% 810.83 slabinfo.kmalloc-32.active_slabs
234315 -55.7% 103855 slabinfo.kmalloc-32.num_objs
1830 -55.7% 810.83 slabinfo.kmalloc-32.num_slabs
3323631 -93.7% 210791 slabinfo.kmalloc-512.active_objs
130137 -92.6% 9688 slabinfo.kmalloc-512.active_slabs
4164416 -92.6% 310053 slabinfo.kmalloc-512.num_objs
130137 -92.6% 9688 slabinfo.kmalloc-512.num_slabs
57834 -12.6% 50523 slabinfo.kmalloc-64.active_objs
908.67 -11.6% 803.17 slabinfo.kmalloc-64.active_slabs
58192 -11.6% 51440 slabinfo.kmalloc-64.num_objs
908.67 -11.6% 803.17 slabinfo.kmalloc-64.num_slabs
1915 -16.4% 1601 slabinfo.kmalloc-8k.active_objs
2119 -17.4% 1749 slabinfo.kmalloc-8k.num_objs
165857 -92.2% 13012 slabinfo.kmalloc-rcl-256.active_objs
5226 -92.0% 417.83 slabinfo.kmalloc-rcl-256.active_slabs
167271 -92.0% 13391 slabinfo.kmalloc-rcl-256.num_objs
5226 -92.0% 417.83 slabinfo.kmalloc-rcl-256.num_slabs
32234 ? 2% -69.9% 9711 ? 3% slabinfo.numa_policy.active_objs
520.00 ? 2% -70.0% 156.00 ? 3% slabinfo.numa_policy.active_slabs
32283 ? 2% -69.9% 9712 ? 3% slabinfo.numa_policy.num_objs
520.00 ? 2% -70.0% 156.00 ? 3% slabinfo.numa_policy.num_slabs
36871 +17.3% 43241 slabinfo.proc_inode_cache.active_objs
800.83 +14.0% 912.67 slabinfo.proc_inode_cache.active_slabs
38467 +13.9% 43824 slabinfo.proc_inode_cache.num_objs
800.83 +14.0% 912.67 slabinfo.proc_inode_cache.num_slabs
95975 -76.6% 22452 slabinfo.radix_tree_node.active_objs
2017 -78.6% 430.83 slabinfo.radix_tree_node.active_slabs
112988 -78.6% 24151 slabinfo.radix_tree_node.num_objs
2017 -78.6% 430.83 slabinfo.radix_tree_node.num_slabs
4629 ? 2% +12.4% 5202 slabinfo.sighand_cache.active_objs
4672 ? 2% +12.0% 5233 slabinfo.sighand_cache.num_objs
3394 +19.8% 4065 slabinfo.task_struct.active_objs
3400 +19.8% 4072 slabinfo.task_struct.active_slabs
3400 +19.8% 4072 slabinfo.task_struct.num_objs
3400 +19.8% 4072 slabinfo.task_struct.num_slabs
127276 ? 2% +18.3% 150509 slabinfo.vm_area_struct.active_objs
3201 ? 2% +18.3% 3787 slabinfo.vm_area_struct.active_slabs
128058 ? 2% +18.3% 151520 slabinfo.vm_area_struct.num_objs
3201 ? 2% +18.3% 3787 slabinfo.vm_area_struct.num_slabs
15270 +24.9% 19068 slabinfo.vmap_area.active_objs
16558 +15.2% 19076 slabinfo.vmap_area.num_objs
132222 -89.2% 14290 slabinfo.xfs_buf.active_objs
3891 -90.4% 375.17 slabinfo.xfs_buf.active_slabs
163464 -90.3% 15781 slabinfo.xfs_buf.num_objs
3891 -90.4% 375.17 slabinfo.xfs_buf.num_slabs
6003 ? 6% -34.9% 3907 ? 3% slabinfo.xfs_efi_item.active_objs
6011 ? 6% -35.0% 3907 ? 3% slabinfo.xfs_efi_item.num_objs
4223236 -92.9% 301290 slabinfo.xfs_ili.active_objs
125606 -92.6% 9278 slabinfo.xfs_ili.active_slabs
5275484 -92.6% 389732 slabinfo.xfs_ili.num_objs
125606 -92.6% 9278 slabinfo.xfs_ili.num_slabs
4222429 -92.9% 300876 slabinfo.xfs_inode.active_objs
164531 -92.7% 12056 slabinfo.xfs_inode.active_slabs
5265014 -92.7% 385835 slabinfo.xfs_inode.num_objs
164531 -92.7% 12056 slabinfo.xfs_inode.num_slabs
5.17 -38.8% 3.16 ? 2% perf-stat.i.MPKI
3.678e+09 +64.0% 6.032e+09 perf-stat.i.branch-instructions
0.76 -0.3 0.46 ? 2% perf-stat.i.branch-miss-rate%
25381601 -23.2% 19491257 perf-stat.i.branch-misses
38.91 -12.2 26.68 ? 2% perf-stat.i.cache-miss-rate%
31752204 -39.1% 19347426 perf-stat.i.cache-misses
87495104 -20.4% 69657677 perf-stat.i.cache-references
95465 -3.9% 91750 perf-stat.i.context-switches
1.47 +65.8% 2.43 perf-stat.i.cpi
3.047e+10 ? 2% +169.7% 8.217e+10 perf-stat.i.cpu-cycles
964.20 ? 5% +265.7% 3525 perf-stat.i.cpu-migrations
1004 +273.8% 3753 perf-stat.i.cycles-between-cache-misses
0.05 ? 2% -0.0 0.02 ? 6% perf-stat.i.dTLB-load-miss-rate%
1948405 ? 3% -38.3% 1201407 ? 5% perf-stat.i.dTLB-load-misses
4.429e+09 +77.6% 7.865e+09 perf-stat.i.dTLB-loads
0.01 -0.0 0.01 ? 5% perf-stat.i.dTLB-store-miss-rate%
146526 -39.1% 89272 ? 5% perf-stat.i.dTLB-store-misses
1.515e+09 -9.0% 1.38e+09 perf-stat.i.dTLB-stores
42.54 -7.4 35.19 perf-stat.i.iTLB-load-miss-rate%
5199356 -26.1% 3844466 perf-stat.i.iTLB-load-misses
6761807 ? 2% +5.2% 7115224 perf-stat.i.iTLB-loads
1.824e+10 +65.9% 3.025e+10 perf-stat.i.instructions
3708 ? 2% +98.1% 7348 perf-stat.i.instructions-per-iTLB-miss
0.74 -37.9% 0.46 perf-stat.i.ipc
2.20 ? 19% -36.0% 1.41 ? 14% perf-stat.i.major-faults
0.35 ? 2% +169.7% 0.93 perf-stat.i.metric.GHz
0.32 ? 4% +102.8% 0.65 ? 6% perf-stat.i.metric.K/sec
110.56 +57.7% 174.36 perf-stat.i.metric.M/sec
4061 -9.5% 3677 perf-stat.i.minor-faults
74.31 +3.1 77.43 perf-stat.i.node-load-miss-rate%
9945829 -41.5% 5822848 perf-stat.i.node-load-misses
2862763 -42.5% 1645180 perf-stat.i.node-loads
66.38 ? 2% +3.3 69.64 perf-stat.i.node-store-miss-rate%
4017305 -39.8% 2420317 perf-stat.i.node-store-misses
1661067 ? 3% -39.0% 1013524 perf-stat.i.node-stores
4064 -9.5% 3678 perf-stat.i.page-faults
4.78 -51.9% 2.30 perf-stat.overall.MPKI
0.69 -0.4 0.32 perf-stat.overall.branch-miss-rate%
36.28 -8.4 27.84 ? 2% perf-stat.overall.cache-miss-rate%
1.68 +62.0% 2.72 perf-stat.overall.cpi
968.73 ? 2% +338.9% 4252 perf-stat.overall.cycles-between-cache-misses
0.04 ? 3% -0.0 0.02 ? 5% perf-stat.overall.dTLB-load-miss-rate%
0.01 -0.0 0.01 ? 4% perf-stat.overall.dTLB-store-miss-rate%
43.46 -8.4 35.08 perf-stat.overall.iTLB-load-miss-rate%
3525 ? 2% +123.7% 7886 perf-stat.overall.instructions-per-iTLB-miss
0.60 -38.3% 0.37 perf-stat.overall.ipc
3.682e+09 +64.1% 6.04e+09 perf-stat.ps.branch-instructions
25306363 -23.0% 19487396 perf-stat.ps.branch-misses
31640953 -38.8% 19373001 perf-stat.ps.cache-misses
87205376 -20.2% 69625320 perf-stat.ps.cache-references
95039 -3.8% 91386 perf-stat.ps.context-switches
3.065e+10 ? 2% +168.7% 8.237e+10 perf-stat.ps.cpu-cycles
966.78 ? 5% +263.1% 3510 perf-stat.ps.cpu-migrations
1941597 ? 3% -38.1% 1201583 ? 5% perf-stat.ps.dTLB-load-misses
4.435e+09 +77.6% 7.875e+09 perf-stat.ps.dTLB-loads
145970 -38.9% 89228 ? 5% perf-stat.ps.dTLB-store-misses
1.511e+09 -8.9% 1.377e+09 perf-stat.ps.dTLB-stores
5180307 -25.9% 3840935 perf-stat.ps.iTLB-load-misses
6740199 ? 2% +5.5% 7108148 perf-stat.ps.iTLB-loads
1.826e+10 +65.9% 3.029e+10 perf-stat.ps.instructions
2.21 ? 19% -37.3% 1.38 ? 15% perf-stat.ps.major-faults
4049 -9.6% 3662 perf-stat.ps.minor-faults
9910990 -41.2% 5832160 perf-stat.ps.node-load-misses
2850049 -42.3% 1644855 perf-stat.ps.node-loads
4004179 -39.4% 2425107 perf-stat.ps.node-store-misses
1654526 ? 3% -38.7% 1013581 perf-stat.ps.node-stores
4052 -9.6% 3663 perf-stat.ps.page-faults
4.431e+12 +174.5% 1.217e+13 perf-stat.total.instructions
47.47 -4.1 43.35 perf-profile.calltrace.cycles-pp.creat64
47.44 -4.1 43.33 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.creat64
47.43 -4.1 43.33 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
47.42 -4.1 43.32 perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
47.42 -4.1 43.32 perf-profile.calltrace.cycles-pp.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
47.36 -4.1 43.28 perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
47.35 -4.1 43.28 perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64
42.84 -2.9 39.90 perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.path_openat.do_filp_open.do_sys_openat2.do_sys_open
40.49 -2.3 38.22 perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_slowpath.path_openat.do_filp_open.do_sys_openat2
3.01 -1.1 1.95 perf-profile.calltrace.cycles-pp.vfs_unlink.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
2.12 ? 2% -0.6 1.52 perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_down_write_slowpath.path_openat.do_filp_open.do_sys_openat2
2.47 -0.5 1.97 perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_down_write_slowpath.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.45 ? 3% +0.4 1.87 ? 4% perf-profile.calltrace.cycles-pp.ret_from_fork
1.45 ? 3% +0.4 1.87 ? 4% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.44 ? 45% +0.5 0.90 ? 7% perf-profile.calltrace.cycles-pp.xfs_ifree.xfs_inactive_ifree.xfs_inactive.xfs_inactive_inode.xfs_inode_walk_ag
1.26 ? 4% +0.5 1.74 ? 5% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
1.25 ? 4% +0.5 1.74 ? 5% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
40.14 +3.9 44.02 perf-profile.calltrace.cycles-pp.unlink
40.07 +3.9 43.98 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.unlink
40.04 +3.9 43.96 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
40.01 +3.9 43.94 perf-profile.calltrace.cycles-pp.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
36.63 +5.1 41.74 perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
33.99 +5.7 39.65 perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_slowpath.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
47.48 -4.1 43.36 perf-profile.children.cycles-pp.creat64
47.43 -4.1 43.33 perf-profile.children.cycles-pp.do_sys_openat2
47.43 -4.1 43.33 perf-profile.children.cycles-pp.do_sys_open
47.37 -4.1 43.29 perf-profile.children.cycles-pp.do_filp_open
47.36 -4.1 43.29 perf-profile.children.cycles-pp.path_openat
4.59 -1.1 3.49 perf-profile.children.cycles-pp.rwsem_spin_on_owner
3.01 -1.1 1.95 perf-profile.children.cycles-pp.vfs_unlink
2.94 -1.0 1.91 perf-profile.children.cycles-pp.xfs_remove
2.63 ? 3% -0.7 1.96 perf-profile.children.cycles-pp.xfs_log_commit_cil
2.64 -0.7 1.97 perf-profile.children.cycles-pp.__xfs_trans_commit
1.63 ? 2% -0.6 1.07 ? 2% perf-profile.children.cycles-pp.__xfs_dir3_data_check
1.63 ? 2% -0.6 1.07 ? 2% perf-profile.children.cycles-pp.xfs_dir3_data_check
1.34 -0.3 1.01 ? 2% perf-profile.children.cycles-pp.xlog_cil_insert_items
1.03 ? 2% -0.2 0.78 ? 3% perf-profile.children.cycles-pp.xfs_trans_read_buf_map
0.62 ? 3% -0.2 0.40 ? 2% perf-profile.children.cycles-pp.xfs_dir2_block_addname
0.56 ? 7% -0.2 0.35 ? 7% perf-profile.children.cycles-pp.xfs_dir2_block_lookup_int
0.77 -0.2 0.61 perf-profile.children.cycles-pp.xfs_buf_item_format
0.63 ? 10% -0.1 0.48 ? 9% perf-profile.children.cycles-pp.xfs_buf_get_map
0.49 ? 3% -0.1 0.35 ? 2% perf-profile.children.cycles-pp._raw_spin_lock
0.50 ? 3% -0.1 0.38 ? 4% perf-profile.children.cycles-pp.xfs_btree_lookup
0.26 ? 4% -0.1 0.14 ? 6% perf-profile.children.cycles-pp.xfs_dialloc_ag_update_inobt
0.59 ? 2% -0.1 0.47 ? 2% perf-profile.children.cycles-pp.xfs_buf_read_map
0.52 ? 4% -0.1 0.41 perf-profile.children.cycles-pp.xfs_buf_find
0.50 ? 2% -0.1 0.39 ? 2% perf-profile.children.cycles-pp.memcpy_erms
0.28 ? 9% -0.1 0.18 ? 7% perf-profile.children.cycles-pp.xfs_iunlink
0.27 ? 6% -0.1 0.17 ? 2% perf-profile.children.cycles-pp.xfs_dialloc_select_ag
0.26 ? 2% -0.1 0.16 ? 6% perf-profile.children.cycles-pp.__xstat64
0.24 ? 5% -0.1 0.14 ? 8% perf-profile.children.cycles-pp.__percpu_counter_compare
1.55 -0.1 1.46 perf-profile.children.cycles-pp.xfs_dir_ialloc
0.23 ? 4% -0.1 0.14 ? 6% perf-profile.children.cycles-pp.__percpu_counter_sum
0.23 ? 2% -0.1 0.15 ? 6% perf-profile.children.cycles-pp.__do_sys_newstat
0.21 ? 5% -0.1 0.13 ? 7% perf-profile.children.cycles-pp.vfs_statx
0.22 ? 2% -0.1 0.14 ? 3% perf-profile.children.cycles-pp.kmem_cache_alloc
0.19 ? 6% -0.1 0.11 ? 9% perf-profile.children.cycles-pp.xfs_iget
0.20 ? 3% -0.1 0.13 ? 5% perf-profile.children.cycles-pp.link_path_walk
0.17 ? 5% -0.1 0.10 ? 6% perf-profile.children.cycles-pp.xfsaild
0.17 ? 5% -0.1 0.10 ? 6% perf-profile.children.cycles-pp.xfsaild_push
0.20 ? 2% -0.1 0.14 ? 5% perf-profile.children.cycles-pp.xfs_trans_alloc
0.16 ? 5% -0.1 0.10 ? 12% perf-profile.children.cycles-pp.xfs_dir2_leaf_removename
0.09 ? 6% -0.1 0.03 ? 70% perf-profile.children.cycles-pp.xfs_inode_alloc
0.17 ? 8% -0.1 0.12 ? 6% perf-profile.children.cycles-pp.xfs_buf_item_init
0.19 ? 11% -0.1 0.14 ? 11% perf-profile.children.cycles-pp.xfs_trans_log_inode
0.21 ? 5% -0.1 0.16 ? 8% perf-profile.children.cycles-pp.xfs_inode_item_format
0.25 ? 4% -0.0 0.20 ? 3% perf-profile.children.cycles-pp.xfs_buf_item_size
0.14 ? 4% -0.0 0.09 perf-profile.children.cycles-pp.filename_lookup
0.15 ? 7% -0.0 0.10 ? 6% perf-profile.children.cycles-pp.xfs_dir2_node_addname
0.16 ? 6% -0.0 0.11 ? 3% perf-profile.children.cycles-pp.xfs_buf_rele
0.13 ? 5% -0.0 0.09 ? 5% perf-profile.children.cycles-pp.path_lookupat
0.12 ? 9% -0.0 0.07 ? 10% perf-profile.children.cycles-pp.xfs_inode_item_push
0.11 ? 4% -0.0 0.06 ? 7% perf-profile.children.cycles-pp.getname_flags
0.07 ? 6% -0.0 0.03 ? 70% perf-profile.children.cycles-pp.__alloc_file
0.10 ? 4% -0.0 0.06 ? 8% perf-profile.children.cycles-pp.__kmalloc
0.10 ? 7% -0.0 0.06 ? 11% perf-profile.children.cycles-pp.__close
0.15 ? 8% -0.0 0.11 ? 4% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.09 ? 8% -0.0 0.05 ? 45% perf-profile.children.cycles-pp.destroy_inode
0.11 ? 8% -0.0 0.07 ? 8% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.08 ? 7% -0.0 0.04 ? 45% perf-profile.children.cycles-pp.strncpy_from_user
0.11 ? 13% -0.0 0.08 ? 11% perf-profile.children.cycles-pp.xfs_dir2_leaf_addname
0.10 ? 8% -0.0 0.06 ? 7% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.10 ? 4% -0.0 0.07 ? 5% perf-profile.children.cycles-pp.inode_permission
0.08 ? 6% -0.0 0.04 ? 45% perf-profile.children.cycles-pp.alloc_empty_file
0.09 ? 4% -0.0 0.06 ? 11% perf-profile.children.cycles-pp.walk_component
0.11 ? 8% -0.0 0.09 ? 8% perf-profile.children.cycles-pp.xlog_cil_push_work
0.09 ? 5% -0.0 0.06 ? 11% perf-profile.children.cycles-pp.xfs_trans_alloc_icreate
0.09 ? 6% -0.0 0.06 ? 9% perf-profile.children.cycles-pp.xfs_dir2_leafn_remove
0.08 ? 5% -0.0 0.06 ? 9% perf-profile.children.cycles-pp.filename_parentat
0.10 ? 8% -0.0 0.07 ? 5% perf-profile.children.cycles-pp.xfs_dir2_leafn_lookup_for_entry
0.09 ? 9% -0.0 0.06 ? 7% perf-profile.children.cycles-pp.xfs_trans_log_buf
0.10 ? 7% -0.0 0.08 ? 6% perf-profile.children.cycles-pp.kmem_cache_free
0.08 ? 4% -0.0 0.05 ? 8% perf-profile.children.cycles-pp.path_parentat
0.09 ? 5% -0.0 0.06 ? 11% perf-profile.children.cycles-pp.rcu_do_batch
0.09 ? 7% -0.0 0.07 ? 10% perf-profile.children.cycles-pp.xlog_write
0.10 ? 5% -0.0 0.08 ? 9% perf-profile.children.cycles-pp.rcu_core
0.06 ? 6% -0.0 0.04 ? 44% perf-profile.children.cycles-pp.xfs_btree_update
0.17 ? 4% -0.0 0.15 ? 5% perf-profile.children.cycles-pp.schedule
0.10 ? 7% -0.0 0.08 ? 7% perf-profile.children.cycles-pp.xfs_buf_offset
0.09 ? 5% -0.0 0.08 ? 4% perf-profile.children.cycles-pp.rwsem_wake
0.07 ? 10% -0.0 0.05 ? 8% perf-profile.children.cycles-pp.xfs_inobt_update
0.06 ? 6% -0.0 0.05 perf-profile.children.cycles-pp.wake_up_q
0.10 ? 4% +0.0 0.14 ? 11% perf-profile.children.cycles-pp.menu_select
0.00 +0.1 0.06 ? 7% perf-profile.children.cycles-pp.xfs_inobt_btrec_to_irec
0.11 ? 20% +0.1 0.21 ? 14% perf-profile.children.cycles-pp.xfs_iunlink_remove
0.20 ? 6% +0.2 0.43 ? 2% perf-profile.children.cycles-pp.xfs_btree_increment
0.29 ? 4% +0.3 0.55 ? 7% perf-profile.children.cycles-pp.xfs_difree_inobt
0.52 ? 6% +0.4 0.91 ? 7% perf-profile.children.cycles-pp.xfs_ifree
1.45 ? 3% +0.4 1.87 ? 4% perf-profile.children.cycles-pp.kthread
1.45 ? 3% +0.4 1.87 ? 4% perf-profile.children.cycles-pp.ret_from_fork
1.26 ? 4% +0.5 1.74 ? 5% perf-profile.children.cycles-pp.worker_thread
1.25 ? 4% +0.5 1.74 ? 5% perf-profile.children.cycles-pp.process_one_work
0.86 ? 6% +0.5 1.41 ? 6% perf-profile.children.cycles-pp.xfs_inactive
0.88 ? 6% +0.6 1.43 ? 6% perf-profile.children.cycles-pp.xfs_inactive_inode
0.89 ? 6% +0.6 1.46 ? 5% perf-profile.children.cycles-pp.xfs_inode_walk_ag
0.75 ? 20% +0.6 1.33 ? 15% perf-profile.children.cycles-pp.xfs_inactive_ifree
0.70 ? 3% +0.6 1.28 ? 3% perf-profile.children.cycles-pp.xfs_check_agi_freecount
79.50 +2.2 81.66 perf-profile.children.cycles-pp.rwsem_down_write_slowpath
74.50 +3.4 77.89 perf-profile.children.cycles-pp.osq_lock
40.15 +3.9 44.02 perf-profile.children.cycles-pp.unlink
40.01 +3.9 43.94 perf-profile.children.cycles-pp.do_unlinkat
4.53 -1.1 3.46 perf-profile.self.cycles-pp.rwsem_spin_on_owner
1.11 ? 3% -0.4 0.74 ? 2% perf-profile.self.cycles-pp.__xfs_dir3_data_check
0.32 ? 3% -0.1 0.19 ? 8% perf-profile.self.cycles-pp.rwsem_down_write_slowpath
0.46 ? 3% -0.1 0.34 ? 3% perf-profile.self.cycles-pp._raw_spin_lock
0.49 -0.1 0.39 perf-profile.self.cycles-pp.memcpy_erms
0.20 ? 2% -0.1 0.14 ? 5% perf-profile.self.cycles-pp.xfs_log_commit_cil
0.15 ? 6% -0.1 0.10 ? 9% perf-profile.self.cycles-pp.__percpu_counter_sum
0.21 ? 3% -0.0 0.16 ? 2% perf-profile.self.cycles-pp.xfs_buf_item_format
0.16 ? 6% -0.0 0.11 ? 8% perf-profile.self.cycles-pp.xfs_buf_item_init
0.14 ? 3% -0.0 0.09 ? 7% perf-profile.self.cycles-pp.xfs_inode_item_format
0.15 ? 2% -0.0 0.11 ? 6% perf-profile.self.cycles-pp.xlog_cil_insert_items
0.12 ? 12% -0.0 0.09 ? 12% perf-profile.self.cycles-pp.xfs_verify_ino
0.15 ? 5% -0.0 0.11 ? 10% perf-profile.self.cycles-pp.xfs_buf_find
0.13 ? 7% -0.0 0.10 ? 6% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.07 ? 12% -0.0 0.05 perf-profile.self.cycles-pp.kmem_cache_alloc
0.08 ? 6% -0.0 0.06 ? 8% perf-profile.self.cycles-pp.kmem_cache_free
0.10 ? 6% -0.0 0.08 ? 7% perf-profile.self.cycles-pp.__list_del_entry_valid
0.08 ? 9% -0.0 0.06 ? 7% perf-profile.self.cycles-pp.xfs_buf_offset
0.06 ? 7% +0.0 0.10 ? 9% perf-profile.self.cycles-pp.cpuidle_enter_state
0.04 ? 44% +0.1 0.09 ? 5% perf-profile.self.cycles-pp.xfs_btree_check_sblock
0.00 +0.1 0.06 ? 6% perf-profile.self.cycles-pp.xfs_inobt_btrec_to_irec
0.06 ? 6% +0.1 0.13 ? 3% perf-profile.self.cycles-pp.xfs_btree_increment
0.09 ? 26% +0.1 0.22 ? 20% perf-profile.self.cycles-pp.__xfs_btree_check_sblock
74.08 +3.4 77.46 perf-profile.self.cycles-pp.osq_lock
37121 ? 4% +52.2% 56511 ? 7% softirqs.CPU0.RCU
34482 ? 2% +33.0% 45877 ? 2% softirqs.CPU0.SCHED
32842 ? 4% +61.7% 53104 ? 5% softirqs.CPU1.RCU
32329 ? 2% +36.2% 44025 softirqs.CPU1.SCHED
32915 ? 2% +60.5% 52837 ? 5% softirqs.CPU10.RCU
31769 ? 2% +36.6% 43401 ? 2% softirqs.CPU10.SCHED
30236 ? 7% +76.0% 53224 ? 5% softirqs.CPU11.RCU
32120 ? 2% +34.5% 43201 softirqs.CPU11.SCHED
33099 ? 3% +62.0% 53630 ? 5% softirqs.CPU12.RCU
32145 +33.6% 42941 softirqs.CPU12.SCHED
33816 ? 6% +56.7% 53000 ? 5% softirqs.CPU13.RCU
31094 ? 4% +39.4% 43348 ? 2% softirqs.CPU13.SCHED
32436 +59.5% 51745 ? 5% softirqs.CPU14.RCU
31779 ? 3% +35.2% 42956 softirqs.CPU14.SCHED
34001 +45.8% 49559 ? 4% softirqs.CPU15.RCU
31532 ? 4% +37.4% 43328 softirqs.CPU15.SCHED
34075 ? 2% +44.8% 49341 ? 4% softirqs.CPU16.RCU
31592 ? 4% +36.5% 43115 ? 2% softirqs.CPU16.SCHED
33635 ? 2% +45.5% 48931 ? 4% softirqs.CPU17.RCU
31374 ? 3% +36.1% 42691 ? 2% softirqs.CPU17.SCHED
33855 ? 2% +44.8% 49036 ? 4% softirqs.CPU18.RCU
30890 ? 7% +39.3% 43029 softirqs.CPU18.SCHED
34413 ? 2% +43.8% 49484 ? 4% softirqs.CPU19.RCU
31951 ? 3% +35.0% 43147 softirqs.CPU19.SCHED
32602 ? 2% +62.4% 52934 ? 4% softirqs.CPU2.RCU
32104 ? 3% +33.9% 42994 ? 2% softirqs.CPU2.SCHED
34271 +45.8% 49984 ? 4% softirqs.CPU20.RCU
31474 ? 2% +36.9% 43096 ? 2% softirqs.CPU20.SCHED
34236 +45.5% 49828 ? 4% softirqs.CPU21.RCU
31648 ? 3% +37.1% 43386 softirqs.CPU21.SCHED
33872 ? 2% +54.4% 52292 ? 2% softirqs.CPU22.RCU
30987 ? 5% +38.4% 42892 ? 3% softirqs.CPU22.SCHED
33191 ? 2% +53.8% 51034 ? 3% softirqs.CPU23.RCU
30446 ? 6% +38.4% 42138 ? 3% softirqs.CPU23.SCHED
33441 ? 2% +53.3% 51264 ? 2% softirqs.CPU24.RCU
30811 ? 5% +37.6% 42403 ? 3% softirqs.CPU24.SCHED
33565 ? 2% +52.5% 51180 ? 2% softirqs.CPU25.RCU
29711 ? 13% +41.2% 41957 ? 3% softirqs.CPU25.SCHED
33735 ? 2% +52.9% 51584 ? 2% softirqs.CPU26.RCU
30379 ? 5% +36.4% 41429 ? 4% softirqs.CPU26.SCHED
33758 ? 2% +52.9% 51602 softirqs.CPU27.RCU
29705 ? 6% +43.2% 42537 softirqs.CPU27.SCHED
32310 ? 3% +51.1% 48824 ? 3% softirqs.CPU28.RCU
30430 ? 6% +37.7% 41897 ? 2% softirqs.CPU28.SCHED
32732 ? 2% +51.6% 49612 ? 4% softirqs.CPU29.RCU
31577 ? 4% +33.2% 42057 ? 3% softirqs.CPU29.SCHED
32597 +60.6% 52361 ? 4% softirqs.CPU3.RCU
31587 ? 3% +36.5% 43124 ? 2% softirqs.CPU3.SCHED
33967 ? 3% +61.6% 54896 ? 2% softirqs.CPU30.RCU
31413 ? 4% +35.7% 42612 ? 3% softirqs.CPU30.SCHED
33024 ? 3% +63.7% 54067 ? 3% softirqs.CPU31.RCU
31213 ? 3% +34.0% 41822 ? 3% softirqs.CPU31.SCHED
33802 ? 3% +62.3% 54847 ? 2% softirqs.CPU32.RCU
30509 ? 4% +38.5% 42259 ? 2% softirqs.CPU32.SCHED
33814 ? 2% +61.0% 54453 ? 3% softirqs.CPU33.RCU
27916 ? 25% +52.3% 42518 softirqs.CPU33.SCHED
33997 ? 3% +63.5% 55578 ? 2% softirqs.CPU34.RCU
31451 ? 4% +34.0% 42130 ? 3% softirqs.CPU34.SCHED
33989 ? 2% +61.7% 54965 ? 2% softirqs.CPU35.RCU
28031 ? 22% +50.3% 42130 ? 3% softirqs.CPU35.SCHED
33747 ? 2% +57.4% 53121 ? 7% softirqs.CPU36.RCU
31368 ? 4% +34.7% 42256 ? 2% softirqs.CPU36.SCHED
33913 ? 2% +62.3% 55036 softirqs.CPU37.RCU
31284 ? 5% +34.3% 42023 ? 3% softirqs.CPU37.SCHED
33991 ? 3% +61.0% 54726 ? 3% softirqs.CPU38.RCU
30910 ? 5% +35.1% 41759 ? 2% softirqs.CPU38.SCHED
34163 ? 3% +59.5% 54506 ? 2% softirqs.CPU39.RCU
31625 ? 4% +32.8% 42002 ? 3% softirqs.CPU39.SCHED
32566 ? 2% +57.6% 51324 ? 11% softirqs.CPU4.RCU
31991 ? 2% +36.1% 43547 softirqs.CPU4.SCHED
33996 ? 2% +56.5% 53210 ? 6% softirqs.CPU40.RCU
31613 ? 4% +33.9% 42345 ? 2% softirqs.CPU40.SCHED
33662 ? 3% +61.6% 54393 softirqs.CPU41.RCU
30360 ? 8% +39.3% 42302 ? 2% softirqs.CPU41.SCHED
33450 ? 3% +63.9% 54821 ? 3% softirqs.CPU42.RCU
31039 ? 5% +35.1% 41925 ? 2% softirqs.CPU42.SCHED
34155 ? 2% +60.6% 54844 ? 3% softirqs.CPU43.RCU
30524 ? 7% +35.9% 41476 softirqs.CPU43.SCHED
27696 +34.4% 37213 ? 4% softirqs.CPU44.RCU
30874 ? 3% +35.7% 41886 ? 2% softirqs.CPU44.SCHED
31866 ? 6% +65.1% 52616 ? 6% softirqs.CPU45.RCU
31864 ? 9% +33.9% 42657 softirqs.CPU45.SCHED
32425 ? 2% +62.4% 52656 ? 5% softirqs.CPU46.RCU
31350 ? 5% +34.6% 42209 softirqs.CPU46.SCHED
32887 +60.8% 52896 ? 5% softirqs.CPU47.RCU
29897 ? 5% +43.1% 42791 softirqs.CPU47.SCHED
33282 ? 2% +54.9% 51545 ? 11% softirqs.CPU48.RCU
29970 ? 15% +43.6% 43046 softirqs.CPU48.SCHED
33587 +58.8% 53347 ? 5% softirqs.CPU49.RCU
27647 ? 25% +55.3% 42941 softirqs.CPU49.SCHED
32459 ? 5% +64.9% 53511 ? 6% softirqs.CPU5.RCU
31113 ? 4% +40.3% 43667 softirqs.CPU5.SCHED
32524 ? 2% +58.2% 51460 ? 5% softirqs.CPU50.RCU
30525 ? 4% +37.6% 42007 ? 3% softirqs.CPU50.SCHED
32138 ? 2% +58.4% 50916 ? 5% softirqs.CPU51.RCU
31034 ? 2% +35.0% 41905 ? 2% softirqs.CPU51.SCHED
33022 +60.9% 53128 ? 4% softirqs.CPU52.RCU
31176 ? 4% +36.9% 42689 ? 2% softirqs.CPU52.SCHED
32604 ? 2% +63.0% 53151 ? 4% softirqs.CPU53.RCU
30414 ? 6% +39.2% 42324 ? 2% softirqs.CPU53.SCHED
33253 ? 2% +59.4% 53018 ? 5% softirqs.CPU54.RCU
30930 ? 6% +36.9% 42354 softirqs.CPU54.SCHED
32019 ? 4% +66.4% 53272 ? 5% softirqs.CPU55.RCU
31951 ? 7% +34.5% 42969 softirqs.CPU55.SCHED
33287 ? 2% +59.9% 53214 ? 5% softirqs.CPU56.RCU
30810 ? 6% +40.1% 43150 softirqs.CPU56.SCHED
33317 +58.8% 52902 ? 5% softirqs.CPU57.RCU
31676 ? 4% +36.1% 43116 softirqs.CPU57.SCHED
32572 ? 5% +62.0% 52772 ? 5% softirqs.CPU58.RCU
30910 ? 5% +37.1% 42373 softirqs.CPU58.SCHED
33319 +59.9% 53262 ? 5% softirqs.CPU59.RCU
31009 ? 4% +39.1% 43147 softirqs.CPU59.SCHED
32354 ? 4% +55.8% 50399 ? 9% softirqs.CPU6.RCU
31566 ? 4% +37.0% 43238 softirqs.CPU6.SCHED
34225 ? 2% +37.3% 46982 ? 5% softirqs.CPU60.RCU
31194 ? 6% +36.9% 42700 ? 2% softirqs.CPU60.SCHED
33050 ? 2% +43.9% 47543 ? 3% softirqs.CPU61.RCU
31157 ? 4% +36.1% 42415 softirqs.CPU61.SCHED
33349 +42.2% 47408 ? 4% softirqs.CPU62.RCU
31662 ? 3% +35.9% 43025 ? 2% softirqs.CPU62.SCHED
34357 ? 5% +39.0% 47762 ? 4% softirqs.CPU63.RCU
31168 ? 4% +37.1% 42721 softirqs.CPU63.SCHED
34025 +41.0% 47974 ? 4% softirqs.CPU64.RCU
31864 ? 4% +35.0% 43007 softirqs.CPU64.SCHED
33747 +43.4% 48380 ? 4% softirqs.CPU65.RCU
31379 ? 4% +37.9% 43278 softirqs.CPU65.SCHED
34346 ? 2% +56.7% 53831 softirqs.CPU66.RCU
31418 ? 4% +34.0% 42105 softirqs.CPU66.SCHED
33707 ? 2% +53.4% 51721 ? 2% softirqs.CPU67.RCU
31090 ? 5% +33.6% 41547 softirqs.CPU67.SCHED
33964 ? 3% +50.2% 51028 ? 2% softirqs.CPU68.RCU
31449 ? 8% +33.9% 42125 ? 3% softirqs.CPU68.SCHED
33895 ? 3% +50.7% 51078 ? 3% softirqs.CPU69.RCU
30839 ? 6% +36.3% 42025 softirqs.CPU69.SCHED
32245 ? 2% +60.5% 51741 ? 5% softirqs.CPU7.RCU
31239 ? 4% +37.1% 42826 softirqs.CPU7.SCHED
33702 ? 2% +53.5% 51717 ? 2% softirqs.CPU70.RCU
31124 ? 5% +34.0% 41702 ? 4% softirqs.CPU70.SCHED
33969 ? 2% +53.6% 52162 ? 3% softirqs.CPU71.RCU
31707 ? 4% +35.1% 42835 softirqs.CPU71.SCHED
32460 ? 2% +51.8% 49264 ? 4% softirqs.CPU72.RCU
29659 ? 5% +39.6% 41407 softirqs.CPU72.SCHED
32999 ? 2% +50.6% 49685 ? 5% softirqs.CPU73.RCU
29714 ? 6% +39.6% 41481 ? 2% softirqs.CPU73.SCHED
33776 ? 2% +51.0% 51002 ? 2% softirqs.CPU74.RCU
28432 ? 14% +47.4% 41918 ? 2% softirqs.CPU74.SCHED
30930 ? 4% +60.9% 49766 ? 4% softirqs.CPU75.RCU
27676 ? 15% +49.6% 41407 ? 2% softirqs.CPU75.SCHED
31857 ? 2% +60.1% 51006 ? 3% softirqs.CPU76.RCU
29988 ? 8% +40.0% 41980 softirqs.CPU76.SCHED
31215 ? 4% +62.8% 50832 ? 4% softirqs.CPU77.RCU
30183 ? 4% +39.0% 41964 softirqs.CPU77.SCHED
31939 ? 2% +61.3% 51525 ? 2% softirqs.CPU78.RCU
30231 ? 6% +39.0% 42010 ? 2% softirqs.CPU78.SCHED
31269 ? 4% +63.0% 50971 ? 2% softirqs.CPU79.RCU
31090 ? 5% +35.8% 42228 softirqs.CPU79.SCHED
32650 ? 2% +62.1% 52925 ? 4% softirqs.CPU8.RCU
30902 ? 4% +40.4% 43373 softirqs.CPU8.SCHED
31577 ? 3% +60.3% 50633 ? 3% softirqs.CPU80.RCU
30222 ? 6% +39.7% 42226 ? 3% softirqs.CPU80.SCHED
31793 ? 2% +59.4% 50674 ? 2% softirqs.CPU81.RCU
30971 ? 6% +36.8% 42355 ? 2% softirqs.CPU81.SCHED
31400 ? 3% +60.0% 50231 ? 3% softirqs.CPU82.RCU
30874 ? 4% +36.7% 42213 softirqs.CPU82.SCHED
31868 ? 3% +59.2% 50722 ? 3% softirqs.CPU83.RCU
29835 ? 6% +41.3% 42166 ? 2% softirqs.CPU83.SCHED
32071 ? 3% +55.9% 49992 ? 3% softirqs.CPU84.RCU
31018 ? 4% +36.9% 42451 ? 2% softirqs.CPU84.SCHED
31458 ? 3% +60.3% 50431 softirqs.CPU85.RCU
30315 ? 7% +39.6% 42313 ? 3% softirqs.CPU85.SCHED
31563 ? 3% +61.4% 50956 ? 3% softirqs.CPU86.RCU
30603 ? 6% +38.9% 42501 ? 4% softirqs.CPU86.SCHED
32022 ? 2% +59.9% 51201 ? 3% softirqs.CPU87.RCU
29704 ? 6% +43.4% 42592 ? 2% softirqs.CPU87.SCHED
32551 ? 3% +61.7% 52641 ? 4% softirqs.CPU9.RCU
31772 ? 2% +35.3% 42987 softirqs.CPU9.SCHED
7874 ? 27% +113.2% 16791 ? 59% softirqs.NET_RX
2907639 +56.2% 4542804 ? 3% softirqs.RCU
2720415 +37.6% 3744489 softirqs.SCHED
44441 +65.8% 73673 softirqs.TIMER
0.00 ? 11% +226.3% 0.01 ? 66% perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.00 ? 55% +600.0% 0.02 ? 69% perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
0.00 ? 95% +9600.0% 0.19 ?100% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_dir_createname
0.00 ?123% +5200.0% 0.09 ?192% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_dir_removename
0.00 ? 68% +5128.6% 0.12 ?153% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_iunlink
0.00 ? 66% +3990.0% 0.07 ?109% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_read.xfs_log_commit_cil.__xfs_trans_commit
0.00 ? 71% +17431.6% 0.56 ?202% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write.vfs_unlink.do_unlinkat
0.00 ?137% +11164.3% 0.26 ?149% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_create
0.00 ?142% +8963.6% 0.17 ?104% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_iget
0.00 ? 82% +1150.0% 0.02 ? 44% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.dput.path_put.vfs_statx
0.00 ? 35% +381.3% 0.01 ? 61% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_buf_item_init._xfs_trans_bjoin
0.00 ? 28% +1457.9% 0.05 ?120% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inobt_init_common.xfs_inobt_init_cursor
0.00 ? 52% +1526.3% 0.05 ?115% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inode_alloc.xfs_iget
0.00 ? 7% +434.5% 0.03 ?159% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_inactive_ifree
0.00 ? 62% +20430.8% 0.44 ? 61% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_trans_alloc_icreate
0.00 ?223% +2925.0% 0.02 ?132% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xlog_ticket_alloc.xfs_log_reserve
0.00 ?142% +7618.2% 0.14 ?162% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.mnt_want_write.do_unlinkat.do_syscall_64
0.03 +14.4% 0.03 ? 2% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.process_one_work.worker_thread.kthread
0.02 ?204% +6172.3% 1.43 ?173% perf-sched.sch_delay.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_create
0.02 ?203% +18257.8% 4.13 ?202% perf-sched.sch_delay.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_remove
0.02 ? 38% +420.1% 0.12 ?107% perf-sched.sch_delay.avg.ms.rwsem_down_write_slowpath.path_openat.do_filp_open.do_sys_openat2
0.00 ? 12% +63.6% 0.01 perf-sched.sch_delay.avg.ms.rwsem_down_write_slowpath.xlog_cil_push_work.process_one_work.worker_thread
0.01 ? 8% +325.0% 0.02 ? 93% perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
0.01 ? 6% +37.8% 0.01 ? 8% perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
0.00 ? 7% +93.1% 0.01 ? 47% perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
0.01 ? 9% +57.6% 0.01 ? 14% perf-sched.sch_delay.avg.ms.schedule_timeout.wait_for_completion.__flush_work.xlog_cil_force_lsn
0.00 ? 7% +44.8% 0.01 ? 8% perf-sched.sch_delay.avg.ms.schedule_timeout.xfsaild.kthread.ret_from_fork
0.01 ? 3% +39.0% 0.01 ? 6% perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork
0.04 ?132% +3665.7% 1.50 ?113% perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.03 ?177% +5260.4% 1.67 ?159% perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
0.01 ? 18% +91.8% 0.02 ? 27% perf-sched.sch_delay.max.ms.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio
0.00 ? 71% +81905.9% 2.32 ?115% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_dir_createname
0.00 ? 74% +59122.2% 1.78 ?213% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_dir_removename
0.01 ? 80% +28850.0% 1.54 ?147% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_iunlink
0.00 ? 60% +21788.5% 0.95 ?166% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_read.xfs_log_commit_cil.__xfs_trans_commit
0.01 ? 71% +80317.9% 5.23 ?191% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_write.vfs_unlink.do_unlinkat
0.00 ?142% +2.2e+05% 4.08 ?109% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_iget
0.00 ? 71% +4641.2% 0.13 ? 66% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.dput.path_put.vfs_statx
0.00 ? 15% +1885.7% 0.09 ?113% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_buf_item_init._xfs_trans_bjoin
0.01 ? 72% +37595.7% 2.89 ?137% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inobt_init_common.xfs_inobt_init_cursor
0.01 ? 30% +22469.0% 1.58 ?151% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inode_alloc.xfs_iget
0.00 ? 82% +10603.7% 0.48 ?157% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inode_item_init.xfs_trans_ijoin
0.01 ? 17% +3009.4% 0.17 ?201% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_inactive_ifree
0.01 ? 70% +88230.0% 5.89 ? 61% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_trans_alloc_icreate
0.00 ?223% +5775.0% 0.04 ?132% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xlog_ticket_alloc.xfs_log_reserve
0.00 ?142% +45381.8% 0.83 ?179% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.mnt_want_write.do_unlinkat.do_syscall_64
0.00 ? 89% +896.6% 0.05 ?174% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.mnt_want_write.path_openat.do_filp_open
0.00 ? 25% +50218.2% 1.84 ?216% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve
2.17 ?222% +7861.2% 172.39 ?183% perf-sched.sch_delay.max.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_create
0.01 ? 13% +9517.6% 1.09 ?143% perf-sched.sch_delay.max.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_inactive_ifree
0.01 ? 25% +34.5% 0.01 ? 12% perf-sched.sch_delay.max.ms.rwsem_down_write_slowpath.xlog_cil_push_work.process_one_work.worker_thread
0.01 ? 44% +1600.0% 0.24 ?135% perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
0.14 ?141% +2138.0% 3.20 ?134% perf-sched.sch_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
7.70 ? 63% -75.8% 1.86 ?175% perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
2.64 ? 5% +133.6% 6.16 ? 20% perf-sched.sch_delay.max.ms.worker_thread.kthread.ret_from_fork
0.01 ? 32% +126.0% 0.02 ? 28% perf-sched.sch_delay.max.ms.xlog_wait_on_iclog.xfsaild_push.xfsaild.kthread
50.30 +23.8% 62.25 ? 2% perf-sched.total_wait_and_delay.average.ms
629065 -24.9% 472686 ? 2% perf-sched.total_wait_and_delay.count.ms
50.29 +23.7% 62.22 ? 2% perf-sched.total_wait_time.average.ms
2.23 +151.7% 5.62 ? 5% perf-sched.wait_and_delay.avg.ms.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio
22.65 ?223% +903.9% 227.34 ? 19% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.kmem_alloc_large
26.09 ?141% +1011.8% 290.09 ? 13% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_iunlink
25.68 ?141% +1215.1% 337.67 ? 31% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.down_read.xfs_log_commit_cil.__xfs_trans_commit
49.92 ?157% +442.4% 270.77 ? 62% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write.vfs_unlink.do_unlinkat
23.97 ?223% +1471.4% 376.65 ? 11% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.dput.do_unlinkat.do_syscall_64
33.17 ?165% +762.5% 286.08 ? 9% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.dput.path_openat.do_filp_open
18.03 ?223% +725.8% 148.92 ? 38% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.dput.path_put.vfs_statx
9.51 ?223% +3046.0% 299.31 ? 15% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_buf_item_init._xfs_trans_bjoin
17.93 ?223% +1301.2% 251.22 ? 15% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inode_alloc.xfs_iget
43.28 ?143% +547.7% 280.31 ? 70% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inode_item_init.xfs_trans_ijoin
31.63 ?152% +645.5% 235.83 ? 21% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_trans_alloc_icreate
0.02 ? 18% +83.2% 0.04 ? 12% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.mempool_alloc.bio_alloc_bioset._xfs_buf_ioapply
21.28 ?223% +862.4% 204.83 ? 52% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.mnt_want_write.path_openat.do_filp_open
0.29 ? 45% +371.4% 1.36 ? 26% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.process_one_work.worker_thread.kthread
0.08 ? 8% +221.5% 0.26 ? 18% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.submit_bio_checks.submit_bio_noacct.submit_bio
76.31 +83.3% 139.89 ? 6% perf-sched.wait_and_delay.avg.ms.rwsem_down_write_slowpath.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
79.84 ? 2% +198.1% 237.96 ? 4% perf-sched.wait_and_delay.avg.ms.rwsem_down_write_slowpath.path_openat.do_filp_open.do_sys_openat2
867.29 ? 4% -12.4% 760.12 ? 4% perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
24.85 ? 3% +134.3% 58.22 ? 6% perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork
6693 -56.7% 2901 ? 3% perf-sched.wait_and_delay.count.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio
6.33 ?223% +1705.3% 114.33 ? 9% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.kmem_alloc_large
0.17 ?223% +12800.0% 21.50 ? 6% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_dir_removename
1.67 ?157% +990.0% 18.17 ? 26% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_iunlink
2.00 ?165% +575.0% 13.50 ? 31% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.down_read.xfs_log_commit_cil.__xfs_trans_commit
2.00 ?144% +350.0% 9.00 ? 55% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.down_write.vfs_unlink.do_unlinkat
5.83 ?223% +3245.7% 195.17 ? 8% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.dput.do_unlinkat.do_syscall_64
9.00 ?167% +2574.1% 240.67 ? 14% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.dput.path_openat.do_filp_open
0.33 ?223% +3300.0% 11.33 ? 19% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.dput.path_put.vfs_statx
0.67 ?223% +2125.0% 14.83 ? 17% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_buf_item_init._xfs_trans_bjoin
2.00 ?223% +2016.7% 42.33 ? 10% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inode_alloc.xfs_iget
0.83 ?145% +620.0% 6.00 ? 19% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inode_item_init.xfs_trans_ijoin
3.50 ?152% +352.4% 15.83 ? 29% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_trans_alloc_icreate
18320 ? 3% -70.8% 5353 ? 12% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.mempool_alloc.bio_alloc_bioset._xfs_buf_ioapply
0.33 ?223% +1150.0% 4.17 ? 16% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.mnt_want_write.path_openat.do_filp_open
6616 -52.1% 3166 ? 4% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.process_one_work.worker_thread.kthread
6764 -53.0% 3178 ? 4% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.submit_bio_checks.submit_bio_noacct.submit_bio
114041 -72.6% 31242 perf-sched.wait_and_delay.count.rwsem_down_write_slowpath.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
79562 -51.5% 38567 perf-sched.wait_and_delay.count.rwsem_down_write_slowpath.path_openat.do_filp_open.do_sys_openat2
281.50 -25.9% 208.50 ? 44% perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
40812 -59.7% 16442 ? 6% perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork
980.20 ? 41% +81.1% 1775 ? 17% perf-sched.wait_and_delay.max.ms.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio
136.29 ?223% +680.9% 1064 ? 22% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.kmem_alloc_large
103.59 ?223% +885.2% 1020 ? 27% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_dir_removename
73.61 ?141% +1243.7% 989.15 ? 29% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_iunlink
156.13 ?167% +409.0% 794.66 ? 11% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.down_read.xfs_log_commit_cil.__xfs_trans_commit
156.22 ?167% +390.7% 766.59 ? 38% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.down_write.vfs_unlink.do_unlinkat
124.56 ?223% +790.3% 1108 ? 23% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.dput.do_unlinkat.do_syscall_64
171.72 ?172% +543.6% 1105 ? 23% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.dput.path_openat.do_filp_open
36.07 ?223% +1738.4% 663.04 ? 23% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.dput.path_put.vfs_statx
37.49 ?223% +2634.9% 1025 ? 18% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_buf_item_init._xfs_trans_bjoin
118.66 ?223% +705.2% 955.51 ? 14% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inode_alloc.xfs_iget
74.31 ?141% +1060.2% 862.16 ? 40% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inode_item_init.xfs_trans_ijoin
166.99 ?170% +481.4% 970.89 ? 18% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_trans_alloc_icreate
36.27 ?223% +1393.0% 541.57 ? 66% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.mnt_want_write.path_openat.do_filp_open
328.36 ? 63% +137.4% 779.43 ? 18% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.process_one_work.worker_thread.kthread
43.87 ? 41% +362.8% 203.03 ? 29% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.submit_bio_checks.submit_bio_noacct.submit_bio
330.79 ? 67% +244.3% 1138 ? 23% perf-sched.wait_and_delay.max.ms.rwsem_down_write_slowpath.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
330.80 ? 67% +246.2% 1145 ? 24% perf-sched.wait_and_delay.max.ms.rwsem_down_write_slowpath.path_openat.do_filp_open.do_sys_openat2
231.82 +297.8% 922.16 ? 2% perf-sched.wait_and_delay.max.ms.schedule_timeout.__down.down.xfs_buf_lock
0.56 ?223% +22592.9% 127.16 ? 39% perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
4.13 ?218% +1814.4% 79.14 ? 69% perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
2.23 +151.7% 5.62 ? 5% perf-sched.wait_time.avg.ms.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio
38.97 ?117% +483.1% 227.22 ? 19% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.kmem_alloc_large
0.00 ? 48% +5.6e+06% 259.01 ? 35% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_dir_createname
26.09 ?141% +1011.3% 289.97 ? 13% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_iunlink
25.68 ?141% +1214.6% 337.61 ? 31% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_read.xfs_log_commit_cil.__xfs_trans_commit
49.93 ?157% +441.2% 270.21 ? 62% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.vfs_unlink.do_unlinkat
0.00 ? 85% +7.8e+06% 311.88 ? 16% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_iget
0.00 ?223% +1.1e+06% 21.52 ?222% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.__fput.task_work_run
33.93 ?149% +1009.9% 376.56 ? 11% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.do_unlinkat.do_syscall_64
49.14 ? 92% +481.8% 285.93 ? 9% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.path_openat.do_filp_open
18.10 ?222% +722.7% 148.90 ? 38% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.path_put.vfs_statx
49.29 ?223% +312.1% 203.16 ? 89% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.terminate_walk.path_openat
0.00 ?142% +7.4e+06% 110.48 ?117% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.security_file_alloc.__alloc_file
16.42 ?132% +1723.1% 299.30 ? 15% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_buf_item_init._xfs_trans_bjoin
40.97 ? 79% +513.1% 251.17 ? 15% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inode_alloc.xfs_iget
50.78 ?116% +451.9% 280.22 ? 70% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inode_item_init.xfs_trans_ijoin
0.03 ? 9% +915.1% 0.31 ? 6% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_inactive_ifree
31.64 ?152% +644.0% 235.38 ? 21% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_trans_alloc_icreate
0.01 ?185% +3.4e+06% 187.05 ? 73% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xlog_ticket_alloc.xfs_log_reserve
0.02 ? 18% +83.2% 0.04 ? 12% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mempool_alloc.bio_alloc_bioset._xfs_buf_ioapply
16.56 ?163% +1155.7% 207.90 ? 36% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mnt_want_write.do_unlinkat.do_syscall_64
21.29 ?223% +862.1% 204.82 ? 52% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mnt_want_write.path_openat.do_filp_open
0.26 ? 50% +412.0% 1.33 ? 26% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.process_one_work.worker_thread.kthread
0.05 ? 15% +375.4% 0.23 ? 21% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.submit_bio_checks.submit_bio_noacct.submit_bio
24.75 ? 22% +1000.8% 272.48 ? 23% perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_create
0.03 +505.7% 0.19 ? 10% perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_inactive_ifree
28.70 ? 13% +840.1% 269.77 ? 18% perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_remove
76.28 +83.3% 139.83 ? 6% perf-sched.wait_time.avg.ms.rwsem_down_write_slowpath.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
79.82 ? 2% +198.0% 237.84 ? 4% perf-sched.wait_time.avg.ms.rwsem_down_write_slowpath.path_openat.do_filp_open.do_sys_openat2
3.62 +13.3% 4.10 ? 2% perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
23.57 ? 4% +33.2% 31.41 ? 3% perf-sched.wait_time.avg.ms.schedule_timeout.wait_for_completion.__flush_work.xlog_cil_force_lsn
19.89 ? 4% +67.4% 33.28 ? 2% perf-sched.wait_time.avg.ms.schedule_timeout.xfsaild.kthread.ret_from_fork
867.28 ? 4% -12.4% 760.11 ? 4% perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
24.84 ? 3% +134.3% 58.21 ? 6% perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork
1.09 ?223% +83496.7% 911.34 ? 16% perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
36.87 ?220% +2896.5% 1104 ? 22% perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
980.20 ? 41% +81.1% 1775 ? 17% perf-sched.wait_time.max.ms.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio
284.65 ? 88% +273.5% 1063 ? 22% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.kmem_alloc_large
0.01 ? 57% +1.3e+07% 809.92 ? 37% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_dir_createname
103.60 ?223% +885.1% 1020 ? 27% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_dir_removename
73.61 ?141% +1243.7% 989.13 ? 29% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_iunlink
156.14 ?167% +408.9% 794.64 ? 11% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_read.xfs_log_commit_cil.__xfs_trans_commit
156.24 ?167% +390.6% 766.59 ? 38% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write.vfs_unlink.do_unlinkat
0.00 ? 89% +2e+07% 854.06 ? 5% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_iget
0.00 ?223% +4.6e+06% 129.42 ?222% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.__fput.task_work_run
198.68 ?132% +458.1% 1108 ? 23% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.do_unlinkat.do_syscall_64
320.06 ? 68% +245.2% 1104 ? 23% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.path_openat.do_filp_open
36.19 ?222% +1731.9% 663.03 ? 23% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.path_put.vfs_statx
62.25 ?223% +668.6% 478.45 ? 87% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.terminate_walk.path_openat
0.00 ?145% +1.2e+07% 203.54 ? 80% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.security_file_alloc.__alloc_file
76.76 ?135% +1235.9% 1025 ? 18% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_buf_item_init._xfs_trans_bjoin
174.07 ? 76% +479.7% 1009 ? 33% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inobt_init_common.xfs_inobt_init_cursor
267.47 ? 80% +257.2% 955.50 ? 14% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inode_alloc.xfs_iget
104.17 ?100% +727.5% 862.02 ? 40% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inode_item_init.xfs_trans_ijoin
0.04 ? 31% +2120.0% 0.89 ? 23% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_inactive_ifree
167.00 ?170% +481.3% 970.86 ? 18% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_trans_alloc_icreate
0.01 ?185% +8.8e+06% 482.59 ? 64% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xlog_ticket_alloc.xfs_log_reserve
41.02 ?196% +1521.2% 664.95 ? 27% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mnt_want_write.do_unlinkat.do_syscall_64
36.28 ?223% +1392.6% 541.56 ? 66% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mnt_want_write.path_openat.do_filp_open
328.33 ? 63% +137.4% 779.38 ? 18% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.process_one_work.worker_thread.kthread
43.17 ? 44% +370.3% 203.03 ? 29% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.submit_bio_checks.submit_bio_noacct.submit_bio
280.51 ? 42% +295.3% 1108 ? 25% perf-sched.wait_time.max.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_create
0.07 ? 16% +2283.2% 1.56 ? 85% perf-sched.wait_time.max.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_inactive_ifree
307.54 ? 58% +251.4% 1080 ? 24% perf-sched.wait_time.max.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_remove
330.78 ? 67% +244.3% 1138 ? 23% perf-sched.wait_time.max.ms.rwsem_down_write_slowpath.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
330.79 ? 67% +244.9% 1141 ? 23% perf-sched.wait_time.max.ms.rwsem_down_write_slowpath.path_openat.do_filp_open.do_sys_openat2
231.82 +297.8% 922.15 ? 2% perf-sched.wait_time.max.ms.schedule_timeout.__down.down.xfs_buf_lock
10.64 ? 2% +32.5% 14.10 ? 7% perf-sched.wait_time.max.ms.xlog_wait_on_iclog.xfsaild_push.xfsaild.kthread
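
For skimming comparison rows like the ones above, a rough helper sketch (illustrative only; it assumes the plain-text row layout used in this report and is not part of the lkp tooling) that pulls out the metrics with the largest absolute %change:

    import re

    # one comparison row: base mean, optional "? N%" stddev, %change,
    # patched mean, optional "? N%" stddev, metric name
    ROW = re.compile(
        r"^\s*([0-9.e+]+)\s*(?:\?\s*\d+%)?"
        r"\s+([+-][0-9.e+]+)%"
        r"\s+([0-9.e+]+)\s*(?:\?\s*\d+%)?"
        r"\s+(\S+)\s*$"
    )

    def worst_changes(report_text, top=5):
        """Return the (abs %change, metric, %change) rows with the largest swing."""
        rows = []
        for line in report_text.splitlines():
            m = ROW.match(line)
            if m:
                base, change, patched, metric = m.groups()
                rows.append((abs(float(change)), metric, change + "%"))
        return sorted(rows, reverse=True)[:top]

    # usage against a saved copy of this report (the file name is hypothetical):
    # print(worst_changes(open("aim7-regression-report.txt").read()))
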
487.00 +65.2% 804.67 interrupts.9:IO-APIC.9-fasteoi.acpi
633281 ? 2% -74.0% 164497 interrupts.CAL:Function_call_interrupts
7417 ? 6% -63.3% 2719 ? 26% interrupts.CPU0.CAL:Function_call_interrupts
486401 +65.3% 804225 interrupts.CPU0.LOC:Local_timer_interrupts
273.00 ? 22% +327.0% 1165 ? 48% interrupts.CPU0.RES:Rescheduling_interrupts
17.00 ?145% +9841.2% 1690 ? 13% interrupts.CPU0.TLB:TLB_shootdowns
487.00 +65.2% 804.67 interrupts.CPU1.9:IO-APIC.9-fasteoi.acpi
7712 ? 18% -62.7% 2873 ? 19% interrupts.CPU1.CAL:Function_call_interrupts
486477 +65.3% 804218 interrupts.CPU1.LOC:Local_timer_interrupts
250.67 ? 33% +289.8% 977.17 ? 59% interrupts.CPU1.RES:Rescheduling_interrupts
4.00 ? 54% +6358.3% 258.33 ? 18% interrupts.CPU1.TLB:TLB_shootdowns
7189 ? 5% -68.0% 2298 ? 24% interrupts.CPU10.CAL:Function_call_interrupts
486301 +65.4% 804244 interrupts.CPU10.LOC:Local_timer_interrupts
3823 -25.2% 2858 ? 27% interrupts.CPU10.NMI:Non-maskable_interrupts
3823 -25.2% 2858 ? 27% interrupts.CPU10.PMI:Performance_monitoring_interrupts
225.00 ? 35% +305.8% 913.00 ? 56% interrupts.CPU10.RES:Rescheduling_interrupts
7198 ? 8% -63.0% 2660 ? 16% interrupts.CPU11.CAL:Function_call_interrupts
486320 +65.4% 804166 interrupts.CPU11.LOC:Local_timer_interrupts
229.67 ? 34% +307.5% 935.83 ? 56% interrupts.CPU11.RES:Rescheduling_interrupts
7428 ? 5% -69.0% 2305 ? 24% interrupts.CPU12.CAL:Function_call_interrupts
486490 +65.3% 804129 interrupts.CPU12.LOC:Local_timer_interrupts
3836 -24.9% 2881 ? 27% interrupts.CPU12.NMI:Non-maskable_interrupts
3836 -24.9% 2881 ? 27% interrupts.CPU12.PMI:Performance_monitoring_interrupts
249.33 ? 32% +260.1% 897.83 ? 55% interrupts.CPU12.RES:Rescheduling_interrupts
7166 ? 6% -68.9% 2231 ? 21% interrupts.CPU13.CAL:Function_call_interrupts
485758 +65.6% 804229 interrupts.CPU13.LOC:Local_timer_interrupts
3835 ? 2% -17.5% 3163 ? 19% interrupts.CPU13.NMI:Non-maskable_interrupts
3835 ? 2% -17.5% 3163 ? 19% interrupts.CPU13.PMI:Performance_monitoring_interrupts
215.67 ? 17% +311.8% 888.17 ? 57% interrupts.CPU13.RES:Rescheduling_interrupts
7077 ? 4% -68.3% 2243 ? 23% interrupts.CPU14.CAL:Function_call_interrupts
486400 +65.3% 804204 interrupts.CPU14.LOC:Local_timer_interrupts
218.33 ? 17% +345.5% 972.67 ? 69% interrupts.CPU14.RES:Rescheduling_interrupts
7752 ? 14% -71.4% 2218 ? 23% interrupts.CPU15.CAL:Function_call_interrupts
486269 +65.4% 804220 interrupts.CPU15.LOC:Local_timer_interrupts
233.00 ? 24% +306.4% 947.00 ? 63% interrupts.CPU15.RES:Rescheduling_interrupts
7006 ? 11% -67.9% 2246 ? 23% interrupts.CPU16.CAL:Function_call_interrupts
485974 +65.5% 804358 interrupts.CPU16.LOC:Local_timer_interrupts
227.33 ? 18% +314.1% 941.50 ? 61% interrupts.CPU16.RES:Rescheduling_interrupts
6947 ? 4% -67.7% 2245 ? 21% interrupts.CPU17.CAL:Function_call_interrupts
486140 +65.4% 804246 interrupts.CPU17.LOC:Local_timer_interrupts
232.67 ? 20% +287.1% 900.67 ? 56% interrupts.CPU17.RES:Rescheduling_interrupts
6906 ? 6% -66.7% 2297 ? 22% interrupts.CPU18.CAL:Function_call_interrupts
486391 +65.3% 804213 interrupts.CPU18.LOC:Local_timer_interrupts
235.17 ? 22% +286.9% 909.83 ? 59% interrupts.CPU18.RES:Rescheduling_interrupts
7767 ? 7% -71.5% 2213 ? 23% interrupts.CPU19.CAL:Function_call_interrupts
486480 +65.3% 804222 interrupts.CPU19.LOC:Local_timer_interrupts
202.33 ? 17% +353.2% 917.00 ? 59% interrupts.CPU19.RES:Rescheduling_interrupts
7176 ? 4% -60.8% 2812 ? 39% interrupts.CPU2.CAL:Function_call_interrupts
486466 +65.3% 804179 interrupts.CPU2.LOC:Local_timer_interrupts
229.00 ? 26% +310.6% 940.33 ? 56% interrupts.CPU2.RES:Rescheduling_interrupts
5.67 ? 76% +3011.8% 176.33 ? 23% interrupts.CPU2.TLB:TLB_shootdowns
7354 ? 5% -69.8% 2224 ? 22% interrupts.CPU20.CAL:Function_call_interrupts
485894 +65.5% 804208 interrupts.CPU20.LOC:Local_timer_interrupts
199.50 ? 16% +351.6% 901.00 ? 57% interrupts.CPU20.RES:Rescheduling_interrupts
7393 ? 6% -69.2% 2279 ? 24% interrupts.CPU21.CAL:Function_call_interrupts
486472 +65.3% 804159 interrupts.CPU21.LOC:Local_timer_interrupts
227.83 ? 8% +296.6% 903.67 ? 55% interrupts.CPU21.RES:Rescheduling_interrupts
7497 ? 6% -79.2% 1559 ? 32% interrupts.CPU22.CAL:Function_call_interrupts
486229 +65.4% 804270 interrupts.CPU22.LOC:Local_timer_interrupts
241.83 ? 23% +651.8% 1818 ? 27% interrupts.CPU22.RES:Rescheduling_interrupts
5.50 ? 88% +22036.4% 1217 ? 18% interrupts.CPU22.TLB:TLB_shootdowns
6941 ? 7% -78.4% 1499 ? 36% interrupts.CPU23.CAL:Function_call_interrupts
486478 +65.3% 804194 interrupts.CPU23.LOC:Local_timer_interrupts
246.00 ? 25% +646.1% 1835 ? 27% interrupts.CPU23.RES:Rescheduling_interrupts
3.83 ? 74% +5704.3% 222.50 ? 15% interrupts.CPU23.TLB:TLB_shootdowns
6739 ? 7% -78.1% 1475 ? 36% interrupts.CPU24.CAL:Function_call_interrupts
486470 +65.3% 804236 interrupts.CPU24.LOC:Local_timer_interrupts
264.33 ? 25% +583.7% 1807 ? 28% interrupts.CPU24.RES:Rescheduling_interrupts
7.33 ?127% +2070.5% 159.17 ? 8% interrupts.CPU24.TLB:TLB_shootdowns
6603 ? 8% -77.4% 1489 ? 35% interrupts.CPU25.CAL:Function_call_interrupts
486537 +65.3% 804234 interrupts.CPU25.LOC:Local_timer_interrupts
226.33 ? 20% +664.4% 1730 ? 27% interrupts.CPU25.RES:Rescheduling_interrupts
5.67 ? 57% +2302.9% 136.17 ? 27% interrupts.CPU25.TLB:TLB_shootdowns
6914 ? 6% -80.2% 1365 ? 41% interrupts.CPU26.CAL:Function_call_interrupts
486528 +65.3% 804241 interrupts.CPU26.LOC:Local_timer_interrupts
241.83 ? 19% +614.1% 1727 ? 25% interrupts.CPU26.RES:Rescheduling_interrupts
9.50 ? 90% +900.0% 95.00 ? 12% interrupts.CPU26.TLB:TLB_shootdowns
6978 ? 10% -79.8% 1410 ? 38% interrupts.CPU27.CAL:Function_call_interrupts
486545 +65.3% 804279 interrupts.CPU27.LOC:Local_timer_interrupts
3917 ? 3% -24.5% 2956 ? 28% interrupts.CPU27.NMI:Non-maskable_interrupts
3917 ? 3% -24.5% 2956 ? 28% interrupts.CPU27.PMI:Performance_monitoring_interrupts
276.50 ? 11% +532.7% 1749 ? 26% interrupts.CPU27.RES:Rescheduling_interrupts
7.67 ?126% +1269.6% 105.00 ? 18% interrupts.CPU27.TLB:TLB_shootdowns
7057 ? 5% -79.4% 1457 ? 35% interrupts.CPU28.CAL:Function_call_interrupts
486578 +65.3% 804259 interrupts.CPU28.LOC:Local_timer_interrupts
236.50 ? 19% +647.1% 1767 ? 26% interrupts.CPU28.RES:Rescheduling_interrupts
6.83 ? 69% +970.7% 73.17 ? 27% interrupts.CPU28.TLB:TLB_shootdowns
6923 ? 11% -79.9% 1394 ? 34% interrupts.CPU29.CAL:Function_call_interrupts
486562 +65.3% 804220 interrupts.CPU29.LOC:Local_timer_interrupts
296.67 ? 39% +484.7% 1734 ? 26% interrupts.CPU29.RES:Rescheduling_interrupts
7584 ? 11% -61.4% 2929 ? 36% interrupts.CPU3.CAL:Function_call_interrupts
486493 +65.3% 804143 interrupts.CPU3.LOC:Local_timer_interrupts
233.00 ? 24% +313.7% 964.00 ? 60% interrupts.CPU3.RES:Rescheduling_interrupts
2.67 ? 98% +5718.8% 155.17 ? 7% interrupts.CPU3.TLB:TLB_shootdowns
6861 ? 12% -79.7% 1395 ? 38% interrupts.CPU30.CAL:Function_call_interrupts
486469 +65.3% 804242 interrupts.CPU30.LOC:Local_timer_interrupts
270.33 ? 26% +540.8% 1732 ? 27% interrupts.CPU30.RES:Rescheduling_interrupts
4.00 ? 73% +1604.2% 68.17 ? 26% interrupts.CPU30.TLB:TLB_shootdowns
6736 ? 8% -79.1% 1408 ? 41% interrupts.CPU31.CAL:Function_call_interrupts
486679 +65.2% 804106 interrupts.CPU31.LOC:Local_timer_interrupts
229.67 ? 21% +655.5% 1735 ? 27% interrupts.CPU31.RES:Rescheduling_interrupts
7042 ? 7% -79.9% 1412 ? 39% interrupts.CPU32.CAL:Function_call_interrupts
486413 +65.3% 804237 interrupts.CPU32.LOC:Local_timer_interrupts
226.67 ? 24% +665.2% 1734 ? 26% interrupts.CPU32.RES:Rescheduling_interrupts
6925 ? 6% -80.4% 1359 ? 34% interrupts.CPU33.CAL:Function_call_interrupts
486582 +65.3% 804210 interrupts.CPU33.LOC:Local_timer_interrupts
3824 -30.6% 2652 ? 33% interrupts.CPU33.NMI:Non-maskable_interrupts
3824 -30.6% 2652 ? 33% interrupts.CPU33.PMI:Performance_monitoring_interrupts
247.33 ? 18% +630.1% 1805 ? 28% interrupts.CPU33.RES:Rescheduling_interrupts
7279 ? 10% -81.1% 1376 ? 38% interrupts.CPU34.CAL:Function_call_interrupts
486566 +65.3% 804144 interrupts.CPU34.LOC:Local_timer_interrupts
257.67 ? 27% +589.3% 1776 ? 27% interrupts.CPU34.RES:Rescheduling_interrupts
7021 ? 5% -77.2% 1602 ? 43% interrupts.CPU35.CAL:Function_call_interrupts
486437 +65.3% 804159 interrupts.CPU35.LOC:Local_timer_interrupts
236.67 ? 24% +646.6% 1767 ? 26% interrupts.CPU35.RES:Rescheduling_interrupts
7088 ? 3% -80.8% 1358 ? 34% interrupts.CPU36.CAL:Function_call_interrupts
485937 +65.5% 804216 interrupts.CPU36.LOC:Local_timer_interrupts
239.00 ? 16% +640.1% 1768 ? 27% interrupts.CPU36.RES:Rescheduling_interrupts
7044 ? 10% -80.5% 1377 ? 35% interrupts.CPU37.CAL:Function_call_interrupts
486353 +65.4% 804242 interrupts.CPU37.LOC:Local_timer_interrupts
229.17 ? 19% +646.0% 1709 ? 26% interrupts.CPU37.RES:Rescheduling_interrupts
7292 ? 6% -81.7% 1333 ? 37% interrupts.CPU38.CAL:Function_call_interrupts
486553 +65.3% 804173 interrupts.CPU38.LOC:Local_timer_interrupts
253.50 ? 34% +605.3% 1788 ? 27% interrupts.CPU38.RES:Rescheduling_interrupts
7100 ? 4% -80.7% 1369 ? 35% interrupts.CPU39.CAL:Function_call_interrupts
486656 +65.3% 804225 interrupts.CPU39.LOC:Local_timer_interrupts
252.17 ? 25% +589.4% 1738 ? 28% interrupts.CPU39.RES:Rescheduling_interrupts
7257 ? 11% -66.9% 2399 ? 10% interrupts.CPU4.CAL:Function_call_interrupts
486441 +65.3% 804178 interrupts.CPU4.LOC:Local_timer_interrupts
239.33 ? 21% +290.7% 935.17 ? 57% interrupts.CPU4.RES:Rescheduling_interrupts
3.33 ? 50% +3910.0% 133.67 ? 14% interrupts.CPU4.TLB:TLB_shootdowns
7237 ? 5% -80.6% 1402 ? 40% interrupts.CPU40.CAL:Function_call_interrupts
486507 +65.3% 804207 interrupts.CPU40.LOC:Local_timer_interrupts
224.83 ? 20% +693.8% 1784 ? 26% interrupts.CPU40.RES:Rescheduling_interrupts
2.67 ? 55% +1606.3% 45.50 ? 58% interrupts.CPU40.TLB:TLB_shootdowns
7077 ? 10% -80.7% 1368 ? 36% interrupts.CPU41.CAL:Function_call_interrupts
486517 +65.3% 804210 interrupts.CPU41.LOC:Local_timer_interrupts
222.33 ? 26% +692.0% 1760 ? 27% interrupts.CPU41.RES:Rescheduling_interrupts
7094 ? 5% -80.9% 1354 ? 38% interrupts.CPU42.CAL:Function_call_interrupts
486500 +65.3% 804199 interrupts.CPU42.LOC:Local_timer_interrupts
222.67 ? 22% +689.1% 1757 ? 27% interrupts.CPU42.RES:Rescheduling_interrupts
7314 ? 4% -80.3% 1443 ? 42% interrupts.CPU43.CAL:Function_call_interrupts
486507 +65.3% 804202 interrupts.CPU43.LOC:Local_timer_interrupts
225.00 ? 18% +678.4% 1751 ? 25% interrupts.CPU43.RES:Rescheduling_interrupts
7073 ? 3% -68.8% 2206 ? 23% interrupts.CPU44.CAL:Function_call_interrupts
486416 +65.3% 804207 interrupts.CPU44.LOC:Local_timer_interrupts
3809 -8.3% 3492 ? 2% interrupts.CPU44.NMI:Non-maskable_interrupts
3809 -8.3% 3492 ? 2% interrupts.CPU44.PMI:Performance_monitoring_interrupts
221.67 ? 34% +306.9% 902.00 ? 57% interrupts.CPU44.RES:Rescheduling_interrupts
8825 ? 48% -74.6% 2238 ? 21% interrupts.CPU45.CAL:Function_call_interrupts
486393 +65.3% 804210 interrupts.CPU45.LOC:Local_timer_interrupts
216.83 ? 28% +317.4% 905.00 ? 55% interrupts.CPU45.RES:Rescheduling_interrupts
7054 ? 3% -68.6% 2217 ? 22% interrupts.CPU46.CAL:Function_call_interrupts
486494 +65.3% 804384 interrupts.CPU46.LOC:Local_timer_interrupts
233.67 ? 24% +280.4% 888.83 ? 53% interrupts.CPU46.RES:Rescheduling_interrupts
6976 ? 3% -67.5% 2263 ? 21% interrupts.CPU47.CAL:Function_call_interrupts
486484 +65.3% 804214 interrupts.CPU47.LOC:Local_timer_interrupts
232.83 ? 16% +291.2% 910.83 ? 58% interrupts.CPU47.RES:Rescheduling_interrupts
7033 ? 7% -68.9% 2188 ? 23% interrupts.CPU48.CAL:Function_call_interrupts
486500 +65.3% 804166 interrupts.CPU48.LOC:Local_timer_interrupts
3516 ? 20% -34.0% 2321 ? 37% interrupts.CPU48.NMI:Non-maskable_interrupts
3516 ? 20% -34.0% 2321 ? 37% interrupts.CPU48.PMI:Performance_monitoring_interrupts
197.67 ? 19% +366.0% 921.17 ? 57% interrupts.CPU48.RES:Rescheduling_interrupts
7061 ? 5% -68.2% 2248 ? 21% interrupts.CPU49.CAL:Function_call_interrupts
485955 +65.5% 804197 interrupts.CPU49.LOC:Local_timer_interrupts
231.00 ? 23% +292.8% 907.33 ? 58% interrupts.CPU49.RES:Rescheduling_interrupts
6511 ? 16% -52.2% 3112 ? 46% interrupts.CPU5.CAL:Function_call_interrupts
485657 +65.6% 804177 interrupts.CPU5.LOC:Local_timer_interrupts
228.67 ? 31% +324.9% 971.50 ? 68% interrupts.CPU5.RES:Rescheduling_interrupts
6.83 ? 85% +1436.6% 105.00 ? 29% interrupts.CPU5.TLB:TLB_shootdowns
7305 ? 5% -70.0% 2195 ? 20% interrupts.CPU50.CAL:Function_call_interrupts
486234 +65.4% 804108 interrupts.CPU50.LOC:Local_timer_interrupts
234.83 ? 28% +274.6% 879.67 ? 53% interrupts.CPU50.RES:Rescheduling_interrupts
7171 ? 2% -69.2% 2210 ? 21% interrupts.CPU51.CAL:Function_call_interrupts
486151 +65.4% 804193 interrupts.CPU51.LOC:Local_timer_interrupts
3862 ? 3% -25.1% 2893 ? 29% interrupts.CPU51.NMI:Non-maskable_interrupts
3862 ? 3% -25.1% 2893 ? 29% interrupts.CPU51.PMI:Performance_monitoring_interrupts
217.17 ? 25% +301.7% 872.33 ? 56% interrupts.CPU51.RES:Rescheduling_interrupts
7096 -68.8% 2213 ? 22% interrupts.CPU52.CAL:Function_call_interrupts
486310 +65.4% 804178 interrupts.CPU52.LOC:Local_timer_interrupts
235.33 ? 23% +280.4% 895.17 ? 55% interrupts.CPU52.RES:Rescheduling_interrupts
6897 ? 3% -68.1% 2201 ? 23% interrupts.CPU53.CAL:Function_call_interrupts
486329 +65.4% 804174 interrupts.CPU53.LOC:Local_timer_interrupts
223.83 ? 22% +300.8% 897.17 ? 56% interrupts.CPU53.RES:Rescheduling_interrupts
7292 ? 5% -69.6% 2214 ? 22% interrupts.CPU54.CAL:Function_call_interrupts
486384 +65.3% 804220 interrupts.CPU54.LOC:Local_timer_interrupts
225.17 ? 16% +304.0% 909.67 ? 57% interrupts.CPU54.RES:Rescheduling_interrupts
7378 ? 4% -70.3% 2189 ? 23% interrupts.CPU55.CAL:Function_call_interrupts
486473 +65.3% 804226 interrupts.CPU55.LOC:Local_timer_interrupts
240.83 ? 22% +271.2% 894.00 ? 62% interrupts.CPU55.RES:Rescheduling_interrupts
7176 ? 5% -69.0% 2225 ? 21% interrupts.CPU56.CAL:Function_call_interrupts
486517 +65.3% 804209 interrupts.CPU56.LOC:Local_timer_interrupts
216.00 ? 30% +313.8% 893.83 ? 58% interrupts.CPU56.RES:Rescheduling_interrupts
7228 -68.9% 2248 ? 20% interrupts.CPU57.CAL:Function_call_interrupts
486021 +65.5% 804223 interrupts.CPU57.LOC:Local_timer_interrupts
205.00 ? 18% +348.2% 918.83 ? 61% interrupts.CPU57.RES:Rescheduling_interrupts
7047 ? 5% -68.4% 2223 ? 21% interrupts.CPU58.CAL:Function_call_interrupts
486430 +65.3% 804178 interrupts.CPU58.LOC:Local_timer_interrupts
199.50 ? 13% +347.4% 892.50 ? 55% interrupts.CPU58.RES:Rescheduling_interrupts
7338 ? 3% -70.0% 2201 ? 24% interrupts.CPU59.CAL:Function_call_interrupts
486509 +65.3% 804239 interrupts.CPU59.LOC:Local_timer_interrupts
215.83 ? 19% +317.4% 900.83 ? 59% interrupts.CPU59.RES:Rescheduling_interrupts
6855 ? 8% -59.6% 2771 ? 32% interrupts.CPU6.CAL:Function_call_interrupts
486254 +65.4% 804229 interrupts.CPU6.LOC:Local_timer_interrupts
237.17 ? 29% +296.7% 940.83 ? 58% interrupts.CPU6.RES:Rescheduling_interrupts
7.50 ? 82% +1340.0% 108.00 ? 26% interrupts.CPU6.TLB:TLB_shootdowns
7444 ? 5% -70.6% 2185 ? 21% interrupts.CPU60.CAL:Function_call_interrupts
486003 +65.5% 804178 interrupts.CPU60.LOC:Local_timer_interrupts
3794 -31.4% 2603 ? 32% interrupts.CPU60.NMI:Non-maskable_interrupts
3794 -31.4% 2603 ? 32% interrupts.CPU60.PMI:Performance_monitoring_interrupts
191.17 ? 18% +342.5% 846.00 ? 53% interrupts.CPU60.RES:Rescheduling_interrupts
7209 ? 3% -68.4% 2276 ? 21% interrupts.CPU61.CAL:Function_call_interrupts
486161 +65.4% 804228 interrupts.CPU61.LOC:Local_timer_interrupts
3832 -24.5% 2892 ? 28% interrupts.CPU61.NMI:Non-maskable_interrupts
3832 -24.5% 2892 ? 28% interrupts.CPU61.PMI:Performance_monitoring_interrupts
229.83 ? 28% +311.6% 946.00 ? 59% interrupts.CPU61.RES:Rescheduling_interrupts
7136 ? 5% -68.7% 2231 ? 24% interrupts.CPU62.CAL:Function_call_interrupts
486424 +65.3% 804234 interrupts.CPU62.LOC:Local_timer_interrupts
3829 -17.1% 3174 ? 20% interrupts.CPU62.NMI:Non-maskable_interrupts
3829 -17.1% 3174 ? 20% interrupts.CPU62.PMI:Performance_monitoring_interrupts
265.17 ? 35% +236.1% 891.17 ? 54% interrupts.CPU62.RES:Rescheduling_interrupts
7516 ? 4% -70.3% 2232 ? 21% interrupts.CPU63.CAL:Function_call_interrupts
486494 +65.3% 804207 interrupts.CPU63.LOC:Local_timer_interrupts
270.67 ? 29% +241.9% 925.33 ? 57% interrupts.CPU63.RES:Rescheduling_interrupts
7757 ? 4% -71.7% 2194 ? 22% interrupts.CPU64.CAL:Function_call_interrupts
485711 +65.6% 804259 interrupts.CPU64.LOC:Local_timer_interrupts
265.83 ? 28% +252.1% 936.00 ? 56% interrupts.CPU64.RES:Rescheduling_interrupts
7716 ? 7% -70.7% 2263 ? 24% interrupts.CPU65.CAL:Function_call_interrupts
486485 +65.3% 804231 interrupts.CPU65.LOC:Local_timer_interrupts
3784 -30.8% 2618 ? 34% interrupts.CPU65.NMI:Non-maskable_interrupts
3784 -30.8% 2618 ? 34% interrupts.CPU65.PMI:Performance_monitoring_interrupts
248.00 ? 28% +278.8% 939.50 ? 61% interrupts.CPU65.RES:Rescheduling_interrupts
8026 ? 7% -82.5% 1404 ? 36% interrupts.CPU66.CAL:Function_call_interrupts
486109 +65.4% 804250 interrupts.CPU66.LOC:Local_timer_interrupts
238.33 ? 28% +651.3% 1790 ? 29% interrupts.CPU66.RES:Rescheduling_interrupts
7181 ? 10% -80.6% 1396 ? 37% interrupts.CPU67.CAL:Function_call_interrupts
486264 +65.4% 804186 interrupts.CPU67.LOC:Local_timer_interrupts
256.50 ? 29% +577.4% 1737 ? 26% interrupts.CPU67.RES:Rescheduling_interrupts
4.33 ? 39% +1038.5% 49.33 ? 62% interrupts.CPU67.TLB:TLB_shootdowns
6951 ? 7% -80.4% 1361 ? 35% interrupts.CPU68.CAL:Function_call_interrupts
486479 +65.3% 804252 interrupts.CPU68.LOC:Local_timer_interrupts
280.67 ? 26% +537.5% 1789 ? 27% interrupts.CPU68.RES:Rescheduling_interrupts
6882 ? 8% -80.9% 1316 ? 40% interrupts.CPU69.CAL:Function_call_interrupts
486525 +65.3% 804257 interrupts.CPU69.LOC:Local_timer_interrupts
254.50 ? 23% +587.6% 1749 ? 28% interrupts.CPU69.RES:Rescheduling_interrupts
7296 ? 9% -67.7% 2356 ? 24% interrupts.CPU7.CAL:Function_call_interrupts
486166 +65.4% 804361 interrupts.CPU7.LOC:Local_timer_interrupts
3860 ? 2% -17.9% 3170 ? 20% interrupts.CPU7.NMI:Non-maskable_interrupts
3860 ? 2% -17.9% 3170 ? 20% interrupts.CPU7.PMI:Performance_monitoring_interrupts
212.83 ? 24% +343.1% 943.00 ? 58% interrupts.CPU7.RES:Rescheduling_interrupts
4.33 ? 81% +1926.9% 87.83 ? 11% interrupts.CPU7.TLB:TLB_shootdowns
7054 ? 7% -81.1% 1333 ? 39% interrupts.CPU70.CAL:Function_call_interrupts
486542 +65.3% 804272 interrupts.CPU70.LOC:Local_timer_interrupts
236.67 ? 21% +640.1% 1751 ? 26% interrupts.CPU70.RES:Rescheduling_interrupts
7135 ? 4% -80.6% 1385 ? 36% interrupts.CPU71.CAL:Function_call_interrupts
486529 +65.3% 804259 interrupts.CPU71.LOC:Local_timer_interrupts
249.33 ? 23% +595.5% 1734 ? 27% interrupts.CPU71.RES:Rescheduling_interrupts
7345 ? 4% -81.2% 1384 ? 37% interrupts.CPU72.CAL:Function_call_interrupts
486517 +65.3% 804284 interrupts.CPU72.LOC:Local_timer_interrupts
255.67 ? 19% +589.8% 1763 ? 27% interrupts.CPU72.RES:Rescheduling_interrupts
7148 ? 5% -77.5% 1608 ? 60% interrupts.CPU73.CAL:Function_call_interrupts
486530 +65.3% 804431 interrupts.CPU73.LOC:Local_timer_interrupts
245.67 ? 20% +611.0% 1746 ? 27% interrupts.CPU73.RES:Rescheduling_interrupts
7018 ? 6% -81.0% 1334 ? 33% interrupts.CPU74.CAL:Function_call_interrupts
486478 +65.3% 804237 interrupts.CPU74.LOC:Local_timer_interrupts
251.83 ? 21% +606.4% 1778 ? 27% interrupts.CPU74.RES:Rescheduling_interrupts
7376 ? 6% -81.7% 1348 ? 37% interrupts.CPU75.CAL:Function_call_interrupts
486530 +65.3% 804230 interrupts.CPU75.LOC:Local_timer_interrupts
248.50 ? 22% +586.6% 1706 ? 26% interrupts.CPU75.RES:Rescheduling_interrupts
7419 -82.3% 1311 ? 40% interrupts.CPU76.CAL:Function_call_interrupts
486548 +65.3% 804255 interrupts.CPU76.LOC:Local_timer_interrupts
239.83 ? 16% +606.7% 1695 ? 27% interrupts.CPU76.RES:Rescheduling_interrupts
6936 ? 11% -80.0% 1390 ? 36% interrupts.CPU77.CAL:Function_call_interrupts
486489 +65.3% 804238 interrupts.CPU77.LOC:Local_timer_interrupts
232.33 ? 27% +677.7% 1806 ? 28% interrupts.CPU77.RES:Rescheduling_interrupts
6910 ? 7% -80.9% 1318 ? 34% interrupts.CPU78.CAL:Function_call_interrupts
486525 +65.3% 804220 interrupts.CPU78.LOC:Local_timer_interrupts
232.17 ? 31% +643.6% 1726 ? 26% interrupts.CPU78.RES:Rescheduling_interrupts
4.33 ? 25% +842.3% 40.83 ? 86% interrupts.CPU78.TLB:TLB_shootdowns
6824 ? 8% -80.1% 1354 ? 38% interrupts.CPU79.CAL:Function_call_interrupts
486535 +65.3% 804185 interrupts.CPU79.LOC:Local_timer_interrupts
3487 ? 20% -41.4% 2044 ? 32% interrupts.CPU79.NMI:Non-maskable_interrupts
3487 ? 20% -41.4% 2044 ? 32% interrupts.CPU79.PMI:Performance_monitoring_interrupts
248.00 ? 33% +593.9% 1720 ? 26% interrupts.CPU79.RES:Rescheduling_interrupts
6996 ? 4% -67.6% 2266 ? 23% interrupts.CPU8.CAL:Function_call_interrupts
486351 +65.4% 804223 interrupts.CPU8.LOC:Local_timer_interrupts
219.33 ? 21% +319.5% 920.00 ? 54% interrupts.CPU8.RES:Rescheduling_interrupts
4.67 ? 66% +1475.0% 73.50 ? 33% interrupts.CPU8.TLB:TLB_shootdowns
7225 ? 4% -80.8% 1390 ? 37% interrupts.CPU80.CAL:Function_call_interrupts
486375 +65.4% 804251 interrupts.CPU80.LOC:Local_timer_interrupts
3525 ? 21% -33.4% 2348 ? 35% interrupts.CPU80.NMI:Non-maskable_interrupts
3525 ? 21% -33.4% 2348 ? 35% interrupts.CPU80.PMI:Performance_monitoring_interrupts
219.50 ? 21% +675.2% 1701 ? 27% interrupts.CPU80.RES:Rescheduling_interrupts
7546 ? 3% -81.5% 1394 ? 38% interrupts.CPU81.CAL:Function_call_interrupts
486317 +65.4% 804248 interrupts.CPU81.LOC:Local_timer_interrupts
3548 ? 21% -34.1% 2338 ? 34% interrupts.CPU81.NMI:Non-maskable_interrupts
3548 ? 21% -34.1% 2338 ? 34% interrupts.CPU81.PMI:Performance_monitoring_interrupts
227.67 ? 27% +650.8% 1709 ? 28% interrupts.CPU81.RES:Rescheduling_interrupts
7440 ? 4% -80.1% 1477 ? 33% interrupts.CPU82.CAL:Function_call_interrupts
486671 +65.2% 804216 interrupts.CPU82.LOC:Local_timer_interrupts
216.83 ? 21% +700.2% 1735 ? 27% interrupts.CPU82.RES:Rescheduling_interrupts
7492 ? 6% -80.0% 1501 ? 36% interrupts.CPU83.CAL:Function_call_interrupts
486513 +65.3% 804223 interrupts.CPU83.LOC:Local_timer_interrupts
223.67 ? 24% +671.8% 1726 ? 26% interrupts.CPU83.RES:Rescheduling_interrupts
7438 ? 5% -81.9% 1344 ? 38% interrupts.CPU84.CAL:Function_call_interrupts
486492 +65.3% 804232 interrupts.CPU84.LOC:Local_timer_interrupts
295.67 ? 12% +521.0% 1836 ? 27% interrupts.CPU84.RES:Rescheduling_interrupts
7006 ? 7% -80.8% 1348 ? 38% interrupts.CPU85.CAL:Function_call_interrupts
486529 +65.3% 804230 interrupts.CPU85.LOC:Local_timer_interrupts
291.17 ? 15% +535.4% 1850 ? 25% interrupts.CPU85.RES:Rescheduling_interrupts
7175 ? 7% -80.4% 1406 ? 38% interrupts.CPU86.CAL:Function_call_interrupts
486620 +65.3% 804411 interrupts.CPU86.LOC:Local_timer_interrupts
281.67 ? 7% +515.4% 1733 ? 26% interrupts.CPU86.RES:Rescheduling_interrupts
7291 ? 6% -81.7% 1334 ? 38% interrupts.CPU87.CAL:Function_call_interrupts
486552 +65.3% 804256 interrupts.CPU87.LOC:Local_timer_interrupts
321.83 ? 5% +448.3% 1764 ? 25% interrupts.CPU87.RES:Rescheduling_interrupts
53.17 ? 90% +190.9% 154.67 ? 72% interrupts.CPU87.TLB:TLB_shootdowns
6955 ? 6% -67.4% 2266 ? 21% interrupts.CPU9.CAL:Function_call_interrupts
486417 +65.3% 804200 interrupts.CPU9.LOC:Local_timer_interrupts
206.00 ? 16% +359.8% 947.17 ? 59% interrupts.CPU9.RES:Rescheduling_interrupts
3.50 ? 56% +1942.9% 71.50 ? 33% interrupts.CPU9.TLB:TLB_shootdowns
299.50 ? 3% -18.9% 243.00 interrupts.IWI:IRQ_work_interrupts
42802199 +65.3% 70771755 interrupts.LOC:Local_timer_interrupts
318583 ? 6% -16.6% 265581 ? 6% interrupts.NMI:Non-maskable_interrupts
318583 ? 6% -16.6% 265581 ? 6% interrupts.PMI:Performance_monitoring_interrupts
20957 ? 4% +462.7% 117938 ? 2% interrupts.RES:Rescheduling_interrupts
769.83 ? 47% +873.7% 7496 ? 4% interrupts.TLB:TLB_shootdowns
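
For reference, the %change columns in the tables above are the relative delta between the two per-commit means; a minimal sketch of that arithmetic (illustrative only, not lkp code):

    def pct_change(base, patched):
        """Relative change of the patched commit versus the base commit, in percent."""
        return (patched - base) / base * 100.0

    # interrupts.LOC:Local_timer_interrupts from the table above:
    print(f"{pct_change(42802199, 70771755):+.1f}%")   # prints +65.3%
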
aim7.jobs-per-min
120000 +------------------------------------------------------------------+
|+++.++++++.+++++++.++++++.++++++.+++++++.++++++.+++++++.+ |
110000 |-+ ++ ++. +|
100000 |-+ + ++ |
| |
90000 |-+ |
80000 |-+ |
| |
70000 |-+ |
60000 |-+ |
| |
50000 |-+ |
40000 |-OO OOOOOO OOOOOO OOOO O OOOO O OOOOOO |
|O O O O |
30000 +------------------------------------------------------------------+
perf-sched.wait_time.max.ms.schedule_timeout.__down.down.xfs_buf_lock
2500 +--------------------------------------------------------------------+
| O |
| O O OO OO OOO OO O O |
2000 |-+ O O |
| |
| |
1500 |-+ O O |
|O O O |
1000 |-+ O |
| OO O OO O OOOOOO |
| |
500 |-+ |
| |
|++.+++++.++++++.+++++.+++++.+++++.++++++.+++++.+++++.++++++.+++++.++|
0 +--------------------------------------------------------------------+
perf-sched.wait_and_delay.max.ms.schedule_timeout.__down.down.xfs_buf_lock
2500 +--------------------------------------------------------------------+
| O |
| O O OO OO OOO OO O O |
2000 |-+ O O |
| |
| |
1500 |-+ O O |
|O O O O O |
1000 |-+ O |
| O O O O OOOOOO |
| |
500 |-+ |
| +. |
|++.+++++.++++++.+++++.++++ +++++.++++++.+++++.+++++.++++++.+++++.++|
0 +--------------------------------------------------------------------+
400 +---------------------------------------------------------------------+
| O |
350 |-+ O OO O O O |
300 |-+ OO O O |
| O O OO O O O O O |
250 |-+ O O O O |
| O O O |
200 |-+ O O O O |
|O O O |
150 |-+ |
100 |-+ |
| |
50 |-+ |
| +. .+ |
0 +---------------------------------------------------------------------+
600 +---------------------------------------------------------------------+
| O |
500 |-O O O O O O |
| O O O O O O O |
| O O O O O |
400 |-+ O O OO O |
| O O O O O OO O |
300 |O+ O |
| |
200 |-+ |
| |
| +|
100 |-+ :|
|+ .+ .+ .++ +.: |
0 +---------------------------------------------------------------------+
300 +---------------------------------------------------------------------+
| O |
250 |-+ OO O O O O O |
| O OO O |
| O O O O O OO |
200 |-+ O O O O O O O O |
| O O O |
150 |O+ O |
| O O |
100 |-+ |
| |
| |
50 |-+ |
| |
0 +---------------------------------------------------------------------+
600 +---------------------------------------------------------------------+
| O |
500 |-O O O O O O |
| O OO O OO O |
| O O O O O |
400 |-+ O O OO O |
| O O OO O OO O |
300 |O+ O |
| |
200 |-+ |
| |
| |
100 |-+ |
| + +. :|
0 +---------------------------------------------------------------------+
180 +---------------------------------------------------------------------+
| O O O |
160 |-+ O |
140 |-+ O O OOO O O |
|OO O OO O O OO O |
120 |-+ O O O OO OO |
100 |-+ O O O OO O |
| |
80 |-+ |
60 |-+ |
| |
40 |-+ |
20 |-+ |
| :|
0 +---------------------------------------------------------------------+
0.25 +--------------------------------------------------------------------+
| O O O O |
|O O O O O |
0.2 |-O O O O O O O OO O |
| OOO O OOO OO O OO O OO |
| |
0.15 |-+ |
| + |
0.1 |-+ : |
| : |
| : : |
0.05 |-+ : : |
| +.+ +++.++|
|+ ++++.+ +++. + + .++ ++.++ + +++.+ .+ +++.+ ++ : |
0 +--------------------------------------------------------------------+
2500 +--------------------------------------------------------------------+
| |
| O O |
2000 |-+ |
| O |
| O |
1500 |-+ O O O |
| |
1000 |-+ OOOO OO OO O O OO OOO OO O |
| O O |
| OOO O O |
500 |-+ O |
| |
|O |
0 +--------------------------------------------------------------------+
2500 +--------------------------------------------------------------------+
| |
| |
2000 |-+ O O O O |
| |
| O |
1500 |-+ |
| |
1000 |-+ O O |
|OO OO OOOO O O O O O O O O O OOOOOO |
| O O O O |
500 |-+ |
| |
| .+ +++. + + ++. +. + + + |
0 +--------------------------------------------------------------------+
2500 +--------------------------------------------------------------------+
| |
| |
2000 |-+ O O O O |
| |
| O |
1500 |-+ |
| |
1000 |-+ O O |
| OO OO O O O O O O O O O O O O O OO O O OO |
| O O O O |
500 |-+ |
| |
| +.+. +. .+ .+. .+. .|
0 +--------------------------------------------------------------------+
70 +----------------------------------------------------------------------+
| |
60 |O+ O O |
| O O O O O |
50 |-+ O O |
| O O O O O O O OOOO |
40 |-+ O O O O O |
| OOO O O |
30 |-+ O O |
| |
20 |-+ |
| |
10 |-+ |
| +.+ |
0 +----------------------------------------------------------------------+
25 +----------------------------------------------------------------------+
| O O O |
| OO O OO O O |
20 |-+ O O |
|O OO O O O O O OO |
| O O O |
15 |-+ O O O O O O |
| OO |
10 |-+ O |
| |
| |
5 |-+ |
| + +|
| :+ :|
0 +----------------------------------------------------------------------+
aim7.time.system_time
12000 +-------------------------------------------------------------------+
| OOOO OOOOO OOOO OOO |
10000 |O+ O O O |
| |
| |
8000 |-+ |
| |
6000 |-+ |
| |
4000 |-+ |
| |
| + ++. +|
2000 |-+ + +++ + |
|++.++++++.++++++.++++++.++++++.+++++.++++++.++++++.++++++ |
0 +-------------------------------------------------------------------+
aim7.time.elapsed_time
500 +---------------------------------------------------------------------+
|O O O O |
450 |-+ |
| |
400 |-O OOOOO OOOOO OOO OOOO OOOOO OOOO OOO |
| |
350 |-+ |
| |
300 |-+ |
| |
250 |-+ +.+++++.++|
| : |
200 |-+ : |
| +. : |
150 +---------------------------------------------------------------------+
aim7.time.elapsed_time.max
500 +---------------------------------------------------------------------+
|O O O O |
450 |-+ |
| |
400 |-O OOOOO OOOOO OOO OOOO OOOOO OOOO OOO |
| |
350 |-+ |
| |
300 |-+ |
| |
250 |-+ +.+++++.++|
| : |
200 |-+ : |
| +. : |
150 +---------------------------------------------------------------------+
aim7.time.minor_page_faults
280000 +------------------------------------------------------------------+
| OO O OO OOO O |
260000 |O+ O O OOO OOO OO OOO OO O OO |
| O O O O O |
| |
240000 |-+ |
| |
220000 |-+ |
| |
200000 |-+ |
| |
| .+ ++. ++ + +. + |
180000 |+++ :++ ++++ +.++++++.++ +++.+++ ++ ++++++. + + ++.+ ++++.+++|
| + + + + + |
160000 +------------------------------------------------------------------+
aim7.time.involuntary_context_switches
70000 +-------------------------------------------------------------------+
|O O O O |
60000 |-+ |
| |
50000 |-+ OOOO OOOOO OOO OOO |
| O |
40000 |-O OOOOOO OOOOOO O |
| |
30000 |-+ |
| |
20000 |-+ |
| |
10000 |-+ |
| .++++++.++|
0 +-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation
Thanks,
Oliver Sang