2019-04-08 07:12:51

by Chen, Rong A

Subject: [MD] 4bc034d353: aim7.jobs-per-min -86.0% regression

Greetings,

FYI, we noticed a -86.0% regression of aim7.jobs-per-min due to commit:


commit: 4bc034d35377196c854236133b07730a777c4aba ("Revert "MD: fix lock contention for flush bios"")
https://git.kernel.org/cgit/linux/kernel/git/axboe/linux-block.git for-5.2/block

in testcase: aim7
on test machine: 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384G memory
with the following parameters:

disk: 4BRD_12G
md: RAID0
fs: xfs
test: sync_disk_rw
load: 300
cpufreq_governor: performance

test-description: AIM7 is a traditional UNIX system-level benchmark suite used to test and measure the performance of multiuser systems.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/



Details are as follows:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
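
For orientation, the storage stack exercised here (per the parameters above: four BRD ramdisks of 12G each, assembled into RAID0 with xfs on top) can also be set up by hand. The sketch below only illustrates that configuration and is not part of the LKP job; the device names, the KiB conversion of the 12G ramdisk size, and the mount point are assumptions:

# create 4 ramdisks of 12G each (brd's rd_size is in KiB), assemble RAID0, format xfs
modprobe brd rd_nr=4 rd_size=12582912
mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/ram0 /dev/ram1 /dev/ram2 /dev/ram3
mkfs.xfs -f /dev/md0
mount /dev/md0 /mnt/aim7
# aim7's sync_disk_rw workload then issues fsync-heavy writes on the mounted filesystem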

=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/md/rootfs/tbox_group/test/testcase:
gcc-7/performance/4BRD_12G/xfs/x86_64-rhel-7.6/300/RAID0/debian-x86_64-2018-04-03.cgz/lkp-ivb-ep01/sync_disk_rw/aim7

commit:
4f4fd7c579 ("Don't jump to compute_result state from check_result state")
4bc034d353 ("Revert "MD: fix lock contention for flush bios"")

4f4fd7c5798bbdd5 4bc034d35377196c854236133b0
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
%stddev %change %stddev
\ | \
30282 -86.0% 4234 aim7.jobs-per-min
59.51 +615.0% 425.47 aim7.time.elapsed_time
59.51 +615.0% 425.47 aim7.time.elapsed_time.max
661580 +3562.5% 24230472 aim7.time.involuntary_context_switches
21614 +255.2% 76779 ± 3% aim7.time.minor_page_faults
1450 +927.6% 14908 aim7.time.system_time
15.71 +10.6% 17.38 aim7.time.user_time
35275823 +506.3% 2.139e+08 aim7.time.voluntary_context_switches
41.04 ± 9% -78.6% 8.77 ± 3% iostat.cpu.idle
58.07 ± 6% +56.8% 91.07 iostat.cpu.system
465169 ± 6% -84.0% 74280 iostat.md0.w/s
987203 ± 6% -83.6% 161868 iostat.md0.wkB/s
2121170 +19.1% 2526977 numa-numastat.node0.local_node
2128094 +18.9% 2530235 numa-numastat.node0.numa_hit
2085181 +16.0% 2418368 numa-numastat.node1.local_node
2091318 +16.1% 2428211 numa-numastat.node1.numa_hit
39.33 ± 10% -31.0 8.34 ± 2% mpstat.cpu.all.idle%
0.13 ± 8% -0.1 0.00 ± 50% mpstat.cpu.all.iowait%
0.03 ± 76% -0.0 0.00 ± 35% mpstat.cpu.all.soft%
59.76 ± 6% +31.7 91.51 mpstat.cpu.all.sys%
0.75 ± 12% -0.6 0.15 ± 5% mpstat.cpu.all.usr%
67834 ± 9% +168.7% 182251 meminfo.AnonHugePages
38178 ± 8% +21.3% 46322 meminfo.Dirty
136721 ± 6% +28.6% 175883 meminfo.Inactive
116365 ± 7% +33.8% 155661 meminfo.Inactive(file)
32594 +46.9% 47891 ± 7% meminfo.Shmem
37104 ± 6% -84.2% 5856 meminfo.max_used_kB
40.67 ± 9% -79.7% 8.25 ± 5% vmstat.cpu.id
57.67 ± 6% +57.4% 90.75 vmstat.cpu.sy
984524 ± 6% -83.6% 161406 vmstat.io.bo
62.00 ± 16% +124.6% 139.25 vmstat.procs.r
741410 ± 6% -5.4% 701270 vmstat.system.cs
239168 ± 4% +12.6% 269384 ± 2% vmstat.system.in
3.594e+08 +145.6% 8.829e+08 cpuidle.C1.time
13251444 +193.7% 38919005 cpuidle.C1.usage
8302780 ± 3% +274.0% 31050667 ± 89% cpuidle.C1E.time
199348 ± 4% +155.4% 509135 ± 47% cpuidle.C1E.usage
2.557e+08 ± 28% -78.4% 55352560 ± 87% cpuidle.C3.time
721299 ± 23% -65.1% 251650 ± 71% cpuidle.C3.usage
290310 ± 3% +124.2% 650941 ± 10% cpuidle.POLL.time
131887 +19.0% 156975 ± 2% cpuidle.POLL.usage
32021 ± 88% +191.6% 93372 ± 11% numa-meminfo.node0.AnonHugePages
18534 ± 8% +28.0% 23724 numa-meminfo.node0.Dirty
67343 ± 8% +35.2% 91023 ± 2% numa-meminfo.node0.Inactive
58866 ± 7% +35.3% 79629 numa-meminfo.node0.Inactive(file)
36057 ± 80% +146.6% 88911 ± 12% numa-meminfo.node1.AnonHugePages
18655 ± 7% +21.3% 22635 numa-meminfo.node1.Dirty
69191 ± 11% +22.5% 84740 ± 2% numa-meminfo.node1.Inactive
57378 ± 7% +32.3% 75902 numa-meminfo.node1.Inactive(file)
3792 ± 11% +76.1% 6677 ± 38% numa-meminfo.node1.KernelStack
1117084 ± 7% +5.4% 1177261 ± 6% numa-meminfo.node1.MemUsed
4815 ± 9% +22.9% 5920 numa-vmstat.node0.nr_dirty
14782 ± 8% +34.8% 19925 numa-vmstat.node0.nr_inactive_file
14783 ± 8% +34.8% 19925 numa-vmstat.node0.nr_zone_inactive_file
4754 ± 8% +27.4% 6058 numa-vmstat.node0.nr_zone_write_pending
1355519 ± 6% +22.3% 1658417 ± 3% numa-vmstat.node0.numa_hit
1348673 ± 6% +22.7% 1655017 ± 3% numa-vmstat.node0.numa_local
4847 ± 6% +16.6% 5652 numa-vmstat.node1.nr_dirty
14427 ± 5% +31.6% 18989 numa-vmstat.node1.nr_inactive_file
3792 ± 10% +76.0% 6676 ± 38% numa-vmstat.node1.nr_kernel_stack
14427 ± 5% +31.6% 18989 numa-vmstat.node1.nr_zone_inactive_file
4712 ± 5% +22.9% 5792 numa-vmstat.node1.nr_zone_write_pending
1279822 ± 9% +24.8% 1597807 ± 3% numa-vmstat.node1.numa_hit
1083191 ± 10% +29.0% 1397507 ± 4% numa-vmstat.node1.numa_local
1706 ± 6% +61.4% 2754 turbostat.Avg_MHz
61.91 ± 6% +30.2 92.13 turbostat.Busy%
13247955 +193.7% 38914065 turbostat.C1
13.47 ± 7% -8.3 5.17 turbostat.C1%
199030 ± 4% +155.6% 508725 ± 47% turbostat.C1E
720658 ± 23% -65.2% 250930 ± 71% turbostat.C3
9.40 ± 24% -9.1 0.32 ± 87% turbostat.C3%
15.30 ± 18% -13.1 2.25 ± 11% turbostat.C6%
27.43 ± 16% -77.1% 6.27 ± 6% turbostat.CPU%c1
3.55 ± 37% -94.4% 0.20 ±115% turbostat.CPU%c3
7.11 ± 18% -80.4% 1.39 ± 31% turbostat.CPU%c6
99.14 ± 4% +40.4% 139.14 turbostat.CorWatt
63.67 ± 5% +10.7% 70.50 turbostat.CoreTmp
16370255 +605.1% 1.154e+08 ± 2% turbostat.IRQ
7.31 ± 2% -87.4% 0.92 ± 30% turbostat.Pkg%pc2
63.33 ± 5% +11.3% 70.50 turbostat.PkgTmp
126.14 ± 3% +32.0% 166.50 turbostat.PkgWatt
38.35 -10.7% 34.26 turbostat.RAMWatt
5373 ± 5% +535.9% 34170 turbostat.SMI
63371 +8.7% 68901 proc-vmstat.nr_active_anon
60172 +3.1% 62025 proc-vmstat.nr_anon_pages
9332 ± 7% +24.1% 11581 proc-vmstat.nr_dirty
350500 +3.9% 364042 proc-vmstat.nr_file_pages
29173 ± 7% +33.3% 38875 proc-vmstat.nr_inactive_file
13447 ± 2% +6.4% 14313 proc-vmstat.nr_kernel_stack
6155 -2.9% 5977 proc-vmstat.nr_mapped
8144 +47.1% 11976 ± 7% proc-vmstat.nr_shmem
16872 +2.2% 17235 proc-vmstat.nr_slab_reclaimable
20465 +4.6% 21403 proc-vmstat.nr_slab_unreclaimable
63371 +8.7% 68901 proc-vmstat.nr_zone_active_anon
29173 ± 7% +33.3% 38875 proc-vmstat.nr_zone_inactive_file
8903 ± 6% +33.1% 11852 proc-vmstat.nr_zone_write_pending
5134 ± 7% +1135.3% 63418 ± 8% proc-vmstat.numa_hint_faults
2712 ± 9% +1452.4% 42101 ± 11% proc-vmstat.numa_hint_faults_local
4245003 +17.4% 4985607 proc-vmstat.numa_hit
4231936 +17.5% 4972502 proc-vmstat.numa_local
2422 ± 6% +316.6% 10089 ± 2% proc-vmstat.numa_pages_migrated
6322 ± 14% +944.4% 66031 ± 11% proc-vmstat.numa_pte_updates
4423 ± 3% +44.7% 6400 ± 13% proc-vmstat.pgactivate
4263668 +18.3% 5042078 proc-vmstat.pgalloc_normal
211570 ± 5% +452.3% 1168563 proc-vmstat.pgfault
4180467 +18.7% 4964108 proc-vmstat.pgfree
2422 ± 6% +316.6% 10089 ± 2% proc-vmstat.pgmigrate_success
66934980 +3.3% 69136453 proc-vmstat.pgpgout
1029 ± 4% +16.6% 1200 ± 4% slabinfo.UNIX.active_objs
1029 ± 4% +16.6% 1200 ± 4% slabinfo.UNIX.num_objs
2102 +17.3% 2464 slabinfo.ebitmap_node.active_objs
2102 +17.3% 2464 slabinfo.ebitmap_node.num_objs
1485 +15.4% 1715 slabinfo.ip6-frags.active_objs
1485 +15.4% 1715 slabinfo.ip6-frags.num_objs
783.33 +25.4% 982.00 slabinfo.kmalloc-4k.active_objs
796.00 +26.8% 1009 slabinfo.kmalloc-4k.num_objs
4231 +43.1% 6056 ± 2% slabinfo.kmalloc-96.active_objs
4326 ± 2% +40.2% 6066 ± 2% slabinfo.kmalloc-96.num_objs
94.67 ± 14% +201.6% 285.50 ± 8% slabinfo.nfs_commit_data.active_objs
94.67 ± 14% +201.6% 285.50 ± 8% slabinfo.nfs_commit_data.num_objs
61.67 ± 9% +252.3% 217.25 ± 12% slabinfo.nfs_read_data.active_objs
61.67 ± 9% +252.3% 217.25 ± 12% slabinfo.nfs_read_data.num_objs
331.33 ± 12% +37.6% 455.75 ± 16% slabinfo.skbuff_ext_cache.active_objs
331.33 ± 12% +37.6% 455.75 ± 16% slabinfo.skbuff_ext_cache.num_objs
296.67 ± 11% +47.4% 437.25 ± 11% slabinfo.skbuff_fclone_cache.active_objs
296.67 ± 11% +47.4% 437.25 ± 11% slabinfo.skbuff_fclone_cache.num_objs
1861 ± 7% +13.1% 2104 ± 3% slabinfo.sock_inode_cache.active_objs
1861 ± 7% +13.1% 2104 ± 3% slabinfo.sock_inode_cache.num_objs
1730 +14.9% 1988 slabinfo.xfrm_dst_cache.active_objs
1730 +14.9% 1988 slabinfo.xfrm_dst_cache.num_objs
1210 +15.9% 1402 slabinfo.xfs_btree_cur.active_objs
1210 +15.9% 1402 slabinfo.xfs_btree_cur.num_objs
1124 +14.5% 1287 slabinfo.xfs_buf_item.active_objs
1124 +14.5% 1287 slabinfo.xfs_buf_item.num_objs
1197 +18.6% 1420 slabinfo.xfs_efd_item.active_objs
1197 +18.6% 1420 slabinfo.xfs_efd_item.num_objs
1147 ± 2% +16.8% 1339 slabinfo.xfs_inode.active_objs
1147 ± 2% +16.8% 1339 slabinfo.xfs_inode.num_objs
27.09 ± 12% -72.5% 7.44 ± 7% perf-stat.i.MPKI
4.056e+09 ± 4% +41.5% 5.739e+09 perf-stat.i.branch-instructions
3.69 ± 17% -3.0 0.68 ± 8% perf-stat.i.branch-miss-rate%
50722991 ± 3% -54.4% 23140156 perf-stat.i.branch-misses
19.17 -2.7 16.51 perf-stat.i.cache-miss-rate%
54086897 ± 5% -54.7% 24479387 perf-stat.i.cache-misses
2.929e+08 ± 4% -48.7% 1.501e+08 perf-stat.i.cache-references
772970 ± 5% -8.9% 704502 perf-stat.i.context-switches
6.913e+10 ± 4% +58.8% 1.098e+11 perf-stat.i.cpu-cycles
89123 ± 5% -59.7% 35907 perf-stat.i.cpu-migrations
1133 ± 2% +289.5% 4415 perf-stat.i.cycles-between-cache-misses
1.03 -0.7 0.35 ± 7% perf-stat.i.dTLB-load-miss-rate%
55146957 ± 5% -62.3% 20764703 ± 5% perf-stat.i.dTLB-load-misses
5.071e+09 ± 4% +21.3% 6.15e+09 perf-stat.i.dTLB-loads
0.17 ± 2% +0.0 0.20 ± 5% perf-stat.i.dTLB-store-miss-rate%
3860806 ± 4% -57.1% 1657096 ± 6% perf-stat.i.dTLB-store-misses
2.019e+09 ± 4% -58.5% 8.387e+08 perf-stat.i.dTLB-stores
87.20 +3.3 90.51 perf-stat.i.iTLB-load-miss-rate%
16459537 ± 5% -63.6% 5994301 perf-stat.i.iTLB-load-misses
2753032 ± 3% -77.2% 628945 ± 5% perf-stat.i.iTLB-loads
1.833e+10 ± 4% +30.4% 2.389e+10 perf-stat.i.instructions
1127 ± 2% +250.8% 3956 ± 2% perf-stat.i.instructions-per-iTLB-miss
0.24 ± 2% -10.3% 0.22 perf-stat.i.ipc
2981 -10.9% 2656 perf-stat.i.minor-faults
43.54 +3.9 47.45 perf-stat.i.node-load-miss-rate%
19607511 ± 7% -30.5% 13627815 perf-stat.i.node-load-misses
25859572 ± 6% -41.9% 15014790 perf-stat.i.node-loads
18026722 ± 5% -69.0% 5593547 perf-stat.i.node-store-misses
28667327 ± 5% -67.0% 9470534 perf-stat.i.node-stores
2981 -10.9% 2656 perf-stat.i.page-faults
15.98 -60.7% 6.28 perf-stat.overall.MPKI
1.25 -0.8 0.40 perf-stat.overall.branch-miss-rate%
18.46 -2.2 16.31 perf-stat.overall.cache-miss-rate%
3.77 +21.8% 4.60 perf-stat.overall.cpi
1278 +250.9% 4485 perf-stat.overall.cycles-between-cache-misses
1.08 -0.7 0.34 ± 5% perf-stat.overall.dTLB-load-miss-rate%
85.65 +4.9 90.50 perf-stat.overall.iTLB-load-miss-rate%
1114 +257.8% 3986 perf-stat.overall.instructions-per-iTLB-miss
0.27 -17.9% 0.22 perf-stat.overall.ipc
43.11 +4.5 47.58 perf-stat.overall.node-load-miss-rate%
38.60 -1.5 37.13 perf-stat.overall.node-store-miss-rate%
3.992e+09 ± 4% +43.4% 5.725e+09 perf-stat.ps.branch-instructions
49933802 ± 3% -53.8% 23085684 perf-stat.ps.branch-misses
53231168 ± 5% -54.1% 24422253 perf-stat.ps.cache-misses
2.883e+08 ± 4% -48.0% 1.498e+08 perf-stat.ps.cache-references
760697 ± 5% -7.6% 702869 perf-stat.ps.context-switches
6.804e+10 ± 4% +61.0% 1.095e+11 perf-stat.ps.cpu-cycles
87708 ± 5% -59.2% 35824 perf-stat.ps.cpu-migrations
54271302 ± 5% -61.8% 20716539 ± 5% perf-stat.ps.dTLB-load-misses
4.991e+09 ± 4% +22.9% 6.135e+09 perf-stat.ps.dTLB-loads
3799715 ± 4% -56.5% 1653235 ± 6% perf-stat.ps.dTLB-store-misses
1.987e+09 ± 4% -57.9% 8.367e+08 perf-stat.ps.dTLB-stores
16198464 ± 5% -63.1% 5980364 perf-stat.ps.iTLB-load-misses
2709431 ± 3% -76.8% 627483 ± 5% perf-stat.ps.iTLB-loads
1.804e+10 ± 4% +32.2% 2.384e+10 perf-stat.ps.instructions
2937 -9.8% 2650 perf-stat.ps.minor-faults
78775 +1.3% 79809 perf-stat.ps.msec
19295918 ± 7% -29.5% 13596222 perf-stat.ps.node-load-misses
25448471 ± 6% -41.1% 14979981 perf-stat.ps.node-loads
17740171 ± 5% -68.5% 5580578 perf-stat.ps.node-store-misses
28211759 ± 5% -66.5% 9448596 perf-stat.ps.node-stores
2937 -9.7% 2650 perf-stat.ps.page-faults
1.191e+12 ± 2% +754.0% 1.017e+13 perf-stat.total.instructions
26147 ± 74% +1249.4% 352831 ± 41% sched_debug.cfs_rq:/.MIN_vruntime.avg
419652 ± 70% +1206.8% 5484055 ± 18% sched_debug.cfs_rq:/.MIN_vruntime.max
100681 ± 71% +1199.0% 1307904 ± 29% sched_debug.cfs_rq:/.MIN_vruntime.stddev
700.02 ± 25% -48.6% 359.80 ± 2% sched_debug.cfs_rq:/.load_avg.avg
128.00 ± 10% -63.8% 46.38 ± 3% sched_debug.cfs_rq:/.load_avg.min
26147 ± 74% +1249.4% 352831 ± 41% sched_debug.cfs_rq:/.max_vruntime.avg
419652 ± 70% +1206.8% 5484055 ± 18% sched_debug.cfs_rq:/.max_vruntime.max
100682 ± 71% +1199.0% 1307905 ± 29% sched_debug.cfs_rq:/.max_vruntime.stddev
631056 +923.3% 6457752 sched_debug.cfs_rq:/.min_vruntime.avg
657496 +893.4% 6531324 sched_debug.cfs_rq:/.min_vruntime.max
622268 +927.2% 6392164 sched_debug.cfs_rq:/.min_vruntime.min
6424 ± 21% +491.2% 37984 ± 8% sched_debug.cfs_rq:/.min_vruntime.stddev
0.53 ± 10% +59.6% 0.84 ± 5% sched_debug.cfs_rq:/.nr_running.avg
0.48 ± 11% -14.3% 0.41 ± 9% sched_debug.cfs_rq:/.nr_running.stddev
33.75 ± 17% -79.2% 7.04 ± 49% sched_debug.cfs_rq:/.removed.load_avg.avg
512.00 -75.4% 126.00 sched_debug.cfs_rq:/.removed.load_avg.max
125.70 ± 8% -77.8% 27.85 ± 23% sched_debug.cfs_rq:/.removed.load_avg.stddev
1558 ± 17% -79.2% 324.85 ± 48% sched_debug.cfs_rq:/.removed.runnable_sum.avg
23640 -75.3% 5832 sched_debug.cfs_rq:/.removed.runnable_sum.max
5805 ± 8% -77.8% 1286 ± 23% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
10.35 ± 11% -79.6% 2.11 ± 44% sched_debug.cfs_rq:/.removed.util_avg.avg
210.50 ± 15% -75.7% 51.25 ± 24% sched_debug.cfs_rq:/.removed.util_avg.max
43.22 ± 11% -79.0% 9.08 ± 28% sched_debug.cfs_rq:/.removed.util_avg.stddev
-7180 +1028.8% -81052 sched_debug.cfs_rq:/.spread0.min
6424 ± 21% +491.2% 37980 ± 8% sched_debug.cfs_rq:/.spread0.stddev
124.22 ± 17% +165.5% 329.82 ± 10% sched_debug.cfs_rq:/.util_est_enqueued.avg
725.50 ± 17% +26.8% 919.75 sched_debug.cfs_rq:/.util_est_enqueued.max
163.23 ± 3% +55.2% 253.33 sched_debug.cfs_rq:/.util_est_enqueued.stddev
249918 ± 8% -67.7% 80842 ± 3% sched_debug.cpu.avg_idle.avg
613303 ± 6% -69.6% 186565 ± 3% sched_debug.cpu.avg_idle.max
36370 ± 28% -58.9% 14938 ± 29% sched_debug.cpu.avg_idle.min
137943 ± 3% -69.8% 41695 ± 7% sched_debug.cpu.avg_idle.stddev
69483 +259.8% 249970 sched_debug.cpu.clock.avg
69485 +259.8% 249975 sched_debug.cpu.clock.max
69481 +259.8% 249965 sched_debug.cpu.clock.min
1.29 ± 10% +120.8% 2.84 ± 15% sched_debug.cpu.clock.stddev
69483 +259.8% 249970 sched_debug.cpu.clock_task.avg
69485 +259.8% 249975 sched_debug.cpu.clock_task.max
69481 +259.8% 249965 sched_debug.cpu.clock_task.min
1.29 ± 10% +120.8% 2.84 ± 15% sched_debug.cpu.clock_task.stddev
6.17 ± 30% +52.5% 9.41 ± 7% sched_debug.cpu.cpu_load[2].min
178.83 ± 30% -54.3% 81.69 ± 14% sched_debug.cpu.cpu_load[3].max
8.67 ± 26% +54.7% 13.41 ± 3% sched_debug.cpu.cpu_load[3].min
32.77 ± 36% -64.6% 11.60 ± 15% sched_debug.cpu.cpu_load[3].stddev
224.67 ± 13% -59.8% 90.22 ± 9% sched_debug.cpu.cpu_load[4].max
9.17 ± 24% +75.6% 16.09 ± 3% sched_debug.cpu.cpu_load[4].min
38.70 ± 18% -68.1% 12.34 ± 11% sched_debug.cpu.cpu_load[4].stddev
932.68 ± 19% +81.7% 1694 ± 5% sched_debug.cpu.curr->pid.avg
2647 +173.0% 7228 sched_debug.cpu.curr->pid.max
793.79 ± 7% +51.0% 1198 ± 4% sched_debug.cpu.curr->pid.stddev
0.00 ± 15% +83.5% 0.00 ± 2% sched_debug.cpu.next_balance.stddev
40300 +445.7% 219939 sched_debug.cpu.nr_load_updates.avg
50191 ± 2% +353.4% 227592 sched_debug.cpu.nr_load_updates.max
37603 +475.9% 216550 sched_debug.cpu.nr_load_updates.min
0.88 ± 26% +220.9% 2.81 ± 11% sched_debug.cpu.nr_running.avg
4.17 ± 53% +80.0% 7.50 ± 2% sched_debug.cpu.nr_running.max
0.98 ± 42% +112.5% 2.09 ± 3% sched_debug.cpu.nr_running.stddev
629284 +474.1% 3612460 sched_debug.cpu.nr_switches.avg
638714 +472.3% 3655423 sched_debug.cpu.nr_switches.max
622843 +473.1% 3569313 sched_debug.cpu.nr_switches.min
3755 ± 10% +669.9% 28910 ± 4% sched_debug.cpu.nr_switches.stddev
1.93 ± 65% +96.8% 3.81 ± 8% sched_debug.cpu.nr_uninterruptible.avg
414.50 ± 9% +94.8% 807.34 ± 9% sched_debug.cpu.nr_uninterruptible.max
-322.00 +70.9% -550.16 sched_debug.cpu.nr_uninterruptible.min
211.68 ± 13% +88.3% 398.65 ± 3% sched_debug.cpu.nr_uninterruptible.stddev
69481 +259.8% 249965 sched_debug.cpu_clk
65881 +274.0% 246371 sched_debug.ktime
69932 +258.1% 250410 sched_debug.sched_clk
43.10 -38.7 4.43 ± 2% perf-profile.calltrace.cycles-pp.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_aio_write.new_sync_write.vfs_write
22.52 -20.3 2.21 perf-profile.calltrace.cycles-pp.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_aio_write.new_sync_write
20.53 -18.3 2.21 ± 4% perf-profile.calltrace.cycles-pp.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_aio_write.new_sync_write
12.59 -12.6 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_aio_write
12.49 -12.5 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync
11.12 -10.4 0.71 ± 5% perf-profile.calltrace.cycles-pp._raw_spin_lock.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_aio_write
13.95 -10.2 3.73 perf-profile.calltrace.cycles-pp.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_aio_write.new_sync_write.vfs_write
10.86 -10.2 0.69 ± 6% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync
9.65 ± 2% -8.9 0.74 ± 5% perf-profile.calltrace.cycles-pp.file_write_and_wait_range.xfs_file_fsync.xfs_file_buffered_aio_write.new_sync_write.vfs_write
9.43 ± 2% -8.7 0.71 ± 5% perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync.xfs_file_buffered_aio_write.new_sync_write
9.36 ± 2% -8.7 0.70 ± 5% perf-profile.calltrace.cycles-pp.do_writepages.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync.xfs_file_buffered_aio_write
9.34 ± 2% -8.6 0.70 ± 4% perf-profile.calltrace.cycles-pp.xfs_vm_writepages.do_writepages.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync
11.71 -8.2 3.51 perf-profile.calltrace.cycles-pp.secondary_startup_64
11.00 -7.5 3.46 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
11.00 -7.5 3.46 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
10.99 -7.5 3.46 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
8.61 -6.5 2.13 perf-profile.calltrace.cycles-pp._raw_spin_lock.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_aio_write.new_sync_write
8.57 -6.5 2.12 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_aio_write
6.19 -6.2 0.00 perf-profile.calltrace.cycles-pp.remove_wait_queue.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_aio_write
6.14 -6.1 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.remove_wait_queue.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync
6.11 -6.1 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.remove_wait_queue.__xfs_log_force_lsn.xfs_log_force_lsn
8.87 -6.0 2.86 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
8.61 -5.8 2.79 ± 2% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
5.46 ± 2% -5.5 0.00 perf-profile.calltrace.cycles-pp.xfs_submit_ioend.xfs_vm_writepages.do_writepages.__filemap_fdatawrite_range.file_write_and_wait_range
5.43 ± 2% -5.4 0.00 perf-profile.calltrace.cycles-pp.submit_bio.xfs_submit_ioend.xfs_vm_writepages.do_writepages.__filemap_fdatawrite_range
5.26 ± 2% -5.3 0.00 perf-profile.calltrace.cycles-pp.generic_make_request.submit_bio.xfs_submit_ioend.xfs_vm_writepages.do_writepages
5.08 ± 2% -4.5 0.55 perf-profile.calltrace.cycles-pp.xfs_file_aio_write_checks.xfs_file_buffered_aio_write.new_sync_write.vfs_write.ksys_write
4.82 ± 2% -4.4 0.38 ± 57% perf-profile.calltrace.cycles-pp.file_update_time.xfs_file_aio_write_checks.xfs_file_buffered_aio_write.new_sync_write.vfs_write
4.74 -3.6 1.11 ± 4% perf-profile.calltrace.cycles-pp.remove_wait_queue.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_aio_write.new_sync_write
4.71 -3.6 1.10 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.remove_wait_queue.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_aio_write
4.68 -3.6 1.09 ± 4% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.remove_wait_queue.__xfs_log_force_lsn.xfs_file_fsync
3.79 -2.8 0.99 ± 9% perf-profile.calltrace.cycles-pp.remove_wait_queue.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_aio_write
3.76 -2.8 0.98 ± 9% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.remove_wait_queue.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync
3.72 -2.7 0.98 ± 9% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.remove_wait_queue.xlog_cil_force_lsn.xfs_log_force_lsn
5.50 -1.9 3.57 perf-profile.calltrace.cycles-pp.ret_from_fork
5.50 -1.9 3.57 perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
5.49 -1.9 3.56 perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
5.29 -1.9 3.39 perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
2.73 -1.6 1.16 perf-profile.calltrace.cycles-pp.xlog_wait.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_aio_write
2.25 ± 2% -1.3 0.99 perf-profile.calltrace.cycles-pp.remove_wait_queue.xlog_wait.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync
2.22 ± 2% -1.2 0.98 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.remove_wait_queue.xlog_wait.__xfs_log_force_lsn.xfs_log_force_lsn
2.20 ± 2% -1.2 0.97 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.remove_wait_queue.xlog_wait.__xfs_log_force_lsn
1.81 ± 2% -1.0 0.78 perf-profile.calltrace.cycles-pp.xlog_cil_push.process_one_work.worker_thread.kthread.ret_from_fork
0.88 ± 3% -0.3 0.59 ± 2% perf-profile.calltrace.cycles-pp.xlog_write.xlog_cil_push.process_one_work.worker_thread.kthread
0.00 +0.5 0.53 perf-profile.calltrace.cycles-pp.dequeue_task_fair.__sched_text_start.schedule.md_flush_request.raid0_make_request
0.00 +0.6 0.59 ± 4% perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.00 +0.6 0.64 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.prepare_to_wait_event.md_flush_request.raid0_make_request
0.00 +0.7 0.68 ± 4% perf-profile.calltrace.cycles-pp.sched_ttwu_pending.scheduler_ipi.reschedule_interrupt.md_flush_request.raid0_make_request
0.00 +0.7 0.72 ± 4% perf-profile.calltrace.cycles-pp.scheduler_ipi.reschedule_interrupt.md_flush_request.raid0_make_request.md_handle_request
0.00 +0.8 0.79 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.prepare_to_wait_event.md_flush_request.raid0_make_request.md_handle_request
0.00 +0.8 0.83 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.finish_wait.md_flush_request.raid0_make_request
0.00 +0.8 0.84 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.finish_wait.md_flush_request.raid0_make_request.md_handle_request
0.00 +0.8 0.85 ± 4% perf-profile.calltrace.cycles-pp.reschedule_interrupt.md_flush_request.raid0_make_request.md_handle_request.md_make_request
0.00 +0.9 0.85 perf-profile.calltrace.cycles-pp.finish_wait.md_flush_request.raid0_make_request.md_handle_request.md_make_request
0.00 +1.0 1.04 perf-profile.calltrace.cycles-pp.prepare_to_wait_event.md_flush_request.raid0_make_request.md_handle_request.md_make_request
0.00 +1.4 1.41 ± 2% perf-profile.calltrace.cycles-pp.__sched_text_start.schedule.md_flush_request.raid0_make_request.md_handle_request
0.00 +1.4 1.42 ± 3% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.md_submit_flush_data
0.00 +1.4 1.43 ± 2% perf-profile.calltrace.cycles-pp.schedule.md_flush_request.raid0_make_request.md_handle_request.md_make_request
0.00 +1.5 1.53 ± 3% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.md_submit_flush_data.process_one_work
0.00 +1.6 1.57 ± 3% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.md_submit_flush_data.process_one_work.worker_thread
0.00 +1.8 1.78 ± 3% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.md_submit_flush_data.process_one_work.worker_thread.kthread
0.00 +1.8 1.78 ± 2% perf-profile.calltrace.cycles-pp.md_submit_flush_data.process_one_work.worker_thread.kthread.ret_from_fork
81.32 +11.3 92.58 perf-profile.calltrace.cycles-pp.write
80.05 +12.1 92.17 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
80.00 +12.2 92.16 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
79.89 +12.3 92.15 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
79.81 +12.3 92.14 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
79.30 +12.8 92.06 perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
79.07 +12.9 92.02 perf-profile.calltrace.cycles-pp.xfs_file_buffered_aio_write.new_sync_write.vfs_write.ksys_write.do_syscall_64
71.03 +20.1 91.10 perf-profile.calltrace.cycles-pp.xfs_file_fsync.xfs_file_buffered_aio_write.new_sync_write.vfs_write.ksys_write
0.00 +76.8 76.79 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.md_flush_request.raid0_make_request.md_handle_request
0.00 +77.0 77.02 perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.md_flush_request.raid0_make_request.md_handle_request.md_make_request
4.03 ± 2% +78.1 82.15 perf-profile.calltrace.cycles-pp.blkdev_issue_flush.xfs_file_fsync.xfs_file_buffered_aio_write.new_sync_write.vfs_write
3.72 ± 2% +78.4 82.10 perf-profile.calltrace.cycles-pp.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync.xfs_file_buffered_aio_write.new_sync_write
3.45 ± 2% +78.5 81.97 perf-profile.calltrace.cycles-pp.md_make_request.generic_make_request.submit_bio.submit_bio_wait.blkdev_issue_flush
3.27 ± 3% +78.7 81.95 perf-profile.calltrace.cycles-pp.md_handle_request.md_make_request.generic_make_request.submit_bio.submit_bio_wait
2.99 ± 3% +78.8 81.79 perf-profile.calltrace.cycles-pp.md_flush_request.raid0_make_request.md_handle_request.md_make_request.generic_make_request
3.07 ± 3% +78.9 81.93 perf-profile.calltrace.cycles-pp.raid0_make_request.md_handle_request.md_make_request.generic_make_request.submit_bio
2.76 ± 2% +79.2 82.00 perf-profile.calltrace.cycles-pp.generic_make_request.submit_bio.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync
2.75 ± 2% +79.2 82.00 perf-profile.calltrace.cycles-pp.submit_bio.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync.xfs_file_buffered_aio_write
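
The calltrace deltas above show the added cycles landing in native_queued_spin_lock_slowpath via _raw_spin_lock_irq inside md_flush_request, i.e. flush bios issued from the fsync path contending on an md lock, which is what reverting "MD: fix lock contention for flush bios" would reintroduce. A quick way to confirm the same hotspot on a local run (a sketch; the 30-second sampling window and report options are arbitrary choices, not part of the LKP job) is:

# sample all CPUs with call graphs while the sync_disk_rw load is running
perf record -a -g -- sleep 30
# the spinlock slowpath under md_flush_request should dominate the report
perf report --stdio --sort symbol | head -n 40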
14881 ± 6% +486.4% 87257 softirqs.CPU0.RCU
24878 +270.9% 92277 softirqs.CPU0.SCHED
32103 ± 6% +414.9% 165306 softirqs.CPU0.TIMER
15176 ± 2% +471.7% 86759 softirqs.CPU1.RCU
22055 ± 2% +306.8% 89728 softirqs.CPU1.SCHED
32444 ± 3% +411.3% 165897 ± 3% softirqs.CPU1.TIMER
14864 ± 4% +489.2% 87587 softirqs.CPU10.RCU
21273 ± 2% +315.1% 88301 softirqs.CPU10.SCHED
27880 ± 5% +528.9% 175353 ± 16% softirqs.CPU10.TIMER
14830 ± 11% +483.3% 86503 softirqs.CPU11.RCU
21874 +308.2% 89296 softirqs.CPU11.SCHED
30818 ± 4% +426.0% 162109 softirqs.CPU11.TIMER
17879 ± 13% +408.6% 90939 softirqs.CPU12.RCU
22410 ± 2% +295.3% 88582 softirqs.CPU12.SCHED
34255 ± 2% +385.3% 166243 softirqs.CPU12.TIMER
18805 ± 7% +359.6% 86430 softirqs.CPU13.RCU
21827 +309.3% 89328 softirqs.CPU13.SCHED
29925 ± 4% +441.7% 162101 softirqs.CPU13.TIMER
14412 ± 2% +510.1% 87931 ± 4% softirqs.CPU14.RCU
22144 ± 2% +300.4% 88659 softirqs.CPU14.SCHED
39515 ± 27% +315.1% 164042 ± 2% softirqs.CPU14.TIMER
14377 ± 6% +504.6% 86919 softirqs.CPU15.RCU
21897 ± 2% +308.0% 89347 softirqs.CPU15.SCHED
30163 ± 3% +438.9% 162561 softirqs.CPU15.TIMER
14409 ± 4% +507.2% 87490 softirqs.CPU16.RCU
22024 ± 3% +303.3% 88831 softirqs.CPU16.SCHED
31665 ± 4% +415.9% 163359 softirqs.CPU16.TIMER
14241 ± 4% +516.5% 87798 softirqs.CPU17.RCU
22039 +307.1% 89714 softirqs.CPU17.SCHED
30190 ± 2% +438.1% 162438 softirqs.CPU17.TIMER
14232 +508.6% 86615 softirqs.CPU18.RCU
22227 ± 2% +298.6% 88600 softirqs.CPU18.SCHED
31872 ± 5% +412.8% 163432 ± 2% softirqs.CPU18.TIMER
14326 ± 6% +511.9% 87667 softirqs.CPU19.RCU
21860 ± 2% +309.2% 89450 softirqs.CPU19.SCHED
30517 ± 3% +432.4% 162482 softirqs.CPU19.TIMER
15468 ± 3% +459.8% 86599 softirqs.CPU2.RCU
21947 +306.9% 89296 softirqs.CPU2.SCHED
29942 ± 7% +438.8% 161329 softirqs.CPU2.TIMER
15524 ± 10% +460.9% 87075 ± 2% softirqs.CPU20.RCU
22247 ± 3% +297.6% 88449 softirqs.CPU20.SCHED
32511 ± 9% +399.9% 162531 softirqs.CPU20.TIMER
16124 ± 3% +438.1% 86772 ± 2% softirqs.CPU21.RCU
21867 ± 2% +308.2% 89270 softirqs.CPU21.SCHED
32222 ± 7% +403.3% 162187 softirqs.CPU21.TIMER
15245 ± 6% +469.9% 86890 softirqs.CPU22.RCU
22141 ± 2% +301.4% 88866 softirqs.CPU22.SCHED
30630 ± 7% +438.4% 164905 ± 2% softirqs.CPU22.TIMER
16203 ± 16% +433.7% 86477 softirqs.CPU23.RCU
21894 +309.1% 89560 softirqs.CPU23.SCHED
32179 ± 8% +404.7% 162403 softirqs.CPU23.TIMER
14825 ± 2% +492.1% 87781 softirqs.CPU24.RCU
22102 +301.8% 88810 softirqs.CPU24.SCHED
30067 ± 3% +443.4% 163384 softirqs.CPU24.TIMER
14649 ± 6% +494.4% 87072 softirqs.CPU25.RCU
21811 +310.6% 89558 softirqs.CPU25.SCHED
32069 ± 8% +406.6% 162452 softirqs.CPU25.TIMER
15524 ± 12% +463.1% 87416 softirqs.CPU26.RCU
22060 +304.3% 89198 softirqs.CPU26.SCHED
30608 ± 2% +431.6% 162708 softirqs.CPU26.TIMER
14375 ± 4% +511.8% 87948 softirqs.CPU27.RCU
21802 +311.4% 89696 softirqs.CPU27.SCHED
31032 ± 6% +422.6% 162165 softirqs.CPU27.TIMER
15014 ± 6% +477.5% 86704 softirqs.CPU28.RCU
21935 +303.6% 88529 softirqs.CPU28.SCHED
29871 +441.4% 161730 softirqs.CPU28.TIMER
14457 ± 7% +508.1% 87911 softirqs.CPU29.RCU
21845 +310.4% 89653 softirqs.CPU29.SCHED
30666 ± 5% +430.1% 162564 softirqs.CPU29.TIMER
15720 ± 12% +449.2% 86335 softirqs.CPU3.RCU
21911 +307.6% 89305 softirqs.CPU3.SCHED
32375 ± 7% +400.3% 161979 softirqs.CPU3.TIMER
16382 ± 3% +436.7% 87921 softirqs.CPU30.RCU
21787 +308.4% 88981 softirqs.CPU30.SCHED
29038 ± 2% +455.4% 161277 softirqs.CPU30.TIMER
14836 ± 8% +488.9% 87372 softirqs.CPU31.RCU
21773 +312.8% 89874 softirqs.CPU31.SCHED
30245 ± 5% +437.4% 162532 softirqs.CPU31.TIMER
14911 ± 14% +469.4% 84903 ± 2% softirqs.CPU32.RCU
22122 ± 2% +303.3% 89218 softirqs.CPU32.SCHED
31883 ± 10% +409.2% 162341 softirqs.CPU32.TIMER
13353 ± 4% +534.8% 84767 softirqs.CPU33.RCU
21842 ± 2% +310.5% 89664 softirqs.CPU33.SCHED
30126 ± 3% +438.3% 162162 softirqs.CPU33.TIMER
12271 ± 2% +587.1% 84320 softirqs.CPU34.RCU
22168 +302.3% 89183 softirqs.CPU34.SCHED
30379 ± 4% +433.9% 162194 softirqs.CPU34.TIMER
13700 ± 10% +531.4% 86500 softirqs.CPU35.RCU
21773 ± 2% +311.6% 89617 softirqs.CPU35.SCHED
29374 ± 4% +457.8% 163844 ± 2% softirqs.CPU35.TIMER
14197 ± 7% +504.9% 85881 softirqs.CPU36.RCU
22217 ± 2% +300.1% 88890 softirqs.CPU36.SCHED
34490 ± 9% +369.5% 161930 softirqs.CPU36.TIMER
12721 ± 9% +567.6% 84926 softirqs.CPU37.RCU
21844 ± 2% +308.0% 89118 softirqs.CPU37.SCHED
37740 ± 30% +328.7% 161776 softirqs.CPU37.TIMER
12972 ± 8% +555.4% 85014 softirqs.CPU38.RCU
22224 ± 2% +298.8% 88630 softirqs.CPU38.SCHED
31492 ± 4% +464.2% 177686 ± 14% softirqs.CPU38.TIMER
13739 ± 8% +520.3% 85230 ± 2% softirqs.CPU39.RCU
22027 ± 2% +305.3% 89286 softirqs.CPU39.SCHED
30457 ± 3% +484.2% 177924 ± 14% softirqs.CPU39.TIMER
15752 ± 4% +458.5% 87970 softirqs.CPU4.RCU
22056 +304.4% 89195 softirqs.CPU4.SCHED
34247 ± 11% +375.5% 162860 softirqs.CPU4.TIMER
15513 ± 5% +458.2% 86595 softirqs.CPU5.RCU
21876 ± 2% +308.6% 89388 softirqs.CPU5.SCHED
32238 ± 8% +449.3% 177102 ± 14% softirqs.CPU5.TIMER
15190 ± 7% +470.0% 86580 softirqs.CPU6.RCU
22013 +304.2% 88985 softirqs.CPU6.SCHED
30196 +437.6% 162339 softirqs.CPU6.TIMER
14199 ± 4% +520.5% 88112 softirqs.CPU7.RCU
21843 ± 2% +310.9% 89752 softirqs.CPU7.SCHED
31088 ± 6% +424.4% 163017 softirqs.CPU7.TIMER
15641 ± 7% +458.1% 87287 softirqs.CPU8.RCU
21911 +304.4% 88603 softirqs.CPU8.SCHED
30063 ± 2% +441.0% 162632 softirqs.CPU8.TIMER
14905 ± 6% +483.9% 87032 softirqs.CPU9.RCU
21715 +309.3% 88870 softirqs.CPU9.SCHED
30645 ± 5% +430.4% 162556 softirqs.CPU9.TIMER
595866 ± 3% +483.2% 3475309 softirqs.RCU
881282 +305.0% 3569582 softirqs.SCHED
1259248 ± 4% +421.7% 6569855 softirqs.TIMER
382.00 ± 97% +151.4% 960.50 ± 14% interrupts.37:IR-PCI-MSI.524289-edge.eth0-TxRx-0
126.00 ± 77% +255.4% 447.75 ± 35% interrupts.38:IR-PCI-MSI.524290-edge.eth0-TxRx-1
127.33 ± 63% +620.4% 917.25 ± 97% interrupts.39:IR-PCI-MSI.524291-edge.eth0-TxRx-2
73.00 ± 36% +1571.6% 1220 ±102% interrupts.40:IR-PCI-MSI.524292-edge.eth0-TxRx-3
67.00 ± 51% +492.9% 397.25 ± 39% interrupts.41:IR-PCI-MSI.524293-edge.eth0-TxRx-4
50.67 ± 26% +693.9% 402.25 ± 49% interrupts.42:IR-PCI-MSI.524294-edge.eth0-TxRx-5
94.00 ± 76% +425.5% 494.00 ± 64% interrupts.43:IR-PCI-MSI.524295-edge.eth0-TxRx-6
39316 ± 19% +290.3% 153433 ± 3% interrupts.CAL:Function_call_interrupts
1048 ± 18% +254.6% 3717 ± 5% interrupts.CPU0.CAL:Function_call_interrupts
133588 ± 6% +506.6% 810374 ± 9% interrupts.CPU0.LOC:Local_timer_interrupts
6938 ± 2% -70.0% 2078 ± 93% interrupts.CPU0.NMI:Non-maskable_interrupts
6938 ± 2% -70.0% 2078 ± 93% interrupts.CPU0.PMI:Performance_monitoring_interrupts
270741 +676.9% 2103498 ± 2% interrupts.CPU0.RES:Rescheduling_interrupts
992.00 ± 18% +297.3% 3941 ± 10% interrupts.CPU1.CAL:Function_call_interrupts
133024 ± 5% +509.4% 810700 ± 8% interrupts.CPU1.LOC:Local_timer_interrupts
268182 +661.2% 2041439 ± 3% interrupts.CPU1.RES:Rescheduling_interrupts
919.33 ± 27% +324.9% 3906 ± 6% interrupts.CPU10.CAL:Function_call_interrupts
133250 ± 6% +507.4% 809341 ± 9% interrupts.CPU10.LOC:Local_timer_interrupts
271163 +666.7% 2078939 ± 2% interrupts.CPU10.RES:Rescheduling_interrupts
1017 ± 20% +316.2% 4232 ± 3% interrupts.CPU11.CAL:Function_call_interrupts
133396 ± 6% +507.1% 809829 ± 9% interrupts.CPU11.LOC:Local_timer_interrupts
2329 ± 70% +159.9% 6053 ± 33% interrupts.CPU11.NMI:Non-maskable_interrupts
2329 ± 70% +159.9% 6053 ± 33% interrupts.CPU11.PMI:Performance_monitoring_interrupts
269666 +648.2% 2017734 ± 2% interrupts.CPU11.RES:Rescheduling_interrupts
1008 ± 17% +302.7% 4059 ± 4% interrupts.CPU12.CAL:Function_call_interrupts
133147 ± 6% +507.8% 809269 ± 9% interrupts.CPU12.LOC:Local_timer_interrupts
269665 +667.6% 2069971 ± 2% interrupts.CPU12.RES:Rescheduling_interrupts
1035 ± 19% +307.4% 4216 ± 4% interrupts.CPU13.CAL:Function_call_interrupts
132326 ± 5% +511.6% 809355 ± 9% interrupts.CPU13.LOC:Local_timer_interrupts
268233 +651.0% 2014555 ± 2% interrupts.CPU13.RES:Rescheduling_interrupts
955.00 ± 22% +310.5% 3920 ± 9% interrupts.CPU14.CAL:Function_call_interrupts
132902 ± 6% +508.2% 808332 ± 9% interrupts.CPU14.LOC:Local_timer_interrupts
270232 +669.5% 2079387 ± 2% interrupts.CPU14.RES:Rescheduling_interrupts
958.00 ± 15% +331.2% 4130 ± 10% interrupts.CPU15.CAL:Function_call_interrupts
133491 ± 6% +506.1% 809122 ± 9% interrupts.CPU15.LOC:Local_timer_interrupts
268772 +653.5% 2025140 ± 3% interrupts.CPU15.RES:Rescheduling_interrupts
982.33 ± 20% +283.5% 3767 ± 9% interrupts.CPU16.CAL:Function_call_interrupts
133026 ± 6% +508.4% 809297 ± 9% interrupts.CPU16.LOC:Local_timer_interrupts
32.00 ±139% +6310.9% 2051 ± 97% interrupts.CPU16.NMI:Non-maskable_interrupts
32.00 ±139% +6310.9% 2051 ± 97% interrupts.CPU16.PMI:Performance_monitoring_interrupts
270141 +674.6% 2092564 ± 2% interrupts.CPU16.RES:Rescheduling_interrupts
932.33 ± 11% +285.7% 3596 ± 6% interrupts.CPU17.CAL:Function_call_interrupts
133459 ± 6% +506.5% 809457 ± 9% interrupts.CPU17.LOC:Local_timer_interrupts
268937 +664.9% 2057174 ± 2% interrupts.CPU17.RES:Rescheduling_interrupts
997.33 ± 22% +272.8% 3717 ± 11% interrupts.CPU18.CAL:Function_call_interrupts
133142 ± 6% +508.0% 809548 ± 9% interrupts.CPU18.LOC:Local_timer_interrupts
1161 ±141% +247.1% 4030 ± 70% interrupts.CPU18.NMI:Non-maskable_interrupts
1161 ±141% +247.1% 4030 ± 70% interrupts.CPU18.PMI:Performance_monitoring_interrupts
271465 +671.4% 2094183 ± 2% interrupts.CPU18.RES:Rescheduling_interrupts
984.67 ± 15% +290.8% 3848 ± 9% interrupts.CPU19.CAL:Function_call_interrupts
133426 ± 6% +506.5% 809213 ± 9% interrupts.CPU19.LOC:Local_timer_interrupts
269250 +658.3% 2041823 ± 3% interrupts.CPU19.RES:Rescheduling_interrupts
790.33 ± 10% +339.7% 3475 ± 5% interrupts.CPU2.CAL:Function_call_interrupts
133443 ± 6% +506.4% 809224 ± 9% interrupts.CPU2.LOC:Local_timer_interrupts
5796 ± 29% -99.5% 28.75 ±167% interrupts.CPU2.NMI:Non-maskable_interrupts
5796 ± 29% -99.5% 28.75 ±167% interrupts.CPU2.PMI:Performance_monitoring_interrupts
270575 +681.7% 2115196 ± 2% interrupts.CPU2.RES:Rescheduling_interrupts
972.00 ± 24% +281.5% 3708 ± 6% interrupts.CPU20.CAL:Function_call_interrupts
133230 ± 6% +507.9% 809922 ± 9% interrupts.CPU20.LOC:Local_timer_interrupts
269876 +675.8% 2093778 ± 2% interrupts.CPU20.RES:Rescheduling_interrupts
1005 ± 20% +292.3% 3944 ± 5% interrupts.CPU21.CAL:Function_call_interrupts
133184 ± 5% +508.8% 810772 ± 8% interrupts.CPU21.LOC:Local_timer_interrupts
268330 +658.8% 2036131 interrupts.CPU21.RES:Rescheduling_interrupts
967.33 ± 20% +304.5% 3913 ± 8% interrupts.CPU22.CAL:Function_call_interrupts
133003 ± 6% +508.8% 809723 ± 9% interrupts.CPU22.LOC:Local_timer_interrupts
9.00 ± 86% +78477.8% 7072 ± 24% interrupts.CPU22.NMI:Non-maskable_interrupts
9.00 ± 86% +78477.8% 7072 ± 24% interrupts.CPU22.PMI:Performance_monitoring_interrupts
269563 +668.4% 2071335 ± 2% interrupts.CPU22.RES:Rescheduling_interrupts
954.33 ± 18% +295.1% 3770 ± 6% interrupts.CPU23.CAL:Function_call_interrupts
133483 ± 6% +507.2% 810519 ± 9% interrupts.CPU23.LOC:Local_timer_interrupts
40.67 ±139% +9827.7% 4037 ± 70% interrupts.CPU23.NMI:Non-maskable_interrupts
40.67 ±139% +9827.7% 4037 ± 70% interrupts.CPU23.PMI:Performance_monitoring_interrupts
268426 +661.6% 2044292 ± 2% interrupts.CPU23.RES:Rescheduling_interrupts
972.33 ± 20% +297.8% 3867 ± 9% interrupts.CPU24.CAL:Function_call_interrupts
133172 ± 5% +508.1% 809821 ± 9% interrupts.CPU24.LOC:Local_timer_interrupts
270065 +670.5% 2080975 interrupts.CPU24.RES:Rescheduling_interrupts
951.00 ± 18% +298.8% 3792 ± 6% interrupts.CPU25.CAL:Function_call_interrupts
133180 ± 5% +508.4% 810270 ± 9% interrupts.CPU25.LOC:Local_timer_interrupts
268233 +663.0% 2046605 ± 2% interrupts.CPU25.RES:Rescheduling_interrupts
382.00 ± 97% +151.4% 960.50 ± 14% interrupts.CPU26.37:IR-PCI-MSI.524289-edge.eth0-TxRx-0
994.00 ± 22% +252.5% 3503 ± 7% interrupts.CPU26.CAL:Function_call_interrupts
133368 ± 6% +507.1% 809711 ± 9% interrupts.CPU26.LOC:Local_timer_interrupts
269891 +679.0% 2102493 interrupts.CPU26.RES:Rescheduling_interrupts
898.33 ± 11% +333.8% 3897 ± 5% interrupts.CPU27.CAL:Function_call_interrupts
133388 ± 6% +507.4% 810264 ± 9% interrupts.CPU27.LOC:Local_timer_interrupts
268700 +657.8% 2036093 interrupts.CPU27.RES:Rescheduling_interrupts
126.00 ± 77% +255.4% 447.75 ± 35% interrupts.CPU28.38:IR-PCI-MSI.524290-edge.eth0-TxRx-1
944.33 ± 22% +302.1% 3797 ± 7% interrupts.CPU28.CAL:Function_call_interrupts
133140 ± 6% +507.8% 809290 ± 9% interrupts.CPU28.LOC:Local_timer_interrupts
270210 +677.1% 2099843 ± 3% interrupts.CPU28.RES:Rescheduling_interrupts
969.67 ± 20% +297.5% 3854 ± 8% interrupts.CPU29.CAL:Function_call_interrupts
133520 ± 6% +507.1% 810570 ± 9% interrupts.CPU29.LOC:Local_timer_interrupts
268826 +663.3% 2051921 ± 2% interrupts.CPU29.RES:Rescheduling_interrupts
1038 ± 19% +293.7% 4089 ± 8% interrupts.CPU3.CAL:Function_call_interrupts
133191 ± 5% +508.1% 809881 ± 8% interrupts.CPU3.LOC:Local_timer_interrupts
6940 ± 2% -56.4% 3024 ± 57% interrupts.CPU3.NMI:Non-maskable_interrupts
6940 ± 2% -56.4% 3024 ± 57% interrupts.CPU3.PMI:Performance_monitoring_interrupts
268572 +654.7% 2026828 ± 2% interrupts.CPU3.RES:Rescheduling_interrupts
127.33 ± 63% +620.4% 917.25 ± 97% interrupts.CPU30.39:IR-PCI-MSI.524291-edge.eth0-TxRx-2
963.00 ± 26% +281.9% 3678 ± 4% interrupts.CPU30.CAL:Function_call_interrupts
133054 ± 6% +508.3% 809434 ± 9% interrupts.CPU30.LOC:Local_timer_interrupts
271207 +674.4% 2100122 ± 2% interrupts.CPU30.RES:Rescheduling_interrupts
995.00 ± 18% +267.3% 3654 ± 4% interrupts.CPU31.CAL:Function_call_interrupts
133386 ± 6% +507.3% 810065 ± 9% interrupts.CPU31.LOC:Local_timer_interrupts
269250 +664.1% 2057223 ± 2% interrupts.CPU31.RES:Rescheduling_interrupts
73.00 ± 36% +1571.6% 1220 ±102% interrupts.CPU32.40:IR-PCI-MSI.524292-edge.eth0-TxRx-3
1002 ± 22% +258.4% 3592 ± 4% interrupts.CPU32.CAL:Function_call_interrupts
132940 ± 6% +508.8% 809389 ± 9% interrupts.CPU32.LOC:Local_timer_interrupts
269559 +683.9% 2113083 ± 2% interrupts.CPU32.RES:Rescheduling_interrupts
975.33 ± 19% +275.5% 3662 ± 4% interrupts.CPU33.CAL:Function_call_interrupts
133459 ± 5% +506.4% 809294 ± 9% interrupts.CPU33.LOC:Local_timer_interrupts
267896 +667.4% 2055845 ± 2% interrupts.CPU33.RES:Rescheduling_interrupts
67.00 ± 51% +492.9% 397.25 ± 39% interrupts.CPU34.41:IR-PCI-MSI.524293-edge.eth0-TxRx-4
1017 ± 20% +245.0% 3508 ± 8% interrupts.CPU34.CAL:Function_call_interrupts
133243 ± 6% +507.5% 809390 ± 9% interrupts.CPU34.LOC:Local_timer_interrupts
269293 +680.6% 2102164 ± 3% interrupts.CPU34.RES:Rescheduling_interrupts
1031 ± 20% +271.9% 3834 ± 6% interrupts.CPU35.CAL:Function_call_interrupts
132493 ± 7% +510.7% 809145 ± 9% interrupts.CPU35.LOC:Local_timer_interrupts
268627 +662.3% 2047686 interrupts.CPU35.RES:Rescheduling_interrupts
50.67 ± 26% +693.9% 402.25 ± 49% interrupts.CPU36.42:IR-PCI-MSI.524294-edge.eth0-TxRx-5
1035 ± 18% +275.0% 3880 ± 3% interrupts.CPU36.CAL:Function_call_interrupts
132759 ± 6% +509.0% 808458 ± 9% interrupts.CPU36.LOC:Local_timer_interrupts
269908 +675.6% 2093509 ± 3% interrupts.CPU36.RES:Rescheduling_interrupts
1008 ± 18% +296.3% 3997 ± 4% interrupts.CPU37.CAL:Function_call_interrupts
133226 ± 6% +507.3% 809019 ± 9% interrupts.CPU37.LOC:Local_timer_interrupts
2505 ±131% +141.5% 6051 ± 33% interrupts.CPU37.NMI:Non-maskable_interrupts
2505 ±131% +141.5% 6051 ± 33% interrupts.CPU37.PMI:Performance_monitoring_interrupts
268397 +655.3% 2027303 interrupts.CPU37.RES:Rescheduling_interrupts
94.00 ± 76% +425.5% 494.00 ± 64% interrupts.CPU38.43:IR-PCI-MSI.524295-edge.eth0-TxRx-6
943.00 ± 21% +306.9% 3837 ± 5% interrupts.CPU38.CAL:Function_call_interrupts
132971 ± 6% +508.5% 809161 ± 9% interrupts.CPU38.LOC:Local_timer_interrupts
270860 +668.1% 2080363 ± 2% interrupts.CPU38.RES:Rescheduling_interrupts
1005 ± 22% +294.7% 3966 ± 5% interrupts.CPU39.CAL:Function_call_interrupts
132952 ± 6% +507.9% 808170 ± 9% interrupts.CPU39.LOC:Local_timer_interrupts
268740 +658.0% 2037012 interrupts.CPU39.RES:Rescheduling_interrupts
955.00 ± 30% +259.7% 3435 ± 7% interrupts.CPU4.CAL:Function_call_interrupts
133712 ± 6% +505.5% 809605 ± 9% interrupts.CPU4.LOC:Local_timer_interrupts
270806 +680.7% 2114232 ± 2% interrupts.CPU4.RES:Rescheduling_interrupts
1039 ± 19% +288.5% 4037 ± 7% interrupts.CPU5.CAL:Function_call_interrupts
133190 ± 5% +508.5% 810503 ± 9% interrupts.CPU5.LOC:Local_timer_interrupts
6988 ± 2% -55.5% 3107 ± 52% interrupts.CPU5.NMI:Non-maskable_interrupts
6988 ± 2% -55.5% 3107 ± 52% interrupts.CPU5.PMI:Performance_monitoring_interrupts
269305 +652.6% 2026721 ± 2% interrupts.CPU5.RES:Rescheduling_interrupts
1037 ± 18% +283.8% 3980 ± 6% interrupts.CPU6.CAL:Function_call_interrupts
133350 ± 6% +507.2% 809668 ± 9% interrupts.CPU6.LOC:Local_timer_interrupts
270185 +671.4% 2084081 ± 3% interrupts.CPU6.RES:Rescheduling_interrupts
1023 ± 16% +286.0% 3951 ± 10% interrupts.CPU7.CAL:Function_call_interrupts
133398 ± 6% +507.7% 810619 ± 9% interrupts.CPU7.LOC:Local_timer_interrupts
268918 +657.6% 2037320 ± 3% interrupts.CPU7.RES:Rescheduling_interrupts
959.00 ± 26% +285.6% 3697 ± 6% interrupts.CPU8.CAL:Function_call_interrupts
133200 ± 6% +508.3% 810215 ± 9% interrupts.CPU8.LOC:Local_timer_interrupts
270558 +676.3% 2100284 interrupts.CPU8.RES:Rescheduling_interrupts
1040 ± 19% +289.8% 4054 ± 6% interrupts.CPU9.CAL:Function_call_interrupts
133076 ± 5% +508.7% 810082 ± 9% interrupts.CPU9.LOC:Local_timer_interrupts
268991 +655.1% 2031167 ± 2% interrupts.CPU9.RES:Rescheduling_interrupts
5327898 ± 6% +507.9% 32386035 ± 9% interrupts.LOC:Local_timer_interrupts
10780226 +666.5% 82630024 interrupts.RES:Rescheduling_interrupts
223.67 ± 6% +595.5% 1555 ± 4% interrupts.TLB:TLB_shootdowns



aim7.jobs-per-min

35000 +-+-----------------------------------------------------------------+
| |
30000 +-+.+.+..+.+.+ +.+.+ + +.+.+.+ +..+.+.+.+.+ +.+..+.+.+ |
| : : : : : : : : : : : |
25000 +-+ : : : : : : : : : : |
| : : : : : : : : : : : : |
20000 +-+ : : : : : : : : : : : :|
|: : : : : : : : : : : : :|
15000 +-+ : : : : : : : : : : : :|
|: O O: :O O O:O: O: :O O :O: : : : :|
10000 O-+ O : : : : : : : : : : : :|
|: :: :: :: :: :: ::|
5000 +-+ : :: : : O O O O : : |
| : : : : : : |
0 +-O---O----O---O------------O---O---O-------------------------------+


aim7.time.system_time

16000 +-+-----------------------------------------------------------------+
| O O O O |
14000 +-+ |
12000 +-+ |
| |
10000 +-+ |
| |
8000 +-+ |
| |
6000 +-+ |
4000 O-+ O O O O O O |
| O O O O O |
2000 +-+ |
|.+.+.+..+.+.+. .+.+.+. .+. .+.+.+.+. .+..+.+.+.+.+. .+.+..+.+.+. .|
0 +-O---O----O---O------------O---O---O-------------------------------+


aim7.time.elapsed_time

450 +-+-------------------------------------------------------------------+
| O O O O |
400 +-+ |
350 +-+ |
| |
300 +-+ |
250 +-+ |
| |
200 +-+ |
150 O-+ O O O O O O O O O |
| O O |
100 +-+ |
50 +-+.+..+.+.+.+ +.+.+ + +.+.+.+ +.+.+.+.+..+ +.+.+.+..+ |
|+ + .. + + + .. + .. + + + +|
0 +-O----O---O---O------------O----O---O--------------------------------+


aim7.time.elapsed_time.max

450 +-+-------------------------------------------------------------------+
| O O O O |
400 +-+ |
350 +-+ |
| |
300 +-+ |
250 +-+ |
| |
200 +-+ |
150 O-+ O O O O O O O O O |
| O O |
100 +-+ |
50 +-+.+..+.+.+.+ +.+.+ + +.+.+.+ +.+.+.+.+..+ +.+.+.+..+ |
|+ + .. + + + .. + .. + + + +|
0 +-O----O---O---O------------O----O---O--------------------------------+


aim7.time.minor_page_faults

90000 +-+-----------------------------------------------------------------+
| O |
80000 +-+ O O |
70000 +-+ O |
| |
60000 +-+ |
50000 +-+ |
| |
40000 +-+ |
30000 O-+ O O O O O O O O O O |
| O |
20000 +-+.+.+..+.+.+ +.+.+ + +.+.+.+ +..+.+.+.+.+ +.+..+.+.+ |
10000 +-+ : : : + : : : : : : : :|
|: : : : + : : : : : : : :|
0 +-O---O----O---O------------O---O---O-------------------------------+


aim7.time.voluntary_context_switches

2.5e+08 +-+---------------------------------------------------------------+
| |
| O O O O |
2e+08 +-+ |
| |
| |
1.5e+08 +-+ |
| |
1e+08 +-+ |
| |
| |
5e+07 O-+ O O O O O O O O O O O |
| +.+.+.+.+.+ +.+.+ + +.+.+.+ +.+.+.+.+..+ +.+.+.+.+ |
|+ + .. + + + + + + + + + +|
0 +-O---O---O---O------------O---O---O------------------------------+


aim7.time.involuntary_context_switches

2.5e+07 +-+----------------------------------------O-O--------------------+
| O O |
| |
2e+07 +-+ |
| |
| |
1.5e+07 +-+ |
| |
1e+07 +-+ |
| |
| |
5e+06 +-+ |
O O O O O O O O O O O O |
| |
0 +-O---O---O---O------------O---O---O------------------------------+


[*] bisect-good sample
[O] bisect-bad sample



Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Rong Chen


Attachments:
config-5.1.0-rc3-00024-g4bc034d (196.56 kB)
job-script (7.92 kB)
job.yaml (5.55 kB)
reproduce (1.06 kB)

2019-04-08 07:46:58

by NeilBrown

Subject: Re: [MD] 4bc034d353: aim7.jobs-per-min -86.0% regression

On Mon, Apr 08 2019, kernel test robot wrote:

> Greetings,
>
> FYI, we noticed a -86.0% regression of aim7.jobs-per-min due to commit:

That is expected. The following commit
2bc13b83e6298486371761de503faeffd15b7534
should restore the performance.
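
If anyone wants to re-test with that fix in place, one possible workflow (a sketch only; it assumes a local checkout of the linux-block tree referenced in the report, with an 'origin' remote) is:

# check whether the follow-up commit is already on the tested branch,
# and apply it on top if it is not, before re-running the aim7 job
git fetch origin for-5.2/block
git merge-base --is-ancestor 2bc13b83e6298486371761de503faeffd15b7534 FETCH_HEAD \
&& echo "fix already present" \
|| git cherry-pick 2bc13b83e6298486371761de503faeffd15b7534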

NeilBrown



>
>
> commit: 4bc034d35377196c854236133b07730a777c4aba ("Revert "MD: fix lock contention for flush bios"")
> https://git.kernel.org/cgit/linux/kernel/git/axboe/linux-block.git for-5.2/block
>
> in testcase: aim7
> on test machine: 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384G memory
> with the following parameters:
>
> disk: 4BRD_12G
> md: RAID0
> fs: xfs
> test: sync_disk_rw
> load: 300
> cpufreq_governor: performance
>
> test-description: AIM7 is a traditional UNIX system-level benchmark suite used to test and measure the performance of multiuser systems.
> test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/
>
>

