2021-06-27 13:56:09

by kernel test robot

Subject: [xfs] 25f25648e5: aim7.jobs-per-min 22.3% improvement



Greetings,

FYI, we noticed a 22.3% improvement in aim7.jobs-per-min due to the following commit:


commit: 25f25648e57c793b4b18b010eac18a4e2f2b3050 ("xfs: separate CIL commit record IO")
https://git.kernel.org/cgit/linux/kernel/git/djwong/xfs-linux.git xfs-merge-5.14
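For reference, the headline percentage can be recomputed from the absolute aim7.jobs-per-min values reported in the comparison table below (13815 jobs/min at the parent commit, 16889 with the patch); a minimal check:

```shell
# Recompute the reported improvement from the absolute values in the
# comparison table (18842e0a4f -> 25f25648e5).
base=13815      # aim7.jobs-per-min at parent commit 18842e0a4f
patched=16889   # aim7.jobs-per-min at 25f25648e5
awk -v b="$base" -v p="$patched" 'BEGIN { printf "%+.1f%%\n", (p - b) / b * 100 }'
# prints +22.3%
```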


in testcase: aim7
on test machine: 88 threads 2 sockets Intel(R) Xeon(R) Gold 6238M CPU @ 2.10GHz with 128G memory
with following parameters:

disk: 4BRD_12G
md: RAID0
fs: xfs
test: sync_disk_rw
load: 300
cpufreq_governor: performance
ucode: 0x5003006
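The parameter list above maps onto keys in the lkp job file. A hedged sketch of such a snippet is shown here; the key names are assumed from the parameter list and the table header further below, the authoritative schema is defined by lkp-tests, and the real job.yaml attached to this email is what should actually be used:

```shell
# Illustrative only: write the test parameters above as a yaml snippet.
# The job.yaml attached to this email is authoritative.
cat > job-snippet.yaml <<'EOF'
testcase: aim7
disk: 4BRD_12G
md: RAID0
fs: xfs
test: sync_disk_rw
load: 300
cpufreq_governor: performance
ucode: "0x5003006"
EOF
```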

test-description: AIM7 is a traditional UNIX system-level benchmark suite used to test and measure the performance of multiuser systems.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/





Details are as follows:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
bin/lkp run generated-yaml-file

=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/md/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/4BRD_12G/xfs/x86_64-rhel-8.3/300/RAID0/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp9/sync_disk_rw/aim7/0x5003006

commit:
18842e0a4f ("xfs: Fix 64-bit division on 32-bit in xlog_state_switch_iclogs()")
25f25648e5 ("xfs: separate CIL commit record IO")

18842e0a4f48564b 25f25648e57c793b4b18b010eac
---------------- ---------------------------
%stddev %change %stddev
13815 +22.3% 16889 aim7.jobs-per-min
130.32 -18.2% 106.64 aim7.time.elapsed_time
130.32 -18.2% 106.64 aim7.time.elapsed_time.max
1646003 +16.3% 1913955 aim7.time.involuntary_context_switches
42538 ? 2% -10.0% 38271 aim7.time.minor_page_faults
5500 -18.3% 4494 aim7.time.system_time
72944741 -10.5% 65303598 aim7.time.voluntary_context_switches
0.01 +0.0 0.03 ? 10% mpstat.cpu.all.iowait%
0.53 -0.1 0.44 ? 10% mpstat.cpu.all.usr%
8419 -9.5% 7615 slabinfo.kmalloc-1k.active_objs
8440 -9.3% 7654 slabinfo.kmalloc-1k.num_objs
168.83 -14.0% 145.18 uptime.boot
8433 -11.7% 7446 uptime.idle
556844 +21.7% 677674 vmstat.io.bo
3565134 -19.6% 2866640 ? 2% vmstat.memory.cache
1095947 +12.1% 1228114 vmstat.system.cs
217079 ? 2% +5.2% 228390 vmstat.system.in
9.886e+08 ? 3% +8.5% 1.073e+09 cpuidle.C1.time
21880453 ? 3% +8.4% 23726361 cpuidle.C1.usage
4.21e+09 ? 8% -27.0% 3.072e+09 ? 12% cpuidle.C1E.time
46804996 ? 2% -19.0% 37889790 cpuidle.C1E.usage
1132585 ? 2% +11.4% 1261749 ? 2% cpuidle.POLL.usage
21879557 ? 3% +8.4% 23719474 turbostat.C1
8.52 ? 3% +2.8 11.28 ? 2% turbostat.C1%
46804444 ? 2% -19.0% 37888773 turbostat.C1E
28919152 ? 3% -13.5% 25009870 turbostat.IRQ
63.33 -5.0% 60.17 turbostat.PkgTmp
55.51 +1.6% 56.38 turbostat.RAMWatt
340608 ? 2% -68.4% 107793 ? 19% meminfo.Active
340405 ? 2% -68.4% 107572 ? 19% meminfo.Active(anon)
131581 -13.9% 113279 meminfo.AnonHugePages
3418203 -20.5% 2716452 meminfo.Cached
1494601 -47.0% 791470 ? 6% meminfo.Committed_AS
982289 -48.4% 506435 ? 6% meminfo.Inactive
836764 -56.5% 364176 ? 8% meminfo.Inactive(anon)
400957 ? 3% -84.9% 60497 ? 9% meminfo.Mapped
5305128 -13.5% 4586518 meminfo.Memused
900371 -77.6% 201888 ? 25% meminfo.Shmem
5385905 -13.0% 4685824 meminfo.max_used_kB
27327 ? 5% -81.8% 4974 ? 33% numa-vmstat.node0.nr_active_anon
44110 ? 7% -75.1% 10996 ? 8% numa-vmstat.node0.nr_mapped
79437 ? 4% -81.5% 14675 ? 27% numa-vmstat.node0.nr_shmem
27327 ? 5% -81.8% 4974 ? 33% numa-vmstat.node0.nr_zone_active_anon
58009 ? 3% -61.3% 22426 ? 15% numa-vmstat.node1.nr_active_anon
283317 ? 65% -69.9% 85400 ? 34% numa-vmstat.node1.nr_file_pages
113831 ? 21% -70.7% 33371 ? 73% numa-vmstat.node1.nr_inactive_anon
56885 ? 10% -92.4% 4306 ? 20% numa-vmstat.node1.nr_mapped
146223 ? 4% -74.7% 36961 ? 23% numa-vmstat.node1.nr_shmem
58009 ? 3% -61.3% 22426 ? 15% numa-vmstat.node1.nr_zone_active_anon
113831 ? 21% -70.7% 33371 ? 73% numa-vmstat.node1.nr_zone_inactive_anon
109487 ? 5% -82.8% 18787 ? 34% numa-meminfo.node0.Active
109351 ? 5% -83.0% 18640 ? 34% numa-meminfo.node0.Active(anon)
176140 ? 7% -75.5% 43183 ? 8% numa-meminfo.node0.Mapped
318127 ? 4% -82.6% 55458 ? 28% numa-meminfo.node0.Shmem
232079 ? 3% -62.1% 87960 ? 14% numa-meminfo.node1.Active
232012 ? 3% -62.1% 87885 ? 14% numa-meminfo.node1.Active(anon)
1133114 ? 65% -70.3% 337069 ? 35% numa-meminfo.node1.FilePages
527450 ? 18% -61.9% 201191 ? 48% numa-meminfo.node1.Inactive
455376 ? 21% -71.3% 130653 ? 74% numa-meminfo.node1.Inactive(anon)
227695 ? 8% -92.6% 16935 ? 21% numa-meminfo.node1.Mapped
1945489 ? 41% -44.2% 1086330 ? 23% numa-meminfo.node1.MemUsed
585009 ? 3% -75.5% 143332 ? 23% numa-meminfo.node1.Shmem
85445 ? 2% -68.7% 26735 ? 17% proc-vmstat.nr_active_anon
69278 -2.6% 67508 proc-vmstat.nr_anon_pages
855520 -20.7% 678672 proc-vmstat.nr_file_pages
209826 -56.8% 90688 ? 7% proc-vmstat.nr_inactive_anon
36373 -2.0% 35643 proc-vmstat.nr_inactive_file
101433 ? 3% -84.9% 15269 ? 8% proc-vmstat.nr_mapped
226062 -77.9% 49947 ? 24% proc-vmstat.nr_shmem
29591 -1.9% 29036 proc-vmstat.nr_slab_reclaimable
85445 ? 2% -68.7% 26735 ? 17% proc-vmstat.nr_zone_active_anon
209826 -56.8% 90688 ? 7% proc-vmstat.nr_zone_inactive_anon
36373 -2.0% 35643 proc-vmstat.nr_zone_inactive_file
5684492 -6.6% 5310139 ? 3% proc-vmstat.numa_hit
5604706 -6.7% 5230464 ? 3% proc-vmstat.numa_local
379121 ? 3% -47.2% 200073 ? 26% proc-vmstat.numa_pte_updates
5739362 -5.3% 5436386 ? 3% proc-vmstat.pgalloc_normal
862428 -24.4% 651794 ? 9% proc-vmstat.pgfault
4448641 -1.8% 4368530 proc-vmstat.pgfree
31068 -17.3% 25700 ? 2% proc-vmstat.pgreuse
4.314e+09 +3.6% 4.471e+09 perf-stat.i.branch-instructions
39371378 +13.3% 44623196 perf-stat.i.branch-misses
52626946 +16.6% 61383092 perf-stat.i.cache-misses
2.135e+08 +12.2% 2.396e+08 perf-stat.i.cache-references
1113293 +13.0% 1258353 perf-stat.i.context-switches
6.68 -4.3% 6.39 perf-stat.i.cpi
178513 +17.6% 209921 perf-stat.i.cpu-migrations
5.08e+09 +5.4% 5.354e+09 perf-stat.i.dTLB-loads
1.71e+09 +11.3% 1.903e+09 perf-stat.i.dTLB-stores
8659988 +13.7% 9842498 perf-stat.i.iTLB-load-misses
16746799 +17.1% 19606097 perf-stat.i.iTLB-loads
1.928e+10 +4.6% 2.017e+10 perf-stat.i.instructions
2288 -6.4% 2141 perf-stat.i.instructions-per-iTLB-miss
597.14 ? 3% +18.4% 707.15 ? 4% perf-stat.i.metric.K/sec
128.58 +5.7% 135.95 perf-stat.i.metric.M/sec
18956317 +18.0% 22374047 perf-stat.i.node-load-misses
2072096 +12.1% 2323175 perf-stat.i.node-loads
8938071 +17.8% 10530875 perf-stat.i.node-store-misses
1829617 +15.1% 2105455 perf-stat.i.node-stores
11.07 +7.3% 11.88 perf-stat.overall.MPKI
0.91 +0.1 1.00 perf-stat.overall.branch-miss-rate%
24.66 +1.0 25.62 perf-stat.overall.cache-miss-rate%
6.81 -3.7% 6.56 perf-stat.overall.cpi
2493 -13.6% 2154 perf-stat.overall.cycles-between-cache-misses
2226 -7.9% 2049 perf-stat.overall.instructions-per-iTLB-miss
0.15 +3.8% 0.15 perf-stat.overall.ipc
4.282e+09 +3.5% 4.43e+09 perf-stat.ps.branch-instructions
39068166 +13.1% 44205158 perf-stat.ps.branch-misses
52231496 +16.5% 60824668 perf-stat.ps.cache-misses
2.118e+08 +12.1% 2.374e+08 perf-stat.ps.cache-references
1104945 +12.8% 1246887 perf-stat.ps.context-switches
177170 +17.4% 208015 perf-stat.ps.cpu-migrations
5.042e+09 +5.2% 5.305e+09 perf-stat.ps.dTLB-loads
1.697e+09 +11.1% 1.886e+09 perf-stat.ps.dTLB-stores
8595508 +13.5% 9752538 perf-stat.ps.iTLB-load-misses
16620999 +16.9% 19426903 perf-stat.ps.iTLB-loads
1.913e+10 +4.5% 1.999e+10 perf-stat.ps.instructions
18813745 +17.8% 22171223 perf-stat.ps.node-load-misses
2056347 +12.0% 2302678 perf-stat.ps.node-loads
8871079 +17.6% 10435111 perf-stat.ps.node-store-misses
1816009 +14.9% 2086216 perf-stat.ps.node-stores
2.505e+12 -14.4% 2.145e+12 ? 2% perf-stat.total.instructions
34145 ? 2% -13.8% 29436 ? 3% softirqs.CPU0.SCHED
31043 ? 2% -11.7% 27404 ? 6% softirqs.CPU1.SCHED
30254 -17.4% 24985 ? 2% softirqs.CPU10.SCHED
29933 -15.2% 25371 softirqs.CPU11.SCHED
30730 ? 2% -17.0% 25513 ? 2% softirqs.CPU12.SCHED
29999 -14.3% 25706 ? 4% softirqs.CPU13.SCHED
29912 -15.2% 25375 ? 2% softirqs.CPU14.SCHED
30198 ? 2% -15.4% 25551 ? 2% softirqs.CPU15.SCHED
29840 -15.2% 25297 ? 2% softirqs.CPU16.SCHED
29987 -14.9% 25510 ? 2% softirqs.CPU17.SCHED
29991 -16.3% 25107 ? 2% softirqs.CPU18.SCHED
29822 -15.4% 25222 ? 2% softirqs.CPU19.SCHED
31108 ? 2% -14.4% 26613 ? 5% softirqs.CPU2.SCHED
30054 -15.2% 25486 softirqs.CPU20.SCHED
30324 -16.3% 25387 ? 2% softirqs.CPU21.SCHED
29969 -15.5% 25327 softirqs.CPU22.SCHED
30175 -16.7% 25133 softirqs.CPU23.SCHED
30118 -16.6% 25105 softirqs.CPU24.SCHED
29918 -15.7% 25222 softirqs.CPU25.SCHED
30296 -17.0% 25143 softirqs.CPU26.SCHED
30012 -15.9% 25238 softirqs.CPU27.SCHED
30092 -16.3% 25181 softirqs.CPU28.SCHED
30081 -16.4% 25155 softirqs.CPU29.SCHED
30222 ? 2% -15.7% 25489 ? 2% softirqs.CPU3.SCHED
30021 -16.6% 25032 softirqs.CPU30.SCHED
29786 ? 2% -15.3% 25223 softirqs.CPU31.SCHED
30079 -13.4% 26049 ? 6% softirqs.CPU32.SCHED
30143 -15.6% 25442 ? 2% softirqs.CPU33.SCHED
30203 -16.6% 25200 softirqs.CPU34.SCHED
30174 -16.1% 25330 softirqs.CPU35.SCHED
29785 -15.7% 25109 softirqs.CPU36.SCHED
30023 -15.7% 25294 softirqs.CPU37.SCHED
29865 -15.2% 25323 softirqs.CPU38.SCHED
30165 -16.4% 25209 softirqs.CPU39.SCHED
30772 -16.7% 25624 ? 2% softirqs.CPU4.SCHED
30063 -15.6% 25383 softirqs.CPU40.SCHED
30022 -15.8% 25281 softirqs.CPU41.SCHED
30197 -16.4% 25234 softirqs.CPU42.SCHED
27959 ? 3% -12.8% 24383 ? 5% softirqs.CPU43.SCHED
30338 -17.3% 25090 softirqs.CPU44.SCHED
30336 -17.9% 24895 ? 3% softirqs.CPU45.SCHED
30148 ? 2% -16.6% 25131 ? 2% softirqs.CPU46.SCHED
30044 -15.8% 25283 ? 2% softirqs.CPU47.SCHED
30176 -16.9% 25075 ? 2% softirqs.CPU48.SCHED
30153 -16.1% 25301 ? 2% softirqs.CPU49.SCHED
30694 ? 3% -17.0% 25480 ? 2% softirqs.CPU5.SCHED
30245 -15.8% 25456 ? 2% softirqs.CPU50.SCHED
30007 -16.5% 25067 ? 2% softirqs.CPU51.SCHED
30791 ? 5% -17.2% 25499 ? 2% softirqs.CPU52.SCHED
30122 -16.3% 25207 ? 3% softirqs.CPU53.SCHED
30087 -15.7% 25377 softirqs.CPU54.SCHED
30129 -15.5% 25447 ? 2% softirqs.CPU55.SCHED
30239 -16.7% 25198 ? 2% softirqs.CPU56.SCHED
29958 -15.7% 25257 ? 2% softirqs.CPU57.SCHED
30172 -16.0% 25352 ? 2% softirqs.CPU58.SCHED
30140 -14.7% 25714 ? 3% softirqs.CPU59.SCHED
30144 -15.0% 25635 softirqs.CPU6.SCHED
30008 -15.3% 25419 ? 2% softirqs.CPU60.SCHED
30347 -16.4% 25370 ? 2% softirqs.CPU61.SCHED
30110 -14.8% 25657 softirqs.CPU62.SCHED
30004 -14.9% 25535 ? 2% softirqs.CPU63.SCHED
30970 ? 5% -18.0% 25397 ? 3% softirqs.CPU64.SCHED
29981 -15.0% 25498 ? 2% softirqs.CPU65.SCHED
29652 ? 3% -14.6% 25334 softirqs.CPU66.SCHED
30362 ? 2% -16.0% 25510 ? 2% softirqs.CPU67.SCHED
30238 -16.6% 25227 softirqs.CPU68.SCHED
30257 -16.2% 25344 softirqs.CPU69.SCHED
30073 ? 2% -15.2% 25499 ? 2% softirqs.CPU7.SCHED
30118 -16.0% 25289 softirqs.CPU70.SCHED
30027 -16.2% 25173 softirqs.CPU71.SCHED
30264 -17.2% 25063 softirqs.CPU72.SCHED
30264 -16.7% 25211 softirqs.CPU73.SCHED
30213 -16.5% 25230 softirqs.CPU74.SCHED
29908 ? 3% -15.4% 25307 softirqs.CPU75.SCHED
30410 -16.9% 25281 softirqs.CPU76.SCHED
30241 -16.0% 25411 softirqs.CPU77.SCHED
30264 -16.6% 25233 softirqs.CPU78.SCHED
30244 -16.3% 25315 ? 2% softirqs.CPU79.SCHED
29946 -14.9% 25497 ? 2% softirqs.CPU8.SCHED
30320 -16.5% 25318 softirqs.CPU80.SCHED
30080 -15.6% 25377 ? 2% softirqs.CPU81.SCHED
30221 -16.1% 25363 softirqs.CPU82.SCHED
30485 ? 2% -16.9% 25345 softirqs.CPU83.SCHED
30134 -16.3% 25234 softirqs.CPU84.SCHED
30053 -15.9% 25270 softirqs.CPU85.SCHED
30043 -16.2% 25166 softirqs.CPU86.SCHED
28670 -14.2% 24604 softirqs.CPU87.SCHED
30268 -16.3% 25332 softirqs.CPU9.SCHED
2655637 -15.9% 2234406 softirqs.SCHED
5106788 -6.0% 4799556 interrupts.CAL:Function_call_interrupts
254706 ? 3% -16.3% 213241 interrupts.CPU0.LOC:Local_timer_interrupts
5549 ? 2% -8.2% 5094 ? 3% interrupts.CPU0.RES:Rescheduling_interrupts
254788 ? 3% -16.4% 213107 interrupts.CPU1.LOC:Local_timer_interrupts
254758 ? 3% -16.3% 213108 interrupts.CPU10.LOC:Local_timer_interrupts
254694 ? 3% -16.4% 213047 interrupts.CPU11.LOC:Local_timer_interrupts
254900 ? 3% -16.3% 213307 interrupts.CPU12.LOC:Local_timer_interrupts
254718 ? 3% -16.3% 213128 interrupts.CPU13.LOC:Local_timer_interrupts
254642 ? 3% -16.3% 213089 interrupts.CPU14.LOC:Local_timer_interrupts
254633 ? 3% -16.3% 213119 interrupts.CPU15.LOC:Local_timer_interrupts
254557 ? 3% -16.3% 213151 interrupts.CPU16.LOC:Local_timer_interrupts
254686 ? 3% -16.3% 213079 interrupts.CPU17.LOC:Local_timer_interrupts
254714 ? 3% -16.4% 213054 interrupts.CPU18.LOC:Local_timer_interrupts
254664 ? 3% -16.3% 213203 interrupts.CPU19.LOC:Local_timer_interrupts
254837 ? 3% -16.3% 213174 interrupts.CPU2.LOC:Local_timer_interrupts
254882 ? 3% -16.4% 213148 interrupts.CPU20.LOC:Local_timer_interrupts
254724 ? 3% -16.3% 213112 interrupts.CPU21.LOC:Local_timer_interrupts
57158 -6.4% 53497 ? 5% interrupts.CPU22.CAL:Function_call_interrupts
254225 ? 4% -16.1% 213202 interrupts.CPU22.LOC:Local_timer_interrupts
5992 ? 4% -8.5% 5480 ? 5% interrupts.CPU22.RES:Rescheduling_interrupts
58035 -7.4% 53744 ? 5% interrupts.CPU23.CAL:Function_call_interrupts
254101 ? 4% -16.1% 213115 interrupts.CPU23.LOC:Local_timer_interrupts
58241 -7.9% 53628 ? 5% interrupts.CPU24.CAL:Function_call_interrupts
253922 ? 4% -16.1% 213135 interrupts.CPU24.LOC:Local_timer_interrupts
5918 ? 3% -8.6% 5406 ? 4% interrupts.CPU24.RES:Rescheduling_interrupts
254010 ? 4% -16.1% 213200 interrupts.CPU25.LOC:Local_timer_interrupts
5920 ? 4% -10.1% 5324 ? 2% interrupts.CPU25.RES:Rescheduling_interrupts
254076 ? 4% -16.1% 213120 interrupts.CPU26.LOC:Local_timer_interrupts
254042 ? 4% -16.1% 213105 interrupts.CPU27.LOC:Local_timer_interrupts
254118 ? 4% -16.1% 213204 interrupts.CPU28.LOC:Local_timer_interrupts
6004 ? 4% -12.1% 5278 ? 4% interrupts.CPU28.RES:Rescheduling_interrupts
254055 ? 4% -16.0% 213287 interrupts.CPU29.LOC:Local_timer_interrupts
5897 ? 4% -9.9% 5315 ? 3% interrupts.CPU29.RES:Rescheduling_interrupts
254835 ? 3% -16.4% 213103 interrupts.CPU3.LOC:Local_timer_interrupts
58134 -7.5% 53755 ? 5% interrupts.CPU30.CAL:Function_call_interrupts
254113 ? 4% -16.1% 213130 interrupts.CPU30.LOC:Local_timer_interrupts
5992 ? 5% -9.3% 5435 ? 4% interrupts.CPU30.RES:Rescheduling_interrupts
57281 -6.3% 53656 ? 5% interrupts.CPU31.CAL:Function_call_interrupts
254240 ? 4% -16.2% 213120 interrupts.CPU31.LOC:Local_timer_interrupts
57835 -7.0% 53769 ? 5% interrupts.CPU32.CAL:Function_call_interrupts
254052 ? 4% -16.1% 213142 interrupts.CPU32.LOC:Local_timer_interrupts
253556 ? 4% -15.9% 213167 interrupts.CPU33.LOC:Local_timer_interrupts
58237 -7.3% 53994 ? 5% interrupts.CPU34.CAL:Function_call_interrupts
254196 ? 4% -16.2% 213126 interrupts.CPU34.LOC:Local_timer_interrupts
5910 ? 5% -8.0% 5437 ? 4% interrupts.CPU34.RES:Rescheduling_interrupts
254077 ? 4% -16.1% 213108 interrupts.CPU35.LOC:Local_timer_interrupts
254172 ? 4% -16.2% 212995 interrupts.CPU36.LOC:Local_timer_interrupts
254049 ? 4% -16.2% 213017 interrupts.CPU37.LOC:Local_timer_interrupts
254041 ? 4% -16.1% 213062 interrupts.CPU38.LOC:Local_timer_interrupts
254097 ? 4% -16.1% 213150 interrupts.CPU39.LOC:Local_timer_interrupts
254718 ? 3% -16.3% 213169 interrupts.CPU4.LOC:Local_timer_interrupts
254045 ? 4% -16.1% 213159 interrupts.CPU40.LOC:Local_timer_interrupts
57864 -7.3% 53645 ? 5% interrupts.CPU41.CAL:Function_call_interrupts
254050 ? 4% -16.1% 213094 interrupts.CPU41.LOC:Local_timer_interrupts
254059 ? 4% -16.1% 213123 interrupts.CPU42.LOC:Local_timer_interrupts
254208 ? 4% -16.2% 213137 interrupts.CPU43.LOC:Local_timer_interrupts
5949 ? 3% -7.0% 5530 ? 2% interrupts.CPU43.RES:Rescheduling_interrupts
254697 ? 3% -16.3% 213126 interrupts.CPU44.LOC:Local_timer_interrupts
5526 ? 3% -9.1% 5026 ? 2% interrupts.CPU44.RES:Rescheduling_interrupts
254768 ? 3% -16.3% 213115 interrupts.CPU45.LOC:Local_timer_interrupts
254707 ? 3% -16.3% 213169 interrupts.CPU46.LOC:Local_timer_interrupts
254751 ? 3% -16.3% 213200 interrupts.CPU47.LOC:Local_timer_interrupts
254683 ? 3% -16.3% 213094 interrupts.CPU48.LOC:Local_timer_interrupts
254744 ? 3% -16.3% 213126 interrupts.CPU49.LOC:Local_timer_interrupts
254724 ? 3% -16.3% 213292 interrupts.CPU5.LOC:Local_timer_interrupts
254729 ? 3% -16.3% 213295 interrupts.CPU50.LOC:Local_timer_interrupts
254646 ? 3% -16.3% 213166 interrupts.CPU51.LOC:Local_timer_interrupts
254658 ? 3% -16.3% 213180 interrupts.CPU52.LOC:Local_timer_interrupts
254492 ? 3% -16.2% 213152 interrupts.CPU53.LOC:Local_timer_interrupts
254682 ? 3% -16.3% 213087 interrupts.CPU54.LOC:Local_timer_interrupts
5673 ? 2% -7.9% 5227 interrupts.CPU54.RES:Rescheduling_interrupts
254897 ? 3% -16.4% 213137 interrupts.CPU55.LOC:Local_timer_interrupts
254751 ? 3% -16.3% 213151 interrupts.CPU56.LOC:Local_timer_interrupts
5573 ? 2% -10.7% 4979 interrupts.CPU56.RES:Rescheduling_interrupts
254729 ? 3% -16.3% 213131 interrupts.CPU57.LOC:Local_timer_interrupts
254750 ? 3% -16.4% 213035 interrupts.CPU58.LOC:Local_timer_interrupts
254663 ? 3% -16.3% 213118 interrupts.CPU59.LOC:Local_timer_interrupts
254713 ? 3% -16.3% 213217 interrupts.CPU6.LOC:Local_timer_interrupts
254725 ? 3% -16.3% 213135 interrupts.CPU60.LOC:Local_timer_interrupts
254744 ? 3% -16.4% 213048 interrupts.CPU61.LOC:Local_timer_interrupts
254684 ? 3% -16.4% 213043 interrupts.CPU62.LOC:Local_timer_interrupts
254603 ? 3% -16.3% 213156 interrupts.CPU63.LOC:Local_timer_interrupts
254945 ? 3% -16.4% 213126 interrupts.CPU64.LOC:Local_timer_interrupts
254724 ? 3% -16.3% 213147 interrupts.CPU65.LOC:Local_timer_interrupts
56461 ? 3% -5.8% 53206 ? 5% interrupts.CPU66.CAL:Function_call_interrupts
254226 ? 4% -16.2% 213129 interrupts.CPU66.LOC:Local_timer_interrupts
3759 ? 35% +56.0% 5865 interrupts.CPU66.NMI:Non-maskable_interrupts
3759 ? 35% +56.0% 5865 interrupts.CPU66.PMI:Performance_monitoring_interrupts
254025 ? 4% -16.1% 213181 interrupts.CPU67.LOC:Local_timer_interrupts
254041 ? 4% -16.1% 213113 interrupts.CPU68.LOC:Local_timer_interrupts
254099 ? 4% -16.1% 213124 interrupts.CPU69.LOC:Local_timer_interrupts
254678 ? 3% -16.3% 213143 interrupts.CPU7.LOC:Local_timer_interrupts
57751 -7.2% 53601 ? 5% interrupts.CPU70.CAL:Function_call_interrupts
254074 ? 4% -16.1% 213094 interrupts.CPU70.LOC:Local_timer_interrupts
254142 ? 4% -16.1% 213124 interrupts.CPU71.LOC:Local_timer_interrupts
57704 -7.4% 53407 ? 5% interrupts.CPU72.CAL:Function_call_interrupts
254085 ? 4% -16.1% 213145 interrupts.CPU72.LOC:Local_timer_interrupts
254042 ? 4% -16.1% 213146 interrupts.CPU73.LOC:Local_timer_interrupts
57350 -7.1% 53269 ? 5% interrupts.CPU74.CAL:Function_call_interrupts
254092 ? 4% -16.1% 213160 interrupts.CPU74.LOC:Local_timer_interrupts
56960 ? 3% -6.6% 53196 ? 5% interrupts.CPU75.CAL:Function_call_interrupts
254212 ? 4% -16.2% 213140 interrupts.CPU75.LOC:Local_timer_interrupts
57787 -8.1% 53122 ? 5% interrupts.CPU76.CAL:Function_call_interrupts
254041 ? 4% -16.1% 213107 interrupts.CPU76.LOC:Local_timer_interrupts
57694 -7.5% 53373 ? 5% interrupts.CPU77.CAL:Function_call_interrupts
253523 ? 4% -15.9% 213162 interrupts.CPU77.LOC:Local_timer_interrupts
57702 -7.0% 53665 ? 5% interrupts.CPU78.CAL:Function_call_interrupts
254071 ? 4% -16.1% 213119 interrupts.CPU78.LOC:Local_timer_interrupts
57718 -7.3% 53526 ? 5% interrupts.CPU79.CAL:Function_call_interrupts
254050 ? 4% -16.1% 213125 interrupts.CPU79.LOC:Local_timer_interrupts
254684 ? 3% -16.3% 213174 interrupts.CPU8.LOC:Local_timer_interrupts
55672 -8.4% 51021 ? 5% interrupts.CPU80.CAL:Function_call_interrupts
254106 ? 4% -16.1% 213133 interrupts.CPU80.LOC:Local_timer_interrupts
57747 -7.9% 53205 ? 4% interrupts.CPU81.CAL:Function_call_interrupts
254236 ? 4% -16.2% 213139 interrupts.CPU81.LOC:Local_timer_interrupts
5911 ? 5% -9.5% 5352 ? 2% interrupts.CPU81.RES:Rescheduling_interrupts
56980 -7.0% 53004 ? 5% interrupts.CPU82.CAL:Function_call_interrupts
254077 ? 4% -16.1% 213101 interrupts.CPU82.LOC:Local_timer_interrupts
57620 -7.2% 53453 ? 5% interrupts.CPU83.CAL:Function_call_interrupts
254028 ? 4% -16.0% 213294 interrupts.CPU83.LOC:Local_timer_interrupts
254047 ? 4% -16.1% 213174 interrupts.CPU84.LOC:Local_timer_interrupts
5910 ? 3% -9.9% 5327 ? 3% interrupts.CPU84.RES:Rescheduling_interrupts
57302 -7.3% 53135 ? 5% interrupts.CPU85.CAL:Function_call_interrupts
253971 ? 4% -16.1% 213150 interrupts.CPU85.LOC:Local_timer_interrupts
57562 -7.2% 53418 ? 5% interrupts.CPU86.CAL:Function_call_interrupts
254074 ? 4% -16.1% 213158 interrupts.CPU86.LOC:Local_timer_interrupts
5935 ? 3% -9.7% 5360 ? 2% interrupts.CPU86.RES:Rescheduling_interrupts
254345 ? 4% -16.2% 213154 interrupts.CPU87.LOC:Local_timer_interrupts
254464 ? 3% -16.2% 213199 interrupts.CPU9.LOC:Local_timer_interrupts
22386636 ? 4% -16.2% 18756407 interrupts.LOC:Local_timer_interrupts
0.04 ? 2% -41.6% 0.02 ? 71% perf-sched.sch_delay.avg.ms.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
0.02 ? 20% -55.5% 0.01 ? 77% perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.02 ? 15% -61.8% 0.01 ? 71% perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
0.07 ? 34% -72.0% 0.02 ?117% perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown]
0.16 ? 33% -72.4% 0.04 ? 90% perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
0.11 ? 18% -72.3% 0.03 ? 74% perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.09 ? 2% -62.2% 0.03 ? 70% perf-sched.sch_delay.avg.ms.io_schedule.wait_on_page_bit.wait_on_page_writeback.__filemap_fdatawait_range
0.08 -46.6% 0.04 ? 70% perf-sched.sch_delay.avg.ms.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio
0.01 ? 15% -48.2% 0.00 ? 73% perf-sched.sch_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
0.01 ? 10% -51.7% 0.01 ? 71% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__flush_work.xlog_cil_force_lsn.xfs_log_force_lsn
0.04 ? 12% -59.5% 0.02 ? 73% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_buffered_write_iomap_begin
0.12 ? 74% -93.5% 0.01 ?151% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_file_buffered_write
0.04 ? 14% -70.9% 0.01 ? 86% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_vn_update_time
0.10 ? 36% -84.6% 0.02 ?117% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_vn_update_time
0.14 ? 56% -88.0% 0.02 ?190% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xlog_ticket_alloc.xfs_log_reserve
0.13 ? 32% -55.5% 0.06 ? 91% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.mempool_alloc.md_submit_bio.submit_bio_noacct
0.26 ? 2% -60.1% 0.11 ? 70% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.process_one_work.worker_thread.kthread
0.10 ? 19% -70.0% 0.03 ? 72% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.migrate_task_to.task_numa_migrate
0.03 ? 2% -49.0% 0.02 ? 70% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.submit_bio_checks.submit_bio_noacct.submit_bio
0.12 ? 6% -75.0% 0.03 ? 72% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.wait_for_completion.__flush_work.xlog_cil_force_lsn
0.05 ? 23% -69.4% 0.02 ? 79% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.wait_for_completion.stop_two_cpus.migrate_swap
0.37 ? 2% -48.7% 0.19 ? 70% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.wait_for_completion_io.submit_bio_wait.blkdev_issue_flush
0.05 ? 27% -58.5% 0.02 ?102% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.write_cache_pages.iomap_writepages.xfs_vm_writepages
0.03 ? 19% -57.1% 0.01 ? 75% perf-sched.sch_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork
0.06 ? 18% -69.8% 0.02 ? 89% perf-sched.sch_delay.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_trans_roll
0.19 -60.5% 0.07 ? 70% perf-sched.sch_delay.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time
0.06 ? 8% -60.7% 0.02 ? 76% perf-sched.sch_delay.avg.ms.rwsem_down_write_slowpath.xlog_cil_push_work.process_one_work.worker_thread
0.05 ? 7% -72.4% 0.01 ? 73% perf-sched.sch_delay.avg.ms.schedule_timeout.__down.down.xfs_buf_lock
0.01 ?126% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.__down.down.xlog_write_iclog
0.04 ? 52% -66.7% 0.01 ? 87% perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
0.03 ? 15% -53.0% 0.01 ? 71% perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
0.06 -51.4% 0.03 ? 70% perf-sched.sch_delay.avg.ms.schedule_timeout.wait_for_completion.__flush_work.xlog_cil_force_lsn
0.07 ?129% -80.2% 0.01 ? 72% perf-sched.sch_delay.avg.ms.schedule_timeout.xfsaild.kthread.ret_from_fork
0.03 ? 25% -59.1% 0.01 ? 71% perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
0.02 ? 2% -46.4% 0.01 ? 70% perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork
0.02 ? 2% -52.9% 0.01 ? 70% perf-sched.sch_delay.avg.ms.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
0.00 ? 21% -100.0% 0.00 perf-sched.sch_delay.avg.ms.xlog_state_get_iclog_space.xlog_write.xlog_cil_push_work.process_one_work
0.11 ? 2% -74.2% 0.03 ? 71% perf-sched.sch_delay.avg.ms.xlog_wait_on_iclog.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
0.05 -41.9% 0.03 ? 70% perf-sched.sch_delay.avg.ms.xlog_wait_on_iclog.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync
0.45 ? 62% -67.5% 0.14 ?149% perf-sched.sch_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
1.71 ? 36% -76.7% 0.40 ? 91% perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown]
2.01 ? 21% -64.5% 0.71 ?101% perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
2.45 ? 15% -58.6% 1.02 ? 75% perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.19 ? 19% -48.4% 3.71 ? 72% perf-sched.sch_delay.max.ms.io_schedule.wait_on_page_bit.wait_on_page_writeback.__filemap_fdatawait_range
1.48 ? 46% -85.5% 0.22 ? 82% perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
0.18 ?179% -90.8% 0.02 ? 83% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__filemap_fdatawait_range.file_write_and_wait_range.xfs_file_fsync
1.45 ? 51% -77.3% 0.33 ?139% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_read.xfs_log_commit_cil.__xfs_trans_commit
0.98 ? 63% -96.5% 0.03 ?180% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_file_buffered_write
1.43 ? 38% -84.0% 0.23 ?114% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_vn_update_time
2.42 ? 36% -62.9% 0.90 ? 76% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.mempool_alloc.md_submit_bio.submit_bio_noacct
1.71 ? 19% -63.7% 0.62 ? 77% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin
2.38 ? 54% -66.3% 0.80 ? 70% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.process_one_work.worker_thread.kthread
0.97 ?116% -98.7% 0.01 ?115% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.vfs_write.ksys_write.do_syscall_64
1.66 ? 39% -65.5% 0.57 ? 76% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.wait_for_completion.stop_two_cpus.migrate_swap
9.25 ? 15% -43.3% 5.24 ? 71% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.wait_for_completion_io.submit_bio_wait.blkdev_issue_flush
1.67 ? 23% -64.8% 0.59 ? 95% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.write_cache_pages.iomap_writepages.xfs_vm_writepages
1.23 ? 60% -93.6% 0.08 ?214% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.xfs_trans_alloc.xfs_vn_update_time.file_update_time
1.24 ? 18% -67.9% 0.40 ? 93% perf-sched.sch_delay.max.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_trans_roll
5.75 ? 33% -60.3% 2.28 ? 77% perf-sched.sch_delay.max.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time
1.24 ? 18% -62.4% 0.47 ? 94% perf-sched.sch_delay.max.ms.rwsem_down_write_slowpath.xlog_cil_push_work.process_one_work.worker_thread
0.08 ?147% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.__down.down.xlog_write_iclog
0.01 ? 69% -100.0% 0.00 perf-sched.sch_delay.max.ms.xlog_state_get_iclog_space.xlog_write.xlog_cil_push_work.process_one_work
0.06 -50.0% 0.03 ? 70% perf-sched.total_sch_delay.average.ms
1.47 -41.6% 0.86 ? 70% perf-sched.total_wait_and_delay.average.ms
4177172 -52.5% 1983450 ? 75% perf-sched.total_wait_and_delay.count.ms
9084 ? 4% -54.1% 4167 ? 76% perf-sched.total_wait_and_delay.max.ms
1.41 -41.3% 0.83 ? 70% perf-sched.total_wait_time.average.ms
9084 ? 4% -54.1% 4167 ? 76% perf-sched.total_wait_time.max.ms
875.96 ? 2% -38.2% 541.19 ? 71% perf-sched.wait_and_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
745.97 ? 7% -51.8% 359.59 ? 72% perf-sched.wait_and_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
268.60 ? 3% -75.4% 66.08 ? 72% perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.35 ± 2% -64.2% 0.12 ± 70% perf-sched.wait_and_delay.avg.ms.io_schedule.wait_on_page_bit.wait_on_page_writeback.__filemap_fdatawait_range
2.33 ± 12% -49.5% 1.17 ± 70% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.process_one_work.worker_thread.kthread
6.29 ± 25% -63.3% 2.31 ± 72% perf-sched.wait_and_delay.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_iomap_write_unwritten
4.41 ± 62% -81.5% 0.82 ±102% perf-sched.wait_and_delay.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_trans_roll
0.55 -62.1% 0.21 ± 70% perf-sched.wait_and_delay.avg.ms.schedule_timeout.wait_for_completion.__flush_work.xlog_cil_force_lsn
481.36 -42.8% 275.22 ± 70% perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
4.29 ± 2% -49.4% 2.17 ± 70% perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork
0.92 -41.7% 0.54 ± 70% perf-sched.wait_and_delay.avg.ms.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
0.98 -81.4% 0.18 ± 70% perf-sched.wait_and_delay.avg.ms.xlog_wait_on_iclog.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
20.17 -54.5% 9.17 ± 73% perf-sched.wait_and_delay.count.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
243371 -91.5% 20675 ± 75% perf-sched.wait_and_delay.count.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
202463 -51.9% 97384 ± 77% perf-sched.wait_and_delay.count.io_schedule.wait_on_page_bit.wait_on_page_writeback.__filemap_fdatawait_range
798137 -48.2% 413471 ± 75% perf-sched.wait_and_delay.count.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio
1185 ± 19% -57.5% 503.67 ± 71% perf-sched.wait_and_delay.count.pipe_read.new_sync_read.vfs_read.ksys_read
31.33 ± 23% -62.8% 11.67 ± 81% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
7442 -72.1% 2078 ± 75% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.process_one_work.worker_thread.kthread
969.17 -50.4% 481.00 ± 74% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr
391.50 ± 41% -64.3% 139.83 ± 72% perf-sched.wait_and_delay.count.rcu_gp_kthread.kthread.ret_from_fork
1080 ± 2% -62.6% 404.50 ± 78% perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
39.50 -55.3% 17.67 ± 75% perf-sched.wait_and_delay.count.schedule_timeout.kcompactd.kthread.ret_from_fork
1203696 -47.8% 627784 ± 76% perf-sched.wait_and_delay.count.schedule_timeout.wait_for_completion.__flush_work.xlog_cil_force_lsn
207.50 -54.9% 93.67 ± 74% perf-sched.wait_and_delay.count.schedule_timeout.xfsaild.kthread.ret_from_fork
2062 -48.9% 1053 ± 75% perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork
512183 -55.4% 228429 ± 76% perf-sched.wait_and_delay.count.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
239503 -91.5% 20401 ± 75% perf-sched.wait_and_delay.count.xlog_wait_on_iclog.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
40.69 ± 34% -60.3% 16.17 ± 72% perf-sched.wait_and_delay.max.ms.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio
994.02 ±114% -95.7% 42.55 ±114% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.process_one_work.worker_thread.kthread
19.19 ± 12% -41.7% 11.18 ± 71% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.wait_for_completion_io.submit_bio_wait.blkdev_issue_flush
213.56 ±188% -93.6% 13.74 ±102% perf-sched.wait_and_delay.max.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_trans_roll
8509 ± 14% -58.7% 3515 ± 75% perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
8266 ± 10% -60.4% 3272 ± 79% perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork
9.23 ± 38% -65.6% 3.17 ± 80% perf-sched.wait_and_delay.max.ms.xlog_wait_on_iclog.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
875.91 ± 2% -38.2% 541.17 ± 71% perf-sched.wait_time.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
745.93 ± 7% -51.8% 359.58 ± 72% perf-sched.wait_time.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
268.58 ± 3% -75.4% 66.08 ± 72% perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.22 ± 41% -63.4% 0.08 ± 85% perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown]
0.40 ± 35% -69.9% 0.12 ± 72% perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
0.26 ± 2% -65.0% 0.09 ± 70% perf-sched.wait_time.avg.ms.io_schedule.wait_on_page_bit.wait_on_page_writeback.__filemap_fdatawait_range
0.43 ±105% -89.4% 0.05 ±223% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__alloc_pages.pagecache_get_page.grab_cache_page_write_begin
0.72 ± 2% -59.4% 0.29 ± 70% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__flush_work.xlog_cil_force_lsn.xfs_log_force_lsn
0.22 ± 24% -65.3% 0.08 ± 73% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.kmem_alloc_large
2.84 ± 81% -88.3% 0.33 ±223% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.17 ± 16% -52.7% 0.08 ± 76% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_read.xfs_file_fsync.xfs_file_buffered_write
0.18 ± 34% -70.0% 0.05 ± 96% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_read.xfs_log_commit_cil.__xfs_trans_commit
0.14 ± 10% -56.0% 0.06 ± 71% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_buffered_write_iomap_begin
0.24 ± 47% -83.1% 0.04 ± 92% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_file_buffered_write
0.18 ± 11% -62.3% 0.07 ± 73% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_vn_update_time
0.25 ± 21% -68.2% 0.08 ± 77% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.iomap_write_actor.iomap_apply.iomap_file_buffered_write
0.23 ± 15% -73.2% 0.06 ± 74% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_vn_update_time
2.06 ± 13% -48.2% 1.07 ± 70% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.process_one_work.worker_thread.kthread
0.85 ± 11% -74.4% 0.22 ± 79% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.migrate_task_to.task_numa_migrate
0.02 ± 44% -75.0% 0.00 ± 98% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve
0.33 ± 4% -60.9% 0.13 ± 70% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.wait_for_completion.__flush_work.xlog_cil_force_lsn
0.63 ± 20% -74.4% 0.16 ± 75% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.wait_for_completion.stop_two_cpus.migrate_swap
0.15 ± 10% -59.8% 0.06 ± 79% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.write_cache_pages.iomap_writepages.xfs_vm_writepages
0.18 ± 14% -60.2% 0.07 ± 73% perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_free_eofblocks
6.29 ± 25% -63.4% 2.30 ± 72% perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_iomap_write_unwritten
0.39 ± 15% -58.1% 0.16 ± 73% perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_remove
4.35 ± 62% -77.7% 0.97 ± 76% perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_trans_roll
0.50 ± 3% -61.7% 0.19 ± 71% perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time
0.54 ± 34% -81.3% 0.10 ±101% perf-sched.wait_time.avg.ms.rwsem_down_write_slowpath.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.74 ± 6% -45.7% 0.40 ± 71% perf-sched.wait_time.avg.ms.rwsem_down_write_slowpath.xlog_cil_push_work.process_one_work.worker_thread
0.60 ± 11% -51.4% 0.29 ± 71% perf-sched.wait_time.avg.ms.schedule_timeout.__down.down.xfs_buf_lock
0.24 ± 34% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.__down.down.xlog_write_iclog
0.08 ± 8% -43.0% 0.04 ± 71% perf-sched.wait_time.avg.ms.schedule_timeout.io_schedule_timeout.wait_for_completion_io.submit_bio_wait
0.49 -63.5% 0.18 ± 70% perf-sched.wait_time.avg.ms.schedule_timeout.wait_for_completion.__flush_work.xlog_cil_force_lsn
481.33 -42.8% 275.20 ± 70% perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
4.27 ± 2% -49.4% 2.16 ± 70% perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork
0.91 -41.5% 0.53 ± 70% perf-sched.wait_time.avg.ms.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
0.13 ±142% -100.0% 0.00 perf-sched.wait_time.avg.ms.xlog_state_get_iclog_space.xlog_write.xlog_cil_push_work.process_one_work
0.88 -82.3% 0.16 ± 70% perf-sched.wait_time.avg.ms.xlog_wait_on_iclog.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
4.37 ± 73% -76.6% 1.02 ± 78% perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
6.89 ± 8% -40.0% 4.13 ± 70% perf-sched.wait_time.max.ms.io_schedule.wait_on_page_bit.wait_on_page_writeback.__filemap_fdatawait_range
40.68 ± 34% -61.8% 15.56 ± 72% perf-sched.wait_time.max.ms.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio
0.51 ± 93% -91.0% 0.05 ±223% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__alloc_pages.pagecache_get_page.grab_cache_page_write_begin
4.01 ± 6% -69.9% 1.21 ± 71% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__flush_work.xlog_cil_force_lsn.xfs_log_force_lsn
4.25 ± 58% -75.6% 1.04 ± 91% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.kmem_alloc_large
2.07 ± 39% -75.5% 0.51 ±126% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_read.xfs_log_commit_cil.__xfs_trans_commit
1.40 ± 51% -93.8% 0.09 ± 87% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_file_buffered_write
2.26 ± 40% -53.5% 1.05 ± 74% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_vn_update_time
2.05 ± 35% -70.2% 0.61 ± 93% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_vn_update_time
2.40 ± 19% -60.3% 0.96 ± 89% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin
993.73 ±114% -95.7% 42.53 ±114% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.process_one_work.worker_thread.kthread
0.80 ± 43% -81.5% 0.15 ± 86% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve
8.63 ± 37% -66.6% 2.88 ± 76% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.submit_bio_checks.submit_bio_noacct.submit_bio
10.48 ± 15% -41.2% 6.16 ± 71% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.wait_for_completion_io.submit_bio_wait.blkdev_issue_flush
2.49 ± 25% -69.2% 0.77 ± 85% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.write_cache_pages.iomap_writepages.xfs_vm_writepages
1.62 ± 56% -88.6% 0.18 ±170% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.xfs_trans_alloc.xfs_vn_update_time.file_update_time
1.64 ± 66% -68.6% 0.51 ± 83% perf-sched.wait_time.max.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_free_eofblocks
1.95 ± 13% -45.2% 1.07 ± 72% perf-sched.wait_time.max.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_remove
213.55 ±188% -92.1% 16.91 ± 74% perf-sched.wait_time.max.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_trans_roll
5.68 ± 5% -61.1% 2.21 ± 72% perf-sched.wait_time.max.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time
1.98 ± 52% -74.7% 0.50 ±115% perf-sched.wait_time.max.ms.rwsem_down_write_slowpath.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
33.04 ± 17% -58.1% 13.83 ± 73% perf-sched.wait_time.max.ms.schedule_timeout.__down.down.xfs_buf_lock
0.94 ± 30% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.__down.down.xlog_write_iclog
8509 ± 14% -58.7% 3515 ± 75% perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork
8266 ± 10% -60.4% 3272 ± 79% perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork
1.26 ±202% -100.0% 0.00 perf-sched.wait_time.max.ms.xlog_state_get_iclog_space.xlog_write.xlog_cil_push_work.process_one_work
5.12 ± 32% -69.6% 1.55 ± 72% perf-sched.wait_time.max.ms.xlog_wait_on_iclog.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
10.63 -9.8 0.79 ± 4% perf-profile.calltrace.cycles-pp.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write.new_sync_write.vfs_write
6.84 -6.8 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write.new_sync_write
6.82 -6.8 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
3.75 -3.5 0.25 ±100% perf-profile.calltrace.cycles-pp.xlog_wait_on_iclog.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write.new_sync_write
68.04 -1.5 66.52 perf-profile.calltrace.cycles-pp.xfs_file_fsync.xfs_file_buffered_write.new_sync_write.vfs_write.ksys_write
70.66 -1.4 69.28 perf-profile.calltrace.cycles-pp.xfs_file_buffered_write.new_sync_write.vfs_write.ksys_write.do_syscall_64
70.68 -1.4 69.31 perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
70.76 -1.4 69.40 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
70.77 -1.4 69.41 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
70.85 -1.3 69.50 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
70.84 -1.3 69.49 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
71.01 -1.3 69.68 perf-profile.calltrace.cycles-pp.write
0.81 -0.1 0.72 perf-profile.calltrace.cycles-pp.xlog_ioend_work.process_one_work.worker_thread.kthread.ret_from_fork
1.01 -0.1 0.94 perf-profile.calltrace.cycles-pp.xlog_cil_push_work.process_one_work.worker_thread.kthread.ret_from_fork
0.99 -0.0 0.94 ± 2% perf-profile.calltrace.cycles-pp.xfs_end_io.process_one_work.worker_thread.kthread.ret_from_fork
0.99 -0.0 0.94 ± 2% perf-profile.calltrace.cycles-pp.xfs_end_ioend.xfs_end_io.process_one_work.worker_thread.kthread
0.73 -0.0 0.70 perf-profile.calltrace.cycles-pp.xfs_iomap_write_unwritten.xfs_end_ioend.xfs_end_io.process_one_work.worker_thread
0.62 +0.1 0.67 perf-profile.calltrace.cycles-pp.xlog_state_do_callback.xlog_ioend_work.process_one_work.worker_thread.kthread
2.70 +0.1 2.78 perf-profile.calltrace.cycles-pp.__flush_work.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
0.57 +0.1 0.66 perf-profile.calltrace.cycles-pp.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write.vfs_write
1.16 ± 3% +0.1 1.26 ± 4% perf-profile.calltrace.cycles-pp.submit_bio_noacct.submit_bio.iomap_submit_ioend.xfs_vm_writepages.do_writepages
0.57 +0.1 0.67 perf-profile.calltrace.cycles-pp.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write.vfs_write.ksys_write
1.19 ± 2% +0.1 1.30 ± 4% perf-profile.calltrace.cycles-pp.submit_bio.iomap_submit_ioend.xfs_vm_writepages.do_writepages.__filemap_fdatawrite_range
1.20 ± 3% +0.1 1.30 ± 4% perf-profile.calltrace.cycles-pp.iomap_submit_ioend.xfs_vm_writepages.do_writepages.__filemap_fdatawrite_range.file_write_and_wait_range
0.53 +0.1 0.66 perf-profile.calltrace.cycles-pp.complete.process_one_work.worker_thread.kthread.ret_from_fork
0.43 ± 44% +0.1 0.56 ± 2% perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.72 +0.1 0.86 perf-profile.calltrace.cycles-pp.md_submit_flush_data.process_one_work.worker_thread.kthread.ret_from_fork
4.29 +0.1 4.43 perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
0.43 ± 44% +0.2 0.61 perf-profile.calltrace.cycles-pp.wait_for_completion.__flush_work.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync
4.55 +0.2 4.73 perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
4.55 +0.2 4.74 perf-profile.calltrace.cycles-pp.ret_from_fork
4.55 +0.2 4.74 perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.17 ±141% +0.4 0.54 ± 2% perf-profile.calltrace.cycles-pp.__schedule.schedule_idle.do_idle.cpu_startup_entry.start_secondary
0.00 +0.5 0.52 perf-profile.calltrace.cycles-pp.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write
0.00 +0.6 0.57 perf-profile.calltrace.cycles-pp.prepare_to_wait_event.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio
0.00 +0.6 0.58 perf-profile.calltrace.cycles-pp.try_to_wake_up.swake_up_locked.complete.process_one_work.worker_thread
0.00 +0.6 0.61 perf-profile.calltrace.cycles-pp.swake_up_locked.complete.process_one_work.worker_thread.kthread
21.57 +0.9 22.48 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
9.74 +0.9 10.65 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.remove_wait_queue.xlog_wait_on_iclog.__xfs_log_force_lsn
22.16 +0.9 23.09 perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
22.12 +0.9 23.06 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
23.70 +1.1 24.79 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
23.72 +1.1 24.80 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
24.02 +1.1 25.10 perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
23.72 +1.1 24.80 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
21.21 +1.9 23.15 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.md_flush_request.raid0_make_request.md_handle_request
21.30 +2.0 23.26 perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio
13.79 +2.1 15.86 perf-profile.calltrace.cycles-pp.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write.new_sync_write
22.35 +2.2 24.51 perf-profile.calltrace.cycles-pp.md_flush_request.raid0_make_request.md_handle_request.md_submit_bio.submit_bio_noacct
22.38 +2.2 24.55 perf-profile.calltrace.cycles-pp.raid0_make_request.md_handle_request.md_submit_bio.submit_bio_noacct.submit_bio
22.44 +2.2 24.62 perf-profile.calltrace.cycles-pp.md_handle_request.md_submit_bio.submit_bio_noacct.submit_bio.submit_bio_wait
22.50 +2.2 24.69 perf-profile.calltrace.cycles-pp.md_submit_bio.submit_bio_noacct.submit_bio.submit_bio_wait.blkdev_issue_flush
22.55 +2.2 24.75 perf-profile.calltrace.cycles-pp.submit_bio_noacct.submit_bio.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync
22.55 +2.2 24.75 perf-profile.calltrace.cycles-pp.submit_bio.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync.xfs_file_buffered_write
22.58 +2.2 24.79 perf-profile.calltrace.cycles-pp.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync.xfs_file_buffered_write.new_sync_write
22.60 +2.2 24.81 perf-profile.calltrace.cycles-pp.blkdev_issue_flush.xfs_file_fsync.xfs_file_buffered_write.new_sync_write.vfs_write
6.57 ± 2% +3.6 10.18 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync
6.66 ± 2% +3.6 10.29 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
18.75 +3.9 22.64 perf-profile.calltrace.cycles-pp.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write.new_sync_write
6.16 +4.5 10.68 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.remove_wait_queue.xlog_wait_on_iclog.__xfs_log_force_lsn.xfs_log_force_lsn
6.18 +4.5 10.71 perf-profile.calltrace.cycles-pp.remove_wait_queue.xlog_wait_on_iclog.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync
6.37 +4.7 11.04 perf-profile.calltrace.cycles-pp.xlog_wait_on_iclog.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write
32.56 +6.0 38.51 perf-profile.calltrace.cycles-pp.xfs_log_force_lsn.xfs_file_fsync.xfs_file_buffered_write.new_sync_write.vfs_write
24.42 -7.8 16.65 perf-profile.children.cycles-pp.__xfs_log_force_lsn
19.82 -3.2 16.59 perf-profile.children.cycles-pp._raw_spin_lock
66.00 -2.2 63.76 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
68.04 -1.5 66.52 perf-profile.children.cycles-pp.xfs_file_fsync
70.66 -1.4 69.28 perf-profile.children.cycles-pp.xfs_file_buffered_write
70.68 -1.4 69.31 perf-profile.children.cycles-pp.new_sync_write
70.76 -1.4 69.40 perf-profile.children.cycles-pp.vfs_write
70.77 -1.4 69.42 perf-profile.children.cycles-pp.ksys_write
71.04 -1.3 69.72 perf-profile.children.cycles-pp.write
71.02 -1.3 69.70 perf-profile.children.cycles-pp.do_syscall_64
71.03 -1.3 69.71 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
21.63 -0.9 20.69 perf-profile.children.cycles-pp.remove_wait_queue
23.54 -0.9 22.62 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.62 -0.2 0.43 ? 2% perf-profile.children.cycles-pp.xlog_write
0.19 ± 2% -0.1 0.05 ± 8% perf-profile.children.cycles-pp.xlog_state_done_syncing
0.39 ± 2% -0.1 0.29 perf-profile.children.cycles-pp.xlog_state_release_iclog
0.81 -0.1 0.72 perf-profile.children.cycles-pp.xlog_ioend_work
1.01 -0.1 0.94 perf-profile.children.cycles-pp.xlog_cil_push_work
0.13 ± 8% -0.1 0.07 ± 7% perf-profile.children.cycles-pp.xlog_state_get_iclog_space
0.99 -0.0 0.94 ± 2% perf-profile.children.cycles-pp.xfs_end_io
0.99 -0.0 0.94 ± 2% perf-profile.children.cycles-pp.xfs_end_ioend
0.73 -0.0 0.70 perf-profile.children.cycles-pp.xfs_iomap_write_unwritten
0.05 +0.0 0.06 perf-profile.children.cycles-pp.sync_disk_rw
0.05 +0.0 0.06 ± 6% perf-profile.children.cycles-pp.__radix_tree_lookup
0.08 +0.0 0.09 ± 4% perf-profile.children.cycles-pp.brd_lookup_page
0.08 ± 4% +0.0 0.09 perf-profile.children.cycles-pp.perf_tp_event
0.09 +0.0 0.10 ± 4% perf-profile.children.cycles-pp.__smp_call_single_queue
0.09 +0.0 0.10 ± 4% perf-profile.children.cycles-pp.llist_add_batch
0.07 +0.0 0.08 ± 5% perf-profile.children.cycles-pp.llist_reverse_order
0.05 ± 8% +0.0 0.07 ± 7% perf-profile.children.cycles-pp.xfs_map_blocks
0.05 ± 8% +0.0 0.07 ± 7% perf-profile.children.cycles-pp.__dentry_kill
0.09 ± 4% +0.0 0.10 ± 3% perf-profile.children.cycles-pp.xfs_btree_lookup
0.06 +0.0 0.07 ± 6% perf-profile.children.cycles-pp.mempool_alloc
0.27 ± 2% +0.0 0.29 ± 2% perf-profile.children.cycles-pp.pick_next_task_fair
0.11 ± 6% +0.0 0.12 ± 4% perf-profile.children.cycles-pp.set_task_cpu
0.09 ± 5% +0.0 0.11 ± 7% perf-profile.children.cycles-pp.finish_task_switch
0.09 +0.0 0.11 ± 4% perf-profile.children.cycles-pp.iomap_set_range_uptodate
0.06 ± 6% +0.0 0.08 ± 8% perf-profile.children.cycles-pp.xfs_iextents_copy
0.08 ± 4% +0.0 0.10 ± 3% perf-profile.children.cycles-pp.llseek
0.08 ± 4% +0.0 0.10 ± 5% perf-profile.children.cycles-pp.migrate_task_rq_fair
0.10 ± 4% +0.0 0.12 ± 4% perf-profile.children.cycles-pp.xfs_buffered_write_iomap_begin
0.04 ± 44% +0.0 0.06 perf-profile.children.cycles-pp.___might_sleep
0.13 ± 2% +0.0 0.15 ± 5% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.09 ± 5% +0.0 0.11 ± 6% perf-profile.children.cycles-pp.queue_work_on
0.12 ± 5% +0.0 0.14 ± 4% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.16 ± 3% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.perf_trace_sched_wakeup_template
0.09 ± 6% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.down_read
0.08 ± 6% +0.0 0.10 ± 5% perf-profile.children.cycles-pp.xfs_inode_item_format_data_fork
0.11 ± 3% +0.0 0.14 ± 5% perf-profile.children.cycles-pp.submit_bio_checks
0.19 ± 3% +0.0 0.21 ± 4% perf-profile.children.cycles-pp.xfs_bmap_add_extent_unwritten_real
0.16 ± 3% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.update_cfs_group
0.11 ± 4% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.xfs_trans_committed_bulk
0.19 ± 3% +0.0 0.22 ± 4% perf-profile.children.cycles-pp.xfs_bmapi_convert_unwritten
1.20 +0.0 1.22 perf-profile.children.cycles-pp.__wake_up_common
0.16 ± 3% +0.0 0.18 ± 4% perf-profile.children.cycles-pp.iomap_write_end
0.20 ± 4% +0.0 0.23 ± 2% perf-profile.children.cycles-pp.ttwu_queue_wakelist
0.23 ± 3% +0.0 0.26 ± 3% perf-profile.children.cycles-pp.xfs_bmapi_write
0.17 ± 4% +0.0 0.20 ± 5% perf-profile.children.cycles-pp.iomap_write_begin
0.26 +0.0 0.30 ± 3% perf-profile.children.cycles-pp.available_idle_cpu
0.30 ± 3% +0.0 0.33 ± 3% perf-profile.children.cycles-pp.select_idle_cpu
0.36 +0.0 0.40 ± 3% perf-profile.children.cycles-pp.sched_ttwu_pending
0.17 ± 2% +0.0 0.20 ± 2% perf-profile.children.cycles-pp.xfs_inode_item_format
0.53 +0.0 0.57 perf-profile.children.cycles-pp.select_task_rq_fair
0.16 ± 3% +0.0 0.20 ± 3% perf-profile.children.cycles-pp.poll_idle
0.38 +0.0 0.43 perf-profile.children.cycles-pp.xlog_state_clean_iclog
0.16 ± 2% +0.0 0.21 ± 3% perf-profile.children.cycles-pp.xlog_cil_process_committed
0.16 ± 2% +0.0 0.21 ± 3% perf-profile.children.cycles-pp.xlog_cil_committed
0.40 ± 2% +0.0 0.45 ± 2% perf-profile.children.cycles-pp.select_idle_sibling
0.44 +0.0 0.50 ± 2% perf-profile.children.cycles-pp.enqueue_entity
0.00 +0.1 0.05 perf-profile.children.cycles-pp.__kmalloc
0.44 ± 2% +0.1 0.48 ± 2% perf-profile.children.cycles-pp.flush_smp_call_function_from_idle
0.60 +0.1 0.65 perf-profile.children.cycles-pp.ttwu_do_activate
0.57 +0.1 0.62 ± 2% perf-profile.children.cycles-pp.enqueue_task_fair
0.00 +0.1 0.05 ± 7% perf-profile.children.cycles-pp.submit_flushes
0.52 ± 2% +0.1 0.57 ± 2% perf-profile.children.cycles-pp.schedule_idle
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.bio_alloc_bioset
0.62 +0.1 0.67 perf-profile.children.cycles-pp.xlog_state_do_callback
0.40 ± 2% +0.1 0.47 ± 2% perf-profile.children.cycles-pp.brd_do_bvec
0.53 ± 2% +0.1 0.60 perf-profile.children.cycles-pp.update_load_avg
0.47 +0.1 0.54 perf-profile.children.cycles-pp.dequeue_entity
0.59 +0.1 0.66 perf-profile.children.cycles-pp.dequeue_task_fair
0.40 +0.1 0.48 perf-profile.children.cycles-pp.autoremove_wake_function
0.45 ± 2% +0.1 0.52 perf-profile.children.cycles-pp.iomap_write_actor
2.71 +0.1 2.78 perf-profile.children.cycles-pp.__flush_work
0.44 +0.1 0.52 perf-profile.children.cycles-pp.schedule_timeout
0.52 ± 2% +0.1 0.61 perf-profile.children.cycles-pp.wait_for_completion
0.48 ± 2% +0.1 0.58 perf-profile.children.cycles-pp.prepare_to_wait_event
0.57 +0.1 0.66 perf-profile.children.cycles-pp.iomap_apply
0.57 +0.1 0.67 perf-profile.children.cycles-pp.iomap_file_buffered_write
1.20 ± 2% +0.1 1.30 ± 4% perf-profile.children.cycles-pp.iomap_submit_ioend
1.33 +0.1 1.45 perf-profile.children.cycles-pp.schedule
0.48 +0.1 0.61 perf-profile.children.cycles-pp.swake_up_locked
0.54 +0.1 0.67 perf-profile.children.cycles-pp.complete
0.72 +0.1 0.86 perf-profile.children.cycles-pp.md_submit_flush_data
4.29 +0.1 4.43 perf-profile.children.cycles-pp.process_one_work
1.81 +0.2 1.98 perf-profile.children.cycles-pp.__schedule
1.79 +0.2 1.97 perf-profile.children.cycles-pp.try_to_wake_up
4.55 +0.2 4.73 perf-profile.children.cycles-pp.worker_thread
4.55 +0.2 4.74 perf-profile.children.cycles-pp.ret_from_fork
4.55 +0.2 4.74 perf-profile.children.cycles-pp.kthread
21.84 +0.9 22.75 perf-profile.children.cycles-pp.intel_idle
22.43 +0.9 23.37 perf-profile.children.cycles-pp.cpuidle_enter
22.43 +0.9 23.36 perf-profile.children.cycles-pp.cpuidle_enter_state
24.02 +1.1 25.10 perf-profile.children.cycles-pp.secondary_startup_64_no_verify
24.02 +1.1 25.10 perf-profile.children.cycles-pp.cpu_startup_entry
24.01 +1.1 25.10 perf-profile.children.cycles-pp.do_idle
23.72 +1.1 24.80 perf-profile.children.cycles-pp.start_secondary
10.12 +1.4 11.54 perf-profile.children.cycles-pp.xlog_wait_on_iclog
23.98 +2.1 26.03 perf-profile.children.cycles-pp._raw_spin_lock_irq
22.55 +2.2 24.74 perf-profile.children.cycles-pp.md_flush_request
22.58 +2.2 24.79 perf-profile.children.cycles-pp.submit_bio_wait
22.65 +2.2 24.86 perf-profile.children.cycles-pp.raid0_make_request
22.60 +2.2 24.81 perf-profile.children.cycles-pp.blkdev_issue_flush
22.75 +2.2 24.99 perf-profile.children.cycles-pp.md_handle_request
22.84 +2.3 25.09 perf-profile.children.cycles-pp.md_submit_bio
23.97 +2.3 26.32 perf-profile.children.cycles-pp.submit_bio
23.99 +2.3 26.34 perf-profile.children.cycles-pp.submit_bio_noacct
18.75 +3.9 22.64 perf-profile.children.cycles-pp.xlog_cil_force_lsn
32.56 +6.0 38.52 perf-profile.children.cycles-pp.xfs_log_force_lsn
65.82 -2.2 63.62 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.09 +0.0 0.10 ± 3% perf-profile.self.cycles-pp.__flush_work
0.05 +0.0 0.06 ± 6% perf-profile.self.cycles-pp.__radix_tree_lookup
0.11 ± 3% +0.0 0.12 ± 4% perf-profile.self.cycles-pp.enqueue_entity
0.07 ± 5% +0.0 0.08 ± 4% perf-profile.self.cycles-pp.llist_reverse_order
0.19 ± 4% +0.0 0.20 ± 2% perf-profile.self.cycles-pp.__list_del_entry_valid
0.09 ± 4% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.llist_add_batch
0.09 ± 4% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.iomap_set_range_uptodate
0.08 ± 8% +0.0 0.10 ± 6% perf-profile.self.cycles-pp.down_read
0.07 ± 5% +0.0 0.09 ± 7% perf-profile.self.cycles-pp.insert_work
0.06 ± 7% +0.0 0.08 ± 4% perf-profile.self.cycles-pp.md_submit_bio
0.10 ± 5% +0.0 0.12 ± 3% perf-profile.self.cycles-pp.md_handle_request
0.36 ± 2% +0.0 0.39 perf-profile.self.cycles-pp.__schedule
0.26 +0.0 0.29 ± 2% perf-profile.self.cycles-pp.available_idle_cpu
0.48 +0.0 0.52 perf-profile.self.cycles-pp._raw_spin_lock
0.27 +0.0 0.31 ± 3% perf-profile.self.cycles-pp.update_load_avg
0.02 ±141% +0.0 0.06 ± 6% perf-profile.self.cycles-pp.md_flush_request
0.14 ± 3% +0.0 0.19 ± 3% perf-profile.self.cycles-pp.poll_idle
0.32 ± 2% +0.0 0.37 ± 2% perf-profile.self.cycles-pp.brd_do_bvec
0.29 ± 2% +0.1 0.34 ± 3% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.55 ± 2% +0.1 0.61 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
21.84 +0.9 22.75 perf-profile.self.cycles-pp.intel_idle



aim7.jobs-per-min

20000 +-------------------------------------------------------------------+
| O O |
19000 |-+ O |
| O O OO O O O O OO O O O O OO O O O O O O O |
18000 |-+ O O |
| O |
17000 |-+ O O O |
| OO |
16000 |-+ |
| |
15000 |-+ |
| .+ |
14000 |.+.+ +.+.+.+.+.++.+.+.+.+.++. .+.+. .++.+.+.+.+.++.+.+. .+.++.+.+.|
| + + + |
13000 +-------------------------------------------------------------------+


aim7.time.system_time

5600 +--------------------------------------------------------------------+
|.+.+ +. .+.+. .+. .+ ++ +.+.+.+.++.+.+. .+.+.+.++.+.+.|
5400 |-+ + : + +.+.+.++ +.+ + |
5200 |-+ + |
| |
5000 |-+ |
| |
4800 |-+ |
| |
4600 |-+ O OO |
4400 |-+ O O |
| O O O |
4200 |-O O O O O O O O O OO O O O O O O O O O O O |
| O O |
4000 +--------------------------------------------------------------------+


aim7.time.elapsed_time

135 +---------------------------------------------------------------------+
| .+.+. .+. .++.+. .+. .+.++.+.+.+.|
130 |.+.+. .++.+.+.+.+.+.+.+.++.+ +.+ +.+ + +.+.+ |
125 |-+ + |
| |
120 |-+ |
115 |-+ |
| |
110 |-+ O O |
105 |-+ O O O |
| O |
100 |-+ O O O OO O O O O |
95 |-O O O OO O O O O O O OO O O O |
| O O O |
90 +---------------------------------------------------------------------+


aim7.time.elapsed_time.max

135 +---------------------------------------------------------------------+
| .+.+. .+. .++.+. .+. .+.++.+.+.+.|
130 |.+.+. .++.+.+.+.+.+.+.+.++.+ +.+ +.+ + +.+.+ |
125 |-+ + |
| |
120 |-+ |
115 |-+ |
| |
110 |-+ O O |
105 |-+ O O O |
| O |
100 |-+ O O O OO O O O O |
95 |-O O O OO O O O O O O OO O O O |
| O O O |
90 +---------------------------------------------------------------------+


aim7.time.voluntary_context_switches

7.4e+07 +-----------------------------------------------------------------+
7.3e+07 |.+. +. .+. .++. .++. .+.+ .+. .+.+ .+. +.+.+. +.+.|
| + : +.++ +.+ +.+.+ + + + + +.+. : + |
7.2e+07 |-+ : : + |
7.1e+07 |-+ :: |
7e+07 |-+ + |
6.9e+07 |-+ |
| |
6.8e+07 |-O O O O O OO O O OO O O O O OO O O |
6.7e+07 |-+ OO O OO O |
6.6e+07 |-+ |
6.5e+07 |-+ O O OO O O |
| O |
6.4e+07 |-+ O |
6.3e+07 +-----------------------------------------------------------------+


[*] bisect-good sample
[O] bisect-bad sample



Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation

Thanks,
Oliver Sang


Attachments:
(No filename) (84.04 kB)
config-5.13.0-rc4-00087-g25f25648e57c (176.84 kB)
job-script (8.37 kB)
job.yaml (5.69 kB)
reproduce (1.05 kB)