From: kernel test robot
Date: 2021-08-09 06:29:47
Subject: [xfs] 6df693ed7b: aim7.jobs-per-min -15.7% regression



Greetings,

FYI, we noticed a -15.7% regression of aim7.jobs-per-min due to commit:


commit: 6df693ed7ba9ec03cafc38d5064de376a11243e2 ("xfs: per-cpu deferred inode inactivation queues")
https://git.kernel.org/cgit/linux/kernel/git/djwong/xfs-linux.git xfs-5.15-merge
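
For context, this commit defers the final teardown ("inactivation") of
recently closed or unlinked inodes to background per-CPU workers instead of
running it in the context of the task releasing the inode; the
xfs_inodegc_worker symbol that appears in the call-graph profiles below is
that worker. A minimal sketch of the queueing pattern, using illustrative
names (inodegc_queue, defer_inactivation, i_gclist) rather than the exact
structures the patch adds:

/* Hypothetical per-cpu deferred-inactivation queue; names are illustrative. */
struct inodegc {
        struct llist_head       list;   /* lockless list of inodes to process */
        struct work_struct      work;   /* executes inodegc_worker() */
};

static DEFINE_PER_CPU(struct inodegc, inodegc_queue);

/*
 * Unlink/close path: stash the inode on the local CPU's list and kick a
 * worker instead of inactivating inline. 'i_gclist' is assumed here to be
 * an llist_node member of the XFS inode.
 */
static void defer_inactivation(struct xfs_inode *ip)
{
        struct inodegc *gc = get_cpu_ptr(&inodegc_queue);

        llist_add(&ip->i_gclist, &gc->list);
        queue_work(system_unbound_wq, &gc->work);
        put_cpu_ptr(&inodegc_queue);
}

/* Worker: drain the local list and run the real inactivation per inode. */
static void inodegc_worker(struct work_struct *work)
{
        struct inodegc *gc = container_of(work, struct inodegc, work);
        struct llist_node *node = llist_del_all(&gc->list);

        /* sketch: walk 'node' and inactivate each queued inode */
        (void)node;
}

One such hand-off per unlink would be consistent with the large increases in
rescheduling interrupts and involuntary context switches reported below,
since each queue_work() may wake a kworker on another runqueue.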


in testcase: aim7
on test machine: 88 threads 2 sockets Intel(R) Xeon(R) Gold 6238M CPU @ 2.10GHz with 128G memory
with the following parameters:

disk: 4BRD_12G
md: RAID1
fs: xfs
test: disk_wrt
load: 3000
cpufreq_governor: performance
ucode: 0x5003006

test-description: AIM7 is a traditional UNIX system-level benchmark suite used to test and measure the performance of multiuser systems.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/
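
For readers unfamiliar with the workload: judging from the syscalls that
dominate the call-graph profiles below (creat64, write, __close, unlink),
each disk_wrt job is essentially a create/write/close/unlink loop on small
files. A rough single-iteration sketch, illustrative only and not the actual
AIM7 source:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* One iteration of the kind of work aim7/disk_wrt generates per task. */
static void disk_wrt_iter(const char *path, const char *buf, size_t len)
{
        int fd = creat(path, 0644);     /* shows up as creat64 in the profile */

        if (fd < 0)
                return;
        if (write(fd, buf, len) < 0)    /* buffered write through iomap/XFS */
                perror("write");
        close(fd);                      /* last close: __fput -> dput -> evict */
        unlink(path);                   /* rwsem_down_write_slowpath hotspot */
}

With load=3000 such jobs running concurrently, any extra latency in the
creat/unlink path would plausibly surface as the parent-directory rwsem
writer contention (osq_lock, rwsem_down_write_slowpath) seen in the profile
deltas below.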



If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <[email protected]>


Details are as follows:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
bin/lkp run generated-yaml-file
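
The job.yaml referred to above is attached to the original report and is not
reproduced here; as a rough guide, a job file carrying the parameters listed
earlier would look something like this (a sketch of the lkp-tests job format,
not the attached file itself):

testcase: aim7
disk: 4BRD_12G
md: RAID1
fs: xfs
test: disk_wrt
load: 3000
cpufreq_governor: performance
ucode: "0x5003006"

bin/lkp split-job then expands such a file into the directly runnable yaml
used in the last step.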

=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/md/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/4BRD_12G/xfs/x86_64-rhel-8.3/3000/RAID1/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp9/disk_wrt/aim7/0x5003006

commit:
52cba078c8 ("xfs: detach dquots from inode if we don't need to inactivate it")
6df693ed7b ("xfs: per-cpu deferred inode inactivation queues")

52cba078c8b4b003 6df693ed7ba9ec03cafc38d5064
---------------- ---------------------------
%stddev %change %stddev
\ | \
539418 -15.7% 454630 aim7.jobs-per-min
33.57 +18.5% 39.79 aim7.time.elapsed_time
33.57 +18.5% 39.79 aim7.time.elapsed_time.max
2056 ± 7% +779.6% 18087 ± 2% aim7.time.involuntary_context_switches
673.92 ± 4% +29.2% 870.54 ± 2% aim7.time.system_time
912022 -34.2% 599694 aim7.time.voluntary_context_switches
2.328e+09 +14.0% 2.654e+09 cpuidle..time
76.70 -2.9% 74.48 iostat.cpu.idle
22.49 ± 3% +10.1% 24.77 ± 2% iostat.cpu.system
2388 ± 28% +148.1% 5926 ± 8% vmstat.io.bo
52485 -15.1% 44577 vmstat.system.cs
83851 ± 10% +134.4% 196539 ± 64% turbostat.C1
6217110 ± 18% +24.3% 7729630 ± 3% turbostat.IRQ
57.07 -1.8% 56.05 turbostat.RAMWatt
6857 ± 9% +24.6% 8544 ± 16% softirqs.CPU22.SCHED
6116 ± 9% +22.6% 7498 ± 17% softirqs.CPU4.RCU
6001 ± 7% +23.3% 7402 ± 25% softirqs.CPU46.RCU
521240 ± 3% +11.4% 580436 softirqs.RCU
4788 ± 9% -22.6% 3706 ± 5% numa-meminfo.node0.Dirty
13081 ± 8% +27.1% 16622 ± 5% numa-meminfo.node1.Active
13081 ± 8% +27.1% 16622 ± 5% numa-meminfo.node1.Active(anon)
4776 ± 10% -25.6% 3553 ± 4% numa-meminfo.node1.Dirty
4829 ± 9% -27.2% 3513 ± 12% numa-meminfo.node1.Inactive(file)
14497 ± 7% +28.8% 18670 ± 3% meminfo.Active
14273 ± 7% +29.2% 18446 ± 3% meminfo.Active(anon)
57491 ± 3% +9.4% 62898 ± 3% meminfo.AnonHugePages
10248 ± 8% -29.5% 7226 ± 12% meminfo.Dirty
11118 ± 7% -28.0% 8004 ± 10% meminfo.Inactive(file)
25436 ± 6% +13.2% 28794 ± 2% meminfo.Shmem
1317 ± 6% -25.6% 979.50 ± 5% numa-vmstat.node0.nr_dirty
1137 ± 8% -24.5% 859.33 ± 3% numa-vmstat.node0.nr_zone_write_pending
3270 ± 8% +27.1% 4155 ± 5% numa-vmstat.node1.nr_active_anon
1285 ± 8% -29.7% 903.33 ± 6% numa-vmstat.node1.nr_dirty
1290 ± 10% -29.7% 907.17 ± 12% numa-vmstat.node1.nr_inactive_file
3270 ± 8% +27.1% 4155 ± 5% numa-vmstat.node1.nr_zone_active_anon
1281 ± 9% -30.1% 895.50 ± 13% numa-vmstat.node1.nr_zone_inactive_file
1106 ± 10% -29.4% 781.33 ± 6% numa-vmstat.node1.nr_zone_write_pending
1877 +19.8% 2249 ± 2% slabinfo.kmalloc-4k.active_objs
1893 +20.1% 2273 ± 2% slabinfo.kmalloc-4k.num_objs
22962 +14.1% 26210 slabinfo.kmalloc-512.num_objs
39153 +13.4% 44407 ± 2% slabinfo.radix_tree_node.active_objs
39186 +13.4% 44444 slabinfo.radix_tree_node.num_objs
3085 ± 12% +37.2% 4234 ± 9% slabinfo.xfs_ili.active_objs
3182 ± 13% +36.7% 4349 ± 10% slabinfo.xfs_ili.num_objs
2856 ± 8% +43.2% 4089 ± 4% slabinfo.xfs_inode.active_objs
3025 ± 11% +35.9% 4111 ± 4% slabinfo.xfs_inode.num_objs
3567 ± 7% +29.2% 4611 ± 3% proc-vmstat.nr_active_anon
88289 +1.0% 89173 proc-vmstat.nr_anon_pages
2562 ± 9% -29.7% 1801 ± 7% proc-vmstat.nr_dirty
91001 +0.8% 91759 proc-vmstat.nr_inactive_anon
2761 ± 9% -27.2% 2009 ± 7% proc-vmstat.nr_inactive_file
8700 -3.6% 8389 proc-vmstat.nr_mapped
6358 ± 6% +13.2% 7198 ± 2% proc-vmstat.nr_shmem
33104 +4.8% 34684 proc-vmstat.nr_slab_reclaimable
74009 +1.8% 75370 proc-vmstat.nr_slab_unreclaimable
3567 ± 7% +29.2% 4611 ± 3% proc-vmstat.nr_zone_active_anon
91001 +0.8% 91759 proc-vmstat.nr_zone_inactive_anon
2760 ± 8% -27.2% 2009 ± 7% proc-vmstat.nr_zone_inactive_file
2215 ± 10% -30.3% 1544 ± 7% proc-vmstat.nr_zone_write_pending
87711 ± 29% +190.2% 254537 ± 8% proc-vmstat.pgpgout
13887 ± 3% +9.0% 15130 ± 2% proc-vmstat.pgreuse
48068515 ± 2% -15.1% 40805286 ± 4% perf-stat.i.branch-misses
19.85 ± 5% +2.7 22.53 ± 5% perf-stat.i.cache-miss-rate%
45310093 ± 2% -17.1% 37543487 ± 2% perf-stat.i.cache-misses
1.882e+08 ± 6% -24.0% 1.43e+08 ± 7% perf-stat.i.cache-references
53987 -15.2% 45784 perf-stat.i.context-switches
5.826e+10 ± 3% +10.9% 6.46e+10 ± 2% perf-stat.i.cpu-cycles
2287 ± 5% -21.3% 1801 ± 2% perf-stat.i.cpu-migrations
6.456e+09 -12.9% 5.621e+09 perf-stat.i.dTLB-stores
25783020 ± 2% -19.8% 20672383 ± 3% perf-stat.i.iTLB-load-misses
4769740 ± 7% -18.4% 3893412 ± 2% perf-stat.i.iTLB-loads
0.66 ± 3% +10.9% 0.73 ± 2% perf-stat.i.metric.GHz
302.18 -3.4% 292.05 perf-stat.i.metric.M/sec
8537 -10.4% 7648 perf-stat.i.minor-faults
77.37 -1.3 76.06 perf-stat.i.node-load-miss-rate%
8111247 -16.6% 6764651 ± 2% perf-stat.i.node-load-misses
2452827 ± 3% -8.9% 2234334 perf-stat.i.node-loads
2472287 -7.3% 2292430 perf-stat.i.node-store-misses
4055317 -8.2% 3721608 perf-stat.i.node-stores
8600 -10.5% 7694 perf-stat.i.page-faults
4.52 ± 6% -24.3% 3.42 ± 7% perf-stat.overall.MPKI
0.60 ± 2% -0.1 0.51 ± 4% perf-stat.overall.branch-miss-rate%
1.40 ± 3% +10.4% 1.55 ± 2% perf-stat.overall.cpi
1286 ± 2% +33.9% 1721 ± 2% perf-stat.overall.cycles-between-cache-misses
0.00 ± 23% +0.0 0.00 ± 18% perf-stat.overall.dTLB-store-miss-rate%
1614 ± 2% +25.3% 2022 ± 3% perf-stat.overall.instructions-per-iTLB-miss
0.71 ± 3% -9.5% 0.65 ± 2% perf-stat.overall.ipc
76.79 -1.6 75.17 perf-stat.overall.node-load-miss-rate%
46704997 ± 2% -14.8% 39799133 ± 4% perf-stat.ps.branch-misses
44087380 ± 2% -16.8% 36668286 ± 2% perf-stat.ps.cache-misses
1.831e+08 ± 6% -23.7% 1.397e+08 ± 7% perf-stat.ps.cache-references
52527 -14.9% 44714 perf-stat.ps.context-switches
5.67e+10 ± 3% +11.3% 6.311e+10 ± 2% perf-stat.ps.cpu-cycles
2226 ± 5% -21.0% 1759 ± 2% perf-stat.ps.cpu-migrations
6.282e+09 -12.6% 5.491e+09 perf-stat.ps.dTLB-stores
25088280 ± 2% -19.5% 20192512 ± 3% perf-stat.ps.iTLB-load-misses
4638285 ± 7% -18.0% 3801129 ± 2% perf-stat.ps.iTLB-loads
8244 -9.9% 7426 perf-stat.ps.minor-faults
7893317 -16.3% 6606963 ± 2% perf-stat.ps.node-load-misses
2386209 ± 3% -8.6% 2182055 perf-stat.ps.node-loads
2405566 -6.9% 2238887 perf-stat.ps.node-store-misses
3945838 -7.9% 3634966 perf-stat.ps.node-stores
8305 -10.0% 7470 perf-stat.ps.page-faults
1.387e+12 +19.1% 1.652e+12 perf-stat.total.instructions
57332 -2.6% 55842 interrupts.CAL:Function_call_interrupts
64023 ± 20% +26.8% 81169 ± 2% interrupts.CPU0.LOC:Local_timer_interrupts
44.50 ± 20% +176.8% 123.17 ± 15% interrupts.CPU0.RES:Rescheduling_interrupts
63922 ± 20% +27.0% 81199 ± 2% interrupts.CPU1.LOC:Local_timer_interrupts
44.17 ± 11% +161.9% 115.67 ± 8% interrupts.CPU1.RES:Rescheduling_interrupts
63976 ± 19% +27.0% 81225 ± 2% interrupts.CPU10.LOC:Local_timer_interrupts
45.33 ± 19% +173.2% 123.83 ± 21% interrupts.CPU10.RES:Rescheduling_interrupts
63987 ± 20% +27.0% 81249 ± 2% interrupts.CPU11.LOC:Local_timer_interrupts
39.83 ± 13% +233.5% 132.83 ± 21% interrupts.CPU11.RES:Rescheduling_interrupts
63953 ± 20% +27.1% 81285 ± 2% interrupts.CPU12.LOC:Local_timer_interrupts
41.33 ± 34% +213.3% 129.50 ± 18% interrupts.CPU12.RES:Rescheduling_interrupts
63973 ± 20% +26.9% 81195 ± 2% interrupts.CPU13.LOC:Local_timer_interrupts
36.67 ± 15% +225.5% 119.33 ± 24% interrupts.CPU13.RES:Rescheduling_interrupts
63962 ± 20% +27.1% 81276 ± 2% interrupts.CPU14.LOC:Local_timer_interrupts
37.83 ± 14% +218.9% 120.67 ± 12% interrupts.CPU14.RES:Rescheduling_interrupts
63933 ± 20% +27.0% 81208 ± 3% interrupts.CPU15.LOC:Local_timer_interrupts
37.33 ± 11% +251.8% 131.33 ± 19% interrupts.CPU15.RES:Rescheduling_interrupts
63866 ± 20% +27.2% 81263 ± 2% interrupts.CPU16.LOC:Local_timer_interrupts
39.00 ± 17% +190.2% 113.17 ± 21% interrupts.CPU16.RES:Rescheduling_interrupts
63966 ± 20% +27.7% 81714 ± 3% interrupts.CPU17.LOC:Local_timer_interrupts
33.83 ± 14% +235.0% 113.33 ± 10% interrupts.CPU17.RES:Rescheduling_interrupts
63916 ± 20% +26.9% 81087 ± 2% interrupts.CPU18.LOC:Local_timer_interrupts
36.00 ± 15% +233.8% 120.17 ± 12% interrupts.CPU18.RES:Rescheduling_interrupts
63940 ± 20% +27.0% 81205 ± 2% interrupts.CPU19.LOC:Local_timer_interrupts
37.83 ± 13% +196.9% 112.33 ± 21% interrupts.CPU19.RES:Rescheduling_interrupts
63962 ± 20% +27.2% 81365 ± 3% interrupts.CPU2.LOC:Local_timer_interrupts
39.33 ± 19% +202.5% 119.00 ± 14% interrupts.CPU2.RES:Rescheduling_interrupts
63946 ± 20% +27.0% 81203 ± 2% interrupts.CPU20.LOC:Local_timer_interrupts
39.00 ± 21% +197.0% 115.83 ± 16% interrupts.CPU20.RES:Rescheduling_interrupts
63933 ± 20% +27.0% 81211 ± 2% interrupts.CPU21.LOC:Local_timer_interrupts
37.83 ± 32% +202.2% 114.33 ± 16% interrupts.CPU21.RES:Rescheduling_interrupts
63985 ± 20% +25.8% 80517 ± 4% interrupts.CPU22.LOC:Local_timer_interrupts
35.67 ± 16% +242.1% 122.00 ± 20% interrupts.CPU22.RES:Rescheduling_interrupts
63912 ± 20% +25.9% 80475 ± 4% interrupts.CPU23.LOC:Local_timer_interrupts
33.17 ± 21% +187.9% 95.50 ± 11% interrupts.CPU23.RES:Rescheduling_interrupts
63925 ± 20% +25.9% 80473 ± 4% interrupts.CPU24.LOC:Local_timer_interrupts
33.00 ± 21% +235.9% 110.83 ± 20% interrupts.CPU24.RES:Rescheduling_interrupts
63905 ± 20% +25.9% 80448 ± 4% interrupts.CPU25.LOC:Local_timer_interrupts
32.00 ± 13% +249.5% 111.83 ± 19% interrupts.CPU25.RES:Rescheduling_interrupts
63910 ± 20% +25.8% 80427 ± 4% interrupts.CPU26.LOC:Local_timer_interrupts
32.50 ± 15% +206.7% 99.67 ± 12% interrupts.CPU26.RES:Rescheduling_interrupts
63903 ± 20% +25.9% 80460 ± 4% interrupts.CPU27.LOC:Local_timer_interrupts
31.17 ± 11% +244.9% 107.50 ± 18% interrupts.CPU27.RES:Rescheduling_interrupts
63871 ± 20% +26.0% 80467 ± 4% interrupts.CPU28.LOC:Local_timer_interrupts
30.00 ± 24% +243.9% 103.17 ± 16% interrupts.CPU28.RES:Rescheduling_interrupts
63910 ± 20% +25.9% 80477 ± 4% interrupts.CPU29.LOC:Local_timer_interrupts
33.33 ± 17% +210.5% 103.50 ± 17% interrupts.CPU29.RES:Rescheduling_interrupts
63824 ± 20% +27.3% 81236 ± 2% interrupts.CPU3.LOC:Local_timer_interrupts
40.83 ± 19% +202.0% 123.33 ± 12% interrupts.CPU3.RES:Rescheduling_interrupts
63896 ± 20% +25.9% 80448 ± 4% interrupts.CPU30.LOC:Local_timer_interrupts
32.17 ± 21% +244.6% 110.83 ± 8% interrupts.CPU30.RES:Rescheduling_interrupts
63879 ± 20% +26.0% 80460 ± 4% interrupts.CPU31.LOC:Local_timer_interrupts
35.33 ± 16% +217.9% 112.33 ± 18% interrupts.CPU31.RES:Rescheduling_interrupts
63875 ± 20% +26.0% 80487 ± 4% interrupts.CPU32.LOC:Local_timer_interrupts
31.83 ± 22% +219.4% 101.67 ± 13% interrupts.CPU32.RES:Rescheduling_interrupts
63942 ± 20% +25.8% 80467 ± 4% interrupts.CPU33.LOC:Local_timer_interrupts
34.33 ± 16% +195.6% 101.50 ± 18% interrupts.CPU33.RES:Rescheduling_interrupts
63867 ± 20% +26.0% 80494 ± 4% interrupts.CPU34.LOC:Local_timer_interrupts
34.17 ± 20% +235.1% 114.50 ± 21% interrupts.CPU34.RES:Rescheduling_interrupts
63953 ± 20% +25.8% 80461 ± 4% interrupts.CPU35.LOC:Local_timer_interrupts
34.17 ± 12% +246.8% 118.50 ± 17% interrupts.CPU35.RES:Rescheduling_interrupts
63888 ± 20% +26.0% 80485 ± 4% interrupts.CPU36.LOC:Local_timer_interrupts
38.83 ± 15% +183.3% 110.00 ± 16% interrupts.CPU36.RES:Rescheduling_interrupts
63958 ± 20% +25.8% 80462 ± 4% interrupts.CPU37.LOC:Local_timer_interrupts
28.00 ± 22% +261.9% 101.33 ± 20% interrupts.CPU37.RES:Rescheduling_interrupts
63921 ± 20% +25.9% 80464 ± 4% interrupts.CPU38.LOC:Local_timer_interrupts
33.67 ± 17% +257.4% 120.33 ± 23% interrupts.CPU38.RES:Rescheduling_interrupts
63914 ± 20% +25.9% 80450 ± 4% interrupts.CPU39.LOC:Local_timer_interrupts
31.50 ± 29% +258.7% 113.00 ± 31% interrupts.CPU39.RES:Rescheduling_interrupts
1687 ± 47% -61.6% 647.67 ± 30% interrupts.CPU4.CAL:Function_call_interrupts
63896 ± 20% +27.0% 81178 ± 2% interrupts.CPU4.LOC:Local_timer_interrupts
38.83 ± 28% +171.7% 105.50 ± 15% interrupts.CPU4.RES:Rescheduling_interrupts
63914 ± 20% +25.9% 80467 ± 4% interrupts.CPU40.LOC:Local_timer_interrupts
36.33 ± 26% +189.0% 105.00 ± 13% interrupts.CPU40.RES:Rescheduling_interrupts
63848 ± 20% +26.1% 80505 ± 4% interrupts.CPU41.LOC:Local_timer_interrupts
32.50 ± 17% +268.7% 119.83 ± 16% interrupts.CPU41.RES:Rescheduling_interrupts
64075 ± 20% +25.6% 80479 ± 4% interrupts.CPU42.LOC:Local_timer_interrupts
30.67 ± 17% +252.2% 108.00 ± 17% interrupts.CPU42.RES:Rescheduling_interrupts
63957 ± 20% +25.8% 80490 ± 4% interrupts.CPU43.LOC:Local_timer_interrupts
63899 ± 20% +27.0% 81180 ± 3% interrupts.CPU44.LOC:Local_timer_interrupts
32.00 ± 17% +281.2% 122.00 ± 6% interrupts.CPU44.RES:Rescheduling_interrupts
63835 ± 20% +27.3% 81286 ± 2% interrupts.CPU45.LOC:Local_timer_interrupts
28.17 ± 25% +353.8% 127.83 ± 23% interrupts.CPU45.RES:Rescheduling_interrupts
63921 ± 20% +27.0% 81153 ± 3% interrupts.CPU46.LOC:Local_timer_interrupts
29.00 ± 16% +258.6% 104.00 ± 9% interrupts.CPU46.RES:Rescheduling_interrupts
63931 ± 20% +27.0% 81214 ± 2% interrupts.CPU47.LOC:Local_timer_interrupts
26.67 ± 24% +284.4% 102.50 ± 11% interrupts.CPU47.RES:Rescheduling_interrupts
63941 ± 20% +27.1% 81256 ± 2% interrupts.CPU48.LOC:Local_timer_interrupts
29.33 ± 25% +271.0% 108.83 ± 19% interrupts.CPU48.RES:Rescheduling_interrupts
63827 ± 20% +27.2% 81212 ± 2% interrupts.CPU49.LOC:Local_timer_interrupts
26.83 ± 30% +352.8% 121.50 ± 24% interrupts.CPU49.RES:Rescheduling_interrupts
63841 ± 20% +27.2% 81185 ± 2% interrupts.CPU5.LOC:Local_timer_interrupts
36.67 ± 30% +220.0% 117.33 ± 27% interrupts.CPU5.RES:Rescheduling_interrupts
63907 ± 20% +27.4% 81433 ± 2% interrupts.CPU50.LOC:Local_timer_interrupts
26.83 ± 7% +340.4% 118.17 ± 18% interrupts.CPU50.RES:Rescheduling_interrupts
63909 ± 20% +27.0% 81136 ± 2% interrupts.CPU51.LOC:Local_timer_interrupts
29.17 ± 10% +283.4% 111.83 ± 21% interrupts.CPU51.RES:Rescheduling_interrupts
63925 ± 20% +27.1% 81267 ± 2% interrupts.CPU52.LOC:Local_timer_interrupts
26.67 ± 21% +305.0% 108.00 ± 16% interrupts.CPU52.RES:Rescheduling_interrupts
63923 ± 20% +27.1% 81236 ± 2% interrupts.CPU53.LOC:Local_timer_interrupts
28.67 ± 15% +254.7% 101.67 ± 20% interrupts.CPU53.RES:Rescheduling_interrupts
63902 ± 20% +27.1% 81217 ± 2% interrupts.CPU54.LOC:Local_timer_interrupts
26.33 ± 18% +337.3% 115.17 ± 17% interrupts.CPU54.RES:Rescheduling_interrupts
63986 ± 20% +27.0% 81278 ± 2% interrupts.CPU55.LOC:Local_timer_interrupts
27.33 ± 16% +307.9% 111.50 ± 8% interrupts.CPU55.RES:Rescheduling_interrupts
63897 ± 20% +27.2% 81285 ± 2% interrupts.CPU56.LOC:Local_timer_interrupts
30.33 ± 21% +225.8% 98.83 ± 9% interrupts.CPU56.RES:Rescheduling_interrupts
63940 ± 20% +27.1% 81273 ± 2% interrupts.CPU57.LOC:Local_timer_interrupts
27.17 ± 24% +301.8% 109.17 ± 24% interrupts.CPU57.RES:Rescheduling_interrupts
63909 ± 20% +27.1% 81211 ± 2% interrupts.CPU58.LOC:Local_timer_interrupts
25.83 ± 35% +318.1% 108.00 ± 12% interrupts.CPU58.RES:Rescheduling_interrupts
63940 ± 20% +27.1% 81255 ± 2% interrupts.CPU59.LOC:Local_timer_interrupts
29.00 ± 15% +308.0% 118.33 ± 14% interrupts.CPU59.RES:Rescheduling_interrupts
63918 ± 20% +27.1% 81266 ± 2% interrupts.CPU6.LOC:Local_timer_interrupts
45.33 ± 29% +163.6% 119.50 ± 16% interrupts.CPU6.RES:Rescheduling_interrupts
63902 ± 20% +27.0% 81154 ± 3% interrupts.CPU60.LOC:Local_timer_interrupts
28.00 ± 17% +281.5% 106.83 ± 8% interrupts.CPU60.RES:Rescheduling_interrupts
63927 ± 20% +27.1% 81260 ± 2% interrupts.CPU61.LOC:Local_timer_interrupts
28.17 ± 26% +358.0% 129.00 ± 13% interrupts.CPU61.RES:Rescheduling_interrupts
63954 ± 20% +27.1% 81264 ± 2% interrupts.CPU62.LOC:Local_timer_interrupts
38.67 ± 30% +291.4% 151.33 ± 22% interrupts.CPU62.RES:Rescheduling_interrupts
63971 ± 20% +27.0% 81228 ± 3% interrupts.CPU63.LOC:Local_timer_interrupts
54.00 ± 24% +149.1% 134.50 ± 21% interrupts.CPU63.RES:Rescheduling_interrupts
64162 ± 20% +26.6% 81218 ± 2% interrupts.CPU64.LOC:Local_timer_interrupts
45.67 ± 18% +242.3% 156.33 ± 13% interrupts.CPU64.RES:Rescheduling_interrupts
63945 ± 20% +27.2% 81315 ± 2% interrupts.CPU65.LOC:Local_timer_interrupts
43.33 ± 33% +192.7% 126.83 ± 20% interrupts.CPU65.RES:Rescheduling_interrupts
63894 ± 20% +26.0% 80533 ± 4% interrupts.CPU66.LOC:Local_timer_interrupts
28.83 ± 30% +276.9% 108.67 ± 13% interrupts.CPU66.RES:Rescheduling_interrupts
63902 ± 20% +26.0% 80508 ± 4% interrupts.CPU67.LOC:Local_timer_interrupts
24.17 ± 36% +387.6% 117.83 ± 12% interrupts.CPU67.RES:Rescheduling_interrupts
63935 ± 20% +26.0% 80537 ± 4% interrupts.CPU68.LOC:Local_timer_interrupts
27.83 ± 17% +307.2% 113.33 ± 29% interrupts.CPU68.RES:Rescheduling_interrupts
63921 ± 20% +26.0% 80509 ± 4% interrupts.CPU69.LOC:Local_timer_interrupts
25.83 ± 31% +331.6% 111.50 ± 15% interrupts.CPU69.RES:Rescheduling_interrupts
63961 ± 20% +27.0% 81241 ± 2% interrupts.CPU7.LOC:Local_timer_interrupts
43.33 ± 22% +187.3% 124.50 ± 13% interrupts.CPU7.RES:Rescheduling_interrupts
63938 ± 20% +25.9% 80509 ± 4% interrupts.CPU70.LOC:Local_timer_interrupts
27.33 ± 42% +290.9% 106.83 ± 14% interrupts.CPU70.RES:Rescheduling_interrupts
63975 ± 20% +25.8% 80494 ± 4% interrupts.CPU71.LOC:Local_timer_interrupts
30.17 ± 34% +253.6% 106.67 ± 24% interrupts.CPU71.RES:Rescheduling_interrupts
63932 ± 20% +26.0% 80527 ± 4% interrupts.CPU72.LOC:Local_timer_interrupts
26.33 ± 19% +340.5% 116.00 ± 13% interrupts.CPU72.RES:Rescheduling_interrupts
63908 ± 20% +26.0% 80542 ± 4% interrupts.CPU73.LOC:Local_timer_interrupts
27.67 ± 35% +325.3% 117.67 ± 19% interrupts.CPU73.RES:Rescheduling_interrupts
63928 ± 20% +25.9% 80517 ± 4% interrupts.CPU74.LOC:Local_timer_interrupts
22.33 ± 36% +391.0% 109.67 ± 18% interrupts.CPU74.RES:Rescheduling_interrupts
63962 ± 20% +25.9% 80529 ± 4% interrupts.CPU75.LOC:Local_timer_interrupts
25.50 ± 29% +317.0% 106.33 ± 24% interrupts.CPU75.RES:Rescheduling_interrupts
63964 ± 20% +25.8% 80480 ± 4% interrupts.CPU76.LOC:Local_timer_interrupts
22.33 ± 17% +398.5% 111.33 ± 13% interrupts.CPU76.RES:Rescheduling_interrupts
63992 ± 20% +26.0% 80637 ± 5% interrupts.CPU77.LOC:Local_timer_interrupts
25.83 ± 38% +381.9% 124.50 ± 23% interrupts.CPU77.RES:Rescheduling_interrupts
63974 ± 20% +25.9% 80537 ± 4% interrupts.CPU78.LOC:Local_timer_interrupts
23.50 ± 27% +464.5% 132.67 ± 24% interrupts.CPU78.RES:Rescheduling_interrupts
63941 ± 20% +25.9% 80498 ± 4% interrupts.CPU79.LOC:Local_timer_interrupts
25.33 ± 17% +376.3% 120.67 ± 13% interrupts.CPU79.RES:Rescheduling_interrupts
737.17 ± 18% -21.1% 581.50 ± 2% interrupts.CPU8.CAL:Function_call_interrupts
63928 ± 20% +27.0% 81214 ± 2% interrupts.CPU8.LOC:Local_timer_interrupts
36.50 ± 15% +235.2% 122.33 ± 18% interrupts.CPU8.RES:Rescheduling_interrupts
63939 ± 20% +25.9% 80519 ± 4% interrupts.CPU80.LOC:Local_timer_interrupts
24.83 ± 31% +355.7% 113.17 ± 19% interrupts.CPU80.RES:Rescheduling_interrupts
63944 ± 20% +25.9% 80484 ± 4% interrupts.CPU81.LOC:Local_timer_interrupts
22.00 ± 29% +412.9% 112.83 ± 12% interrupts.CPU81.RES:Rescheduling_interrupts
63986 ± 20% +25.8% 80499 ± 4% interrupts.CPU82.LOC:Local_timer_interrupts
28.33 ± 20% +263.5% 103.00 ± 17% interrupts.CPU82.RES:Rescheduling_interrupts
63943 ± 20% +25.9% 80499 ± 4% interrupts.CPU83.LOC:Local_timer_interrupts
26.33 ± 26% +366.5% 122.83 ± 14% interrupts.CPU83.RES:Rescheduling_interrupts
63926 ± 20% +25.9% 80513 ± 4% interrupts.CPU84.LOC:Local_timer_interrupts
30.67 ± 34% +350.0% 138.00 ± 15% interrupts.CPU84.RES:Rescheduling_interrupts
63921 ± 20% +25.9% 80487 ± 4% interrupts.CPU85.LOC:Local_timer_interrupts
29.50 ± 28% +319.8% 123.83 ± 17% interrupts.CPU85.RES:Rescheduling_interrupts
63911 ± 20% +26.0% 80506 ± 4% interrupts.CPU86.LOC:Local_timer_interrupts
32.17 ± 14% +289.1% 125.17 ± 13% interrupts.CPU86.RES:Rescheduling_interrupts
63922 ± 20% +25.9% 80467 ± 4% interrupts.CPU87.LOC:Local_timer_interrupts
33.83 ± 31% +249.3% 118.17 ± 16% interrupts.CPU87.RES:Rescheduling_interrupts
63946 ± 20% +27.1% 81291 ± 3% interrupts.CPU9.LOC:Local_timer_interrupts
36.67 ± 12% +221.4% 117.83 ± 19% interrupts.CPU9.RES:Rescheduling_interrupts
5625932 ± 20% +26.5% 7116528 ± 3% interrupts.LOC:Local_timer_interrupts
2901 ± 8% +251.4% 10198 ± 3% interrupts.RES:Rescheduling_interrupts
50.68 ± 5% -15.1 35.59 ± 4% perf-profile.calltrace.cycles-pp.write
48.72 ± 4% -14.4 34.28 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
48.32 ± 4% -14.4 33.96 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
47.65 ± 4% -14.3 33.38 ± 4% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
46.82 ± 4% -14.1 32.71 ± 4% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
44.83 ± 4% -13.8 31.01 ± 4% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
43.94 ± 4% -13.7 30.29 ± 4% perf-profile.calltrace.cycles-pp.xfs_file_buffered_write.new_sync_write.vfs_write.ksys_write.do_syscall_64
39.94 ± 5% -12.8 27.13 ± 5% perf-profile.calltrace.cycles-pp.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write.vfs_write.ksys_write
39.76 ± 5% -12.8 26.98 ± 5% perf-profile.calltrace.cycles-pp.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write.vfs_write
34.88 ± 5% -11.9 23.02 ± 5% perf-profile.calltrace.cycles-pp.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write
30.05 ± 12% -9.3 20.78 ± 10% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
23.63 ± 6% -9.2 14.42 ± 6% perf-profile.calltrace.cycles-pp.iomap_write_begin.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_write
29.74 ± 12% -9.2 20.55 ± 10% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
29.74 ± 12% -9.2 20.55 ± 10% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
28.83 ± 12% -9.2 19.64 ± 10% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
29.73 ± 12% -9.2 20.55 ± 10% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
29.41 ± 12% -9.1 20.28 ± 10% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
29.45 ± 12% -9.1 20.32 ± 10% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
18.92 ± 7% -8.6 10.34 ± 8% perf-profile.calltrace.cycles-pp.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor.iomap_apply.iomap_file_buffered_write
18.74 ± 8% -8.6 10.19 ± 8% perf-profile.calltrace.cycles-pp.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor.iomap_apply
15.83 ± 9% -8.1 7.76 ± 11% perf-profile.calltrace.cycles-pp.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor
14.37 ± 8% -7.8 6.56 ± 9% perf-profile.calltrace.cycles-pp.__close
14.36 ± 8% -7.8 6.55 ± 9% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__close
14.35 ± 8% -7.8 6.55 ± 9% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
14.34 ± 8% -7.8 6.54 ± 9% perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
14.34 ± 8% -7.8 6.54 ± 9% perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
14.34 ± 8% -7.8 6.54 ± 9% perf-profile.calltrace.cycles-pp.__fput.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
14.34 ± 8% -7.8 6.54 ± 9% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
14.32 ± 8% -7.8 6.52 ± 9% perf-profile.calltrace.cycles-pp.dput.__fput.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode
14.30 ± 8% -7.8 6.51 ± 9% perf-profile.calltrace.cycles-pp.__dentry_kill.dput.__fput.task_work_run.exit_to_user_mode_prepare
13.35 ± 9% -6.9 6.48 ± 9% perf-profile.calltrace.cycles-pp.evict.__dentry_kill.dput.__fput.task_work_run
13.32 ± 9% -6.9 6.46 ± 9% perf-profile.calltrace.cycles-pp.truncate_inode_pages_range.evict.__dentry_kill.dput.__fput
9.02 ± 11% -5.5 3.52 ± 13% perf-profile.calltrace.cycles-pp.__pagevec_release.truncate_inode_pages_range.evict.__dentry_kill.dput
8.62 ± 11% -5.2 3.40 ± 13% perf-profile.calltrace.cycles-pp.release_pages.__pagevec_release.truncate_inode_pages_range.evict.__dentry_kill
7.57 ± 12% -4.7 2.89 ± 14% perf-profile.calltrace.cycles-pp.lru_cache_add.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin
7.46 ± 12% -4.7 2.80 ± 15% perf-profile.calltrace.cycles-pp.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin
6.00 ± 14% -4.3 1.70 ± 22% perf-profile.calltrace.cycles-pp.lock_page_lruvec_irqsave.release_pages.__pagevec_release.truncate_inode_pages_range.evict
5.96 ± 14% -4.3 1.68 ± 22% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.release_pages.__pagevec_release.truncate_inode_pages_range
5.92 ± 15% -4.3 1.66 ± 22% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.release_pages.__pagevec_release
5.66 ± 15% -4.0 1.62 ± 22% perf-profile.calltrace.cycles-pp.lock_page_lruvec_irqsave.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru.pagecache_get_page
5.62 ± 15% -4.0 1.60 ± 22% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru
5.58 ± 15% -4.0 1.56 ± 22% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.__pagevec_lru_add.lru_cache_add
8.20 ± 6% -3.4 4.82 ± 9% perf-profile.calltrace.cycles-pp.__add_to_page_cache_locked.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin
5.14 ± 7% -2.3 2.81 ± 10% perf-profile.calltrace.cycles-pp.mem_cgroup_charge.__add_to_page_cache_locked.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin
7.11 ± 4% -1.9 5.18 ± 4% perf-profile.calltrace.cycles-pp.iomap_write_end.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_write
4.59 ± 5% -1.5 3.06 ± 6% perf-profile.calltrace.cycles-pp.__set_page_dirty_nobuffers.iomap_write_end.iomap_write_actor.iomap_apply.iomap_file_buffered_write
3.27 ± 5% -1.2 2.11 ± 6% perf-profile.calltrace.cycles-pp.__set_page_dirty.__set_page_dirty_nobuffers.iomap_write_end.iomap_write_actor.iomap_apply
2.13 ± 7% -1.0 1.15 ± 10% perf-profile.calltrace.cycles-pp.get_mem_cgroup_from_mm.mem_cgroup_charge.__add_to_page_cache_locked.add_to_page_cache_lru.pagecache_get_page
2.18 ± 7% -0.9 1.26 ± 10% perf-profile.calltrace.cycles-pp.__mod_lruvec_page_state.__add_to_page_cache_locked.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin
2.14 ± 6% -0.9 1.24 ± 10% perf-profile.calltrace.cycles-pp.__mod_lruvec_page_state.__set_page_dirty.__set_page_dirty_nobuffers.iomap_write_end.iomap_write_actor
2.05 ± 6% -0.9 1.17 ± 10% perf-profile.calltrace.cycles-pp.__mem_cgroup_charge.mem_cgroup_charge.__add_to_page_cache_locked.add_to_page_cache_lru.pagecache_get_page
2.20 ± 5% -0.8 1.40 ± 5% perf-profile.calltrace.cycles-pp.truncate_cleanup_page.truncate_inode_pages_range.evict.__dentry_kill.dput
2.02 ± 5% -0.8 1.28 ± 6% perf-profile.calltrace.cycles-pp.__cancel_dirty_page.truncate_cleanup_page.truncate_inode_pages_range.evict.__dentry_kill
1.80 ± 6% -0.7 1.10 ± 6% perf-profile.calltrace.cycles-pp.account_page_cleaned.__cancel_dirty_page.truncate_cleanup_page.truncate_inode_pages_range.evict
1.53 ± 7% -0.6 0.88 ± 8% perf-profile.calltrace.cycles-pp.__mod_lruvec_page_state.account_page_cleaned.__cancel_dirty_page.truncate_cleanup_page.truncate_inode_pages_range
1.15 ± 6% -0.6 0.52 ± 47% perf-profile.calltrace.cycles-pp.__mod_memcg_state.__mod_memcg_lruvec_state.__mod_lruvec_page_state.__add_to_page_cache_locked.add_to_page_cache_lru
0.91 ± 7% -0.6 0.28 ±100% perf-profile.calltrace.cycles-pp.__mod_memcg_lruvec_state.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru.pagecache_get_page
3.20 ± 4% -0.6 2.57 ± 3% perf-profile.calltrace.cycles-pp.xfs_buffered_write_iomap_begin.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write
1.38 ± 5% -0.6 0.76 ± 11% perf-profile.calltrace.cycles-pp.__mod_memcg_lruvec_state.__mod_lruvec_page_state.__set_page_dirty.__set_page_dirty_nobuffers.iomap_write_end
1.24 ± 6% -0.6 0.66 ± 13% perf-profile.calltrace.cycles-pp.__mod_memcg_state.__mod_memcg_lruvec_state.__mod_lruvec_page_state.__set_page_dirty.__set_page_dirty_nobuffers
1.28 ± 7% -0.6 0.70 ± 13% perf-profile.calltrace.cycles-pp.__mod_memcg_lruvec_state.__mod_lruvec_page_state.__add_to_page_cache_locked.add_to_page_cache_lru.pagecache_get_page
2.30 ± 3% -0.6 1.74 ± 2% perf-profile.calltrace.cycles-pp.xfs_file_write_checks.xfs_file_buffered_write.new_sync_write.vfs_write.ksys_write
0.99 ± 7% -0.5 0.46 ± 45% perf-profile.calltrace.cycles-pp.__mod_memcg_lruvec_state.__mod_lruvec_page_state.account_page_cleaned.__cancel_dirty_page.truncate_cleanup_page
1.24 ± 5% -0.5 0.73 ± 6% perf-profile.calltrace.cycles-pp.mem_cgroup_uncharge_list.release_pages.__pagevec_release.truncate_inode_pages_range.evict
1.26 ± 22% -0.5 0.77 perf-profile.calltrace.cycles-pp.__entry_text_start.write
1.62 ± 6% -0.5 1.14 ± 5% perf-profile.calltrace.cycles-pp.delete_from_page_cache_batch.truncate_inode_pages_range.evict.__dentry_kill.dput
0.95 ± 8% -0.5 0.49 ± 45% perf-profile.calltrace.cycles-pp.__mod_lruvec_page_state.unaccount_page_cache_page.delete_from_page_cache_batch.truncate_inode_pages_range.evict
2.71 ± 4% -0.4 2.32 perf-profile.calltrace.cycles-pp.iomap_set_range_uptodate.iomap_write_begin.iomap_write_actor.iomap_apply.iomap_file_buffered_write
1.03 ± 7% -0.4 0.64 ± 7% perf-profile.calltrace.cycles-pp.unaccount_page_cache_page.delete_from_page_cache_batch.truncate_inode_pages_range.evict.__dentry_kill
1.11 ± 6% -0.4 0.75 ± 5% perf-profile.calltrace.cycles-pp.file_update_time.xfs_file_write_checks.xfs_file_buffered_write.new_sync_write.vfs_write
2.09 ± 5% -0.3 1.75 ± 3% perf-profile.calltrace.cycles-pp.copy_page_from_iter_atomic.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_write
1.72 ± 4% -0.3 1.46 ± 3% perf-profile.calltrace.cycles-pp.iomap_set_range_uptodate.iomap_write_end.iomap_write_actor.iomap_apply.iomap_file_buffered_write
1.58 ± 5% -0.3 1.33 ± 3% perf-profile.calltrace.cycles-pp.copyin.copy_page_from_iter_atomic.iomap_write_actor.iomap_apply.iomap_file_buffered_write
1.42 ± 5% -0.2 1.20 ± 3% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin.copy_page_from_iter_atomic.iomap_write_actor.iomap_apply
0.83 ± 4% -0.1 0.69 ± 3% perf-profile.calltrace.cycles-pp.xfs_ilock.xfs_buffered_write_iomap_begin.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_write
0.87 ± 5% -0.1 0.74 ± 2% perf-profile.calltrace.cycles-pp.__alloc_pages.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor
0.77 ± 6% -0.1 0.65 ± 4% perf-profile.calltrace.cycles-pp.iov_iter_fault_in_readable.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_write
0.70 ± 4% -0.1 0.58 ± 3% perf-profile.calltrace.cycles-pp.down_write.xfs_ilock.xfs_buffered_write_iomap_begin.iomap_apply.iomap_file_buffered_write
0.70 ± 5% -0.1 0.58 ± 3% perf-profile.calltrace.cycles-pp.xfs_ilock.xfs_file_buffered_write.new_sync_write.vfs_write.ksys_write
0.65 ± 5% -0.1 0.56 ± 3% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin
0.68 ± 6% -0.1 0.59 ± 2% perf-profile.calltrace.cycles-pp.security_file_permission.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.88 ± 4% +0.1 0.97 ± 2% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_down_write_slowpath.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.67 ± 4% +0.1 0.78 perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_down_write_slowpath.path_openat.do_filp_open.do_sys_openat2
0.45 ± 44% +0.2 0.60 ± 3% perf-profile.calltrace.cycles-pp.xfs_vn_unlink.vfs_unlink.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.45 ± 44% +0.2 0.60 ± 3% perf-profile.calltrace.cycles-pp.xfs_remove.xfs_vn_unlink.vfs_unlink.do_unlinkat.do_syscall_64
0.46 ± 44% +0.2 0.62 ± 3% perf-profile.calltrace.cycles-pp.vfs_unlink.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
0.00 +0.8 0.76 ± 4% perf-profile.calltrace.cycles-pp.xfs_inactive_ifree.xfs_inactive.xfs_inodegc_worker.process_one_work.worker_thread
0.00 +0.8 0.83 ± 4% perf-profile.calltrace.cycles-pp.xfs_inactive.xfs_inodegc_worker.process_one_work.worker_thread.kthread
0.00 +0.8 0.84 ± 4% perf-profile.calltrace.cycles-pp.xfs_inodegc_worker.process_one_work.worker_thread.kthread.ret_from_fork
0.00 +0.9 0.90 ± 4% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
0.00 +0.9 0.90 ± 4% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
0.00 +0.9 0.93 ± 4% perf-profile.calltrace.cycles-pp.ret_from_fork
0.00 +0.9 0.93 ± 4% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.00 +15.4 15.36 ± 3% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_slowpath.path_openat.do_filp_open.do_sys_openat2
2.00 ± 3% +15.4 17.40 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
2.01 ± 4% +15.4 17.42 ± 2% perf-profile.calltrace.cycles-pp.creat64
2.00 ± 3% +15.4 17.40 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.creat64
1.99 ± 3% +15.4 17.40 ± 2% perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
1.99 ± 3% +15.4 17.40 ± 2% perf-profile.calltrace.cycles-pp.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
1.96 ± 4% +15.4 17.37 ± 2% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64
1.96 ± 4% +15.4 17.37 ± 2% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.60 ± 14% +15.4 16.02 ± 2% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_slowpath.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.80 ± 9% +15.4 16.25 ± 2% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.path_openat.do_filp_open.do_sys_openat2.do_sys_open
1.54 ± 8% +15.5 17.08 ± 2% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
2.22 ± 6% +15.6 17.82 ± 2% perf-profile.calltrace.cycles-pp.unlink
2.20 ± 6% +15.6 17.80 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.unlink
2.20 ± 6% +15.6 17.80 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
2.18 ± 6% +15.6 17.78 ± 2% perf-profile.calltrace.cycles-pp.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
50.84 ± 4% -14.8 36.04 ± 4% perf-profile.children.cycles-pp.write
47.68 ± 4% -14.3 33.40 ± 4% perf-profile.children.cycles-pp.ksys_write
46.88 ± 4% -14.1 32.74 ± 4% perf-profile.children.cycles-pp.vfs_write
44.85 ± 4% -13.8 31.02 ± 4% perf-profile.children.cycles-pp.new_sync_write
43.97 ± 4% -13.7 30.31 ± 4% perf-profile.children.cycles-pp.xfs_file_buffered_write
39.96 ± 5% -12.8 27.14 ± 5% perf-profile.children.cycles-pp.iomap_file_buffered_write
39.79 ± 5% -12.8 27.00 ± 5% perf-profile.children.cycles-pp.iomap_apply
34.94 ± 5% -11.9 23.07 ± 5% perf-profile.children.cycles-pp.iomap_write_actor
29.14 ± 12% -9.3 19.86 ± 10% perf-profile.children.cycles-pp.intel_idle
30.05 ± 12% -9.3 20.78 ± 10% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
30.05 ± 12% -9.3 20.78 ± 10% perf-profile.children.cycles-pp.cpu_startup_entry
30.05 ± 12% -9.3 20.78 ± 10% perf-profile.children.cycles-pp.do_idle
29.77 ± 12% -9.2 20.54 ± 10% perf-profile.children.cycles-pp.cpuidle_enter
29.77 ± 12% -9.2 20.54 ± 10% perf-profile.children.cycles-pp.cpuidle_enter_state
23.64 ± 6% -9.2 14.44 ± 6% perf-profile.children.cycles-pp.iomap_write_begin
29.74 ± 12% -9.2 20.55 ± 10% perf-profile.children.cycles-pp.start_secondary
12.10 ± 15% -8.6 3.46 ± 21% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
12.21 ± 14% -8.6 3.60 ± 21% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
12.01 ± 15% -8.6 3.42 ± 22% perf-profile.children.cycles-pp.lock_page_lruvec_irqsave
18.96 ± 7% -8.6 10.37 ± 8% perf-profile.children.cycles-pp.grab_cache_page_write_begin
18.81 ± 8% -8.6 10.24 ± 8% perf-profile.children.cycles-pp.pagecache_get_page
15.83 ± 9% -8.1 7.77 ± 11% perf-profile.children.cycles-pp.add_to_page_cache_lru
14.75 ± 8% -7.8 6.90 ± 8% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
14.61 ± 8% -7.8 6.77 ± 8% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
14.37 ± 8% -7.8 6.56 ± 9% perf-profile.children.cycles-pp.__close
14.34 ± 8% -7.8 6.54 ± 9% perf-profile.children.cycles-pp.dput
14.34 ± 8% -7.8 6.54 ± 9% perf-profile.children.cycles-pp.__fput
14.34 ± 8% -7.8 6.54 ± 9% perf-profile.children.cycles-pp.task_work_run
14.30 ± 8% -7.8 6.51 ± 9% perf-profile.children.cycles-pp.__dentry_kill
13.35 ± 9% -6.9 6.48 ± 9% perf-profile.children.cycles-pp.evict
13.32 ± 9% -6.9 6.46 ± 9% perf-profile.children.cycles-pp.truncate_inode_pages_range
9.02 ± 11% -5.5 3.52 ± 13% perf-profile.children.cycles-pp.__pagevec_release
8.75 ± 11% -5.3 3.49 ± 13% perf-profile.children.cycles-pp.release_pages
7.84 ± 13% -4.9 2.92 ± 15% perf-profile.children.cycles-pp.__pagevec_lru_add
7.58 ± 12% -4.7 2.90 ± 14% perf-profile.children.cycles-pp.lru_cache_add
8.21 ± 6% -3.4 4.83 ± 9% perf-profile.children.cycles-pp.__add_to_page_cache_locked
6.80 ± 7% -2.8 3.96 ± 9% perf-profile.children.cycles-pp.__mod_lruvec_page_state
5.87 ± 6% -2.6 3.24 ± 11% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
5.12 ± 6% -2.4 2.70 ± 12% perf-profile.children.cycles-pp.__mod_memcg_state
5.16 ± 7% -2.3 2.82 ± 10% perf-profile.children.cycles-pp.mem_cgroup_charge
7.14 ± 4% -1.9 5.21 ± 4% perf-profile.children.cycles-pp.iomap_write_end
4.62 ± 5% -1.5 3.09 ± 6% perf-profile.children.cycles-pp.__set_page_dirty_nobuffers
3.29 ± 5% -1.2 2.12 ± 6% perf-profile.children.cycles-pp.__set_page_dirty
2.14 ± 7% -1.0 1.15 ± 9% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
2.06 ± 6% -0.9 1.17 ± 9% perf-profile.children.cycles-pp.__mem_cgroup_charge
2.20 ± 5% -0.8 1.40 ± 5% perf-profile.children.cycles-pp.truncate_cleanup_page
2.03 ± 5% -0.8 1.28 ± 6% perf-profile.children.cycles-pp.__cancel_dirty_page
1.81 ± 6% -0.7 1.11 ± 7% perf-profile.children.cycles-pp.account_page_cleaned
4.43 ± 4% -0.7 3.78 ± 2% perf-profile.children.cycles-pp.iomap_set_range_uptodate
3.29 ± 4% -0.6 2.64 ± 3% perf-profile.children.cycles-pp.xfs_buffered_write_iomap_begin
2.33 ± 3% -0.6 1.76 ± 2% perf-profile.children.cycles-pp.xfs_file_write_checks
1.25 ± 5% -0.5 0.74 ± 6% perf-profile.children.cycles-pp.mem_cgroup_uncharge_list
1.63 ± 5% -0.5 1.15 ± 5% perf-profile.children.cycles-pp.delete_from_page_cache_batch
1.04 ± 7% -0.4 0.65 ± 8% perf-profile.children.cycles-pp.unaccount_page_cache_page
1.12 ± 5% -0.4 0.77 ± 5% perf-profile.children.cycles-pp.file_update_time
2.10 ± 5% -0.3 1.76 ± 3% perf-profile.children.cycles-pp.copy_page_from_iter_atomic
0.78 ± 4% -0.3 0.48 ± 5% perf-profile.children.cycles-pp.uncharge_batch
0.38 ± 15% -0.3 0.11 ± 19% perf-profile.children.cycles-pp.lru_add_drain_cpu
0.38 ± 15% -0.3 0.12 ± 18% perf-profile.children.cycles-pp.lru_add_drain
1.57 ± 5% -0.3 1.31 ± 2% perf-profile.children.cycles-pp.xfs_ilock
1.60 ± 5% -0.3 1.34 ± 3% perf-profile.children.cycles-pp.copyin
1.51 ± 5% -0.2 1.28 ± 3% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.56 ± 15% -0.2 0.33 ± 14% perf-profile.children.cycles-pp.xfs_vn_update_time
0.64 ± 6% -0.2 0.42 ± 7% perf-profile.children.cycles-pp.lock_page_memcg
0.61 ± 5% -0.2 0.39 ± 3% perf-profile.children.cycles-pp.page_counter_uncharge
1.36 ± 5% -0.2 1.14 ± 3% perf-profile.children.cycles-pp.down_write
0.45 ± 7% -0.2 0.25 ± 9% perf-profile.children.cycles-pp.uncharge_page
0.47 ± 5% -0.2 0.31 ± 4% perf-profile.children.cycles-pp.page_counter_cancel
0.37 ± 10% -0.2 0.22 ± 8% perf-profile.children.cycles-pp.__count_memcg_events
0.89 ± 4% -0.2 0.74 ± 4% perf-profile.children.cycles-pp.xfs_iunlock
0.92 ± 5% -0.1 0.78 ± 2% perf-profile.children.cycles-pp.__entry_text_start
0.53 ± 6% -0.1 0.38 ± 8% perf-profile.children.cycles-pp.balance_dirty_pages_ratelimited
0.81 ± 5% -0.1 0.67 ± 4% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.89 ± 5% -0.1 0.75 ± 2% perf-profile.children.cycles-pp.__alloc_pages
0.34 ± 8% -0.1 0.21 ± 9% perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.75 ± 4% -0.1 0.62 ± 5% perf-profile.children.cycles-pp.xas_load
0.79 ± 6% -0.1 0.66 ± 4% perf-profile.children.cycles-pp.iov_iter_fault_in_readable
0.54 ± 7% -0.1 0.41 ± 4% perf-profile.children.cycles-pp.xfs_buffered_write_iomap_end
0.88 ± 4% -0.1 0.76 perf-profile.children.cycles-pp.___might_sleep
0.40 ± 4% -0.1 0.28 ± 3% perf-profile.children.cycles-pp.try_charge_memcg
0.64 ± 6% -0.1 0.52 ± 5% perf-profile.children.cycles-pp.up_write
0.34 ± 7% -0.1 0.22 ± 6% perf-profile.children.cycles-pp.mem_cgroup_track_foreign_dirty_slowpath
0.63 ± 7% -0.1 0.51 ± 4% perf-profile.children.cycles-pp.__fdget_pos
0.40 ± 7% -0.1 0.30 ± 8% perf-profile.children.cycles-pp._raw_spin_lock
0.22 ± 12% -0.1 0.12 ± 8% perf-profile.children.cycles-pp.xfs_trans_alloc
0.54 ± 6% -0.1 0.44 ± 4% perf-profile.children.cycles-pp.__fget_light
0.50 ± 3% -0.1 0.40 perf-profile.children.cycles-pp.percpu_counter_add_batch
0.29 ± 6% -0.1 0.20 ± 4% perf-profile.children.cycles-pp.page_counter_try_charge
0.66 ± 5% -0.1 0.56 ± 2% perf-profile.children.cycles-pp.get_page_from_freelist
0.20 ± 13% -0.1 0.10 ± 8% perf-profile.children.cycles-pp.xfs_trans_reserve
0.59 ± 5% -0.1 0.50 ± 4% perf-profile.children.cycles-pp.__get_user_nocheck_1
0.52 ± 2% -0.1 0.42 ± 3% perf-profile.children.cycles-pp.xfs_break_layouts
0.19 ± 14% -0.1 0.09 ± 7% perf-profile.children.cycles-pp.xfs_log_reserve
0.47 ± 4% -0.1 0.38 ± 4% perf-profile.children.cycles-pp.__mod_lruvec_state
0.69 ± 6% -0.1 0.60 ± 2% perf-profile.children.cycles-pp.security_file_permission
0.65 ± 5% -0.1 0.56 ± 3% perf-profile.children.cycles-pp.xas_store
0.19 ± 7% -0.1 0.10 ± 7% perf-profile.children.cycles-pp.propagate_protected_usage
0.29 ± 7% -0.1 0.21 ± 7% perf-profile.children.cycles-pp.xfs_bmbt_to_iomap
0.44 ± 4% -0.1 0.36 ± 3% perf-profile.children.cycles-pp.xfs_trans_read_buf_map
0.14 ± 6% -0.1 0.06 ± 6% perf-profile.children.cycles-pp.xfs_inactive_truncate
0.48 ± 4% -0.1 0.40 ± 7% perf-profile.children.cycles-pp.xfs_file_write_iter
0.55 ± 6% -0.1 0.47 ± 3% perf-profile.children.cycles-pp.common_file_perm
0.43 ± 4% -0.1 0.36 ± 3% perf-profile.children.cycles-pp.__cond_resched
0.11 ± 17% -0.1 0.03 ± 70% perf-profile.children.cycles-pp.xlog_grant_add_space
0.56 ± 5% -0.1 0.48 ± 5% perf-profile.children.cycles-pp.xfs_ifree
0.38 ± 3% -0.1 0.30 ± 4% perf-profile.children.cycles-pp.__mod_node_page_state
0.16 ± 10% -0.1 0.08 ± 5% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.38 ± 6% -0.1 0.31 ± 4% perf-profile.children.cycles-pp.find_lock_entries
0.21 ± 5% -0.1 0.15 ± 7% perf-profile.children.cycles-pp.__unlock_page_memcg
0.31 ± 6% -0.1 0.25 ± 5% perf-profile.children.cycles-pp.current_time
0.30 ± 6% -0.1 0.24 ± 4% perf-profile.children.cycles-pp.xfs_buf_read_map
0.27 ± 5% -0.1 0.21 ± 4% perf-profile.children.cycles-pp.xfs_buf_find
0.28 ± 5% -0.1 0.22 ± 4% perf-profile.children.cycles-pp.xfs_buf_get_map
0.35 ± 5% -0.1 0.29 ± 2% perf-profile.children.cycles-pp.unlock_page
0.09 ± 15% -0.1 0.04 ± 71% perf-profile.children.cycles-pp.xfs_log_ticket_ungrant
0.30 ± 5% -0.1 0.25 ± 5% perf-profile.children.cycles-pp.xfs_break_leased_layouts
0.17 ± 9% -0.1 0.12 ± 4% perf-profile.children.cycles-pp.xfs_iunlink_remove
0.18 ± 8% -0.1 0.14 ± 9% perf-profile.children.cycles-pp.xfs_read_agi
0.35 ± 5% -0.0 0.31 ± 5% perf-profile.children.cycles-pp.xfs_iext_lookup_extent
0.29 ± 3% -0.0 0.25 ± 8% perf-profile.children.cycles-pp.xas_start
0.26 ± 4% -0.0 0.22 ± 3% perf-profile.children.cycles-pp.__fsnotify_parent
0.24 ± 7% -0.0 0.20 ± 4% perf-profile.children.cycles-pp.free_unref_page_list
0.15 ± 7% -0.0 0.12 ± 6% perf-profile.children.cycles-pp.xfs_iread_extents
0.15 ± 4% -0.0 0.12 ± 6% perf-profile.children.cycles-pp.kmem_cache_alloc
0.16 ± 9% -0.0 0.13 ± 3% perf-profile.children.cycles-pp.cgroup_rstat_updated
0.07 ± 10% -0.0 0.03 ± 70% perf-profile.children.cycles-pp.schedule_idle
0.14 ± 3% -0.0 0.11 ± 3% perf-profile.children.cycles-pp.try_to_wake_up
0.15 ± 3% -0.0 0.11 ± 6% perf-profile.children.cycles-pp.xfs_btree_lookup
0.25 ± 3% -0.0 0.21 ± 3% perf-profile.children.cycles-pp.xfs_da3_node_lookup_int
0.23 ± 3% -0.0 0.20 ± 6% perf-profile.children.cycles-pp.node_dirty_ok
0.19 ± 3% -0.0 0.16 ± 5% perf-profile.children.cycles-pp.rcu_all_qs
0.10 -0.0 0.07 ± 7% perf-profile.children.cycles-pp.up
0.11 -0.0 0.08 ± 6% perf-profile.children.cycles-pp.xfs_buf_item_release
0.10 ± 3% -0.0 0.07 ± 5% perf-profile.children.cycles-pp.xfs_buf_unlock
0.08 ± 10% -0.0 0.05 ± 46% perf-profile.children.cycles-pp.__x64_sys_write
0.10 ± 4% -0.0 0.07 ± 9% perf-profile.children.cycles-pp.xfs_btree_lookup_get_block
0.10 ± 7% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.xfs_buf_lock
0.15 ± 7% -0.0 0.12 ± 5% perf-profile.children.cycles-pp.xfs_dir_createname
0.19 ± 5% -0.0 0.16 ± 4% perf-profile.children.cycles-pp.xfs_da_read_buf
0.13 ± 10% -0.0 0.10 ± 3% perf-profile.children.cycles-pp.xfs_bmapi_reserve_delalloc
0.09 ± 10% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.down
0.09 ± 10% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.__down
0.12 ± 22% -0.0 0.09 ± 4% perf-profile.children.cycles-pp.file_remove_privs
0.14 ± 7% -0.0 0.12 ± 4% perf-profile.children.cycles-pp.xfs_dir2_node_addname
0.15 ± 10% -0.0 0.12 ± 6% perf-profile.children.cycles-pp.__xa_set_mark
0.08 ± 8% -0.0 0.06 ± 9% perf-profile.children.cycles-pp.schedule_timeout
0.14 ± 4% -0.0 0.11 ± 6% perf-profile.children.cycles-pp.rcu_read_unlock_strict
0.14 ± 9% -0.0 0.12 ± 5% perf-profile.children.cycles-pp.aa_file_perm
0.11 ± 12% -0.0 0.09 ± 5% perf-profile.children.cycles-pp.timestamp_truncate
0.08 ± 5% -0.0 0.06 ± 9% perf-profile.children.cycles-pp.xfs_btree_read_buf_block
0.11 ± 4% -0.0 0.09 ± 7% perf-profile.children.cycles-pp.__list_add_valid
0.14 ± 4% -0.0 0.12 ± 9% perf-profile.children.cycles-pp.alloc_pages
0.12 ± 8% -0.0 0.10 ± 8% perf-profile.children.cycles-pp.xas_create
0.13 ± 4% -0.0 0.11 ± 6% perf-profile.children.cycles-pp.entry_SYSCALL_64_safe_stack
0.10 ± 10% -0.0 0.08 ± 10% perf-profile.children.cycles-pp.kmem_cache_free
0.07 ± 11% -0.0 0.04 ± 45% perf-profile.children.cycles-pp.xas_alloc
0.10 -0.0 0.08 ± 8% perf-profile.children.cycles-pp.iomap_page_create
0.10 ± 7% -0.0 0.08 ± 4% perf-profile.children.cycles-pp.free_unref_page_commit
0.06 ± 7% -0.0 0.04 ± 45% perf-profile.children.cycles-pp.xfs_perag_get
0.09 ± 5% -0.0 0.07 ± 7% perf-profile.children.cycles-pp.xfs_dir2_node_addname_int
0.12 ± 5% -0.0 0.10 ± 3% perf-profile.children.cycles-pp._xfs_trans_bjoin
0.09 ± 7% -0.0 0.07 ± 10% perf-profile.children.cycles-pp.xfs_get_extsz_hint
0.07 ± 11% -0.0 0.05 ± 7% perf-profile.children.cycles-pp.xfs_bmap_add_extent_hole_delay
0.12 ± 4% -0.0 0.10 ± 4% perf-profile.children.cycles-pp.xfs_da3_node_read
0.10 ± 4% -0.0 0.09 ± 5% perf-profile.children.cycles-pp.rwsem_wake
0.09 ± 6% -0.0 0.07 ± 6% perf-profile.children.cycles-pp.wait_for_stable_page
0.12 -0.0 0.11 ± 4% perf-profile.children.cycles-pp.xa_get_order
0.09 ± 8% -0.0 0.07 ± 6% perf-profile.children.cycles-pp.xas_find
0.08 ± 6% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.wake_up_q
0.08 +0.0 0.10 ± 9% perf-profile.children.cycles-pp.update_load_avg
0.19 ± 7% +0.0 0.22 ± 4% perf-profile.children.cycles-pp.__schedule
0.09 ± 5% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.__xfs_btree_check_sblock
0.05 ± 8% +0.0 0.09 ± 11% perf-profile.children.cycles-pp.xfs_iunlink
0.13 ± 5% +0.1 0.18 ± 5% perf-profile.children.cycles-pp.schedule
0.14 ± 6% +0.1 0.19 ± 3% perf-profile.children.cycles-pp.xfs_btree_get_rec
0.00 +0.1 0.06 ± 9% perf-profile.children.cycles-pp.update_cfs_group
0.13 ± 7% +0.1 0.20 ± 5% perf-profile.children.cycles-pp.xfs_btree_increment
0.15 ± 3% +0.1 0.22 ± 5% perf-profile.children.cycles-pp.xfs_btree_check_sblock
0.15 ± 7% +0.1 0.21 ± 3% perf-profile.children.cycles-pp.disk_wrt
0.08 ± 10% +0.1 0.15 ± 7% perf-profile.children.cycles-pp.task_tick_fair
0.53 ± 5% +0.1 0.60 ± 3% perf-profile.children.cycles-pp.xfs_vn_unlink
0.54 ± 5% +0.1 0.62 ± 3% perf-profile.children.cycles-pp.vfs_unlink
0.53 ± 5% +0.1 0.60 ± 3% perf-profile.children.cycles-pp.xfs_remove
0.15 ± 8% +0.1 0.22 ± 4% perf-profile.children.cycles-pp.scheduler_tick
0.03 ±100% +0.1 0.11 ± 8% perf-profile.children.cycles-pp.pick_next_task_fair
0.28 ± 3% +0.1 0.37 ± 3% perf-profile.children.cycles-pp.xfs_inobt_get_rec
0.01 ±223% +0.1 0.10 ± 4% perf-profile.children.cycles-pp.find_busiest_group
0.00 +0.1 0.10 ± 5% perf-profile.children.cycles-pp.update_sd_lb_stats
0.02 ±142% +0.1 0.12 ± 4% perf-profile.children.cycles-pp.load_balance
0.23 ± 10% +0.1 0.32 ± 2% perf-profile.children.cycles-pp.tick_sched_handle
0.32 ± 5% +0.1 0.42 ± 2% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.00 +0.1 0.10 ± 8% perf-profile.children.cycles-pp.newidle_balance
0.34 ± 4% +0.1 0.44 ± 2% perf-profile.children.cycles-pp.xfs_dialloc
0.22 ± 11% +0.1 0.32 ± 2% perf-profile.children.cycles-pp.update_process_times
0.24 ± 8% +0.1 0.35 ± 2% perf-profile.children.cycles-pp.tick_sched_timer
0.56 ± 9% +0.1 0.67 ± 3% perf-profile.children.cycles-pp.hrtimer_interrupt
0.57 ± 9% +0.1 0.68 ± 3% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.12 ± 6% +0.1 0.25 ± 2% perf-profile.children.cycles-pp.memcpy_erms
0.52 ± 2% +0.1 0.64 ± 3% perf-profile.children.cycles-pp.xfs_check_agi_freecount
0.15 ± 4% +0.2 0.30 ± 3% perf-profile.children.cycles-pp.xfs_buf_item_format_segment
0.16 ± 4% +0.2 0.32 ± 3% perf-profile.children.cycles-pp.xfs_buf_item_format
1.55 ± 3% +0.2 1.76 perf-profile.children.cycles-pp.rwsem_spin_on_owner
0.00 +0.8 0.84 ± 4% perf-profile.children.cycles-pp.xfs_inodegc_worker
0.00 +0.9 0.90 ± 4% perf-profile.children.cycles-pp.process_one_work
0.00 +0.9 0.90 ± 4% perf-profile.children.cycles-pp.worker_thread
0.01 ±223% +0.9 0.93 ± 4% perf-profile.children.cycles-pp.ret_from_fork
0.01 ±223% +0.9 0.93 ± 4% perf-profile.children.cycles-pp.kthread
67.40 ± 5% +8.7 76.14 ± 2% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
67.02 ± 5% +8.8 75.81 ± 2% perf-profile.children.cycles-pp.do_syscall_64
2.02 ± 3% +15.4 17.42 ± 2% perf-profile.children.cycles-pp.creat64
1.99 ± 4% +15.4 17.40 ± 2% perf-profile.children.cycles-pp.do_sys_open
1.99 ± 3% +15.4 17.40 ± 2% perf-profile.children.cycles-pp.do_sys_openat2
1.96 ± 4% +15.4 17.38 ± 2% perf-profile.children.cycles-pp.path_openat
1.96 ± 3% +15.4 17.38 ± 2% perf-profile.children.cycles-pp.do_filp_open
2.23 ± 6% +15.6 17.82 ± 2% perf-profile.children.cycles-pp.unlink
2.18 ± 6% +15.6 17.78 ± 2% perf-profile.children.cycles-pp.do_unlinkat
0.66 ± 19% +30.7 31.40 ± 2% perf-profile.children.cycles-pp.osq_lock
2.34 ± 8% +31.0 33.33 ± 2% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
29.14 ± 12% -9.3 19.86 ± 10% perf-profile.self.cycles-pp.intel_idle
12.10 ± 15% -8.6 3.46 ± 21% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
5.11 ± 6% -2.4 2.69 ± 12% perf-profile.self.cycles-pp.__mod_memcg_state
2.11 ± 7% -1.0 1.14 ± 9% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
2.17 ± 10% -0.8 1.34 ± 9% perf-profile.self.cycles-pp.__mod_lruvec_page_state
4.40 ± 4% -0.7 3.75 ± 2% perf-profile.self.cycles-pp.iomap_set_range_uptodate
1.28 ± 8% -0.6 0.66 ± 14% perf-profile.self.cycles-pp.__mem_cgroup_charge
0.95 ± 9% -0.5 0.49 ± 13% perf-profile.self.cycles-pp.mem_cgroup_charge
1.50 ± 5% -0.2 1.27 ± 2% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.62 ± 6% -0.2 0.40 ± 7% perf-profile.self.cycles-pp.lock_page_memcg
0.44 ± 7% -0.2 0.24 ± 8% perf-profile.self.cycles-pp.uncharge_page
0.92 ± 5% -0.2 0.72 ± 4% perf-profile.self.cycles-pp.xfs_buffered_write_iomap_begin
1.13 ± 4% -0.2 0.94 ± 3% perf-profile.self.cycles-pp.pagecache_get_page
0.62 ± 8% -0.2 0.45 ± 7% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
0.37 ± 10% -0.2 0.22 ± 8% perf-profile.self.cycles-pp.__count_memcg_events
0.46 ± 5% -0.2 0.31 ± 4% perf-profile.self.cycles-pp.page_counter_cancel
1.04 ± 2% -0.2 0.88 ± 3% perf-profile.self.cycles-pp.iomap_apply
0.80 ± 6% -0.1 0.66 ± 4% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.47 ± 5% -0.1 0.34 ± 8% perf-profile.self.cycles-pp.balance_dirty_pages_ratelimited
0.68 ± 6% -0.1 0.55 ± 4% perf-profile.self.cycles-pp.down_write
0.73 ± 4% -0.1 0.60 ± 2% perf-profile.self.cycles-pp.write
0.50 ± 5% -0.1 0.38 ± 4% perf-profile.self.cycles-pp.__pagevec_lru_add
0.86 ± 4% -0.1 0.74 perf-profile.self.cycles-pp.___might_sleep
0.61 ± 5% -0.1 0.49 ± 6% perf-profile.self.cycles-pp.up_write
0.34 ± 6% -0.1 0.22 ± 5% perf-profile.self.cycles-pp.mem_cgroup_track_foreign_dirty_slowpath
0.67 ± 3% -0.1 0.57 ± 5% perf-profile.self.cycles-pp.vfs_write
0.40 ± 7% -0.1 0.29 ± 3% perf-profile.self.cycles-pp.xfs_buffered_write_iomap_end
0.53 ± 4% -0.1 0.43 ± 6% perf-profile.self.cycles-pp.iomap_write_end
0.62 ± 4% -0.1 0.53 ± 3% perf-profile.self.cycles-pp.iomap_write_begin
0.58 ± 5% -0.1 0.48 ± 3% perf-profile.self.cycles-pp.__get_user_nocheck_1
0.52 ± 6% -0.1 0.43 ± 3% perf-profile.self.cycles-pp.__fget_light
0.48 ± 4% -0.1 0.40 ± 4% perf-profile.self.cycles-pp.xas_load
0.19 ± 7% -0.1 0.10 ± 8% perf-profile.self.cycles-pp.propagate_protected_usage
0.51 ± 5% -0.1 0.42 ± 5% perf-profile.self.cycles-pp.copy_page_from_iter_atomic
0.56 ± 7% -0.1 0.47 ± 5% perf-profile.self.cycles-pp.xfs_file_buffered_write
0.40 ± 6% -0.1 0.32 perf-profile.self.cycles-pp.release_pages
0.41 ± 4% -0.1 0.32 ± 2% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.28 ± 6% -0.1 0.20 ± 7% perf-profile.self.cycles-pp.xfs_bmbt_to_iomap
0.48 ± 4% -0.1 0.40 ± 7% perf-profile.self.cycles-pp.xfs_file_write_iter
0.11 ± 17% -0.1 0.03 ± 70% perf-profile.self.cycles-pp.xlog_grant_add_space
0.24 ± 9% -0.1 0.16 ± 5% perf-profile.self.cycles-pp.page_counter_try_charge
0.46 ± 3% -0.1 0.39 ± 3% perf-profile.self.cycles-pp.iomap_write_actor
0.36 ± 4% -0.1 0.29 ± 4% perf-profile.self.cycles-pp.__mod_node_page_state
0.26 ± 7% -0.1 0.20 ± 4% perf-profile.self.cycles-pp.file_update_time
0.31 ± 6% -0.1 0.25 ± 4% perf-profile.self.cycles-pp.find_lock_entries
0.32 ± 5% -0.1 0.26 ± 2% perf-profile.self.cycles-pp.__set_page_dirty_nobuffers
0.20 ± 8% -0.1 0.14 ± 6% perf-profile.self.cycles-pp.__unlock_page_memcg
0.28 ± 5% -0.1 0.22 ± 4% perf-profile.self.cycles-pp.__add_to_page_cache_locked
0.39 ± 4% -0.1 0.34 ± 2% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.34 ± 4% -0.1 0.29 ± 3% perf-profile.self.cycles-pp.unlock_page
0.41 ± 5% -0.1 0.35 ± 4% perf-profile.self.cycles-pp.__entry_text_start
0.29 ± 5% -0.1 0.23 ± 2% perf-profile.self.cycles-pp.new_sync_write
0.34 ± 6% -0.1 0.29 ± 3% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.41 ± 7% -0.1 0.36 ± 3% perf-profile.self.cycles-pp.common_file_perm
0.10 ± 11% -0.1 0.05 ± 46% perf-profile.self.cycles-pp.uncharge_batch
0.08 ± 13% -0.1 0.02 ± 99% perf-profile.self.cycles-pp.xfs_log_ticket_ungrant
0.29 ± 5% -0.1 0.24 ± 5% perf-profile.self.cycles-pp.xfs_break_leased_layouts
0.41 ± 4% -0.1 0.36 ± 2% perf-profile.self.cycles-pp.__might_sleep
0.08 ± 12% -0.0 0.02 ± 99% perf-profile.self.cycles-pp.lock_page_lruvec_irqsave
0.22 ± 6% -0.0 0.18 ± 5% perf-profile.self.cycles-pp.__cond_resched
0.23 ± 5% -0.0 0.19 ± 3% perf-profile.self.cycles-pp.xfs_ilock
0.22 ± 4% -0.0 0.18 ± 3% perf-profile.self.cycles-pp.xfs_file_write_checks
0.19 ± 6% -0.0 0.15 ± 6% perf-profile.self.cycles-pp.ksys_write
0.26 ± 3% -0.0 0.22 ± 8% perf-profile.self.cycles-pp.xas_start
0.34 ± 5% -0.0 0.30 ± 5% perf-profile.self.cycles-pp.xfs_iext_lookup_extent
0.24 ± 3% -0.0 0.21 ± 3% perf-profile.self.cycles-pp.__fsnotify_parent
0.16 ± 9% -0.0 0.12 ± 6% perf-profile.self.cycles-pp.cgroup_rstat_updated
0.15 ± 7% -0.0 0.12 ± 6% perf-profile.self.cycles-pp.xfs_iread_extents
0.28 ± 6% -0.0 0.24 ± 3% perf-profile.self.cycles-pp.xas_store
0.24 ± 4% -0.0 0.20 ± 3% perf-profile.self.cycles-pp.exit_to_user_mode_prepare
0.15 ± 6% -0.0 0.12 ± 6% perf-profile.self.cycles-pp.__set_page_dirty
0.26 ± 4% -0.0 0.23 ± 4% perf-profile.self.cycles-pp.xfs_iunlock
0.26 ± 5% -0.0 0.23 ± 3% perf-profile.self.cycles-pp.__list_del_entry_valid
0.20 ± 8% -0.0 0.17 ± 6% perf-profile.self.cycles-pp.iov_iter_fault_in_readable
0.16 ± 5% -0.0 0.13 ± 5% perf-profile.self.cycles-pp.node_dirty_ok
0.18 ± 8% -0.0 0.14 ± 10% perf-profile.self.cycles-pp.iomap_file_buffered_write
0.17 ± 3% -0.0 0.14 ± 4% perf-profile.self.cycles-pp.xfs_break_layouts
0.08 ± 10% -0.0 0.05 ± 7% perf-profile.self.cycles-pp.copyin
0.12 ± 6% -0.0 0.10 ± 5% perf-profile.self.cycles-pp.try_charge_memcg
0.13 ± 7% -0.0 0.10 ± 4% perf-profile.self.cycles-pp.__cancel_dirty_page
0.16 ± 8% -0.0 0.14 ± 6% perf-profile.self.cycles-pp.page_mapping
0.18 ± 6% -0.0 0.15 ± 3% perf-profile.self.cycles-pp._raw_spin_lock
0.15 ± 7% -0.0 0.13 ± 5% perf-profile.self.cycles-pp.generic_write_checks
0.13 ± 4% -0.0 0.11 ± 6% perf-profile.self.cycles-pp.entry_SYSCALL_64_safe_stack
0.14 ± 8% -0.0 0.12 ± 3% perf-profile.self.cycles-pp.rmqueue
0.10 -0.0 0.08 ± 8% perf-profile.self.cycles-pp.iomap_page_create
0.06 ± 7% -0.0 0.04 ± 44% perf-profile.self.cycles-pp.alloc_pages
0.10 ± 4% -0.0 0.08 ± 8% perf-profile.self.cycles-pp.__mod_lruvec_state
0.09 ± 5% -0.0 0.07 ± 9% perf-profile.self.cycles-pp.rcu_read_unlock_strict
0.11 ± 6% -0.0 0.09 ± 5% perf-profile.self.cycles-pp.__list_add_valid
0.08 ± 8% -0.0 0.06 ± 6% perf-profile.self.cycles-pp.mem_cgroup_update_lru_size
0.14 ± 5% -0.0 0.12 ± 9% perf-profile.self.cycles-pp.rcu_all_qs
0.10 ± 15% -0.0 0.08 ± 6% perf-profile.self.cycles-pp.timestamp_truncate
0.10 ± 9% -0.0 0.08 perf-profile.self.cycles-pp.security_file_permission
0.08 ± 4% +0.0 0.11 ± 8% perf-profile.self.cycles-pp.__xfs_btree_check_sblock
0.01 ±223% +0.0 0.06 ± 6% perf-profile.self.cycles-pp.xfs_btree_increment
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.xfs_btree_get_rec
0.01 ±223% +0.1 0.07 ± 11% perf-profile.self.cycles-pp.update_load_avg
0.00 +0.1 0.06 ± 9% perf-profile.self.cycles-pp.update_cfs_group
0.11 ± 8% +0.1 0.18 ± 4% perf-profile.self.cycles-pp.disk_wrt
0.00 +0.1 0.07 ± 6% perf-profile.self.cycles-pp.update_sd_lb_stats
0.12 ± 6% +0.1 0.24 perf-profile.self.cycles-pp.memcpy_erms
1.54 ± 3% +0.2 1.74 perf-profile.self.cycles-pp.rwsem_spin_on_owner
0.66 ± 19% +30.6 31.21 ± 2% perf-profile.self.cycles-pp.osq_lock



aim7.jobs-per-min

560000 +------------------------------------------------------------------+
| + |
540000 |-+.++.+.+. .+ +.+.+.+ +.+.+.+.++. + : .+ |
|+ + :+ + : +.+.+ +.+ |
| + + |
520000 |-+ |
| |
500000 |-+ |
| |
480000 |-+ |
| |
| O |
460000 |-+ O O O O O O O O O OO O O OO O |
| O OO O O O OO O O O O O O O O O |
440000 +------------------------------------------------------------------+


aim7.time.elapsed_time

41 +----------------------------------------------------------------------+
| O O O O O OO O O O |
40 |-+ O O O O OO O O O O O O O OO O O O O |
39 |-+ O O O O |
| |
38 |-+ |
| |
37 |-+ |
| |
36 |-+ |
35 |-+ |
| |
34 |.+ .+ +. .+. |
| +.+.+.+.+ :+ +.+.+.+ +.+.+.+.++.+.+.+. .+.+.+ |
33 +----------------------------------------------------------------------+


aim7.time.elapsed_time.max

41 +----------------------------------------------------------------------+
| O O O O O OO O O O |
40 |-+ O O O O OO O O O O O O O OO O O O O |
39 |-+ O O O O |
| |
38 |-+ |
| |
37 |-+ |
| |
36 |-+ |
35 |-+ |
| |
34 |.+ .+ +. .+. |
| +.+.+.+.+ :+ +.+.+.+ +.+.+.+.++.+.+.+. .+.+.+ |
33 +----------------------------------------------------------------------+


aim7.time.voluntary_context_switches

950000 +------------------------------------------------------------------+
|.+.++. .+.+.++.+.+.+. .++.+.+. .++.+.+.+.++.+.+ |
900000 |-+ + + + |
| |
850000 |-+ |
800000 |-+ |
| |
750000 |-+ |
| |
700000 |-+ |
650000 |-+ |
| |
600000 |-O OO O O O OO O O O O OO O O O O O O O OO O O O O O O OO O |
| O O O O |
550000 +------------------------------------------------------------------+


aim7.time.involuntary_context_switches

20000 +-------------------------------------------------------------------+
18000 |-+ O O O O OO O O O OO O O O O O O O OO O O |
| O O O O O O O O O O O O O O |
16000 |-+ |
14000 |-+ |
| |
12000 |-+ |
10000 |-+ |
8000 |-+ |
| |
6000 |-+ |
4000 |-+ |
|. .+. .+.+. .+. +. .+. .+ .+. |
2000 |-+ ++ + + +.+ + +.+.+.+.+.++.+ +.+ |
0 +-------------------------------------------------------------------+


[*] bisect-good sample
[O] bisect-bad sample



Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation

Thanks,
Oliver Sang


Attachments:
(No filename) (81.08 kB)
config-5.14.0-rc4-00010-g6df693ed7ba9 (178.04 kB)
job-script (8.20 kB)
job.yaml (5.57 kB)
reproduce (1.04 kB)

2021-08-14 23:29:55

by Dave Chinner

[permalink] [raw]
Subject: Re: [xfs] 6df693ed7b: aim7.jobs-per-min -15.7% regression

On Mon, Aug 09, 2021 at 02:42:48PM +0800, kernel test robot wrote:
>
>
> Greeting,
>
> FYI, we noticed a -15.7% regression of aim7.jobs-per-min due to commit:
>
>
> commit: 6df693ed7ba9ec03cafc38d5064de376a11243e2 ("xfs: per-cpu deferred inode inactivation queues")
> https://git.kernel.org/cgit/linux/kernel/git/djwong/xfs-linux.git xfs-5.15-merge
>
>
> in testcase: aim7
> on test machine: 88 threads 2 sockets Intel(R) Xeon(R) Gold 6238M CPU @ 2.10GHz with 128G memory
> with following parameters:
>
> disk: 4BRD_12G
> md: RAID1
> fs: xfs
> test: disk_wrt
> load: 3000
> cpufreq_governor: performance
> ucode: 0x5003006
>
> test-description: AIM7 is a traditional UNIX system level benchmark suite which is used to test and measure the performance of multiuser system.
> test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/

.....

> commit:
> 52cba078c8 ("xfs: detach dquots from inode if we don't need to inactivate it")
> 6df693ed7b ("xfs: per-cpu deferred inode inactivation queues")
>
> 52cba078c8b4b003 6df693ed7ba9ec03cafc38d5064
> ---------------- ---------------------------
> %stddev %change %stddev
> \ | \
> 539418 -15.7% 454630 aim7.jobs-per-min
> 33.57 +18.5% 39.79 aim7.time.elapsed_time
> 33.57 +18.5% 39.79 aim7.time.elapsed_time.max
> 2056 ? 7% +779.6% 18087 ? 2% aim7.time.involuntary_context_switches
> 673.92 ? 4% +29.2% 870.54 ? 2% aim7.time.system_time
> 912022 -34.2% 599694 aim7.time.voluntary_context_switches

OK, performance went down, system time went up massively. I'm
betting the improvement made something else fast enough to trigger
spinning lock breakdown somewhere in the kernel...

> 0.01 ?223% +0.9 0.93 ? 4% perf-profile.children.cycles-pp.ret_from_fork
> 0.01 ?223% +0.9 0.93 ? 4% perf-profile.children.cycles-pp.kthread
> 67.40 ? 5% +8.7 76.14 ? 2% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
> 67.02 ? 5% +8.8 75.81 ? 2% perf-profile.children.cycles-pp.do_syscall_64
> 2.02 ? 3% +15.4 17.42 ? 2% perf-profile.children.cycles-pp.creat64
> 1.99 ? 4% +15.4 17.40 ? 2% perf-profile.children.cycles-pp.do_sys_open
> 1.99 ? 3% +15.4 17.40 ? 2% perf-profile.children.cycles-pp.do_sys_openat2
> 1.96 ? 4% +15.4 17.38 ? 2% perf-profile.children.cycles-pp.path_openat
> 1.96 ? 3% +15.4 17.38 ? 2% perf-profile.children.cycles-pp.do_filp_open
> 2.23 ? 6% +15.6 17.82 ? 2% perf-profile.children.cycles-pp.unlink
> 2.18 ? 6% +15.6 17.78 ? 2% perf-profile.children.cycles-pp.do_unlinkat
> 0.66 ? 19% +30.7 31.40 ? 2% perf-profile.children.cycles-pp.osq_lock
> 2.34 ? 8% +31.0 33.33 ? 2% perf-profile.children.cycles-pp.rwsem_down_write_slowpath

Well, looky there. Lots of new rwsem lock contention.

....

> 1.54 ? 3% +0.2 1.74 perf-profile.self.cycles-pp.rwsem_spin_on_owner
> 0.66 ? 19% +30.6 31.21 ? 2% perf-profile.self.cycles-pp.osq_lock

Yup, we now have catastrophic spin-on-owner breakdown of a rwsem.

IOWs, what this commit has done is put pressure on a rwsem in a
different way, and on this specific machine configuration with this
specific workload, it results in the rwsem breaking down into
catastrophic spin-on-owner contention. This looks like a rwsem bug,
not a bug in the XFS code.

Given that this is showing up in the open and unlink paths, this
is likely the parent directory inode lock being contended due to
concurrent modifications in the same directory.

That correlates with the change that the deferred inactivation
brings to unlink workloads - the unlink() syscall does about a third
of the work it used to, so it ends up locking the directory inode
*much* more frequently with only very short pauses in userspace to
make the next unlink() call.
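
A minimal sketch of that contended pattern (illustrative only - this
is not the actual aim7 disk_wrt job, and the thread and iteration
counts are arbitrary) looks something like:

#include <errno.h>
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

#define NTHREADS	88	/* one per CPU on the test box */
#define ITERS		10000

/* Each thread creates and unlinks its own file name, but all names
 * live in one shared directory, so every creat() and unlink() takes
 * the parent directory inode rwsem exclusive. */
static void *hammer(void *arg)
{
	char path[64];
	int i, fd;

	snprintf(path, sizeof(path), "shared_dir/f%ld", (long)arg);
	for (i = 0; i < ITERS; i++) {
		fd = creat(path, 0644);		/* dir lock, take 1 */
		if (fd >= 0)
			close(fd);
		unlink(path);			/* dir lock, take 2 */
	}
	return NULL;
}

int main(void)
{
	pthread_t t[NTHREADS];
	long i;

	if (mkdir("shared_dir", 0755) < 0 && errno != EEXIST)
		exit(1);
	for (i = 0; i < NTHREADS; i++)
		pthread_create(&t[i], NULL, hammer, (void *)i);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(t[i], NULL);
	return 0;
}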

Because of the way the worker threads are CPU bound and all the XFS
objects involved in repeated directory ops will stay CPU affine, the
unlink() syscall is likely to run hot and not block until the queue
limits are hit, at which point it is forced to throttle and let the
worker run to drain the queue.

Now, rwsems are *supposed to be sleeping locks*. In which case, we
should switch away on contention and let the CPU be used for some
other useful work until we are granted the lock. But, no, spinning
on exclusive locks makes some other benchmark go faster, so now we
burn *30% of 88 CPUs* spinning on rwsems across this benchmark.
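
The shape of that spinning path (a conceptual sketch, not the exact
kernel/locking/rwsem.c code - owner_is_running() here is a
hypothetical stand-in for the real owner-state checks):

	/* A contended writer joins a per-CPU MCS queue (osq_lock)
	 * and burns cycles while the lock owner is still running on
	 * a CPU, instead of going to sleep: */
	if (osq_lock(&sem->osq)) {	/* the osq_lock in the profile */
		while (owner_is_running(sem))	/* hypothetical helper */
			cpu_relax();
		osq_unlock(&sem->osq);
	}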

So this regression is caused by an rwsem bug. XFS is, as usual, just
the messenger for problems arising from the misbehaviour of rwsems.

Try turning off rwsem spin-on-owner behaviour and see what
difference that makes to performance...
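
One blunt way to test that (a sketch, assuming kernel/Kconfig.locks
still defines the option as a def_bool) is to force spin-on-owner
off and rebuild:

	config RWSEM_SPIN_ON_OWNER
	-	def_bool y
	+	def_bool n
		depends on SMP && ARCH_SUPPORTS_ATOMIC_RMW

With that applied, contended writers should sleep immediately and
the osq_lock() time in the profile should disappear.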

Cheers,

Dave.

--
Dave Chinner
[email protected]

2021-08-14 23:41:55

by Dave Chinner

[permalink] [raw]
Subject: Re: [xfs] 6df693ed7b: aim7.jobs-per-min -15.7% regression

On Mon, Aug 09, 2021 at 05:31:14PM +0800, Hillf Danton wrote:
> On Mon, 9 Aug 2021 14:42:48 +0800
> >
> > FYI, we noticed a -15.7% regression of aim7.jobs-per-min due to commit:
> >
> >
> > commit: 6df693ed7ba9ec03cafc38d5064de376a11243e2 ("xfs: per-cpu deferred inode inactivation queues")
> > https://git.kernel.org/cgit/linux/kernel/git/djwong/xfs-linux.git xfs-5.15-merge
> >
> >
> > in testcase: aim7
> > on test machine: 88 threads 2 sockets Intel(R) Xeon(R) Gold 6238M CPU @ 2.10GHz with 128G memory
> > with following parameters:
> >
> > disk: 4BRD_12G
> > md: RAID1
> > fs: xfs
> > test: disk_wrt
> > load: 3000
> > cpufreq_governor: performance
> > ucode: 0x5003006
> >
>
> See if scheduling can help, assuming a bound worker should run for
> as short a time as it can.
>
> The change below is
> 1/ add a schedule point in the inodegc worker, and as compensation
> allow it to repeat gc until no more gc work is available.

Do you have any evidence that this is a problem?

I mean, we bound queue depth to 256 items, and my direct
measurements of workloads show that typical inactivation
processing does not block and takes roughly 50-100us per item. On
inodes that require lots of work (maybe minutes!), we end up
sleeping on locks or resource reservations fairly quickly, hence we
don't tend to rack up a significant amount of uninterrupted CPU time
in this loop at all.
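
To put a rough worst-case bound on it from those figures:

	256 items * 100 us/item = 25.6 ms

of uninterrupted worker CPU time per full queue drain.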

> 2/ make inodegc_wq unbound so it can spawn workers, because they
> are no longer potential CPU hogs (this part is optional, not
> mandatory).
>
> to see if hot cache outweighs spawning of workers.

NACK. We already know what impact that has: moving to bound
workqueues erased a 50-60% performance degradation in the original
queueing mechanism that used unbound workqueues and required
inactivation to run on cold caches. IOWs, performance analysis led
us to short, bounded-depth per-cpu queues and single-depth bound
per-cpu workqueues. We don't do complex stuff like this unless it is
necessary for performance and scalability...
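
For reference, the distinction in question (illustrative workqueue
API usage only, not the actual XFS setup code):

	/* A bound per-cpu workqueue runs the work item on the CPU
	 * that queued it, so inactivation runs with the inode's
	 * cachelines still hot: */
	wq = alloc_workqueue("inodegc-example", WQ_MEM_RECLAIM, 0);
	queue_work_on(smp_processor_id(), wq, &work);

	/* An unbound workqueue lets the scheduler place the worker
	 * on any CPU, spreading load but running on cold caches: */
	wq = alloc_workqueue("inodegc-example",
			     WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
	queue_work(wq, &work);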

Cheers,

Dave.
--
Dave Chinner
[email protected]