2020-11-04 06:06:15

by kernel test robot

[permalink] [raw]
Subject: [btrfs] 96bed17ad9: fio.write_iops -59.7% regression

Greetings,

FYI, we noticed a -59.7% regression of fio.write_iops due to commit:


commit: 96bed17ad9d425ff6958a2e6f87179453a3d76f2 ("btrfs: simplify the logic in need_preemptive_flushing")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master


in testcase: fio-basic
on test machine: 192 threads Intel(R) Xeon(R) CPU @ 2.20GHz with 192G memory
with the following parameters:

disk: 1SSD
fs: btrfs
runtime: 300s
nr_task: 8
rw: write
bs: 4k
ioengine: sync
test_size: 256g
cpufreq_governor: performance
ucode: 0x4002f01

test-description: Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
test-url: https://github.com/axboe/fio
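
For reference, a roughly equivalent standalone fio invocation for the parameters above (a sketch only; the authoritative job file is the job.yaml attached to this report, and details such as the target directory /fs/sda1 and whether test_size is split across the 8 jobs are assumptions here):

    fio --name=write-4k --directory=/fs/sda1 --rw=write --bs=4k --ioengine=sync \
        --size=256g --numjobs=8 --runtime=300 --time_based --group_reporting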



If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <[email protected]>


Details are as follows:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml

=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/ucode:
4k/gcc-9/performance/1SSD/btrfs/sync/x86_64-rhel-8.3/8/debian-10.4-x86_64-20200603.cgz/300s/write/lkp-csl-2ap1/256g/fio-basic/0x4002f01

commit:
77adc5cb16 ("btrfs: rework btrfs_calc_reclaim_metadata_size")
96bed17ad9 ("btrfs: simplify the logic in need_preemptive_flushing")

77adc5cb164dcf5d 96bed17ad9d425ff6958a2e6f87
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 kmsg.ACPI_BIOS_Error(bug)
1:4 -25% :4 kmsg.ACPI_Error
%stddev %change %stddev
\ | \
0.01 +1.4 1.39 ± 48% fio.latency_100us%
49.66 ± 7% -48.9 0.75 ±168% fio.latency_10us%
36.19 ± 10% -35.2 1.04 ±169% fio.latency_20us%
0.01 -0.0 0.00 fio.latency_2us%
13.28 ± 15% -13.3 0.01 ± 57% fio.latency_4us%
0.03 ± 9% -0.0 0.00 ±173% fio.latency_50ms%
0.83 ± 15% +96.0 96.82 ± 2% fio.latency_50us%
129.75 ± 2% +132.3% 301.45 fio.time.elapsed_time
129.75 ± 2% +132.3% 301.45 fio.time.elapsed_time.max
5.369e+08 -5.4% 5.078e+08 fio.time.file_system_outputs
1925 ± 7% +255.6% 6846 ± 8% fio.time.involuntary_context_switches
482.00 ± 5% +66.0% 800.25 fio.time.percent_of_cpu_this_job_got
564.84 ± 4% +317.3% 2356 fio.time.system_time
67108864 -5.4% 63471135 fio.workload
2048 ± 2% -59.7% 826.44 fio.write_bw_MBps
13376 ± 2% +221.5% 43008 fio.write_clat_90%_us
15072 ± 2% +201.5% 45440 fio.write_clat_95%_us
19520 ± 2% +160.3% 50816 ± 3% fio.write_clat_99%_us
14866 ± 2% +152.4% 37514 fio.write_clat_mean_us
360348 ± 5% -98.3% 5952 ± 21% fio.write_clat_stddev
524538 ± 2% -59.7% 211569 fio.write_iops
34.69 -0.9% 34.38 boot-time.boot
1.97 ? 6% -92.3% 0.15 ? 6% iostat.cpu.iowait
3.90 ? 4% +29.1% 5.03 iostat.cpu.system
168.55 ? 2% +101.7% 339.96 uptime.boot
29866 ? 2% +105.1% 61259 uptime.idle
1.99 ? 6% -1.8 0.15 ? 6% mpstat.cpu.all.iowait%
1.03 ? 4% -0.3 0.72 ? 10% mpstat.cpu.all.irq%
2.82 ? 6% +1.4 4.26 mpstat.cpu.all.sys%
0.29 ? 6% -0.2 0.12 mpstat.cpu.all.usr%
93.00 +1.1% 94.00 vmstat.cpu.id
1805132 ? 2% -56.5% 785909 ? 4% vmstat.io.bo
65013138 +11.3% 72328489 ? 4% vmstat.memory.free
3.75 ? 11% -100.0% 0.00 vmstat.procs.b
6.50 ? 7% +23.1% 8.00 vmstat.procs.r
49100 -72.8% 13352 ? 3% vmstat.system.cs
394325 -9.7% 356099 ? 16% vmstat.system.in
2064649 ?100% -100.0% 0.00 numa-numastat.node0.numa_foreign
1469474 ? 45% -98.9% 16866 ?102% numa-numastat.node0.numa_miss
1485006 ? 44% -98.3% 24665 ?103% numa-numastat.node0.other_node
2998955 ? 87% -92.5% 224907 ?132% numa-numastat.node1.numa_foreign
2047311 ?100% -100.0% 0.00 numa-numastat.node1.numa_miss
2078424 ? 98% -98.9% 23355 ? 57% numa-numastat.node1.other_node
2691737 ? 77% -91.6% 224907 ?132% numa-numastat.node2.numa_miss
2715144 ? 76% -90.6% 256010 ?116% numa-numastat.node2.other_node
1449274 ? 52% -98.8% 16866 ?102% numa-numastat.node3.numa_foreign
115940 +20.5% 139716 ? 3% meminfo.Active
3895 +284.7% 14986 ? 4% meminfo.Active(anon)
112045 +11.3% 124729 ? 3% meminfo.Active(file)
111136 ? 12% +48.7% 165206 meminfo.AnonHugePages
358017 ? 4% -15.3% 303181 ? 4% meminfo.AnonPages
65473338 +10.9% 72609414 ? 4% meminfo.MemFree
12802 ? 9% -20.4% 10188 ? 6% meminfo.PageTables
344826 -10.5% 308454 meminfo.SUnreclaim
19443863 -86.7% 2586903 ? 4% meminfo.Writeback
1499568 ? 2% -56.6% 650338 meminfo.max_used_kB
2900 ? 60% +985.9% 31493 ? 35% numa-meminfo.node0.Active
1167 ? 91% +2406.3% 29267 ? 38% numa-meminfo.node0.Active(file)
17540 ? 54% +363.7% 81329 ? 47% numa-meminfo.node0.AnonHugePages
89292 ? 33% +78.4% 159337 ? 36% numa-meminfo.node0.AnonPages
4775891 ? 6% -86.8% 629756 ? 5% numa-meminfo.node0.Writeback
5027255 ? 8% -86.9% 658431 ? 6% numa-meminfo.node1.Writeback
2389 ?105% +1897.0% 47724 ? 60% numa-meminfo.node2.Active
1356 ?122% +3343.6% 46721 ? 61% numa-meminfo.node2.Active(file)
100260 ? 60% -71.1% 28948 ? 78% numa-meminfo.node2.AnonPages
5047201 ? 7% -86.9% 661860 ? 6% numa-meminfo.node2.Writeback
399.75 ? 85% +2606.2% 10818 ? 14% numa-meminfo.node3.Active(anon)
283116 ?167% -97.2% 7930 ? 32% numa-meminfo.node3.Mapped
82365 ? 10% -23.8% 62787 ? 3% numa-meminfo.node3.SUnreclaim
4595302 ? 4% -86.1% 639095 ? 2% numa-meminfo.node3.Writeback
1021 ? 4% -40.2% 610.75 ? 8% slabinfo.biovec-max.active_objs
1114 ? 5% -35.4% 720.00 ? 7% slabinfo.biovec-max.num_objs
7746 ? 3% +19.7% 9269 ? 2% slabinfo.btrfs_extent_buffer.active_objs
7870 ? 2% +18.0% 9289 ? 2% slabinfo.btrfs_extent_buffer.num_objs
1694 ? 3% +7.9% 1828 ? 5% slabinfo.btrfs_ordered_extent.active_objs
1694 ? 3% +7.9% 1828 ? 5% slabinfo.btrfs_ordered_extent.num_objs
2033 ? 5% +39.5% 2835 ? 9% slabinfo.buffer_head.active_objs
2055 ? 5% +38.1% 2838 ? 9% slabinfo.buffer_head.num_objs
4933 ? 15% +125.3% 11117 ? 17% slabinfo.dmaengine-unmap-16.active_objs
117.50 ? 15% +125.1% 264.50 ? 17% slabinfo.dmaengine-unmap-16.active_slabs
4954 ? 15% +124.7% 11134 ? 17% slabinfo.dmaengine-unmap-16.num_objs
117.50 ? 15% +125.1% 264.50 ? 17% slabinfo.dmaengine-unmap-16.num_slabs
9462 ? 3% +14.1% 10795 ? 8% slabinfo.fsnotify_mark_connector.num_objs
588433 -8.5% 538563 ? 3% slabinfo.radix_tree_node.active_objs
10507 -8.5% 9616 ? 3% slabinfo.radix_tree_node.active_slabs
588440 -8.5% 538566 ? 3% slabinfo.radix_tree_node.num_objs
10507 -8.5% 9616 ? 3% slabinfo.radix_tree_node.num_slabs
3410 ? 3% +14.5% 3905 ? 3% slabinfo.trace_event_file.active_objs
3423 ? 3% +14.9% 3934 ? 4% slabinfo.trace_event_file.num_objs
20035 +23.6% 24756 slabinfo.vmap_area.active_objs
20100 +23.3% 24792 slabinfo.vmap_area.num_objs
295.50 ? 92% +2375.1% 7314 ? 38% numa-vmstat.node0.nr_active_file
22328 ? 33% +78.6% 39875 ? 36% numa-vmstat.node0.nr_anon_pages
1195604 ? 6% -86.8% 157291 ? 5% numa-vmstat.node0.nr_writeback
296.25 ? 93% +2369.0% 7314 ? 38% numa-vmstat.node0.nr_zone_active_file
1861506 ? 5% -57.6% 788811 ? 3% numa-vmstat.node0.nr_zone_write_pending
907498 ? 74% -100.0% 0.00 numa-vmstat.node0.numa_foreign
338047 ? 41% -99.3% 2248 ?114% numa-vmstat.node0.numa_miss
388008 ? 35% -89.1% 42425 ? 58% numa-vmstat.node0.numa_other
1252201 ? 8% -86.9% 164375 ? 6% numa-vmstat.node1.nr_writeback
1988661 ? 9% -59.4% 807812 ? 3% numa-vmstat.node1.nr_zone_write_pending
1478486 ? 93% -99.3% 10639 ?144% numa-vmstat.node1.numa_foreign
925105 ? 78% -100.0% 0.00 numa-vmstat.node1.numa_miss
1041053 ? 69% -89.6% 108743 ? 12% numa-vmstat.node1.numa_other
335.25 ?122% +3385.5% 11685 ? 61% numa-vmstat.node2.nr_active_file
25023 ? 60% -71.1% 7241 ? 78% numa-vmstat.node2.nr_anon_pages
1263373 ? 7% -86.9% 165494 ? 6% numa-vmstat.node2.nr_writeback
336.75 ?122% +3370.4% 11686 ? 61% numa-vmstat.node2.nr_zone_active_file
1945998 ? 8% -58.7% 804113 numa-vmstat.node2.nr_zone_write_pending
1347960 ? 88% -99.2% 10642 ?144% numa-vmstat.node2.numa_miss
1456839 ? 81% -92.3% 111463 ? 32% numa-vmstat.node2.numa_other
99.25 ? 85% +2629.7% 2709 ? 14% numa-vmstat.node3.nr_active_anon
20598 ? 10% -23.8% 15695 ? 3% numa-vmstat.node3.nr_slab_unreclaimable
1149987 ? 4% -86.1% 159664 ? 2% numa-vmstat.node3.nr_writeback
6127597 ? 7% +17.9% 7222770 ? 6% numa-vmstat.node3.nr_written
99.25 ? 85% +2629.7% 2709 ? 14% numa-vmstat.node3.nr_zone_active_anon
1830268 ? 3% -56.5% 796492 numa-vmstat.node3.nr_zone_write_pending
348039 ? 49% -99.4% 2248 ?114% numa-vmstat.node3.numa_foreign
590644 ? 34% -69.9% 177863 ? 9% proc-vmstat.compact_daemon_migrate_scanned
1079 ? 19% -57.2% 461.75 ? 4% proc-vmstat.compact_daemon_wake
257044 ? 42% -87.0% 33541 ? 21% proc-vmstat.compact_isolated
786837 ? 23% -58.4% 327500 ? 10% proc-vmstat.compact_migrate_scanned
554.00 ? 23% +118.5% 1210 ? 13% proc-vmstat.kswapd_high_wmark_hit_quickly
1612 ? 25% -45.0% 886.00 ? 18% proc-vmstat.kswapd_low_wmark_hit_quickly
973.25 +284.8% 3744 ? 4% proc-vmstat.nr_active_anon
28046 +11.2% 31197 ? 3% proc-vmstat.nr_active_file
89474 ? 4% -15.3% 75797 ? 4% proc-vmstat.nr_anon_pages
67188417 -5.4% 63554220 proc-vmstat.nr_dirtied
2765923 -7.8% 2550268 proc-vmstat.nr_dirty
32541918 -5.4% 30800890 ? 2% proc-vmstat.nr_file_pages
16346846 +11.0% 18143118 ? 4% proc-vmstat.nr_free_pages
367012 -2.6% 357316 proc-vmstat.nr_inactive_anon
31977972 -5.5% 30227089 ? 2% proc-vmstat.nr_inactive_file
283350 +1.3% 286901 proc-vmstat.nr_mapped
3194 ? 9% -20.6% 2534 ? 5% proc-vmstat.nr_page_table_pages
278659 +2.4% 285404 proc-vmstat.nr_shmem
105608 -6.0% 99255 ? 2% proc-vmstat.nr_slab_reclaimable
86209 -10.6% 77113 proc-vmstat.nr_slab_unreclaimable
4860968 -86.7% 646666 ? 4% proc-vmstat.nr_writeback
973.25 +284.8% 3744 ? 4% proc-vmstat.nr_zone_active_anon
28060 +11.2% 31201 ? 3% proc-vmstat.nr_zone_active_file
367029 -2.6% 357330 proc-vmstat.nr_zone_inactive_anon
31978036 -5.5% 30227107 ? 2% proc-vmstat.nr_zone_inactive_file
7627174 -58.1% 3197146 proc-vmstat.nr_zone_write_pending
8309580 ? 28% -96.6% 278711 ?103% proc-vmstat.numa_foreign
59708479 ? 3% +8.1% 64570979 proc-vmstat.numa_hit
59615003 ? 3% +8.2% 64477559 proc-vmstat.numa_local
8309580 ? 28% -96.6% 278711 ?103% proc-vmstat.numa_miss
8403056 ? 27% -95.6% 372131 ? 77% proc-vmstat.numa_other
13636 ? 2% +79.4% 24467 ? 2% proc-vmstat.pgactivate
68003378 -4.7% 64817961 proc-vmstat.pgalloc_normal
1087274 +58.2% 1719962 proc-vmstat.pgfault
20213698 -16.7% 16843776 ? 6% proc-vmstat.pgfree
123160 ? 41% -83.2% 20633 ? 21% proc-vmstat.pgmigrate_success
28853 ? 2% +126.8% 65426 ? 2% proc-vmstat.pgreuse
19006920 -19.6% 15280314 ? 7% proc-vmstat.pgscan_file
19001157 -19.6% 15280314 ? 7% proc-vmstat.pgscan_kswapd
19006880 -19.6% 15280274 ? 7% proc-vmstat.pgsteal_file
19001117 -19.6% 15280274 ? 7% proc-vmstat.pgsteal_kswapd
49968 ? 5% +17.5% 58735 proc-vmstat.slabs_scanned
52705 ? 4% -41.7% 30721 ? 14% proc-vmstat.workingset_nodes
1724 ? 37% +522.3% 10732 ? 13% sched_debug.cfs_rq:/.exec_clock.avg
26835 ? 28% +336.9% 117257 sched_debug.cfs_rq:/.exec_clock.max
71.42 ? 48% +86.3% 133.06 ? 14% sched_debug.cfs_rq:/.exec_clock.min
4531 ? 26% +522.7% 28217 ? 6% sched_debug.cfs_rq:/.exec_clock.stddev
41327 ? 75% +166.0% 109916 ? 15% sched_debug.cfs_rq:/.load.avg
707654 ? 16% +32.2% 935796 ? 9% sched_debug.cfs_rq:/.load.max
142326 ? 33% +84.2% 262137 ? 5% sched_debug.cfs_rq:/.load.stddev
82158 ? 14% +24.7% 102458 ? 8% sched_debug.cfs_rq:/.min_vruntime.avg
117042 ? 7% +85.2% 216774 ? 4% sched_debug.cfs_rq:/.min_vruntime.max
7933 ? 13% +275.2% 29765 ? 6% sched_debug.cfs_rq:/.min_vruntime.stddev
0.06 ? 48% +95.0% 0.12 ? 15% sched_debug.cfs_rq:/.nr_running.avg
0.23 ? 18% +32.7% 0.30 ? 6% sched_debug.cfs_rq:/.nr_running.stddev
77.84 ? 19% +67.5% 130.39 ? 13% sched_debug.cfs_rq:/.runnable_avg.avg
170.55 ? 12% +71.0% 291.57 ? 6% sched_debug.cfs_rq:/.runnable_avg.stddev
31274 ? 55% +261.8% 113161 ? 7% sched_debug.cfs_rq:/.spread0.max
7970 ? 13% +273.5% 29774 ? 6% sched_debug.cfs_rq:/.spread0.stddev
77.70 ? 19% +67.7% 130.32 ? 13% sched_debug.cfs_rq:/.util_avg.avg
170.33 ? 12% +71.1% 291.47 ? 6% sched_debug.cfs_rq:/.util_avg.stddev
20.44 ? 57% +372.5% 96.57 ? 15% sched_debug.cfs_rq:/.util_est_enqueued.avg
774.17 ? 4% +32.4% 1025 sched_debug.cfs_rq:/.util_est_enqueued.max
96.66 ? 21% +170.3% 261.27 ? 6% sched_debug.cfs_rq:/.util_est_enqueued.stddev
2000666 ? 24% -29.4% 1411975 ? 7% sched_debug.cpu.avg_idle.max
28117 ? 22% +174.5% 77182 ? 23% sched_debug.cpu.avg_idle.min
245757 ? 13% -30.5% 170717 ? 9% sched_debug.cpu.avg_idle.stddev
82979 ? 18% +90.1% 157720 sched_debug.cpu.clock.avg
82995 ? 18% +90.1% 157733 sched_debug.cpu.clock.max
82963 ? 18% +90.1% 157706 sched_debug.cpu.clock.min
9.01 ? 7% -16.7% 7.50 ? 7% sched_debug.cpu.clock.stddev
82357 ? 17% +90.2% 156639 sched_debug.cpu.clock_task.avg
82617 ? 17% +89.9% 156912 sched_debug.cpu.clock_task.max
74586 ? 19% +96.2% 146369 ? 3% sched_debug.cpu.clock_task.min
163.49 ? 15% +28.2% 209.65 ? 2% sched_debug.cpu.curr->pid.avg
5098 ? 9% +46.8% 7485 sched_debug.cpu.curr->pid.max
764.83 ? 6% +24.3% 950.32 sched_debug.cpu.curr->pid.stddev
1367457 ? 42% -46.3% 734767 ? 10% sched_debug.cpu.max_idle_balance_cost.max
85422 ? 47% -63.5% 31174 ? 36% sched_debug.cpu.max_idle_balance_cost.stddev
185.21 ? 31% +147.5% 458.45 sched_debug.cpu.sched_count.min
68.12 ? 29% -47.6% 35.70 ? 5% sched_debug.cpu.sched_goidle.min
63.00 ? 31% +154.5% 160.35 sched_debug.cpu.ttwu_count.min
60.79 ? 32% +159.4% 157.70 sched_debug.cpu.ttwu_local.min
82964 ? 18% +90.1% 157707 sched_debug.cpu_clk
82090 ? 18% +91.0% 156833 sched_debug.ktime
83347 ? 17% +89.7% 158071 sched_debug.sched_clk
11.72 ? 5% -43.3% 6.64 ? 4% perf-stat.i.MPKI
2.276e+09 ? 2% -7.4% 2.107e+09 perf-stat.i.branch-instructions
0.80 ? 9% -0.3 0.47 ? 2% perf-stat.i.branch-miss-rate%
18472434 ? 7% -44.4% 10278875 ? 3% perf-stat.i.branch-misses
88521815 ? 4% -50.8% 43547583 ? 12% perf-stat.i.cache-misses
1.362e+08 ? 4% -49.1% 69364352 ? 3% perf-stat.i.cache-references
49713 -73.1% 13367 ? 2% perf-stat.i.context-switches
2.56 ? 2% +50.9% 3.86 ? 4% perf-stat.i.cpi
2.943e+10 ? 4% +29.0% 3.796e+10 ? 5% perf-stat.i.cpu-cycles
202.46 -2.0% 198.34 perf-stat.i.cpu-migrations
436.82 ? 3% +169.1% 1175 ? 25% perf-stat.i.cycles-between-cache-misses
0.02 ? 54% -0.0 0.00 ? 19% perf-stat.i.dTLB-load-miss-rate%
460559 ? 51% -86.2% 63647 ? 18% perf-stat.i.dTLB-load-misses
3.28e+09 ? 2% -24.1% 2.489e+09 perf-stat.i.dTLB-loads
0.01 ? 30% -0.0 0.00 ? 9% perf-stat.i.dTLB-store-miss-rate%
78206 ? 32% -69.9% 23542 ? 7% perf-stat.i.dTLB-store-misses
1.647e+09 ? 2% -42.4% 9.479e+08 ? 2% perf-stat.i.dTLB-stores
61.64 -7.4 54.19 ? 5% perf-stat.i.iTLB-load-miss-rate%
7442826 ? 5% -33.3% 4966408 ? 12% perf-stat.i.iTLB-load-misses
1.153e+10 ? 2% -13.1% 1.002e+10 perf-stat.i.instructions
1552 ? 6% +31.6% 2044 ? 13% perf-stat.i.instructions-per-iTLB-miss
0.40 ? 2% -34.5% 0.26 ? 4% perf-stat.i.ipc
2165 ? 2% -57.1% 929.17 perf-stat.i.major-faults
0.15 ? 4% +29.1% 0.20 ? 5% perf-stat.i.metric.GHz
38.37 ? 2% -23.6% 29.33 perf-stat.i.metric.M/sec
3814 -2.7% 3712 perf-stat.i.minor-faults
88.44 +3.2 91.67 perf-stat.i.node-load-miss-rate%
15156015 ? 5% -41.9% 8808887 ? 11% perf-stat.i.node-load-misses
2118968 ? 3% -54.3% 968301 ? 6% perf-stat.i.node-loads
5700463 ? 4% -43.7% 3211574 ? 10% perf-stat.i.node-store-misses
332356 ? 33% -44.9% 183110 ? 19% perf-stat.i.node-stores
5979 -22.4% 4641 perf-stat.i.page-faults
11.82 ? 4% -41.4% 6.93 ? 4% perf-stat.overall.MPKI
0.81 ? 8% -0.3 0.49 ? 2% perf-stat.overall.branch-miss-rate%
2.55 ? 2% +48.4% 3.79 ? 4% perf-stat.overall.cpi
332.53 ? 2% +168.4% 892.56 ? 18% perf-stat.overall.cycles-between-cache-misses
0.01 ? 52% -0.0 0.00 ? 19% perf-stat.overall.dTLB-load-miss-rate%
61.67 -7.4 54.23 ? 5% perf-stat.overall.iTLB-load-miss-rate%
1554 ? 6% +31.9% 2050 ? 13% perf-stat.overall.instructions-per-iTLB-miss
0.39 ? 2% -32.5% 0.26 ? 4% perf-stat.overall.ipc
87.71 +2.3 90.04 perf-stat.overall.node-load-miss-rate%
22307 +113.2% 47562 perf-stat.overall.path-length
2.259e+09 ? 2% -7.0% 2.101e+09 perf-stat.ps.branch-instructions
18345804 ? 7% -44.1% 10252843 ? 3% perf-stat.ps.branch-misses
87823932 ? 4% -50.5% 43430790 ? 12% perf-stat.ps.cache-misses
1.352e+08 ? 4% -48.8% 69168925 ? 3% perf-stat.ps.cache-references
49229 -72.9% 13340 ? 2% perf-stat.ps.context-switches
2.921e+10 ? 4% +29.6% 3.784e+10 ? 5% perf-stat.ps.cpu-cycles
200.82 -1.5% 197.89 perf-stat.ps.cpu-migrations
457356 ? 51% -86.1% 63626 ? 18% perf-stat.ps.dTLB-load-misses
3.256e+09 ? 2% -23.8% 2.482e+09 perf-stat.ps.dTLB-loads
77699 ? 31% -69.7% 23538 ? 7% perf-stat.ps.dTLB-store-misses
1.635e+09 ? 2% -42.2% 9.45e+08 ? 2% perf-stat.ps.dTLB-stores
7385117 ? 5% -33.0% 4950431 ? 12% perf-stat.ps.iTLB-load-misses
1.144e+10 ? 2% -12.7% 9.987e+09 perf-stat.ps.instructions
2151 ? 2% -56.7% 932.08 perf-stat.ps.major-faults
3789 -2.3% 3701 perf-stat.ps.minor-faults
15032331 ? 5% -41.6% 8782610 ? 11% perf-stat.ps.node-load-misses
2102459 ? 3% -54.0% 966082 ? 6% perf-stat.ps.node-loads
5653300 ? 4% -43.4% 3200900 ? 10% perf-stat.ps.node-store-misses
330281 ? 33% -44.7% 182598 ? 19% perf-stat.ps.node-stores
5940 -22.0% 4633 perf-stat.ps.page-faults
1.497e+12 +101.6% 3.018e+12 perf-stat.total.instructions
17198 ? 15% +147.2% 42516 ? 20% softirqs.CPU1.SCHED
18452 +100.8% 37059 ? 5% softirqs.CPU100.SCHED
18590 ? 2% +99.9% 37161 ? 6% softirqs.CPU101.SCHED
17981 ? 4% +80.3% 32428 ? 19% softirqs.CPU102.SCHED
18300 +103.8% 37293 ? 4% softirqs.CPU103.SCHED
18457 +98.5% 36642 ? 7% softirqs.CPU104.SCHED
18316 ? 2% +99.6% 36559 ? 7% softirqs.CPU105.SCHED
18193 +93.5% 35209 ? 14% softirqs.CPU106.SCHED
18445 +80.5% 33298 ? 19% softirqs.CPU107.SCHED
17224 ? 9% +114.4% 36931 ? 6% softirqs.CPU108.SCHED
18451 ? 2% +93.9% 35777 ? 10% softirqs.CPU110.SCHED
17642 ? 7% +110.2% 37077 ? 5% softirqs.CPU111.SCHED
18382 +98.0% 36400 ? 9% softirqs.CPU112.SCHED
18502 ? 2% +97.5% 36540 ? 7% softirqs.CPU114.SCHED
18201 ? 2% +99.6% 36338 ? 7% softirqs.CPU115.SCHED
17632 ? 4% +106.4% 36398 ? 7% softirqs.CPU116.SCHED
17117 ? 10% +106.4% 35336 ? 5% softirqs.CPU117.SCHED
14701 ? 23% +141.4% 35489 ? 8% softirqs.CPU12.SCHED
18161 ? 3% +106.2% 37457 ? 3% softirqs.CPU120.SCHED
17715 ? 4% +114.5% 38007 softirqs.CPU121.SCHED
18310 +87.9% 34397 ? 13% softirqs.CPU122.SCHED
18423 +110.5% 38790 softirqs.CPU123.SCHED
18362 ? 2% +104.8% 37613 ? 3% softirqs.CPU124.SCHED
17867 ? 5% +114.7% 38354 softirqs.CPU125.SCHED
18296 +111.4% 38684 softirqs.CPU126.SCHED
16289 ? 20% +133.0% 37961 softirqs.CPU127.SCHED
18342 ? 2% +108.6% 38256 softirqs.CPU128.SCHED
18310 ? 2% +109.6% 38387 softirqs.CPU129.SCHED
16041 ? 18% +124.5% 36017 ? 12% softirqs.CPU13.SCHED
17946 ? 2% +113.0% 38219 softirqs.CPU130.SCHED
16723 ? 18% +130.8% 38591 softirqs.CPU131.SCHED
18335 ? 2% +109.1% 38348 softirqs.CPU132.SCHED
18302 ? 2% +108.7% 38191 ? 2% softirqs.CPU133.SCHED
18283 ? 3% +108.3% 38088 softirqs.CPU134.SCHED
17555 ? 7% +119.5% 38541 softirqs.CPU135.SCHED
18432 ? 2% +108.9% 38507 softirqs.CPU136.SCHED
17918 ? 5% +115.1% 38542 softirqs.CPU137.SCHED
18413 ? 2% +109.8% 38629 softirqs.CPU138.SCHED
18330 ? 3% +109.0% 38316 softirqs.CPU139.SCHED
17624 ? 7% +108.4% 36734 ? 9% softirqs.CPU140.SCHED
18389 ? 3% +109.7% 38560 softirqs.CPU141.SCHED
16063 ? 19% +141.2% 38750 ? 2% softirqs.CPU142.SCHED
7704 ? 25% -35.2% 4995 ? 14% softirqs.CPU143.RCU
18426 ? 2% +108.8% 38474 softirqs.CPU143.SCHED
14487 ? 40% +128.9% 33157 ? 25% softirqs.CPU144.SCHED
17619 ? 4% +109.3% 36875 ? 4% softirqs.CPU145.SCHED
18063 ? 3% +106.2% 37239 ? 6% softirqs.CPU146.SCHED
18508 +103.1% 37591 ? 4% softirqs.CPU147.SCHED
18538 ? 2% +101.3% 37323 ? 5% softirqs.CPU148.SCHED
18428 +101.2% 37078 ? 5% softirqs.CPU149.SCHED
18358 +103.2% 37297 ? 3% softirqs.CPU15.SCHED
18347 ? 2% +102.4% 37130 ? 5% softirqs.CPU150.SCHED
18417 +100.3% 36896 ? 6% softirqs.CPU151.SCHED
7574 ? 25% -46.3% 4071 ? 6% softirqs.CPU152.RCU
16641 ? 17% +128.2% 37969 softirqs.CPU152.SCHED
18452 ? 2% +106.3% 38069 softirqs.CPU153.SCHED
18429 ? 2% +88.0% 34641 ? 17% softirqs.CPU154.SCHED
14804 ? 42% +156.0% 37898 ? 2% softirqs.CPU155.SCHED
18167 ? 3% +108.8% 37943 ? 2% softirqs.CPU156.SCHED
7767 ? 24% -53.2% 3634 ? 9% softirqs.CPU159.RCU
17298 ? 12% +110.8% 36472 ? 7% softirqs.CPU159.SCHED
18229 ? 2% +98.6% 36204 ? 9% softirqs.CPU16.SCHED
18552 ? 5% +100.0% 37099 ? 6% softirqs.CPU160.SCHED
17417 ? 13% +114.7% 37386 ? 4% softirqs.CPU161.SCHED
18732 +96.6% 36828 ? 6% softirqs.CPU162.SCHED
15648 ? 29% +133.7% 36562 ? 6% softirqs.CPU163.SCHED
18569 +99.5% 37042 ? 6% softirqs.CPU164.SCHED
18181 ? 4% +102.8% 36872 ? 6% softirqs.CPU165.SCHED
18464 ? 2% +102.5% 37389 ? 4% softirqs.CPU166.SCHED
18482 ? 2% +84.4% 34090 ? 22% softirqs.CPU167.SCHED
17758 ? 8% +115.5% 38265 softirqs.CPU168.SCHED
17148 ? 5% +122.2% 38101 ? 2% softirqs.CPU169.SCHED
18454 ? 2% +96.9% 36345 ? 8% softirqs.CPU17.SCHED
18560 +108.6% 38721 softirqs.CPU170.SCHED
18572 +107.0% 38445 softirqs.CPU171.SCHED
18659 ? 2% +107.7% 38757 softirqs.CPU172.SCHED
18620 +108.8% 38878 ? 2% softirqs.CPU173.SCHED
18586 +108.6% 38765 softirqs.CPU174.SCHED
18449 +109.9% 38731 softirqs.CPU175.SCHED
18327 +111.9% 38832 ? 2% softirqs.CPU176.SCHED
18441 ? 2% +110.1% 38749 ? 2% softirqs.CPU177.SCHED
18432 ? 2% +103.8% 37573 ? 7% softirqs.CPU178.SCHED
18490 +107.8% 38426 ? 2% softirqs.CPU179.SCHED
18543 ? 2% +98.3% 36764 ? 11% softirqs.CPU18.SCHED
18074 ? 2% +114.3% 38734 ? 3% softirqs.CPU180.SCHED
18122 +113.2% 38643 softirqs.CPU181.SCHED
18343 +97.7% 36258 ? 10% softirqs.CPU182.SCHED
18535 +110.1% 38941 softirqs.CPU183.SCHED
8342 ? 19% -57.5% 3544 ? 3% softirqs.CPU184.RCU
18534 +110.1% 38932 ? 2% softirqs.CPU184.SCHED
18099 ? 3% +113.7% 38671 softirqs.CPU186.SCHED
18487 +109.5% 38732 ? 2% softirqs.CPU187.SCHED
18448 ? 2% +108.6% 38492 softirqs.CPU189.SCHED
18351 ? 2% +83.2% 33618 ? 13% softirqs.CPU19.SCHED
18492 ? 2% +110.6% 38950 ? 2% softirqs.CPU190.SCHED
18507 ? 2% +108.5% 38591 softirqs.CPU191.SCHED
17632 ? 8% +112.7% 37511 ? 3% softirqs.CPU2.SCHED
18306 +97.9% 36237 ? 9% softirqs.CPU20.SCHED
17940 +101.0% 36057 ? 9% softirqs.CPU21.SCHED
18207 ? 2% +92.9% 35129 ? 8% softirqs.CPU23.SCHED
16452 ? 13% +109.3% 34434 ? 17% softirqs.CPU26.SCHED
17301 ? 8% +121.1% 38261 softirqs.CPU27.SCHED
18266 ? 3% +111.6% 38654 ? 2% softirqs.CPU28.SCHED
18597 +105.5% 38217 softirqs.CPU30.SCHED
16799 ? 8% +122.7% 37407 softirqs.CPU31.SCHED
7933 ? 18% -44.5% 4405 ? 11% softirqs.CPU32.RCU
17290 ? 5% +115.0% 37177 ? 2% softirqs.CPU32.SCHED
18270 ? 3% +108.0% 38001 softirqs.CPU33.SCHED
18134 ? 3% +107.3% 37594 softirqs.CPU34.SCHED
17815 ? 2% +115.4% 38371 softirqs.CPU35.SCHED
18263 ? 2% +108.7% 38121 softirqs.CPU36.SCHED
17606 ? 5% +115.8% 37994 softirqs.CPU37.SCHED
18241 ? 3% +109.0% 38129 softirqs.CPU38.SCHED
18273 ? 2% +108.0% 38015 softirqs.CPU39.SCHED
18457 ? 2% +102.3% 37342 ? 2% softirqs.CPU4.SCHED
18233 ? 3% +106.2% 37591 ? 2% softirqs.CPU40.SCHED
7891 ? 16% -43.5% 4462 ? 7% softirqs.CPU41.RCU
18223 ? 2% +101.5% 36727 ? 7% softirqs.CPU41.SCHED
8020 ? 18% -45.0% 4413 ? 8% softirqs.CPU42.RCU
18408 ? 2% +108.6% 38404 softirqs.CPU42.SCHED
7774 ? 19% -43.3% 4405 ? 8% softirqs.CPU43.RCU
18328 ? 2% +95.4% 35807 ? 13% softirqs.CPU43.SCHED
7730 ? 19% -44.6% 4280 ? 5% softirqs.CPU44.RCU
18008 ? 2% +112.6% 38278 softirqs.CPU44.SCHED
18309 ? 2% +108.3% 38145 softirqs.CPU45.SCHED
7622 ? 21% -41.4% 4468 ? 9% softirqs.CPU46.RCU
7856 ? 21% -40.7% 4662 ? 7% softirqs.CPU47.RCU
16775 ? 19% +124.6% 37682 ? 3% softirqs.CPU5.SCHED
16954 ? 5% +115.3% 36507 ? 8% softirqs.CPU50.SCHED
7834 ? 17% -44.7% 4333 ? 11% softirqs.CPU51.RCU
18341 ? 2% +102.0% 37052 ? 5% softirqs.CPU51.SCHED
7830 ? 19% -44.4% 4355 ? 9% softirqs.CPU52.RCU
18428 ? 2% +103.2% 37455 ? 4% softirqs.CPU52.SCHED
7815 ? 17% -44.7% 4319 ? 11% softirqs.CPU53.RCU
18193 ? 2% +101.7% 36690 ? 6% softirqs.CPU53.SCHED
7773 ? 18% -44.4% 4320 ? 11% softirqs.CPU54.RCU
18003 ? 5% +85.4% 33369 ? 20% softirqs.CPU54.SCHED
18378 +102.5% 37206 ? 4% softirqs.CPU55.SCHED
7636 ? 18% -44.9% 4205 ? 8% softirqs.CPU56.RCU
16546 ? 17% +121.7% 36683 ? 6% softirqs.CPU56.SCHED
7723 ? 18% -46.6% 4122 ? 10% softirqs.CPU57.RCU
16611 ? 16% +121.3% 36758 ? 6% softirqs.CPU57.SCHED
7655 ? 19% -41.2% 4503 ? 6% softirqs.CPU58.RCU
7799 ? 19% -44.9% 4299 ? 12% softirqs.CPU59.RCU
18398 +97.1% 36263 ? 9% softirqs.CPU59.SCHED
18178 ? 3% +102.9% 36893 ? 3% softirqs.CPU6.SCHED
7828 ? 18% -47.4% 4116 ? 12% softirqs.CPU60.RCU
18134 ? 2% +93.1% 35020 ? 15% softirqs.CPU60.SCHED
18396 ? 2% +95.5% 35961 ? 7% softirqs.CPU61.SCHED
18388 ? 2% +96.7% 36166 ? 9% softirqs.CPU63.SCHED
18427 ? 2% +105.9% 37937 ? 2% softirqs.CPU64.SCHED
18321 ? 2% +101.2% 36865 ? 6% softirqs.CPU65.SCHED
20470 ? 17% +79.7% 36780 ? 7% softirqs.CPU66.SCHED
18377 ? 2% +98.3% 36433 ? 7% softirqs.CPU67.SCHED
18340 ? 3% +101.1% 36884 ? 6% softirqs.CPU68.SCHED
18281 ? 3% +100.4% 36639 ? 7% softirqs.CPU69.SCHED
17542 ? 2% +108.7% 36615 ? 7% softirqs.CPU7.SCHED
18250 +103.0% 37048 ? 5% softirqs.CPU70.SCHED
17625 ? 4% +85.4% 32678 ? 22% softirqs.CPU71.SCHED
17594 ? 8% +120.9% 38859 ? 2% softirqs.CPU74.SCHED
18524 +96.7% 36437 ? 9% softirqs.CPU75.SCHED
17739 ? 3% +118.0% 38667 ? 2% softirqs.CPU76.SCHED
18491 +110.0% 38829 ? 2% softirqs.CPU77.SCHED
18478 ? 2% +109.0% 38614 softirqs.CPU78.SCHED
18501 +109.2% 38714 softirqs.CPU79.SCHED
18476 +110.6% 38906 ? 2% softirqs.CPU80.SCHED
18272 ? 2% +110.7% 38496 softirqs.CPU81.SCHED
18479 ? 2% +88.6% 34844 ? 21% softirqs.CPU82.SCHED
16347 ? 10% +135.6% 38510 softirqs.CPU84.SCHED
18329 ? 2% +113.2% 39083 ? 3% softirqs.CPU85.SCHED
18292 ? 2% +109.5% 38331 softirqs.CPU86.SCHED
18475 +109.8% 38757 softirqs.CPU87.SCHED
18527 +87.7% 34784 ? 18% softirqs.CPU88.SCHED
18477 +104.2% 37727 ? 3% softirqs.CPU89.SCHED
18279 +103.5% 37194 ? 4% softirqs.CPU9.SCHED
18466 +109.2% 38638 softirqs.CPU90.SCHED
8660 ? 21% -56.1% 3805 ? 3% softirqs.CPU91.RCU
18504 ? 2% +89.8% 35125 ? 19% softirqs.CPU91.SCHED
18295 ? 2% +69.8% 31071 ? 21% softirqs.CPU92.SCHED
18516 +108.1% 38525 softirqs.CPU93.SCHED
18607 ? 2% +108.5% 38786 ? 2% softirqs.CPU94.SCHED
18538 +45.4% 26948 ? 23% softirqs.CPU95.SCHED
17002 ? 12% +114.3% 36427 ? 10% softirqs.CPU98.SCHED
18239 ? 2% +92.8% 35158 ? 10% softirqs.CPU99.SCHED
2799 ? 44% +248.5% 9753 ? 44% softirqs.NET_RX
1405324 ? 15% -43.2% 797662 ? 6% softirqs.RCU
3436439 +101.2% 6914856 ? 2% softirqs.SCHED
37936 ? 2% +37.5% 52169 ? 2% softirqs.TIMER
262.50 ? 2% +130.9% 606.00 interrupts.9:IO-APIC.9-fasteoi.acpi
262080 ? 2% +116.0% 566222 ? 12% interrupts.CPU0.LOC:Local_timer_interrupts
0.75 ?173% +5.1e+05% 3842 ? 88% interrupts.CPU0.NMI:Non-maskable_interrupts
0.75 ?173% +5.1e+05% 3842 ? 88% interrupts.CPU0.PMI:Performance_monitoring_interrupts
65.00 ? 47% +443.5% 353.25 ? 45% interrupts.CPU0.RES:Rescheduling_interrupts
262.50 ? 2% +130.9% 606.00 interrupts.CPU1.9:IO-APIC.9-fasteoi.acpi
1702 ? 3% +858.7% 16319 ?149% interrupts.CPU1.CAL:Function_call_interrupts
262519 ? 2% +117.7% 571624 ? 10% interrupts.CPU1.LOC:Local_timer_interrupts
262440 ? 2% +123.6% 586892 ? 5% interrupts.CPU10.LOC:Local_timer_interrupts
263041 ? 2% +114.6% 564434 ? 12% interrupts.CPU100.LOC:Local_timer_interrupts
0.25 ?173% +2.7e+05% 686.25 ? 68% interrupts.CPU100.NMI:Non-maskable_interrupts
0.25 ?173% +2.7e+05% 686.25 ? 68% interrupts.CPU100.PMI:Performance_monitoring_interrupts
262753 ? 2% +114.5% 563706 ? 12% interrupts.CPU101.LOC:Local_timer_interrupts
262541 ? 2% +115.1% 564758 ? 12% interrupts.CPU102.LOC:Local_timer_interrupts
263017 ? 3% +116.4% 569124 ? 11% interrupts.CPU103.LOC:Local_timer_interrupts
263682 ? 3% +114.7% 566056 ? 12% interrupts.CPU104.LOC:Local_timer_interrupts
263035 ? 3% +114.5% 564245 ? 12% interrupts.CPU105.LOC:Local_timer_interrupts
262394 ? 2% +115.3% 564927 ? 12% interrupts.CPU106.LOC:Local_timer_interrupts
262398 ? 2% +116.9% 569255 ? 11% interrupts.CPU107.LOC:Local_timer_interrupts
262388 ? 2% +115.9% 566437 ? 12% interrupts.CPU108.LOC:Local_timer_interrupts
262375 ? 2% +116.0% 566814 ? 11% interrupts.CPU109.LOC:Local_timer_interrupts
262435 ? 2% +116.1% 567228 ? 11% interrupts.CPU11.LOC:Local_timer_interrupts
262379 ? 2% +116.1% 566871 ? 11% interrupts.CPU110.LOC:Local_timer_interrupts
262336 ? 2% +115.2% 564540 ? 12% interrupts.CPU111.LOC:Local_timer_interrupts
262386 ? 2% +114.5% 562700 ? 13% interrupts.CPU112.LOC:Local_timer_interrupts
262354 ? 2% +114.7% 563289 ? 12% interrupts.CPU113.LOC:Local_timer_interrupts
262363 ? 2% +114.5% 562718 ? 13% interrupts.CPU114.LOC:Local_timer_interrupts
262399 ? 2% +114.3% 562370 ? 13% interrupts.CPU115.LOC:Local_timer_interrupts
262199 ? 2% +115.2% 564129 ? 12% interrupts.CPU116.LOC:Local_timer_interrupts
262422 ? 2% +115.9% 566482 ? 12% interrupts.CPU117.LOC:Local_timer_interrupts
262337 ? 2% +114.2% 561808 ? 13% interrupts.CPU118.LOC:Local_timer_interrupts
262366 ? 2% +114.6% 563121 ? 13% interrupts.CPU119.LOC:Local_timer_interrupts
0.25 ?173% +8.9e+05% 2214 ?162% interrupts.CPU119.NMI:Non-maskable_interrupts
0.25 ?173% +8.9e+05% 2214 ?162% interrupts.CPU119.PMI:Performance_monitoring_interrupts
262206 ? 2% +119.7% 576136 ? 12% interrupts.CPU12.LOC:Local_timer_interrupts
260078 ? 3% +110.4% 547239 ? 18% interrupts.CPU120.LOC:Local_timer_interrupts
261865 ? 2% +108.9% 547131 ? 18% interrupts.CPU121.LOC:Local_timer_interrupts
262335 ? 2% +112.5% 557486 ? 15% interrupts.CPU122.LOC:Local_timer_interrupts
260454 ? 3% +107.9% 541396 ? 20% interrupts.CPU123.LOC:Local_timer_interrupts
259867 ? 3% +110.2% 546236 ? 18% interrupts.CPU124.LOC:Local_timer_interrupts
262537 ? 2% +107.5% 544851 ? 19% interrupts.CPU125.LOC:Local_timer_interrupts
76.00 ? 16% +77.3% 134.75 ? 24% interrupts.CPU125.TLB:TLB_shootdowns
260299 ? 3% +109.3% 544923 ? 19% interrupts.CPU126.LOC:Local_timer_interrupts
262012 ? 2% +108.9% 547276 ? 18% interrupts.CPU127.LOC:Local_timer_interrupts
259654 ? 3% +111.3% 548548 ? 18% interrupts.CPU128.LOC:Local_timer_interrupts
262395 ? 2% +106.8% 542735 ? 20% interrupts.CPU129.LOC:Local_timer_interrupts
262340 ? 2% +121.6% 581356 ? 7% interrupts.CPU13.LOC:Local_timer_interrupts
262473 ? 2% +108.0% 546008 ? 19% interrupts.CPU130.LOC:Local_timer_interrupts
262413 ? 2% +106.2% 541040 ? 20% interrupts.CPU131.LOC:Local_timer_interrupts
262344 ? 2% +106.9% 542887 ? 20% interrupts.CPU132.LOC:Local_timer_interrupts
261664 ? 2% +107.8% 543712 ? 19% interrupts.CPU133.LOC:Local_timer_interrupts
262373 ? 2% +106.8% 542477 ? 19% interrupts.CPU134.LOC:Local_timer_interrupts
262387 ? 2% +104.8% 537456 ? 21% interrupts.CPU135.LOC:Local_timer_interrupts
261661 ? 2% +107.6% 543135 ? 19% interrupts.CPU136.LOC:Local_timer_interrupts
261989 ? 2% +106.3% 540444 ? 21% interrupts.CPU137.LOC:Local_timer_interrupts
262033 ? 2% +107.0% 542495 ? 20% interrupts.CPU138.LOC:Local_timer_interrupts
262582 ? 2% +106.6% 542623 ? 20% interrupts.CPU139.LOC:Local_timer_interrupts
262404 ? 2% +121.6% 581424 ? 7% interrupts.CPU14.LOC:Local_timer_interrupts
0.75 ?173% +3e+05% 2240 ?159% interrupts.CPU14.NMI:Non-maskable_interrupts
0.75 ?173% +3e+05% 2240 ?159% interrupts.CPU14.PMI:Performance_monitoring_interrupts
262495 ? 2% +106.8% 542886 ? 20% interrupts.CPU140.LOC:Local_timer_interrupts
73.00 ? 32% +81.5% 132.50 ? 26% interrupts.CPU140.TLB:TLB_shootdowns
262345 ? 2% +107.2% 543631 ? 19% interrupts.CPU141.LOC:Local_timer_interrupts
262474 ? 2% +106.1% 540980 ? 20% interrupts.CPU142.LOC:Local_timer_interrupts
262542 ? 2% +106.1% 540987 ? 20% interrupts.CPU143.LOC:Local_timer_interrupts
262511 ? 2% +108.0% 546124 ? 18% interrupts.CPU144.LOC:Local_timer_interrupts
262569 ? 2% +114.7% 563709 ? 13% interrupts.CPU145.LOC:Local_timer_interrupts
262480 ? 2% +111.4% 554998 ? 15% interrupts.CPU146.LOC:Local_timer_interrupts
262306 ? 2% +108.8% 547645 ? 18% interrupts.CPU147.LOC:Local_timer_interrupts
262459 ? 2% +107.8% 545453 ? 19% interrupts.CPU148.LOC:Local_timer_interrupts
0.25 ?173% +70600.0% 176.75 ? 29% interrupts.CPU148.NMI:Non-maskable_interrupts
0.25 ?173% +70600.0% 176.75 ? 29% interrupts.CPU148.PMI:Performance_monitoring_interrupts
262434 ? 2% +108.1% 546168 ? 18% interrupts.CPU149.LOC:Local_timer_interrupts
262325 ? 2% +117.6% 570853 ? 10% interrupts.CPU15.LOC:Local_timer_interrupts
2.25 ? 65% +1955.6% 46.25 ?122% interrupts.CPU15.RES:Rescheduling_interrupts
262382 ? 2% +108.1% 545997 ? 19% interrupts.CPU150.LOC:Local_timer_interrupts
263406 ? 2% +107.4% 546320 ? 18% interrupts.CPU151.LOC:Local_timer_interrupts
262629 ? 2% +112.3% 557638 ? 14% interrupts.CPU152.LOC:Local_timer_interrupts
263648 ? 2% +115.7% 568792 ? 11% interrupts.CPU153.LOC:Local_timer_interrupts
262475 ? 2% +110.0% 551072 ? 17% interrupts.CPU154.LOC:Local_timer_interrupts
1.50 ?173% +58900.0% 885.00 ?128% interrupts.CPU154.NMI:Non-maskable_interrupts
1.50 ?173% +58900.0% 885.00 ?128% interrupts.CPU154.PMI:Performance_monitoring_interrupts
262446 ? 2% +115.8% 566300 ? 11% interrupts.CPU155.LOC:Local_timer_interrupts
262464 ? 2% +116.1% 567266 ? 11% interrupts.CPU156.LOC:Local_timer_interrupts
262285 ? 2% +116.9% 568882 ? 11% interrupts.CPU157.LOC:Local_timer_interrupts
262472 ? 2% +110.9% 553580 ? 16% interrupts.CPU158.LOC:Local_timer_interrupts
262455 ? 2% +110.6% 552657 ? 16% interrupts.CPU159.LOC:Local_timer_interrupts
262628 ? 2% +115.2% 565202 ? 12% interrupts.CPU16.LOC:Local_timer_interrupts
0.25 ?173% +1.4e+05% 344.75 ? 60% interrupts.CPU16.NMI:Non-maskable_interrupts
0.25 ?173% +1.4e+05% 344.75 ? 60% interrupts.CPU16.PMI:Performance_monitoring_interrupts
262495 ? 2% +108.8% 548211 ? 18% interrupts.CPU160.LOC:Local_timer_interrupts
262484 ? 2% +113.6% 560649 ? 13% interrupts.CPU161.LOC:Local_timer_interrupts
0.25 ?173% +1.5e+05% 368.50 ? 84% interrupts.CPU161.NMI:Non-maskable_interrupts
0.25 ?173% +1.5e+05% 368.50 ? 84% interrupts.CPU161.PMI:Performance_monitoring_interrupts
262509 ? 2% +108.7% 547849 ? 18% interrupts.CPU162.LOC:Local_timer_interrupts
262508 ? 2% +109.5% 549885 ? 17% interrupts.CPU163.LOC:Local_timer_interrupts
262456 ? 2% +108.6% 547432 ? 18% interrupts.CPU164.LOC:Local_timer_interrupts
262463 ? 2% +108.2% 546382 ? 18% interrupts.CPU165.LOC:Local_timer_interrupts
0.25 ?173% +73900.0% 185.00 ? 20% interrupts.CPU165.NMI:Non-maskable_interrupts
0.25 ?173% +73900.0% 185.00 ? 20% interrupts.CPU165.PMI:Performance_monitoring_interrupts
262489 ? 2% +108.0% 546012 ? 18% interrupts.CPU166.LOC:Local_timer_interrupts
262487 ? 2% +111.8% 556017 ? 15% interrupts.CPU167.LOC:Local_timer_interrupts
262371 ? 2% +107.9% 545370 ? 19% interrupts.CPU168.LOC:Local_timer_interrupts
0.00 +2.2e+105% 2243 ?101% interrupts.CPU168.NMI:Non-maskable_interrupts
0.00 +2.2e+105% 2243 ?101% interrupts.CPU168.PMI:Performance_monitoring_interrupts
48.75 ? 83% -91.3% 4.25 ?105% interrupts.CPU168.RES:Rescheduling_interrupts
262437 ? 2% +106.5% 541884 ? 20% interrupts.CPU169.LOC:Local_timer_interrupts
262720 ? 2% +114.9% 564679 ? 12% interrupts.CPU17.LOC:Local_timer_interrupts
262496 ? 2% +105.5% 539521 ? 21% interrupts.CPU170.LOC:Local_timer_interrupts
262452 ? 2% +106.2% 541291 ? 20% interrupts.CPU171.LOC:Local_timer_interrupts
262437 ? 2% +106.8% 542846 ? 20% interrupts.CPU172.LOC:Local_timer_interrupts
262182 ? 2% +106.1% 540399 ? 21% interrupts.CPU173.LOC:Local_timer_interrupts
262276 ? 2% +105.7% 539393 ? 21% interrupts.CPU174.LOC:Local_timer_interrupts
1.50 ?173% +13650.0% 206.25 ? 19% interrupts.CPU174.NMI:Non-maskable_interrupts
1.50 ?173% +13650.0% 206.25 ? 19% interrupts.CPU174.PMI:Performance_monitoring_interrupts
262096 ? 2% +105.7% 539185 ? 21% interrupts.CPU175.LOC:Local_timer_interrupts
0.50 ?173% +40850.0% 204.75 ? 17% interrupts.CPU175.NMI:Non-maskable_interrupts
0.50 ?173% +40850.0% 204.75 ? 17% interrupts.CPU175.PMI:Performance_monitoring_interrupts
262080 ? 2% +106.1% 540169 ? 20% interrupts.CPU176.LOC:Local_timer_interrupts
260711 ? 2% +107.6% 541190 ? 20% interrupts.CPU177.LOC:Local_timer_interrupts
261545 ? 2% +106.4% 539702 ? 21% interrupts.CPU178.LOC:Local_timer_interrupts
260622 +107.4% 540514 ? 20% interrupts.CPU179.LOC:Local_timer_interrupts
262400 ? 2% +114.6% 563134 ? 13% interrupts.CPU18.LOC:Local_timer_interrupts
261789 ? 2% +111.8% 554460 ? 16% interrupts.CPU180.LOC:Local_timer_interrupts
260076 ? 2% +108.4% 541919 ? 20% interrupts.CPU181.LOC:Local_timer_interrupts
260765 +111.9% 552654 ? 16% interrupts.CPU182.LOC:Local_timer_interrupts
262033 ? 2% +106.6% 541360 ? 20% interrupts.CPU183.LOC:Local_timer_interrupts
262136 ? 2% +106.3% 540700 ? 20% interrupts.CPU184.LOC:Local_timer_interrupts
0.25 ?173% +80800.0% 202.25 ? 18% interrupts.CPU184.NMI:Non-maskable_interrupts
0.25 ?173% +80800.0% 202.25 ? 18% interrupts.CPU184.PMI:Performance_monitoring_interrupts
262220 ? 2% +116.9% 568857 ? 11% interrupts.CPU185.LOC:Local_timer_interrupts
262261 ? 2% +106.7% 541984 ? 20% interrupts.CPU186.LOC:Local_timer_interrupts
1.25 ?173% +13820.0% 174.00 ? 26% interrupts.CPU186.NMI:Non-maskable_interrupts
1.25 ?173% +13820.0% 174.00 ? 26% interrupts.CPU186.PMI:Performance_monitoring_interrupts
262443 ? 2% +106.0% 540544 ? 20% interrupts.CPU187.LOC:Local_timer_interrupts
261575 ? 2% +117.8% 569610 ? 11% interrupts.CPU188.LOC:Local_timer_interrupts
261522 ? 2% +106.9% 541106 ? 20% interrupts.CPU189.LOC:Local_timer_interrupts
262433 ? 2% +114.8% 563817 ? 12% interrupts.CPU19.LOC:Local_timer_interrupts
260718 +106.9% 539542 ? 21% interrupts.CPU190.LOC:Local_timer_interrupts
262427 ? 2% +108.9% 548096 ? 18% interrupts.CPU191.LOC:Local_timer_interrupts
1.25 ? 34% +13200.0% 166.25 ? 38% interrupts.CPU191.NMI:Non-maskable_interrupts
1.25 ? 34% +13200.0% 166.25 ? 38% interrupts.CPU191.PMI:Performance_monitoring_interrupts
262415 ? 2% +117.8% 571451 ? 10% interrupts.CPU2.LOC:Local_timer_interrupts
1.00 ?173% +17025.0% 171.25 ? 34% interrupts.CPU2.NMI:Non-maskable_interrupts
1.00 ?173% +17025.0% 171.25 ? 34% interrupts.CPU2.PMI:Performance_monitoring_interrupts
262388 ? 2% +115.5% 565435 ? 12% interrupts.CPU20.LOC:Local_timer_interrupts
262426 ? 2% +115.7% 566000 ? 12% interrupts.CPU21.LOC:Local_timer_interrupts
107.75 ?119% -98.4% 1.75 ? 47% interrupts.CPU21.RES:Rescheduling_interrupts
262428 ? 2% +115.7% 566087 ? 12% interrupts.CPU22.LOC:Local_timer_interrupts
262420 ? 2% +116.4% 567805 ? 11% interrupts.CPU23.LOC:Local_timer_interrupts
0.50 ?100% +3.1e+05% 1560 ?153% interrupts.CPU23.NMI:Non-maskable_interrupts
0.50 ?100% +3.1e+05% 1560 ?153% interrupts.CPU23.PMI:Performance_monitoring_interrupts
262493 ? 2% +122.4% 583804 ? 6% interrupts.CPU24.LOC:Local_timer_interrupts
71.00 ? 81% +215.5% 224.00 ? 39% interrupts.CPU24.RES:Rescheduling_interrupts
262614 ? 2% +111.9% 556548 ? 15% interrupts.CPU25.LOC:Local_timer_interrupts
262638 ? 2% +115.2% 565239 ? 12% interrupts.CPU26.LOC:Local_timer_interrupts
262641 ? 2% +107.1% 544048 ? 19% interrupts.CPU27.LOC:Local_timer_interrupts
262623 ? 2% +109.9% 551291 ? 17% interrupts.CPU28.LOC:Local_timer_interrupts
262610 ? 2% +107.2% 544162 ? 19% interrupts.CPU29.LOC:Local_timer_interrupts
262376 ? 2% +117.6% 570874 ? 10% interrupts.CPU3.LOC:Local_timer_interrupts
0.75 ?173% +18866.7% 142.25 ? 40% interrupts.CPU3.NMI:Non-maskable_interrupts
0.75 ?173% +18866.7% 142.25 ? 40% interrupts.CPU3.PMI:Performance_monitoring_interrupts
262652 ? 2% +108.1% 546468 ? 18% interrupts.CPU30.LOC:Local_timer_interrupts
262617 ? 2% +109.5% 550280 ? 17% interrupts.CPU31.LOC:Local_timer_interrupts
262628 ? 2% +109.7% 550671 ? 17% interrupts.CPU32.LOC:Local_timer_interrupts
262607 ? 2% +108.7% 548001 ? 18% interrupts.CPU33.LOC:Local_timer_interrupts
262627 ? 2% +107.5% 544959 ? 19% interrupts.CPU34.LOC:Local_timer_interrupts
262549 ? 2% +105.7% 540192 ? 21% interrupts.CPU35.LOC:Local_timer_interrupts
0.25 ?173% +49700.0% 124.50 ? 44% interrupts.CPU35.NMI:Non-maskable_interrupts
0.25 ?173% +49700.0% 124.50 ? 44% interrupts.CPU35.PMI:Performance_monitoring_interrupts
262618 ? 2% +106.3% 541838 ? 20% interrupts.CPU36.LOC:Local_timer_interrupts
262487 ? 2% +107.4% 544382 ? 19% interrupts.CPU37.LOC:Local_timer_interrupts
13.50 ?128% +514.8% 83.00 ? 70% interrupts.CPU37.TLB:TLB_shootdowns
262620 ? 2% +106.5% 542305 ? 20% interrupts.CPU38.LOC:Local_timer_interrupts
0.25 ?173% +63400.0% 158.75 ? 42% interrupts.CPU38.NMI:Non-maskable_interrupts
0.25 ?173% +63400.0% 158.75 ? 42% interrupts.CPU38.PMI:Performance_monitoring_interrupts
262649 ? 2% +107.4% 544844 ? 19% interrupts.CPU39.LOC:Local_timer_interrupts
262402 ? 2% +118.8% 574008 ? 11% interrupts.CPU4.LOC:Local_timer_interrupts
0.75 ?173% +1.6e+05% 1233 ?109% interrupts.CPU4.NMI:Non-maskable_interrupts
0.75 ?173% +1.6e+05% 1233 ?109% interrupts.CPU4.PMI:Performance_monitoring_interrupts
262634 ? 2% +107.0% 543548 ? 19% interrupts.CPU40.LOC:Local_timer_interrupts
262152 ? 2% +106.0% 539971 ? 21% interrupts.CPU41.LOC:Local_timer_interrupts
263740 ? 2% +105.8% 542816 ? 20% interrupts.CPU42.LOC:Local_timer_interrupts
67.50 ? 67% -96.7% 2.25 ? 72% interrupts.CPU42.RES:Rescheduling_interrupts
263061 ? 2% +106.2% 542431 ? 20% interrupts.CPU43.LOC:Local_timer_interrupts
263144 ? 2% +106.0% 542069 ? 20% interrupts.CPU44.LOC:Local_timer_interrupts
98.00 ?139% -99.5% 0.50 ?173% interrupts.CPU44.RES:Rescheduling_interrupts
31.00 ? 71% +213.7% 97.25 ? 34% interrupts.CPU44.TLB:TLB_shootdowns
262640 ? 2% +107.1% 543849 ? 19% interrupts.CPU45.LOC:Local_timer_interrupts
262653 ? 2% +105.7% 540247 ? 21% interrupts.CPU46.LOC:Local_timer_interrupts
261904 ? 2% +106.4% 540591 ? 20% interrupts.CPU47.LOC:Local_timer_interrupts
0.25 ?173% +9e+05% 2243 ?159% interrupts.CPU47.NMI:Non-maskable_interrupts
0.25 ?173% +9e+05% 2243 ?159% interrupts.CPU47.PMI:Performance_monitoring_interrupts
262378 ? 2% +108.9% 548209 ? 18% interrupts.CPU48.LOC:Local_timer_interrupts
55.00 ? 53% +351.8% 248.50 ? 35% interrupts.CPU48.RES:Rescheduling_interrupts
262257 ? 2% +113.0% 558578 ? 14% interrupts.CPU49.LOC:Local_timer_interrupts
262402 ? 2% +116.9% 569165 ? 11% interrupts.CPU5.LOC:Local_timer_interrupts
262531 ? 2% +109.0% 548622 ? 18% interrupts.CPU50.LOC:Local_timer_interrupts
0.00 +2e+104% 204.00 ? 14% interrupts.CPU50.NMI:Non-maskable_interrupts
0.00 +2e+104% 204.00 ? 14% interrupts.CPU50.PMI:Performance_monitoring_interrupts
262465 ? 2% +108.6% 547534 ? 18% interrupts.CPU51.LOC:Local_timer_interrupts
40.75 ?117% -81.6% 7.50 ? 98% interrupts.CPU51.RES:Rescheduling_interrupts
262492 ? 2% +108.5% 547260 ? 18% interrupts.CPU52.LOC:Local_timer_interrupts
262492 ? 2% +107.9% 545749 ? 19% interrupts.CPU53.LOC:Local_timer_interrupts
262435 ? 2% +108.5% 547082 ? 18% interrupts.CPU54.LOC:Local_timer_interrupts
262517 ? 2% +108.9% 548392 ? 18% interrupts.CPU55.LOC:Local_timer_interrupts
262573 ? 2% +108.4% 547312 ? 18% interrupts.CPU56.LOC:Local_timer_interrupts
262977 ? 2% +110.2% 552772 ? 16% interrupts.CPU57.LOC:Local_timer_interrupts
0.25 ?173% +1.9e+05% 470.75 ?106% interrupts.CPU57.NMI:Non-maskable_interrupts
0.25 ?173% +1.9e+05% 470.75 ?106% interrupts.CPU57.PMI:Performance_monitoring_interrupts
262378 ? 2% +121.7% 581611 ? 7% interrupts.CPU58.LOC:Local_timer_interrupts
262469 ? 2% +109.4% 549629 ? 17% interrupts.CPU59.LOC:Local_timer_interrupts
262426 ? 2% +118.6% 573635 ? 9% interrupts.CPU6.LOC:Local_timer_interrupts
262393 ? 2% +108.9% 548229 ? 18% interrupts.CPU60.LOC:Local_timer_interrupts
124.75 ? 34% -80.4% 24.50 ?104% interrupts.CPU60.RES:Rescheduling_interrupts
262324 ? 2% +109.6% 549933 ? 17% interrupts.CPU61.LOC:Local_timer_interrupts
262490 ? 2% +108.5% 547277 ? 18% interrupts.CPU62.LOC:Local_timer_interrupts
81.25 ?100% -97.2% 2.25 ? 65% interrupts.CPU62.RES:Rescheduling_interrupts
262468 ? 2% +108.9% 548272 ? 18% interrupts.CPU63.LOC:Local_timer_interrupts
262525 ? 2% +113.2% 559696 ? 14% interrupts.CPU64.LOC:Local_timer_interrupts
262486 ? 2% +111.3% 554745 ? 16% interrupts.CPU65.LOC:Local_timer_interrupts
262541 ? 2% +109.0% 548670 ? 18% interrupts.CPU66.LOC:Local_timer_interrupts
262413 ? 2% +109.9% 550793 ? 17% interrupts.CPU67.LOC:Local_timer_interrupts
262517 ? 2% +109.2% 549147 ? 17% interrupts.CPU68.LOC:Local_timer_interrupts
262508 ? 2% +108.9% 548499 ? 18% interrupts.CPU69.LOC:Local_timer_interrupts
0.25 ?173% +74500.0% 186.50 ? 20% interrupts.CPU69.NMI:Non-maskable_interrupts
0.25 ?173% +74500.0% 186.50 ? 20% interrupts.CPU69.PMI:Performance_monitoring_interrupts
262559 ? 2% +116.6% 568696 ? 11% interrupts.CPU7.LOC:Local_timer_interrupts
0.00 +1.4e+104% 145.00 ? 40% interrupts.CPU7.NMI:Non-maskable_interrupts
0.00 +1.4e+104% 145.00 ? 40% interrupts.CPU7.PMI:Performance_monitoring_interrupts
262390 ? 2% +108.4% 546885 ? 18% interrupts.CPU70.LOC:Local_timer_interrupts
262516 ? 2% +108.4% 547201 ? 18% interrupts.CPU71.LOC:Local_timer_interrupts
262409 ? 2% +107.4% 544158 ? 19% interrupts.CPU72.LOC:Local_timer_interrupts
262458 ? 2% +108.8% 548088 ? 18% interrupts.CPU73.LOC:Local_timer_interrupts
52.75 ? 57% +253.1% 186.25 ? 36% interrupts.CPU73.RES:Rescheduling_interrupts
262509 ? 2% +105.1% 538318 ? 21% interrupts.CPU74.LOC:Local_timer_interrupts
262532 ? 2% +109.5% 549979 ? 17% interrupts.CPU75.LOC:Local_timer_interrupts
262463 ? 2% +105.5% 539477 ? 21% interrupts.CPU76.LOC:Local_timer_interrupts
262484 ? 2% +105.6% 539784 ? 21% interrupts.CPU77.LOC:Local_timer_interrupts
0.25 ?173% +81300.0% 203.50 ? 19% interrupts.CPU77.NMI:Non-maskable_interrupts
0.25 ?173% +81300.0% 203.50 ? 19% interrupts.CPU77.PMI:Performance_monitoring_interrupts
262445 ? 2% +105.7% 539793 ? 21% interrupts.CPU78.LOC:Local_timer_interrupts
0.25 ?173% +82300.0% 206.00 ? 19% interrupts.CPU78.NMI:Non-maskable_interrupts
0.25 ?173% +82300.0% 206.00 ? 19% interrupts.CPU78.PMI:Performance_monitoring_interrupts
262433 ? 2% +105.7% 539708 ? 21% interrupts.CPU79.LOC:Local_timer_interrupts
262480 ? 2% +115.3% 565229 ? 12% interrupts.CPU8.LOC:Local_timer_interrupts
0.25 ?173% +1.9e+05% 470.75 ? 97% interrupts.CPU8.NMI:Non-maskable_interrupts
0.25 ?173% +1.9e+05% 470.75 ? 97% interrupts.CPU8.PMI:Performance_monitoring_interrupts
262432 ? 2% +105.8% 540115 ? 21% interrupts.CPU80.LOC:Local_timer_interrupts
262453 ? 2% +105.9% 540432 ? 20% interrupts.CPU81.LOC:Local_timer_interrupts
262472 ? 2% +105.3% 538774 ? 21% interrupts.CPU82.LOC:Local_timer_interrupts
262526 ? 2% +105.2% 538597 ? 21% interrupts.CPU83.LOC:Local_timer_interrupts
262579 ? 2% +105.6% 539844 ? 21% interrupts.CPU84.LOC:Local_timer_interrupts
0.00 +2.1e+104% 213.00 ? 21% interrupts.CPU84.NMI:Non-maskable_interrupts
0.00 +2.1e+104% 213.00 ? 21% interrupts.CPU84.PMI:Performance_monitoring_interrupts
262450 ? 2% +108.9% 548271 ? 18% interrupts.CPU85.LOC:Local_timer_interrupts
150.00 ? 98% -98.8% 1.75 ? 47% interrupts.CPU85.RES:Rescheduling_interrupts
1732 ? 5% +46.7% 2540 ? 43% interrupts.CPU86.CAL:Function_call_interrupts
262440 ? 2% +106.8% 542823 ? 20% interrupts.CPU86.LOC:Local_timer_interrupts
262459 ? 2% +108.2% 546320 ? 18% interrupts.CPU87.LOC:Local_timer_interrupts
261772 ? 2% +106.6% 540909 ? 20% interrupts.CPU88.LOC:Local_timer_interrupts
262191 ? 2% +108.1% 545731 ? 19% interrupts.CPU89.LOC:Local_timer_interrupts
262323 ? 2% +117.2% 569838 ? 11% interrupts.CPU9.LOC:Local_timer_interrupts
261786 ? 2% +107.1% 542265 ? 20% interrupts.CPU90.LOC:Local_timer_interrupts
262226 ? 2% +106.5% 541382 ? 20% interrupts.CPU91.LOC:Local_timer_interrupts
261703 ? 2% +108.7% 546199 ? 18% interrupts.CPU92.LOC:Local_timer_interrupts
262127 ? 2% +106.5% 541287 ? 20% interrupts.CPU93.LOC:Local_timer_interrupts
260263 ? 3% +107.7% 540500 ? 20% interrupts.CPU94.LOC:Local_timer_interrupts
261864 ? 2% +106.4% 540510 ? 20% interrupts.CPU95.LOC:Local_timer_interrupts
262438 ? 2% +114.1% 561887 ? 13% interrupts.CPU96.LOC:Local_timer_interrupts
0.25 ?173% +1.1e+06% 2648 ?127% interrupts.CPU96.NMI:Non-maskable_interrupts
0.25 ?173% +1.1e+06% 2648 ?127% interrupts.CPU96.PMI:Performance_monitoring_interrupts
262573 ? 2% +115.2% 565020 ? 12% interrupts.CPU97.LOC:Local_timer_interrupts
1.75 ?173% +11657.1% 205.75 ? 8% interrupts.CPU97.NMI:Non-maskable_interrupts
1.75 ?173% +11657.1% 205.75 ? 8% interrupts.CPU97.PMI:Performance_monitoring_interrupts
263173 ? 2% +114.2% 563719 ? 12% interrupts.CPU98.LOC:Local_timer_interrupts
0.50 ?100% +38950.0% 195.25 ? 22% interrupts.CPU98.NMI:Non-maskable_interrupts
0.50 ?100% +38950.0% 195.25 ? 22% interrupts.CPU98.PMI:Performance_monitoring_interrupts
263343 ? 2% +115.5% 567513 ? 14% interrupts.CPU99.LOC:Local_timer_interrupts
0.25 ?173% +49300.0% 123.50 ? 51% interrupts.CPU99.NMI:Non-maskable_interrupts
0.25 ?173% +49300.0% 123.50 ? 51% interrupts.CPU99.PMI:Performance_monitoring_interrupts
50365605 ? 2% +110.5% 1.06e+08 ? 16% interrupts.LOC:Local_timer_interrupts
0.00 +1.9e+104% 192.00 interrupts.MCP:Machine_check_polls
17.00 ? 16% +8.2e+05% 138886 ? 10% interrupts.NMI:Non-maskable_interrupts
17.00 ? 16% +8.2e+05% 138886 ? 10% interrupts.PMI:Performance_monitoring_interrupts
8445 ? 10% -24.0% 6416 ? 4% interrupts.RES:Rescheduling_interrupts
17.20 ? 68% -17.2 0.00 perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
17.20 ? 68% -17.2 0.00 perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
17.20 ? 68% -17.2 0.00 perf-profile.calltrace.cycles-pp.arch_do_signal.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
17.20 ? 68% -17.2 0.00 perf-profile.calltrace.cycles-pp.get_signal.arch_do_signal.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
17.20 ? 68% -17.2 0.00 perf-profile.calltrace.cycles-pp.do_group_exit.get_signal.arch_do_signal.exit_to_user_mode_prepare.syscall_exit_to_user_mode
17.20 ? 68% -17.2 0.00 perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.get_signal.arch_do_signal.exit_to_user_mode_prepare
13.77 ?104% -13.8 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
13.77 ?104% -13.8 0.00 perf-profile.calltrace.cycles-pp.read
12.68 ?109% -12.7 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
11.59 ?117% -11.6 0.00 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
11.59 ?117% -11.6 0.00 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
10.19 ? 64% -10.2 0.00 perf-profile.calltrace.cycles-pp.task_work_run.do_exit.do_group_exit.get_signal.arch_do_signal
10.19 ? 64% -10.2 0.00 perf-profile.calltrace.cycles-pp.__fput.task_work_run.do_exit.do_group_exit.get_signal
9.82 ?107% -9.8 0.00 perf-profile.calltrace.cycles-pp.seq_read.proc_reg_read.vfs_read.ksys_read.do_syscall_64
9.82 ?107% -9.8 0.00 perf-profile.calltrace.cycles-pp.proc_reg_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
9.03 ?107% -9.0 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.open64
9.03 ?107% -9.0 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
9.03 ?107% -9.0 0.00 perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
9.03 ?107% -9.0 0.00 perf-profile.calltrace.cycles-pp.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
9.03 ?107% -9.0 0.00 perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
9.03 ?107% -9.0 0.00 perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64
9.03 ?107% -9.0 0.00 perf-profile.calltrace.cycles-pp.open64
8.66 ? 41% -8.7 0.00 perf-profile.calltrace.cycles-pp.__libc_start_main
8.66 ? 41% -8.7 0.00 perf-profile.calltrace.cycles-pp.main.__libc_start_main
8.66 ? 41% -8.7 0.00 perf-profile.calltrace.cycles-pp.run_builtin.main.__libc_start_main
8.66 ? 41% -8.7 0.00 perf-profile.calltrace.cycles-pp.cmd_record.run_builtin.main.__libc_start_main
7.74 ?100% -7.7 0.00 perf-profile.calltrace.cycles-pp.show_interrupts.seq_read.proc_reg_read.vfs_read.ksys_read
7.02 ? 80% -7.0 0.00 perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.get_signal.arch_do_signal
7.02 ? 80% -7.0 0.00 perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.get_signal
5.09 ? 64% -5.1 0.00 perf-profile.calltrace.cycles-pp.record__mmap_read_evlist.cmd_record.run_builtin.main.__libc_start_main
5.09 ? 64% -5.1 0.00 perf-profile.calltrace.cycles-pp.perf_mmap__push.record__mmap_read_evlist.cmd_record.run_builtin.main
5.09 ? 64% -5.1 0.00 perf-profile.calltrace.cycles-pp.perf_mmap__read_head.perf_mmap__push.record__mmap_read_evlist.cmd_record.run_builtin
5.09 ? 64% -5.1 0.00 perf-profile.calltrace.cycles-pp.asm_exc_page_fault.perf_mmap__read_head.perf_mmap__push.record__mmap_read_evlist.cmd_record
5.09 ? 64% -5.1 0.00 perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.perf_mmap__read_head.perf_mmap__push.record__mmap_read_evlist
5.09 ? 64% -5.1 0.00 perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.perf_mmap__read_head.perf_mmap__push
5.09 ? 64% -5.1 0.00 perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.perf_mmap__read_head
5.09 ? 64% -5.1 0.00 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
5.09 ? 64% -5.1 0.00 perf-profile.calltrace.cycles-pp.perf_release.__fput.task_work_run.do_exit.do_group_exit
5.09 ? 64% -5.1 0.00 perf-profile.calltrace.cycles-pp.perf_event_release_kernel.perf_release.__fput.task_work_run.do_exit
4.26 ?100% -4.3 0.00 perf-profile.calltrace.cycles-pp.seq_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.00 ?100% -4.0 0.00 perf-profile.calltrace.cycles-pp.zap_pte_range.unmap_page_range.unmap_vmas.exit_mmap.mmput
4.00 ?100% -4.0 0.00 perf-profile.calltrace.cycles-pp._free_event.perf_event_release_kernel.perf_release.__fput.task_work_run
4.00 ?100% -4.0 0.00 perf-profile.calltrace.cycles-pp.mutex_lock.swevent_hlist_put_cpu.sw_perf_event_destroy._free_event.perf_event_release_kernel
4.00 ?100% -4.0 0.00 perf-profile.calltrace.cycles-pp.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
4.00 ?100% -4.0 0.00 perf-profile.calltrace.cycles-pp.__do_fault.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
4.00 ?100% -4.0 0.00 perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
4.00 ?100% -4.0 0.00 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
4.00 ?100% -4.0 0.00 perf-profile.calltrace.cycles-pp.sw_perf_event_destroy._free_event.perf_event_release_kernel.perf_release.__fput
4.00 ?100% -4.0 0.00 perf-profile.calltrace.cycles-pp.swevent_hlist_put_cpu.sw_perf_event_destroy._free_event.perf_event_release_kernel.perf_release
0.00 +0.6 0.58 ? 15% perf-profile.calltrace.cycles-pp.add_to_page_cache_lru.pagecache_get_page.prepare_pages.btrfs_buffered_write.btrfs_file_write_iter
0.00 +0.6 0.64 ? 13% perf-profile.calltrace.cycles-pp.asm_call_sysvec_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.00 +0.6 0.64 ? 13% perf-profile.calltrace.cycles-pp.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.00 +0.6 0.64 ? 13% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.asm_call_sysvec_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt
0.00 +0.7 0.68 ? 10% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin.iov_iter_copy_from_user_atomic.btrfs_copy_from_user.btrfs_buffered_write
0.00 +0.7 0.69 ? 9% perf-profile.calltrace.cycles-pp.copyin.iov_iter_copy_from_user_atomic.btrfs_copy_from_user.btrfs_buffered_write.btrfs_file_write_iter
0.00 +0.7 0.71 ? 10% perf-profile.calltrace.cycles-pp.iov_iter_copy_from_user_atomic.btrfs_copy_from_user.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write
0.00 +0.7 0.73 ? 13% perf-profile.calltrace.cycles-pp.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.00 +0.7 0.74 ? 10% perf-profile.calltrace.cycles-pp.btrfs_copy_from_user.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write.vfs_write
0.00 +0.8 0.77 ? 11% perf-profile.calltrace.cycles-pp._find_next_bit.cpumask_next.__percpu_counter_sum.__reserve_bytes.btrfs_reserve_metadata_bytes
0.00 +0.8 0.80 ? 16% perf-profile.calltrace.cycles-pp.pagecache_get_page.prepare_pages.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write
0.00 +0.8 0.83 ? 17% perf-profile.calltrace.cycles-pp.prepare_pages.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write.vfs_write
0.00 +0.8 0.84 ? 24% perf-profile.calltrace.cycles-pp.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt
0.00 +0.9 0.88 ? 50% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.00 +0.9 0.89 ? 49% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
0.00 +1.2 1.16 ? 45% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack
0.00 +1.3 1.28 ? 11% perf-profile.calltrace.cycles-pp.cpumask_next.__percpu_counter_sum.__reserve_bytes.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata
0.00 +1.3 1.30 ? 14% perf-profile.calltrace.cycles-pp.btrfs_dirty_pages.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write.vfs_write
0.00 +1.5 1.54 ? 36% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt
0.00 +2.9 2.86 ? 37% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.00 +2.9 2.89 ? 37% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.00 +2.9 2.90 ? 37% perf-profile.calltrace.cycles-pp.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.00 +4.5 4.49 ? 11% perf-profile.calltrace.cycles-pp.__percpu_counter_sum.__reserve_bytes.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_buffered_write
1.09 ?173% +6.2 7.24 ? 9% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.00 +6.4 6.41 ? 31% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.00 +16.6 16.64 ? 9% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__reserve_bytes.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata
0.00 +16.9 16.92 ? 9% perf-profile.calltrace.cycles-pp._raw_spin_lock.__reserve_bytes.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_buffered_write
0.00 +20.7 20.70 ? 10% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_buffered_write
0.00 +20.9 20.93 ? 10% perf-profile.calltrace.cycles-pp._raw_spin_lock.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_buffered_write.btrfs_file_write_iter
0.00 +21.1 21.05 ? 10% perf-profile.calltrace.cycles-pp.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write
0.00 +21.1 21.08 ? 10% perf-profile.calltrace.cycles-pp.btrfs_inode_rsv_release.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write.vfs_write
0.00 +22.0 21.99 ? 9% perf-profile.calltrace.cycles-pp.__reserve_bytes.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_file_write_iter
0.00 +22.0 22.01 ? 9% perf-profile.calltrace.cycles-pp.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write
0.00 +22.3 22.30 ? 9% perf-profile.calltrace.cycles-pp.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write.vfs_write
20.77 ? 35% +27.3 48.10 ? 10% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
3.57 ?173% +44.5 48.08 ? 10% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +47.8 47.77 ? 10% perf-profile.calltrace.cycles-pp.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write.vfs_write.ksys_write
0.00 +47.9 47.87 ? 10% perf-profile.calltrace.cycles-pp.btrfs_file_write_iter.new_sync_write.vfs_write.ksys_write.do_syscall_64
0.00 +47.9 47.89 ? 10% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +48.0 47.99 ? 10% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +48.0 48.03 ? 10% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
22.39 ? 62% -22.4 0.00 perf-profile.children.cycles-pp.syscall_exit_to_user_mode
20.21 ? 68% -20.2 0.00 perf-profile.children.cycles-pp.exit_to_user_mode_prepare
17.20 ? 68% -17.2 0.00 perf-profile.children.cycles-pp.arch_do_signal
17.20 ? 68% -17.2 0.00 perf-profile.children.cycles-pp.get_signal
17.20 ? 68% -17.2 0.00 perf-profile.children.cycles-pp.do_group_exit
17.20 ? 68% -17.2 0.00 perf-profile.children.cycles-pp.do_exit
15.16 ? 78% -15.2 0.00 perf-profile.children.cycles-pp.ksys_read
15.16 ? 78% -15.2 0.00 perf-profile.children.cycles-pp.vfs_read
14.08 ? 86% -14.1 0.00 perf-profile.children.cycles-pp.seq_read
13.77 ?104% -13.8 0.00 perf-profile.children.cycles-pp.read
13.20 ? 63% -13.2 0.00 perf-profile.children.cycles-pp.task_work_run
13.20 ? 63% -13.2 0.00 perf-profile.children.cycles-pp.__fput
9.82 ?107% -9.8 0.00 perf-profile.children.cycles-pp.proc_reg_read
9.03 ?107% -9.0 0.00 perf-profile.children.cycles-pp.do_sys_open
9.03 ?107% -9.0 0.00 perf-profile.children.cycles-pp.do_sys_openat2
9.03 ?107% -9.0 0.00 perf-profile.children.cycles-pp.do_filp_open
9.03 ?107% -9.0 0.00 perf-profile.children.cycles-pp.path_openat
9.03 ?107% -9.0 0.00 perf-profile.children.cycles-pp.open64
8.66 ? 41% -8.7 0.00 perf-profile.children.cycles-pp.__libc_start_main
8.66 ? 41% -8.7 0.00 perf-profile.children.cycles-pp.main
8.66 ? 41% -8.7 0.00 perf-profile.children.cycles-pp.run_builtin
8.66 ? 41% -8.7 0.00 perf-profile.children.cycles-pp.cmd_record
8.66 ? 41% -8.7 0.00 perf-profile.children.cycles-pp.perf_mmap__push
7.74 ?100% -7.7 0.00 perf-profile.children.cycles-pp.show_interrupts
7.02 ? 80% -7.0 0.00 perf-profile.children.cycles-pp.mmput
7.02 ? 80% -7.0 0.00 perf-profile.children.cycles-pp.exit_mmap
5.18 ?106% -5.2 0.00 perf-profile.children.cycles-pp.refill_obj_stock
5.09 ? 64% -5.1 0.00 perf-profile.children.cycles-pp.record__mmap_read_evlist
5.09 ? 64% -5.1 0.00 perf-profile.children.cycles-pp.perf_mmap__read_head
5.09 ? 64% -5.1 0.00 perf-profile.children.cycles-pp.asm_exc_page_fault
5.09 ? 64% -5.1 0.00 perf-profile.children.cycles-pp.exc_page_fault
5.09 ? 64% -5.1 0.00 perf-profile.children.cycles-pp.do_user_addr_fault
5.09 ? 64% -5.1 0.00 perf-profile.children.cycles-pp.handle_mm_fault
5.09 ? 64% -5.1 0.00 perf-profile.children.cycles-pp.__handle_mm_fault
5.09 ? 64% -5.1 0.00 perf-profile.children.cycles-pp.perf_release
5.09 ? 64% -5.1 0.00 perf-profile.children.cycles-pp.perf_event_release_kernel
4.26 ?100% -4.3 0.00 perf-profile.children.cycles-pp.obj_cgroup_charge
4.10 ?100% -4.1 0.00 perf-profile.children.cycles-pp.kfree
4.00 ?100% -4.0 0.00 perf-profile.children.cycles-pp.zap_pte_range
4.00 ?100% -4.0 0.00 perf-profile.children.cycles-pp._free_event
4.00 ?100% -4.0 0.00 perf-profile.children.cycles-pp.mutex_lock
4.00 ?100% -4.0 0.00 perf-profile.children.cycles-pp.do_fault
4.00 ?100% -4.0 0.00 perf-profile.children.cycles-pp.__do_fault
4.00 ?100% -4.0 0.00 perf-profile.children.cycles-pp.unmap_vmas
4.00 ?100% -4.0 0.00 perf-profile.children.cycles-pp.unmap_page_range
4.00 ?100% -4.0 0.00 perf-profile.children.cycles-pp.sw_perf_event_destroy
4.00 ?100% -4.0 0.00 perf-profile.children.cycles-pp.swevent_hlist_put_cpu
0.00 +0.1 0.05 ? 8% perf-profile.children.cycles-pp.rmqueue_bulk
0.00 +0.1 0.06 ? 13% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.00 +0.1 0.07 ? 13% perf-profile.children.cycles-pp.btrfs_delalloc_release_extents
0.00 +0.1 0.07 ? 17% perf-profile.children.cycles-pp.rcu_idle_exit
0.00 +0.1 0.07 ? 19% perf-profile.children.cycles-pp.free_extent_state
0.00 +0.1 0.07 ? 24% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.00 +0.1 0.07 ? 15% perf-profile.children.cycles-pp.btrfs_calculate_inode_block_rsv_size
0.00 +0.1 0.08 ? 22% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.00 +0.1 0.08 ? 20% perf-profile.children.cycles-pp.btrfs_write_check
0.00 +0.1 0.08 ? 15% perf-profile.children.cycles-pp.btrfs_transaction_in_commit
0.00 +0.1 0.08 ? 19% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.00 +0.1 0.08 ? 23% perf-profile.children.cycles-pp.merge_state
0.00 +0.1 0.09 ? 45% perf-profile.children.cycles-pp.run_rebalance_domains
0.00 +0.1 0.09 ? 14% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.00 +0.1 0.10 ? 30% perf-profile.children.cycles-pp.__mod_memcg_state
0.00 +0.1 0.10 ? 29% perf-profile.children.cycles-pp.rmqueue
0.00 +0.1 0.12 ? 39% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.00 +0.1 0.12 ? 8% perf-profile.children.cycles-pp.irqtime_account_irq
0.00 +0.1 0.13 ? 14% perf-profile.children.cycles-pp.lock_extent_bits
0.00 +0.1 0.13 ? 20% perf-profile.children.cycles-pp.clear_state_bit
0.00 +0.1 0.13 ? 23% perf-profile.children.cycles-pp.update_sd_lb_stats
0.00 +0.1 0.13 ? 14% perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.00 +0.1 0.13 ? 26% perf-profile.children.cycles-pp.find_busiest_group
0.00 +0.1 0.14 ? 10% perf-profile.children.cycles-pp.native_sched_clock
0.00 +0.1 0.14 ? 26% perf-profile.children.cycles-pp.account_page_dirtied
0.00 +0.1 0.14 ? 88% perf-profile.children.cycles-pp.arch_scale_freq_tick
0.00 +0.1 0.15 ? 15% perf-profile.children.cycles-pp.lru_cache_add
0.00 +0.1 0.15 ? 10% perf-profile.children.cycles-pp.sched_clock
0.00 +0.2 0.15 ? 10% perf-profile.children.cycles-pp.read_tsc
0.00 +0.2 0.16 ? 10% perf-profile.children.cycles-pp.sched_clock_cpu
0.00 +0.2 0.17 ? 17% perf-profile.children.cycles-pp.lapic_next_deadline
0.00 +0.2 0.17 ? 25% perf-profile.children.cycles-pp.get_page_from_freelist
0.00 +0.2 0.17 ? 15% perf-profile.children.cycles-pp.mem_cgroup_charge
0.00 +0.2 0.18 ? 21% perf-profile.children.cycles-pp.load_balance
0.00 +0.2 0.18 ? 40% perf-profile.children.cycles-pp.xas_load
0.00 +0.2 0.19 ? 19% perf-profile.children.cycles-pp.find_next_bit
0.00 +0.2 0.21 ? 14% perf-profile.children.cycles-pp.btrfs_drop_pages
0.00 +0.2 0.21 ?109% perf-profile.children.cycles-pp.update_rq_clock
0.00 +0.2 0.22 ? 21% perf-profile.children.cycles-pp.btrfs_get_alloc_profile
0.00 +0.2 0.22 ? 42% perf-profile.children.cycles-pp.btrfs_get_extent
0.00 +0.2 0.22 ?124% perf-profile.children.cycles-pp.update_blocked_averages
0.00 +0.2 0.23 ? 17% perf-profile.children.cycles-pp.lock_and_cleanup_extent_if_need
0.00 +0.2 0.24 ? 12% perf-profile.children.cycles-pp._raw_spin_trylock
0.00 +0.2 0.24 ? 20% perf-profile.children.cycles-pp.calc_available_free_space
0.00 +0.2 0.25 ? 6% perf-profile.children.cycles-pp.native_irq_return_iret
0.00 +0.3 0.26 ? 17% perf-profile.children.cycles-pp.alloc_extent_state
0.00 +0.3 0.27 ? 21% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.00 +0.3 0.29 ? 20% perf-profile.children.cycles-pp.__set_page_dirty_nobuffers
0.00 +0.3 0.33 ? 15% perf-profile.children.cycles-pp.clear_extent_bit
0.00 +0.4 0.36 ? 41% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.00 +0.4 0.40 ? 8% perf-profile.children.cycles-pp.btrfs_set_delalloc_extent
0.00 +0.4 0.40 ? 10% perf-profile.children.cycles-pp.btrfs_reserve_data_bytes
0.00 +0.4 0.41 ? 93% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.00 +0.4 0.41 ? 8% perf-profile.children.cycles-pp.set_state_bits
0.00 +0.4 0.42 ? 17% perf-profile.children.cycles-pp.__add_to_page_cache_locked
0.00 +0.5 0.47 ? 9% perf-profile.children.cycles-pp.btrfs_check_data_free_space
0.00 +0.5 0.47 ? 8% perf-profile.children.cycles-pp.set_extent_bit
0.00 +0.5 0.50 ? 12% perf-profile.children.cycles-pp.rebalance_domains
0.00 +0.5 0.52 ? 15% perf-profile.children.cycles-pp.__clear_extent_bit
0.00 +0.6 0.56 ? 27% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.00 +0.6 0.56 ? 55% perf-profile.children.cycles-pp.scheduler_tick
0.00 +0.6 0.58 ? 15% perf-profile.children.cycles-pp.add_to_page_cache_lru
0.00 +0.6 0.60 ? 9% perf-profile.children.cycles-pp.__set_extent_bit
0.00 +0.6 0.62 ? 40% perf-profile.children.cycles-pp.timekeeping_max_deferment
0.00 +0.6 0.62 ? 71% perf-profile.children.cycles-pp.tick_irq_enter
0.00 +0.7 0.68 ? 63% perf-profile.children.cycles-pp.irq_enter_rcu
0.00 +0.7 0.70 ? 10% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.00 +0.7 0.70 ? 10% perf-profile.children.cycles-pp.copyin
0.00 +0.7 0.71 ? 10% perf-profile.children.cycles-pp.iov_iter_copy_from_user_atomic
0.00 +0.7 0.74 ? 10% perf-profile.children.cycles-pp.btrfs_copy_from_user
0.00 +0.8 0.83 ? 47% perf-profile.children.cycles-pp.__softirqentry_text_start
0.00 +0.8 0.83 ? 46% perf-profile.children.cycles-pp.do_softirq_own_stack
0.00 +0.8 0.83 ? 12% perf-profile.children.cycles-pp._find_next_bit
0.00 +0.8 0.83 ? 17% perf-profile.children.cycles-pp.prepare_pages
0.00 +0.9 0.90 ? 18% perf-profile.children.cycles-pp.pagecache_get_page
0.00 +0.9 0.90 ? 28% perf-profile.children.cycles-pp.clockevents_program_event
0.00 +0.9 0.93 ? 41% perf-profile.children.cycles-pp.irq_exit_rcu
0.00 +0.9 0.94 ? 38% perf-profile.children.cycles-pp.tick_nohz_next_event
0.00 +1.0 0.97 ? 47% perf-profile.children.cycles-pp.update_process_times
0.00 +1.0 0.98 ? 47% perf-profile.children.cycles-pp.tick_sched_handle
0.00 +1.0 1.02 ? 36% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.00 +1.3 1.27 ? 44% perf-profile.children.cycles-pp.tick_sched_timer
0.00 +1.3 1.30 ? 14% perf-profile.children.cycles-pp.btrfs_dirty_pages
0.00 +1.4 1.44 ? 11% perf-profile.children.cycles-pp.cpumask_next
0.00 +1.7 1.69 ? 37% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.00 +3.1 3.12 ? 40% perf-profile.children.cycles-pp.hrtimer_interrupt
0.00 +3.2 3.16 ? 39% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
1.09 ?173% +3.9 5.02 ? 34% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.00 +4.5 4.52 ? 11% perf-profile.children.cycles-pp.__percpu_counter_sum
1.09 ?173% +5.3 6.43 ? 20% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.00 +6.4 6.43 ? 31% perf-profile.children.cycles-pp.menu_select
0.00 +21.1 21.07 ? 10% perf-profile.children.cycles-pp.btrfs_block_rsv_release
0.00 +21.1 21.08 ? 10% perf-profile.children.cycles-pp.btrfs_inode_rsv_release
26.37 ? 31% +21.8 48.19 ? 10% perf-profile.children.cycles-pp.do_syscall_64
0.00 +22.0 22.01 ? 9% perf-profile.children.cycles-pp.btrfs_reserve_metadata_bytes
0.00 +22.3 22.30 ? 9% perf-profile.children.cycles-pp.btrfs_delalloc_reserve_metadata
0.00 +22.3 22.30 ? 9% perf-profile.children.cycles-pp.__reserve_bytes
0.00 +37.5 37.46 ? 9% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.00 +39.1 39.12 ? 10% perf-profile.children.cycles-pp._raw_spin_lock
0.00 +47.8 47.77 ? 10% perf-profile.children.cycles-pp.btrfs_buffered_write
0.00 +47.9 47.88 ? 10% perf-profile.children.cycles-pp.btrfs_file_write_iter
0.00 +47.9 47.91 ? 10% perf-profile.children.cycles-pp.new_sync_write
0.00 +48.0 48.01 ? 10% perf-profile.children.cycles-pp.vfs_write
0.00 +48.0 48.05 ? 10% perf-profile.children.cycles-pp.ksys_write
5.65 ?106% -5.7 0.00 perf-profile.self.cycles-pp.show_interrupts
4.00 ?100% -4.0 0.00 perf-profile.self.cycles-pp.zap_pte_range
4.00 ?100% -4.0 0.00 perf-profile.self.cycles-pp.mutex_lock
0.00 +0.1 0.05 ? 9% perf-profile.self.cycles-pp.perf_mux_hrtimer_handler
0.00 +0.1 0.06 ? 22% perf-profile.self.cycles-pp.btrfs_buffered_write
0.00 +0.1 0.07 ? 25% perf-profile.self.cycles-pp.rcu_sched_clock_irq
0.00 +0.1 0.07 ? 26% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.00 +0.1 0.07 ? 12% perf-profile.self.cycles-pp.do_idle
0.00 +0.1 0.07 ? 19% perf-profile.self.cycles-pp.free_extent_state
0.00 +0.1 0.08 ? 24% perf-profile.self.cycles-pp.__add_to_page_cache_locked
0.00 +0.1 0.08 ? 21% perf-profile.self.cycles-pp.update_sd_lb_stats
0.00 +0.1 0.09 ? 24% perf-profile.self.cycles-pp.tick_sched_timer
0.00 +0.1 0.09 ? 25% perf-profile.self.cycles-pp.btrfs_get_alloc_profile
0.00 +0.1 0.09 ? 26% perf-profile.self.cycles-pp.btrfs_file_write_iter
0.00 +0.1 0.09 ? 19% perf-profile.self.cycles-pp.btrfs_reserve_data_bytes
0.00 +0.1 0.10 ? 11% perf-profile.self.cycles-pp.btrfs_block_rsv_release
0.00 +0.1 0.10 ? 11% perf-profile.self.cycles-pp.rebalance_domains
0.00 +0.1 0.10 ? 15% perf-profile.self.cycles-pp.btrfs_set_delalloc_extent
0.00 +0.1 0.10 ? 22% perf-profile.self.cycles-pp.__clear_extent_bit
0.00 +0.1 0.12 ? 27% perf-profile.self.cycles-pp.btrfs_dirty_pages
0.00 +0.1 0.12 ? 39% perf-profile.self.cycles-pp.__intel_pmu_enable_all
0.00 +0.1 0.13 ? 39% perf-profile.self.cycles-pp.xas_load
0.00 +0.1 0.14 ? 10% perf-profile.self.cycles-pp.native_sched_clock
0.00 +0.1 0.14 ? 85% perf-profile.self.cycles-pp.arch_scale_freq_tick
0.00 +0.1 0.15 ? 12% perf-profile.self.cycles-pp.read_tsc
0.00 +0.2 0.15 ? 18% perf-profile.self.cycles-pp.alloc_extent_state
0.00 +0.2 0.16 ? 9% perf-profile.self.cycles-pp.update_process_times
0.00 +0.2 0.17 ? 17% perf-profile.self.cycles-pp.lapic_next_deadline
0.00 +0.2 0.19 ? 18% perf-profile.self.cycles-pp.find_next_bit
0.00 +0.2 0.19 ? 16% perf-profile.self.cycles-pp.btrfs_drop_pages
0.00 +0.2 0.22 ? 42% perf-profile.self.cycles-pp.tick_nohz_next_event
0.00 +0.2 0.23 ? 13% perf-profile.self.cycles-pp._raw_spin_trylock
0.00 +0.2 0.25 ? 6% perf-profile.self.cycles-pp.native_irq_return_iret
0.00 +0.3 0.32 ? 12% perf-profile.self.cycles-pp.__reserve_bytes
0.00 +0.3 0.33 ? 41% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.00 +0.4 0.38 ?102% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
0.00 +0.4 0.45 ? 10% perf-profile.self.cycles-pp.cpumask_next
0.00 +0.5 0.52 ? 25% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.00 +0.6 0.62 ? 40% perf-profile.self.cycles-pp.timekeeping_max_deferment
0.00 +0.7 0.69 ? 10% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.00 +0.8 0.80 ? 11% perf-profile.self.cycles-pp._find_next_bit
0.00 +1.7 1.68 ? 17% perf-profile.self.cycles-pp._raw_spin_lock
0.00 +2.8 2.80 ? 11% perf-profile.self.cycles-pp.__percpu_counter_sum
0.00 +4.6 4.59 ? 39% perf-profile.self.cycles-pp.cpuidle_enter_state
0.00 +5.4 5.35 ? 31% perf-profile.self.cycles-pp.menu_select
0.00 +37.3 37.29 ? 9% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath



fio.write_bw_MBps

2200 +--------------------------------------------------------------------+
|+.++++.+++.++++.++++.+++. +++.+++.++++.++++.+++.++++.+ +.++++.++++. |
2000 |-+ + + +|
1800 |-+ |
| |
1600 |-+ |
| |
1400 |-+ |
| |
1200 |-+ |
1000 |-+ |
| |
800 |O+ OOO OOO OOOO OOOO OOO OOOO OOO OOOO OOOO OOO O |
| O |
600 +--------------------------------------------------------------------+


fio.write_iops

550000 +------------------------------------------------------------------+
| +.++ + + ++++ +++.+ + ++++ ++++ + +.+++ +++ .++++.++ +.+ |
500000 |-+ + + +|
| |
450000 |-+ |
400000 |-+ |
| |
350000 |-+ |
| |
300000 |-+ |
250000 |-+ |
| |
200000 |O+ OOOO OOOO OOOO OOOO OOOO OOOO OOOOO OOOO OOO |
| O |
150000 +------------------------------------------------------------------+


fio.write_clat_mean_us

45000 +-------------------------------------------------------------------+
| O |
40000 |-+ OO OO OO |
|O O OOOO OO O O OOOO OOOO OOO OOOO OOOO OO |
35000 |-+ |
| |
30000 |-+ |
| |
25000 |-+ |
| |
20000 |-+ |
| |
15000 |+.++++.++++.++++.++++.++++.++++.+++.++++.++++.++++.++++.++++.++++.+|
| |
10000 +-------------------------------------------------------------------+


fio.write_clat_stddev

400000 +------------------------------------------------------------------+
| + + +. + + + + :|
350000 |-+.+ : ++ ++ + ++ +++ ++ + ++: ++ +.+ + ++ +.+ +.+ |
| : ++.+ : : : : : : : : :+ : : : : : +. + |
300000 |-: : : : : : : : : + : : : : : + + : |
250000 |:+ :: :: :: :: : :: :: + |
|: : : : : : : : |
200000 |++ + + + + + + + |
| |
150000 |-+ |
100000 |-+ |
| |
50000 |-+ |
| |
0 +------------------------------------------------------------------+


fio.write_clat_90__us

50000 +-------------------------------------------------------------------+
| |
45000 |-+ OO O O OOO OOOO OO O OOOO OOO O O O OO |
40000 |O+ O O O O O O O O OO |
| |
35000 |-+ |
| |
30000 |-+ |
| |
25000 |-+ |
20000 |-+ |
| |
15000 |+.+ .++ .++ ++ + .++ ++. + ++ ++. + .++++.++ |
| +++ ++ ++.+ +.++ + ++.+ ++ +.+ +.++ ++ + ++.+|
10000 +-------------------------------------------------------------------+


fio.write_clat_95__us

55000 +-------------------------------------------------------------------+
| O |
50000 |-+ |
45000 |O+ OOO OOOO OOOO OOOO OOOO OOOO OOO OOOO OOOO OO |
| |
40000 |-+ |
35000 |-+ |
| |
30000 |-+ |
25000 |-+ |
| |
20000 |+. .+ + .+ + + + .+ .+ |
15000 |-+++++.++++ +++.+ ++.++++ +++.+ +.++++.+ ++.++ +.++++ +++ +++. |
| +|
10000 +-------------------------------------------------------------------+


fio.write_clat_99__us

60000 +-------------------------------------------------------------------+
| O |
55000 |O+ O O O O O |
50000 |-+ OOO OO O OOOO OOOO OO O OOOO OOO OOO O OO O |
| |
45000 |-+ |
40000 |-+ |
| |
35000 |-+ |
30000 |-+ |
| |
25000 |-+ |
20000 |+.++ +.++++.+++ .++++. +++.+++ .+++.+ ++.++++. +++.+ ++.++++.++++. |
| + + + + + + + +|
15000 +-------------------------------------------------------------------+


fio.latency_20us_

80 +----------------------------------------------------------------------+
|+ + + + + + + : |
70 |:+ : : : : : : :: + |
60 |:: : :: : :: + :: :: :: :: |
|:: :: : : + :: : : : : : :: : +. + : |
50 |-+: + : : : : ::: : : : : : : : : : +: + + |
| ++ + +.+: +.+ : ++ + :: +.+ : ++ : :: ++ : ++ : + : |
40 |-+ :+ + :+ + : + :+ : ++ + + :: ++ |
| + + + + + + + + |
30 |-+ +|
20 |-+ |
| |
10 |-+ |
| O |
0 +----------------------------------------------------------------------+


fio.latency_50us_

100 +---------------------------------------------------------------------+
90 |-+O O |
| |
80 |-+ |
70 |-+ |
| |
60 |-+ |
50 |-+ |
40 |-+ |
| |
30 |-+ |
20 |-+ |
| |
10 |-+ |
0 +---------------------------------------------------------------------+


fio.latency_100us_

9 +-----------------------------------------------------------------------+
| O |
8 |-+ |
7 |-+ |
| |
6 |-+ |
5 |-+ |
| |
4 |-+ |
3 |-+ |
| O O |
2 |O+ O O O O |
1 |-+ O O O OO OO OO OO OO OOO OOO OO O |
| O O O O O O OO |
0 +-----------------------------------------------------------------------+


fio.workload

6.8e+07 +-----------------------------------------------------------------+
|++.++++.+++++.++++.+++++.++++.+++++.++++.+++++.++++.+++++.++++.++|
6.6e+07 |-+ |
| O |
6.4e+07 |-+ O |
| O O O OOO O OO O OOOO O OO |
6.2e+07 |O+ O OOOO O O O |
| OO O OO O O O O |
6e+07 |-+ |
| |
5.8e+07 |-+ |
| |
5.6e+07 |-+ |
| O |
5.4e+07 +-----------------------------------------------------------------+


fio.time.system_time

2400 +--------------------------------------------------------------------+
2200 |-+ |
| |
2000 |-+ |
1800 |-+ |
| |
1600 |-+ |
1400 |-+ |
1200 |-+ |
| |
1000 |-+ |
800 |+. + + + + +. + .+ |
|: ++ .+++. :++. :+.+ +.+ :+.+ + ++ .+ : ++ :++. ++ +++.++ |
600 |-+ ++ + ++ + + ++ + ++ +.+ + ++.+|
400 +--------------------------------------------------------------------+


fio.time.elapsed_time

320 +---------------------------------------------------------------------+
300 |O+OOOO OOO OOO OOOO OOO OOO OOOO OOO OOOO OOO OOO |
| |
280 |-+ |
260 |-+ |
| |
240 |-+ |
220 |-+ |
200 |-+ |
| |
180 |-+ |
160 |-+ |
| |
140 |-.+++ .+ +.+++ .+++.+++. +++.+ +.+++ .+ +.+++. +++.+++.+++.+++ .+|
120 +---------------------------------------------------------------------+


fio.time.elapsed_time.max

320 +---------------------------------------------------------------------+
300 |O+OOOO OOO OOO OOOO OOO OOO OOOO OOO OOOO OOO OOO |
| |
280 |-+ |
260 |-+ |
| |
240 |-+ |
220 |-+ |
200 |-+ |
| |
180 |-+ |
160 |-+ |
| |
140 |-.+++ .+ +.+++ .+++.+++. +++.+ +.+++ .+ +.+++. +++.+++.+++.+++ .+|
120 +---------------------------------------------------------------------+


fio.time.involuntary_context_switches

9000 +--------------------------------------------------------------------+
| O |
8000 |-+ O O O O OOOO O |
7000 |-+ OO |
| O OO O O O O O OO |
6000 |-+ O OOO O O OOO O O O O |
|O OO |
5000 |-+ |
| |
4000 |-+ |
3000 |-+ |
|+. + + + .+ +. + .+ .+ |
2000 |-+++++.+++.+ ++.++ +.+++.++ +.+++ +++.+++ +++.+ ++.+++ +++ + +. |
| + +|
1000 +--------------------------------------------------------------------+


fio.time.file_system_outputs

5.4e+08 +-----------------------------------------------------------------+
5.3e+08 |-+ |
| |
5.2e+08 |-+ O |
5.1e+08 |-+ O O |
| O O O OO OO O O O OO |
5e+08 |O+ O OO OOOO O O O O O |
4.9e+08 |-+ OO O OO O O O O |
4.8e+08 |-+ |
| |
4.7e+08 |-+ |
4.6e+08 |-+ |
| |
4.5e+08 |-+ |
4.4e+08 +-----------------------------------------------------------------+


[*] bisect-good sample
[O] bisect-bad sample



Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Oliver Sang


Attachments:
config-5.10.0-rc2-00074-g96bed17ad9d4 (173.93 kB)
job-script (8.15 kB)
job.yaml (5.75 kB)
reproduce (768.00 B)

2020-11-04 16:52:38

by Josef Bacik

[permalink] [raw]
Subject: Re: [btrfs] 96bed17ad9: fio.write_iops -59.7% regression

On 11/4/20 1:16 AM, kernel test robot wrote:
> Greeting,
>
> FYI, we noticed a -59.7% regression of fio.write_iops due to commit:
>
>
> commit: 96bed17ad9d425ff6958a2e6f87179453a3d76f2 ("btrfs: simplify the logic in need_preemptive_flushing")
> https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
>
>
> in testcase: fio-basic
> on test machine: 192 threads Intel(R) Xeon(R) CPU @ 2.20GHz with 192G memory
> with following parameters:
>
> disk: 1SSD
> fs: btrfs
> runtime: 300s
> nr_task: 8
> rw: write
> bs: 4k
> ioengine: sync
> test_size: 256g
> cpufreq_governor: performance
> ucode: 0x4002f01
>
> test-description: Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
> test-url: https://github.com/axboe/fio
>
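
For anyone trying to approximate the quoted setup outside of lkp, a minimal
fio invocation along these lines should produce the same I/O pattern (the
mount point and the per-job size split are assumptions here; the exact
options lkp generates are in the attached job.yaml):

  # sketch only, not the lkp-generated job
  fio --name=task_0 --directory=/mnt/btrfs --ioengine=sync --rw=write \
      --bs=4k --numjobs=8 --size=32g --runtime=300 --group_reporting

--group_reporting aggregates the 8 jobs into a single result block like the
ones below.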

I generally ignore these reports, but since it's FIO I figured at least the test
itself was valid. However, once again I'm unable to reproduce the results.

linus master:

task_0: (groupid=0, jobs=8): err= 0: pid=38586: Wed Nov 4 08:13:36 2020
write: IOPS=168k, BW=655MiB/s (687MB/s)(192GiB/300001msec); 0 zone resets
clat (usec): min=26, max=786, avg=47.15, stdev= 7.21
lat (usec): min=26, max=786, avg=47.21, stdev= 7.21
clat percentiles (nsec):
| 1.00th=[31872], 5.00th=[35584], 10.00th=[37632], 20.00th=[40704],
| 30.00th=[43264], 40.00th=[45312], 50.00th=[47360], 60.00th=[48896],
| 70.00th=[50944], 80.00th=[52992], 90.00th=[56064], 95.00th=[59136],
| 99.00th=[65280], 99.50th=[68096], 99.90th=[74240], 99.95th=[77312],
| 99.99th=[88576]
bw ( KiB/s): min=63752, max=112864, per=12.50%, avg=83810.53, stdev=3403.48, samples=4792
iops : min=15938, max=28216, avg=20952.61, stdev=850.87, samples=4792
lat (usec) : 50=65.73%, 100=34.27%, 250=0.01%, 500=0.01%, 750=0.01%
lat (usec) : 1000=0.01%
cpu : usr=2.22%, sys=97.77%, ctx=5054, majf=0, minf=63
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,50298940,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
WRITE: bw=655MiB/s (687MB/s), 655MiB/s-655MiB/s (687MB/s-687MB/s), io=192GiB (206GB), run=300001-300001msec

kdave/for-next-20201104
task_0: (groupid=0, jobs=8): err= 0: pid=6652: Wed Nov 4 08:41:52 2020
write: IOPS=180k, BW=705MiB/s (739MB/s)(207GiB/300001msec); 0 zone resets
clat (usec): min=17, max=10603, avg=43.91, stdev= 9.62
lat (usec): min=17, max=10603, avg=43.98, stdev= 9.62
clat percentiles (nsec):
| 1.00th=[25984], 5.00th=[31104], 10.00th=[33536], 20.00th=[37120],
| 30.00th=[39168], 40.00th=[41216], 50.00th=[43264], 60.00th=[45824],
| 70.00th=[47872], 80.00th=[50944], 90.00th=[54528], 95.00th=[57600],
| 99.00th=[64768], 99.50th=[68096], 99.90th=[74240], 99.95th=[78336],
| 99.99th=[90624]
bw ( KiB/s): min=66760, max=123160, per=12.50%, avg=90221.11, stdev=9052.52, samples=4792
iops : min=16690, max=30790, avg=22555.24, stdev=2263.14, samples=4792
lat (usec) : 20=0.01%, 50=77.24%, 100=22.75%, 250=0.01%, 500=0.01%
lat (usec) : 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 20=0.01%
cpu : usr=1.67%, sys=98.31%, ctx=4806, majf=0, minf=68
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,54134917,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
WRITE: bw=705MiB/s (739MB/s), 705MiB/s-705MiB/s (739MB/s-739MB/s), io=207GiB (222GB), run=300001-300001msec
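
(For reference: the two aggregated results above come out to roughly 168k vs
180k write IOPS, i.e. (180k - 168k) / 168k ≈ 7%.)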

So instead of a -60% iops regression, I'm seeing a 7% iops improvement. The only
difference is that my machine doesn't have 192 threads; it has 80. Thanks,

Josef