2018-06-22 08:40:45

by kernel test robot

Subject: [lkp-robot] [fs] 9965ed174e: will-it-scale.per_process_ops -3.7% regression


Greetings,

FYI, we noticed a -3.7% regression of will-it-scale.per_process_ops due to commit:


commit: 9965ed174e7d38896e5d2582159d8ef31ecd4cb5 ("fs: add new vfs_poll and file_can_poll helpers")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
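
The commit itself is small: it adds two inline helpers to include/linux/poll.h so callers stop dereferencing file->f_op->poll directly. A condensed sketch of their shape (paraphrased; see the git link above for the authoritative definitions):

	/* Does the file have a ->poll method at all? */
	static inline bool file_can_poll(struct file *file)
	{
		return file->f_op->poll;
	}

	/* Dispatch to ->poll, falling back to the default mask. */
	static inline __poll_t vfs_poll(struct file *file,
					struct poll_table_struct *pt)
	{
		if (unlikely(!file->f_op->poll))
			return DEFAULT_POLLMASK;
		return file->f_op->poll(file, pt);
	}

Both helpers sit on the per-fd hot path of do_sys_poll(), which is why even a small change here is visible in a poll microbenchmark.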

in testcase: will-it-scale
on test machine: 32 threads Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz with 64G memory
with the following parameters:

test: poll2
cpufreq_governor: performance

test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process-based and a threads-based variant of each test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
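
For readers unfamiliar with the benchmark: poll2 stresses the poll() entry path itself, so per-fd lookup and dispatch costs dominate each iteration. A condensed, self-contained sketch of what each parallel copy does (the pipe count and function names here are illustrative, not copied from the testcase; see the test-url above for the real source):

	/* Illustrative sketch of a poll2-style testcase: poll a large set of
	 * pipe fds with a zero timeout in a tight loop, so each iteration is
	 * essentially one trip through do_sys_poll() and its fd lookups. */
	#include <poll.h>
	#include <unistd.h>

	#define NR_PIPES 128

	static int pipefd[NR_PIPES][2];
	static struct pollfd pfds[NR_PIPES * 2];

	void testcase(unsigned long long *iterations)
	{
		int i;

		for (i = 0; i < NR_PIPES; i++) {
			pipe(pipefd[i]);
			pfds[2 * i].fd = pipefd[i][0];
			pfds[2 * i].events = POLLIN;
			pfds[2 * i + 1].fd = pipefd[i][1];
			pfds[2 * i + 1].events = POLLOUT;
		}

		for (;;) {
			poll(pfds, NR_PIPES * 2, 0);	/* returns immediately */
			(*iterations)++;		/* ops counter */
		}
	}

This is consistent with the profile below, where __fget/__fget_light and fput account for roughly half of all cycles.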



Details are as below:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml

=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.2/debian-x86_64-2016-08-31.cgz/lkp-sb03/poll2/will-it-scale

commit:
6e8b704df5 ("fs: update documentation to mention __poll_t and match the code")
9965ed174e ("fs: add new vfs_poll and file_can_poll helpers")

6e8b704df58407aa 9965ed174e7d38896e5d258215
---------------- --------------------------
%stddev %change %stddev
\ | \
520538 -3.7% 501456 will-it-scale.per_process_ops
256505 -4.6% 244715 will-it-scale.per_thread_ops
0.55 -3.6% 0.53 ± 2% will-it-scale.scalability
310.44 -0.0% 310.44 will-it-scale.time.elapsed_time
310.44 -0.0% 310.44 will-it-scale.time.elapsed_time.max
15891 ± 4% -0.7% 15775 ± 5% will-it-scale.time.involuntary_context_switches
9929 -0.2% 9911 will-it-scale.time.maximum_resident_set_size
17263 -0.5% 17178 will-it-scale.time.minor_page_faults
4096 +0.0% 4096 will-it-scale.time.page_size
806.75 -0.0% 806.50 will-it-scale.time.percent_of_cpu_this_job_got
2324 +0.3% 2330 will-it-scale.time.system_time
181.23 -3.3% 175.19 will-it-scale.time.user_time
1551 ± 20% +6.2% 1648 ± 15% will-it-scale.time.voluntary_context_switches
54673117 -4.2% 52370410 will-it-scale.workload
104939 +4.7% 109841 ± 7% interrupts.CAL:Function_call_interrupts
25.69 -0.3% 25.62 boot-time.boot
15.08 -0.2% 15.04 boot-time.dhcp
771.17 -0.6% 766.39 boot-time.idle
15.75 -0.4% 15.70 boot-time.kernel_boot
11082 ± 35% -23.6% 8462 ± 22% softirqs.NET_RX
493297 ± 5% -2.4% 481569 softirqs.RCU
761170 -0.3% 759091 softirqs.SCHED
4696028 +2.0% 4791321 ± 2% softirqs.TIMER
49.48 +0.0 49.53 mpstat.cpu.idle%
0.00 ± 33% +0.0 0.00 ± 14% mpstat.cpu.iowait%
0.02 ± 65% -0.0 0.02 ± 68% mpstat.cpu.soft%
45.21 +0.2 45.39 mpstat.cpu.sys%
5.28 -0.2 5.05 mpstat.cpu.usr%
192.00 +0.0% 192.00 vmstat.memory.buff
1303911 +0.0% 1304198 vmstat.memory.cache
64247669 -0.0% 64246863 vmstat.memory.free
0.00 -100.0% 0.00 vmstat.procs.b
16.00 +0.0% 16.00 vmstat.procs.r
1991 ± 9% -5.5% 1882 ± 4% vmstat.system.cs
32640 +0.0% 32654 vmstat.system.in
0.00 -100.0% 0.00 numa-numastat.node0.interleave_hit
199081 ± 37% +16.2% 231408 ± 9% numa-numastat.node0.local_node
199089 ± 37% +16.8% 232476 ± 9% numa-numastat.node0.numa_hit
7.75 ± 55% +13716.1% 1070 ±164% numa-numastat.node0.other_node
0.00 -100.0% 0.00 numa-numastat.node1.interleave_hit
437958 ± 16% -8.1% 402582 ± 5% numa-numastat.node1.local_node
444168 ± 16% -8.2% 407775 ± 5% numa-numastat.node1.numa_hit
6209 -16.4% 5193 ± 34% numa-numastat.node1.other_node
2091730 ± 5% -3.1% 2026939 ± 2% cpuidle.C1.time
91174 ± 5% +2.7% 93645 ± 5% cpuidle.C1.usage
5738844 ± 4% -2.4% 5600995 ± 4% cpuidle.C1E.time
28195 ± 3% -2.0% 27629 ± 4% cpuidle.C1E.usage
4880169 ± 9% -6.1% 4580663 cpuidle.C3.time
15144 ± 6% -4.7% 14432 ± 2% cpuidle.C3.usage
4.871e+09 +0.1% 4.874e+09 cpuidle.C7.time
4972233 +0.1% 4979280 cpuidle.C7.usage
20948 ± 3% -0.9% 20767 ± 2% cpuidle.POLL.time
1832 ± 7% +2.3% 1874 ± 4% cpuidle.POLL.usage
1573 -0.0% 1573 turbostat.Avg_MHz
51.07 -0.0 51.07 turbostat.Busy%
3088 -0.0% 3088 turbostat.Bzy_MHz
89012 ± 5% +2.5% 91251 ± 5% turbostat.C1
0.02 +0.0 0.02 turbostat.C1%
27955 ± 3% -2.0% 27387 ± 4% turbostat.C1E
0.06 -0.0 0.06 ± 7% turbostat.C1E%
15052 ± 6% -4.7% 14343 ± 2% turbostat.C3
0.05 ± 14% -0.0 0.05 ± 9% turbostat.C3%
4971682 +0.1% 4978589 turbostat.C7
48.82 +0.0 48.83 turbostat.C7%
21.61 -0.0% 21.60 turbostat.CPU%c1
27.24 +0.0% 27.25 turbostat.CPU%c3
0.07 ± 5% +0.0% 0.07 ± 5% turbostat.CPU%c7
122.33 -0.3% 121.94 turbostat.CorWatt
55.50 ± 3% -1.4% 54.75 turbostat.CoreTmp
10218703 +0.1% 10226202 turbostat.IRQ
16.63 -0.1% 16.61 turbostat.Pkg%pc2
0.09 ± 4% +20.0% 0.11 ± 35% turbostat.Pkg%pc3
0.00 ±173% -100.0% 0.00 turbostat.Pkg%pc6
55.25 ± 2% -0.5% 55.00 ± 3% turbostat.PkgTmp
149.66 -0.3% 149.27 turbostat.PkgWatt
2693 +0.0% 2693 turbostat.TSC_MHz
85694 -0.1% 85638 meminfo.Active
85494 -0.1% 85438 meminfo.Active(anon)
47486 +2.7% 48745 meminfo.AnonHugePages
80249 +0.1% 80292 meminfo.AnonPages
1251765 +0.0% 1252146 meminfo.Cached
137276 ± 16% +24.3% 170618 ± 7% meminfo.CmaFree
204800 +0.0% 204800 meminfo.CmaTotal
32955004 +0.0% 32955004 meminfo.CommitLimit
232279 ± 5% +2.5% 238145 ± 4% meminfo.Committed_AS
62914560 +0.0% 62914560 meminfo.DirectMap1G
6088704 +0.0% 6091264 meminfo.DirectMap2M
162480 ± 3% -1.6% 159920 ± 7% meminfo.DirectMap4k
2048 +0.0% 2048 meminfo.Hugepagesize
9820 +0.3% 9854 meminfo.Inactive
9673 +0.3% 9707 meminfo.Inactive(anon)
8444 -0.3% 8421 meminfo.KernelStack
25647 -0.8% 25445 meminfo.Mapped
63913807 -0.0% 63912919 meminfo.MemAvailable
64247648 -0.0% 64246806 meminfo.MemFree
65910012 -0.0% 65910008 meminfo.MemTotal
813.50 ±100% +50.1% 1221 ± 57% meminfo.Mlocked
4129 +0.2% 4138 meminfo.PageTables
52109 -0.2% 52017 meminfo.SReclaimable
73219 -0.1% 73138 meminfo.SUnreclaim
15496 ± 3% -0.1% 15475 ± 3% meminfo.Shmem
125328 -0.1% 125156 meminfo.Slab
1236246 +0.0% 1236713 meminfo.Unevictable
3.436e+10 +0.0% 3.436e+10 meminfo.VmallocTotal
4.539e+12 -4.8% 4.321e+12 ± 2% perf-stat.branch-instructions
0.27 -0.0 0.27 ± 2% perf-stat.branch-miss-rate%
1.24e+10 -5.7% 1.17e+10 ± 5% perf-stat.branch-misses
9.49 ± 6% -0.6 8.85 ± 5% perf-stat.cache-miss-rate%
2.239e+08 ± 4% -3.6% 2.157e+08 ± 6% perf-stat.cache-misses
2.372e+09 ± 8% +2.7% 2.436e+09 ± 3% perf-stat.cache-references
613020 ± 8% -6.1% 575516 ± 4% perf-stat.context-switches
0.79 +0.8% 0.79 ± 2% perf-stat.cpi
1.548e+13 +0.8% 1.561e+13 perf-stat.cpu-cycles
9077 ± 4% +2.0% 9256 perf-stat.cpu-migrations
0.52 ± 33% +0.2 0.73 ± 13% perf-stat.dTLB-load-miss-rate%
2.579e+10 ± 34% +34.3% 3.463e+10 ± 15% perf-stat.dTLB-load-misses
4.923e+12 -4.3% 4.709e+12 ± 2% perf-stat.dTLB-loads
0.04 ± 93% +0.0 0.08 ± 59% perf-stat.dTLB-store-miss-rate%
1.15e+09 ± 94% +82.3% 2.096e+09 ± 61% perf-stat.dTLB-store-misses
2.641e+12 +3.9% 2.745e+12 perf-stat.dTLB-stores
83.25 ± 3% -0.1 83.18 ± 2% perf-stat.iTLB-load-miss-rate%
2.267e+09 -8.2% 2.08e+09 ± 5% perf-stat.iTLB-load-misses
4.591e+08 ± 20% -8.2% 4.216e+08 ± 15% perf-stat.iTLB-loads
1.972e+13 +0.1% 1.973e+13 ± 3% perf-stat.instructions
8700 ± 2% +9.2% 9503 ± 3% perf-stat.instructions-per-iTLB-miss
1.27 -0.7% 1.26 ± 2% perf-stat.ipc
779240 -0.1% 778803 perf-stat.minor-faults
29.07 ± 2% -1.8 27.27 ± 5% perf-stat.node-load-miss-rate%
22776677 ± 18% -12.2% 20008915 ± 18% perf-stat.node-load-misses
55397234 ± 15% -3.1% 53683432 ± 22% perf-stat.node-loads
25.12 ± 8% -3.6 21.47 ± 6% perf-stat.node-store-miss-rate%
56765946 ± 7% -21.9% 44312543 ± 7% perf-stat.node-store-misses
1.694e+08 ± 3% -4.4% 1.619e+08 perf-stat.node-stores
779244 -0.1% 778804 perf-stat.page-faults
360663 +4.5% 376850 ± 3% perf-stat.path-length
21373 -0.1% 21359 proc-vmstat.nr_active_anon
20067 +0.1% 20080 proc-vmstat.nr_anon_pages
1594854 -0.0% 1594833 proc-vmstat.nr_dirty_background_threshold
3193608 -0.0% 3193566 proc-vmstat.nr_dirty_threshold
312986 +0.0% 313080 proc-vmstat.nr_file_pages
34314 ± 16% +24.3% 42650 ± 7% proc-vmstat.nr_free_cma
16061883 -0.0% 16061673 proc-vmstat.nr_free_pages
2418 +0.4% 2426 proc-vmstat.nr_inactive_anon
22956 +0.2% 22991 proc-vmstat.nr_indirectly_reclaimable
8461 -0.3% 8434 proc-vmstat.nr_kernel_stack
6551 -0.7% 6502 proc-vmstat.nr_mapped
203.00 ±100% +50.0% 304.50 ± 57% proc-vmstat.nr_mlock
1032 +0.1% 1033 proc-vmstat.nr_page_table_pages
3870 ± 3% -0.2% 3864 ± 3% proc-vmstat.nr_shmem
13026 -0.2% 13003 proc-vmstat.nr_slab_reclaimable
18303 -0.1% 18283 proc-vmstat.nr_slab_unreclaimable
309061 +0.0% 309177 proc-vmstat.nr_unevictable
21445 -0.1% 21431 proc-vmstat.nr_zone_active_anon
2418 +0.4% 2426 proc-vmstat.nr_zone_inactive_anon
309061 +0.0% 309178 proc-vmstat.nr_zone_unevictable
1580 ± 5% -1.7% 1552 ± 11% proc-vmstat.numa_hint_faults
1430 ± 6% -0.4% 1424 ± 13% proc-vmstat.numa_hint_faults_local
668360 -0.7% 663868 proc-vmstat.numa_hit
662140 -0.7% 657596 proc-vmstat.numa_local
6220 +0.8% 6271 proc-vmstat.numa_other
1937 ± 5% -0.9% 1920 ± 9% proc-vmstat.numa_pte_updates
1372 ± 17% +1.4% 1392 ± 15% proc-vmstat.pgactivate
244240 ± 15% -9.2% 221716 ± 5% proc-vmstat.pgalloc_movable
430176 ± 9% +4.5% 449628 ± 2% proc-vmstat.pgalloc_normal
800856 -0.2% 798885 proc-vmstat.pgfault
668073 -0.6% 664270 proc-vmstat.pgfree
28523 ± 90% +111.7% 60396 ± 20% numa-meminfo.node0.Active
28373 ± 90% +112.3% 60246 ± 20% numa-meminfo.node0.Active(anon)
11863 ±173% +228.7% 38993 ± 24% numa-meminfo.node0.AnonHugePages
24629 ±102% +127.8% 56097 ± 19% numa-meminfo.node0.AnonPages
636693 ± 3% -1.4% 627504 ± 4% numa-meminfo.node0.FilePages
5260 ± 72% +30.9% 6886 ± 43% numa-meminfo.node0.Inactive
5151 ± 74% +32.9% 6848 ± 45% numa-meminfo.node0.Inactive(anon)
4602 ± 6% +6.6% 4904 ± 6% numa-meminfo.node0.KernelStack
12836 ± 17% +8.3% 13902 ± 14% numa-meminfo.node0.Mapped
32079916 -0.1% 32051702 numa-meminfo.node0.MemFree
32914928 +0.0% 32914928 numa-meminfo.node0.MemTotal
835010 +3.4% 863224 ± 3% numa-meminfo.node0.MemUsed
1806 ± 31% +39.3% 2517 ± 4% numa-meminfo.node0.PageTables
27475 ± 9% +4.8% 28789 ± 8% numa-meminfo.node0.SReclaimable
38510 ± 4% +1.8% 39196 ± 2% numa-meminfo.node0.SUnreclaim
9195 ± 35% +22.0% 11216 ± 30% numa-meminfo.node0.Shmem
65986 ± 6% +3.0% 67986 ± 3% numa-meminfo.node0.Slab
627313 ± 3% -1.8% 616100 ± 4% numa-meminfo.node0.Unevictable
57170 ± 44% -55.9% 25236 ± 46% numa-meminfo.node1.Active
57120 ± 44% -55.9% 25186 ± 47% numa-meminfo.node1.Active(anon)
35629 ± 57% -72.6% 9758 ±100% numa-meminfo.node1.AnonHugePages
55634 ± 45% -56.5% 24194 ± 45% numa-meminfo.node1.AnonPages
615250 ± 3% +1.6% 624823 ± 4% numa-meminfo.node1.FilePages
4560 ± 83% -34.9% 2967 ±102% numa-meminfo.node1.Inactive
4521 ± 85% -36.8% 2858 ±108% numa-meminfo.node1.Inactive(anon)
3847 ± 7% -8.5% 3519 ± 7% numa-meminfo.node1.KernelStack
12819 ± 18% -9.9% 11548 ± 17% numa-meminfo.node1.Mapped
32167671 +0.1% 32195068 numa-meminfo.node1.MemFree
32995084 -0.0% 32995080 numa-meminfo.node1.MemTotal
827412 -3.3% 800010 ± 3% numa-meminfo.node1.MemUsed
2322 ± 24% -30.3% 1617 ± 6% numa-meminfo.node1.PageTables
24631 ± 11% -5.7% 23225 ± 10% numa-meminfo.node1.SReclaimable
34703 ± 5% -2.2% 33937 ± 4% numa-meminfo.node1.SUnreclaim
6287 ± 52% -32.4% 4248 ± 71% numa-meminfo.node1.Shmem
59335 ± 7% -3.7% 57163 ± 5% numa-meminfo.node1.Slab
608932 ± 3% +1.9% 620612 ± 3% numa-meminfo.node1.Unevictable
81686 +0.1% 81755 slabinfo.Acpi-Operand.active_objs
81690 +0.1% 81760 slabinfo.Acpi-Operand.num_objs
1045 ± 8% -15.9% 879.75 ± 16% slabinfo.Acpi-State.active_objs
1045 ± 8% -15.9% 879.75 ± 16% slabinfo.Acpi-State.num_objs
8933 ± 3% -0.1% 8923 ± 5% slabinfo.anon_vma.active_objs
8933 ± 3% -0.1% 8923 ± 5% slabinfo.anon_vma.num_objs
2677 ± 12% -28.6% 1912 ± 20% slabinfo.avtab_node.active_objs
2677 ± 12% -28.6% 1912 ± 20% slabinfo.avtab_node.num_objs
57779 +0.2% 57919 slabinfo.dentry.active_objs
58203 +0.4% 58456 slabinfo.dentry.num_objs
1359 ± 18% +2.6% 1394 ± 35% slabinfo.dmaengine-unmap-16.active_objs
1359 ± 18% +2.6% 1394 ± 35% slabinfo.dmaengine-unmap-16.num_objs
674.50 ± 5% -8.1% 620.00 ± 5% slabinfo.file_lock_cache.active_objs
674.50 ± 5% -8.1% 620.00 ± 5% slabinfo.file_lock_cache.num_objs
8733 ± 2% +1.3% 8847 ± 3% slabinfo.filp.active_objs
9095 ± 3% +1.1% 9191 ± 3% slabinfo.filp.num_objs
45573 -0.3% 45451 slabinfo.inode_cache.active_objs
45767 -0.3% 45637 slabinfo.inode_cache.num_objs
51367 +0.1% 51430 slabinfo.kernfs_node_cache.active_objs
51367 +0.1% 51430 slabinfo.kernfs_node_cache.num_objs
14036 +1.4% 14231 slabinfo.kmalloc-16.active_objs
14144 +1.4% 14336 slabinfo.kmalloc-16.num_objs
21249 ± 2% +4.2% 22142 ± 7% slabinfo.kmalloc-32.active_objs
21306 ± 2% +4.1% 22169 ± 7% slabinfo.kmalloc-32.num_objs
5500 ± 5% +2.9% 5657 ± 6% slabinfo.kmalloc-512.active_objs
5619 ± 5% +3.3% 5802 ± 7% slabinfo.kmalloc-512.num_objs
23965 +0.6% 24097 slabinfo.kmalloc-64.active_objs
24104 +0.8% 24299 slabinfo.kmalloc-64.num_objs
6524 ± 3% +0.8% 6576 ± 4% slabinfo.kmalloc-96.active_objs
6538 ± 3% +0.9% 6598 ± 4% slabinfo.kmalloc-96.num_objs
21191 ± 4% +0.1% 21213 ± 5% slabinfo.pid.active_objs
21191 ± 4% +0.2% 21226 ± 5% slabinfo.pid.num_objs
13042 -0.1% 13028 slabinfo.radix_tree_node.active_objs
13042 -0.1% 13028 slabinfo.radix_tree_node.num_objs
632.00 ± 15% -13.9% 544.00 ± 18% slabinfo.scsi_sense_cache.active_objs
632.00 ± 15% -13.9% 544.00 ± 18% slabinfo.scsi_sense_cache.num_objs
9062 -0.7% 8995 slabinfo.selinux_file_security.active_objs
9062 -0.7% 8995 slabinfo.selinux_file_security.num_objs
4883 +2.6% 5009 slabinfo.shmem_inode_cache.active_objs
4911 +2.5% 5033 slabinfo.shmem_inode_cache.num_objs
17056 ± 3% -5.4% 16136 ± 12% slabinfo.vm_area_struct.active_objs
17089 ± 3% -5.5% 16147 ± 12% slabinfo.vm_area_struct.num_objs
7096 ± 90% +112.3% 15065 ± 20% numa-vmstat.node0.nr_active_anon
6162 ±102% +127.7% 14031 ± 19% numa-vmstat.node0.nr_anon_pages
159173 ± 3% -1.4% 156875 ± 4% numa-vmstat.node0.nr_file_pages
8019960 -0.1% 8012903 numa-vmstat.node0.nr_free_pages
1290 ± 74% +32.9% 1714 ± 44% numa-vmstat.node0.nr_inactive_anon
10737 ± 21% -6.1% 10082 ± 10% numa-vmstat.node0.nr_indirectly_reclaimable
4604 ± 6% +6.7% 4913 ± 6% numa-vmstat.node0.nr_kernel_stack
3281 ± 17% +7.1% 3515 ± 14% numa-vmstat.node0.nr_mapped
103.00 ±101% +23.8% 127.50 ± 57% numa-vmstat.node0.nr_mlock
451.50 ± 32% +39.3% 629.00 ± 4% numa-vmstat.node0.nr_page_table_pages
2299 ± 35% +22.0% 2803 ± 30% numa-vmstat.node0.nr_shmem
6868 ± 9% +4.8% 7197 ± 8% numa-vmstat.node0.nr_slab_reclaimable
9627 ± 4% +1.8% 9798 ± 2% numa-vmstat.node0.nr_slab_unreclaimable
156827 ± 3% -1.8% 154024 ± 4% numa-vmstat.node0.nr_unevictable
7096 ± 90% +112.3% 15065 ± 20% numa-vmstat.node0.nr_zone_active_anon
1290 ± 74% +32.9% 1714 ± 44% numa-vmstat.node0.nr_zone_inactive_anon
156827 ± 3% -1.8% 154024 ± 4% numa-vmstat.node0.nr_zone_unevictable
455398 ± 10% +4.3% 474897 ± 2% numa-vmstat.node0.numa_hit
166575 +0.1% 166664 numa-vmstat.node0.numa_interleave
455125 ± 10% +4.1% 473733 ± 2% numa-vmstat.node0.numa_local
272.00 ± 51% +327.7% 1163 ±147% numa-vmstat.node0.numa_other
14285 ± 44% -55.9% 6302 ± 46% numa-vmstat.node1.nr_active_anon
13915 ± 44% -56.5% 6058 ± 45% numa-vmstat.node1.nr_anon_pages
153811 ± 3% +1.6% 156204 ± 4% numa-vmstat.node1.nr_file_pages
34304 ± 16% +24.3% 42643 ± 7% numa-vmstat.node1.nr_free_cma
8041888 +0.1% 8048742 numa-vmstat.node1.nr_free_pages
1129 ± 85% -36.8% 714.25 ±108% numa-vmstat.node1.nr_inactive_anon
12218 ± 19% +5.6% 12908 ± 8% numa-vmstat.node1.nr_indirectly_reclaimable
3854 ± 7% -8.6% 3522 ± 7% numa-vmstat.node1.nr_kernel_stack
3273 ± 18% -8.5% 2995 ± 17% numa-vmstat.node1.nr_mapped
99.25 ±102% +77.3% 176.00 ± 57% numa-vmstat.node1.nr_mlock
580.00 ± 24% -30.3% 404.00 ± 6% numa-vmstat.node1.nr_page_table_pages
1569 ± 52% -32.5% 1060 ± 71% numa-vmstat.node1.nr_shmem
6157 ± 11% -5.7% 5806 ± 10% numa-vmstat.node1.nr_slab_reclaimable
8675 ± 5% -2.2% 8483 ± 4% numa-vmstat.node1.nr_slab_unreclaimable
152232 ± 3% +1.9% 155152 ± 3% numa-vmstat.node1.nr_unevictable
14352 ± 43% -55.7% 6361 ± 46% numa-vmstat.node1.nr_zone_active_anon
1129 ± 85% -36.8% 714.25 ±108% numa-vmstat.node1.nr_zone_inactive_anon
152232 ± 3% +1.9% 155152 ± 3% numa-vmstat.node1.nr_zone_unevictable
482798 ± 9% -4.7% 460308 ± 2% numa-vmstat.node1.numa_hit
166756 +0.0% 166767 numa-vmstat.node1.numa_interleave
308350 ± 15% -7.0% 286720 ± 4% numa-vmstat.node1.numa_local
174448 -0.5% 173587 numa-vmstat.node1.numa_other
5.50 ±173% +419.6% 28.60 ± 75% sched_debug.cfs_rq:/.MIN_vruntime.avg
176.16 ±173% +362.7% 815.14 ± 73% sched_debug.cfs_rq:/.MIN_vruntime.max
0.00 +0.0% 0.00 sched_debug.cfs_rq:/.MIN_vruntime.min
30.65 ±173% +387.1% 149.28 ± 73% sched_debug.cfs_rq:/.MIN_vruntime.stddev
50118 +0.9% 50550 sched_debug.cfs_rq:/.exec_clock.avg
104426 ± 3% +2.1% 106670 ± 3% sched_debug.cfs_rq:/.exec_clock.max
14421 ± 41% +26.9% 18301 ± 32% sched_debug.cfs_rq:/.exec_clock.min
20989 ± 3% -4.5% 20049 ± 3% sched_debug.cfs_rq:/.exec_clock.stddev
45669 ± 16% +3.7% 47381 ± 16% sched_debug.cfs_rq:/.load.avg
194713 ± 35% +18.5% 230674 ± 35% sched_debug.cfs_rq:/.load.max
5121 +0.4% 5143 sched_debug.cfs_rq:/.load.min
59510 ± 20% +11.6% 66397 ± 25% sched_debug.cfs_rq:/.load.stddev
64.65 ± 10% -15.7% 54.48 ± 11% sched_debug.cfs_rq:/.load_avg.avg
371.67 ± 16% -18.9% 301.38 ± 3% sched_debug.cfs_rq:/.load_avg.max
5.29 ± 13% +17.3% 6.21 ± 11% sched_debug.cfs_rq:/.load_avg.min
105.29 ± 13% -16.6% 87.81 ± 9% sched_debug.cfs_rq:/.load_avg.stddev
5.52 ±173% +418.4% 28.62 ± 75% sched_debug.cfs_rq:/.max_vruntime.avg
176.66 ±173% +361.7% 815.64 ± 73% sched_debug.cfs_rq:/.max_vruntime.max
0.00 +0.0% 0.00 sched_debug.cfs_rq:/.max_vruntime.min
30.74 ±173% +386.0% 149.37 ± 73% sched_debug.cfs_rq:/.max_vruntime.stddev
972860 +0.3% 975783 sched_debug.cfs_rq:/.min_vruntime.avg
1402727 ± 3% +0.5% 1409368 ± 4% sched_debug.cfs_rq:/.min_vruntime.max
381344 ± 26% +17.5% 448106 ± 24% sched_debug.cfs_rq:/.min_vruntime.min
268465 ± 7% -6.4% 251304 ± 6% sched_debug.cfs_rq:/.min_vruntime.stddev
0.58 ± 6% +0.5% 0.59 ± 5% sched_debug.cfs_rq:/.nr_running.avg
1.04 ± 6% +0.0% 1.04 ± 6% sched_debug.cfs_rq:/.nr_running.max
0.17 +0.0% 0.17 sched_debug.cfs_rq:/.nr_running.min
0.37 ± 3% +2.0% 0.38 ± 3% sched_debug.cfs_rq:/.nr_running.stddev
0.90 ± 3% +2.4% 0.92 ± 3% sched_debug.cfs_rq:/.nr_spread_over.avg
2.42 ± 17% +8.6% 2.62 ± 11% sched_debug.cfs_rq:/.nr_spread_over.max
0.83 +0.0% 0.83 sched_debug.cfs_rq:/.nr_spread_over.min
0.30 ± 30% +15.8% 0.35 ± 21% sched_debug.cfs_rq:/.nr_spread_over.stddev
12.32 ± 46% -78.4% 2.67 ± 99% sched_debug.cfs_rq:/.removed.load_avg.avg
212.46 ± 34% -59.8% 85.33 ± 99% sched_debug.cfs_rq:/.removed.load_avg.max
48.34 ± 38% -69.3% 14.85 ±100% sched_debug.cfs_rq:/.removed.load_avg.stddev
569.87 ± 46% -78.4% 123.37 ±100% sched_debug.cfs_rq:/.removed.runnable_sum.avg
9834 ± 34% -59.9% 3947 ±100% sched_debug.cfs_rq:/.removed.runnable_sum.max
2235 ± 38% -69.3% 686.90 ±100% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
3.24 ± 59% -67.6% 1.05 ±100% sched_debug.cfs_rq:/.removed.util_avg.avg
49.21 ± 45% -31.6% 33.67 ±100% sched_debug.cfs_rq:/.removed.util_avg.max
11.85 ± 54% -50.6% 5.86 ±100% sched_debug.cfs_rq:/.removed.util_avg.stddev
37.26 ± 16% -0.7% 36.99 ± 10% sched_debug.cfs_rq:/.runnable_load_avg.avg
141.88 ± 4% -1.2% 140.17 ± 2% sched_debug.cfs_rq:/.runnable_load_avg.max
4.92 +0.8% 4.96 sched_debug.cfs_rq:/.runnable_load_avg.min
48.04 ± 10% +0.3% 48.16 ± 7% sched_debug.cfs_rq:/.runnable_load_avg.stddev
44637 ± 16% +4.5% 46667 ± 16% sched_debug.cfs_rq:/.runnable_weight.avg
185937 ± 39% +21.3% 225541 ± 35% sched_debug.cfs_rq:/.runnable_weight.max
5121 +0.4% 5143 sched_debug.cfs_rq:/.runnable_weight.min
57866 ± 22% +12.8% 65291 ± 25% sched_debug.cfs_rq:/.runnable_weight.stddev
0.02 ±173% +0.0% 0.02 ±173% sched_debug.cfs_rq:/.spread.avg
0.50 ±173% +0.0% 0.50 ±173% sched_debug.cfs_rq:/.spread.max
0.09 ±173% +0.0% 0.09 ±173% sched_debug.cfs_rq:/.spread.stddev
-81762 -36.9% -51577 sched_debug.cfs_rq:/.spread0.avg
348095 ± 16% +9.7% 382004 ± 29% sched_debug.cfs_rq:/.spread0.max
-673288 -14.0% -579250 sched_debug.cfs_rq:/.spread0.min
268470 ± 7% -6.4% 251311 ± 6% sched_debug.cfs_rq:/.spread0.stddev
609.37 ± 3% -1.7% 598.80 ± 2% sched_debug.cfs_rq:/.util_avg.avg
1170 ± 5% +0.3% 1173 ± 4% sched_debug.cfs_rq:/.util_avg.max
180.54 ± 8% +11.9% 202.08 ± 9% sched_debug.cfs_rq:/.util_avg.min
343.00 ± 4% -0.2% 342.36 ± 2% sched_debug.cfs_rq:/.util_avg.stddev
308.67 ± 19% +4.9% 323.89 ± 8% sched_debug.cfs_rq:/.util_est_enqueued.avg
740.79 ± 15% -3.3% 716.25 sched_debug.cfs_rq:/.util_est_enqueued.max
51.42 ± 8% -10.4% 46.08 ± 14% sched_debug.cfs_rq:/.util_est_enqueued.min
234.33 ± 10% +18.1% 276.83 sched_debug.cfs_rq:/.util_est_enqueued.stddev
833319 ± 2% -2.2% 815280 ± 4% sched_debug.cpu.avg_idle.avg
984801 +0.9% 993248 sched_debug.cpu.avg_idle.max
273123 ± 68% -1.6% 268671 ± 37% sched_debug.cpu.avg_idle.min
158825 ± 20% +14.3% 181491 ± 20% sched_debug.cpu.avg_idle.stddev
176638 -0.0% 176552 sched_debug.cpu.clock.avg
176640 -0.0% 176555 sched_debug.cpu.clock.max
176634 -0.0% 176548 sched_debug.cpu.clock.min
1.50 ± 12% +5.5% 1.58 ± 7% sched_debug.cpu.clock.stddev
176638 -0.0% 176552 sched_debug.cpu.clock_task.avg
176640 -0.0% 176555 sched_debug.cpu.clock_task.max
176634 -0.0% 176548 sched_debug.cpu.clock_task.min
1.50 ± 12% +5.5% 1.58 ± 7% sched_debug.cpu.clock_task.stddev
27.91 -0.5% 27.78 sched_debug.cpu.cpu_load[0].avg
144.33 ± 3% -0.3% 143.88 ± 2% sched_debug.cpu.cpu_load[0].max
4.92 +0.8% 4.96 sched_debug.cpu.cpu_load[0].min
40.10 -0.9% 39.72 sched_debug.cpu.cpu_load[0].stddev
28.32 +3.5% 29.30 ± 5% sched_debug.cpu.cpu_load[1].avg
173.42 ± 6% +10.0% 190.75 ± 7% sched_debug.cpu.cpu_load[1].max
4.92 +0.8% 4.96 sched_debug.cpu.cpu_load[1].min
41.29 ± 4% +7.0% 44.19 ± 8% sched_debug.cpu.cpu_load[1].stddev
28.74 +2.5% 29.46 ± 3% sched_debug.cpu.cpu_load[2].avg
200.33 ± 2% +6.1% 212.54 ± 5% sched_debug.cpu.cpu_load[2].max
4.96 ± 2% +1.7% 5.04 sched_debug.cpu.cpu_load[2].min
44.21 ± 2% +5.2% 46.52 ± 6% sched_debug.cpu.cpu_load[2].stddev
28.82 +1.2% 29.15 ± 2% sched_debug.cpu.cpu_load[3].avg
214.92 +2.6% 220.46 ± 3% sched_debug.cpu.cpu_load[3].max
5.21 ± 7% +3.2% 5.38 ± 2% sched_debug.cpu.cpu_load[3].min
46.48 +2.0% 47.42 ± 3% sched_debug.cpu.cpu_load[3].stddev
28.47 +0.2% 28.54 sched_debug.cpu.cpu_load[4].avg
219.29 +1.1% 221.79 sched_debug.cpu.cpu_load[4].max
5.04 ± 4% -1.7% 4.96 sched_debug.cpu.cpu_load[4].min
47.54 +0.5% 47.78 sched_debug.cpu.cpu_load[4].stddev
3123 +0.8% 3147 sched_debug.cpu.curr->pid.avg
5205 -0.1% 5202 sched_debug.cpu.curr->pid.max
1116 ± 46% +27.0% 1417 sched_debug.cpu.curr->pid.min
1478 ± 6% -4.2% 1416 sched_debug.cpu.curr->pid.stddev
34187 ± 6% -1.9% 33530 ± 7% sched_debug.cpu.load.avg
194718 ± 35% -1.4% 191933 ± 40% sched_debug.cpu.load.max
5121 +0.4% 5143 sched_debug.cpu.load.min
51887 ± 22% -2.3% 50686 ± 25% sched_debug.cpu.load.stddev
500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.avg
500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.max
500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.min
4294 -0.0% 4294 sched_debug.cpu.next_balance.avg
4294 -0.0% 4294 sched_debug.cpu.next_balance.max
4294 -0.0% 4294 sched_debug.cpu.next_balance.min
0.00 ± 7% +20.5% 0.00 ± 31% sched_debug.cpu.next_balance.stddev
156219 -0.0% 156192 sched_debug.cpu.nr_load_updates.avg
160487 +0.0% 160511 sched_debug.cpu.nr_load_updates.max
154633 +0.3% 155075 sched_debug.cpu.nr_load_updates.min
1015 ± 6% -3.1% 983.74 ± 3% sched_debug.cpu.nr_load_updates.stddev
0.55 +0.7% 0.55 ± 2% sched_debug.cpu.nr_running.avg
1.38 ± 5% +6.1% 1.46 ± 9% sched_debug.cpu.nr_running.max
0.17 +0.0% 0.17 sched_debug.cpu.nr_running.min
0.41 ± 4% -1.7% 0.40 ± 4% sched_debug.cpu.nr_running.stddev
9354 ± 5% -2.7% 9103 ± 2% sched_debug.cpu.nr_switches.avg
35778 ± 12% +9.1% 39036 ± 9% sched_debug.cpu.nr_switches.max
1594 ± 19% +2.4% 1631 ± 27% sched_debug.cpu.nr_switches.min
7988 ± 15% +6.0% 8464 ± 12% sched_debug.cpu.nr_switches.stddev
0.01 ± 74% -50.0% 0.00 ±110% sched_debug.cpu.nr_uninterruptible.avg
9.08 ± 22% -20.2% 7.25 ± 8% sched_debug.cpu.nr_uninterruptible.max
-8.75 -2.4% -8.54 sched_debug.cpu.nr_uninterruptible.min
4.21 ± 14% -18.5% 3.44 ± 14% sched_debug.cpu.nr_uninterruptible.stddev
10437 ± 4% +4.0% 10856 ± 4% sched_debug.cpu.sched_count.avg
94011 ± 8% +21.7% 114419 ± 16% sched_debug.cpu.sched_count.max
976.42 ± 48% -10.3% 876.04 ± 36% sched_debug.cpu.sched_count.min
17053 ± 6% +19.5% 20384 ± 14% sched_debug.cpu.sched_count.stddev
3016 ± 4% +0.2% 3022 ± 3% sched_debug.cpu.sched_goidle.avg
10998 ± 13% +28.0% 14082 ± 10% sched_debug.cpu.sched_goidle.max
218.62 ± 36% +46.4% 319.96 ± 48% sched_debug.cpu.sched_goidle.min
2694 ± 16% +18.4% 3191 ± 11% sched_debug.cpu.sched_goidle.stddev
3582 ± 7% -3.3% 3462 ± 3% sched_debug.cpu.ttwu_count.avg
17772 ± 22% -10.2% 15960 ± 11% sched_debug.cpu.ttwu_count.max
610.04 ± 27% +11.1% 677.54 ± 12% sched_debug.cpu.ttwu_count.min
3572 ± 20% -3.3% 3454 ± 12% sched_debug.cpu.ttwu_count.stddev
1571 ± 8% -10.0% 1414 ± 5% sched_debug.cpu.ttwu_local.avg
12005 ± 17% -8.2% 11015 ± 16% sched_debug.cpu.ttwu_local.max
183.12 ± 21% -2.7% 178.12 ± 13% sched_debug.cpu.ttwu_local.min
2394 ± 20% -10.3% 2147 ± 10% sched_debug.cpu.ttwu_local.stddev
176635 -0.0% 176549 sched_debug.cpu_clk
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.avg
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.max
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.min
4.295e+09 -0.0% 4.295e+09 sched_debug.jiffies
176635 -0.0% 176549 sched_debug.ktime
0.00 ± 99% +100.0% 0.01 sched_debug.rt_rq:/.rt_nr_migratory.avg
0.08 ± 99% +100.0% 0.17 sched_debug.rt_rq:/.rt_nr_migratory.max
0.01 ±100% +100.0% 0.03 sched_debug.rt_rq:/.rt_nr_migratory.stddev
0.00 ± 99% +100.0% 0.01 sched_debug.rt_rq:/.rt_nr_running.avg
0.08 ± 99% +100.0% 0.17 sched_debug.rt_rq:/.rt_nr_running.max
0.01 ±100% +100.0% 0.03 sched_debug.rt_rq:/.rt_nr_running.stddev
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.avg
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.max
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.min
0.05 ± 16% +2.4% 0.05 ± 14% sched_debug.rt_rq:/.rt_time.avg
1.51 ± 17% +2.4% 1.55 ± 14% sched_debug.rt_rq:/.rt_time.max
0.26 ± 17% +2.4% 0.27 ± 14% sched_debug.rt_rq:/.rt_time.stddev
176956 -0.0% 176870 sched_debug.sched_clk
1.00 +0.0% 1.00 sched_debug.sched_clock_stable()
4118331 +0.0% 4118331 sched_debug.sysctl_sched.sysctl_sched_features
24.00 +0.0% 24.00 sched_debug.sysctl_sched.sysctl_sched_latency
3.00 +0.0% 3.00 sched_debug.sysctl_sched.sysctl_sched_min_granularity
1.00 +0.0% 1.00 sched_debug.sysctl_sched.sysctl_sched_tunable_scaling
4.00 +0.0% 4.00 sched_debug.sysctl_sched.sysctl_sched_wakeup_granularity
69.42 ± 6% -3.8 65.59 ± 13% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
69.19 ± 6% -3.8 65.39 ± 13% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
28.56 ± 6% -3.8 24.80 ± 13% perf-profile.calltrace.cycles-pp.__fget_light.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
68.28 ± 6% -3.7 64.61 ± 13% perf-profile.calltrace.cycles-pp.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
68.05 ± 6% -3.7 64.39 ± 13% perf-profile.calltrace.cycles-pp.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
26.46 ± 7% -3.5 22.97 ± 13% perf-profile.calltrace.cycles-pp.__fget.__fget_light.do_sys_poll.__x64_sys_poll.do_syscall_64
4.29 ± 59% -0.7 3.59 ± 71% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_kernel.secondary_startup_64
4.30 ± 59% -0.7 3.60 ± 70% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_kernel.secondary_startup_64
4.30 ± 59% -0.7 3.60 ± 70% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_kernel.secondary_startup_64
4.30 ± 59% -0.7 3.60 ± 70% perf-profile.calltrace.cycles-pp.start_kernel.secondary_startup_64
4.18 ± 59% -0.7 3.53 ± 70% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_kernel
2.64 ± 4% -0.3 2.38 ± 12% perf-profile.calltrace.cycles-pp._copy_from_user.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.07 ± 2% -0.2 1.86 ± 16% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
2.27 ± 3% -0.2 2.08 ± 13% perf-profile.calltrace.cycles-pp.copy_user_generic_string._copy_from_user.do_sys_poll.__x64_sys_poll.do_syscall_64
1.87 ± 7% -0.1 1.75 ± 12% perf-profile.calltrace.cycles-pp.fput
0.96 ± 10% -0.1 0.86 ± 12% perf-profile.calltrace.cycles-pp.__fdget.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.74 ± 9% -0.1 0.64 ± 13% perf-profile.calltrace.cycles-pp.kfree.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.01 ± 12% -0.1 0.94 ± 10% perf-profile.calltrace.cycles-pp.__kmalloc.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.95 ± 7% -0.1 0.89 ± 14% perf-profile.calltrace.cycles-pp.__fget_light.__fget_light.do_sys_poll.__x64_sys_poll.do_syscall_64
0.48 ± 59% +0.1 0.59 ± 62% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
0.00 +0.2 0.15 ±173% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
0.15 ±173% +0.2 0.32 ±102% perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
1.05 ± 30% +0.2 1.28 ± 31% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
1.06 ± 30% +0.2 1.29 ± 31% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
0.00 +0.3 0.31 ±100% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
21.09 ± 6% +1.2 22.27 ± 14% perf-profile.calltrace.cycles-pp.fput.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
25.20 ± 17% +4.2 29.43 ± 31% perf-profile.calltrace.cycles-pp.secondary_startup_64
19.31 ± 22% +4.5 23.86 ± 41% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
20.51 ± 20% +4.8 25.33 ± 40% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
20.89 ± 20% +4.9 25.83 ± 40% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
20.89 ± 20% +4.9 25.83 ± 40% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
20.89 ± 20% +4.9 25.83 ± 40% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
29.52 ± 6% -3.8 25.69 ± 13% perf-profile.children.cycles-pp.__fget_light
69.48 ± 6% -3.8 65.67 ± 13% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
69.27 ± 6% -3.8 65.49 ± 13% perf-profile.children.cycles-pp.do_syscall_64
68.38 ± 6% -3.7 64.71 ± 13% perf-profile.children.cycles-pp.__x64_sys_poll
68.06 ± 6% -3.7 64.39 ± 13% perf-profile.children.cycles-pp.do_sys_poll
26.46 ± 7% -3.5 22.97 ± 13% perf-profile.children.cycles-pp.__fget
4.30 ± 59% -0.7 3.60 ± 70% perf-profile.children.cycles-pp.start_kernel
2.79 ± 3% -0.3 2.48 ± 12% perf-profile.children.cycles-pp._copy_from_user
2.07 ± 2% -0.2 1.86 ± 16% perf-profile.children.cycles-pp.syscall_return_via_sysret
2.29 ± 3% -0.2 2.09 ± 13% perf-profile.children.cycles-pp.copy_user_generic_string
0.96 ± 10% -0.1 0.86 ± 12% perf-profile.children.cycles-pp.__fdget
1.09 ± 11% -0.1 0.99 ± 10% perf-profile.children.cycles-pp.__kmalloc
0.78 ± 8% -0.1 0.69 ± 11% perf-profile.children.cycles-pp.kfree
0.19 ± 60% -0.0 0.15 ± 69% perf-profile.children.cycles-pp.memcpy
0.18 ± 60% -0.0 0.14 ± 69% perf-profile.children.cycles-pp.fb_flashcursor
0.18 ± 60% -0.0 0.14 ± 69% perf-profile.children.cycles-pp.bit_cursor
0.18 ± 60% -0.0 0.14 ± 69% perf-profile.children.cycles-pp.soft_cursor
0.18 ± 60% -0.0 0.14 ± 69% perf-profile.children.cycles-pp.mga_dirty_update
0.29 ± 5% -0.0 0.24 ± 23% perf-profile.children.cycles-pp.kmalloc_slab
0.37 ± 15% -0.0 0.33 ± 16% perf-profile.children.cycles-pp.update_process_times
0.05 ±100% -0.0 0.01 ±173% perf-profile.children.cycles-pp.wake_up_klogd_work_func
0.39 ± 14% -0.0 0.36 ± 16% perf-profile.children.cycles-pp.tick_sched_handle
0.18 ± 60% -0.0 0.15 ± 62% perf-profile.children.cycles-pp.worker_thread
0.18 ± 60% -0.0 0.15 ± 62% perf-profile.children.cycles-pp.process_one_work
0.18 ± 60% -0.0 0.16 ± 62% perf-profile.children.cycles-pp.ret_from_fork
0.18 ± 60% -0.0 0.16 ± 62% perf-profile.children.cycles-pp.kthread
0.07 ± 19% -0.0 0.05 ± 60% perf-profile.children.cycles-pp.task_tick_fair
0.13 ± 52% -0.0 0.11 ± 24% perf-profile.children.cycles-pp.load_balance
0.18 ± 41% -0.0 0.16 ± 44% perf-profile.children.cycles-pp.rebalance_domains
0.41 ± 12% -0.0 0.40 ± 15% perf-profile.children.cycles-pp.tick_sched_timer
0.13 ± 13% -0.0 0.12 ± 9% perf-profile.children.cycles-pp.entry_SYSCALL_64_stage2
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.update_rq_clock
0.03 ±100% -0.0 0.01 ±173% perf-profile.children.cycles-pp.find_next_bit
0.03 ±100% -0.0 0.01 ±173% perf-profile.children.cycles-pp.__remove_hrtimer
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.update_load_avg
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.poll_freewait
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.__hrtimer_next_event_base
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.cpu_load_update
0.09 ± 77% -0.0 0.08 ± 29% perf-profile.children.cycles-pp.find_busiest_group
0.23 ± 18% -0.0 0.22 ± 15% perf-profile.children.cycles-pp.__might_fault
0.12 ± 3% -0.0 0.12 ± 13% perf-profile.children.cycles-pp.___might_sleep
0.07 ± 19% -0.0 0.07 ± 17% perf-profile.children.cycles-pp.__indirect_thunk_start
0.01 ±173% -0.0 0.01 ±173% perf-profile.children.cycles-pp.timerqueue_del
0.20 ± 16% +0.0 0.20 ± 11% perf-profile.children.cycles-pp.scheduler_tick
0.01 ±173% +0.0 0.01 ±173% perf-profile.children.cycles-pp._cond_resched
0.03 ±102% +0.0 0.03 ±105% perf-profile.children.cycles-pp._raw_spin_lock
0.08 ± 23% +0.0 0.08 ± 37% perf-profile.children.cycles-pp.native_irq_return_iret
0.11 ± 17% +0.0 0.12 ± 15% perf-profile.children.cycles-pp.__might_sleep
0.03 ±102% +0.0 0.04 ±103% perf-profile.children.cycles-pp.sched_clock
0.03 ±100% +0.0 0.04 ±107% perf-profile.children.cycles-pp.native_sched_clock
0.01 ±173% +0.0 0.02 ±173% perf-profile.children.cycles-pp._raw_spin_trylock
0.04 ± 60% +0.0 0.06 ± 64% perf-profile.children.cycles-pp.run_rebalance_domains
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.intel_pmu_disable_all
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.rcu_idle_exit
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.call_function_interrupt
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.smp_call_function_interrupt
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.try_to_wake_up
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.call_timer_fn
0.06 ± 58% +0.0 0.07 ± 24% perf-profile.children.cycles-pp.__next_timer_interrupt
0.05 ± 60% +0.0 0.06 ± 60% perf-profile.children.cycles-pp.irq_enter
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.__vfs_read
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.vfs_read
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.tick_nohz_irq_exit
0.01 ±173% +0.0 0.03 ±100% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.15 ± 50% +0.0 0.17 ± 47% perf-profile.children.cycles-pp.delay_tsc
0.10 ± 21% +0.0 0.12 ± 24% perf-profile.children.cycles-pp.ktime_get
0.00 +0.0 0.02 ±173% perf-profile.children.cycles-pp.ksys_read
0.10 ± 22% +0.0 0.11 ± 24% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.01 ±173% +0.0 0.04 ±102% perf-profile.children.cycles-pp.tick_irq_enter
0.04 ± 57% +0.0 0.06 ± 28% perf-profile.children.cycles-pp.read_tsc
0.07 ± 61% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.run_timer_softirq
0.03 ±100% +0.0 0.06 ± 66% perf-profile.children.cycles-pp.sched_clock_cpu
0.03 ±100% +0.0 0.06 ± 60% perf-profile.children.cycles-pp.update_blocked_averages
0.03 ±100% +0.0 0.06 ± 28% perf-profile.children.cycles-pp.rcu_check_callbacks
0.63 ± 15% +0.0 0.66 ± 20% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.13 ± 27% +0.0 0.16 ± 23% perf-profile.children.cycles-pp.tick_nohz_next_event
0.01 ±173% +0.0 0.04 ± 59% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.10 ± 30% +0.0 0.13 ± 26% perf-profile.children.cycles-pp.clockevents_program_event
0.15 ± 28% +0.0 0.18 ± 24% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.13 ± 18% +0.0 0.17 ± 31% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.38 ± 28% +0.0 0.41 ± 31% perf-profile.children.cycles-pp.io_serial_in
0.56 ± 31% +0.0 0.60 ± 22% perf-profile.children.cycles-pp.irq_work_run_list
0.06 ± 58% +0.0 0.10 ± 30% perf-profile.children.cycles-pp.lapic_next_deadline
0.09 ± 28% +0.0 0.14 ± 34% perf-profile.children.cycles-pp.native_write_msr
0.39 ± 32% +0.0 0.43 ± 31% perf-profile.children.cycles-pp.__softirqentry_text_start
0.56 ± 31% +0.1 0.61 ± 19% perf-profile.children.cycles-pp.console_unlock
0.53 ± 31% +0.1 0.58 ± 20% perf-profile.children.cycles-pp.uart_console_write
0.54 ± 31% +0.1 0.59 ± 19% perf-profile.children.cycles-pp.serial8250_console_write
0.53 ± 30% +0.1 0.58 ± 19% perf-profile.children.cycles-pp.wait_for_xmitr
0.52 ± 31% +0.1 0.58 ± 19% perf-profile.children.cycles-pp.serial8250_console_putchar
0.83 ± 15% +0.1 0.89 ± 22% perf-profile.children.cycles-pp.hrtimer_interrupt
0.45 ± 33% +0.1 0.52 ± 32% perf-profile.children.cycles-pp.irq_exit
0.51 ± 26% +0.1 0.58 ± 22% perf-profile.children.cycles-pp.irq_work_interrupt
0.51 ± 26% +0.1 0.58 ± 22% perf-profile.children.cycles-pp.smp_irq_work_interrupt
0.51 ± 26% +0.1 0.58 ± 22% perf-profile.children.cycles-pp.irq_work_run
0.51 ± 26% +0.1 0.58 ± 22% perf-profile.children.cycles-pp.printk
0.51 ± 26% +0.1 0.58 ± 22% perf-profile.children.cycles-pp.vprintk_emit
0.28 ± 35% +0.1 0.35 ± 24% perf-profile.children.cycles-pp.menu_select
1.39 ± 21% +0.1 1.53 ± 25% perf-profile.children.cycles-pp.apic_timer_interrupt
1.38 ± 21% +0.1 1.52 ± 25% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
22.96 ± 6% +1.1 24.02 ± 14% perf-profile.children.cycles-pp.fput
23.50 ± 20% +3.9 27.40 ± 32% perf-profile.children.cycles-pp.intel_idle
24.83 ± 18% +4.1 28.97 ± 31% perf-profile.children.cycles-pp.cpuidle_enter_state
25.20 ± 17% +4.2 29.43 ± 31% perf-profile.children.cycles-pp.secondary_startup_64
25.20 ± 17% +4.2 29.43 ± 31% perf-profile.children.cycles-pp.cpu_startup_entry
25.20 ± 18% +4.2 29.45 ± 31% perf-profile.children.cycles-pp.do_idle
20.89 ± 20% +4.9 25.83 ± 40% perf-profile.children.cycles-pp.start_secondary
26.30 ± 7% -3.5 22.78 ± 13% perf-profile.self.cycles-pp.__fget
11.99 ± 5% -0.5 11.46 ± 14% perf-profile.self.cycles-pp.do_sys_poll
2.96 ± 5% -0.3 2.62 ± 15% perf-profile.self.cycles-pp.__fget_light
2.07 ± 2% -0.2 1.86 ± 16% perf-profile.self.cycles-pp.syscall_return_via_sysret
2.27 ± 3% -0.2 2.08 ± 13% perf-profile.self.cycles-pp.copy_user_generic_string
0.89 ± 7% -0.1 0.76 ± 10% perf-profile.self.cycles-pp.do_syscall_64
0.96 ± 10% -0.1 0.85 ± 11% perf-profile.self.cycles-pp.__fdget
0.28 ± 9% -0.1 0.18 ± 12% perf-profile.self.cycles-pp._copy_from_user
0.78 ± 8% -0.1 0.69 ± 11% perf-profile.self.cycles-pp.kfree
0.72 ± 16% -0.1 0.67 ± 5% perf-profile.self.cycles-pp.__kmalloc
0.19 ± 60% -0.0 0.15 ± 69% perf-profile.self.cycles-pp.memcpy
0.29 ± 5% -0.0 0.24 ± 23% perf-profile.self.cycles-pp.kmalloc_slab
0.23 ± 5% -0.0 0.21 ± 18% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.13 ± 13% -0.0 0.12 ± 9% perf-profile.self.cycles-pp.entry_SYSCALL_64_stage2
0.03 ±100% -0.0 0.01 ±173% perf-profile.self.cycles-pp.find_next_bit
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.poll_freewait
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.__hrtimer_next_event_base
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.cpu_load_update
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.update_rq_clock
0.05 ± 59% -0.0 0.04 ± 58% perf-profile.self.cycles-pp.run_timer_softirq
0.31 ± 3% -0.0 0.30 ± 12% perf-profile.self.cycles-pp.__x64_sys_poll
0.12 ± 3% -0.0 0.12 ± 13% perf-profile.self.cycles-pp.___might_sleep
0.07 ± 19% -0.0 0.07 ± 17% perf-profile.self.cycles-pp.__indirect_thunk_start
0.10 ± 41% -0.0 0.10 ± 37% perf-profile.self.cycles-pp.__might_fault
0.05 ± 67% +0.0 0.05 ± 62% perf-profile.self.cycles-pp.find_busiest_group
0.01 ±173% +0.0 0.01 ±173% perf-profile.self.cycles-pp.irq_exit
0.03 ±102% +0.0 0.03 ±105% perf-profile.self.cycles-pp._raw_spin_lock
0.11 ± 17% +0.0 0.11 ± 15% perf-profile.self.cycles-pp.__might_sleep
0.08 ± 23% +0.0 0.08 ± 37% perf-profile.self.cycles-pp.native_irq_return_iret
0.05 ± 59% +0.0 0.06 ± 70% perf-profile.self.cycles-pp.__softirqentry_text_start
0.03 ±100% +0.0 0.04 ±107% perf-profile.self.cycles-pp.native_sched_clock
0.05 ± 62% +0.0 0.06 ± 63% perf-profile.self.cycles-pp.ktime_get
0.00 +0.0 0.01 ±173% perf-profile.self.cycles-pp.perf_mux_hrtimer_handler
0.00 +0.0 0.01 ±173% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.01 ±173% +0.0 0.03 ±100% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.00 +0.0 0.01 ±173% perf-profile.self.cycles-pp.update_blocked_averages
0.01 ±173% +0.0 0.03 ±102% perf-profile.self.cycles-pp.rcu_check_callbacks
0.01 ±173% +0.0 0.03 ±102% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.15 ± 50% +0.0 0.17 ± 47% perf-profile.self.cycles-pp.delay_tsc
0.04 ± 57% +0.0 0.06 ± 28% perf-profile.self.cycles-pp.read_tsc
0.05 ± 67% +0.0 0.07 ± 62% perf-profile.self.cycles-pp.cpuidle_enter_state
0.00 +0.0 0.03 ±102% perf-profile.self.cycles-pp.__next_timer_interrupt
0.00 +0.0 0.03 ±105% perf-profile.self.cycles-pp.do_idle
0.38 ± 28% +0.0 0.41 ± 31% perf-profile.self.cycles-pp.io_serial_in
0.09 ± 60% +0.0 0.12 ± 30% perf-profile.self.cycles-pp.menu_select
0.09 ± 28% +0.0 0.14 ± 34% perf-profile.self.cycles-pp.native_write_msr
22.79 ± 6% +1.1 23.88 ± 14% perf-profile.self.cycles-pp.fput
23.48 ± 20% +3.9 27.38 ± 32% perf-profile.self.cycles-pp.intel_idle



will-it-scale.per_thread_ops

260000 +-+----------------------------------------------------------------+
|+.++.++.++.++.+++.++.++.+ .++. +. + .+ .+ .+ .++.+++. +.++.++.++. |
255000 +-+ + + + + + + + + +|
| |
250000 +-+ |
| O |
245000 +-+ O O |
| O |
240000 +O+ O O O OO O OO O O OO O OO OO OO OO |
O OO O O O OO OO O O |
235000 +-+ O O O O |
| |
230000 +-+ O |
| |
225000 +-+----------------------------------------------------------------+


[*] bisect-good sample
[O] bisect-bad sample



Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Xiaolong


Attachments:
config-4.17.0-rc3-00029-g9965ed17 (167.15 kB)
job-script (7.12 kB)
job.yaml (4.83 kB)
reproduce (323.00 B)