Greetings,
FYI, we noticed a -62.2% regression of aim7.jobs-per-min due to commit:
commit: a7f4e88080f3d50511400259cc613a666d297227 ("[PATCH] xfs: require an rcu grace period before inode recycle")
url: https://github.com/0day-ci/linux/commits/Brian-Foster/xfs-require-an-rcu-grace-period-before-inode-recycle/20220121-222536
base: https://git.kernel.org/cgit/fs/xfs/xfs-linux.git for-next
patch link: https://lore.kernel.org/linux-xfs/[email protected]
in testcase: aim7
on test machine: 88 threads 2 sockets Intel(R) Xeon(R) Gold 6238M CPU @ 2.10GHz with 128G memory
with the following parameters (a rough manual equivalent of the storage setup is sketched below):
disk: 4BRD_12G
md: RAID1
fs: xfs
test: disk_wrt
load: 3000
cpufreq_governor: performance
ucode: 0x5003006
test-description: AIM7 is a traditional UNIX system-level benchmark suite used to test and measure the performance of multiuser systems.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/
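For orientation only, here is a minimal shell sketch of what the disk/md/fs parameters above roughly correspond to. The device names, mount point, and the assumption that 4BRD_12G means four 12G brd ramdisks mirrored into one md RAID1 array are mine, not taken from the job file; the attached job.yaml is authoritative.

# Load the brd ramdisk driver with four 12G devices (rd_size is in KiB).
modprobe brd rd_nr=4 rd_size=12582912
# Mirror the four ramdisks into a single RAID1 array (hypothetical device name).
mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/ram0 /dev/ram1 /dev/ram2 /dev/ram3
# Put xfs on the array and mount it for the aim7 disk_wrt workload (hypothetical mount point).
mkfs.xfs -f /dev/md0
mkdir -p /fs/md0
mount /dev/md0 /fs/md0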
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <[email protected]>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
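For an out-of-band A/B comparison outside the lkp flow, one possible approach is to build and boot each of the two compared commits and rerun the generated yaml on each kernel. This is only a sketch: the branch name and commit ids are taken from this report, everything else (clone directory, config source, build/boot steps) is an assumption.

git clone --branch Brian-Foster/xfs-require-an-rcu-grace-period-before-inode-recycle/20220121-222536 \
        https://github.com/0day-ci/linux.git linux-0day
cd linux-0day
git checkout a7f4e88080f3d50511400259cc613a666d297227    # patched kernel under test
cp /boot/config-$(uname -r) .config                      # or any suitable kernel config (assumption)
make olddefconfig && make -j"$(nproc)"
# install and boot this kernel, then: sudo bin/lkp run generated-yaml-file
git checkout 6191cf3ad59fda59                            # baseline commit from the comparison below
# rebuild, reboot, and rerun the same yaml to get the baseline numbers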
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/md/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/4BRD_12G/xfs/x86_64-rhel-8.3/3000/RAID1/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp9/disk_wrt/aim7/0x5003006
commit:
6191cf3ad5 ("xfs: flush inodegc workqueue tasks before cancel")
a7f4e88080 ("xfs: require an rcu grace period before inode recycle")
6191cf3ad59fda59 a7f4e88080f3d50511400259cc6
---------------- ---------------------------
%stddev %change %stddev
\ | \
443599 -62.2% 167677 ? 6% aim7.jobs-per-min
40.85 +164.6% 108.09 ? 6% aim7.time.elapsed_time
40.85 +164.6% 108.09 ? 6% aim7.time.elapsed_time.max
2.527e+09 ? 7% +230.1% 8.344e+09 ? 10% cpuidle..time
6069098 ? 6% +198.6% 18124131 ? 9% cpuidle..usage
68.58 ? 8% +26.7% 86.91 ? 3% iostat.cpu.idle
30.77 ? 18% -58.3% 12.83 ? 20% iostat.cpu.system
80.29 ? 2% +82.3% 146.34 ? 6% uptime.boot
5466 ? 2% +105.6% 11240 ? 8% uptime.idle
65862 ? 3% +80.6% 118976 ? 5% meminfo.AnonHugePages
14084 ? 39% -60.9% 5502 ? 41% meminfo.Dirty
15002 ? 37% -56.2% 6573 ? 35% meminfo.Inactive(file)
67.80 ? 8% +27.6% 86.50 ? 2% vmstat.cpu.id
15808 ? 5% -94.6% 849.00 ? 38% vmstat.io.bo
27.20 ? 16% -62.0% 10.33 ? 39% vmstat.procs.r
43806 ? 5% -61.7% 16761 vmstat.system.cs
186207 -5.2% 176590 vmstat.system.in
67.25 ? 8% +19.5 86.72 ? 3% mpstat.cpu.all.idle%
0.02 ? 55% -0.0 0.00 ?209% mpstat.cpu.all.iowait%
1.03 ? 10% +0.3 1.36 ? 2% mpstat.cpu.all.irq%
0.11 ? 7% +0.0 0.13 ? 3% mpstat.cpu.all.soft%
30.98 ? 19% -19.4 11.55 ? 23% mpstat.cpu.all.sys%
0.61 ? 3% -0.4 0.25 ? 7% mpstat.cpu.all.usr%
59890 ? 5% +91.1% 114450 ? 3% numa-meminfo.node0.AnonHugePages
79172 ? 22% +19.1% 94295 ? 4% numa-meminfo.node0.KReclaimable
79172 ? 22% +19.1% 94295 ? 4% numa-meminfo.node0.SReclaimable
260929 ? 21% +16.3% 303472 ? 4% numa-meminfo.node0.Slab
6556 ? 35% -59.2% 2677 ? 43% numa-meminfo.node1.Dirty
6650 ? 32% -60.7% 2611 ? 44% numa-meminfo.node1.Inactive(file)
88809 +3.8% 92151 proc-vmstat.nr_anon_pages
92000 +3.3% 95019 proc-vmstat.nr_inactive_anon
60719 +2.7% 62331 proc-vmstat.nr_kernel_stack
33987 +6.0% 36013 proc-vmstat.nr_slab_reclaimable
73664 +2.5% 75505 proc-vmstat.nr_slab_unreclaimable
92000 +3.3% 95019 proc-vmstat.nr_zone_inactive_anon
452639 +43.9% 651425 ? 7% proc-vmstat.pgfault
701322 ? 4% -86.2% 96665 ? 45% proc-vmstat.pgpgout
16001 ? 2% +79.0% 28642 ? 5% proc-vmstat.pgreuse
1651 +3.9% 1716 ? 2% proc-vmstat.unevictable_pgs_culled
908.60 ? 17% -60.6% 358.00 ? 21% turbostat.Avg_MHz
33.06 ? 17% -18.9 14.13 ? 19% turbostat.Busy%
2753 -8.2% 2527 ? 2% turbostat.Bzy_MHz
0.99 ?153% -0.9 0.08 ? 62% turbostat.C1%
4107068 ? 27% +282.8% 15723747 ? 15% turbostat.C1E
41.79 ? 26% +28.9 70.70 ? 18% turbostat.C1E%
65.82 ? 7% +29.9% 85.50 ? 3% turbostat.CPU%c1
8238417 ? 2% +138.2% 19624535 ? 6% turbostat.IRQ
174.45 ? 2% -23.0% 134.26 ? 2% turbostat.PkgWatt
57.67 -7.8% 53.17 turbostat.RAMWatt
14410648 ? 2% +23.0% 17723721 numa-vmstat.node0.nr_dirtied
1789 ? 38% -55.7% 791.83 ? 47% numa-vmstat.node0.nr_dirty
19789 ? 22% +19.1% 23574 ? 4% numa-vmstat.node0.nr_slab_reclaimable
4888 ? 22% -65.7% 1676 ? 90% numa-vmstat.node0.nr_written
1779 ? 39% -55.6% 791.00 ? 46% numa-vmstat.node0.nr_zone_write_pending
16042664 ? 3% +21.7% 19525710 ? 2% numa-vmstat.node0.numa_hit
16006466 ? 3% +21.7% 19486525 ? 2% numa-vmstat.node0.numa_local
14478628 ? 2% +15.4% 16706134 ? 2% numa-vmstat.node1.nr_dirtied
1641 ? 36% -61.9% 626.33 ? 37% numa-vmstat.node1.nr_dirty
1664 ? 33% -63.1% 614.17 ? 38% numa-vmstat.node1.nr_inactive_file
1671 ? 32% -63.4% 611.33 ? 39% numa-vmstat.node1.nr_zone_inactive_file
1654 ? 35% -62.0% 629.00 ? 37% numa-vmstat.node1.nr_zone_write_pending
15322749 ? 3% +13.7% 17428068 ? 2% numa-vmstat.node1.numa_hit
15280012 ? 3% +13.7% 17371951 ? 2% numa-vmstat.node1.numa_local
8.05e+09 -61.1% 3.131e+09 ? 5% perf-stat.i.branch-instructions
1.09 ? 45% -0.5 0.63 ? 3% perf-stat.i.branch-miss-rate%
38127328 ? 3% -56.6% 16548357 ? 6% perf-stat.i.branch-misses
44350095 ? 17% -61.7% 16990385 ? 20% perf-stat.i.cache-misses
1.541e+08 ? 15% -59.8% 61896928 ? 19% perf-stat.i.cache-references
45484 ? 5% -63.2% 16759 perf-stat.i.context-switches
8.062e+10 ? 18% -61.5% 3.103e+10 ? 21% perf-stat.i.cpu-cycles
2460 ? 21% -71.8% 694.46 ? 14% perf-stat.i.cpu-migrations
1013046 ? 15% -51.4% 491922 ? 36% perf-stat.i.dTLB-load-misses
1.16e+10 -61.2% 4.505e+09 ? 5% perf-stat.i.dTLB-loads
139405 ? 13% -57.2% 59703 ? 30% perf-stat.i.dTLB-store-misses
5.54e+09 -60.3% 2.198e+09 ? 5% perf-stat.i.dTLB-stores
19147689 ? 10% -59.8% 7688166 ? 11% perf-stat.i.iTLB-load-misses
3979444 ? 2% -41.1% 2345331 perf-stat.i.iTLB-loads
4.052e+10 -61.2% 1.574e+10 ? 5% perf-stat.i.instructions
54.62 ? 28% -62.7% 20.38 ? 24% perf-stat.i.major-faults
0.92 ? 18% -61.5% 0.35 ? 21% perf-stat.i.metric.GHz
307.44 ? 12% +149.5% 767.15 ? 15% perf-stat.i.metric.K/sec
287.96 -61.2% 111.77 ? 5% perf-stat.i.metric.M/sec
7763 ? 3% -38.3% 4792 ? 3% perf-stat.i.minor-faults
8023842 ? 16% -63.0% 2972243 ? 20% perf-stat.i.node-load-misses
2359076 ? 3% -61.2% 914463 ? 6% perf-stat.i.node-loads
2709777 ? 4% -63.9% 979535 ? 9% perf-stat.i.node-store-misses
3915550 -63.9% 1412477 ? 5% perf-stat.i.node-stores
7818 ? 3% -38.4% 4813 ? 3% perf-stat.i.page-faults
0.47 ? 3% +0.1 0.53 ? 3% perf-stat.overall.branch-miss-rate%
82.68 -6.2 76.44 ? 2% perf-stat.overall.iTLB-load-miss-rate%
7.877e+09 -60.6% 3.102e+09 ? 5% perf-stat.ps.branch-instructions
37245481 ? 3% -56.0% 16391709 ? 6% perf-stat.ps.branch-misses
43407496 ? 17% -61.2% 16831298 ? 20% perf-stat.ps.cache-misses
1.508e+08 ? 15% -59.3% 61321022 ? 19% perf-stat.ps.cache-references
44501 ? 5% -62.7% 16604 perf-stat.ps.context-switches
85898 +1.5% 87191 perf-stat.ps.cpu-clock
7.891e+10 ? 18% -61.0% 3.074e+10 ? 21% perf-stat.ps.cpu-cycles
2407 ? 21% -71.4% 687.91 ? 14% perf-stat.ps.cpu-migrations
990172 ? 15% -50.8% 487318 ? 36% perf-stat.ps.dTLB-load-misses
1.135e+10 -60.7% 4.463e+09 ? 5% perf-stat.ps.dTLB-loads
136151 ? 13% -56.6% 59137 ? 30% perf-stat.ps.dTLB-store-misses
5.421e+09 -59.8% 2.177e+09 ? 5% perf-stat.ps.dTLB-stores
18737344 ? 10% -59.3% 7617127 ? 11% perf-stat.ps.iTLB-load-misses
3890634 ? 2% -40.3% 2323525 perf-stat.ps.iTLB-loads
3.965e+10 -60.7% 1.56e+10 ? 5% perf-stat.ps.instructions
52.95 ? 28% -61.9% 20.17 ? 24% perf-stat.ps.major-faults
7543 ? 3% -37.1% 4745 ? 3% perf-stat.ps.minor-faults
7853349 ? 16% -62.5% 2944346 ? 20% perf-stat.ps.node-load-misses
2308573 ? 3% -60.8% 905994 ? 6% perf-stat.ps.node-loads
2651779 ? 4% -63.4% 970377 ? 9% perf-stat.ps.node-store-misses
3831581 -63.5% 1399572 ? 5% perf-stat.ps.node-stores
7596 ? 3% -37.3% 4765 ? 3% perf-stat.ps.page-faults
85898 +1.5% 87191 perf-stat.ps.task-clock
0.00 +1.1 1.09 ? 25% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.00 +1.1 1.10 ? 25% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.12 ?200% +1.6 1.76 ? 26% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
0.12 ?200% +1.8 1.96 ? 26% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.07 ? 16% -0.0 0.03 ?100% perf-profile.children.cycles-pp.down
0.07 ? 16% -0.0 0.03 ?100% perf-profile.children.cycles-pp.__down
0.00 +0.1 0.06 ? 17% perf-profile.children.cycles-pp.tick_irq_enter
0.00 +0.1 0.06 ? 19% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.00 +0.1 0.07 ? 25% perf-profile.children.cycles-pp.update_rq_clock
0.00 +0.1 0.07 ? 27% perf-profile.children.cycles-pp.io_serial_in
0.00 +0.1 0.07 ? 20% perf-profile.children.cycles-pp.irq_enter_rcu
0.00 +0.1 0.08 ? 14% perf-profile.children.cycles-pp.irqtime_account_irq
0.00 +0.1 0.08 ? 28% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.00 +0.1 0.08 ? 21% perf-profile.children.cycles-pp.read_tsc
0.00 +0.1 0.08 ? 22% perf-profile.children.cycles-pp.serial8250_console_putchar
0.00 +0.1 0.09 ? 24% perf-profile.children.cycles-pp.wait_for_xmitr
0.01 ?200% +0.1 0.10 ? 14% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.00 +0.1 0.09 ? 26% perf-profile.children.cycles-pp.uart_console_write
0.00 +0.1 0.09 ? 23% perf-profile.children.cycles-pp.native_sched_clock
0.00 +0.1 0.09 ? 23% perf-profile.children.cycles-pp.serial8250_console_write
0.00 +0.1 0.09 ? 25% perf-profile.children.cycles-pp.irq_work_run_list
0.00 +0.1 0.09 ? 25% perf-profile.children.cycles-pp.asm_sysvec_irq_work
0.00 +0.1 0.09 ? 25% perf-profile.children.cycles-pp.sysvec_irq_work
0.00 +0.1 0.09 ? 25% perf-profile.children.cycles-pp.__sysvec_irq_work
0.00 +0.1 0.09 ? 25% perf-profile.children.cycles-pp.irq_work_run
0.00 +0.1 0.09 ? 25% perf-profile.children.cycles-pp.irq_work_single
0.00 +0.1 0.10 ? 22% perf-profile.children.cycles-pp._printk
0.00 +0.1 0.10 ? 22% perf-profile.children.cycles-pp.vprintk_emit
0.00 +0.1 0.10 ? 22% perf-profile.children.cycles-pp.console_unlock
0.02 ?122% +0.1 0.12 ? 34% perf-profile.children.cycles-pp.rebalance_domains
0.00 +0.1 0.11 ? 33% perf-profile.children.cycles-pp.tick_nohz_irq_exit
0.01 ?200% +0.1 0.12 ? 26% perf-profile.children.cycles-pp.native_irq_return_iret
0.00 +0.1 0.11 ? 21% perf-profile.children.cycles-pp.sched_clock_cpu
0.00 +0.1 0.13 ? 28% perf-profile.children.cycles-pp.tick_nohz_next_event
0.01 ?200% +0.1 0.14 ? 26% perf-profile.children.cycles-pp.update_blocked_averages
0.00 +0.1 0.14 ? 20% perf-profile.children.cycles-pp.lapic_next_deadline
0.00 +0.1 0.15 ? 23% perf-profile.children.cycles-pp.run_rebalance_domains
0.20 ? 10% +0.2 0.35 ? 18% perf-profile.children.cycles-pp.scheduler_tick
0.00 +0.2 0.16 ? 27% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.32 ? 7% +0.2 0.50 ? 41% perf-profile.children.cycles-pp._raw_spin_lock
0.17 ? 22% +0.2 0.42 ? 28% perf-profile.children.cycles-pp.__softirqentry_text_start
0.28 ? 15% +0.3 0.55 ? 19% perf-profile.children.cycles-pp.update_process_times
0.29 ? 14% +0.3 0.56 ? 20% perf-profile.children.cycles-pp.tick_sched_handle
0.20 ? 21% +0.3 0.48 ? 27% perf-profile.children.cycles-pp.irq_exit_rcu
0.30 ? 14% +0.3 0.60 ? 19% perf-profile.children.cycles-pp.tick_sched_timer
0.07 ? 65% +0.3 0.38 ? 28% perf-profile.children.cycles-pp.menu_select
0.15 ? 35% +0.3 0.45 ? 24% perf-profile.children.cycles-pp.clockevents_program_event
0.14 ? 40% +0.3 0.47 ? 25% perf-profile.children.cycles-pp.ktime_get
0.38 ? 16% +0.4 0.82 ? 19% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.58 ? 20% +0.9 1.44 ? 20% perf-profile.children.cycles-pp.hrtimer_interrupt
0.58 ? 21% +0.9 1.46 ? 20% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
1.16 ? 45% +1.3 2.47 ? 21% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.84 ? 21% +1.3 2.18 ? 22% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.02 ?122% +0.1 0.09 ? 33% perf-profile.self.cycles-pp.update_process_times
0.00 +0.1 0.07 ? 27% perf-profile.self.cycles-pp.io_serial_in
0.00 +0.1 0.08 ? 21% perf-profile.self.cycles-pp.read_tsc
0.00 +0.1 0.08 ? 25% perf-profile.self.cycles-pp.native_sched_clock
0.00 +0.1 0.09 ? 22% perf-profile.self.cycles-pp.update_blocked_averages
0.01 ?200% +0.1 0.12 ? 26% perf-profile.self.cycles-pp.native_irq_return_iret
0.00 +0.1 0.14 ? 20% perf-profile.self.cycles-pp.lapic_next_deadline
0.03 ?122% +0.2 0.19 ? 35% perf-profile.self.cycles-pp.menu_select
0.01 ?200% +0.3 0.26 ? 25% perf-profile.self.cycles-pp.cpuidle_enter_state
0.13 ? 41% +0.3 0.41 ? 25% perf-profile.self.cycles-pp.ktime_get
9220 ? 13% +124.1% 20659 ? 8% softirqs.CPU0.RCU
11092 ? 5% +70.2% 18876 ? 9% softirqs.CPU0.SCHED
8791 ? 23% +106.1% 18118 ? 12% softirqs.CPU1.RCU
7976 ? 9% +104.7% 16331 ? 10% softirqs.CPU1.SCHED
7028 ? 15% +150.4% 17600 ? 9% softirqs.CPU10.RCU
6664 ? 3% +118.2% 14542 ? 10% softirqs.CPU10.SCHED
7043 ? 17% +155.5% 17998 ? 8% softirqs.CPU11.RCU
6804 ? 9% +127.0% 15445 ? 11% softirqs.CPU11.SCHED
6995 ? 9% +149.7% 17467 ? 9% softirqs.CPU12.RCU
6734 ? 6% +117.9% 14676 ? 9% softirqs.CPU12.SCHED
6768 ? 15% +164.5% 17905 ? 11% softirqs.CPU13.RCU
6458 ? 9% +128.7% 14770 ? 9% softirqs.CPU13.SCHED
7216 ? 14% +145.9% 17744 ? 10% softirqs.CPU14.RCU
7202 ? 6% +105.4% 14792 ? 10% softirqs.CPU14.SCHED
6438 ? 15% +168.9% 17311 ? 7% softirqs.CPU15.RCU
6599 ? 6% +125.2% 14859 ? 9% softirqs.CPU15.SCHED
6712 ? 16% +155.3% 17135 ? 12% softirqs.CPU16.RCU
6576 ? 9% +126.4% 14885 ? 9% softirqs.CPU16.SCHED
6863 ? 13% +151.9% 17290 ? 7% softirqs.CPU17.RCU
6749 ? 9% +119.8% 14832 ? 8% softirqs.CPU17.SCHED
6711 ? 14% +158.3% 17335 ? 9% softirqs.CPU18.RCU
6715 ? 5% +116.0% 14504 ? 11% softirqs.CPU18.SCHED
6749 ? 20% +149.0% 16807 ? 8% softirqs.CPU19.RCU
6358 ? 4% +128.4% 14523 ? 9% softirqs.CPU19.SCHED
7638 ? 13% +147.4% 18898 ? 8% softirqs.CPU2.RCU
7850 ? 5% +97.5% 15505 ? 8% softirqs.CPU2.SCHED
7678 ? 24% +118.3% 16759 ? 12% softirqs.CPU20.RCU
6652 ? 7% +121.4% 14726 ? 8% softirqs.CPU20.SCHED
6882 ? 13% +153.2% 17426 ? 10% softirqs.CPU21.RCU
6865 ? 7% +113.9% 14685 ? 10% softirqs.CPU21.SCHED
6487 ? 11% +169.3% 17469 ? 7% softirqs.CPU22.RCU
6482 ? 5% +128.2% 14791 ? 8% softirqs.CPU22.SCHED
6276 ? 14% +174.6% 17235 ? 10% softirqs.CPU23.RCU
6511 ? 6% +129.5% 14941 ? 10% softirqs.CPU23.SCHED
6415 ? 14% +163.2% 16885 ? 9% softirqs.CPU24.RCU
6805 ? 7% +113.6% 14535 ? 8% softirqs.CPU24.SCHED
7473 ? 31% +125.5% 16849 ? 8% softirqs.CPU25.RCU
6941 ? 9% +107.5% 14405 ? 7% softirqs.CPU25.SCHED
6465 ? 13% +160.9% 16868 ? 8% softirqs.CPU26.RCU
6886 ? 11% +108.4% 14350 ? 8% softirqs.CPU26.SCHED
6352 ? 14% +169.2% 17102 ? 8% softirqs.CPU27.RCU
6487 ? 5% +124.8% 14581 ? 8% softirqs.CPU27.SCHED
6425 ? 13% +160.1% 16712 ? 7% softirqs.CPU28.RCU
6749 ? 9% +118.0% 14710 ? 6% softirqs.CPU28.SCHED
6178 ? 15% +171.7% 16788 ? 9% softirqs.CPU29.RCU
6368 ? 5% +125.1% 14333 ? 8% softirqs.CPU29.SCHED
7033 ? 12% +173.8% 19259 ? 8% softirqs.CPU3.RCU
7148 ? 6% +115.4% 15400 ? 8% softirqs.CPU3.SCHED
6646 ? 17% +158.0% 17149 ? 7% softirqs.CPU30.RCU
6886 ? 9% +110.3% 14483 ? 8% softirqs.CPU30.SCHED
6258 ? 14% +166.0% 16650 ? 14% softirqs.CPU31.RCU
6873 ? 12% +108.1% 14302 ? 6% softirqs.CPU31.SCHED
6406 ? 13% +178.8% 17860 ? 7% softirqs.CPU32.RCU
6475 ? 6% +123.9% 14497 ? 8% softirqs.CPU32.SCHED
6372 ? 13% +174.3% 17479 ? 7% softirqs.CPU33.RCU
6438 ? 6% +126.4% 14574 ? 8% softirqs.CPU33.SCHED
6324 ? 13% +176.7% 17497 ? 7% softirqs.CPU34.RCU
6448 ? 6% +124.4% 14468 ? 8% softirqs.CPU34.SCHED
6268 ? 13% +171.2% 16996 ? 6% softirqs.CPU35.RCU
6432 ? 4% +128.6% 14703 ? 10% softirqs.CPU35.SCHED
6358 ? 13% +174.9% 17481 ? 7% softirqs.CPU36.RCU
6465 ? 5% +123.1% 14425 ? 8% softirqs.CPU36.SCHED
6529 ? 12% +175.3% 17978 ? 9% softirqs.CPU37.RCU
6437 ? 5% +134.0% 15061 ? 11% softirqs.CPU37.SCHED
6292 ? 12% +178.4% 17517 ? 7% softirqs.CPU38.RCU
6353 ? 5% +127.4% 14444 ? 8% softirqs.CPU38.SCHED
6230 ? 13% +178.7% 17367 ? 7% softirqs.CPU39.RCU
6408 ? 6% +126.1% 14492 ? 8% softirqs.CPU39.SCHED
7067 ? 10% +163.0% 18585 ? 9% softirqs.CPU4.RCU
7419 ? 6% +108.0% 15431 ? 10% softirqs.CPU4.SCHED
6412 ? 15% +167.6% 17158 ? 7% softirqs.CPU40.RCU
6567 ? 8% +120.1% 14451 ? 8% softirqs.CPU40.SCHED
6366 ? 13% +175.9% 17564 ? 7% softirqs.CPU41.RCU
6356 ? 6% +127.1% 14434 ? 9% softirqs.CPU41.SCHED
6539 ? 14% +166.9% 17451 ? 8% softirqs.CPU42.RCU
6393 ? 12% +122.0% 14193 ? 10% softirqs.CPU42.SCHED
6722 ? 14% +165.4% 17838 ? 7% softirqs.CPU43.RCU
6211 ? 13% +112.6% 13202 ? 16% softirqs.CPU43.SCHED
6069 ? 13% +137.6% 14420 ? 9% softirqs.CPU44.RCU
6805 ? 11% +105.3% 13973 ? 6% softirqs.CPU44.SCHED
5881 ? 13% +181.0% 16528 ? 14% softirqs.CPU45.RCU
6423 ? 7% +123.6% 14363 ? 8% softirqs.CPU45.SCHED
6465 ? 15% +186.6% 18531 ? 16% softirqs.CPU46.RCU
6812 ? 11% +110.6% 14348 ? 9% softirqs.CPU46.SCHED
6796 ? 13% +169.6% 18326 ? 7% softirqs.CPU47.RCU
6515 ? 2% +118.8% 14256 ? 5% softirqs.CPU47.SCHED
7226 ? 14% +154.4% 18382 ? 12% softirqs.CPU48.RCU
6735 ? 13% +107.0% 13944 ? 7% softirqs.CPU48.SCHED
6878 ? 9% +162.7% 18072 ? 8% softirqs.CPU49.RCU
6506 ? 6% +121.5% 14411 ? 9% softirqs.CPU49.SCHED
7494 ? 17% +145.7% 18412 ? 10% softirqs.CPU5.RCU
7326 ? 8% +106.2% 15108 ? 9% softirqs.CPU5.SCHED
6872 ? 19% +159.3% 17816 ? 9% softirqs.CPU50.RCU
6587 ? 5% +120.2% 14504 ? 10% softirqs.CPU50.SCHED
7329 ? 19% +149.4% 18281 ? 8% softirqs.CPU51.RCU
6767 ? 10% +110.8% 14264 ? 8% softirqs.CPU51.SCHED
6699 ? 14% +172.5% 18258 ? 7% softirqs.CPU52.RCU
6386 ? 8% +133.9% 14937 ? 8% softirqs.CPU52.SCHED
7339 ? 17% +139.7% 17593 ? 8% softirqs.CPU53.RCU
6669 ? 5% +116.3% 14425 ? 6% softirqs.CPU53.SCHED
6991 ? 10% +160.2% 18192 ? 8% softirqs.CPU54.RCU
6563 ? 5% +122.5% 14602 ? 12% softirqs.CPU54.SCHED
6846 ? 12% +171.7% 18603 ? 10% softirqs.CPU55.RCU
6658 ? 7% +126.8% 15099 ? 8% softirqs.CPU55.SCHED
7738 ? 17% +142.6% 18773 ? 8% softirqs.CPU56.RCU
6670 ? 5% +122.1% 14816 ? 10% softirqs.CPU56.SCHED
7737 ? 30% +133.9% 18095 ? 11% softirqs.CPU57.RCU
6514 ? 4% +121.7% 14440 ? 10% softirqs.CPU57.SCHED
6821 ? 16% +161.3% 17826 ? 9% softirqs.CPU58.RCU
7124 ? 9% +104.1% 14545 ? 8% softirqs.CPU58.SCHED
6987 ? 14% +156.1% 17892 ? 8% softirqs.CPU59.RCU
7062 ? 6% +108.9% 14754 ? 8% softirqs.CPU59.SCHED
6589 ? 10% +169.3% 17743 ? 7% softirqs.CPU6.RCU
7181 ? 10% +105.4% 14754 ? 8% softirqs.CPU6.SCHED
6439 ? 13% +159.0% 16675 ? 13% softirqs.CPU60.RCU
6752 ? 6% +119.4% 14814 ? 9% softirqs.CPU60.SCHED
6680 ? 16% +158.8% 17290 ? 9% softirqs.CPU61.RCU
6887 ? 5% +116.6% 14920 ? 9% softirqs.CPU61.SCHED
6898 ? 14% +159.5% 17905 ? 16% softirqs.CPU62.RCU
6753 ? 6% +118.9% 14786 ? 9% softirqs.CPU62.SCHED
6823 ? 15% +150.8% 17112 ? 7% softirqs.CPU63.RCU
7047 ? 6% +102.3% 14258 ? 5% softirqs.CPU63.SCHED
7056 ? 15% +147.5% 17467 ? 7% softirqs.CPU64.RCU
6983 ? 6% +111.8% 14794 ? 9% softirqs.CPU64.SCHED
7279 ? 13% +137.4% 17280 ? 8% softirqs.CPU65.RCU
6898 ? 4% +110.4% 14515 ? 10% softirqs.CPU65.SCHED
6694 ? 15% +161.4% 17501 ? 8% softirqs.CPU66.RCU
6532 ? 6% +120.9% 14428 ? 7% softirqs.CPU66.SCHED
6428 ? 14% +169.1% 17301 ? 7% softirqs.CPU67.RCU
6479 ? 6% +124.3% 14535 ? 7% softirqs.CPU67.SCHED
6442 ? 13% +165.6% 17107 ? 9% softirqs.CPU68.RCU
6399 ? 5% +125.8% 14452 ? 8% softirqs.CPU68.SCHED
6834 ? 11% +154.6% 17396 ? 7% softirqs.CPU69.RCU
6753 ? 5% +114.2% 14467 ? 8% softirqs.CPU69.SCHED
7107 ? 9% +154.7% 18099 ? 9% softirqs.CPU7.RCU
6628 ? 7% +125.4% 14939 ? 8% softirqs.CPU7.SCHED
6655 ? 13% +159.8% 17288 ? 9% softirqs.CPU70.RCU
6766 ? 12% +111.6% 14313 ? 8% softirqs.CPU70.SCHED
6582 ? 14% +163.7% 17356 ? 7% softirqs.CPU71.RCU
6514 ? 7% +117.7% 14183 ? 11% softirqs.CPU71.SCHED
6305 ? 14% +169.1% 16965 ? 6% softirqs.CPU72.RCU
6478 ? 6% +120.4% 14277 ? 7% softirqs.CPU72.SCHED
6291 ? 14% +171.9% 17108 ? 7% softirqs.CPU73.RCU
6437 ? 6% +117.5% 14003 ? 9% softirqs.CPU73.SCHED
7039 ? 21% +142.0% 17031 ? 7% softirqs.CPU74.RCU
7555 ? 25% +84.3% 13925 ? 10% softirqs.CPU74.SCHED
5988 ? 15% +181.7% 16871 ? 7% softirqs.CPU75.RCU
6516 ? 6% +133.5% 15214 ? 14% softirqs.CPU75.SCHED
6047 ? 11% +186.9% 17347 ? 8% softirqs.CPU76.RCU
6498 ? 6% +127.1% 14758 ? 11% softirqs.CPU76.SCHED
6209 ? 10% +162.5% 16298 ? 8% softirqs.CPU77.RCU
6492 ? 6% +118.4% 14182 ? 9% softirqs.CPU77.SCHED
6159 ? 11% +169.3% 16585 ? 7% softirqs.CPU78.RCU
6426 ? 6% +123.4% 14357 ? 8% softirqs.CPU78.SCHED
6052 ? 13% +162.7% 15898 ? 7% softirqs.CPU79.RCU
6510 ? 6% +123.1% 14525 ? 8% softirqs.CPU79.SCHED
6848 ? 12% +166.0% 18217 ? 11% softirqs.CPU8.RCU
6924 ? 5% +109.2% 14485 ? 9% softirqs.CPU8.SCHED
6143 ? 13% +171.4% 16673 ? 7% softirqs.CPU80.RCU
6475 ? 6% +121.7% 14357 ? 7% softirqs.CPU80.SCHED
6180 ? 11% +173.2% 16885 ? 6% softirqs.CPU81.RCU
6691 ? 6% +119.6% 14696 ? 7% softirqs.CPU81.SCHED
6215 ? 13% +168.7% 16703 ? 8% softirqs.CPU82.RCU
6442 ? 6% +121.3% 14254 ? 8% softirqs.CPU82.SCHED
6275 ? 15% +164.8% 16617 ? 7% softirqs.CPU83.RCU
7139 ? 16% +98.7% 14188 ? 7% softirqs.CPU83.SCHED
6210 ? 14% +167.1% 16589 ? 8% softirqs.CPU84.RCU
6537 ? 7% +119.4% 14345 ? 8% softirqs.CPU84.SCHED
6558 ? 11% +156.6% 16828 ? 7% softirqs.CPU85.RCU
6953 ? 18% +106.3% 14345 ? 8% softirqs.CPU85.SCHED
6326 ? 16% +166.0% 16825 ? 7% softirqs.CPU86.RCU
6458 ? 6% +123.3% 14423 ? 9% softirqs.CPU86.SCHED
6216 ? 15% +175.5% 17130 ? 8% softirqs.CPU87.RCU
6004 ? 3% +132.4% 13953 ? 9% softirqs.CPU87.SCHED
6675 ? 13% +168.6% 17930 ? 9% softirqs.CPU9.RCU
6738 ? 9% +119.2% 14772 ? 9% softirqs.CPU9.SCHED
591090 ? 12% +159.9% 1536265 ? 8% softirqs.RCU
593854 ? 5% +116.8% 1287218 ? 8% softirqs.SCHED
16454 ? 4% +63.0% 26822 ? 4% softirqs.TIMER
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation
Thanks,
Oliver Sang