Greetings,
FYI, we noticed a 61.8% improvement in phoronix-test-suite.npb.FT.A.total_mop_s due to commit:
commit: 31b912de1316644040ca9a0fb9b514ffa462c20c ("mm/gup: decrement head page once for group of subpages")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: phoronix-test-suite
on test machine: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 128G memory
with following parameters:
test: npb-1.3.1
option_a: FT.A
cpufreq_governor: performance
ucode: 0x5003006
test-description: The Phoronix Test Suite is the most comprehensive testing and benchmarking platform available; it provides an extensible framework within which new tests can easily be added.
test-url: http://www.phoronix-test-suite.com/
In addition, the commit also has a significant impact on the following tests:
+------------------+---------------------------------------------------------------------------------+
| testcase: change | phoronix-test-suite: phoronix-test-suite.npb.FT.B.total_mop_s 67.9% improvement |
| test machine | 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | option_a=FT.B |
| | test=npb-1.3.1 |
| | ucode=0x5003006 |
+------------------+---------------------------------------------------------------------------------+
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
bin/lkp run generated-yaml-file
=========================================================================================
compiler/cpufreq_governor/kconfig/option_a/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/FT.A/debian-x86_64-phoronix/lkp-csl-2sp8/npb-1.3.1/phoronix-test-suite/0x5003006
commit:
8745d7f634 ("mm/gup: add compound page list iterator")
31b912de13 ("mm/gup: decrement head page once for group of subpages")
8745d7f6346ca107 31b912de1316644040ca9a0fb9b
---------------- ---------------------------
%stddev %change %stddev
\ | \
1772 +61.8% 2869 phoronix-test-suite.npb.FT.A.total_mop_s
33.27 -16.2% 27.89 phoronix-test-suite.time.elapsed_time
33.27 -16.2% 27.89 phoronix-test-suite.time.elapsed_time.max
4037 ± 4% -9.0% 3675 ± 5% phoronix-test-suite.time.involuntary_context_switches
667409 -7.7% 616334 phoronix-test-suite.time.minor_page_faults
2011 -24.7% 1514 phoronix-test-suite.time.percent_of_cpu_this_job_got
264.20 -40.1% 158.32 phoronix-test-suite.time.system_time
405.29 -34.8% 264.31 phoronix-test-suite.time.user_time
2.438e+09 ± 2% -10.4% 2.184e+09 cpuidle.C1E.time
5082140 -10.0% 4575868 cpuidle.C1E.usage
7.06 -2.1 4.91 mpstat.cpu.all.sys%
10.99 -2.6 8.41 mpstat.cpu.all.usr%
2730 ± 5% -11.7% 2410 ± 4% slabinfo.fsnotify_mark_connector.active_objs
2730 ± 5% -11.7% 2410 ± 4% slabinfo.fsnotify_mark_connector.num_objs
6161 ± 78% -56.9% 2658 ± 5% softirqs.NET_RX
354673 ± 7% -14.5% 303117 ± 3% softirqs.RCU
10420 ± 13% -36.2% 6648 ± 21% numa-meminfo.node0.Active(anon)
1282403 ± 6% -22.4% 994508 ± 5% numa-meminfo.node0.AnonHugePages
1582881 ± 4% -22.0% 1235332 ± 9% numa-meminfo.node0.AnonPages
13583 ± 4% -21.9% 10611 ± 13% numa-meminfo.node0.PageTables
1217497 ± 6% -22.5% 943365 ± 5% numa-meminfo.node1.AnonHugePages
78.17 +6.2% 83.00 vmstat.cpu.id
11.50 ± 4% -21.7% 9.00 vmstat.cpu.us
2437 +18.0% 2876 vmstat.io.bi
18.83 ± 7% -22.1% 14.67 ± 3% vmstat.procs.r
9376 ± 2% +14.6% 10743 ± 2% vmstat.system.cs
17391 ± 7% -16.4% 14547 ± 4% meminfo.Active(anon)
2628150 ± 10% -26.7% 1925597 meminfo.AnonHugePages
2983160 ± 9% -24.4% 2254776 meminfo.AnonPages
11603333 ± 6% -18.6% 9440776 ± 4% meminfo.Committed_AS
4841224 ± 5% -15.0% 4114574 meminfo.Inactive
3896500 ± 7% -18.6% 3169834 meminfo.Inactive(anon)
6370318 ± 4% -11.5% 5637070 meminfo.Memused
23945 ± 5% -19.9% 19184 meminfo.PageTables
2606 ± 13% -36.2% 1661 ± 21% numa-vmstat.node0.nr_active_anon
395726 ± 4% -21.9% 308870 ± 9% numa-vmstat.node0.nr_anon_pages
625.83 ± 6% -22.5% 485.33 ± 5% numa-vmstat.node0.nr_anon_transparent_hugepages
8971231 ± 68% -74.1% 2320961 ±200% numa-vmstat.node0.nr_foll_pin_acquired
8969656 ± 68% -74.1% 2320780 ±200% numa-vmstat.node0.nr_foll_pin_released
3395 ± 4% -21.9% 2652 ± 13% numa-vmstat.node0.nr_page_table_pages
2606 ± 13% -36.3% 1661 ± 21% numa-vmstat.node0.nr_zone_active_anon
594.00 ± 6% -22.6% 460.00 ± 5% numa-vmstat.node1.nr_anon_transparent_hugepages
15.96 ± 77% -8.7 7.22 ±156% perf-profile.calltrace.cycles-pp.mmput.begin_new_exec.load_elf_binary.exec_binprm.bprm_execve
15.96 ± 77% -8.7 7.22 ±156% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.begin_new_exec.load_elf_binary.exec_binprm
7.41 ±108% -6.3 1.11 ±223% perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.41 ±108% -6.3 1.11 ±223% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.41 ±108% -6.3 1.11 ±223% perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
7.41 ±108% -6.3 1.11 ±223% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__x64_sys_exit_group
7.41 ±108% -6.3 1.11 ±223% perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.41 ±108% -6.0 1.39 ±223% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
8.62 ± 80% -3.1 5.55 ±145% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.begin_new_exec.load_elf_binary
8.62 ± 80% -3.1 5.55 ±145% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.begin_new_exec
1.51 ±223% +11.6 13.12 ± 18% perf-profile.calltrace.cycles-pp.arch_do_signal_or_restart.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.51 ±223% +11.6 13.12 ± 18% perf-profile.calltrace.cycles-pp.get_signal.arch_do_signal_or_restart.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
1.51 ±223% +11.6 13.12 ± 18% perf-profile.calltrace.cycles-pp.do_group_exit.get_signal.arch_do_signal_or_restart.exit_to_user_mode_prepare.syscall_exit_to_user_mode
1.51 ±223% +12.7 14.23 ± 25% perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.51 ±223% +12.7 14.23 ± 25% perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.41 ±108% -6.3 1.11 ±223% perf-profile.children.cycles-pp.__x64_sys_exit_group
1.51 ±223% +12.7 14.23 ± 25% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
5.36 ±109% -4.0 1.39 ±223% perf-profile.self.cycles-pp.zap_pte_range
4347 ± 7% -16.3% 3637 ± 4% proc-vmstat.nr_active_anon
745895 ± 9% -24.4% 564177 proc-vmstat.nr_anon_pages
1283 ± 10% -26.7% 940.83 proc-vmstat.nr_anon_transparent_hugepages
13478115 -5.9% 12681453 proc-vmstat.nr_foll_pin_acquired
13475687 -5.9% 12680209 proc-vmstat.nr_foll_pin_released
974234 ± 7% -18.6% 792942 proc-vmstat.nr_inactive_anon
20603 -2.3% 20136 proc-vmstat.nr_mapped
5986 ± 5% -19.9% 4796 proc-vmstat.nr_page_table_pages
4347 ± 7% -16.3% 3637 ± 4% proc-vmstat.nr_zone_active_anon
974234 ± 7% -18.6% 792942 proc-vmstat.nr_zone_inactive_anon
184879 ± 4% -27.9% 133379 proc-vmstat.numa_hint_faults
174130 ± 4% -23.6% 133102 proc-vmstat.numa_hint_faults_local
19620 -49.2% 9969 proc-vmstat.numa_huge_pte_updates
86022 ± 4% -30.1% 60147 ± 13% proc-vmstat.numa_pages_migrated
10220324 -48.8% 5236492 proc-vmstat.numa_pte_updates
44801 ± 2% +11.8% 50071 proc-vmstat.pgactivate
843000 -7.9% 776723 proc-vmstat.pgfault
86022 ± 4% -30.1% 60147 ± 13% proc-vmstat.pgmigrate_success
36307 ± 2% -8.1% 33361 proc-vmstat.pgreuse
3.68 ± 2% +8.4% 3.99 ± 2% perf-stat.i.MPKI
9.051e+09 -23.0% 6.971e+09 perf-stat.i.branch-instructions
2.02 +0.3 2.30 perf-stat.i.branch-miss-rate%
65928496 -10.1% 59280741 perf-stat.i.branch-misses
21.94 -1.7 20.27 ± 2% perf-stat.i.cache-miss-rate%
45130628 ± 3% +6.5% 48086526 perf-stat.i.cache-misses
1.556e+08 ± 3% +6.2% 1.653e+08 perf-stat.i.cache-references
10191 +15.5% 11766 ± 3% perf-stat.i.context-switches
6.001e+10 ± 2% -22.5% 4.65e+10 perf-stat.i.cpu-cycles
89.98 ± 4% +8.6% 97.70 ± 4% perf-stat.i.cpu-migrations
2477 ± 2% +13.4% 2808 ± 4% perf-stat.i.cycles-between-cache-misses
301912 ± 7% +15.2% 347761 ± 5% perf-stat.i.dTLB-load-misses
2.011e+10 -18.0% 1.65e+10 perf-stat.i.dTLB-loads
8.536e+09 -21.3% 6.719e+09 perf-stat.i.dTLB-stores
44.76 -2.9 41.86 perf-stat.i.iTLB-load-miss-rate%
2034762 ± 3% -7.0% 1892624 ± 2% perf-stat.i.iTLB-load-misses
5.571e+10 -17.3% 4.608e+10 perf-stat.i.instructions
20342 ± 3% -19.0% 16472 ± 3% perf-stat.i.instructions-per-iTLB-miss
266.68 ± 6% +18.8% 316.77 ± 5% perf-stat.i.major-faults
625824 ± 2% -22.4% 485370 perf-stat.i.metric.GHz
3.95e+08 -19.7% 3.171e+08 perf-stat.i.metric.M/sec
22471 +9.4% 24584 perf-stat.i.minor-faults
68.65 -4.2 64.41 ± 3% perf-stat.i.node-load-miss-rate%
2287777 ± 2% -18.0% 1876580 ± 3% perf-stat.i.node-load-misses
1279773 ± 3% +15.1% 1473444 ± 3% perf-stat.i.node-loads
2732036 ± 3% -35.6% 1759774 perf-stat.i.node-store-misses
18822179 ± 4% +24.1% 23362852 ± 2% perf-stat.i.node-stores
22738 +9.5% 24899 perf-stat.i.page-faults
100846 ± 26% -20.9% 79764 interrupts.CAL:Function_call_interrupts
76508 -13.4% 66285 interrupts.CPU0.LOC:Local_timer_interrupts
817.33 ± 35% +250.4% 2864 ± 51% interrupts.CPU1.CAL:Function_call_interrupts
67899 -15.1% 57668 interrupts.CPU1.LOC:Local_timer_interrupts
67407 -15.2% 57130 interrupts.CPU10.LOC:Local_timer_interrupts
67416 -15.4% 57014 interrupts.CPU11.LOC:Local_timer_interrupts
67195 -15.0% 57141 interrupts.CPU12.LOC:Local_timer_interrupts
67262 -15.1% 57136 interrupts.CPU13.LOC:Local_timer_interrupts
758.50 ± 37% -26.4% 558.00 ± 2% interrupts.CPU14.CAL:Function_call_interrupts
67256 -14.5% 57483 interrupts.CPU14.LOC:Local_timer_interrupts
67474 -15.2% 57246 interrupts.CPU15.LOC:Local_timer_interrupts
67329 -15.4% 56991 interrupts.CPU16.LOC:Local_timer_interrupts
67222 -14.8% 57263 interrupts.CPU17.LOC:Local_timer_interrupts
67380 -15.1% 57183 interrupts.CPU18.LOC:Local_timer_interrupts
67430 -15.3% 57132 interrupts.CPU19.LOC:Local_timer_interrupts
67607 -15.5% 57119 interrupts.CPU2.LOC:Local_timer_interrupts
67459 -15.3% 57108 interrupts.CPU20.LOC:Local_timer_interrupts
67582 -15.4% 57150 interrupts.CPU21.LOC:Local_timer_interrupts
67321 -15.1% 57162 interrupts.CPU22.LOC:Local_timer_interrupts
67451 -15.2% 57228 interrupts.CPU23.LOC:Local_timer_interrupts
67243 -15.1% 57114 interrupts.CPU24.LOC:Local_timer_interrupts
67379 -14.8% 57377 interrupts.CPU25.LOC:Local_timer_interrupts
67486 -15.0% 57357 interrupts.CPU26.LOC:Local_timer_interrupts
756.50 ± 40% -25.0% 567.00 ± 6% interrupts.CPU27.CAL:Function_call_interrupts
67518 -15.2% 57251 interrupts.CPU27.LOC:Local_timer_interrupts
67464 -15.0% 57373 interrupts.CPU28.LOC:Local_timer_interrupts
67448 -14.8% 57460 interrupts.CPU29.LOC:Local_timer_interrupts
67450 -15.2% 57213 interrupts.CPU3.LOC:Local_timer_interrupts
67342 -15.2% 57125 interrupts.CPU30.LOC:Local_timer_interrupts
67493 -15.1% 57310 interrupts.CPU31.LOC:Local_timer_interrupts
67354 -15.0% 57281 interrupts.CPU32.LOC:Local_timer_interrupts
872.50 ± 26% -31.8% 595.00 ± 10% interrupts.CPU33.CAL:Function_call_interrupts
67309 -14.9% 57259 interrupts.CPU33.LOC:Local_timer_interrupts
67268 -14.9% 57242 interrupts.CPU34.LOC:Local_timer_interrupts
221.67 ± 71% -94.7% 11.83 ± 24% interrupts.CPU34.TLB:TLB_shootdowns
892.67 ± 27% -26.5% 656.17 ± 12% interrupts.CPU35.CAL:Function_call_interrupts
67660 -15.0% 57503 interrupts.CPU35.LOC:Local_timer_interrupts
171.67 ± 31% -94.4% 9.67 ± 42% interrupts.CPU35.TLB:TLB_shootdowns
67345 -15.2% 57076 interrupts.CPU36.LOC:Local_timer_interrupts
67310 -15.0% 57224 interrupts.CPU37.LOC:Local_timer_interrupts
67227 -15.1% 57061 interrupts.CPU38.LOC:Local_timer_interrupts
1058 ± 47% -43.3% 600.00 ± 13% interrupts.CPU39.CAL:Function_call_interrupts
67429 -15.3% 57124 interrupts.CPU39.LOC:Local_timer_interrupts
67560 -15.3% 57240 interrupts.CPU4.LOC:Local_timer_interrupts
67340 -15.2% 57106 interrupts.CPU40.LOC:Local_timer_interrupts
257.50 ± 84% -78.1% 56.50 ±191% interrupts.CPU40.TLB:TLB_shootdowns
871.83 ± 29% -29.9% 610.83 ± 10% interrupts.CPU41.CAL:Function_call_interrupts
67247 -15.1% 57122 interrupts.CPU41.LOC:Local_timer_interrupts
67331 -15.0% 57206 interrupts.CPU42.LOC:Local_timer_interrupts
220.50 ±122% -87.8% 27.00 ±143% interrupts.CPU42.TLB:TLB_shootdowns
67258 -15.1% 57093 interrupts.CPU43.LOC:Local_timer_interrupts
814.83 ± 47% -28.1% 585.83 ± 2% interrupts.CPU44.CAL:Function_call_interrupts
67426 -15.6% 56941 interrupts.CPU44.LOC:Local_timer_interrupts
847.00 ± 32% -31.9% 576.67 ± 3% interrupts.CPU45.CAL:Function_call_interrupts
67319 -15.2% 57094 interrupts.CPU45.LOC:Local_timer_interrupts
67210 -15.2% 57012 interrupts.CPU46.LOC:Local_timer_interrupts
986.17 ± 39% -40.3% 588.50 ± 4% interrupts.CPU47.CAL:Function_call_interrupts
67299 -15.0% 57210 interrupts.CPU47.LOC:Local_timer_interrupts
67543 -15.7% 56916 interrupts.CPU48.LOC:Local_timer_interrupts
67532 -15.5% 57085 interrupts.CPU49.LOC:Local_timer_interrupts
67347 -15.2% 57083 interrupts.CPU5.LOC:Local_timer_interrupts
67531 -15.5% 57087 interrupts.CPU50.LOC:Local_timer_interrupts
67465 -15.4% 57075 interrupts.CPU51.LOC:Local_timer_interrupts
67618 -15.5% 57166 interrupts.CPU52.LOC:Local_timer_interrupts
67404 -14.9% 57367 interrupts.CPU53.LOC:Local_timer_interrupts
67293 -14.9% 57243 interrupts.CPU54.LOC:Local_timer_interrupts
67312 -15.0% 57206 interrupts.CPU55.LOC:Local_timer_interrupts
67741 -15.1% 57511 interrupts.CPU56.LOC:Local_timer_interrupts
67478 -15.1% 57291 interrupts.CPU57.LOC:Local_timer_interrupts
67605 -15.0% 57487 interrupts.CPU58.LOC:Local_timer_interrupts
67521 -15.3% 57177 interrupts.CPU59.LOC:Local_timer_interrupts
67397 -15.3% 57085 interrupts.CPU6.LOC:Local_timer_interrupts
67660 -15.7% 57067 interrupts.CPU60.LOC:Local_timer_interrupts
67611 -15.5% 57109 interrupts.CPU61.LOC:Local_timer_interrupts
67638 -15.4% 57209 interrupts.CPU62.LOC:Local_timer_interrupts
816.83 ± 37% -33.7% 541.50 ± 10% interrupts.CPU63.CAL:Function_call_interrupts
67616 -14.6% 57773 interrupts.CPU63.LOC:Local_timer_interrupts
67561 -15.1% 57341 interrupts.CPU64.LOC:Local_timer_interrupts
67539 -14.9% 57444 interrupts.CPU65.LOC:Local_timer_interrupts
67688 -15.4% 57240 interrupts.CPU66.LOC:Local_timer_interrupts
67808 -15.6% 57237 interrupts.CPU67.LOC:Local_timer_interrupts
758.83 ± 44% -29.4% 535.83 ± 14% interrupts.CPU68.CAL:Function_call_interrupts
67772 -15.5% 57247 interrupts.CPU68.LOC:Local_timer_interrupts
67994 -15.5% 57427 interrupts.CPU69.LOC:Local_timer_interrupts
67283 -15.2% 57059 interrupts.CPU7.LOC:Local_timer_interrupts
67506 -15.3% 57184 interrupts.CPU70.LOC:Local_timer_interrupts
68051 -16.0% 57185 interrupts.CPU71.LOC:Local_timer_interrupts
67308 -15.0% 57245 interrupts.CPU72.LOC:Local_timer_interrupts
67543 -15.1% 57314 interrupts.CPU73.LOC:Local_timer_interrupts
67640 -15.2% 57372 interrupts.CPU74.LOC:Local_timer_interrupts
67489 -14.9% 57433 interrupts.CPU75.LOC:Local_timer_interrupts
67513 -14.9% 57481 interrupts.CPU76.LOC:Local_timer_interrupts
67552 -15.0% 57399 interrupts.CPU77.LOC:Local_timer_interrupts
67400 -15.2% 57134 interrupts.CPU78.LOC:Local_timer_interrupts
67487 -15.1% 57293 interrupts.CPU79.LOC:Local_timer_interrupts
67361 -15.2% 57135 interrupts.CPU8.LOC:Local_timer_interrupts
67368 -15.0% 57254 interrupts.CPU80.LOC:Local_timer_interrupts
67323 -14.9% 57299 interrupts.CPU81.LOC:Local_timer_interrupts
1638 ± 48% -50.5% 810.83 ± 29% interrupts.CPU82.CAL:Function_call_interrupts
67305 -15.1% 57164 interrupts.CPU82.LOC:Local_timer_interrupts
1006 ± 63% -75.6% 245.17 ±101% interrupts.CPU82.TLB:TLB_shootdowns
67651 -15.3% 57329 interrupts.CPU83.LOC:Local_timer_interrupts
67499 -15.5% 57013 interrupts.CPU84.LOC:Local_timer_interrupts
67259 -15.0% 57165 interrupts.CPU85.LOC:Local_timer_interrupts
67235 -15.2% 56985 interrupts.CPU86.LOC:Local_timer_interrupts
67349 -15.3% 57032 interrupts.CPU87.LOC:Local_timer_interrupts
855.50 ± 24% -31.6% 585.00 ± 3% interrupts.CPU88.CAL:Function_call_interrupts
67421 -15.4% 57032 interrupts.CPU88.LOC:Local_timer_interrupts
67286 -15.1% 57101 interrupts.CPU89.LOC:Local_timer_interrupts
67487 -15.4% 57116 interrupts.CPU9.LOC:Local_timer_interrupts
889.50 ± 39% -28.8% 633.33 ± 19% interrupts.CPU90.CAL:Function_call_interrupts
67710 -15.8% 57000 interrupts.CPU90.LOC:Local_timer_interrupts
799.17 ± 32% -30.2% 557.50 ± 6% interrupts.CPU91.CAL:Function_call_interrupts
67294 -15.2% 57037 interrupts.CPU91.LOC:Local_timer_interrupts
67372 -15.4% 57026 interrupts.CPU92.LOC:Local_timer_interrupts
67412 -15.2% 57139 interrupts.CPU93.LOC:Local_timer_interrupts
67181 -15.1% 57006 interrupts.CPU94.LOC:Local_timer_interrupts
67247 -14.6% 57416 interrupts.CPU95.LOC:Local_timer_interrupts
6484161 -15.2% 5501075 interrupts.LOC:Local_timer_interrupts
542.83 ± 8% -21.1% 428.33 ± 8% interrupts.RES:Rescheduling_interrupts
27464 -21.1% 21669 ? 2% interrupts.TLB:TLB_shootdowns
phoronix-test-suite.npb.FT.A.total_mop_s
3000 +--------------------------------------------------------------------+
|O O O O O O OO O OO O OO O OOO O |
2800 |-+OO OO O O O O OO O O |
| |
2600 |-+ |
| |
2400 |-+ |
| |
2200 |-+ |
| |
2000 |-+ |
| |
1800 |+.++.++.++.++.++.++.+++.++.++.++.++.++.+ .+ .+++.++. +.++.++.++.++.+|
| + + + |
1600 +--------------------------------------------------------------------+
phoronix-test-suite.time.user_time
420 +---------------------------------------------------------------------+
|+. .++.++.+ .+ .+ .+ +. + :+ :.++. +.++.++.+ ++.+ .+|
400 |-+++.++.++ + + + +.++.+ ++.+ + + + + |
380 |-+ |
| |
360 |-+ |
340 |-+ |
| |
320 |-+ |
300 |-+ |
| |
280 |-+ |
260 |O+OO OO OO OO OO O OO O O OO OO OO OO O O |
| O O OO OO O O |
240 +---------------------------------------------------------------------+
phoronix-test-suite.time.system_time
280 +---------------------------------------------------------------------+
|+. +.+ .+ +. .+ .+ + : +. |
260 |-+++.+ +.++.++.++.++.++ +.++.+ ++.++ + +.++.++ +.++.++.++.+ +|
| |
240 |-+ |
| |
220 |-+ |
| |
200 |-+ |
| |
180 |-+ |
| |
160 |O+OO OO O OO OO O OO O O OO OO OO O |
| O O O OO OO OO O OO |
140 +---------------------------------------------------------------------+
phoronix-test-suite.time.percent_of_cpu_this_job_got
2100 +--------------------------------------------------------------------+
| .+ +.+ .++.+ .+ .+ +. .+ +. .++. +. +.++.+ .++. +.+ +.+|
2000 |++ +.+ + + + : : ++ +.++.+ ++ + ++ + + +.+ |
1900 |-+ : : |
| : : |
1800 |-+ :: |
1700 |-+ : |
| : |
1600 |-+ : |
1500 |O+ O OO OO OO O O O+O O O O OO OO OOO O |
| O O O O OO O OO |
1400 |-+ |
1300 |-+ O |
| O |
1200 +--------------------------------------------------------------------+
phoronix-test-suite.time.minor_page_faults
690000 +------------------------------------------------------------------+
| : + |
680000 |-+ + + :: +.+ :: |
670000 |-+ +: :: : : : + : + |
| + +. + + + + :: : + : : : + +: :|
660000 |++ :+ +.+ .+ + .+ +. .+ : : .+ : :: + +. ::.+ : + :.+|
| +: + :+ + ++ :: ++ :: + + + + + |
650000 |-++ + + + |
| |
640000 |-+ |
630000 |-+ |
| O |
620000 |-+ O O O O O |
|O OO O O O OO OOO O O O O OOO OO O OOO |
610000 +------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-csl-2sp8: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/option_a/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/FT.B/debian-x86_64-phoronix/lkp-csl-2sp8/npb-1.3.1/phoronix-test-suite/0x5003006
commit:
8745d7f634 ("mm/gup: add compound page list iterator")
31b912de13 ("mm/gup: decrement head page once for group of subpages")
8745d7f6346ca107 31b912de1316644040ca9a0fb9b
---------------- ---------------------------
%stddev %change %stddev
\ | \
1883 +67.9% 3161 phoronix-test-suite.npb.FT.B.total_mop_s
173.55 -35.4% 112.14 phoronix-test-suite.time.elapsed_time
173.55 -35.4% 112.14 phoronix-test-suite.time.elapsed_time.max
13073 ± 2% -31.0% 9026 ± 2% phoronix-test-suite.time.involuntary_context_switches
996712 +4.8% 1044747 phoronix-test-suite.time.minor_page_faults
4218 -7.6% 3897 phoronix-test-suite.time.percent_of_cpu_this_job_got
2871 -43.2% 1631 phoronix-test-suite.time.system_time
4450 -38.5% 2739 phoronix-test-suite.time.user_time
135389 ±100% -58.9% 55607 ± 6% cpuidle.C1.usage
34326 ± 56% -42.0% 19920 ± 11% cpuidle.POLL.usage
204163 ± 2% -34.0% 134801 meminfo.Active
95238 ± 4% -71.4% 27223 ± 8% meminfo.Active(anon)
651335 ± 3% -12.3% 571121 ± 4% numa-numastat.node0.numa_hit
509857 ± 4% -14.7% 434809 ± 5% numa-numastat.node1.numa_hit
2272 ± 6% +13.2% 2572 ± 3% slabinfo.fsnotify_mark_connector.active_objs
2272 ± 6% +13.2% 2572 ± 3% slabinfo.fsnotify_mark_connector.num_objs
208.80 -29.4% 147.38 uptime.boot
12013 -24.0% 9125 uptime.idle
0.02 ± 17% +0.0 0.03 ± 11% mpstat.cpu.all.iowait%
0.97 ± 4% +0.1 1.06 ± 2% mpstat.cpu.all.irq%
0.04 ± 3% +0.0 0.05 ± 4% mpstat.cpu.all.soft%
16.43 -2.4 14.07 mpstat.cpu.all.sys%
62945 ± 38% -75.8% 15241 ± 19% numa-meminfo.node0.Active(anon)
13522393 ± 2% -9.0% 12307392 ± 2% numa-meminfo.node0.Inactive
12593029 ± 2% -9.6% 11380322 ± 2% numa-meminfo.node0.Inactive(anon)
14393350 ± 3% -8.8% 13121183 ± 2% numa-meminfo.node0.MemUsed
55.00 +6.1% 58.33 vmstat.cpu.id
26.00 -5.8% 24.50 ± 3% vmstat.cpu.us
502.17 +52.8% 767.50 vmstat.io.bi
3385 ± 2% +26.3% 4276 ± 4% vmstat.system.cs
15736 ± 38% -75.8% 3810 ± 19% numa-vmstat.node0.nr_active_anon
3148084 ± 2% -9.6% 2845175 ± 2% numa-vmstat.node0.nr_inactive_anon
15736 ± 38% -75.8% 3810 ± 19% numa-vmstat.node0.nr_zone_active_anon
3148075 ± 2% -9.6% 2845164 ± 2% numa-vmstat.node0.nr_zone_inactive_anon
1490540 ± 14% -12.5% 1303654 ± 9% numa-vmstat.node0.numa_hit
23454 ± 13% -34.0% 15487 ± 10% softirqs.CPU0.SCHED
12635 ± 17% -35.7% 8120 ± 22% softirqs.CPU11.SCHED
15163 ± 27% -35.0% 9863 ± 33% softirqs.CPU18.SCHED
15391 ± 10% -42.1% 8907 ± 38% softirqs.CPU22.SCHED
21047 ± 13% -28.5% 15040 ± 12% softirqs.CPU25.SCHED
19013 ± 14% -44.2% 10614 ± 25% softirqs.CPU26.SCHED
17972 ± 10% -26.7% 13180 ± 29% softirqs.CPU27.SCHED
20835 ± 10% -36.7% 13188 ± 15% softirqs.CPU29.SCHED
15286 ± 15% -34.4% 10026 ± 15% softirqs.CPU34.SCHED
14291 ± 19% -40.1% 8565 ± 32% softirqs.CPU38.SCHED
16513 ± 9% -50.7% 8140 ± 28% softirqs.CPU62.SCHED
16758 ± 23% -48.5% 8622 ± 20% softirqs.CPU69.SCHED
18223 ± 13% -32.3% 12331 ± 22% softirqs.CPU72.SCHED
16762 ± 9% -37.2% 10533 ± 22% softirqs.CPU8.SCHED
16681 ± 20% -42.1% 9651 ± 26% softirqs.CPU83.SCHED
15815 ± 13% -40.4% 9422 ± 13% softirqs.CPU85.SCHED
19757 ± 15% -36.7% 12505 ± 20% softirqs.CPU89.SCHED
1375511 ± 2% -28.2% 988096 ± 3% softirqs.SCHED
29878 ± 2% -31.8% 20371 ± 9% softirqs.TIMER
0.00 ± 10% +39.3% 0.01 ± 7% perf-sched.sch_delay.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
0.09 ± 41% -71.7% 0.03 ±104% perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
0.15 ± 16% -88.3% 0.02 ± 19% perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork
0.09 ± 21% +158.5% 0.24 ± 85% perf-sched.sch_delay.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
0.04 ± 24% +72.6% 0.06 ± 35% perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
0.03 ± 12% -78.0% 0.01 ± 28% perf-sched.total_sch_delay.average.ms
0.01 ± 13% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.down_read.process_vm_rw_core.isra
0.67 ±223% +1650.0% 11.67 ± 64% perf-sched.wait_and_delay.count.devkmsg_read.vfs_read.ksys_read.do_syscall_64
0.67 ±223% +1650.0% 11.67 ± 64% perf-sched.wait_and_delay.count.do_syslog.part.0.kmsg_read.vfs_read
105.67 ± 13% +57.1% 166.00 ± 14% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.__get_user_pages.__get_user_pages_remote.process_vm_rw_core
71.00 ± 14% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.down_read.process_vm_rw_core.isra
97.33 ± 5% +38.7% 135.00 ± 6% perf-sched.wait_and_delay.count.rcu_gp_kthread.kthread.ret_from_fork
388.15 ±223% +963.4% 4127 ± 70% perf-sched.wait_and_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
388.15 ±223% +963.4% 4127 ± 70% perf-sched.wait_and_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read
1243 ± 39% +232.0% 4129 ± 70% perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
0.06 ±102% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.down_read.process_vm_rw_core.isra
697.32 ± 86% +565.1% 4637 ± 56% perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
451.86 ±185% +813.5% 4127 ± 70% perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
449.53 ±187% +818.2% 4127 ± 70% perf-sched.wait_time.max.ms.do_syslog.part.0.kmsg_read.vfs_read
1243 ± 39% +232.0% 4129 ± 70% perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
0.06 ±102% -86.8% 0.01 ± 74% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_read.process_vm_rw_core.isra
697.30 ± 86% +565.1% 4637 ± 56% perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
9.824e+09 -5.7% 9.261e+09 perf-stat.i.branch-instructions
59060351 -8.1% 54261925 perf-stat.i.branch-misses
97507101 +38.3% 1.349e+08 perf-stat.i.cache-misses
3.14e+08 +36.6% 4.289e+08 perf-stat.i.cache-references
3317 ± 2% +26.4% 4192 ± 4% perf-stat.i.context-switches
2.01 ± 3% -20.6% 1.60 perf-stat.i.cpi
1.212e+11 -7.0% 1.127e+11 perf-stat.i.cpu-cycles
34.10 ± 3% +20.8% 41.21 ± 2% perf-stat.i.cpu-migrations
1512 ± 3% -16.1% 1269 perf-stat.i.cycles-between-cache-misses
2.632e+10 +9.4% 2.881e+10 ± 2% perf-stat.i.dTLB-loads
9.092e+09 +7.5% 9.777e+09 perf-stat.i.dTLB-stores
2593308 +9.5% 2839709 perf-stat.i.iTLB-load-misses
2099364 +2.3% 2146798 perf-stat.i.iTLB-loads
7.854e+10 +9.6% 8.606e+10 perf-stat.i.instructions
30394 -5.6% 28685 perf-stat.i.instructions-per-iTLB-miss
0.66 +15.5% 0.76 perf-stat.i.ipc
54.88 +54.0% 84.53 perf-stat.i.major-faults
1262451 -6.9% 1175061 perf-stat.i.metric.GHz
4.753e+08 +6.0% 5.04e+08 perf-stat.i.metric.M/sec
8962 +38.3% 12392 perf-stat.i.minor-faults
65.43 -8.8 56.63 perf-stat.i.node-load-miss-rate%
3458735 -8.1% 3178929 ± 2% perf-stat.i.node-load-misses
1911861 +41.6% 2707101 perf-stat.i.node-loads
6061203 -10.9% 5399342 perf-stat.i.node-store-misses
40829318 +61.6% 65961480 perf-stat.i.node-stores
9017 +38.3% 12473 perf-stat.i.page-faults
23809 ± 4% -71.4% 6805 ± 8% proc-vmstat.nr_active_anon
27230 -1.2% 26894 proc-vmstat.nr_active_file
5818014 -8.3% 5334534 proc-vmstat.nr_anon_pages
11185 -8.5% 10236 proc-vmstat.nr_anon_transparent_hugepages
2631685 +1.8% 2677781 proc-vmstat.nr_dirty_background_threshold
5269806 +1.8% 5362110 proc-vmstat.nr_dirty_threshold
515843 -3.2% 499299 proc-vmstat.nr_file_pages
26246664 +1.8% 26708558 proc-vmstat.nr_free_pages
6050522 -8.0% 5564919 proc-vmstat.nr_inactive_anon
35889 -8.1% 32986 proc-vmstat.nr_page_table_pages
252287 -6.5% 235951 proc-vmstat.nr_shmem
28121 -1.4% 27718 proc-vmstat.nr_slab_reclaimable
23809 ± 4% -71.4% 6805 ± 8% proc-vmstat.nr_zone_active_anon
27230 -1.2% 26894 proc-vmstat.nr_zone_active_file
6050523 -8.0% 5564919 proc-vmstat.nr_zone_inactive_anon
439937 +10.9% 487862 proc-vmstat.numa_hint_faults
413401 +14.3% 472600 proc-vmstat.numa_hint_faults_local
1183593 -12.6% 1034257 proc-vmstat.numa_hit
165828 +14.5% 189875 proc-vmstat.numa_huge_pte_updates
1115497 -13.4% 966164 proc-vmstat.numa_local
511118 ± 3% -29.8% 358584 ± 6% proc-vmstat.numa_pages_migrated
85219759 +14.5% 97557490 proc-vmstat.numa_pte_updates
87830 ± 3% -36.3% 55908 ± 37% proc-vmstat.pgactivate
21700466 -1.4% 21394542 proc-vmstat.pgalloc_normal
1741406 -10.0% 1567053 proc-vmstat.pgfault
21591246 -1.8% 21211687 proc-vmstat.pgfree
511118 ± 3% -29.8% 358584 ± 6% proc-vmstat.pgmigrate_success
122484 -30.7% 84928 proc-vmstat.pgreuse
19.64 ± 17% -19.6 0.00 perf-profile.calltrace.cycles-pp.unpin_user_pages.process_vm_rw_core.process_vm_rw.__x64_sys_process_vm_readv.do_syscall_64
19.58 ± 17% -19.6 0.00 perf-profile.calltrace.cycles-pp.put_compound_head.unpin_user_pages.process_vm_rw_core.process_vm_rw.__x64_sys_process_vm_readv
4.81 ± 18% +5.7 10.55 ± 16% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout.copy_page_to_iter.process_vm_rw_core.process_vm_rw
4.85 ± 18% +5.8 10.64 ± 16% perf-profile.calltrace.cycles-pp.copyout.copy_page_to_iter.process_vm_rw_core.process_vm_rw.__x64_sys_process_vm_readv
5.03 ± 18% +5.9 10.96 ± 16% perf-profile.calltrace.cycles-pp.copy_page_to_iter.process_vm_rw_core.process_vm_rw.__x64_sys_process_vm_readv.do_syscall_64
20.42 ± 18% +11.1 31.48 ± 17% perf-profile.calltrace.cycles-pp.try_grab_page.follow_trans_huge_pmd.follow_pmd_mask.__get_user_pages.__get_user_pages_remote
20.50 ± 18% +11.1 31.63 ± 17% perf-profile.calltrace.cycles-pp.follow_trans_huge_pmd.follow_pmd_mask.__get_user_pages.__get_user_pages_remote.process_vm_rw_core
20.97 ± 18% +11.4 32.41 ± 17% perf-profile.calltrace.cycles-pp.follow_pmd_mask.__get_user_pages.__get_user_pages_remote.process_vm_rw_core.process_vm_rw
21.12 ± 18% +11.5 32.65 ± 17% perf-profile.calltrace.cycles-pp.__get_user_pages.__get_user_pages_remote.process_vm_rw_core.process_vm_rw.__x64_sys_process_vm_readv
21.12 ± 18% +11.5 32.66 ± 17% perf-profile.calltrace.cycles-pp.__get_user_pages_remote.process_vm_rw_core.process_vm_rw.__x64_sys_process_vm_readv.do_syscall_64
19.64 ± 17% -19.6 0.07 ± 12% perf-profile.children.cycles-pp.unpin_user_pages
19.59 ± 17% -19.5 0.06 ± 47% perf-profile.children.cycles-pp.put_compound_head
0.04 ± 72% +0.0 0.08 ± 11% perf-profile.children.cycles-pp.__might_fault
0.07 ± 21% +0.0 0.12 ± 12% perf-profile.children.cycles-pp.___might_sleep
0.03 ± 70% +0.0 0.08 ± 16% perf-profile.children.cycles-pp.__cond_resched
0.03 ± 70% +0.0 0.08 ± 16% perf-profile.children.cycles-pp.__might_sleep
0.05 ± 71% +0.1 0.10 ± 16% perf-profile.children.cycles-pp.follow_page_mask
4.85 ± 18% +5.8 10.63 ± 16% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
4.86 ± 18% +5.8 10.65 ± 16% perf-profile.children.cycles-pp.copyout
5.05 ± 18% +5.9 11.00 ± 16% perf-profile.children.cycles-pp.copy_page_to_iter
20.44 ± 18% +11.1 31.51 ± 17% perf-profile.children.cycles-pp.try_grab_page
20.51 ± 18% +11.1 31.64 ± 17% perf-profile.children.cycles-pp.follow_trans_huge_pmd
20.98 ± 18% +11.4 32.42 ± 17% perf-profile.children.cycles-pp.follow_pmd_mask
21.12 ± 18% +11.5 32.66 ± 17% perf-profile.children.cycles-pp.__get_user_pages_remote
21.12 ± 18% +11.5 32.66 ± 17% perf-profile.children.cycles-pp.__get_user_pages
19.37 ± 17% -19.3 0.04 ± 72% perf-profile.self.cycles-pp.put_compound_head
0.07 ± 18% +0.0 0.11 ± 16% perf-profile.self.cycles-pp.___might_sleep
0.03 ± 99% +0.0 0.07 ± 12% perf-profile.self.cycles-pp.unpin_user_pages
0.04 ± 72% +0.1 0.09 ± 18% perf-profile.self.cycles-pp.follow_page_mask
0.06 ± 46% +0.1 0.11 ± 14% perf-profile.self.cycles-pp.process_vm_rw_core
0.09 ± 17% +0.1 0.15 ± 14% perf-profile.self.cycles-pp.follow_pmd_mask
0.08 ± 16% +0.1 0.14 ± 17% perf-profile.self.cycles-pp.copy_page_to_iter
4.78 ± 19% +5.7 10.48 ± 16% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
20.23 ± 17% +10.9 31.16 ± 17% perf-profile.self.cycles-pp.try_grab_page
350.00 -35.0% 227.33 interrupts.9:IO-APIC.9-fasteoi.acpi
168065 ± 17% -26.8% 123062 interrupts.CAL:Function_call_interrupts
357931 -34.2% 235436 interrupts.CPU0.LOC:Local_timer_interrupts
350.00 -35.0% 227.33 interrupts.CPU1.9:IO-APIC.9-fasteoi.acpi
349335 -35.0% 226974 interrupts.CPU1.LOC:Local_timer_interrupts
349499 -35.3% 225993 interrupts.CPU10.LOC:Local_timer_interrupts
348523 -35.2% 225697 interrupts.CPU11.LOC:Local_timer_interrupts
349064 -35.3% 225881 interrupts.CPU12.LOC:Local_timer_interrupts
7216 ± 20% -38.8% 4419 ± 48% interrupts.CPU12.NMI:Non-maskable_interrupts
7216 ± 20% -38.8% 4419 ± 48% interrupts.CPU12.PMI:Performance_monitoring_interrupts
348865 -35.3% 225816 interrupts.CPU13.LOC:Local_timer_interrupts
2354 ± 76% -64.5% 835.67 ± 27% interrupts.CPU14.CAL:Function_call_interrupts
348391 -35.2% 225919 interrupts.CPU14.LOC:Local_timer_interrupts
348913 -35.2% 226094 interrupts.CPU15.LOC:Local_timer_interrupts
349192 -35.3% 226071 interrupts.CPU16.LOC:Local_timer_interrupts
348735 -34.9% 226885 interrupts.CPU17.LOC:Local_timer_interrupts
348717 -35.2% 225922 interrupts.CPU18.LOC:Local_timer_interrupts
1860 ± 18% -36.7% 1177 ± 17% interrupts.CPU19.CAL:Function_call_interrupts
348918 -35.2% 226212 interrupts.CPU19.LOC:Local_timer_interrupts
348705 -35.1% 226408 interrupts.CPU2.LOC:Local_timer_interrupts
348810 -35.2% 226037 interrupts.CPU20.LOC:Local_timer_interrupts
1581 ± 30% -38.6% 970.83 ± 25% interrupts.CPU21.CAL:Function_call_interrupts
349176 -35.3% 225984 interrupts.CPU21.LOC:Local_timer_interrupts
349182 -35.3% 225997 interrupts.CPU22.LOC:Local_timer_interrupts
348569 -35.2% 225929 interrupts.CPU23.LOC:Local_timer_interrupts
348420 -35.2% 225849 interrupts.CPU24.LOC:Local_timer_interrupts
348227 -35.1% 226068 interrupts.CPU25.LOC:Local_timer_interrupts
348462 -35.0% 226368 interrupts.CPU26.LOC:Local_timer_interrupts
348493 -35.1% 226055 interrupts.CPU27.LOC:Local_timer_interrupts
348351 -35.1% 226081 interrupts.CPU28.LOC:Local_timer_interrupts
348369 -35.1% 226075 interrupts.CPU29.LOC:Local_timer_interrupts
348936 -35.1% 226615 interrupts.CPU3.LOC:Local_timer_interrupts
348414 -35.2% 225934 interrupts.CPU30.LOC:Local_timer_interrupts
348495 -35.2% 225999 interrupts.CPU31.LOC:Local_timer_interrupts
348524 -35.1% 226338 interrupts.CPU32.LOC:Local_timer_interrupts
348429 -35.0% 226316 interrupts.CPU33.LOC:Local_timer_interrupts
348482 -35.2% 225944 interrupts.CPU34.LOC:Local_timer_interrupts
5217 ± 40% -36.1% 3333 ± 33% interrupts.CPU34.NMI:Non-maskable_interrupts
5217 ± 40% -36.1% 3333 ± 33% interrupts.CPU34.PMI:Performance_monitoring_interrupts
348545 -35.2% 225991 interrupts.CPU35.LOC:Local_timer_interrupts
348281 -35.1% 225884 interrupts.CPU36.LOC:Local_timer_interrupts
1655 ± 31% -35.4% 1070 ± 21% interrupts.CPU37.CAL:Function_call_interrupts
348620 -35.2% 226007 interrupts.CPU37.LOC:Local_timer_interrupts
348281 -35.1% 225918 interrupts.CPU38.LOC:Local_timer_interrupts
348433 -35.1% 225974 interrupts.CPU39.LOC:Local_timer_interrupts
348653 -35.2% 226056 interrupts.CPU4.LOC:Local_timer_interrupts
348505 -35.2% 225935 interrupts.CPU40.LOC:Local_timer_interrupts
348518 -35.1% 226056 interrupts.CPU41.LOC:Local_timer_interrupts
348249 -35.2% 225815 interrupts.CPU42.LOC:Local_timer_interrupts
1824 ± 26% -44.5% 1013 ± 21% interrupts.CPU43.CAL:Function_call_interrupts
348271 -35.1% 226004 interrupts.CPU43.LOC:Local_timer_interrupts
348448 -35.1% 226033 interrupts.CPU44.LOC:Local_timer_interrupts
348279 -35.1% 225870 interrupts.CPU45.LOC:Local_timer_interrupts
1851 ± 9% -35.3% 1197 ± 23% interrupts.CPU46.CAL:Function_call_interrupts
348413 -35.1% 226036 interrupts.CPU46.LOC:Local_timer_interrupts
348365 -35.1% 226026 interrupts.CPU47.LOC:Local_timer_interrupts
1895 ± 16% -34.3% 1245 ± 8% interrupts.CPU48.CAL:Function_call_interrupts
348106 -35.1% 225860 interrupts.CPU48.LOC:Local_timer_interrupts
348690 -35.0% 226626 interrupts.CPU49.LOC:Local_timer_interrupts
349291 -35.3% 226020 interrupts.CPU5.LOC:Local_timer_interrupts
348763 -35.1% 226209 interrupts.CPU50.LOC:Local_timer_interrupts
349434 -35.1% 226654 interrupts.CPU51.LOC:Local_timer_interrupts
348935 -35.2% 226169 interrupts.CPU52.LOC:Local_timer_interrupts
349507 -35.2% 226352 interrupts.CPU53.LOC:Local_timer_interrupts
349010 -35.2% 226095 interrupts.CPU54.LOC:Local_timer_interrupts
348924 -35.0% 226716 interrupts.CPU55.LOC:Local_timer_interrupts
13.33 ± 66% +1156.2% 167.50 ± 90% interrupts.CPU55.TLB:TLB_shootdowns
1594 ± 21% -37.5% 997.50 ± 22% interrupts.CPU56.CAL:Function_call_interrupts
348962 -35.1% 226316 interrupts.CPU56.LOC:Local_timer_interrupts
1450 ± 12% -33.6% 963.67 ± 23% interrupts.CPU57.CAL:Function_call_interrupts
348774 -35.1% 226275 interrupts.CPU57.LOC:Local_timer_interrupts
348663 -35.0% 226520 interrupts.CPU58.LOC:Local_timer_interrupts
1320 ± 27% -33.2% 882.33 ± 16% interrupts.CPU59.CAL:Function_call_interrupts
349377 -35.1% 226596 interrupts.CPU59.LOC:Local_timer_interrupts
348871 -35.3% 225850 interrupts.CPU6.LOC:Local_timer_interrupts
349480 -35.3% 226023 interrupts.CPU60.LOC:Local_timer_interrupts
348948 -35.0% 226914 interrupts.CPU61.LOC:Local_timer_interrupts
348501 -35.2% 225847 interrupts.CPU62.LOC:Local_timer_interrupts
348482 -35.1% 226331 interrupts.CPU63.LOC:Local_timer_interrupts
348693 -35.1% 226128 interrupts.CPU64.LOC:Local_timer_interrupts
349048 -35.3% 225745 interrupts.CPU65.LOC:Local_timer_interrupts
1460 ± 20% -33.6% 970.33 ± 19% interrupts.CPU66.CAL:Function_call_interrupts
348721 -35.0% 226749 interrupts.CPU66.LOC:Local_timer_interrupts
348506 -35.1% 226244 interrupts.CPU67.LOC:Local_timer_interrupts
348661 -35.2% 226049 interrupts.CPU68.LOC:Local_timer_interrupts
348967 -35.2% 226256 interrupts.CPU69.LOC:Local_timer_interrupts
348963 -35.1% 226332 interrupts.CPU7.LOC:Local_timer_interrupts
1459 ± 22% -38.2% 902.00 ± 23% interrupts.CPU70.CAL:Function_call_interrupts
348551 -35.1% 226112 interrupts.CPU70.LOC:Local_timer_interrupts
348983 -35.1% 226501 interrupts.CPU71.LOC:Local_timer_interrupts
348131 -35.1% 225908 interrupts.CPU72.LOC:Local_timer_interrupts
3535 ± 18% -54.3% 1615 ± 11% interrupts.CPU73.CAL:Function_call_interrupts
348763 -35.1% 226266 interrupts.CPU73.LOC:Local_timer_interrupts
1931 ± 42% -81.0% 367.00 ± 51% interrupts.CPU73.TLB:TLB_shootdowns
348704 -35.1% 226225 interrupts.CPU74.LOC:Local_timer_interrupts
349000 -35.2% 226227 interrupts.CPU75.LOC:Local_timer_interrupts
348709 -35.1% 226413 interrupts.CPU76.LOC:Local_timer_interrupts
348620 -35.1% 226161 interrupts.CPU77.LOC:Local_timer_interrupts
348432 -35.1% 225976 interrupts.CPU78.LOC:Local_timer_interrupts
348645 -35.1% 226290 interrupts.CPU79.LOC:Local_timer_interrupts
348519 -35.2% 225997 interrupts.CPU8.LOC:Local_timer_interrupts
348297 -35.1% 225923 interrupts.CPU80.LOC:Local_timer_interrupts
348481 -35.1% 226209 interrupts.CPU81.LOC:Local_timer_interrupts
2236 ± 18% -42.0% 1297 ± 33% interrupts.CPU82.CAL:Function_call_interrupts
348262 -35.2% 225809 interrupts.CPU82.LOC:Local_timer_interrupts
348430 -35.2% 225879 interrupts.CPU83.LOC:Local_timer_interrupts
348259 -35.1% 225941 interrupts.CPU84.LOC:Local_timer_interrupts
348358 -35.1% 226016 interrupts.CPU85.LOC:Local_timer_interrupts
348503 -35.2% 225755 interrupts.CPU86.LOC:Local_timer_interrupts
348285 -35.2% 225841 interrupts.CPU87.LOC:Local_timer_interrupts
348139 -35.1% 225808 interrupts.CPU88.LOC:Local_timer_interrupts
348419 -35.2% 225796 interrupts.CPU89.LOC:Local_timer_interrupts
348466 -35.1% 226066 interrupts.CPU9.LOC:Local_timer_interrupts
348604 -35.2% 225892 interrupts.CPU90.LOC:Local_timer_interrupts
348314 -35.1% 226069 interrupts.CPU91.LOC:Local_timer_interrupts
348174 -35.1% 225802 interrupts.CPU92.LOC:Local_timer_interrupts
348449 -35.2% 225844 interrupts.CPU93.LOC:Local_timer_interrupts
348261 -35.1% 225893 interrupts.CPU94.LOC:Local_timer_interrupts
348235 -35.2% 225723 interrupts.CPU95.LOC:Local_timer_interrupts
33479298 -35.1% 21715759 interrupts.LOC:Local_timer_interrupts
1267 ± 4% -27.8% 915.33 ± 6% interrupts.RES:Rescheduling_interrupts
29009 ± 3% -9.2% 26326 ± 2% interrupts.TLB:TLB_shootdowns
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
---
0DAY/LKP+ Test Infrastructure                   Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected]       Intel Corporation
Thanks,
Oliver Sang