2020-10-30 07:20:22

by Chen, Rong A

Subject: [mm] e6e88712e4: stress-ng.tmpfs.ops_per_sec -69.7% regression

Greetings,

FYI, we noticed a -69.7% regression of stress-ng.tmpfs.ops_per_sec due to commit:


commit: e6e88712e43b7942df451508aafc2f083266f56b ("mm: optimise madvise WILLNEED")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master


in testcase: stress-ng
on test machine: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
with the following parameters:

nr_threads: 100%
disk: 1HDD
testtime: 100s
class: memory
cpufreq_governor: performance
ucode: 0x5002f01




If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <[email protected]>


Details are as below:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
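
If the full LKP harness is not available, a rough manual approximation is to run the stress-ng tmpfs stressor directly. This is a hedged sketch only (it assumes stress-ng is installed on the target; the attached job.yaml remains the authoritative option set), mapping nr_threads: 100% to one worker per CPU and testtime to --timeout:

# hedged approximation, not the exact LKP job: one tmpfs worker per CPU,
# 100s runtime, bogo-ops metrics comparable to stress-ng.tmpfs.ops and
# stress-ng.tmpfs.ops_per_sec reported below
stress-ng --tmpfs $(nproc) --timeout 100s --metrics-brief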



=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
memory/gcc-9/performance/1HDD/x86_64-rhel-8.3/100%/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp3/stress-ng/100s/0x400002c

commit:
f5df8635c5a3c912919c91be64aa198554b0f9ed
e6e88712e43b7942df451508aafc2f083266f56b

f5df8635c5a3c912 e6e88712e43b7942df451508aaf
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
423:5 51243% 2986:5 dmesg.timestamp:last
:5 40% 2:5 last_state.exit_fail
:5 40% 2:5 last_state.is_incomplete_run
:5 0% :5 last_state.running
:5 40% 2:5 last_state.soft_timeout
:5 40% 2:5 last_state.test.stress-ng.exit_code.143
3:5 40% 5:5 kmsg.EDAC_skx:Can't_find_size_for_NVDIMM_ADR=#/SMBIOS=
1:5 -20% :5 kmsg.ioatdma#:#:#:Errors
1:5 -20% :5 kmsg.ioatdma#:#:#:ioat_timer_event:Channel_halted(#)
69:5 836% 111:5 kmsg.timestamp:EDAC_skx:Can't_find_size_for_NVDIMM_ADR=#/SMBIOS=
23:5 -473% :5 kmsg.timestamp:ioatdma#:#:#:Errors
23:5 -472% :5 kmsg.timestamp:ioatdma#:#:#:ioat_timer_event:Channel_halted(#)
423:5 69754% 3911:5 kmsg.timestamp:last
5:5 -40% 3:5 stderr.Aborted
5:5 -40% 3:5 stderr.Events_disabled
5:5 -40% 3:5 stderr.Events_enabled
5:5 -40% 3:5 stderr.[perf_record:Woken_up#times_to_write_data]
5:5 -40% 3:5 stderr.has_stderr
1:5 0% 1:5 stderr.ls:cannot_access'%perf_data':No_such_file_or_directory
1:5 -20% :5 stderr.malloc():corrupted_top_size
4:5 -20% 3:5 stderr.malloc():invalid_size(unsorted)
0:5 0% 0:5 perf-profile.children.cycles-pp.error_entry
:5 3% 0:5 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
108.08 -1.0% 107.00 stress-ng.time.elapsed_time
108.08 -1.0% 107.00 stress-ng.time.elapsed_time.max
26521 ± 20% +76.4% 46780 ± 3% stress-ng.time.involuntary_context_switches
3248307 ± 20% +54.5% 5019072 ± 19% stress-ng.time.major_page_faults
675875 -0.1% 675470 stress-ng.time.maximum_resident_set_size
92279190 ± 12% -47.3% 48660512 stress-ng.time.minor_page_faults
4096 +0.0% 4096 stress-ng.time.page_size
3951 ± 8% +88.7% 7453 ± 2% stress-ng.time.percent_of_cpu_this_job_got
3370 ± 9% +117.9% 7343 stress-ng.time.system_time
900.24 ± 7% -29.7% 632.56 ± 2% stress-ng.time.user_time
1343 -16.1% 1127 ± 2% stress-ng.time.voluntary_context_switches
1198 ± 4% -69.7% 362.67 stress-ng.tmpfs.ops
11.62 ± 4% -69.7% 3.52 stress-ng.tmpfs.ops_per_sec
138.01 -1.0% 136.69 uptime.boot
2795 ± 2% -1.2% 2760 uptime.idle
27.54 ± 2% -0.2% 27.48 ± 2% boot-time.boot
20.59 -0.4% 20.50 boot-time.dhcp
2209 ± 2% +0.1% 2212 ± 2% boot-time.idle
0.95 ± 11% -9.6% 0.86 boot-time.smp_boot
6.15 ± 2% +1.6% 6.25 ± 9% iostat.cpu.idle
0.01 ± 59% -15.0% 0.01 ±100% iostat.cpu.iowait
74.40 +15.8% 86.14 iostat.cpu.system
19.44 ± 5% -60.9% 7.61 iostat.cpu.user
4.53 ± 3% +0.1 4.58 ± 12% mpstat.cpu.all.idle%
0.00 ± 80% -0.0 0.00 ±141% mpstat.cpu.all.iowait%
0.22 ± 3% +0.4 0.65 ± 4% mpstat.cpu.all.irq%
0.01 ± 10% +0.0 0.02 ± 15% mpstat.cpu.all.soft%
75.45 +11.6 87.00 mpstat.cpu.all.sys%
19.79 ± 5% -12.0 7.74 mpstat.cpu.all.usr%
1884655 ±137% -80.5% 367657 ± 16% cpuidle.C1.time
5320 ±146% -77.2% 1211 ± 8% cpuidle.C1.usage
3346442 ± 25% -38.8% 2047933 ± 18% cpuidle.C1E.time
8085 ± 26% -40.9% 4775 ± 15% cpuidle.C1E.usage
4.879e+08 ± 4% -2.7% 4.747e+08 ± 5% cpuidle.C6.time
184919 ± 13% -24.0% 140448 ± 18% cpuidle.C6.usage
782.80 ± 70% -70.6% 230.33 ± 38% cpuidle.POLL.time
207.60 ±120% -78.3% 45.00 ± 35% cpuidle.POLL.usage
0.00 -100.0% 0.00 numa-numastat.node0.interleave_hit
8845248 -7.2% 8207045 numa-numastat.node0.local_node
8872518 -7.2% 8235955 numa-numastat.node0.numa_hit
27274 ± 39% +6.0% 28909 ± 70% numa-numastat.node0.other_node
0.00 -100.0% 0.00 numa-numastat.node1.interleave_hit
8857958 +0.3% 8880677 numa-numastat.node1.local_node
8874087 +0.2% 8895138 numa-numastat.node1.numa_hit
16133 ± 66% -10.3% 14467 ±141% numa-numastat.node1.other_node
108.08 -1.0% 107.00 time.elapsed_time
108.08 -1.0% 107.00 time.elapsed_time.max
26521 ± 20% +76.4% 46780 ± 3% time.involuntary_context_switches
3248307 ± 20% +54.5% 5019072 ± 19% time.major_page_faults
675875 -0.1% 675470 time.maximum_resident_set_size
92279190 ± 12% -47.3% 48660512 time.minor_page_faults
4096 +0.0% 4096 time.page_size
3951 ± 8% +88.7% 7453 ± 2% time.percent_of_cpu_this_job_got
3370 ± 9% +117.9% 7343 time.system_time
900.24 ± 7% -29.7% 632.56 ± 2% time.user_time
1343 -16.1% 1127 ± 2% time.voluntary_context_switches
6.00 -5.6% 5.67 ± 8% vmstat.cpu.id
73.80 +16.1% 85.67 vmstat.cpu.sy
19.20 ± 6% -63.5% 7.00 vmstat.cpu.us
0.00 -100.0% 0.00 vmstat.cpu.wa
0.40 ±200% -100.0% 0.00 vmstat.io.bi
2.60 ±200% -100.0% 0.00 vmstat.io.bo
4.40 ± 18% -9.1% 4.00 vmstat.memory.buff
61205822 -0.0% 61180200 vmstat.memory.cache
68665018 -0.4% 68399944 vmstat.memory.free
0.00 -100.0% 0.00 vmstat.procs.b
90.00 +0.4% 90.33 vmstat.procs.r
1577 ± 6% -19.5% 1269 vmstat.system.cs
155950 +4.9% 163607 vmstat.system.in
31436258 +22.8% 38613269 meminfo.Active
31436086 +22.8% 38613201 meminfo.Active(anon)
170.60 ±107% -60.3% 67.67 ± 33% meminfo.Active(file)
128020 ± 4% +2.5% 131201 ± 3% meminfo.AnonHugePages
540756 ± 11% -10.6% 483387 ± 7% meminfo.AnonPages
4.40 ± 18% -9.1% 4.00 meminfo.Buffers
61444796 +0.0% 61468357 meminfo.Cached
194552 +0.0% 194552 meminfo.CmaFree
204800 +0.0% 204800 meminfo.CmaTotal
65845924 +0.0% 65845928 meminfo.CommitLimit
69965026 -0.1% 69896024 meminfo.Committed_AS
6.465e+08 -0.1% 6.459e+08 meminfo.DirectMap1G
17274148 ± 7% -11.0% 15370246 ± 6% meminfo.DirectMap2M
677172 ± 5% +6.2% 719273 ± 2% meminfo.DirectMap4k
64.80 ± 9% -2.8% 63.00 meminfo.Dirty
2048 +0.0% 2048 meminfo.Hugepagesize
7211066 ± 4% +132.7% 16783208 ± 6% meminfo.Inactive
7210669 ± 4% +132.8% 16782845 ± 6% meminfo.Inactive(anon)
396.00 ± 14% -8.4% 362.67 meminfo.Inactive(file)
224508 -0.6% 223245 meminfo.KReclaimable
21529 ± 5% -5.2% 20401 ± 3% meminfo.KernelStack
29107501 -47.5% 15281310 meminfo.Mapped
67689626 -0.5% 67372180 meminfo.MemAvailable
68195737 -0.5% 67878993 meminfo.MemFree
1.317e+08 +0.0% 1.317e+08 meminfo.MemTotal
63496115 +0.5% 63812865 meminfo.Memused
22381531 -75.0% 5600796 ± 6% meminfo.Mlocked
111504 ± 3% +6.3% 118584 ± 2% meminfo.PageTables
54026 +0.2% 54144 meminfo.Percpu
224508 -0.6% 223245 meminfo.SReclaimable
532214 +66.6% 886463 meminfo.SUnreclaim
60492516 +0.0% 60516229 meminfo.Shmem
756723 +46.6% 1109709 meminfo.Slab
23332949 -71.9% 6552476 ± 5% meminfo.Unevictable
3.436e+10 +0.0% 3.436e+10 meminfo.VmallocTotal
158453 ± 4% -5.2% 150267 meminfo.VmallocUsed
0.40 ±200% -100.0% 0.00 meminfo.Writeback
640913 +1.9% 653134 meminfo.max_used_kB
15828901 ± 3% +18.5% 18751296 ± 2% numa-meminfo.node0.Active
15828849 ± 3% +18.5% 18751235 ± 2% numa-meminfo.node0.Active(anon)
50.20 ± 43% +21.5% 61.00 ± 32% numa-meminfo.node0.Active(file)
89342 ± 38% -20.1% 71395 ± 10% numa-meminfo.node0.AnonHugePages
299668 ± 23% -26.8% 219240 ± 10% numa-meminfo.node0.AnonPages
24.60 ±109% -10.6% 22.00 ±135% numa-meminfo.node0.Dirty
30889346 -3.8% 29724751 numa-meminfo.node0.FilePages
3683538 ± 4% +116.6% 7980276 ± 4% numa-meminfo.node0.Inactive
3683361 ± 4% +116.7% 7980053 ± 4% numa-meminfo.node0.Inactive(anon)
176.60 ± 56% +25.9% 222.33 ± 44% numa-meminfo.node0.Inactive(file)
113695 ± 3% -6.2% 106650 ± 7% numa-meminfo.node0.KReclaimable
10287 ± 10% -3.7% 9908 ± 5% numa-meminfo.node0.KernelStack
14627069 -50.7% 7217504 ± 2% numa-meminfo.node0.Mapped
33718745 +3.2% 34799393 numa-meminfo.node0.MemFree
65676039 +0.0% 65676044 numa-meminfo.node0.MemTotal
31957292 -3.4% 30876649 numa-meminfo.node0.MemUsed
11196527 -75.7% 2725929 ± 7% numa-meminfo.node0.Mlocked
52778 ± 5% +11.3% 58763 ± 4% numa-meminfo.node0.PageTables
113695 ± 3% -6.2% 106650 ± 7% numa-meminfo.node0.SReclaimable
275746 ± 2% +63.0% 449434 ± 2% numa-meminfo.node0.SUnreclaim
30411486 -3.9% 29239402 numa-meminfo.node0.Shmem
389441 +42.8% 556084 ± 3% numa-meminfo.node0.Slab
11674021 -72.5% 3210988 ± 6% numa-meminfo.node0.Unevictable
0.00 -100.0% 0.00 numa-meminfo.node0.Writeback
15637726 ± 2% +25.7% 19663292 ± 3% numa-meminfo.node1.Active
15637606 ± 2% +25.7% 19663285 ± 3% numa-meminfo.node1.Active(anon)
119.40 ±165% -94.7% 6.33 ±141% numa-meminfo.node1.Active(file)
38463 ± 89% +56.1% 60028 ± 15% numa-meminfo.node1.AnonHugePages
241882 ± 37% +9.2% 264113 ± 16% numa-meminfo.node1.AnonPages
39.80 ± 74% +3.9% 41.33 ± 70% numa-meminfo.node1.Dirty
30632669 +2.7% 31455689 numa-meminfo.node1.FilePages
3559463 ± 6% +145.7% 8746082 ± 14% numa-meminfo.node1.Inactive
3559243 ± 6% +145.7% 8745941 ± 14% numa-meminfo.node1.Inactive(anon)
219.00 ± 67% -36.1% 140.00 ± 70% numa-meminfo.node1.Inactive(file)
111003 ± 3% +4.7% 116206 ± 6% numa-meminfo.node1.KReclaimable
11250 ± 10% -6.8% 10489 ± 7% numa-meminfo.node1.KernelStack
14533234 -45.2% 7961730 numa-meminfo.node1.Mapped
34398511 -3.0% 33370337 numa-meminfo.node1.MemFree
66015814 +0.0% 66015816 numa-meminfo.node1.MemTotal
31617302 +3.3% 32645477 numa-meminfo.node1.MemUsed
11200859 ± 3% -74.6% 2842334 ± 5% numa-meminfo.node1.Mlocked
58771 ± 7% +2.0% 59920 ± 7% numa-meminfo.node1.PageTables
111003 ± 3% +4.7% 116206 ± 6% numa-meminfo.node1.SReclaimable
256870 ± 3% +69.3% 434794 ± 2% numa-meminfo.node1.SUnreclaim
30158245 +2.8% 30988908 numa-meminfo.node1.Shmem
367874 ± 2% +49.8% 551001 ± 3% numa-meminfo.node1.Slab
11674780 ± 3% -71.7% 3308958 ± 5% numa-meminfo.node1.Unevictable
0.00 -100.0% 0.00 numa-meminfo.node1.Writeback
0.00 -100.0% 0.00 proc-vmstat.compact_isolated
7857318 +22.1% 9591743 proc-vmstat.nr_active_anon
42.00 ±109% -60.3% 16.67 ± 32% proc-vmstat.nr_active_file
135135 ± 11% -10.7% 120681 ± 7% proc-vmstat.nr_anon_pages
62.00 ± 4% +2.7% 63.67 ± 2% proc-vmstat.nr_anon_transparent_hugepages
130.40 ±108% -54.0% 60.00 proc-vmstat.nr_dirtied
15.80 ± 10% -5.1% 15.00 proc-vmstat.nr_dirty
1687530 -0.0% 1687077 proc-vmstat.nr_dirty_background_threshold
3379188 -0.0% 3378279 proc-vmstat.nr_dirty_threshold
15355708 -0.4% 15287398 proc-vmstat.nr_file_pages
48638 +0.0% 48638 proc-vmstat.nr_free_cma
17054619 -0.0% 17050200 proc-vmstat.nr_free_pages
1801446 ± 4% +132.2% 4183033 ± 6% proc-vmstat.nr_inactive_anon
98.60 ± 14% -8.4% 90.33 proc-vmstat.nr_inactive_file
1.60 ± 93% -16.7% 1.33 ± 93% proc-vmstat.nr_isolated_anon
21527 ± 5% -5.3% 20390 ± 3% proc-vmstat.nr_kernel_stack
7271413 -47.7% 3802076 proc-vmstat.nr_mapped
5592797 -75.1% 1394593 ± 6% proc-vmstat.nr_mlock
27791 ± 3% +6.4% 29565 proc-vmstat.nr_page_table_pages
15117636 -0.5% 15049364 proc-vmstat.nr_shmem
56092 -0.7% 55693 proc-vmstat.nr_slab_reclaimable
132917 +66.4% 221176 proc-vmstat.nr_slab_unreclaimable
5830654 -72.0% 1632513 ± 5% proc-vmstat.nr_unevictable
0.00 -100.0% 0.00 proc-vmstat.nr_writeback
131.20 ±111% -55.8% 58.00 proc-vmstat.nr_written
7857318 +22.1% 9591743 proc-vmstat.nr_zone_active_anon
42.00 ±109% -60.3% 16.67 ± 32% proc-vmstat.nr_zone_active_file
1801446 ± 4% +132.2% 4183033 ± 6% proc-vmstat.nr_zone_inactive_anon
98.60 ± 14% -8.4% 90.33 proc-vmstat.nr_zone_inactive_file
5830654 -72.0% 1632514 ± 5% proc-vmstat.nr_zone_unevictable
15.80 ± 10% -5.1% 15.00 proc-vmstat.nr_zone_write_pending
39731 ± 10% -11.9% 34987 ± 27% proc-vmstat.numa_hint_faults
19700 ± 21% -16.7% 16403 ± 25% proc-vmstat.numa_hint_faults_local
17783159 -3.4% 17169908 proc-vmstat.numa_hit
49.00 ± 67% -32.0% 33.33 ± 35% proc-vmstat.numa_huge_pte_updates
0.00 -100.0% 0.00 proc-vmstat.numa_interleave
17739740 -3.5% 17126412 proc-vmstat.numa_local
43418 +0.2% 43496 proc-vmstat.numa_other
12833 ± 52% +5.3% 13507 ± 57% proc-vmstat.numa_pages_migrated
84737 ± 25% -17.3% 70054 ± 26% proc-vmstat.numa_pte_updates
76644797 ± 2% -61.7% 29321914 ± 3% proc-vmstat.pgactivate
0.00 -100.0% 0.00 proc-vmstat.pgalloc_dma
0.00 -100.0% 0.00 proc-vmstat.pgalloc_dma32
18033237 -3.1% 17472724 proc-vmstat.pgalloc_normal
2.22e+08 ± 5% -67.1% 72960039 proc-vmstat.pgfault
15578043 ± 12% +11.1% 17305293 proc-vmstat.pgfree
0.00 -100.0% 0.00 proc-vmstat.pgmigrate_fail
12833 ± 52% +5.3% 13507 ± 57% proc-vmstat.pgmigrate_success
54.40 ±200% -100.0% 0.00 proc-vmstat.pgpgin
292.80 ±200% -100.0% 0.00 proc-vmstat.pgpgout
25419 -0.1% 25390 proc-vmstat.pgreuse
75.20 ± 19% +10.8% 83.33 ± 2% proc-vmstat.thp_collapse_alloc
0.00 -100.0% 0.00 proc-vmstat.thp_deferred_split_page
12.20 ± 3% +1.1% 12.33 ± 3% proc-vmstat.thp_fault_alloc
0.40 ±122% -100.0% 0.00 proc-vmstat.thp_split_pmd
0.40 ±122% -100.0% 0.00 proc-vmstat.thp_zero_page_alloc
0.00 -100.0% 0.00 proc-vmstat.unevictable_pgs_cleared
68994933 ± 2% -70.0% 20699812 ± 5% proc-vmstat.unevictable_pgs_culled
68995131 ± 2% -70.0% 20699799 ± 5% proc-vmstat.unevictable_pgs_mlocked
68930376 ± 2% -70.0% 20699799 ± 5% proc-vmstat.unevictable_pgs_munlocked
68930350 ± 2% -70.0% 20699796 ± 5% proc-vmstat.unevictable_pgs_rescued
0.00 -100.0% 0.00 proc-vmstat.unevictable_pgs_stranded
3953745 ± 3% +18.6% 4688080 ± 2% numa-vmstat.node0.nr_active_anon
12.00 ± 45% +25.0% 15.00 ± 32% numa-vmstat.node0.nr_active_file
74811 ± 23% -26.7% 54818 ± 10% numa-vmstat.node0.nr_anon_pages
43.40 ± 39% -20.1% 34.67 ± 10% numa-vmstat.node0.nr_anon_transparent_hugepages
23.80 ± 89% -18.8% 19.33 ±130% numa-vmstat.node0.nr_dirtied
5.80 ±115% -8.0% 5.33 ±141% numa-vmstat.node0.nr_dirty
7712748 -3.6% 7433505 numa-vmstat.node0.nr_file_pages
8439381 +3.1% 8697737 numa-vmstat.node0.nr_free_pages
918878 ± 4% +117.3% 1997170 ± 5% numa-vmstat.node0.nr_inactive_anon
43.80 ± 56% +25.6% 55.00 ± 45% numa-vmstat.node0.nr_inactive_file
0.60 ±133% -100.0% 0.00 numa-vmstat.node0.nr_isolated_anon
10288 ± 10% -3.6% 9914 ± 5% numa-vmstat.node0.nr_kernel_stack
3649858 -50.6% 1804283 ± 2% numa-vmstat.node0.nr_mapped
2794854 ± 2% -75.6% 681416 ± 8% numa-vmstat.node0.nr_mlock
13160 ± 5% +11.4% 14659 ± 3% numa-vmstat.node0.nr_page_table_pages
7593283 -3.7% 7312167 numa-vmstat.node0.nr_shmem
28406 ± 3% -6.2% 26640 ± 7% numa-vmstat.node0.nr_slab_reclaimable
68914 ± 2% +62.7% 112147 ± 2% numa-vmstat.node0.nr_slab_unreclaimable
2914224 -72.5% 802680 ± 6% numa-vmstat.node0.nr_unevictable
0.00 -100.0% 0.00 numa-vmstat.node0.nr_writeback
17.60 ± 84% -22.3% 13.67 ±131% numa-vmstat.node0.nr_written
3953751 ± 3% +18.6% 4688076 ± 2% numa-vmstat.node0.nr_zone_active_anon
12.00 ± 45% +25.0% 15.00 ± 32% numa-vmstat.node0.nr_zone_active_file
918874 ± 4% +117.3% 1997165 ± 5% numa-vmstat.node0.nr_zone_inactive_anon
43.80 ± 56% +25.6% 55.00 ± 45% numa-vmstat.node0.nr_zone_inactive_file
2914222 -72.5% 802680 ± 6% numa-vmstat.node0.nr_zone_unevictable
5.80 ±115% -8.0% 5.33 ±141% numa-vmstat.node0.nr_zone_write_pending
8839477 -5.7% 8338471 numa-vmstat.node0.numa_hit
133233 -0.2% 132979 numa-vmstat.node0.numa_interleave
8808766 -6.6% 8228382 numa-vmstat.node0.numa_local
30715 ± 34% +258.4% 110094 ± 43% numa-vmstat.node0.numa_other
3905459 ± 2% +25.7% 4908210 ± 4% numa-vmstat.node1.nr_active_anon
29.40 ±167% -95.5% 1.33 ±141% numa-vmstat.node1.nr_active_file
60260 ± 37% +9.5% 65963 ± 16% numa-vmstat.node1.nr_anon_pages
18.40 ± 90% +55.8% 28.67 ± 15% numa-vmstat.node1.nr_anon_transparent_hugepages
84.80 ±134% -57.9% 35.67 ± 70% numa-vmstat.node1.nr_dirtied
9.80 ± 74% +5.4% 10.33 ± 70% numa-vmstat.node1.nr_dirty
7647606 +2.7% 7857218 numa-vmstat.node1.nr_file_pages
48638 +0.0% 48638 numa-vmstat.node1.nr_free_cma
8610467 -3.0% 8349486 numa-vmstat.node1.nr_free_pages
885382 ± 6% +146.9% 2185669 ± 13% numa-vmstat.node1.nr_inactive_anon
54.60 ± 67% -36.5% 34.67 ± 70% numa-vmstat.node1.nr_inactive_file
0.20 ±200% +566.7% 1.33 ± 93% numa-vmstat.node1.nr_isolated_anon
11243 ± 10% -6.7% 10492 ± 7% numa-vmstat.node1.nr_kernel_stack
3625916 ± 2% -45.1% 1992150 numa-vmstat.node1.nr_mapped
2797793 ± 3% -74.5% 712259 ± 6% numa-vmstat.node1.nr_mlock
14674 ± 7% +2.1% 14987 ± 7% numa-vmstat.node1.nr_page_table_pages
7529001 +2.8% 7740522 numa-vmstat.node1.nr_shmem
27707 ± 3% +4.7% 29012 ± 7% numa-vmstat.node1.nr_slab_reclaimable
64142 ± 3% +69.3% 108600 ± 3% numa-vmstat.node1.nr_slab_unreclaimable
2916280 ± 3% -71.6% 828916 ± 5% numa-vmstat.node1.nr_unevictable
0.00 -100.0% 0.00 numa-vmstat.node1.nr_writeback
74.60 ±146% -66.0% 25.33 ± 70% numa-vmstat.node1.nr_written
3905452 ± 2% +25.7% 4908206 ± 4% numa-vmstat.node1.nr_zone_active_anon
29.40 ±167% -95.5% 1.33 ±141% numa-vmstat.node1.nr_zone_active_file
885383 ± 6% +146.9% 2185666 ± 13% numa-vmstat.node1.nr_zone_inactive_anon
54.60 ± 67% -36.5% 34.67 ± 70% numa-vmstat.node1.nr_zone_inactive_file
2916278 ± 3% -71.6% 828916 ± 5% numa-vmstat.node1.nr_zone_unevictable
9.80 ± 74% +2.0% 10.00 ± 71% numa-vmstat.node1.nr_zone_write_pending
8722862 +2.1% 8902016 numa-vmstat.node1.numa_hit
132992 +0.2% 133258 numa-vmstat.node1.numa_interleave
8570505 +3.0% 8829040 numa-vmstat.node1.numa_local
152360 ± 7% -52.1% 72980 ± 65% numa-vmstat.node1.numa_other
30.21 ±199% +65423.6% 19792 ±141% sched_debug.cfs_rq:/.MIN_vruntime.avg
2839 ±199% +33533.7% 955010 ±140% sched_debug.cfs_rq:/.MIN_vruntime.max
0.00 +0.0% 0.00 sched_debug.cfs_rq:/.MIN_vruntime.min
291.30 ±200% +46532.9% 135843 ±140% sched_debug.cfs_rq:/.MIN_vruntime.stddev
28461 -0.5% 28325 sched_debug.cfs_rq:/.exec_clock.avg
29902 -0.8% 29650 sched_debug.cfs_rq:/.exec_clock.max
27325 -5.8% 25738 sched_debug.cfs_rq:/.exec_clock.min
248.42 ± 21% +37.2% 340.72 ± 4% sched_debug.cfs_rq:/.exec_clock.stddev
12605 ± 23% +53.9% 19396 ± 30% sched_debug.cfs_rq:/.load.avg
571429 ± 48% +44.6% 826542 ± 39% sched_debug.cfs_rq:/.load.max
5363 -1.6% 5277 sched_debug.cfs_rq:/.load.min
58046 ± 48% +62.5% 94341 ± 42% sched_debug.cfs_rq:/.load.stddev
26.42 ± 25% -17.8% 21.71 ± 24% sched_debug.cfs_rq:/.load_avg.avg
640.10 ± 48% -24.0% 486.50 ± 21% sched_debug.cfs_rq:/.load_avg.max
4.90 ± 4% +2.0% 5.00 sched_debug.cfs_rq:/.load_avg.min
87.17 ± 36% -26.0% 64.51 ± 34% sched_debug.cfs_rq:/.load_avg.stddev
30.21 ±199% +65423.6% 19792 ±141% sched_debug.cfs_rq:/.max_vruntime.avg
2839 ±199% +33533.7% 955010 ±140% sched_debug.cfs_rq:/.max_vruntime.max
0.00 +0.0% 0.00 sched_debug.cfs_rq:/.max_vruntime.min
291.30 ±200% +46532.9% 135843 ±140% sched_debug.cfs_rq:/.max_vruntime.stddev
2903124 -0.8% 2881055 sched_debug.cfs_rq:/.min_vruntime.avg
2940930 -0.6% 2924526 sched_debug.cfs_rq:/.min_vruntime.max
2759231 ± 2% -5.6% 2605100 sched_debug.cfs_rq:/.min_vruntime.min
32148 ± 19% +50.4% 48341 ± 2% sched_debug.cfs_rq:/.min_vruntime.stddev
0.55 +4.5% 0.58 ± 4% sched_debug.cfs_rq:/.nr_running.avg
1.10 ± 18% +21.2% 1.33 ± 17% sched_debug.cfs_rq:/.nr_running.max
0.50 +0.0% 0.50 sched_debug.cfs_rq:/.nr_running.min
0.15 ± 7% +30.9% 0.20 ± 26% sched_debug.cfs_rq:/.nr_running.stddev
2.34 ± 17% +1142.8% 29.11 ± 11% sched_debug.cfs_rq:/.nr_spread_over.avg
56.20 ± 52% +743.4% 474.00 ± 31% sched_debug.cfs_rq:/.nr_spread_over.max
0.00 +3.2e+102% 3.17 ± 19% sched_debug.cfs_rq:/.nr_spread_over.min
7.55 ± 32% +809.9% 68.66 ± 23% sched_debug.cfs_rq:/.nr_spread_over.stddev
4.09 ±143% -56.0% 1.80 ±141% sched_debug.cfs_rq:/.removed.load_avg.avg
202.30 ±122% -15.6% 170.67 ±141% sched_debug.cfs_rq:/.removed.load_avg.max
27.21 ±128% -36.0% 17.42 ±141% sched_debug.cfs_rq:/.removed.load_avg.stddev
1.37 ±126% -31.7% 0.94 ±141% sched_debug.cfs_rq:/.removed.runnable_avg.avg
84.00 ±128% +6.0% 89.00 ±141% sched_debug.cfs_rq:/.removed.runnable_avg.max
10.08 ±122% -9.9% 9.08 ±141% sched_debug.cfs_rq:/.removed.runnable_avg.stddev
1.37 ±126% -31.7% 0.94 ±141% sched_debug.cfs_rq:/.removed.util_avg.avg
84.00 ±128% +6.0% 89.00 ±141% sched_debug.cfs_rq:/.removed.util_avg.max
10.08 ±122% -9.9% 9.08 ±141% sched_debug.cfs_rq:/.removed.util_avg.stddev
645.49 ± 3% -0.2% 644.45 ± 4% sched_debug.cfs_rq:/.runnable_avg.avg
1167 ± 5% +11.3% 1299 ± 12% sched_debug.cfs_rq:/.runnable_avg.max
466.90 ± 17% +9.3% 510.50 sched_debug.cfs_rq:/.runnable_avg.min
158.78 ± 7% +6.9% 169.72 ± 14% sched_debug.cfs_rq:/.runnable_avg.stddev
103207 ± 23% +10.6% 114183 ± 87% sched_debug.cfs_rq:/.spread0.avg
141212 ± 16% +10.8% 156503 ± 62% sched_debug.cfs_rq:/.spread0.max
-40257 +304.7% -162912 sched_debug.cfs_rq:/.spread0.min
32026 ± 19% +50.6% 48217 ± 2% sched_debug.cfs_rq:/.spread0.stddev
642.25 ± 3% -0.4% 639.71 ± 4% sched_debug.cfs_rq:/.util_avg.avg
1024 +15.0% 1177 ± 10% sched_debug.cfs_rq:/.util_avg.max
421.20 ± 30% +15.1% 484.67 ± 3% sched_debug.cfs_rq:/.util_avg.min
150.86 ± 4% +4.4% 157.44 ± 13% sched_debug.cfs_rq:/.util_avg.stddev
347.03 ± 9% -66.6% 116.03 ± 4% sched_debug.cfs_rq:/.util_est_enqueued.avg
1039 ± 2% -1.2% 1026 sched_debug.cfs_rq:/.util_est_enqueued.max
0.50 +0.0% 0.50 sched_debug.cfs_rq:/.util_est_enqueued.min
319.62 ± 4% -17.5% 263.67 sched_debug.cfs_rq:/.util_est_enqueued.stddev
1809157 ± 27% -11.0% 1610896 ± 33% sched_debug.cpu.avg_idle.avg
7946220 ± 54% -43.9% 4457187 ± 46% sched_debug.cpu.avg_idle.max
329579 ± 38% -25.7% 244834 ± 47% sched_debug.cpu.avg_idle.min
1115761 ± 44% -32.3% 754920 ± 54% sched_debug.cpu.avg_idle.stddev
59017 -0.5% 58751 sched_debug.cpu.clock.avg
59030 -0.4% 58768 sched_debug.cpu.clock.max
59007 -0.5% 58734 sched_debug.cpu.clock.min
7.41 ± 29% +55.1% 11.49 ± 13% sched_debug.cpu.clock.stddev
58776 -0.7% 58377 sched_debug.cpu.clock_task.avg
58933 -0.7% 58548 sched_debug.cpu.clock_task.max
53421 -0.9% 52926 sched_debug.cpu.clock_task.min
637.28 ± 3% +2.8% 655.18 ± 6% sched_debug.cpu.clock_task.stddev
1537 +1.6% 1562 ± 2% sched_debug.cpu.curr->pid.avg
3292 ± 8% -3.1% 3189 ± 10% sched_debug.cpu.curr->pid.max
1324 ± 5% +0.2% 1326 ± 5% sched_debug.cpu.curr->pid.min
395.47 ± 7% +8.2% 427.88 ± 10% sched_debug.cpu.curr->pid.stddev
976324 ± 31% -16.9% 811504 ± 33% sched_debug.cpu.max_idle_balance_cost.avg
3211116 ± 47% -12.6% 2806818 ± 73% sched_debug.cpu.max_idle_balance_cost.max
500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.min
453858 ± 49% -31.1% 312847 ± 70% sched_debug.cpu.max_idle_balance_cost.stddev
4294 -0.0% 4294 sched_debug.cpu.next_balance.avg
4294 -0.0% 4294 sched_debug.cpu.next_balance.max
4294 -0.0% 4294 sched_debug.cpu.next_balance.min
0.00 ± 16% +33.0% 0.00 ± 39% sched_debug.cpu.next_balance.stddev
0.56 +4.4% 0.59 ± 5% sched_debug.cpu.nr_running.avg
1.60 ± 12% +4.2% 1.67 ± 14% sched_debug.cpu.nr_running.max
0.50 +0.0% 0.50 sched_debug.cpu.nr_running.min
0.22 ± 7% +15.6% 0.25 ± 19% sched_debug.cpu.nr_running.stddev
2009 ± 4% +3.1% 2072 ± 2% sched_debug.cpu.nr_switches.avg
11032 ± 14% -25.7% 8200 ± 12% sched_debug.cpu.nr_switches.max
720.60 ± 9% +1.4% 730.50 ± 13% sched_debug.cpu.nr_switches.min
1541 ± 11% -9.1% 1400 ± 3% sched_debug.cpu.nr_switches.stddev
0.02 ± 35% -11.8% 0.02 ± 27% sched_debug.cpu.nr_uninterruptible.avg
31.30 ± 23% +1.7% 31.83 ± 8% sched_debug.cpu.nr_uninterruptible.max
-14.00 +50.0% -21.00 sched_debug.cpu.nr_uninterruptible.min
7.09 ± 9% +10.9% 7.87 ± 9% sched_debug.cpu.nr_uninterruptible.stddev
641.59 ± 10% -15.8% 540.26 ± 5% sched_debug.cpu.sched_count.avg
4502 ± 28% -0.5% 4479 sched_debug.cpu.sched_count.max
200.40 ± 35% -35.5% 129.33 ± 4% sched_debug.cpu.sched_count.min
617.03 ± 20% +13.6% 700.98 ± 4% sched_debug.cpu.sched_count.stddev
67.06 ± 28% -24.9% 50.36 ± 8% sched_debug.cpu.sched_goidle.avg
1491 ± 24% -12.9% 1299 ± 19% sched_debug.cpu.sched_goidle.max
4.30 ± 85% -38.0% 2.67 ± 38% sched_debug.cpu.sched_goidle.min
180.00 ± 16% -8.8% 164.19 ± 15% sched_debug.cpu.sched_goidle.stddev
285.21 ± 14% -23.4% 218.37 ± 5% sched_debug.cpu.ttwu_count.avg
2022 ± 39% +9.9% 2221 ± 20% sched_debug.cpu.ttwu_count.max
57.00 ± 8% -24.9% 42.83 ± 4% sched_debug.cpu.ttwu_count.min
297.42 ± 30% +10.5% 328.53 ± 9% sched_debug.cpu.ttwu_count.stddev
163.51 ± 6% -5.0% 155.34 ± 2% sched_debug.cpu.ttwu_local.avg
903.60 ± 68% +65.7% 1496 ± 8% sched_debug.cpu.ttwu_local.max
46.60 ± 6% -20.2% 37.17 sched_debug.cpu.ttwu_local.min
127.35 ± 45% +91.4% 243.79 ± 7% sched_debug.cpu.ttwu_local.stddev
59006 -0.5% 58734 sched_debug.cpu_clk
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.avg
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.max
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.min
4.295e+09 -0.0% 4.295e+09 sched_debug.jiffies
58510 -0.5% 58239 sched_debug.ktime
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.avg
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.max
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.min
59366 -0.6% 59028 sched_debug.sched_clk
1.00 +0.0% 1.00 sched_debug.sched_clock_stable()
4139835 +0.0% 4139835 sched_debug.sysctl_sched.sysctl_sched_features
24.00 +0.0% 24.00 sched_debug.sysctl_sched.sysctl_sched_latency
3.00 +0.0% 3.00 sched_debug.sysctl_sched.sysctl_sched_min_granularity
1.00 +0.0% 1.00 sched_debug.sysctl_sched.sysctl_sched_tunable_scaling
4.00 +0.0% 4.00 sched_debug.sysctl_sched.sysctl_sched_wakeup_granularity
0.00 -100.0% 0.00 latency_stats.avg.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.avg.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 -100.0% 0.00 latency_stats.avg.stop_two_cpus.migrate_swap.task_numa_migrate.task_numa_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 -100.0% 0.00 latency_stats.avg.cgroup_kn_lock_live.cgroup_procs_write.cgroup_file_write.kernfs_fop_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.avg.d_alloc_parallel.__lookup_slow.walk_component.link_path_walk.path_lookupat.filename_lookup.vfs_statx.__do_sys_newstat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.avg.page_lock_anon_vma_read.rmap_walk_anon.try_to_unmap.migrate_pages.migrate_misplaced_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 -100.0% 0.00 latency_stats.avg.rwsem_down_write_slowpath.anon_vma_fork.dup_mmap.dup_mm.copy_process._do_fork.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.avg.rwsem_down_write_slowpath.anon_vma_clone.anon_vma_fork.dup_mmap.dup_mm.copy_process._do_fork.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.avg.rwsem_down_write_slowpath.dup_mmap.dup_mm.copy_process._do_fork.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.avg.rwsem_down_write_slowpath.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.avg.rwsem_down_write_slowpath.unlink_file_vma.free_pgtables.exit_mmap.mmput.begin_new_exec.load_elf_binary.exec_binprm.bprm_execve.do_execveat_common.__x64_sys_execve.do_syscall_64
0.00 -100.0% 0.00 latency_stats.avg.pipe_release.__fput.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.avg.percpu_rwsem_wait.__percpu_down_read.cgroup_can_fork.copy_process._do_fork.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.avg.wb_wait_for_completion.sync_inodes_sb.iterate_supers.ksys_sync.__do_sys_sync.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.avg.wait_on_page_bit.__filemap_fdatawait_range.filemap_fdatawait_keep_errors.sync_inodes_sb.iterate_supers.ksys_sync.__do_sys_sync.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.avg.sync_inodes_sb.iterate_supers.ksys_sync.__do_sys_sync.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.avg.put_and_wait_on_page_locked.do_swap_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 -100.0% 0.00 latency_stats.avg.stop_one_cpu.__set_cpus_allowed_ptr.sched_setaffinity.__x64_sys_sched_setaffinity.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.avg.stop_one_cpu.sched_exec.bprm_execve.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_openat.do_filp_open
0.00 -100.0% 0.00 latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64
0.00 -100.0% 0.00 latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_get_acl.get_acl.posix_acl_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_openat2
0.00 -100.0% 0.00 latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_do_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_openat2.do_sys_open
0.00 -100.0% 0.00 latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_setattr.nfs_setattr.notify_change.vfs_utimes.do_utimes.__x64_sys_utimensat.do_syscall_64
0.00 -100.0% 0.00 latency_stats.avg.wait_on_page_bit.__filemap_fdatawait_range.filemap_write_and_wait_range.nfs_wb_all.nfs_file_flush.filp_close.do_dup2.__x64_sys_dup2.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_lookup_verify_inode.nfs_do_lookup_revalidate.__nfs_lookup_revalidate.path_openat.do_filp_open
0.00 -100.0% 0.00 latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.may_open.path_openat.do_filp_open
0.00 -100.0% 0.00 latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_lookup_verify_inode.nfs_do_lookup_revalidate.__nfs_lookup_revalidate.lookup_fast.path_openat
0.00 -100.0% 0.00 latency_stats.avg.cgroup_kn_lock_live.__cgroup1_procs_write.cgroup_file_write.kernfs_fop_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.avg.stop_two_cpus.migrate_swap.task_numa_migrate.task_numa_fault.do_huge_pmd_numa_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 -100.0% 0.00 latency_stats.avg.rwsem_down_write_slowpath.__vma_adjust.__split_vma.__do_munmap.__vm_munmap.elf_map.load_elf_interp.load_elf_binary.exec_binprm.bprm_execve.do_execveat_common.__x64_sys_execve
0.00 -100.0% 0.00 latency_stats.avg.rwsem_down_write_slowpath.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.avg.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.copy_user_generic_unrolled._copy_from_user.__x64_sys_rt_sigprocmask.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.avg.rwsem_down_write_slowpath.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
79.40 ±200% -100.0% 0.00 latency_stats.avg.kernfs_iop_permission.inode_permission.link_path_walk.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.60 ± 97% -100.0% 0.00 latency_stats.avg.d_alloc_parallel.__lookup_slow.walk_component.link_path_walk.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
4843 ±122% -100.0% 0.00 latency_stats.avg.rwsem_down_write_slowpath.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
16.60 ±200% -100.0% 0.00 latency_stats.avg.rwsem_down_write_slowpath.do_madvise.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.80 ±123% -100.0% 0.00 latency_stats.avg.d_alloc_parallel.__lookup_slow.walk_component.path_lookupat.filename_lookup.vfs_statx.__do_sys_newstat.do_syscall_64.entry_SYSCALL_64_after_hwframe
154.80 ±120% -100.0% 0.00 latency_stats.avg.resolve_symbol.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.80 ±200% -100.0% 0.00 latency_stats.avg.d_alloc_parallel.__lookup_slow.walk_component.path_lookupat.filename_lookup.do_faccessat.do_syscall_64.entry_SYSCALL_64_after_hwframe
321.00 ±122% -100.0% 0.00 latency_stats.avg.poll_schedule_timeout.do_sys_poll.__x64_sys_ppoll.do_syscall_64.entry_SYSCALL_64_after_hwframe
25.00 ±200% -100.0% 0.00 latency_stats.avg.rwsem_down_write_slowpath.__vma_adjust.__split_vma.mprotect_fixup.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
54.80 ±200% -100.0% 0.00 latency_stats.avg.rwsem_down_write_slowpath.__vma_adjust.__split_vma.__do_munmap.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
171.20 ±200% -100.0% 0.00 latency_stats.avg.rmap_walk_anon.remove_migration_ptes.migrate_pages.migrate_misplaced_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
4831 ±121% -97.6% 116.00 ±134% latency_stats.avg.do_exit.do_group_exit.get_signal.arch_do_signal.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
2179 ±185% -95.2% 103.67 ±141% latency_stats.avg.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
1163 ±100% -87.5% 145.33 ± 2% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.__lookup_slow.walk_component.path_lookupat.filename_lookup.vfs_statx
1034 ±187% -81.6% 190.00 ±134% latency_stats.avg.finished_loading.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
844.20 ± 90% -68.5% 266.33 ± 44% latency_stats.avg.poll_schedule_timeout.do_select.core_sys_select.kern_select.__x64_sys_select.do_syscall_64.entry_SYSCALL_64_after_hwframe
140.40 ±128% -61.8% 53.67 ±141% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat.filename_lookup
175267 ± 81% -52.5% 83199 ±141% latency_stats.avg.blk_execute_rq.__scsi_execute.ioctl_internal_command.scsi_set_medium_removal.cdrom_release.[cdrom].sr_block_release.[sr_mod].__blkdev_put.blkdev_close.__fput.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode
205883 ± 55% -35.0% 133812 ± 88% latency_stats.avg.max
496.40 ± 24% -34.5% 325.00 ± 6% latency_stats.avg.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64.entry_SYSCALL_64_after_hwframe
82676 ± 69% -29.4% 58389 ± 76% latency_stats.avg.mm_access.proc_mem_open.proc_maps_open.do_dentry_open.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
2373 ± 5% -13.2% 2059 ± 4% latency_stats.avg.devkmsg_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
2251 ± 11% -9.0% 2048 ± 6% latency_stats.avg.do_syslog.kmsg_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
212.20 ± 14% -6.7% 198.00 ± 15% latency_stats.avg.wait_woken.__inet_stream_connect.inet_stream_connect.__sys_connect.__x64_sys_connect.do_syscall_64.entry_SYSCALL_64_after_hwframe
540.60 ± 14% -2.3% 528.00 ± 2% latency_stats.avg.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe
59.60 ± 12% +14.1% 68.00 ± 10% latency_stats.avg.pipe_read.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
392.20 ± 47% +17.9% 462.33 ± 95% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_access_get_cached.nfs_do_access.nfs_permission.inode_permission.link_path_walk
551.00 ± 82% +44.2% 794.67 ± 26% latency_stats.avg.ep_poll.do_epoll_wait.__x64_sys_epoll_wait.do_syscall_64.entry_SYSCALL_64_after_hwframe
721.80 ±147% +77.3% 1280 ±141% latency_stats.avg.blk_execute_rq.sg_io.scsi_cmd_ioctl.cdrom_ioctl.[cdrom].sr_block_ioctl.[sr_mod].blkdev_ioctl.block_ioctl.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe
2269 ±200% +81.3% 4114 ±141% latency_stats.avg.msleep.cpuinfo_open.do_dentry_open.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
743.80 ± 75% +84.5% 1372 ± 14% latency_stats.avg.poll_schedule_timeout.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
40213 ±198% +89.7% 76285 ±141% latency_stats.avg.do_init_module.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
207.20 ±149% +112.2% 439.67 ±141% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup_revalidate_dentry.nfs_do_lookup_revalidate.__nfs_lookup_revalidate.lookup_fast.walk_component.link_path_walk
975.40 ± 50% +122.1% 2166 ± 67% latency_stats.avg.wait_for_partner.fifo_open.do_dentry_open.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
843.00 ± 70% +151.1% 2116 ± 47% latency_stats.avg.pipe_write.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
78.20 ± 27% +397.4% 389.00 ±141% latency_stats.avg.do_coredump.get_signal.arch_do_signal.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
16588 ±187% +426.3% 87301 ±139% latency_stats.avg.m_start.seq_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
24.00 ±108% +812.5% 219.00 ± 78% latency_stats.avg.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.20 ±200% +1.2e+05% 240.33 ±140% latency_stats.avg.d_alloc_parallel.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +3.3e+101% 0.33 ±141% latency_stats.avg.d_alloc_parallel.__lookup_slow.walk_component.path_lookupat.filename_lookup.user_statfs.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +4.8e+103% 48.00 ± 72% latency_stats.avg.devtmpfs_submit_req.devtmpfs_create_node.device_add.devm_create_dev_dax.__dax_pmem_probe.[dax_pmem_core].dax_pmem_compat_probe.[dax_pmem_compat].nvdimm_bus_probe.[libnvdimm].really_probe.driver_probe_device.device_driver_attach.__driver_attach.bus_for_each_dev
0.00 +1.8e+105% 1781 ±141% latency_stats.avg.blk_execute_rq.__scsi_execute.sr_do_ioctl.[sr_mod].sr_packet.[sr_mod].cdrom_get_media_event.[cdrom].sr_drive_status.[sr_mod].cdrom_ioctl.[cdrom].sr_block_ioctl.[sr_mod].blkdev_ioctl.block_ioctl.__x64_sys_ioctl.do_syscall_64
0.00 +1.9e+105% 1940 ±141% latency_stats.avg.blk_execute_rq.__scsi_execute.sr_check_events.[sr_mod].cdrom_check_events.[cdrom].sr_block_check_events.[sr_mod].disk_check_events.bdev_check_media_change.sr_block_open.[sr_mod].__blkdev_get.blkdev_get.do_dentry_open.path_openat
0.00 +9.7e+106% 96869 ±141% latency_stats.avg.blk_execute_rq.__scsi_execute.scsi_test_unit_ready.sr_check_events.[sr_mod].cdrom_check_events.[cdrom].sr_block_check_events.[sr_mod].disk_check_events.bdev_check_media_change.sr_block_open.[sr_mod].__blkdev_get.blkdev_get.do_dentry_open
0.00 -100.0% 0.00 latency_stats.hits.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.hits.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 -100.0% 0.00 latency_stats.hits.stop_two_cpus.migrate_swap.task_numa_migrate.task_numa_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 -100.0% 0.00 latency_stats.hits.cgroup_kn_lock_live.cgroup_procs_write.cgroup_file_write.kernfs_fop_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.hits.d_alloc_parallel.__lookup_slow.walk_component.link_path_walk.path_lookupat.filename_lookup.vfs_statx.__do_sys_newstat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.hits.page_lock_anon_vma_read.rmap_walk_anon.try_to_unmap.migrate_pages.migrate_misplaced_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 -100.0% 0.00 latency_stats.hits.rwsem_down_write_slowpath.anon_vma_fork.dup_mmap.dup_mm.copy_process._do_fork.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.hits.rwsem_down_write_slowpath.anon_vma_clone.anon_vma_fork.dup_mmap.dup_mm.copy_process._do_fork.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.hits.rwsem_down_write_slowpath.dup_mmap.dup_mm.copy_process._do_fork.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.hits.rwsem_down_write_slowpath.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.hits.rwsem_down_write_slowpath.unlink_file_vma.free_pgtables.exit_mmap.mmput.begin_new_exec.load_elf_binary.exec_binprm.bprm_execve.do_execveat_common.__x64_sys_execve.do_syscall_64
0.00 -100.0% 0.00 latency_stats.hits.pipe_release.__fput.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.hits.percpu_rwsem_wait.__percpu_down_read.cgroup_can_fork.copy_process._do_fork.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.hits.wb_wait_for_completion.sync_inodes_sb.iterate_supers.ksys_sync.__do_sys_sync.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.hits.wait_on_page_bit.__filemap_fdatawait_range.filemap_fdatawait_keep_errors.sync_inodes_sb.iterate_supers.ksys_sync.__do_sys_sync.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.hits.sync_inodes_sb.iterate_supers.ksys_sync.__do_sys_sync.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.hits.put_and_wait_on_page_locked.do_swap_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 -100.0% 0.00 latency_stats.hits.stop_one_cpu.__set_cpus_allowed_ptr.sched_setaffinity.__x64_sys_sched_setaffinity.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.hits.stop_one_cpu.sched_exec.bprm_execve.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.hits.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_openat.do_filp_open
0.00 -100.0% 0.00 latency_stats.hits.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64
0.00 -100.0% 0.00 latency_stats.hits.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_get_acl.get_acl.posix_acl_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_openat2
0.00 -100.0% 0.00 latency_stats.hits.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_do_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_openat2.do_sys_open
0.00 -100.0% 0.00 latency_stats.hits.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_setattr.nfs_setattr.notify_change.vfs_utimes.do_utimes.__x64_sys_utimensat.do_syscall_64
0.00 -100.0% 0.00 latency_stats.hits.wait_on_page_bit.__filemap_fdatawait_range.filemap_write_and_wait_range.nfs_wb_all.nfs_file_flush.filp_close.do_dup2.__x64_sys_dup2.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.hits.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_lookup_verify_inode.nfs_do_lookup_revalidate.__nfs_lookup_revalidate.path_openat.do_filp_open
0.00 -100.0% 0.00 latency_stats.hits.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.may_open.path_openat.do_filp_open
0.00 -100.0% 0.00 latency_stats.hits.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_lookup_verify_inode.nfs_do_lookup_revalidate.__nfs_lookup_revalidate.lookup_fast.path_openat
0.00 -100.0% 0.00 latency_stats.hits.cgroup_kn_lock_live.__cgroup1_procs_write.cgroup_file_write.kernfs_fop_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.hits.stop_two_cpus.migrate_swap.task_numa_migrate.task_numa_fault.do_huge_pmd_numa_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 -100.0% 0.00 latency_stats.hits.rwsem_down_write_slowpath.__vma_adjust.__split_vma.__do_munmap.__vm_munmap.elf_map.load_elf_interp.load_elf_binary.exec_binprm.bprm_execve.do_execveat_common.__x64_sys_execve
0.00 -100.0% 0.00 latency_stats.hits.rwsem_down_write_slowpath.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.hits.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.copy_user_generic_unrolled._copy_from_user.__x64_sys_rt_sigprocmask.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.hits.rwsem_down_write_slowpath.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.20 ±200% -100.0% 0.00 latency_stats.hits.kernfs_iop_permission.inode_permission.link_path_walk.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.80 ± 93% -100.0% 0.00 latency_stats.hits.d_alloc_parallel.__lookup_slow.walk_component.link_path_walk.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.40 ±122% -100.0% 0.00 latency_stats.hits.rwsem_down_write_slowpath.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.20 ±200% -100.0% 0.00 latency_stats.hits.rwsem_down_write_slowpath.do_madvise.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.40 ±122% -100.0% 0.00 latency_stats.hits.d_alloc_parallel.__lookup_slow.walk_component.path_lookupat.filename_lookup.vfs_statx.__do_sys_newstat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.80 ± 93% -100.0% 0.00 latency_stats.hits.resolve_symbol.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.80 ±200% -100.0% 0.00 latency_stats.hits.d_alloc_parallel.__lookup_slow.walk_component.path_lookupat.filename_lookup.do_faccessat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.80 ±122% -100.0% 0.00 latency_stats.hits.poll_schedule_timeout.do_sys_poll.__x64_sys_ppoll.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.20 ±200% -100.0% 0.00 latency_stats.hits.rwsem_down_write_slowpath.__vma_adjust.__split_vma.mprotect_fixup.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.40 ±200% -100.0% 0.00 latency_stats.hits.rwsem_down_write_slowpath.__vma_adjust.__split_vma.__do_munmap.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.20 ±200% -100.0% 0.00 latency_stats.hits.rmap_walk_anon.remove_migration_ptes.migrate_pages.migrate_misplaced_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
27.20 ±127% -93.9% 1.67 ± 28% latency_stats.hits.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex.do_syscall_64.entry_SYSCALL_64_after_hwframe
115.60 ± 66% -92.2% 9.00 ± 72% latency_stats.hits.finished_loading.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
12.20 ± 75% -91.8% 1.00 ±141% latency_stats.hits.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.00 -66.7% 0.33 ±141% latency_stats.hits.do_coredump.get_signal.arch_do_signal.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
1133 ± 2% -50.2% 564.67 ± 4% latency_stats.hits.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe
51.60 ± 63% -49.0% 26.33 ± 24% latency_stats.hits.poll_schedule_timeout.do_select.core_sys_select.kern_select.__x64_sys_select.do_syscall_64.entry_SYSCALL_64_after_hwframe
499.60 ± 36% -46.0% 269.67 ± 30% latency_stats.hits.ep_poll.do_epoll_wait.__x64_sys_epoll_wait.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.20 ±200% -44.4% 0.67 ± 70% latency_stats.hits.d_alloc_parallel.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.60 ± 81% -44.4% 0.33 ±141% latency_stats.hits.blk_execute_rq.__scsi_execute.ioctl_internal_command.scsi_set_medium_removal.cdrom_release.[cdrom].sr_block_release.[sr_mod].__blkdev_put.blkdev_close.__fput.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode
16557 ± 11% -42.7% 9493 ± 3% latency_stats.hits.pipe_read.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
16557 ± 11% -42.7% 9493 ± 3% latency_stats.hits.max
19.60 ± 74% -35.4% 12.67 ± 40% latency_stats.hits.poll_schedule_timeout.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
201.00 ± 2% -32.7% 135.33 ± 2% latency_stats.hits.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.40 ±122% -16.7% 0.33 ±141% latency_stats.hits.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat.filename_lookup
0.40 ±122% -16.7% 0.33 ±141% latency_stats.hits.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup_revalidate_dentry.nfs_do_lookup_revalidate.__nfs_lookup_revalidate.lookup_fast.walk_component.link_path_walk
0.80 ± 50% -16.7% 0.67 ± 70% latency_stats.hits.mm_access.proc_mem_open.proc_maps_open.do_dentry_open.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.40 ±122% -16.7% 0.33 ±141% latency_stats.hits.do_init_module.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.00 +0.0% 1.00 latency_stats.hits.do_exit.do_group_exit.get_signal.arch_do_signal.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
1.00 +0.0% 1.00 latency_stats.hits.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.__lookup_slow.walk_component.path_lookupat.filename_lookup.vfs_statx
1.00 +0.0% 1.00 latency_stats.hits.wait_woken.__inet_stream_connect.inet_stream_connect.__sys_connect.__x64_sys_connect.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.60 ± 5% +0.8% 8.67 ± 5% latency_stats.hits.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_access_get_cached.nfs_do_access.nfs_permission.inode_permission.link_path_walk
0.80 ± 50% +25.0% 1.00 latency_stats.hits.wait_for_partner.fifo_open.do_dentry_open.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.20 ± 81% +38.9% 1.67 ±141% latency_stats.hits.blk_execute_rq.sg_io.scsi_cmd_ioctl.cdrom_ioctl.[cdrom].sr_block_ioctl.[sr_mod].blkdev_ioctl.block_ioctl.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.00 ± 70% +50.0% 3.00 ± 47% latency_stats.hits.pipe_write.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.20 ±200% +66.7% 0.33 ±141% latency_stats.hits.msleep.cpuinfo_open.do_dentry_open.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
22.60 ± 90% +100.6% 45.33 ± 52% latency_stats.hits.do_syslog.kmsg_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
22.60 ± 81% +103.5% 46.00 ± 49% latency_stats.hits.devkmsg_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.60 ±133% +233.3% 2.00 ± 40% latency_stats.hits.m_start.seq_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +3.3e+101% 0.33 ±141% latency_stats.hits.blk_execute_rq.__scsi_execute.sr_do_ioctl.[sr_mod].sr_packet.[sr_mod].cdrom_get_media_event.[cdrom].sr_drive_status.[sr_mod].cdrom_ioctl.[cdrom].sr_block_ioctl.[sr_mod].blkdev_ioctl.block_ioctl.__x64_sys_ioctl.do_syscall_64
0.00 +3.3e+101% 0.33 ±141% latency_stats.hits.blk_execute_rq.__scsi_execute.sr_check_events.[sr_mod].cdrom_check_events.[cdrom].sr_block_check_events.[sr_mod].disk_check_events.bdev_check_media_change.sr_block_open.[sr_mod].__blkdev_get.blkdev_get.do_dentry_open.path_openat
0.00 +3.3e+101% 0.33 ±141% latency_stats.hits.blk_execute_rq.__scsi_execute.scsi_test_unit_ready.sr_check_events.[sr_mod].cdrom_check_events.[cdrom].sr_block_check_events.[sr_mod].disk_check_events.bdev_check_media_change.sr_block_open.[sr_mod].__blkdev_get.blkdev_get.do_dentry_open
0.00 +3.3e+101% 0.33 ±141% latency_stats.hits.d_alloc_parallel.__lookup_slow.walk_component.path_lookupat.filename_lookup.user_statfs.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +6.7e+101% 0.67 ± 70% latency_stats.hits.devtmpfs_submit_req.devtmpfs_create_node.device_add.devm_create_dev_dax.__dax_pmem_probe.[dax_pmem_core].dax_pmem_compat_probe.[dax_pmem_compat].nvdimm_bus_probe.[libnvdimm].really_probe.driver_probe_device.device_driver_attach.__driver_attach.bus_for_each_dev
0.00 -100.0% 0.00 latency_stats.max.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.max.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 -100.0% 0.00 latency_stats.max.stop_two_cpus.migrate_swap.task_numa_migrate.task_numa_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 -100.0% 0.00 latency_stats.max.cgroup_kn_lock_live.cgroup_procs_write.cgroup_file_write.kernfs_fop_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.max.d_alloc_parallel.__lookup_slow.walk_component.link_path_walk.path_lookupat.filename_lookup.vfs_statx.__do_sys_newstat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.max.page_lock_anon_vma_read.rmap_walk_anon.try_to_unmap.migrate_pages.migrate_misplaced_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 -100.0% 0.00 latency_stats.max.rwsem_down_write_slowpath.anon_vma_fork.dup_mmap.dup_mm.copy_process._do_fork.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.max.rwsem_down_write_slowpath.anon_vma_clone.anon_vma_fork.dup_mmap.dup_mm.copy_process._do_fork.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.max.rwsem_down_write_slowpath.dup_mmap.dup_mm.copy_process._do_fork.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.max.rwsem_down_write_slowpath.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.max.rwsem_down_write_slowpath.unlink_file_vma.free_pgtables.exit_mmap.mmput.begin_new_exec.load_elf_binary.exec_binprm.bprm_execve.do_execveat_common.__x64_sys_execve.do_syscall_64
0.00 -100.0% 0.00 latency_stats.max.pipe_release.__fput.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.max.percpu_rwsem_wait.__percpu_down_read.cgroup_can_fork.copy_process._do_fork.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.max.wb_wait_for_completion.sync_inodes_sb.iterate_supers.ksys_sync.__do_sys_sync.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.max.wait_on_page_bit.__filemap_fdatawait_range.filemap_fdatawait_keep_errors.sync_inodes_sb.iterate_supers.ksys_sync.__do_sys_sync.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.max.sync_inodes_sb.iterate_supers.ksys_sync.__do_sys_sync.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.max.put_and_wait_on_page_locked.do_swap_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 -100.0% 0.00 latency_stats.max.stop_one_cpu.__set_cpus_allowed_ptr.sched_setaffinity.__x64_sys_sched_setaffinity.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.max.stop_one_cpu.sched_exec.bprm_execve.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_openat.do_filp_open
0.00 -100.0% 0.00 latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64
0.00 -100.0% 0.00 latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_get_acl.get_acl.posix_acl_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_openat2
0.00 -100.0% 0.00 latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_do_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_openat2.do_sys_open
0.00 -100.0% 0.00 latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_setattr.nfs_setattr.notify_change.vfs_utimes.do_utimes.__x64_sys_utimensat.do_syscall_64
0.00 -100.0% 0.00 latency_stats.max.wait_on_page_bit.__filemap_fdatawait_range.filemap_write_and_wait_range.nfs_wb_all.nfs_file_flush.filp_close.do_dup2.__x64_sys_dup2.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_lookup_verify_inode.nfs_do_lookup_revalidate.__nfs_lookup_revalidate.path_openat.do_filp_open
0.00 -100.0% 0.00 latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.may_open.path_openat.do_filp_open
0.00 -100.0% 0.00 latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_lookup_verify_inode.nfs_do_lookup_revalidate.__nfs_lookup_revalidate.lookup_fast.path_openat
0.00 -100.0% 0.00 latency_stats.max.cgroup_kn_lock_live.__cgroup1_procs_write.cgroup_file_write.kernfs_fop_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.max.stop_two_cpus.migrate_swap.task_numa_migrate.task_numa_fault.do_huge_pmd_numa_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 -100.0% 0.00 latency_stats.max.rwsem_down_write_slowpath.__vma_adjust.__split_vma.__do_munmap.__vm_munmap.elf_map.load_elf_interp.load_elf_binary.exec_binprm.bprm_execve.do_execveat_common.__x64_sys_execve
0.00 -100.0% 0.00 latency_stats.max.rwsem_down_write_slowpath.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.max.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.copy_user_generic_unrolled._copy_from_user.__x64_sys_rt_sigprocmask.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.max.rwsem_down_write_slowpath.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
79.40 ±200% -100.0% 0.00 latency_stats.max.kernfs_iop_permission.inode_permission.link_path_walk.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.80 ± 95% -100.0% 0.00 latency_stats.max.d_alloc_parallel.__lookup_slow.walk_component.link_path_walk.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
4843 ±122% -100.0% 0.00 latency_stats.max.rwsem_down_write_slowpath.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
16.60 ±200% -100.0% 0.00 latency_stats.max.rwsem_down_write_slowpath.do_madvise.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.80 ±123% -100.0% 0.00 latency_stats.max.d_alloc_parallel.__lookup_slow.walk_component.path_lookupat.filename_lookup.vfs_statx.__do_sys_newstat.do_syscall_64.entry_SYSCALL_64_after_hwframe
227.20 ±129% -100.0% 0.00 latency_stats.max.resolve_symbol.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.40 ±200% -100.0% 0.00 latency_stats.max.d_alloc_parallel.__lookup_slow.walk_component.path_lookupat.filename_lookup.do_faccessat.do_syscall_64.entry_SYSCALL_64_after_hwframe
615.60 ±122% -100.0% 0.00 latency_stats.max.poll_schedule_timeout.do_sys_poll.__x64_sys_ppoll.do_syscall_64.entry_SYSCALL_64_after_hwframe
25.00 ±200% -100.0% 0.00 latency_stats.max.rwsem_down_write_slowpath.__vma_adjust.__split_vma.mprotect_fixup.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
75.80 ±200% -100.0% 0.00 latency_stats.max.rwsem_down_write_slowpath.__vma_adjust.__split_vma.__do_munmap.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
171.20 ±200% -100.0% 0.00 latency_stats.max.rmap_walk_anon.remove_migration_ptes.migrate_pages.migrate_misplaced_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
4831 ±121% -97.6% 116.00 ±134% latency_stats.max.do_exit.do_group_exit.get_signal.arch_do_signal.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
4210 ±182% -94.5% 233.00 ±141% latency_stats.max.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
1163 ±100% -87.5% 145.33 ± 2% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.__lookup_slow.walk_component.path_lookupat.filename_lookup.vfs_statx
6357 ±147% -85.5% 924.00 ±136% latency_stats.max.finished_loading.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
140.40 ±128% -61.8% 53.67 ±141% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat.filename_lookup
175267 ± 81% -52.5% 83199 ±141% latency_stats.max.blk_execute_rq.__scsi_execute.ioctl_internal_command.scsi_set_medium_removal.cdrom_release.[cdrom].sr_block_release.[sr_mod].__blkdev_put.blkdev_close.__fput.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode
205883 ± 55% -34.5% 134756 ± 87% latency_stats.max.max
82676 ± 69% -29.4% 58389 ± 76% latency_stats.max.mm_access.proc_mem_open.proc_maps_open.do_dentry_open.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
212.20 ± 14% -6.7% 198.00 ± 15% latency_stats.max.wait_woken.__inet_stream_connect.inet_stream_connect.__sys_connect.__x64_sys_connect.do_syscall_64.entry_SYSCALL_64_after_hwframe
4915 -1.2% 4856 ± 2% latency_stats.max.pipe_read.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
4807 ± 2% -0.2% 4796 latency_stats.max.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe
4806 ± 3% +2.8% 4943 latency_stats.max.ep_poll.do_epoll_wait.__x64_sys_epoll_wait.do_syscall_64.entry_SYSCALL_64_after_hwframe
3021 ± 30% +9.4% 3305 ± 27% latency_stats.max.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64.entry_SYSCALL_64_after_hwframe
2567 ± 62% +12.7% 2892 ± 36% latency_stats.max.poll_schedule_timeout.do_select.core_sys_select.kern_select.__x64_sys_select.do_syscall_64.entry_SYSCALL_64_after_hwframe
3939 ± 27% +15.2% 4539 ± 12% latency_stats.max.do_syslog.kmsg_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
4186 ± 14% +16.3% 4870 ± 2% latency_stats.max.devkmsg_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
3327 ± 26% +19.4% 3971 ± 14% latency_stats.max.poll_schedule_timeout.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
1259 ±163% +53.8% 1937 ±141% latency_stats.max.blk_execute_rq.sg_io.scsi_cmd_ioctl.cdrom_ioctl.[cdrom].sr_block_ioctl.[sr_mod].blkdev_ioctl.block_ioctl.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe
1946 ± 76% +56.3% 3043 ±129% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_access_get_cached.nfs_do_access.nfs_permission.inode_permission.link_path_walk
1985 ± 91% +68.8% 3350 ± 5% latency_stats.max.pipe_write.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
2269 ±200% +81.3% 4114 ±141% latency_stats.max.msleep.cpuinfo_open.do_dentry_open.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
40213 ±198% +89.7% 76285 ±141% latency_stats.max.do_init_module.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
207.20 ±149% +112.2% 439.67 ±141% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup_revalidate_dentry.nfs_do_lookup_revalidate.__nfs_lookup_revalidate.lookup_fast.walk_component.link_path_walk
975.40 ± 50% +122.1% 2166 ± 67% latency_stats.max.wait_for_partner.fifo_open.do_dentry_open.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
151.60 ±125% +142.5% 367.67 ± 96% latency_stats.max.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex.do_syscall_64.entry_SYSCALL_64_after_hwframe
78.20 ± 27% +397.4% 389.00 ±141% latency_stats.max.do_coredump.get_signal.arch_do_signal.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
16850 ±183% +423.6% 88237 ±137% latency_stats.max.m_start.seq_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.40 ±200% +59983.3% 240.33 ±140% latency_stats.max.d_alloc_parallel.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +3.3e+101% 0.33 ±141% latency_stats.max.d_alloc_parallel.__lookup_slow.walk_component.path_lookupat.filename_lookup.user_statfs.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +4.8e+103% 48.00 ± 72% latency_stats.max.devtmpfs_submit_req.devtmpfs_create_node.device_add.devm_create_dev_dax.__dax_pmem_probe.[dax_pmem_core].dax_pmem_compat_probe.[dax_pmem_compat].nvdimm_bus_probe.[libnvdimm].really_probe.driver_probe_device.device_driver_attach.__driver_attach.bus_for_each_dev
0.00 +1.8e+105% 1781 ±141% latency_stats.max.blk_execute_rq.__scsi_execute.sr_do_ioctl.[sr_mod].sr_packet.[sr_mod].cdrom_get_media_event.[cdrom].sr_drive_status.[sr_mod].cdrom_ioctl.[cdrom].sr_block_ioctl.[sr_mod].blkdev_ioctl.block_ioctl.__x64_sys_ioctl.do_syscall_64
0.00 +1.9e+105% 1940 ±141% latency_stats.max.blk_execute_rq.__scsi_execute.sr_check_events.[sr_mod].cdrom_check_events.[cdrom].sr_block_check_events.[sr_mod].disk_check_events.bdev_check_media_change.sr_block_open.[sr_mod].__blkdev_get.blkdev_get.do_dentry_open.path_openat
0.00 +9.7e+106% 96869 ±141% latency_stats.max.blk_execute_rq.__scsi_execute.scsi_test_unit_ready.sr_check_events.[sr_mod].cdrom_check_events.[cdrom].sr_block_check_events.[sr_mod].disk_check_events.bdev_check_media_change.sr_block_open.[sr_mod].__blkdev_get.blkdev_get.do_dentry_open
0.00 -100.0% 0.00 latency_stats.sum.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.sum.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 -100.0% 0.00 latency_stats.sum.stop_two_cpus.migrate_swap.task_numa_migrate.task_numa_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 -100.0% 0.00 latency_stats.sum.cgroup_kn_lock_live.cgroup_procs_write.cgroup_file_write.kernfs_fop_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.sum.d_alloc_parallel.__lookup_slow.walk_component.link_path_walk.path_lookupat.filename_lookup.vfs_statx.__do_sys_newstat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.sum.page_lock_anon_vma_read.rmap_walk_anon.try_to_unmap.migrate_pages.migrate_misplaced_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 -100.0% 0.00 latency_stats.sum.rwsem_down_write_slowpath.anon_vma_fork.dup_mmap.dup_mm.copy_process._do_fork.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.sum.rwsem_down_write_slowpath.anon_vma_clone.anon_vma_fork.dup_mmap.dup_mm.copy_process._do_fork.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.sum.rwsem_down_write_slowpath.dup_mmap.dup_mm.copy_process._do_fork.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.sum.rwsem_down_write_slowpath.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.sum.rwsem_down_write_slowpath.unlink_file_vma.free_pgtables.exit_mmap.mmput.begin_new_exec.load_elf_binary.exec_binprm.bprm_execve.do_execveat_common.__x64_sys_execve.do_syscall_64
0.00 -100.0% 0.00 latency_stats.sum.pipe_release.__fput.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.sum.percpu_rwsem_wait.__percpu_down_read.cgroup_can_fork.copy_process._do_fork.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.sum.wb_wait_for_completion.sync_inodes_sb.iterate_supers.ksys_sync.__do_sys_sync.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.sum.wait_on_page_bit.__filemap_fdatawait_range.filemap_fdatawait_keep_errors.sync_inodes_sb.iterate_supers.ksys_sync.__do_sys_sync.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.sum.sync_inodes_sb.iterate_supers.ksys_sync.__do_sys_sync.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.sum.put_and_wait_on_page_locked.do_swap_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 -100.0% 0.00 latency_stats.sum.stop_one_cpu.__set_cpus_allowed_ptr.sched_setaffinity.__x64_sys_sched_setaffinity.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.sum.stop_one_cpu.sched_exec.bprm_execve.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_openat.do_filp_open
0.00 -100.0% 0.00 latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64
0.00 -100.0% 0.00 latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_get_acl.get_acl.posix_acl_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_openat2
0.00 -100.0% 0.00 latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_do_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_openat2.do_sys_open
0.00 -100.0% 0.00 latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_setattr.nfs_setattr.notify_change.vfs_utimes.do_utimes.__x64_sys_utimensat.do_syscall_64
0.00 -100.0% 0.00 latency_stats.sum.wait_on_page_bit.__filemap_fdatawait_range.filemap_write_and_wait_range.nfs_wb_all.nfs_file_flush.filp_close.do_dup2.__x64_sys_dup2.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_lookup_verify_inode.nfs_do_lookup_revalidate.__nfs_lookup_revalidate.path_openat.do_filp_open
0.00 -100.0% 0.00 latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.may_open.path_openat.do_filp_open
0.00 -100.0% 0.00 latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_lookup_verify_inode.nfs_do_lookup_revalidate.__nfs_lookup_revalidate.lookup_fast.path_openat
0.00 -100.0% 0.00 latency_stats.sum.cgroup_kn_lock_live.__cgroup1_procs_write.cgroup_file_write.kernfs_fop_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.sum.stop_two_cpus.migrate_swap.task_numa_migrate.task_numa_fault.do_huge_pmd_numa_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 -100.0% 0.00 latency_stats.sum.rwsem_down_write_slowpath.__vma_adjust.__split_vma.__do_munmap.__vm_munmap.elf_map.load_elf_interp.load_elf_binary.exec_binprm.bprm_execve.do_execveat_common.__x64_sys_execve
0.00 -100.0% 0.00 latency_stats.sum.rwsem_down_write_slowpath.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.sum.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.copy_user_generic_unrolled._copy_from_user.__x64_sys_rt_sigprocmask.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 -100.0% 0.00 latency_stats.sum.rwsem_down_write_slowpath.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
79.40 ±200% -100.0% 0.00 latency_stats.sum.kernfs_iop_permission.inode_permission.link_path_walk.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.60 ± 98% -100.0% 0.00 latency_stats.sum.d_alloc_parallel.__lookup_slow.walk_component.link_path_walk.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
4843 ±122% -100.0% 0.00 latency_stats.sum.rwsem_down_write_slowpath.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
16.60 ±200% -100.0% 0.00 latency_stats.sum.rwsem_down_write_slowpath.do_madvise.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.80 ±123% -100.0% 0.00 latency_stats.sum.d_alloc_parallel.__lookup_slow.walk_component.path_lookupat.filename_lookup.vfs_statx.__do_sys_newstat.do_syscall_64.entry_SYSCALL_64_after_hwframe
228.20 ±129% -100.0% 0.00 latency_stats.sum.resolve_symbol.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.20 ±200% -100.0% 0.00 latency_stats.sum.d_alloc_parallel.__lookup_slow.walk_component.path_lookupat.filename_lookup.do_faccessat.do_syscall_64.entry_SYSCALL_64_after_hwframe
642.20 ±122% -100.0% 0.00 latency_stats.sum.poll_schedule_timeout.do_sys_poll.__x64_sys_ppoll.do_syscall_64.entry_SYSCALL_64_after_hwframe
25.00 ±200% -100.0% 0.00 latency_stats.sum.rwsem_down_write_slowpath.__vma_adjust.__split_vma.mprotect_fixup.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
109.80 ±200% -100.0% 0.00 latency_stats.sum.rwsem_down_write_slowpath.__vma_adjust.__split_vma.__do_munmap.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
171.20 ±200% -100.0% 0.00 latency_stats.sum.rmap_walk_anon.remove_migration_ptes.migrate_pages.migrate_misplaced_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
58893 ±193% -99.5% 311.33 ±141% latency_stats.sum.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
192025 ±190% -98.8% 2303 ±132% latency_stats.sum.finished_loading.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
4831 ±121% -97.6% 116.00 ±134% latency_stats.sum.do_exit.do_group_exit.get_signal.arch_do_signal.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
67961 ±114% -88.8% 7587 ± 65% latency_stats.sum.poll_schedule_timeout.do_select.core_sys_select.kern_select.__x64_sys_select.do_syscall_64.entry_SYSCALL_64_after_hwframe
1163 ±100% -87.5% 145.33 ± 2% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.__lookup_slow.walk_component.path_lookupat.filename_lookup.vfs_statx
140.40 ±128% -61.8% 53.67 ±141% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat.filename_lookup
99229 ± 22% -55.6% 44099 ± 8% latency_stats.sum.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64.entry_SYSCALL_64_after_hwframe
175267 ± 81% -52.5% 83199 ±141% latency_stats.sum.blk_execute_rq.__scsi_execute.ioctl_internal_command.scsi_set_medium_removal.cdrom_release.[cdrom].sr_block_release.[sr_mod].__blkdev_put.blkdev_close.__fput.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode
615300 ± 16% -51.5% 298615 ± 6% latency_stats.sum.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe
760.80 ± 91% -42.6% 437.00 ± 79% latency_stats.sum.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex.do_syscall_64.entry_SYSCALL_64_after_hwframe
987092 ± 6% -34.4% 647610 ± 10% latency_stats.sum.pipe_read.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
987092 ± 6% -34.4% 647610 ± 10% latency_stats.sum.max
82676 ± 69% -29.4% 58389 ± 76% latency_stats.sum.mm_access.proc_mem_open.proc_maps_open.do_dentry_open.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
212.20 ± 14% -6.7% 198.00 ± 15% latency_stats.sum.wait_woken.__inet_stream_connect.inet_stream_connect.__sys_connect.__x64_sys_connect.do_syscall_64.entry_SYSCALL_64_after_hwframe
194779 ± 11% +1.8% 198252 ± 14% latency_stats.sum.ep_poll.do_epoll_wait.__x64_sys_epoll_wait.do_syscall_64.entry_SYSCALL_64_after_hwframe
3347 ± 45% +22.9% 4115 ± 97% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_access_get_cached.nfs_do_access.nfs_permission.inode_permission.link_path_walk
10592 ± 76% +55.2% 16437 ± 25% latency_stats.sum.poll_schedule_timeout.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
53629 ± 82% +80.1% 96602 ± 51% latency_stats.sum.devkmsg_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
2269 ±200% +81.3% 4114 ±141% latency_stats.sum.msleep.cpuinfo_open.do_dentry_open.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
50787 ± 89% +89.0% 95979 ± 54% latency_stats.sum.do_syslog.kmsg_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
40213 ±198% +89.7% 76285 ±141% latency_stats.sum.do_init_module.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
2518 ± 95% +96.5% 4948 ± 24% latency_stats.sum.pipe_write.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
207.20 ±149% +112.2% 439.67 ±141% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup_revalidate_dentry.nfs_do_lookup_revalidate.__nfs_lookup_revalidate.lookup_fast.walk_component.link_path_walk
975.40 ± 50% +122.1% 2166 ± 67% latency_stats.sum.wait_for_partner.fifo_open.do_dentry_open.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
1444 ±147% +343.2% 6400 ±141% latency_stats.sum.blk_execute_rq.sg_io.scsi_cmd_ioctl.cdrom_ioctl.[cdrom].sr_block_ioctl.[sr_mod].blkdev_ioctl.block_ioctl.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe
78.20 ± 27% +397.4% 389.00 ±141% latency_stats.sum.do_coredump.get_signal.arch_do_signal.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
17462 ±176% +901.8% 174939 ±139% latency_stats.sum.m_start.seq_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.20 ±200% +10824.2% 240.33 ±140% latency_stats.sum.d_alloc_parallel.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +3.3e+101% 0.33 ±141% latency_stats.sum.d_alloc_parallel.__lookup_slow.walk_component.path_lookupat.filename_lookup.user_statfs.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +4.8e+103% 48.00 ± 72% latency_stats.sum.devtmpfs_submit_req.devtmpfs_create_node.device_add.devm_create_dev_dax.__dax_pmem_probe.[dax_pmem_core].dax_pmem_compat_probe.[dax_pmem_compat].nvdimm_bus_probe.[libnvdimm].really_probe.driver_probe_device.device_driver_attach.__driver_attach.bus_for_each_dev
0.00 +1.8e+105% 1781 ±141% latency_stats.sum.blk_execute_rq.__scsi_execute.sr_do_ioctl.[sr_mod].sr_packet.[sr_mod].cdrom_get_media_event.[cdrom].sr_drive_status.[sr_mod].cdrom_ioctl.[cdrom].sr_block_ioctl.[sr_mod].blkdev_ioctl.block_ioctl.__x64_sys_ioctl.do_syscall_64
0.00 +1.9e+105% 1940 ±141% latency_stats.sum.blk_execute_rq.__scsi_execute.sr_check_events.[sr_mod].cdrom_check_events.[cdrom].sr_block_check_events.[sr_mod].disk_check_events.bdev_check_media_change.sr_block_open.[sr_mod].__blkdev_get.blkdev_get.do_dentry_open.path_openat
0.00 +9.7e+106% 96869 ±141% latency_stats.sum.blk_execute_rq.__scsi_execute.scsi_test_unit_ready.sr_check_events.[sr_mod].cdrom_check_events.[cdrom].sr_block_check_events.[sr_mod].disk_check_events.bdev_check_media_change.sr_block_open.[sr_mod].__blkdev_get.blkdev_get.do_dentry_open
199651 +0.2% 200132 slabinfo.Acpi-Operand.active_objs
3564 +0.2% 3573 slabinfo.Acpi-Operand.active_slabs
199651 +0.2% 200132 slabinfo.Acpi-Operand.num_objs
3564 +0.2% 3573 slabinfo.Acpi-Operand.num_slabs
2216 ± 8% +6.1% 2351 ± 7% slabinfo.Acpi-Parse.active_objs
29.60 ± 7% +7.0% 31.67 ± 5% slabinfo.Acpi-Parse.active_slabs
2216 ± 8% +6.1% 2351 ± 7% slabinfo.Acpi-Parse.num_objs
29.60 ± 7% +7.0% 31.67 ± 5% slabinfo.Acpi-Parse.num_slabs
4978 +0.3% 4995 slabinfo.Acpi-State.active_objs
97.00 +0.3% 97.33 slabinfo.Acpi-State.active_slabs
4978 +0.3% 4995 slabinfo.Acpi-State.num_objs
97.00 +0.3% 97.33 slabinfo.Acpi-State.num_slabs
2108 ± 4% -0.9% 2089 ± 2% slabinfo.PING.active_objs
65.20 ± 4% -0.3% 65.00 ± 2% slabinfo.PING.active_slabs
2108 ± 4% -0.9% 2089 ± 2% slabinfo.PING.num_objs
65.20 ± 4% -0.3% 65.00 ± 2% slabinfo.PING.num_slabs
192.00 +0.0% 192.00 slabinfo.RAW.active_objs
6.00 +0.0% 6.00 slabinfo.RAW.active_slabs
192.00 +0.0% 192.00 slabinfo.RAW.num_objs
6.00 +0.0% 6.00 slabinfo.RAW.num_slabs
109.20 ± 9% +3.2% 112.67 ± 10% slabinfo.RAWv6.active_objs
4.20 ± 9% +3.2% 4.33 ± 10% slabinfo.RAWv6.active_slabs
109.20 ± 9% +3.2% 112.67 ± 10% slabinfo.RAWv6.num_objs
4.20 ± 9% +3.2% 4.33 ± 10% slabinfo.RAWv6.num_slabs
86.80 ± 6% +2.2% 88.67 ± 7% slabinfo.TCP.active_objs
6.20 ± 6% +2.2% 6.33 ± 7% slabinfo.TCP.active_slabs
86.80 ± 6% +2.2% 88.67 ± 7% slabinfo.TCP.num_objs
6.20 ± 6% +2.2% 6.33 ± 7% slabinfo.TCP.num_slabs
52.00 -8.3% 47.67 ± 12% slabinfo.TCPv6.active_objs
4.00 -8.3% 3.67 ± 12% slabinfo.TCPv6.active_slabs
52.00 -8.3% 47.67 ± 12% slabinfo.TCPv6.num_objs
4.00 -8.3% 3.67 ± 12% slabinfo.TCPv6.num_slabs
164.00 ± 8% -7.9% 151.00 ± 12% slabinfo.UDPv6.active_objs
6.60 ± 7% -9.1% 6.00 ± 13% slabinfo.UDPv6.active_slabs
164.00 ± 8% -7.9% 151.00 ± 12% slabinfo.UDPv6.num_objs
6.60 ± 7% -9.1% 6.00 ± 13% slabinfo.UDPv6.num_slabs
30935 -2.3% 30211 ± 2% slabinfo.anon_vma.active_objs
672.20 -2.2% 657.33 ± 2% slabinfo.anon_vma.active_slabs
30941 -2.2% 30260 ± 2% slabinfo.anon_vma.num_objs
672.20 -2.2% 657.33 ± 2% slabinfo.anon_vma.num_slabs
69942 -2.2% 68395 slabinfo.anon_vma_chain.active_objs
1093 -2.0% 1071 slabinfo.anon_vma_chain.active_slabs
69980 -2.0% 68572 slabinfo.anon_vma_chain.num_objs
1093 -2.0% 1071 slabinfo.anon_vma_chain.num_slabs
631.80 ± 13% +2.9% 650.00 ± 5% slabinfo.bdev_cache.active_objs
16.20 ± 13% +2.9% 16.67 ± 5% slabinfo.bdev_cache.active_slabs
631.80 ± 13% +2.9% 650.00 ± 5% slabinfo.bdev_cache.num_objs
16.20 ± 13% +2.9% 16.67 ± 5% slabinfo.bdev_cache.num_slabs
223.00 ± 17% -17.9% 183.00 ± 3% slabinfo.biovec-128.active_objs
13.40 ± 19% -20.4% 10.67 ± 4% slabinfo.biovec-128.active_slabs
223.00 ± 17% -17.9% 183.00 ± 3% slabinfo.biovec-128.num_objs
13.40 ± 19% -20.4% 10.67 ± 4% slabinfo.biovec-128.num_slabs
358.40 ± 19% -7.7% 330.67 ± 9% slabinfo.biovec-64.active_objs
11.20 ± 19% -7.7% 10.33 ± 9% slabinfo.biovec-64.active_slabs
358.40 ± 19% -7.7% 330.67 ± 9% slabinfo.biovec-64.num_objs
11.20 ± 19% -7.7% 10.33 ± 9% slabinfo.biovec-64.num_slabs
152.20 ± 9% -1.9% 149.33 ± 6% slabinfo.biovec-max.active_objs
19.00 ± 8% -1.8% 18.67 ± 6% slabinfo.biovec-max.active_slabs
152.20 ± 9% -1.9% 149.33 ± 6% slabinfo.biovec-max.num_objs
19.00 ± 8% -1.8% 18.67 ± 6% slabinfo.biovec-max.num_slabs
549.40 ± 12% -7.7% 507.00 ± 6% slabinfo.blkdev_ioc.active_objs
14.00 ± 11% -7.1% 13.00 ± 6% slabinfo.blkdev_ioc.active_slabs
549.40 ± 12% -7.7% 507.00 ± 6% slabinfo.blkdev_ioc.num_objs
14.00 ± 11% -7.1% 13.00 ± 6% slabinfo.blkdev_ioc.num_slabs
20.80 ±200% -100.0% 0.00 slabinfo.btrfs_delayed_node.active_objs
0.40 ±200% -100.0% 0.00 slabinfo.btrfs_delayed_node.active_slabs
20.80 ±200% -100.0% 0.00 slabinfo.btrfs_delayed_node.num_objs
0.40 ±200% -100.0% 0.00 slabinfo.btrfs_delayed_node.num_slabs
138.40 ± 48% -19.1% 112.00 slabinfo.btrfs_inode.active_objs
4.80 ± 44% -16.7% 4.00 slabinfo.btrfs_inode.active_slabs
138.40 ± 48% -19.1% 112.00 slabinfo.btrfs_inode.num_objs
4.80 ± 44% -16.7% 4.00 slabinfo.btrfs_inode.num_slabs
17.40 ±200% -100.0% 0.00 slabinfo.btrfs_ordered_extent.active_objs
0.40 ±200% -100.0% 0.00 slabinfo.btrfs_ordered_extent.active_slabs
17.40 ±200% -100.0% 0.00 slabinfo.btrfs_ordered_extent.num_objs
0.40 ±200% -100.0% 0.00 slabinfo.btrfs_ordered_extent.num_slabs
565.00 ± 12% -12.6% 494.00 ± 3% slabinfo.buffer_head.active_objs
14.40 ± 11% -12.0% 12.67 ± 3% slabinfo.buffer_head.active_slabs
565.00 ± 12% -12.6% 494.00 ± 3% slabinfo.buffer_head.num_objs
14.40 ± 11% -12.0% 12.67 ± 3% slabinfo.buffer_head.num_slabs
5631 -1.0% 5577 slabinfo.cred_jar.active_objs
133.40 -0.8% 132.33 slabinfo.cred_jar.active_slabs
5631 -1.0% 5577 slabinfo.cred_jar.num_objs
133.40 -0.8% 132.33 slabinfo.cred_jar.num_slabs
126.00 -0.5% 125.33 slabinfo.dax_cache.active_objs
3.00 -22.2% 2.33 ± 20% slabinfo.dax_cache.active_slabs
126.00 -0.5% 125.33 slabinfo.dax_cache.num_objs
3.00 -22.2% 2.33 ± 20% slabinfo.dax_cache.num_slabs
117579 ± 2% +0.3% 117936 slabinfo.dentry.active_objs
2830 ± 2% +0.1% 2833 slabinfo.dentry.active_slabs
118902 ± 2% +0.1% 119026 slabinfo.dentry.num_objs
2830 ± 2% +0.1% 2833 slabinfo.dentry.num_slabs
32.00 +0.0% 32.00 slabinfo.dma-kmalloc-512.active_objs
1.00 +0.0% 1.00 slabinfo.dma-kmalloc-512.active_slabs
32.00 +0.0% 32.00 slabinfo.dma-kmalloc-512.num_objs
1.00 +0.0% 1.00 slabinfo.dma-kmalloc-512.num_slabs
30.00 +0.0% 30.00 slabinfo.dmaengine-unmap-128.active_objs
1.00 +0.0% 1.00 slabinfo.dmaengine-unmap-128.active_slabs
30.00 +0.0% 30.00 slabinfo.dmaengine-unmap-128.num_objs
1.00 +0.0% 1.00 slabinfo.dmaengine-unmap-128.num_slabs
1676 ± 10% +9.7% 1839 ± 17% slabinfo.dmaengine-unmap-16.active_objs
39.60 ± 10% +10.3% 43.67 ± 16% slabinfo.dmaengine-unmap-16.active_slabs
1676 ± 10% +9.7% 1839 ± 17% slabinfo.dmaengine-unmap-16.num_objs
39.60 ± 10% +10.3% 43.67 ± 16% slabinfo.dmaengine-unmap-16.num_slabs
15.00 +0.0% 15.00 slabinfo.dmaengine-unmap-256.active_objs
1.00 +0.0% 1.00 slabinfo.dmaengine-unmap-256.active_slabs
15.00 +0.0% 15.00 slabinfo.dmaengine-unmap-256.num_objs
1.00 +0.0% 1.00 slabinfo.dmaengine-unmap-256.num_slabs
4550 ± 8% +3.5% 4707 ± 2% slabinfo.eventpoll_pwq.active_objs
81.20 ± 8% +3.4% 84.00 ± 2% slabinfo.eventpoll_pwq.active_slabs
4550 ± 8% +3.5% 4707 ± 2% slabinfo.eventpoll_pwq.num_objs
81.20 ± 8% +3.4% 84.00 ± 2% slabinfo.eventpoll_pwq.num_slabs
1115 ± 14% +8.2% 1207 ± 11% slabinfo.file_lock_cache.active_objs
29.60 ± 14% +8.1% 32.00 ± 13% slabinfo.file_lock_cache.active_slabs
1115 ± 14% +8.2% 1207 ± 11% slabinfo.file_lock_cache.num_objs
29.60 ± 14% +8.1% 32.00 ± 13% slabinfo.file_lock_cache.num_slabs
4698 +1.6% 4772 slabinfo.files_cache.active_objs
101.60 +1.7% 103.33 slabinfo.files_cache.active_slabs
4698 +1.6% 4772 slabinfo.files_cache.num_objs
101.60 +1.7% 103.33 slabinfo.files_cache.num_slabs
28716 ± 3% +5.0% 30138 slabinfo.filp.active_objs
926.20 ± 4% +3.9% 962.67 slabinfo.filp.active_slabs
29652 ± 4% +3.9% 30817 slabinfo.filp.num_objs
926.20 ± 4% +3.9% 962.67 slabinfo.filp.num_slabs
3120 ± 15% -7.0% 2901 ± 5% slabinfo.fsnotify_mark_connector.active_objs
24.20 ± 15% -6.3% 22.67 ± 5% slabinfo.fsnotify_mark_connector.active_slabs
3120 ± 15% -7.0% 2901 ± 5% slabinfo.fsnotify_mark_connector.num_objs
24.20 ± 15% -6.3% 22.67 ± 5% slabinfo.fsnotify_mark_connector.num_slabs
31892 +0.9% 32178 slabinfo.ftrace_event_field.active_objs
375.20 +0.7% 378.00 slabinfo.ftrace_event_field.active_slabs
31892 +0.9% 32178 slabinfo.ftrace_event_field.num_objs
375.20 +0.7% 378.00 slabinfo.ftrace_event_field.num_slabs
104.00 +0.0% 104.00 slabinfo.hugetlbfs_inode_cache.active_objs
2.00 +0.0% 2.00 slabinfo.hugetlbfs_inode_cache.active_slabs
104.00 +0.0% 104.00 slabinfo.hugetlbfs_inode_cache.num_objs
2.00 +0.0% 2.00 slabinfo.hugetlbfs_inode_cache.num_slabs
66863 -0.4% 66623 slabinfo.inode_cache.active_objs
1242 -0.3% 1238 slabinfo.inode_cache.active_slabs
67098 -0.3% 66875 slabinfo.inode_cache.num_objs
1242 -0.3% 1238 slabinfo.inode_cache.num_slabs
87083 -0.2% 86908 slabinfo.kernfs_node_cache.active_objs
2720 -0.2% 2715 slabinfo.kernfs_node_cache.active_slabs
87083 -0.2% 86908 slabinfo.kernfs_node_cache.num_objs
2720 -0.2% 2715 slabinfo.kernfs_node_cache.num_slabs
2743 ± 17% +22.3% 3354 slabinfo.khugepaged_mm_slot.active_objs
75.60 ± 17% +22.6% 92.67 slabinfo.khugepaged_mm_slot.active_slabs
2743 ± 17% +22.3% 3354 slabinfo.khugepaged_mm_slot.num_objs
75.60 ± 17% +22.6% 92.67 slabinfo.khugepaged_mm_slot.num_slabs
4956 -0.3% 4940 ± 2% slabinfo.kmalloc-128.active_objs
156.80 ± 2% -0.5% 156.00 slabinfo.kmalloc-128.active_slabs
5019 ± 2% -0.3% 5002 slabinfo.kmalloc-128.num_objs
156.80 ± 2% -0.5% 156.00 slabinfo.kmalloc-128.num_slabs
35368 +0.5% 35535 slabinfo.kmalloc-16.active_objs
139.00 +1.4% 141.00 slabinfo.kmalloc-16.active_slabs
35632 +1.3% 36096 slabinfo.kmalloc-16.num_objs
139.00 +1.4% 141.00 slabinfo.kmalloc-16.num_slabs
6036 -0.2% 6026 slabinfo.kmalloc-192.active_objs
146.80 -0.3% 146.33 slabinfo.kmalloc-192.active_slabs
6165 -0.1% 6159 slabinfo.kmalloc-192.num_objs
146.80 -0.3% 146.33 slabinfo.kmalloc-192.num_slabs
6378 ± 2% +4.8% 6687 slabinfo.kmalloc-1k.active_objs
202.00 ± 2% +4.0% 210.00 slabinfo.kmalloc-1k.active_slabs
6464 ± 2% +4.1% 6730 slabinfo.kmalloc-1k.num_objs
202.00 ± 2% +4.0% 210.00 slabinfo.kmalloc-1k.num_slabs
8765 ± 4% -0.9% 8687 ± 3% slabinfo.kmalloc-256.active_objs
276.40 ± 4% -1.6% 272.00 ± 3% slabinfo.kmalloc-256.active_slabs
8864 ± 4% -1.7% 8718 ± 3% slabinfo.kmalloc-256.num_objs
276.40 ± 4% -1.6% 272.00 ± 3% slabinfo.kmalloc-256.num_slabs
6091 ± 4% -1.4% 6004 ± 2% slabinfo.kmalloc-2k.active_objs
386.80 ± 4% -1.2% 382.33 ± 2% slabinfo.kmalloc-2k.active_slabs
6198 ± 4% -1.2% 6125 ± 2% slabinfo.kmalloc-2k.num_objs
386.80 ± 4% -1.2% 382.33 ± 2% slabinfo.kmalloc-2k.num_slabs
90267 ± 2% +0.2% 90479 slabinfo.kmalloc-32.active_objs
705.20 ± 2% +0.4% 708.00 slabinfo.kmalloc-32.active_slabs
90374 ± 2% +0.4% 90705 slabinfo.kmalloc-32.num_objs
705.20 ± 2% +0.4% 708.00 slabinfo.kmalloc-32.num_slabs
2132 ± 4% -4.2% 2043 slabinfo.kmalloc-4k.active_objs
273.40 ± 4% -5.0% 259.67 slabinfo.kmalloc-4k.active_slabs
2189 ± 4% -4.9% 2081 slabinfo.kmalloc-4k.num_objs
273.40 ± 4% -5.0% 259.67 slabinfo.kmalloc-4k.num_slabs
60206 +66.6% 100322 slabinfo.kmalloc-512.active_objs
2210 ± 2% +66.3% 3676 ± 3% slabinfo.kmalloc-512.active_slabs
70766 ± 2% +66.3% 117666 ± 3% slabinfo.kmalloc-512.num_objs
2210 ± 2% +66.3% 3676 ± 3% slabinfo.kmalloc-512.num_slabs
53775 +0.8% 54179 slabinfo.kmalloc-64.active_objs
840.60 +0.8% 847.00 slabinfo.kmalloc-64.active_slabs
53835 +0.8% 54247 slabinfo.kmalloc-64.num_objs
840.60 +0.8% 847.00 slabinfo.kmalloc-64.num_slabs
54761 +1.2% 55439 slabinfo.kmalloc-8.active_objs
109.00 +0.9% 110.00 slabinfo.kmalloc-8.active_slabs
55808 +0.9% 56320 slabinfo.kmalloc-8.num_objs
109.00 +0.9% 110.00 slabinfo.kmalloc-8.num_slabs
898.40 +0.5% 902.67 slabinfo.kmalloc-8k.active_objs
229.80 +0.4% 230.67 slabinfo.kmalloc-8k.active_slabs
920.80 +0.4% 924.67 slabinfo.kmalloc-8k.num_objs
229.80 +0.4% 230.67 slabinfo.kmalloc-8k.num_slabs
7901 ± 3% -2.6% 7691 slabinfo.kmalloc-96.active_objs
188.80 ± 4% -2.0% 185.00 slabinfo.kmalloc-96.active_slabs
7957 ± 3% -2.4% 7770 slabinfo.kmalloc-96.num_objs
188.80 ± 4% -2.0% 185.00 slabinfo.kmalloc-96.num_slabs
1107 ± 21% -26.8% 810.67 ± 12% slabinfo.kmalloc-rcl-128.active_objs
34.60 ± 21% -26.8% 25.33 ± 12% slabinfo.kmalloc-rcl-128.active_slabs
1107 ± 21% -26.8% 810.67 ± 12% slabinfo.kmalloc-rcl-128.num_objs
34.60 ± 21% -26.8% 25.33 ± 12% slabinfo.kmalloc-rcl-128.num_slabs
302.40 ± 13% -25.9% 224.00 ± 17% slabinfo.kmalloc-rcl-192.active_objs
7.20 ± 13% -25.9% 5.33 ± 17% slabinfo.kmalloc-rcl-192.active_slabs
302.40 ± 13% -25.9% 224.00 ± 17% slabinfo.kmalloc-rcl-192.num_objs
7.20 ± 13% -25.9% 5.33 ± 17% slabinfo.kmalloc-rcl-192.num_slabs
166.40 ± 18% -23.1% 128.00 ± 20% slabinfo.kmalloc-rcl-256.active_objs
5.20 ± 18% -23.1% 4.00 ± 20% slabinfo.kmalloc-rcl-256.active_slabs
166.40 ± 18% -23.1% 128.00 ± 20% slabinfo.kmalloc-rcl-256.num_objs
5.20 ± 18% -23.1% 4.00 ± 20% slabinfo.kmalloc-rcl-256.num_slabs
2136 ± 6% +1.7% 2173 slabinfo.kmalloc-rcl-512.active_objs
66.40 ± 7% +1.9% 67.67 slabinfo.kmalloc-rcl-512.active_slabs
2136 ± 6% +1.7% 2173 slabinfo.kmalloc-rcl-512.num_objs
66.40 ± 7% +1.9% 67.67 slabinfo.kmalloc-rcl-512.num_slabs
6517 ± 4% -5.4% 6163 ± 4% slabinfo.kmalloc-rcl-64.active_objs
101.40 ± 4% -5.3% 96.00 ± 4% slabinfo.kmalloc-rcl-64.active_slabs
6519 ± 4% -5.5% 6164 ± 4% slabinfo.kmalloc-rcl-64.num_objs
101.40 ± 4% -5.3% 96.00 ± 4% slabinfo.kmalloc-rcl-64.num_slabs
3120 ± 11% -16.5% 2604 ± 5% slabinfo.kmalloc-rcl-96.active_objs
74.20 ± 12% -16.4% 62.00 ± 5% slabinfo.kmalloc-rcl-96.active_slabs
3120 ± 11% -16.5% 2604 ± 5% slabinfo.kmalloc-rcl-96.num_objs
74.20 ± 12% -16.4% 62.00 ± 5% slabinfo.kmalloc-rcl-96.num_slabs
377.60 ± 6% +7.3% 405.33 ± 19% slabinfo.kmem_cache.active_objs
11.80 ± 6% +7.3% 12.67 ± 19% slabinfo.kmem_cache.active_slabs
377.60 ± 6% +7.3% 405.33 ± 19% slabinfo.kmem_cache.num_objs
11.80 ± 6% +7.3% 12.67 ± 19% slabinfo.kmem_cache.num_slabs
773.20 ± 6% +15.5% 892.67 ± 17% slabinfo.kmem_cache_node.active_objs
12.80 ± 5% +14.6% 14.67 ± 17% slabinfo.kmem_cache_node.active_slabs
819.20 ± 5% +14.6% 938.67 ± 17% slabinfo.kmem_cache_node.num_objs
12.80 ± 5% +14.6% 14.67 ± 17% slabinfo.kmem_cache_node.num_slabs
17532 -0.2% 17496 slabinfo.lsm_file_cache.active_objs
102.20 -0.2% 102.00 slabinfo.lsm_file_cache.active_slabs
17532 -0.2% 17496 slabinfo.lsm_file_cache.num_objs
102.20 -0.2% 102.00 slabinfo.lsm_file_cache.num_slabs
3205 -0.6% 3185 slabinfo.mm_struct.active_objs
106.00 -0.6% 105.33 slabinfo.mm_struct.active_slabs
3205 -0.6% 3185 slabinfo.mm_struct.num_objs
106.00 -0.6% 105.33 slabinfo.mm_struct.num_slabs
950.40 ± 14% +0.2% 952.00 ± 11% slabinfo.mnt_cache.active_objs
18.60 ± 14% +0.4% 18.67 ± 11% slabinfo.mnt_cache.active_slabs
950.40 ± 14% +0.2% 952.00 ± 11% slabinfo.mnt_cache.num_objs
18.60 ± 14% +0.4% 18.67 ± 11% slabinfo.mnt_cache.num_slabs
34.00 +0.0% 34.00 slabinfo.mqueue_inode_cache.active_objs
1.00 +0.0% 1.00 slabinfo.mqueue_inode_cache.active_slabs
34.00 +0.0% 34.00 slabinfo.mqueue_inode_cache.num_objs
1.00 +0.0% 1.00 slabinfo.mqueue_inode_cache.num_slabs
769.80 -0.0% 769.67 slabinfo.names_cache.active_objs
96.20 -0.2% 96.00 slabinfo.names_cache.active_slabs
769.80 -0.0% 769.67 slabinfo.names_cache.num_objs
96.20 -0.2% 96.00 slabinfo.names_cache.num_slabs
120.00 +0.0% 120.00 slabinfo.nfs_inode_cache.active_objs
4.00 +0.0% 4.00 slabinfo.nfs_inode_cache.active_slabs
120.00 +0.0% 120.00 slabinfo.nfs_inode_cache.num_objs
4.00 +0.0% 4.00 slabinfo.nfs_inode_cache.num_slabs
73.00 ± 13% +2.7% 75.00 ± 13% slabinfo.nfs_read_data.active_objs
1.60 ± 30% +4.2% 1.67 ± 28% slabinfo.nfs_read_data.active_slabs
73.00 ± 13% +2.7% 75.00 ± 13% slabinfo.nfs_read_data.num_objs
1.60 ± 30% +4.2% 1.67 ± 28% slabinfo.nfs_read_data.num_slabs
352.80 ± 26% -18.0% 289.33 ± 20% slabinfo.numa_policy.active_objs
5.60 ± 24% -16.7% 4.67 ± 20% slabinfo.numa_policy.active_slabs
352.80 ± 26% -18.0% 289.33 ± 20% slabinfo.numa_policy.num_objs
5.60 ± 24% -16.7% 4.67 ± 20% slabinfo.numa_policy.num_slabs
9425 +1.8% 9595 slabinfo.pde_opener.active_objs
92.00 +1.4% 93.33 slabinfo.pde_opener.active_slabs
9425 +1.8% 9595 slabinfo.pde_opener.num_objs
92.00 +1.4% 93.33 slabinfo.pde_opener.num_slabs
4537 -1.7% 4459 slabinfo.pid.active_objs
141.20 -1.8% 138.67 slabinfo.pid.active_slabs
4537 -1.7% 4459 slabinfo.pid.num_objs
141.20 -1.8% 138.67 slabinfo.pid.num_slabs
105.80 ± 70% -47.1% 56.00 slabinfo.pid_namespace.active_objs
1.80 ± 64% -44.4% 1.00 slabinfo.pid_namespace.active_slabs
105.80 ± 70% -47.1% 56.00 slabinfo.pid_namespace.num_objs
1.80 ± 64% -44.4% 1.00 slabinfo.pid_namespace.num_slabs
1652 ± 4% +3.4% 1709 ± 6% slabinfo.pool_workqueue.active_objs
51.00 ± 4% +3.3% 52.67 ± 7% slabinfo.pool_workqueue.active_slabs
1652 ± 4% +3.4% 1709 ± 6% slabinfo.pool_workqueue.num_objs
51.00 ± 4% +3.3% 52.67 ± 7% slabinfo.pool_workqueue.num_slabs
3864 -0.7% 3836 slabinfo.proc_dir_entry.active_objs
92.00 -0.7% 91.33 slabinfo.proc_dir_entry.active_slabs
3864 -0.7% 3836 slabinfo.proc_dir_entry.num_objs
92.00 -0.7% 91.33 slabinfo.proc_dir_entry.num_slabs
15443 ± 5% -9.7% 13950 slabinfo.proc_inode_cache.active_objs
329.00 ± 6% -8.3% 301.67 ± 2% slabinfo.proc_inode_cache.active_slabs
15813 ± 6% -8.3% 14501 ± 2% slabinfo.proc_inode_cache.num_objs
329.00 ± 6% -8.3% 301.67 ± 2% slabinfo.proc_inode_cache.num_slabs
255149 -0.4% 254216 slabinfo.radix_tree_node.active_objs
4558 -0.4% 4541 slabinfo.radix_tree_node.active_slabs
255287 -0.4% 254355 slabinfo.radix_tree_node.num_objs
4558 -0.4% 4541 slabinfo.radix_tree_node.num_slabs
134.40 ± 5% +3.2% 138.67 ± 5% slabinfo.request_queue.active_objs
8.40 ± 5% +3.2% 8.67 ± 5% slabinfo.request_queue.active_slabs
134.40 ± 5% +3.2% 138.67 ± 5% slabinfo.request_queue.num_objs
8.40 ± 5% +3.2% 8.67 ± 5% slabinfo.request_queue.num_slabs
91.80 ± 22% +11.1% 102.00 slabinfo.rpc_inode_cache.active_objs
1.80 ± 22% +11.1% 2.00 slabinfo.rpc_inode_cache.active_slabs
91.80 ± 22% +11.1% 102.00 slabinfo.rpc_inode_cache.num_objs
1.80 ± 22% +11.1% 2.00 slabinfo.rpc_inode_cache.num_slabs
928.00 +0.0% 928.00 slabinfo.scsi_sense_cache.active_objs
29.00 +0.0% 29.00 slabinfo.scsi_sense_cache.active_slabs
928.00 +0.0% 928.00 slabinfo.scsi_sense_cache.num_objs
29.00 +0.0% 29.00 slabinfo.scsi_sense_cache.num_slabs
3264 +0.0% 3264 slabinfo.seq_file.active_objs
96.00 +0.0% 96.00 slabinfo.seq_file.active_slabs
3264 +0.0% 3264 slabinfo.seq_file.num_objs
96.00 +0.0% 96.00 slabinfo.seq_file.num_slabs
5449 ± 2% +0.7% 5486 ± 2% slabinfo.shmem_inode_cache.active_objs
117.80 ± 2% +0.5% 118.33 ± 2% slabinfo.shmem_inode_cache.active_slabs
5449 ± 2% +0.7% 5486 ± 2% slabinfo.shmem_inode_cache.num_objs
117.80 ± 2% +0.5% 118.33 ± 2% slabinfo.shmem_inode_cache.num_slabs
2552 ± 3% -2.1% 2498 ± 2% slabinfo.sighand_cache.active_objs
172.20 ± 2% -1.9% 169.00 ± 2% slabinfo.sighand_cache.active_slabs
2588 ± 2% -1.8% 2541 ± 2% slabinfo.sighand_cache.num_objs
172.20 ± 2% -1.9% 169.00 ± 2% slabinfo.sighand_cache.num_slabs
3887 ± 2% -0.3% 3877 slabinfo.signal_cache.active_objs
140.60 ± 2% -0.7% 139.67 slabinfo.signal_cache.active_slabs
3948 ± 2% -0.6% 3926 slabinfo.signal_cache.num_objs
140.60 ± 2% -0.7% 139.67 slabinfo.signal_cache.num_slabs
581.60 ± 14% -3.4% 562.00 ± 10% slabinfo.skbuff_fclone_cache.active_objs
17.60 ± 16% -3.4% 17.00 ± 12% slabinfo.skbuff_fclone_cache.active_slabs
581.60 ± 14% -3.4% 562.00 ± 10% slabinfo.skbuff_fclone_cache.num_objs
17.60 ± 16% -3.4% 17.00 ± 12% slabinfo.skbuff_fclone_cache.num_slabs
4339 ± 10% +5.7% 4585 ± 10% slabinfo.skbuff_head_cache.active_objs
135.60 ± 10% +6.2% 144.00 ± 10% slabinfo.skbuff_head_cache.active_slabs
4345 ± 10% +6.2% 4617 ± 10% slabinfo.skbuff_head_cache.num_objs
135.60 ± 10% +6.2% 144.00 ± 10% slabinfo.skbuff_head_cache.num_slabs
3405 ± 3% -1.3% 3359 slabinfo.sock_inode_cache.active_objs
86.60 ± 2% -1.1% 85.67 slabinfo.sock_inode_cache.active_slabs
3405 ± 3% -1.3% 3359 slabinfo.sock_inode_cache.num_objs
86.60 ± 2% -1.1% 85.67 slabinfo.sock_inode_cache.num_slabs
6129 +0.7% 6169 slabinfo.task_delay_info.active_objs
119.80 +0.7% 120.67 slabinfo.task_delay_info.active_slabs
6129 +0.7% 6169 slabinfo.task_delay_info.num_objs
119.80 +0.7% 120.67 slabinfo.task_delay_info.num_slabs
1296 ± 10% +1.7% 1318 ± 3% slabinfo.task_group.active_objs
27.60 ± 10% +3.9% 28.67 ± 3% slabinfo.task_group.active_slabs
1296 ± 10% +1.7% 1318 ± 3% slabinfo.task_group.num_objs
27.60 ± 10% +3.9% 28.67 ± 3% slabinfo.task_group.num_slabs
1593 ± 4% -3.5% 1538 ± 2% slabinfo.task_struct.active_objs
1599 ± 4% -3.4% 1544 ± 2% slabinfo.task_struct.active_slabs
1599 ± 4% -3.4% 1544 ± 2% slabinfo.task_struct.num_objs
1599 ± 4% -3.4% 1544 ± 2% slabinfo.task_struct.num_slabs
105.80 -2.0% 103.67 slabinfo.taskstats.active_objs
2.00 +0.0% 2.00 slabinfo.taskstats.active_slabs
105.80 -2.0% 103.67 slabinfo.taskstats.num_objs
2.00 +0.0% 2.00 slabinfo.taskstats.num_slabs
2633 ± 2% -3.3% 2545 ± 3% slabinfo.trace_event_file.active_objs
57.20 ± 2% -3.3% 55.33 ± 3% slabinfo.trace_event_file.active_slabs
2633 ± 2% -3.3% 2545 ± 3% slabinfo.trace_event_file.num_objs
57.20 ± 2% -3.3% 55.33 ± 3% slabinfo.trace_event_file.num_slabs
105.60 ± 23% -6.2% 99.00 ± 27% slabinfo.tw_sock_TCP.active_objs
3.20 ± 23% -6.3% 3.00 ± 27% slabinfo.tw_sock_TCP.active_slabs
105.60 ± 23% -6.2% 99.00 ± 27% slabinfo.tw_sock_TCP.num_objs
3.20 ± 23% -6.3% 3.00 ± 27% slabinfo.tw_sock_TCP.num_slabs
1345825 +75.7% 2364088 slabinfo.vm_area_struct.active_objs
40117 +103.2% 81517 slabinfo.vm_area_struct.active_slabs
1604716 +103.2% 3260692 slabinfo.vm_area_struct.num_objs
40117 +103.2% 81517 slabinfo.vm_area_struct.num_slabs
9416 ± 2% +1.7% 9574 ± 2% slabinfo.vmap_area.active_objs
147.00 ± 2% +1.6% 149.33 ± 2% slabinfo.vmap_area.active_slabs
9437 ± 2% +1.6% 9584 ± 2% slabinfo.vmap_area.num_objs
147.00 ± 2% +1.6% 149.33 ± 2% slabinfo.vmap_area.num_slabs
80.60 ± 29% -1.2% 79.67 ± 14% slabinfo.xfrm_state.active_objs
1.40 ± 34% -4.8% 1.33 ± 35% slabinfo.xfrm_state.active_slabs
80.60 ± 29% -1.2% 79.67 ± 14% slabinfo.xfrm_state.num_objs
1.40 ± 34% -4.8% 1.33 ± 35% slabinfo.xfrm_state.num_slabs
784.80 ± 2% -0.1% 784.33 softirqs.BLOCK
8.80 ± 84% +21.2% 10.67 ±104% softirqs.CPU0.BLOCK
1.00 +0.0% 1.00 softirqs.CPU0.HI
110.40 ± 41% -25.1% 82.67 ± 58% softirqs.CPU0.NET_RX
4.00 ± 47% -75.0% 1.00 ± 81% softirqs.CPU0.NET_TX
6886 ± 2% -51.0% 3373 ± 3% softirqs.CPU0.RCU
4936 ± 10% +1.1% 4992 ± 9% softirqs.CPU0.SCHED
113.20 -0.2% 113.00 softirqs.CPU0.TASKLET
1962 ± 22% -46.8% 1044 ± 32% softirqs.CPU0.TIMER
4.40 ±177% +248.5% 15.33 ± 91% softirqs.CPU1.BLOCK
47.40 ± 22% +2619.4% 1289 ± 99% softirqs.CPU1.NET_RX
1.00 ± 63% +500.0% 6.00 softirqs.CPU1.NET_TX
2997 ± 7% +68.3% 5046 ± 32% softirqs.CPU1.RCU
4038 ± 15% +8.5% 4382 ± 5% softirqs.CPU1.SCHED
0.00 +1.7e+102% 1.67 ± 74% softirqs.CPU1.TASKLET
205.40 ± 41% +369.5% 964.33 ± 63% softirqs.CPU1.TIMER
7.20 ± 88% -100.0% 0.00 softirqs.CPU10.BLOCK
1936 ± 7% +36.6% 2645 softirqs.CPU10.RCU
3043 ± 11% -8.4% 2787 ± 8% softirqs.CPU10.SCHED
170.00 ± 34% -63.3% 62.33 ± 14% softirqs.CPU10.TIMER
7.20 ±186% +62.0% 11.67 ± 74% softirqs.CPU11.BLOCK
1.00 +0.0% 1.00 softirqs.CPU11.HI
0.00 +3.3e+101% 0.33 ±141% softirqs.CPU11.NET_RX
1972 ± 7% +34.2% 2648 ± 2% softirqs.CPU11.RCU
2998 ± 7% -7.8% 2764 ± 3% softirqs.CPU11.SCHED
54.00 +0.0% 54.00 softirqs.CPU11.TASKLET
333.40 ± 62% -81.5% 61.67 softirqs.CPU11.TIMER
2.00 ±154% -66.7% 0.67 ±141% softirqs.CPU12.BLOCK
0.00 +6.7e+101% 0.67 ±141% softirqs.CPU12.NET_RX
2196 ± 22% +21.0% 2658 softirqs.CPU12.RCU
2828 ± 7% +4.0% 2942 ± 12% softirqs.CPU12.SCHED
193.40 ± 69% -65.9% 66.00 ± 11% softirqs.CPU12.TIMER
8.00 ±154% +8.3% 8.67 ±110% softirqs.CPU13.BLOCK
2077 ± 8% +31.0% 2720 ± 6% softirqs.CPU13.RCU
3100 ± 13% -4.9% 2947 ± 4% softirqs.CPU13.SCHED
178.00 ± 27% +58.1% 281.33 ± 25% softirqs.CPU13.TIMER
3.20 ±170% -89.6% 0.33 ±141% softirqs.CPU14.BLOCK
1970 ± 5% +63.8% 3227 ± 19% softirqs.CPU14.RCU
3065 ± 8% -4.7% 2922 ± 3% softirqs.CPU14.SCHED
253.40 ± 34% -50.5% 125.33 ± 29% softirqs.CPU14.TIMER
0.00 +6.7e+101% 0.67 ±141% softirqs.CPU15.BLOCK
2.00 +0.0% 2.00 softirqs.CPU15.NET_RX
2007 ± 8% +38.1% 2773 ± 4% softirqs.CPU15.RCU
2821 ± 12% +15.3% 3254 ± 10% softirqs.CPU15.SCHED
2.00 +0.0% 2.00 softirqs.CPU15.TASKLET
351.00 ± 45% -19.5% 282.67 ± 91% softirqs.CPU15.TIMER
6.60 ±177% +233.3% 22.00 ± 94% softirqs.CPU16.BLOCK
2.00 +0.0% 2.00 softirqs.CPU16.NET_RX
2043 ± 14% +31.6% 2689 ± 4% softirqs.CPU16.RCU
3162 ± 7% -7.8% 2917 ± 8% softirqs.CPU16.SCHED
1.60 ± 50% -58.3% 0.67 ±141% softirqs.CPU16.TASKLET
384.00 ±115% +62.8% 625.00 ±114% softirqs.CPU16.TIMER
21.00 ±100% -77.8% 4.67 ±141% softirqs.CPU17.BLOCK
2.00 +0.0% 2.00 softirqs.CPU17.NET_RX
2012 ± 9% +26.7% 2550 ± 5% softirqs.CPU17.RCU
3035 ± 4% -13.7% 2620 ± 6% softirqs.CPU17.SCHED
12.00 ±183% -83.3% 2.00 softirqs.CPU17.TASKLET
152.40 ± 56% +316.9% 635.33 ±123% softirqs.CPU17.TIMER
1.80 ±123% +214.8% 5.67 ±141% softirqs.CPU18.BLOCK
2.00 +0.0% 2.00 softirqs.CPU18.NET_RX
1936 ± 8% +43.1% 2771 ± 6% softirqs.CPU18.RCU
2916 ± 4% -2.4% 2847 ± 6% softirqs.CPU18.SCHED
1.60 ± 50% +25.0% 2.00 softirqs.CPU18.TASKLET
208.60 ± 45% -58.1% 87.33 ± 36% softirqs.CPU18.TIMER
0.60 ±133% +2177.8% 13.67 ±141% softirqs.CPU19.BLOCK
2.00 +0.0% 2.00 softirqs.CPU19.NET_RX
1925 ± 6% +42.4% 2742 softirqs.CPU19.RCU
2641 ± 18% +6.4% 2810 ± 4% softirqs.CPU19.SCHED
2.00 -33.3% 1.33 ± 70% softirqs.CPU19.TASKLET
191.80 ± 36% -30.7% 133.00 ± 34% softirqs.CPU19.TIMER
4.40 ±200% -84.8% 0.67 ±141% softirqs.CPU2.BLOCK
2611 ± 3% +35.0% 3526 ± 9% softirqs.CPU2.RCU
3550 ± 11% -12.7% 3099 ± 6% softirqs.CPU2.SCHED
0.20 ±200% -100.0% 0.00 softirqs.CPU2.TASKLET
431.60 ± 77% -47.9% 225.00 ± 20% softirqs.CPU2.TIMER
2.00 +0.0% 2.00 softirqs.CPU20.NET_RX
0.00 -100.0% 0.00 softirqs.CPU20.NET_TX
1932 ± 8% +44.5% 2792 ± 2% softirqs.CPU20.RCU
2811 ± 8% -1.4% 2772 ± 4% softirqs.CPU20.SCHED
2.00 +0.0% 2.00 softirqs.CPU20.TASKLET
217.40 ± 54% -51.5% 105.33 ± 20% softirqs.CPU20.TIMER
0.00 +3.3e+101% 0.33 ±141% softirqs.CPU21.BLOCK
2.20 ± 18% -9.1% 2.00 softirqs.CPU21.NET_RX
1926 ± 6% +36.4% 2628 ± 3% softirqs.CPU21.RCU
2679 ± 7% +4.9% 2809 softirqs.CPU21.SCHED
2.00 +0.0% 2.00 softirqs.CPU21.TASKLET
259.80 ± 40% -28.5% 185.67 ± 56% softirqs.CPU21.TIMER
6.40 ±200% -100.0% 0.00 softirqs.CPU22.BLOCK
2.00 +0.0% 2.00 softirqs.CPU22.NET_RX
2088 ± 20% +50.8% 3149 ± 26% softirqs.CPU22.RCU
3141 ± 10% -10.1% 2825 ± 19% softirqs.CPU22.SCHED
1.60 ± 50% +25.0% 2.00 softirqs.CPU22.TASKLET
381.20 ± 92% +33.9% 510.33 ±124% softirqs.CPU22.TIMER
2.40 ±200% +2538.9% 63.33 ±141% softirqs.CPU23.BLOCK
0.00 -100.0% 0.00 softirqs.CPU23.NET_TX
1933 ± 12% +39.7% 2700 ± 6% softirqs.CPU23.RCU
2848 ± 7% -2.0% 2792 ± 8% softirqs.CPU23.SCHED
2.00 -33.3% 1.33 ± 70% softirqs.CPU23.TASKLET
343.00 ± 58% -40.0% 205.67 ± 80% softirqs.CPU23.TIMER
20.60 ±116% +5.2% 21.67 ±134% softirqs.CPU24.BLOCK
0.20 ±200% -100.0% 0.00 softirqs.CPU24.NET_RX
2688 ± 3% +31.0% 3521 ± 6% softirqs.CPU24.RCU
2939 ± 9% -11.5% 2601 ± 15% softirqs.CPU24.SCHED
0.00 +3.3e+101% 0.33 ±141% softirqs.CPU24.TASKLET
203.00 ± 31% +140.4% 488.00 ±119% softirqs.CPU24.TIMER
6.40 ±177% -100.0% 0.00 softirqs.CPU25.BLOCK
0.00 -100.0% 0.00 softirqs.CPU25.NET_TX
2737 ± 8% +13.9% 3119 ± 2% softirqs.CPU25.RCU
3319 ± 12% -17.1% 2750 ± 7% softirqs.CPU25.SCHED
0.00 -100.0% 0.00 softirqs.CPU25.TASKLET
273.40 ± 46% -62.1% 103.67 ± 35% softirqs.CPU25.TIMER
4.00 ±114% +408.3% 20.33 ±127% softirqs.CPU26.BLOCK
1.00 ±200% -100.0% 0.00 softirqs.CPU26.NET_TX
2317 ± 5% +30.1% 3014 ± 2% softirqs.CPU26.RCU
3248 ± 16% -4.7% 3095 ± 12% softirqs.CPU26.SCHED
0.00 -100.0% 0.00 softirqs.CPU26.TASKLET
403.00 ±125% -21.2% 317.67 ± 97% softirqs.CPU26.TIMER
12.40 ±200% -11.3% 11.00 ±128% softirqs.CPU27.BLOCK
2191 ± 6% +25.8% 2756 ± 3% softirqs.CPU27.RCU
2933 ± 6% -7.3% 2719 ± 7% softirqs.CPU27.SCHED
1.60 ± 30% +25.0% 2.00 softirqs.CPU27.TASKLET
360.20 ± 43% +62.5% 585.33 ± 93% softirqs.CPU27.TIMER
0.00 +4.3e+103% 42.67 ±141% softirqs.CPU28.BLOCK
2005 ± 3% +38.8% 2783 ± 2% softirqs.CPU28.RCU
3058 ± 5% -11.0% 2720 ± 2% softirqs.CPU28.SCHED
6.20 ±143% -67.7% 2.00 softirqs.CPU28.TASKLET
305.40 ± 60% -62.3% 115.00 ± 37% softirqs.CPU28.TIMER
5.80 ±200% +256.3% 20.67 ±141% softirqs.CPU29.BLOCK
0.00 +1.3e+102% 1.33 ±141% softirqs.CPU29.NET_TX
2118 ± 18% +31.3% 2782 ± 12% softirqs.CPU29.RCU
2837 ± 8% +6.4% 3019 ± 13% softirqs.CPU29.SCHED
9.40 ±127% -78.7% 2.00 softirqs.CPU29.TASKLET
199.20 ± 43% +380.8% 957.67 ± 56% softirqs.CPU29.TIMER
2.40 ±200% +11.1% 2.67 ±141% softirqs.CPU3.BLOCK
0.80 ±200% -100.0% 0.00 softirqs.CPU3.NET_TX
2413 ± 7% +33.8% 3229 ± 5% softirqs.CPU3.RCU
3226 ± 18% -6.5% 3018 ± 2% softirqs.CPU3.SCHED
0.20 ±200% -100.0% 0.00 softirqs.CPU3.TASKLET
422.40 ±111% +3.6% 437.67 ±112% softirqs.CPU3.TIMER
0.00 -100.0% 0.00 softirqs.CPU30.BLOCK
0.00 +3.3e+101% 0.33 ±141% softirqs.CPU30.NET_TX
1920 ± 8% +61.7% 3104 ± 14% softirqs.CPU30.RCU
2959 ± 8% -3.3% 2860 ± 4% softirqs.CPU30.SCHED
1.60 ± 30% +25.0% 2.00 softirqs.CPU30.TASKLET
220.20 ± 59% -33.1% 147.33 ± 69% softirqs.CPU30.TIMER
9.00 ±194% -81.5% 1.67 ±141% softirqs.CPU31.BLOCK
0.00 +3.3e+101% 0.33 ±141% softirqs.CPU31.NET_RX
1966 ± 6% +39.7% 2747 ± 5% softirqs.CPU31.RCU
3207 ± 9% -9.2% 2911 ± 7% softirqs.CPU31.SCHED
1.80 ± 22% -7.4% 1.67 ± 28% softirqs.CPU31.TASKLET
240.60 ± 35% -25.3% 179.67 ± 46% softirqs.CPU31.TIMER
27.40 ±198% -61.1% 10.67 ±141% softirqs.CPU32.BLOCK
0.00 -100.0% 0.00 softirqs.CPU32.NET_RX
1947 ± 10% +37.3% 2674 ± 13% softirqs.CPU32.RCU
3235 ± 9% -17.0% 2685 ± 6% softirqs.CPU32.SCHED
1.80 ± 22% +103.7% 3.67 ± 46% softirqs.CPU32.TASKLET
154.00 ± 31% -19.5% 124.00 ± 23% softirqs.CPU32.TIMER
12.40 ±200% -100.0% 0.00 softirqs.CPU33.BLOCK
1840 ± 5% +29.6% 2386 ± 4% softirqs.CPU33.RCU
2970 ± 11% -4.2% 2846 ± 5% softirqs.CPU33.SCHED
1.80 ± 22% +11.1% 2.00 softirqs.CPU33.TASKLET
231.60 ± 45% +32.3% 306.33 ±107% softirqs.CPU33.TIMER
0.00 +3.3e+101% 0.33 ±141% softirqs.CPU34.BLOCK
1854 ± 10% +34.2% 2488 softirqs.CPU34.RCU
2674 ± 4% +10.1% 2946 ± 11% softirqs.CPU34.SCHED
2.00 +0.0% 2.00 softirqs.CPU34.TASKLET
177.80 ± 37% +22.2% 217.33 ± 22% softirqs.CPU34.TIMER
21.00 ±145% +73.0% 36.33 ± 67% softirqs.CPU35.BLOCK
1991 ± 10% +24.0% 2468 ± 6% softirqs.CPU35.RCU
3255 ± 8% -14.1% 2795 ± 11% softirqs.CPU35.SCHED
0.00 +3e+102% 3.00 ±141% softirqs.CPU35.TASKLET
220.40 ± 54% -57.5% 93.67 ± 47% softirqs.CPU35.TIMER
13.60 ±178% -95.1% 0.67 ±141% softirqs.CPU36.BLOCK
1841 ± 9% +31.7% 2425 ± 5% softirqs.CPU36.RCU
3068 ± 6% -8.5% 2809 ± 8% softirqs.CPU36.SCHED
151.40 ± 40% +118.4% 330.67 ± 87% softirqs.CPU36.TIMER
14.00 ±172% -100.0% 0.00 softirqs.CPU37.BLOCK
1864 ± 9% +23.2% 2297 softirqs.CPU37.RCU
2907 ± 6% -1.9% 2852 ± 6% softirqs.CPU37.SCHED
0.00 +3.3e+101% 0.33 ±141% softirqs.CPU37.TASKLET
228.00 ± 27% -43.1% 129.67 ± 53% softirqs.CPU37.TIMER
1.80 ±150% +992.6% 19.67 ±141% softirqs.CPU38.BLOCK
0.00 -100.0% 0.00 softirqs.CPU38.NET_TX
1803 ± 4% +34.9% 2433 ± 11% softirqs.CPU38.RCU
3001 ± 10% +3.3% 3100 ± 8% softirqs.CPU38.SCHED
189.00 ± 67% -66.8% 62.67 softirqs.CPU38.TIMER
25.60 ±140% -100.0% 0.00 softirqs.CPU39.BLOCK
0.20 ±200% -100.0% 0.00 softirqs.CPU39.NET_RX
1794 ± 7% +35.5% 2431 ± 3% softirqs.CPU39.RCU
3072 ± 6% -7.9% 2830 ± 7% softirqs.CPU39.SCHED
7.20 ±125% -100.0% 0.00 softirqs.CPU39.TASKLET
159.60 ± 35% -48.0% 83.00 ± 26% softirqs.CPU39.TIMER
0.00 +8.7e+102% 8.67 ±141% softirqs.CPU4.BLOCK
0.20 ±200% -100.0% 0.00 softirqs.CPU4.NET_RX
2273 ± 13% +36.7% 3107 softirqs.CPU4.RCU
3131 ± 3% +1.1% 3166 ± 10% softirqs.CPU4.SCHED
253.20 ± 58% +8.0% 273.33 ± 36% softirqs.CPU4.TIMER
6.20 ±176% -67.7% 2.00 ±141% softirqs.CPU40.BLOCK
0.20 ±200% -100.0% 0.00 softirqs.CPU40.NET_RX
1860 ± 2% +28.4% 2389 ± 7% softirqs.CPU40.RCU
3268 ± 12% -13.1% 2841 ± 7% softirqs.CPU40.SCHED
0.00 +6.7e+101% 0.67 ±141% softirqs.CPU40.TASKLET
249.00 ± 28% -46.6% 133.00 ± 48% softirqs.CPU40.TIMER
5.80 ±175% -82.8% 1.00 ± 81% softirqs.CPU41.BLOCK
1919 ± 6% +43.6% 2756 ± 8% softirqs.CPU41.RCU
3147 ± 7% -4.0% 3023 ± 3% softirqs.CPU41.SCHED
295.40 ± 36% -38.4% 182.00 ± 58% softirqs.CPU41.TIMER
0.20 ±200% +11400.0% 23.00 ±141% softirqs.CPU42.BLOCK
2008 ± 4% +78.3% 3580 ± 47% softirqs.CPU42.RCU
3071 ± 14% -10.0% 2763 ± 5% softirqs.CPU42.SCHED
297.00 ± 30% +56.5% 464.67 ±120% softirqs.CPU42.TIMER
13.20 ±184% -100.0% 0.00 softirqs.CPU43.BLOCK
2040 ± 15% +25.7% 2564 ± 11% softirqs.CPU43.RCU
3052 ± 5% +5.5% 3219 ± 11% softirqs.CPU43.SCHED
0.00 -100.0% 0.00 softirqs.CPU43.TASKLET
244.80 ± 36% +94.3% 475.67 ± 71% softirqs.CPU43.TIMER
12.40 ±200% -46.2% 6.67 ±141% softirqs.CPU44.BLOCK
1874 ± 16% +23.8% 2319 ± 9% softirqs.CPU44.RCU
2901 ± 5% -3.1% 2811 ± 3% softirqs.CPU44.SCHED
267.20 ± 52% +84.8% 493.67 ± 94% softirqs.CPU44.TIMER
38.60 ±196% -56.0% 17.00 ± 71% softirqs.CPU45.BLOCK
1791 ± 3% +28.8% 2307 ± 3% softirqs.CPU45.RCU
3169 ± 4% -11.0% 2820 ± 3% softirqs.CPU45.SCHED
0.00 -100.0% 0.00 softirqs.CPU45.TASKLET
189.80 ± 57% -64.2% 68.00 ± 14% softirqs.CPU45.TIMER
6.40 ±200% +124.0% 14.33 ±141% softirqs.CPU46.BLOCK
0.20 ±200% -100.0% 0.00 softirqs.CPU46.NET_RX
1904 ± 9% +23.7% 2355 softirqs.CPU46.RCU
3032 ± 5% -6.1% 2846 ± 8% softirqs.CPU46.SCHED
0.20 ±200% -100.0% 0.00 softirqs.CPU46.TASKLET
167.60 ± 35% -10.9% 149.33 ± 10% softirqs.CPU46.TIMER
14.00 ± 94% +40.5% 19.67 ± 60% softirqs.CPU47.BLOCK
0.20 ±200% -100.0% 0.00 softirqs.CPU47.NET_RX
2091 ± 16% +11.2% 2325 ± 5% softirqs.CPU47.RCU
2418 ± 11% +16.3% 2812 ± 11% softirqs.CPU47.SCHED
223.60 ± 20% +441.4% 1210 ± 53% softirqs.CPU47.TIMER
200.00 ± 27% -36.8% 126.33 ± 84% softirqs.CPU48.BLOCK
2.00 +0.0% 2.00 softirqs.CPU48.NET_RX
0.40 ±122% -100.0% 0.00 softirqs.CPU48.NET_TX
2589 ± 5% -2.5% 2525 softirqs.CPU48.RCU
2756 ± 13% -0.6% 2739 ± 12% softirqs.CPU48.SCHED
0.20 ±200% +233.3% 0.67 ±141% softirqs.CPU48.TASKLET
174.60 ± 43% -18.1% 143.00 ± 25% softirqs.CPU48.TIMER
0.00 +3.3e+101% 0.33 ±141% softirqs.CPU49.BLOCK
2.00 +0.0% 2.00 softirqs.CPU49.NET_RX
0.00 +3.3e+101% 0.33 ±141% softirqs.CPU49.NET_TX
2523 ± 13% +5.7% 2666 softirqs.CPU49.RCU
2975 ± 13% -10.7% 2656 ± 3% softirqs.CPU49.SCHED
0.00 +6.7e+102% 6.67 ±141% softirqs.CPU49.TASKLET
222.00 ± 42% -51.1% 108.67 ± 25% softirqs.CPU49.TIMER
0.00 +4e+102% 4.00 ±141% softirqs.CPU5.BLOCK
0.20 ±200% -100.0% 0.00 softirqs.CPU5.NET_RX
2125 ± 7% +39.4% 2962 softirqs.CPU5.RCU
3090 ± 4% -6.0% 2904 ± 4% softirqs.CPU5.SCHED
192.20 ± 48% -48.8% 98.33 ± 47% softirqs.CPU5.TIMER
5.20 ±123% -23.1% 4.00 ±141% softirqs.CPU50.BLOCK
2.00 +0.0% 2.00 softirqs.CPU50.NET_RX
2040 ± 15% +64.5% 3355 ± 17% softirqs.CPU50.RCU
3347 ± 18% -6.4% 3132 ± 9% softirqs.CPU50.SCHED
655.80 ± 81% -76.0% 157.67 ± 58% softirqs.CPU50.TIMER
2.00 +0.0% 2.00 softirqs.CPU51.NET_RX
0.20 ±200% -100.0% 0.00 softirqs.CPU51.NET_TX
2047 ± 6% +38.9% 2842 ± 6% softirqs.CPU51.RCU
3110 ± 7% -4.0% 2987 ± 5% softirqs.CPU51.SCHED
0.20 ±200% -100.0% 0.00 softirqs.CPU51.TASKLET
211.20 ± 40% -12.2% 185.33 ± 83% softirqs.CPU51.TIMER
2.60 ±200% -100.0% 0.00 softirqs.CPU52.BLOCK
2.00 +0.0% 2.00 softirqs.CPU52.NET_RX
1919 ± 10% +43.9% 2762 ± 15% softirqs.CPU52.RCU
2986 ± 8% +9.9% 3282 ± 22% softirqs.CPU52.SCHED
354.40 ± 68% +31.5% 466.00 ±114% softirqs.CPU52.TIMER
0.00 +4.7e+102% 4.67 ±141% softirqs.CPU53.BLOCK
2.00 +0.0% 2.00 softirqs.CPU53.NET_RX
2139 ± 20% +18.9% 2544 softirqs.CPU53.RCU
3264 ± 12% -9.9% 2942 ± 11% softirqs.CPU53.SCHED
350.80 ±123% -72.0% 98.33 ± 46% softirqs.CPU53.TIMER
2.20 ± 18% -9.1% 2.00 softirqs.CPU54.NET_RX
0.00 -100.0% 0.00 softirqs.CPU54.NET_TX
1920 ± 13% +44.9% 2783 ± 13% softirqs.CPU54.RCU
2970 ± 6% -4.8% 2828 ± 4% softirqs.CPU54.SCHED
145.80 ± 25% -16.8% 121.33 ± 35% softirqs.CPU54.TIMER
2.40 ±200% -100.0% 0.00 softirqs.CPU55.BLOCK
2.20 ± 18% -9.1% 2.00 softirqs.CPU55.NET_RX
1727 ± 5% +55.4% 2683 ± 11% softirqs.CPU55.RCU
2990 ± 8% -1.4% 2948 ± 9% softirqs.CPU55.SCHED
3.00 ±200% -88.9% 0.33 ±141% softirqs.CPU55.TASKLET
182.60 ± 44% -25.3% 136.33 ± 19% softirqs.CPU55.TIMER
4.00 ±200% -100.0% 0.00 softirqs.CPU56.BLOCK
0.60 ±200% -100.0% 0.00 softirqs.CPU56.NET_TX
1653 ± 2% +51.7% 2509 ± 3% softirqs.CPU56.RCU
3105 ± 10% -4.7% 2961 ± 11% softirqs.CPU56.SCHED
1.20 ±200% -100.0% 0.00 softirqs.CPU56.TASKLET
406.00 ±135% -67.2% 133.00 ± 65% softirqs.CPU56.TIMER
4.00 ±126% +50.0% 6.00 ±141% softirqs.CPU57.BLOCK
2.00 +0.0% 2.00 softirqs.CPU57.NET_RX
0.20 ±200% -100.0% 0.00 softirqs.CPU57.NET_TX
1907 ± 9% +27.4% 2429 softirqs.CPU57.RCU
3110 ± 14% -4.8% 2962 ± 9% softirqs.CPU57.SCHED
195.80 ± 27% +315.4% 813.33 ± 66% softirqs.CPU57.TIMER
4.80 ±200% -44.4% 2.67 ± 93% softirqs.CPU58.BLOCK
2.00 +0.0% 2.00 softirqs.CPU58.NET_RX
2147 ± 25% +15.5% 2479 ± 6% softirqs.CPU58.RCU
3032 ± 7% -0.1% 3029 ± 8% softirqs.CPU58.SCHED
0.00 +3.3e+101% 0.33 ±141% softirqs.CPU58.TASKLET
168.80 ± 46% +9.6% 185.00 ± 78% softirqs.CPU58.TIMER
3.20 ±122% -68.8% 1.00 ±141% softirqs.CPU59.BLOCK
2.00 +0.0% 2.00 softirqs.CPU59.NET_RX
1792 ± 13% +36.2% 2441 ± 2% softirqs.CPU59.RCU
3260 ± 12% -6.0% 3066 ± 5% softirqs.CPU59.SCHED
174.40 ± 40% -58.9% 71.67 ± 5% softirqs.CPU59.TIMER
0.00 -100.0% 0.00 softirqs.CPU6.BLOCK
0.00 -100.0% 0.00 softirqs.CPU6.NET_RX
2032 ± 5% +61.6% 3285 ± 11% softirqs.CPU6.RCU
3021 ± 15% -8.7% 2757 ± 6% softirqs.CPU6.SCHED
200.20 ± 42% +1.2% 202.67 ± 53% softirqs.CPU6.TIMER
10.80 ±115% -25.9% 8.00 ±106% softirqs.CPU60.BLOCK
2.00 +0.0% 2.00 softirqs.CPU60.NET_RX
1699 ± 4% +42.9% 2428 ± 5% softirqs.CPU60.RCU
3005 ± 8% -5.6% 2835 ± 10% softirqs.CPU60.SCHED
244.80 ± 57% +7.7% 263.67 ± 61% softirqs.CPU60.TIMER
2.80 ±200% -4.8% 2.67 ±141% softirqs.CPU61.BLOCK
2.20 ± 18% -9.1% 2.00 softirqs.CPU61.NET_RX
1950 ± 27% +20.5% 2351 ± 4% softirqs.CPU61.RCU
2924 ± 12% -5.4% 2765 ± 9% softirqs.CPU61.SCHED
137.00 ± 44% +36.3% 186.67 ± 57% softirqs.CPU61.TIMER
8.40 ±122% -44.4% 4.67 ±141% softirqs.CPU62.BLOCK
2.00 +0.0% 2.00 softirqs.CPU62.NET_RX
1741 ± 10% +45.0% 2525 ± 11% softirqs.CPU62.RCU
3140 ± 9% -10.6% 2808 ± 7% softirqs.CPU62.SCHED
223.60 ± 45% -73.2% 60.00 ± 10% softirqs.CPU62.TIMER
2.40 ±200% +11.1% 2.67 ± 93% softirqs.CPU63.BLOCK
2.00 +0.0% 2.00 softirqs.CPU63.NET_RX
1675 ± 8% +36.1% 2279 ± 9% softirqs.CPU63.RCU
2856 ± 10% +3.7% 2963 ± 6% softirqs.CPU63.SCHED
210.40 ± 16% +0.6% 211.67 ± 36% softirqs.CPU63.TIMER
1.20 ±200% -44.4% 0.67 ±141% softirqs.CPU64.BLOCK
2.20 ± 18% -9.1% 2.00 softirqs.CPU64.NET_RX
1805 ± 4% +31.3% 2371 ± 2% softirqs.CPU64.RCU
3093 ± 11% -10.2% 2776 ± 9% softirqs.CPU64.SCHED
215.00 ± 32% +49.6% 321.67 ± 96% softirqs.CPU64.TIMER
0.00 -100.0% 0.00 softirqs.CPU65.BLOCK
0.20 ±200% -100.0% 0.00 softirqs.CPU65.NET_RX
11.40 ± 4% -3.5% 11.00 softirqs.CPU65.NET_TX
1753 ± 3% +47.7% 2590 ± 5% softirqs.CPU65.RCU
3098 ± 3% -9.8% 2793 ± 9% softirqs.CPU65.SCHED
0.00 -100.0% 0.00 softirqs.CPU65.TASKLET
183.60 ± 25% -30.6% 127.33 ± 55% softirqs.CPU65.TIMER
2.40 ±200% +52.8% 3.67 ±141% softirqs.CPU66.BLOCK
626.40 ±155% -88.6% 71.67 ± 7% softirqs.CPU66.NET_RX
2.20 ± 34% +6.1% 2.33 ± 53% softirqs.CPU66.NET_TX
1796 ± 11% +26.2% 2267 softirqs.CPU66.RCU
3364 ± 11% -12.6% 2940 ± 5% softirqs.CPU66.SCHED
373.20 ± 70% +24.5% 464.67 ±122% softirqs.CPU66.TIMER
888.80 ±176% -91.4% 76.67 ± 43% softirqs.CPU67.NET_RX
3.20 ± 75% -37.5% 2.00 ± 40% softirqs.CPU67.NET_TX
1714 ± 5% +40.1% 2403 ± 3% softirqs.CPU67.RCU
3384 ± 15% -13.5% 2929 ± 8% softirqs.CPU67.SCHED
0.60 ±133% -44.4% 0.33 ±141% softirqs.CPU67.TASKLET
250.80 ± 31% -60.7% 98.67 ± 22% softirqs.CPU67.TIMER
0.40 ±122% -100.0% 0.00 softirqs.CPU68.BLOCK
1869 ±189% -92.4% 142.67 ± 94% softirqs.CPU68.NET_RX
4.00 ± 57% -75.0% 1.00 ± 81% softirqs.CPU68.NET_TX
1894 ± 17% +33.3% 2525 ± 5% softirqs.CPU68.RCU
3332 ± 18% -12.4% 2919 ± 6% softirqs.CPU68.SCHED
0.40 ±122% -100.0% 0.00 softirqs.CPU68.TASKLET
301.20 ± 33% -57.6% 127.67 ± 26% softirqs.CPU68.TIMER
253.80 ±119% -46.3% 136.33 ± 39% softirqs.CPU69.NET_RX
1.40 ± 34% +66.7% 2.33 ± 72% softirqs.CPU69.NET_TX
1771 ± 8% +42.2% 2519 ± 3% softirqs.CPU69.RCU
2996 ± 4% -4.4% 2865 ± 13% softirqs.CPU69.SCHED
0.20 ±200% +66.7% 0.33 ±141% softirqs.CPU69.TASKLET
244.40 ± 85% -65.5% 84.33 ± 25% softirqs.CPU69.TIMER
0.00 -100.0% 0.00 softirqs.CPU7.NET_RX
0.20 ±200% -100.0% 0.00 softirqs.CPU7.NET_TX
2008 ± 5% +41.3% 2837 softirqs.CPU7.RCU
3187 ± 11% -8.5% 2915 ± 7% softirqs.CPU7.SCHED
158.00 ± 34% -26.6% 116.00 ± 37% softirqs.CPU7.TIMER
9.20 ±127% -96.4% 0.33 ±141% softirqs.CPU70.BLOCK
288.00 ±153% +22.5% 352.67 ±108% softirqs.CPU70.NET_RX
2.40 ± 72% -2.8% 2.33 ± 20% softirqs.CPU70.NET_TX
1653 ± 5% +49.9% 2478 ± 7% softirqs.CPU70.RCU
3002 ± 8% -3.7% 2890 ± 6% softirqs.CPU70.SCHED
165.20 ± 41% +79.6% 296.67 ± 66% softirqs.CPU70.TIMER
4.00 ±200% +0.0% 4.00 ±141% softirqs.CPU71.BLOCK
220.40 ±104% +338.1% 965.67 ± 98% softirqs.CPU71.NET_RX
1.80 ± 22% -25.9% 1.33 ± 35% softirqs.CPU71.NET_TX
1855 ± 7% +38.4% 2567 ± 6% softirqs.CPU71.RCU
3131 ± 8% -4.9% 2978 ± 17% softirqs.CPU71.SCHED
0.20 ±200% +66.7% 0.33 ±141% softirqs.CPU71.TASKLET
230.80 ± 60% -34.9% 150.33 ± 40% softirqs.CPU71.TIMER
0.40 ±200% +2900.0% 12.00 ±118% softirqs.CPU72.BLOCK
2357 ± 2% +17.0% 2759 ± 5% softirqs.CPU72.RCU
2871 ± 4% -7.3% 2662 ± 4% softirqs.CPU72.SCHED
0.20 ±200% -100.0% 0.00 softirqs.CPU72.TASKLET
224.40 ± 40% -65.2% 78.00 ± 16% softirqs.CPU72.TIMER
33.00 ± 92% -100.0% 0.00 softirqs.CPU73.BLOCK
0.20 ±200% -100.0% 0.00 softirqs.CPU73.NET_RX
2368 ± 11% +7.6% 2549 ± 5% softirqs.CPU73.RCU
3173 ± 15% -25.7% 2357 ± 12% softirqs.CPU73.SCHED
573.40 ± 90% -41.9% 333.00 ±105% softirqs.CPU73.TIMER
13.20 ±178% +124.7% 29.67 ±141% softirqs.CPU74.BLOCK
0.20 ±200% +66.7% 0.33 ±141% softirqs.CPU74.NET_RX
1894 ± 9% +28.9% 2441 ± 6% softirqs.CPU74.RCU
3186 ± 7% -12.2% 2796 ± 9% softirqs.CPU74.SCHED
0.00 -100.0% 0.00 softirqs.CPU74.TASKLET
476.80 ± 36% -73.4% 127.00 ± 71% softirqs.CPU74.TIMER
0.80 ±145% -100.0% 0.00 softirqs.CPU75.BLOCK
0.00 +3.3e+101% 0.33 ±141% softirqs.CPU75.NET_RX
1743 ± 8% +32.8% 2315 ± 6% softirqs.CPU75.RCU
3105 ± 8% -12.9% 2704 ± 10% softirqs.CPU75.SCHED
235.80 ± 82% -49.4% 119.33 ± 34% softirqs.CPU75.TIMER
0.20 ±200% +11733.3% 23.67 ±135% softirqs.CPU76.BLOCK
0.00 -100.0% 0.00 softirqs.CPU76.NET_TX
1642 ± 7% +38.9% 2281 softirqs.CPU76.RCU
3063 ± 6% -0.9% 3037 ± 6% softirqs.CPU76.SCHED
0.20 ±200% -100.0% 0.00 softirqs.CPU76.TASKLET
268.00 ± 41% -58.2% 112.00 ± 64% softirqs.CPU76.TIMER
5.80 ±200% +256.3% 20.67 ±141% softirqs.CPU77.BLOCK
0.00 +1.7e+102% 1.67 ±141% softirqs.CPU77.NET_TX
1683 ± 14% +43.9% 2423 ± 13% softirqs.CPU77.RCU
3018 ± 5% +0.7% 3038 ± 16% softirqs.CPU77.SCHED
262.00 ± 21% +159.7% 680.33 ± 83% softirqs.CPU77.TIMER
8.20 ±135% +13.8% 9.33 ±126% softirqs.CPU78.BLOCK
1682 ± 3% +38.6% 2331 ± 5% softirqs.CPU78.RCU
3179 ± 6% -12.7% 2774 ± 4% softirqs.CPU78.SCHED
199.20 ± 18% -7.1% 185.00 ± 78% softirqs.CPU78.TIMER
0.00 +6.7e+101% 0.67 ±141% softirqs.CPU79.NET_RX
1916 ± 13% +76.9% 3390 ± 49% softirqs.CPU79.RCU
3214 ± 5% -12.1% 2826 ± 6% softirqs.CPU79.SCHED
412.20 ± 61% +15.2% 475.00 ± 90% softirqs.CPU79.TIMER
2.80 ±200% -100.0% 0.00 softirqs.CPU8.BLOCK
0.20 ±200% -100.0% 0.00 softirqs.CPU8.NET_TX
2195 ± 17% +28.2% 2814 ± 2% softirqs.CPU8.RCU
2970 ± 12% -1.5% 2926 ± 8% softirqs.CPU8.SCHED
231.00 ± 79% -54.3% 105.67 ± 58% softirqs.CPU8.TIMER
1.80 ±101% -63.0% 0.67 ±141% softirqs.CPU80.BLOCK
1758 ± 8% +38.3% 2432 ± 8% softirqs.CPU80.RCU
2985 ± 5% +0.9% 3013 ± 7% softirqs.CPU80.SCHED
0.20 ±200% +66.7% 0.33 ±141% softirqs.CPU80.TASKLET
141.00 ± 45% -15.4% 119.33 ± 35% softirqs.CPU80.TIMER
15.00 ±186% -100.0% 0.00 softirqs.CPU81.BLOCK
1613 ± 11% +27.7% 2061 ± 4% softirqs.CPU81.RCU
3086 ± 10% -8.8% 2814 ± 2% softirqs.CPU81.SCHED
0.40 ±200% -100.0% 0.00 softirqs.CPU81.TASKLET
183.80 ± 19% +41.1% 259.33 ± 46% softirqs.CPU81.TIMER
27.60 ±131% -100.0% 0.00 softirqs.CPU82.BLOCK
0.20 ±200% -100.0% 0.00 softirqs.CPU82.NET_RX
1709 ± 11% +24.7% 2131 ± 3% softirqs.CPU82.RCU
3088 ± 17% -9.0% 2811 ± 4% softirqs.CPU82.SCHED
0.00 -100.0% 0.00 softirqs.CPU82.TASKLET
280.00 ± 54% -35.2% 181.33 ± 94% softirqs.CPU82.TIMER
23.60 ±200% -100.0% 0.00 softirqs.CPU83.BLOCK
0.00 +3.3e+101% 0.33 ±141% softirqs.CPU83.NET_RX
1630 ± 7% +30.6% 2129 ± 2% softirqs.CPU83.RCU
3095 ± 12% -1.7% 3042 ± 2% softirqs.CPU83.SCHED
0.20 ±200% -100.0% 0.00 softirqs.CPU83.TASKLET
271.60 ± 17% -52.7% 128.33 ± 39% softirqs.CPU83.TIMER
7.20 ±173% +29.6% 9.33 ±141% softirqs.CPU84.BLOCK
1656 ± 6% +34.5% 2228 softirqs.CPU84.RCU
3178 ± 11% -2.0% 3114 ± 3% softirqs.CPU84.SCHED
338.80 ± 58% -60.1% 135.33 ± 51% softirqs.CPU84.TIMER
6.80 ±185% +61.8% 11.00 ±141% softirqs.CPU85.BLOCK
1670 ± 6% +24.5% 2080 ± 2% softirqs.CPU85.RCU
3047 ± 7% -2.4% 2976 softirqs.CPU85.SCHED
233.00 ± 48% -73.1% 62.67 ± 10% softirqs.CPU85.TIMER
1544 ± 9% +30.1% 2009 softirqs.CPU86.RCU
2992 ± 9% -8.6% 2735 ± 2% softirqs.CPU86.SCHED
175.80 ± 20% +155.2% 448.67 ± 71% softirqs.CPU86.TIMER
0.40 ±200% -100.0% 0.00 softirqs.CPU87.BLOCK
0.20 ±200% -100.0% 0.00 softirqs.CPU87.NET_RX
1542 ± 8% +57.1% 2423 ± 6% softirqs.CPU87.RCU
2941 ± 7% -5.5% 2780 ± 10% softirqs.CPU87.SCHED
111.80 ± 28% +36.3% 152.33 ± 44% softirqs.CPU87.TIMER
0.20 ±200% -100.0% 0.00 softirqs.CPU88.NET_RX
0.00 -100.0% 0.00 softirqs.CPU88.NET_TX
1601 ± 12% +38.6% 2220 ± 6% softirqs.CPU88.RCU
3040 ± 5% -7.1% 2825 ± 7% softirqs.CPU88.SCHED
200.20 ± 28% -52.0% 96.00 ± 44% softirqs.CPU88.TIMER
0.00 +6.7e+101% 0.67 ±141% softirqs.CPU89.BLOCK
1707 ± 17% +35.8% 2317 ± 18% softirqs.CPU89.RCU
3045 ± 6% -6.8% 2837 ± 2% softirqs.CPU89.SCHED
0.20 ±200% -100.0% 0.00 softirqs.CPU89.TASKLET
297.00 ± 88% -53.9% 137.00 ± 81% softirqs.CPU89.TIMER
2.80 ±200% -76.2% 0.67 ±141% softirqs.CPU9.BLOCK
2193 ± 11% +26.7% 2778 ± 3% softirqs.CPU9.RCU
3206 ± 12% -15.5% 2708 ± 8% softirqs.CPU9.SCHED
198.20 ± 54% -49.0% 101.00 ± 31% softirqs.CPU9.TIMER
0.20 ±200% +11066.7% 22.33 ±125% softirqs.CPU90.BLOCK
0.00 +3.3e+101% 0.33 ±141% softirqs.CPU90.NET_RX
0.00 -100.0% 0.00 softirqs.CPU90.NET_TX
1480 ± 4% +41.7% 2097 ± 3% softirqs.CPU90.RCU
2937 ± 6% +6.5% 3128 ± 6% softirqs.CPU90.SCHED
262.20 ± 76% -15.2% 222.33 ± 68% softirqs.CPU90.TIMER
0.00 +8.7e+102% 8.67 ±141% softirqs.CPU91.BLOCK
0.00 -100.0% 0.00 softirqs.CPU91.NET_RX
1690 ± 10% +21.8% 2058 softirqs.CPU91.RCU
3135 ± 5% -7.1% 2912 ± 5% softirqs.CPU91.SCHED
0.00 +1.7e+102% 1.67 ±141% softirqs.CPU91.TASKLET
187.40 ± 40% +94.2% 364.00 ±104% softirqs.CPU91.TIMER
0.00 -100.0% 0.00 softirqs.CPU92.NET_TX
1622 ± 7% +27.6% 2069 ± 6% softirqs.CPU92.RCU
3153 ± 4% -9.8% 2845 ± 4% softirqs.CPU92.SCHED
0.00 +6.3e+102% 6.33 ±141% softirqs.CPU92.TASKLET
221.60 ± 48% +42.6% 316.00 ± 99% softirqs.CPU92.TIMER
0.80 ±122% -100.0% 0.00 softirqs.CPU93.BLOCK
1557 ± 6% +32.2% 2058 ± 2% softirqs.CPU93.RCU
3096 -12.8% 2700 ± 5% softirqs.CPU93.SCHED
0.00 -100.0% 0.00 softirqs.CPU93.TASKLET
162.40 ± 22% -63.3% 59.67 ± 14% softirqs.CPU93.TIMER
8.40 ±188% -96.0% 0.33 ±141% softirqs.CPU94.BLOCK
1528 ± 8% +32.8% 2029 ± 3% softirqs.CPU94.RCU
3050 ± 8% -6.5% 2851 ± 7% softirqs.CPU94.SCHED
242.80 ± 39% -37.0% 153.00 ± 36% softirqs.CPU94.TIMER
0.40 ±200% +5066.7% 20.67 ±141% softirqs.CPU95.BLOCK
2161 ± 24% +25.3% 2707 ± 8% softirqs.CPU95.RCU
3117 ± 13% -9.3% 2825 ± 2% softirqs.CPU95.SCHED
0.00 -100.0% 0.00 softirqs.CPU95.TASKLET
229.40 ± 28% -5.0% 218.00 ± 70% softirqs.CPU95.TIMER
2.00 +0.0% 2.00 softirqs.HI
4358 ± 73% -27.2% 3170 ± 66% softirqs.NET_RX
36.60 ± 6% -4.4% 35.00 ± 4% softirqs.NET_TX
191572 ± 4% +31.8% 252582 softirqs.RCU
296266 -5.5% 280002 softirqs.SCHED
235.60 ± 7% -5.3% 223.00 softirqs.TASKLET
25759 -4.7% 24556 softirqs.TIMER
0.00 -100.0% 0.00 interrupts.0:IO-APIC.2-edge.timer
0.00 -100.0% 0.00 interrupts.31:PCI-MSI.48791552-edge.PCIe.PME,pciehp
0.00 -100.0% 0.00 interrupts.32:PCI-MSI.48807936-edge.PCIe.PME,pciehp
0.00 -100.0% 0.00 interrupts.342:PCI-MSI.327680-edge.xhci_hcd
0.00 -100.0% 0.00 interrupts.344:PCI-MSI.65536-edge.ioat-msix
0.00 +1.8e+103% 17.67 ±141% interrupts.345:PCI-MSI.288768-edge.ahci[0000:00:11.5]
0.00 -100.0% 0.00 interrupts.345:PCI-MSI.68681728-edge.nvme0q0
0.00 -100.0% 0.00 interrupts.345:PCI-MSI.69206016-edge.nvme1q0
0.00 +2.3e+103% 23.33 ±141% interrupts.346:PCI-MSI.288768-edge.ahci[0000:00:11.5]
0.00 -100.0% 0.00 interrupts.346:PCI-MSI.67584-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.346:PCI-MSI.68681728-edge.nvme0q0
0.00 -100.0% 0.00 interrupts.346:PCI-MSI.69206016-edge.nvme1q0
35.00 ± 81% -100.0% 0.00 interrupts.347:PCI-MSI.288768-edge.ahci[0000:00:11.5]
0.00 -100.0% 0.00 interrupts.347:PCI-MSI.67584-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.347:PCI-MSI.69206016-edge.nvme1q0
0.00 -100.0% 0.00 interrupts.347:PCI-MSI.69632-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.348:PCI-MSI.67584-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.348:PCI-MSI.68681728-edge.nvme0q0
0.00 -100.0% 0.00 interrupts.348:PCI-MSI.69632-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.348:PCI-MSI.71680-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.349:PCI-MSI.288768-edge.ahci[0000:00:11.5]
0.00 -100.0% 0.00 interrupts.349:PCI-MSI.67584-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.349:PCI-MSI.68681728-edge.nvme0q0
0.00 -100.0% 0.00 interrupts.349:PCI-MSI.69206016-edge.nvme1q0
0.00 -100.0% 0.00 interrupts.349:PCI-MSI.69632-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.349:PCI-MSI.73728-edge.ioat-msix
10.80 ±200% +63.6% 17.67 ±141% interrupts.350:PCI-MSI.288768-edge.ahci[0000:00:11.5]
0.00 -100.0% 0.00 interrupts.350:PCI-MSI.69206016-edge.nvme1q0
0.00 -100.0% 0.00 interrupts.350:PCI-MSI.69632-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.350:PCI-MSI.75776-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.351:PCI-MSI.68681728-edge.nvme0q0
0.00 -100.0% 0.00 interrupts.351:PCI-MSI.69206016-edge.nvme1q0
0.00 -100.0% 0.00 interrupts.351:PCI-MSI.71680-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.352:PCI-MSI.73728-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.352:PCI-MSI.77824-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.353:PCI-MSI.69206016-edge.nvme1q0
0.00 -100.0% 0.00 interrupts.353:PCI-MSI.75776-edge.ioat-msix
12.20 ±200% -100.0% 0.00 interrupts.354:PCI-MSI.288768-edge.ahci[0000:00:11.5]
0.00 -100.0% 0.00 interrupts.354:PCI-MSI.77824-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.355:PCI-MSI.376832-edge.ahci[0000:00:17.0]
0.00 -100.0% 0.00 interrupts.355:PCI-MSI.79872-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.356:PCI-MSI.376832-edge.ahci[0000:00:17.0]
0.00 -100.0% 0.00 interrupts.356:PCI-MSI.79872-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.357:PCI-MSI.67174400-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.358:PCI-MSI.376832-edge.ahci[0000:00:17.0]
0.00 -100.0% 0.00 interrupts.358:PCI-MSI.67174400-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.359:PCI-MSI.67176448-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.360:PCI-MSI.376832-edge.ahci[0000:00:17.0]
0.00 -100.0% 0.00 interrupts.360:PCI-MSI.67176448-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.360:PCI-MSI.67178496-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.361:PCI-MSI.67178496-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.361:PCI-MSI.67180544-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.362:PCI-MSI.67180544-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.362:PCI-MSI.67182592-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.363:PCI-MSI.67182592-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.363:PCI-MSI.67184640-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.364:PCI-MSI.376832-edge.ahci[0000:00:17.0]
0.00 -100.0% 0.00 interrupts.364:PCI-MSI.67184640-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.364:PCI-MSI.67186688-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.365:PCI-MSI.67186688-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.365:PCI-MSI.67188736-edge.ioat-msix
2.00 ±200% -100.0% 0.00 interrupts.366:PCI-MSI.376832-edge.ahci[0000:00:17.0]
0.00 -100.0% 0.00 interrupts.366:PCI-MSI.67188736-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.367:PCI-MSI.68681729-edge.nvme0q1
0.00 -100.0% 0.00 interrupts.367:PCI-MSI.69206017-edge.nvme1q1
0.00 -100.0% 0.00 interrupts.371:PCI-MSI.69206021-edge.nvme1q5
0.00 -100.0% 0.00 interrupts.372:PCI-MSI.69206022-edge.nvme1q6
0.00 -100.0% 0.00 interrupts.373:PCI-MSI.68681735-edge.nvme0q7
0.00 -100.0% 0.00 interrupts.374:PCI-MSI.69206024-edge.nvme1q8
0.00 -100.0% 0.00 interrupts.375:PCI-MSI.69206025-edge.nvme1q9
0.00 -100.0% 0.00 interrupts.376:PCI-MSI.68681733-edge.nvme0q5
0.00 -100.0% 0.00 interrupts.376:PCI-MSI.68681738-edge.nvme0q10
0.00 -100.0% 0.00 interrupts.378:PCI-MSI.68681740-edge.nvme0q12
0.00 -100.0% 0.00 interrupts.378:PCI-MSI.69206028-edge.nvme1q12
0.00 -100.0% 0.00 interrupts.379:PCI-MSI.69206029-edge.nvme1q13
0.00 -100.0% 0.00 interrupts.381:PCI-MSI.69206031-edge.nvme1q15
0.00 -100.0% 0.00 interrupts.384:PCI-MSI.68681746-edge.nvme0q18
0.00 -100.0% 0.00 interrupts.385:PCI-MSI.69206035-edge.nvme1q19
0.00 -100.0% 0.00 interrupts.387:PCI-MSI.69206027-edge.nvme1q11
0.00 -100.0% 0.00 interrupts.389:PCI-MSI.69206028-edge.nvme1q12
0.00 -100.0% 0.00 interrupts.390:PCI-MSI.69206040-edge.nvme1q24
0.00 -100.0% 0.00 interrupts.391:PCI-MSI.69206041-edge.nvme1q25
0.00 -100.0% 0.00 interrupts.392:PCI-MSI.69206042-edge.nvme1q26
0.00 -100.0% 0.00 interrupts.398:PCI-MSI.68681729-edge.nvme0q1
0.00 -100.0% 0.00 interrupts.398:PCI-MSI.69206017-edge.nvme1q1
0.00 -100.0% 0.00 interrupts.399:PCI-MSI.68681730-edge.nvme0q2
0.00 -100.0% 0.00 interrupts.399:PCI-MSI.69206018-edge.nvme1q2
0.00 -100.0% 0.00 interrupts.3:IO-APIC.3-edge
0.00 -100.0% 0.00 interrupts.401:PCI-MSI.68681732-edge.nvme0q4
0.00 -100.0% 0.00 interrupts.403:PCI-MSI.68681734-edge.nvme0q6
0.00 -100.0% 0.00 interrupts.404:PCI-MSI.68681735-edge.nvme0q7
0.00 -100.0% 0.00 interrupts.405:PCI-MSI.68681736-edge.nvme0q8
0.00 -100.0% 0.00 interrupts.406:PCI-MSI.68681737-edge.nvme0q9
0.00 -100.0% 0.00 interrupts.407:PCI-MSI.69206026-edge.nvme1q10
0.00 -100.0% 0.00 interrupts.409:PCI-MSI.69206028-edge.nvme1q12
0.00 -100.0% 0.00 interrupts.411:PCI-MSI.68681742-edge.nvme0q14
0.00 -100.0% 0.00 interrupts.412:PCI-MSI.68681751-edge.nvme0q23
0.00 -100.0% 0.00 interrupts.413:PCI-MSI.68681744-edge.nvme0q16
0.00 -100.0% 0.00 interrupts.414:PCI-MSI.68681745-edge.nvme0q17
0.00 -100.0% 0.00 interrupts.415:PCI-MSI.69206034-edge.nvme1q18
0.00 -100.0% 0.00 interrupts.418:PCI-MSI.68681749-edge.nvme0q21
0.00 -100.0% 0.00 interrupts.41:PCI-MSI.112721920-edge.PCIe.PME,pciehp
0.00 -100.0% 0.00 interrupts.42:PCI-MSI.112738304-edge.PCIe.PME,pciehp
0.00 -100.0% 0.00 interrupts.45:PCI-MSI.12582913-edge
0.00 -100.0% 0.00 interrupts.46:PCI-MSI.12582914-edge
0.00 -100.0% 0.00 interrupts.47:PCI-MSI.12582915-edge
0.00 -100.0% 0.00 interrupts.48:PCI-MSI.12582916-edge
0.00 -100.0% 0.00 interrupts.49:PCI-MSI.12582917-edge
0.00 -100.0% 0.00 interrupts.4:IO-APIC.4-edge.ttyS0
0.00 -100.0% 0.00 interrupts.50:PCI-MSI.12582918-edge
0.00 -100.0% 0.00 interrupts.51:PCI-MSI.12582919-edge
0.00 -100.0% 0.00 interrupts.52:PCI-MSI.12582920-edge
0.00 -100.0% 0.00 interrupts.55:PCI-MSI.12584961-edge
0.00 -100.0% 0.00 interrupts.56:PCI-MSI.12584962-edge
0.00 -100.0% 0.00 interrupts.57:PCI-MSI.12584963-edge
0.00 -100.0% 0.00 interrupts.58:PCI-MSI.12584964-edge
0.00 -100.0% 0.00 interrupts.59:PCI-MSI.12584965-edge
0.00 -100.0% 0.00 interrupts.60:PCI-MSI.12584966-edge
0.00 -100.0% 0.00 interrupts.61:PCI-MSI.12584967-edge
0.00 -100.0% 0.00 interrupts.62:PCI-MSI.12584968-edge
0.00 -100.0% 0.00 interrupts.65:PCI-MSI.12587009-edge
0.00 -100.0% 0.00 interrupts.66:PCI-MSI.12587010-edge
0.00 -100.0% 0.00 interrupts.67:PCI-MSI.12587011-edge
0.00 -100.0% 0.00 interrupts.68:PCI-MSI.12587012-edge
0.00 -100.0% 0.00 interrupts.69:PCI-MSI.12587013-edge
0.00 -100.0% 0.00 interrupts.70:PCI-MSI.12587014-edge
0.00 -100.0% 0.00 interrupts.71:PCI-MSI.12587015-edge
0.00 -100.0% 0.00 interrupts.72:PCI-MSI.12587016-edge
0.00 -100.0% 0.00 interrupts.74:PCI-MSI.12589056-edge.eth3
1189 ±161% -91.2% 104.33 ± 10% interrupts.75:PCI-MSI.12589057-edge.eth3-TxRx-0
1628 ±187% -94.1% 96.33 ± 24% interrupts.76:PCI-MSI.12589058-edge.eth3-TxRx-1
3389 ±194% -93.4% 223.00 ± 95% interrupts.77:PCI-MSI.12589059-edge.eth3-TxRx-2
432.80 ±120% -74.7% 109.67 ± 18% interrupts.78:PCI-MSI.12589060-edge.eth3-TxRx-3
502.60 ±167% +21.2% 609.00 ±120% interrupts.79:PCI-MSI.12589061-edge.eth3-TxRx-4
241.60 ± 72% +662.3% 1841 ±100% interrupts.80:PCI-MSI.12589062-edge.eth3-TxRx-5
125.20 ± 40% +12.9% 141.33 ± 63% interrupts.81:PCI-MSI.12589063-edge.eth3-TxRx-6
67.00 ± 12% +3288.6% 2270 ±112% interrupts.82:PCI-MSI.12589064-edge.eth3-TxRx-7
0.00 -100.0% 0.00 interrupts.8:IO-APIC.8-edge.rtc0
117914 ± 41% -49.0% 60118 ± 42% interrupts.CAL:Function_call_interrupts
0.00 -100.0% 0.00 interrupts.CPU0.0:IO-APIC.2-edge.timer
125.20 ± 40% +12.9% 141.33 ± 63% interrupts.CPU0.81:PCI-MSI.12589063-edge.eth3-TxRx-6
1743 ± 16% -25.7% 1295 ± 16% interrupts.CPU0.CAL:Function_call_interrupts
45.20 ± 69% -20.4% 36.00 ± 21% interrupts.CPU0.IWI:IRQ_work_interrupts
107476 -1.3% 106067 interrupts.CPU0.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU0.MCP:Machine_check_polls
35344 ± 7% +18.3% 41811 interrupts.CPU0.NMI:Non-maskable_interrupts
35344 ± 7% +18.3% 41811 interrupts.CPU0.PMI:Performance_monitoring_interrupts
87.80 ± 35% -10.0% 79.00 ± 19% interrupts.CPU0.RES:Rescheduling_interrupts
0.00 -100.0% 0.00 interrupts.CPU0.RTR:APIC_ICR_read_retries
82.80 ±190% -84.3% 13.00 ± 53% interrupts.CPU0.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU1.413:PCI-MSI.68681744-edge.nvme0q16
67.00 ± 12% +3288.6% 2270 ±112% interrupts.CPU1.82:PCI-MSI.12589064-edge.eth3-TxRx-7
2076 ± 58% -33.9% 1373 ± 40% interrupts.CPU1.CAL:Function_call_interrupts
45.00 ± 71% -38.5% 27.67 ± 17% interrupts.CPU1.IWI:IRQ_work_interrupts
107466 -1.9% 105397 interrupts.CPU1.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU1.MCP:Machine_check_polls
35451 ± 7% +15.4% 40916 interrupts.CPU1.NMI:Non-maskable_interrupts
35451 ± 7% +15.4% 40916 interrupts.CPU1.PMI:Performance_monitoring_interrupts
94.00 ± 32% +54.3% 145.00 ± 37% interrupts.CPU1.RES:Rescheduling_interrupts
0.20 ±200% +5900.0% 12.00 ±141% interrupts.CPU1.TLB:TLB_shootdowns
1229 ± 44% -52.1% 589.00 ± 49% interrupts.CPU10.CAL:Function_call_interrupts
64.40 ± 40% -61.7% 24.67 ± 30% interrupts.CPU10.IWI:IRQ_work_interrupts
104889 -0.5% 104324 interrupts.CPU10.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU10.MCP:Machine_check_polls
36270 ± 6% +9.8% 39825 ± 4% interrupts.CPU10.NMI:Non-maskable_interrupts
36270 ± 6% +9.8% 39825 ± 4% interrupts.CPU10.PMI:Performance_monitoring_interrupts
70.80 ± 40% -60.9% 27.67 ± 43% interrupts.CPU10.RES:Rescheduling_interrupts
0.00 -100.0% 0.00 interrupts.CPU10.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU11.342:PCI-MSI.327680-edge.xhci_hcd
0.00 -100.0% 0.00 interrupts.CPU11.418:PCI-MSI.68681749-edge.nvme0q21
1527 ± 64% -63.6% 555.67 ± 47% interrupts.CPU11.CAL:Function_call_interrupts
39.00 ± 43% -34.2% 25.67 ± 23% interrupts.CPU11.IWI:IRQ_work_interrupts
107139 -2.8% 104093 interrupts.CPU11.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU11.MCP:Machine_check_polls
34802 ± 5% +18.8% 41346 interrupts.CPU11.NMI:Non-maskable_interrupts
34802 ± 5% +18.8% 41346 interrupts.CPU11.PMI:Performance_monitoring_interrupts
79.20 ± 83% -40.7% 47.00 ± 49% interrupts.CPU11.RES:Rescheduling_interrupts
0.20 ±200% -100.0% 0.00 interrupts.CPU11.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU12.8:IO-APIC.8-edge.rtc0
1199 ± 43% -54.4% 546.67 ± 47% interrupts.CPU12.CAL:Function_call_interrupts
38.20 ± 53% -38.9% 23.33 ± 8% interrupts.CPU12.IWI:IRQ_work_interrupts
105614 -1.0% 104543 interrupts.CPU12.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU12.MCP:Machine_check_polls
34818 ± 7% +19.5% 41623 interrupts.CPU12.NMI:Non-maskable_interrupts
34818 ± 7% +19.5% 41623 interrupts.CPU12.PMI:Performance_monitoring_interrupts
104.60 ± 45% -73.6% 27.67 ± 25% interrupts.CPU12.RES:Rescheduling_interrupts
0.40 ±200% +566.7% 2.67 ±141% interrupts.CPU12.TLB:TLB_shootdowns
1302 ± 22% -53.2% 609.33 ± 50% interrupts.CPU13.CAL:Function_call_interrupts
51.20 ± 51% -35.5% 33.00 ± 26% interrupts.CPU13.IWI:IRQ_work_interrupts
106875 -2.0% 104695 interrupts.CPU13.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU13.MCP:Machine_check_polls
34490 ± 6% +20.3% 41488 interrupts.CPU13.NMI:Non-maskable_interrupts
34490 ± 6% +20.3% 41488 interrupts.CPU13.PMI:Performance_monitoring_interrupts
78.40 ± 88% -49.4% 39.67 ± 45% interrupts.CPU13.RES:Rescheduling_interrupts
0.00 +6e+102% 6.00 ± 75% interrupts.CPU13.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU14.4:IO-APIC.4-edge.ttyS0
1167 ± 44% -52.9% 550.00 ± 47% interrupts.CPU14.CAL:Function_call_interrupts
54.40 ± 52% -42.4% 31.33 ± 33% interrupts.CPU14.IWI:IRQ_work_interrupts
106164 -2.2% 103815 interrupts.CPU14.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU14.MCP:Machine_check_polls
36638 ± 4% +12.8% 41325 interrupts.CPU14.NMI:Non-maskable_interrupts
36638 ± 4% +12.8% 41325 interrupts.CPU14.PMI:Performance_monitoring_interrupts
83.20 ± 37% -73.2% 22.33 ± 33% interrupts.CPU14.RES:Rescheduling_interrupts
0.60 ±200% -100.0% 0.00 interrupts.CPU14.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU15.344:PCI-MSI.65536-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU15.45:PCI-MSI.12582913-edge
1177 ± 39% -46.9% 624.67 ± 39% interrupts.CPU15.CAL:Function_call_interrupts
36.40 ± 42% -14.8% 31.00 ± 28% interrupts.CPU15.IWI:IRQ_work_interrupts
106322 -1.7% 104472 interrupts.CPU15.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU15.MCP:Machine_check_polls
35907 ± 6% +14.1% 40987 interrupts.CPU15.NMI:Non-maskable_interrupts
35907 ± 6% +14.1% 40987 interrupts.CPU15.PMI:Performance_monitoring_interrupts
156.00 ±116% -65.6% 53.67 ± 36% interrupts.CPU15.RES:Rescheduling_interrupts
0.00 +3.7e+102% 3.67 ±122% interrupts.CPU15.TLB:TLB_shootdowns
0.00 +1.8e+103% 17.67 ±141% interrupts.CPU16.345:PCI-MSI.288768-edge.ahci[0000:00:11.5]
0.00 +2.3e+103% 23.33 ±141% interrupts.CPU16.346:PCI-MSI.288768-edge.ahci[0000:00:11.5]
0.00 -100.0% 0.00 interrupts.CPU16.346:PCI-MSI.67584-edge.ioat-msix
10.80 ±200% -100.0% 0.00 interrupts.CPU16.347:PCI-MSI.288768-edge.ahci[0000:00:11.5]
0.00 -100.0% 0.00 interrupts.CPU16.348:PCI-MSI.67584-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU16.46:PCI-MSI.12582914-edge
1154 ± 43% -50.1% 576.33 ± 48% interrupts.CPU16.CAL:Function_call_interrupts
52.60 ± 29% -41.7% 30.67 ± 20% interrupts.CPU16.IWI:IRQ_work_interrupts
106527 -2.4% 103960 interrupts.CPU16.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU16.MCP:Machine_check_polls
34764 ± 4% +18.5% 41185 interrupts.CPU16.NMI:Non-maskable_interrupts
34764 ± 4% +18.5% 41185 interrupts.CPU16.PMI:Performance_monitoring_interrupts
49.00 ± 57% +13.6% 55.67 ± 76% interrupts.CPU16.RES:Rescheduling_interrupts
0.20 ±200% -100.0% 0.00 interrupts.CPU16.TLB:TLB_shootdowns
24.20 ±122% -100.0% 0.00 interrupts.CPU17.347:PCI-MSI.288768-edge.ahci[0000:00:11.5]
0.00 -100.0% 0.00 interrupts.CPU17.347:PCI-MSI.67584-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU17.347:PCI-MSI.69632-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU17.349:PCI-MSI.288768-edge.ahci[0000:00:11.5]
0.00 -100.0% 0.00 interrupts.CPU17.349:PCI-MSI.67584-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU17.47:PCI-MSI.12582915-edge
1182 ± 44% -48.9% 604.67 ± 42% interrupts.CPU17.CAL:Function_call_interrupts
39.60 ± 51% -31.0% 27.33 ± 32% interrupts.CPU17.IWI:IRQ_work_interrupts
106106 -2.2% 103754 interrupts.CPU17.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU17.MCP:Machine_check_polls
36370 ± 6% +13.2% 41168 interrupts.CPU17.NMI:Non-maskable_interrupts
36370 ± 6% +13.2% 41168 interrupts.CPU17.PMI:Performance_monitoring_interrupts
58.60 ± 44% +7.5% 63.00 ± 64% interrupts.CPU17.RES:Rescheduling_interrupts
0.00 -100.0% 0.00 interrupts.CPU17.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU18.348:PCI-MSI.69632-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU18.348:PCI-MSI.71680-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU18.349:PCI-MSI.288768-edge.ahci[0000:00:11.5]
0.00 -100.0% 0.00 interrupts.CPU18.349:PCI-MSI.69632-edge.ioat-msix
10.80 ±200% -100.0% 0.00 interrupts.CPU18.350:PCI-MSI.288768-edge.ahci[0000:00:11.5]
0.00 -100.0% 0.00 interrupts.CPU18.350:PCI-MSI.69632-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU18.48:PCI-MSI.12582916-edge
1151 ± 43% -54.6% 522.67 ± 45% interrupts.CPU18.CAL:Function_call_interrupts
30.60 ± 54% -7.4% 28.33 ± 23% interrupts.CPU18.IWI:IRQ_work_interrupts
106264 -1.1% 105122 interrupts.CPU18.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU18.MCP:Machine_check_polls
37200 ± 6% +10.8% 41199 interrupts.CPU18.NMI:Non-maskable_interrupts
37200 ± 6% +10.8% 41199 interrupts.CPU18.PMI:Performance_monitoring_interrupts
69.80 ± 55% -44.1% 39.00 ± 56% interrupts.CPU18.RES:Rescheduling_interrupts
0.00 -100.0% 0.00 interrupts.CPU18.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU19.349:PCI-MSI.73728-edge.ioat-msix
0.00 +1.8e+103% 17.67 ±141% interrupts.CPU19.350:PCI-MSI.288768-edge.ahci[0000:00:11.5]
0.00 -100.0% 0.00 interrupts.CPU19.351:PCI-MSI.71680-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU19.49:PCI-MSI.12582917-edge
1144 ± 44% -50.8% 563.00 ± 47% interrupts.CPU19.CAL:Function_call_interrupts
39.60 ± 54% -19.2% 32.00 ± 26% interrupts.CPU19.IWI:IRQ_work_interrupts
105742 -1.7% 103955 interrupts.CPU19.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU19.MCP:Machine_check_polls
36162 ± 5% +13.9% 41201 interrupts.CPU19.NMI:Non-maskable_interrupts
36162 ± 5% +13.9% 41201 interrupts.CPU19.PMI:Performance_monitoring_interrupts
51.00 ± 43% -37.3% 32.00 ± 27% interrupts.CPU19.RES:Rescheduling_interrupts
0.20 ±200% -100.0% 0.00 interrupts.CPU19.TLB:TLB_shootdowns
1594 ± 33% -49.0% 814.00 ± 8% interrupts.CPU2.CAL:Function_call_interrupts
58.40 ± 43% -48.6% 30.00 ± 37% interrupts.CPU2.IWI:IRQ_work_interrupts
105913 -0.9% 105004 interrupts.CPU2.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU2.MCP:Machine_check_polls
35248 ± 7% +14.0% 40194 ± 5% interrupts.CPU2.NMI:Non-maskable_interrupts
35248 ± 7% +14.0% 40194 ± 5% interrupts.CPU2.PMI:Performance_monitoring_interrupts
113.80 ± 36% -7.1% 105.67 ± 18% interrupts.CPU2.RES:Rescheduling_interrupts
1.60 ±200% +1150.0% 20.00 ± 74% interrupts.CPU2.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU20.350:PCI-MSI.75776-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU20.352:PCI-MSI.73728-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU20.50:PCI-MSI.12582918-edge
1213 ± 43% -54.0% 558.67 ± 46% interrupts.CPU20.CAL:Function_call_interrupts
20.40 ± 5% +47.1% 30.00 ± 7% interrupts.CPU20.IWI:IRQ_work_interrupts
104307 +0.4% 104744 interrupts.CPU20.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU20.MCP:Machine_check_polls
33299 ± 2% +24.7% 41519 interrupts.CPU20.NMI:Non-maskable_interrupts
33299 ± 2% +24.7% 41519 interrupts.CPU20.PMI:Performance_monitoring_interrupts
61.60 ± 32% -67.0% 20.33 ± 48% interrupts.CPU20.RES:Rescheduling_interrupts
0.00 +1e+102% 1.00 ±141% interrupts.CPU20.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU21.352:PCI-MSI.77824-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU21.353:PCI-MSI.75776-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU21.51:PCI-MSI.12582919-edge
1362 ± 53% -57.0% 586.00 ± 46% interrupts.CPU21.CAL:Function_call_interrupts
30.60 ± 51% -7.4% 28.33 ± 34% interrupts.CPU21.IWI:IRQ_work_interrupts
105105 ± 2% -0.6% 104492 interrupts.CPU21.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU21.MCP:Machine_check_polls
36342 ± 6% +12.8% 40986 interrupts.CPU21.NMI:Non-maskable_interrupts
36342 ± 6% +12.8% 40986 interrupts.CPU21.PMI:Performance_monitoring_interrupts
71.00 ± 43% -28.2% 51.00 ± 50% interrupts.CPU21.RES:Rescheduling_interrupts
0.60 ±133% -100.0% 0.00 interrupts.CPU21.TLB:TLB_shootdowns
12.20 ±200% -100.0% 0.00 interrupts.CPU22.354:PCI-MSI.288768-edge.ahci[0000:00:11.5]
0.00 -100.0% 0.00 interrupts.CPU22.354:PCI-MSI.77824-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU22.52:PCI-MSI.12582920-edge
1130 ± 43% -51.1% 552.33 ± 46% interrupts.CPU22.CAL:Function_call_interrupts
22.60 ± 6% +74.0% 39.33 ± 13% interrupts.CPU22.IWI:IRQ_work_interrupts
105349 -2.0% 103276 interrupts.CPU22.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU22.MCP:Machine_check_polls
35790 ± 9% +15.2% 41230 interrupts.CPU22.NMI:Non-maskable_interrupts
35790 ± 9% +15.2% 41230 interrupts.CPU22.PMI:Performance_monitoring_interrupts
53.60 ± 40% -50.2% 26.67 ± 46% interrupts.CPU22.RES:Rescheduling_interrupts
0.20 ±200% -100.0% 0.00 interrupts.CPU22.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU23.355:PCI-MSI.376832-edge.ahci[0000:00:17.0]
0.00 -100.0% 0.00 interrupts.CPU23.355:PCI-MSI.79872-edge.ioat-msix
1141 ± 43% -57.2% 488.33 ± 46% interrupts.CPU23.CAL:Function_call_interrupts
46.00 ± 67% -32.6% 31.00 ± 22% interrupts.CPU23.IWI:IRQ_work_interrupts
105776 ± 2% -1.1% 104605 interrupts.CPU23.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU23.MCP:Machine_check_polls
36878 ± 5% +11.9% 41277 interrupts.CPU23.NMI:Non-maskable_interrupts
36878 ± 5% +11.9% 41277 interrupts.CPU23.PMI:Performance_monitoring_interrupts
63.60 ± 37% -45.5% 34.67 ± 22% interrupts.CPU23.RES:Rescheduling_interrupts
0.00 +2.7e+102% 2.67 ±141% interrupts.CPU23.TLB:TLB_shootdowns
1292 ± 36% -48.3% 668.33 ± 34% interrupts.CPU24.CAL:Function_call_interrupts
42.80 ± 39% -23.7% 32.67 ± 25% interrupts.CPU24.IWI:IRQ_work_interrupts
107035 -1.9% 105051 interrupts.CPU24.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU24.MCP:Machine_check_polls
36772 ± 7% +11.0% 40806 interrupts.CPU24.NMI:Non-maskable_interrupts
36772 ± 7% +11.0% 40806 interrupts.CPU24.PMI:Performance_monitoring_interrupts
508.00 ±123% -83.1% 85.67 ± 31% interrupts.CPU24.RES:Rescheduling_interrupts
0.20 ±200% +4900.0% 10.00 ± 58% interrupts.CPU24.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU25.41:PCI-MSI.112721920-edge.PCIe.PME,pciehp
1320 ± 21% -48.8% 676.33 ± 39% interrupts.CPU25.CAL:Function_call_interrupts
38.00 ± 50% -7.9% 35.00 ± 23% interrupts.CPU25.IWI:IRQ_work_interrupts
105956 -2.2% 103678 interrupts.CPU25.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU25.MCP:Machine_check_polls
35852 ± 5% +14.2% 40946 ± 2% interrupts.CPU25.NMI:Non-maskable_interrupts
35852 ± 5% +14.2% 40946 ± 2% interrupts.CPU25.PMI:Performance_monitoring_interrupts
94.20 ± 34% -2.0% 92.33 ± 13% interrupts.CPU25.RES:Rescheduling_interrupts
0.20 ±200% +3733.3% 7.67 ±105% interrupts.CPU25.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU26.42:PCI-MSI.112738304-edge.PCIe.PME,pciehp
1712 ± 53% -33.8% 1134 ± 16% interrupts.CPU26.CAL:Function_call_interrupts
44.60 ± 42% -32.0% 30.33 ± 28% interrupts.CPU26.IWI:IRQ_work_interrupts
104913 ± 2% -0.5% 104337 interrupts.CPU26.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU26.MCP:Machine_check_polls
35996 ± 8% +13.4% 40804 ± 2% interrupts.CPU26.NMI:Non-maskable_interrupts
35996 ± 8% +13.4% 40804 ± 2% interrupts.CPU26.PMI:Performance_monitoring_interrupts
323.60 ±138% -1.4% 319.00 ± 84% interrupts.CPU26.RES:Rescheduling_interrupts
0.20 ±200% +66.7% 0.33 ±141% interrupts.CPU26.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU27.357:PCI-MSI.67174400-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU27.358:PCI-MSI.67174400-edge.ioat-msix
1275 ± 44% -40.4% 759.67 ± 44% interrupts.CPU27.CAL:Function_call_interrupts
37.80 ± 85% -38.3% 23.33 ± 17% interrupts.CPU27.IWI:IRQ_work_interrupts
104939 ± 2% -1.8% 103032 interrupts.CPU27.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU27.MCP:Machine_check_polls
36940 ± 7% +10.5% 40803 interrupts.CPU27.NMI:Non-maskable_interrupts
36940 ± 7% +10.5% 40803 interrupts.CPU27.PMI:Performance_monitoring_interrupts
73.20 ± 42% +59.4% 116.67 ± 28% interrupts.CPU27.RES:Rescheduling_interrupts
0.00 +6.7e+101% 0.67 ±141% interrupts.CPU27.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU28.359:PCI-MSI.67176448-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU28.360:PCI-MSI.67176448-edge.ioat-msix
1391 ± 49% -56.6% 603.67 ± 43% interrupts.CPU28.CAL:Function_call_interrupts
30.40 ± 49% +22.8% 37.33 ± 13% interrupts.CPU28.IWI:IRQ_work_interrupts
105431 -1.7% 103629 interrupts.CPU28.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU28.MCP:Machine_check_polls
38038 ± 8% +5.4% 40078 ± 2% interrupts.CPU28.NMI:Non-maskable_interrupts
38038 ± 8% +5.4% 40078 ± 2% interrupts.CPU28.PMI:Performance_monitoring_interrupts
194.80 ±101% -72.6% 53.33 ± 23% interrupts.CPU28.RES:Rescheduling_interrupts
66.40 ±200% -100.0% 0.00 interrupts.CPU28.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU29.360:PCI-MSI.67178496-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU29.361:PCI-MSI.67178496-edge.ioat-msix
1191 ± 43% -43.2% 676.33 ± 46% interrupts.CPU29.CAL:Function_call_interrupts
69.60 ± 22% -42.5% 40.00 ± 12% interrupts.CPU29.IWI:IRQ_work_interrupts
106254 -2.5% 103547 interrupts.CPU29.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU29.MCP:Machine_check_polls
36910 ± 5% +11.8% 41267 interrupts.CPU29.NMI:Non-maskable_interrupts
36910 ± 5% +11.8% 41267 interrupts.CPU29.PMI:Performance_monitoring_interrupts
49.80 ± 26% +41.9% 70.67 ± 20% interrupts.CPU29.RES:Rescheduling_interrupts
0.40 ±122% +1650.0% 7.00 ±141% interrupts.CPU29.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU3.414:PCI-MSI.68681745-edge.nvme0q17
1717 ± 63% -44.3% 955.67 ± 42% interrupts.CPU3.CAL:Function_call_interrupts
53.60 ± 30% -40.9% 31.67 ± 24% interrupts.CPU3.IWI:IRQ_work_interrupts
105699 -0.9% 104797 interrupts.CPU3.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU3.MCP:Machine_check_polls
35746 ± 6% +15.9% 41429 interrupts.CPU3.NMI:Non-maskable_interrupts
35746 ± 6% +15.9% 41429 interrupts.CPU3.PMI:Performance_monitoring_interrupts
86.00 ± 22% -5.4% 81.33 ± 30% interrupts.CPU3.RES:Rescheduling_interrupts
0.00 +6.7e+102% 6.67 ± 93% interrupts.CPU3.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU30.361:PCI-MSI.67180544-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU30.362:PCI-MSI.67180544-edge.ioat-msix
1215 ± 44% -50.6% 600.00 ± 39% interrupts.CPU30.CAL:Function_call_interrupts
45.60 ± 68% -10.1% 41.00 ± 12% interrupts.CPU30.IWI:IRQ_work_interrupts
105188 -0.6% 104548 interrupts.CPU30.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU30.MCP:Machine_check_polls
35956 ± 5% +13.9% 40943 interrupts.CPU30.NMI:Non-maskable_interrupts
35956 ± 5% +13.9% 40943 interrupts.CPU30.PMI:Performance_monitoring_interrupts
71.20 ± 95% -37.7% 44.33 ± 37% interrupts.CPU30.RES:Rescheduling_interrupts
0.00 +1.6e+103% 16.00 ±141% interrupts.CPU30.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU31.362:PCI-MSI.67182592-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU31.363:PCI-MSI.67182592-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU31.401:PCI-MSI.68681732-edge.nvme0q4
1170 ± 40% -49.7% 589.33 ± 41% interrupts.CPU31.CAL:Function_call_interrupts
54.40 ± 28% -34.4% 35.67 ± 37% interrupts.CPU31.IWI:IRQ_work_interrupts
105350 ± 2% -0.6% 104674 interrupts.CPU31.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU31.MCP:Machine_check_polls
36695 ± 3% +6.1% 38944 ± 8% interrupts.CPU31.NMI:Non-maskable_interrupts
36695 ± 3% +6.1% 38944 ± 8% interrupts.CPU31.PMI:Performance_monitoring_interrupts
44.00 ± 38% +34.8% 59.33 ± 30% interrupts.CPU31.RES:Rescheduling_interrupts
44.40 ±198% -88.0% 5.33 ±141% interrupts.CPU31.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU32.363:PCI-MSI.67184640-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU32.364:PCI-MSI.67184640-edge.ioat-msix
1480 ± 60% -56.7% 641.67 ± 41% interrupts.CPU32.CAL:Function_call_interrupts
48.60 ± 68% -41.7% 28.33 ± 27% interrupts.CPU32.IWI:IRQ_work_interrupts
105223 -0.6% 104608 interrupts.CPU32.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU32.MCP:Machine_check_polls
36046 ± 6% +13.0% 40735 interrupts.CPU32.NMI:Non-maskable_interrupts
36046 ± 6% +13.0% 40735 interrupts.CPU32.PMI:Performance_monitoring_interrupts
43.80 ± 40% +113.9% 93.67 ± 78% interrupts.CPU32.RES:Rescheduling_interrupts
17.60 ±200% -29.9% 12.33 ± 53% interrupts.CPU32.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU33.364:PCI-MSI.67186688-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU33.365:PCI-MSI.67186688-edge.ioat-msix
1179 ± 38% -47.2% 623.00 ± 45% interrupts.CPU33.CAL:Function_call_interrupts
45.20 ± 68% -35.8% 29.00 ± 39% interrupts.CPU33.IWI:IRQ_work_interrupts
105800 -1.2% 104555 interrupts.CPU33.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU33.MCP:Machine_check_polls
35765 ± 7% +13.9% 40749 interrupts.CPU33.NMI:Non-maskable_interrupts
35765 ± 7% +13.9% 40749 interrupts.CPU33.PMI:Performance_monitoring_interrupts
96.80 ± 73% -5.3% 91.67 ± 65% interrupts.CPU33.RES:Rescheduling_interrupts
1.40 ±139% +328.6% 6.00 ± 98% interrupts.CPU33.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU34.365:PCI-MSI.67188736-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU34.366:PCI-MSI.67188736-edge.ioat-msix
1253 ± 34% -53.8% 579.00 ± 44% interrupts.CPU34.CAL:Function_call_interrupts
40.60 ± 68% -17.9% 33.33 ± 23% interrupts.CPU34.IWI:IRQ_work_interrupts
104952 -0.3% 104594 interrupts.CPU34.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU34.MCP:Machine_check_polls
37133 ± 6% +9.6% 40704 interrupts.CPU34.NMI:Non-maskable_interrupts
37133 ± 6% +9.6% 40704 interrupts.CPU34.PMI:Performance_monitoring_interrupts
95.80 ± 55% -56.9% 41.33 ± 18% interrupts.CPU34.RES:Rescheduling_interrupts
0.00 +1e+103% 10.00 ±114% interrupts.CPU34.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU35.403:PCI-MSI.68681734-edge.nvme0q6
1161 ± 41% -51.7% 561.00 ± 48% interrupts.CPU35.CAL:Function_call_interrupts
46.80 ± 43% -38.0% 29.00 ± 34% interrupts.CPU35.IWI:IRQ_work_interrupts
106085 -2.3% 103668 interrupts.CPU35.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU35.MCP:Machine_check_polls
35361 ± 6% +15.7% 40921 interrupts.CPU35.NMI:Non-maskable_interrupts
35361 ± 6% +15.7% 40921 interrupts.CPU35.PMI:Performance_monitoring_interrupts
60.80 ± 33% -75.9% 14.67 ± 25% interrupts.CPU35.RES:Rescheduling_interrupts
0.40 ±200% +1816.7% 7.67 ± 81% interrupts.CPU35.TLB:TLB_shootdowns
1148 ± 43% -47.3% 605.67 ± 49% interrupts.CPU36.CAL:Function_call_interrupts
30.40 ± 53% +4.2% 31.67 ± 23% interrupts.CPU36.IWI:IRQ_work_interrupts
104917 -1.4% 103480 interrupts.CPU36.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU36.MCP:Machine_check_polls
36278 ± 9% +11.8% 40572 interrupts.CPU36.NMI:Non-maskable_interrupts
36278 ± 9% +11.8% 40572 interrupts.CPU36.PMI:Performance_monitoring_interrupts
78.60 ± 44% -64.0% 28.33 ± 15% interrupts.CPU36.RES:Rescheduling_interrupts
0.00 +1.3e+103% 12.67 ± 87% interrupts.CPU36.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU37.404:PCI-MSI.68681735-edge.nvme0q7
1157 ± 43% -48.2% 599.00 ± 36% interrupts.CPU37.CAL:Function_call_interrupts
38.20 ± 48% -12.7% 33.33 ± 27% interrupts.CPU37.IWI:IRQ_work_interrupts
105480 -1.8% 103560 interrupts.CPU37.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU37.MCP:Machine_check_polls
35362 ± 5% +15.0% 40668 interrupts.CPU37.NMI:Non-maskable_interrupts
35362 ± 5% +15.0% 40668 interrupts.CPU37.PMI:Performance_monitoring_interrupts
40.20 ± 44% -44.4% 22.33 ± 25% interrupts.CPU37.RES:Rescheduling_interrupts
0.40 ±200% -16.7% 0.33 ±141% interrupts.CPU37.TLB:TLB_shootdowns
1166 ± 43% -51.2% 569.67 ± 48% interrupts.CPU38.CAL:Function_call_interrupts
34.60 ± 78% -34.5% 22.67 ± 7% interrupts.CPU38.IWI:IRQ_work_interrupts
105830 -1.8% 103888 interrupts.CPU38.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU38.MCP:Machine_check_polls
35914 ± 7% +12.9% 40541 interrupts.CPU38.NMI:Non-maskable_interrupts
35914 ± 7% +12.9% 40541 interrupts.CPU38.PMI:Performance_monitoring_interrupts
78.00 ± 64% -76.9% 18.00 ± 12% interrupts.CPU38.RES:Rescheduling_interrupts
1.80 ±173% -81.5% 0.33 ±141% interrupts.CPU38.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU39.405:PCI-MSI.68681736-edge.nvme0q8
1206 ± 44% -52.4% 575.00 ± 46% interrupts.CPU39.CAL:Function_call_interrupts
30.00 ± 53% +2.2% 30.67 ± 27% interrupts.CPU39.IWI:IRQ_work_interrupts
106171 -1.8% 104234 interrupts.CPU39.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU39.MCP:Machine_check_polls
35164 ± 5% +15.5% 40616 interrupts.CPU39.NMI:Non-maskable_interrupts
35164 ± 5% +15.5% 40616 interrupts.CPU39.PMI:Performance_monitoring_interrupts
58.60 ± 73% -53.9% 27.00 ± 19% interrupts.CPU39.RES:Rescheduling_interrupts
0.20 ±200% -100.0% 0.00 interrupts.CPU39.TLB:TLB_shootdowns
1258 ± 39% -46.0% 680.33 ± 28% interrupts.CPU4.CAL:Function_call_interrupts
49.80 ± 49% -59.8% 20.00 ± 8% interrupts.CPU4.IWI:IRQ_work_interrupts
105754 -1.0% 104715 interrupts.CPU4.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU4.MCP:Machine_check_polls
38485 ± 4% +1.8% 39159 ± 3% interrupts.CPU4.NMI:Non-maskable_interrupts
38485 ± 4% +1.8% 39159 ± 3% interrupts.CPU4.PMI:Performance_monitoring_interrupts
106.00 ± 36% +33.3% 141.33 ± 48% interrupts.CPU4.RES:Rescheduling_interrupts
0.00 +6.7e+101% 0.67 ±141% interrupts.CPU4.TLB:TLB_shootdowns
1184 ± 43% -51.2% 578.33 ± 47% interrupts.CPU40.CAL:Function_call_interrupts
31.20 ± 46% -12.4% 27.33 ± 22% interrupts.CPU40.IWI:IRQ_work_interrupts
106382 -1.9% 104310 interrupts.CPU40.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU40.MCP:Machine_check_polls
38001 ± 3% +7.3% 40775 interrupts.CPU40.NMI:Non-maskable_interrupts
38001 ± 3% +7.3% 40775 interrupts.CPU40.PMI:Performance_monitoring_interrupts
61.20 ± 54% -40.1% 36.67 ± 14% interrupts.CPU40.RES:Rescheduling_interrupts
0.00 +7.3e+102% 7.33 ±131% interrupts.CPU40.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU41.406:PCI-MSI.68681737-edge.nvme0q9
1178 ± 41% -51.0% 577.33 ± 49% interrupts.CPU41.CAL:Function_call_interrupts
46.40 ± 39% -44.0% 26.00 ± 11% interrupts.CPU41.IWI:IRQ_work_interrupts
105918 -0.6% 105239 interrupts.CPU41.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU41.MCP:Machine_check_polls
37378 +8.8% 40674 interrupts.CPU41.NMI:Non-maskable_interrupts
37378 +8.8% 40674 interrupts.CPU41.PMI:Performance_monitoring_interrupts
56.60 ± 42% -48.2% 29.33 ± 30% interrupts.CPU41.RES:Rescheduling_interrupts
0.00 +5.3e+102% 5.33 ±141% interrupts.CPU41.TLB:TLB_shootdowns
1209 ± 37% -51.3% 589.67 ± 44% interrupts.CPU42.CAL:Function_call_interrupts
38.80 ± 59% -33.8% 25.67 ± 25% interrupts.CPU42.IWI:IRQ_work_interrupts
106931 -2.9% 103872 interrupts.CPU42.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU42.MCP:Machine_check_polls
34186 ± 7% +18.6% 40549 interrupts.CPU42.NMI:Non-maskable_interrupts
34186 ± 7% +18.6% 40549 interrupts.CPU42.PMI:Performance_monitoring_interrupts
104.60 ± 29% -53.2% 49.00 ± 58% interrupts.CPU42.RES:Rescheduling_interrupts
0.00 +2.7e+102% 2.67 ±141% interrupts.CPU42.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU43.407:PCI-MSI.69206026-edge.nvme1q10
1146 ± 41% -50.9% 563.00 ± 47% interrupts.CPU43.CAL:Function_call_interrupts
60.40 ± 58% -39.3% 36.67 ± 26% interrupts.CPU43.IWI:IRQ_work_interrupts
105946 -2.1% 103720 interrupts.CPU43.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU43.MCP:Machine_check_polls
37198 ± 4% +9.7% 40800 interrupts.CPU43.NMI:Non-maskable_interrupts
37198 ± 4% +9.7% 40800 interrupts.CPU43.PMI:Performance_monitoring_interrupts
80.80 ± 42% -60.0% 32.33 ± 16% interrupts.CPU43.RES:Rescheduling_interrupts
0.60 ±133% +1177.8% 7.67 ± 70% interrupts.CPU43.TLB:TLB_shootdowns
1139 ± 42% -46.6% 608.33 ± 45% interrupts.CPU44.CAL:Function_call_interrupts
34.60 ± 68% +7.9% 37.33 ± 27% interrupts.CPU44.IWI:IRQ_work_interrupts
105117 -1.5% 103556 interrupts.CPU44.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU44.MCP:Machine_check_polls
36248 ± 8% +13.5% 41148 interrupts.CPU44.NMI:Non-maskable_interrupts
36248 ± 8% +13.5% 41148 interrupts.CPU44.PMI:Performance_monitoring_interrupts
83.40 ± 90% -27.3% 60.67 ± 26% interrupts.CPU44.RES:Rescheduling_interrupts
1.60 ±170% -37.5% 1.00 ±141% interrupts.CPU44.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU45.387:PCI-MSI.69206027-edge.nvme1q11
1167 ± 41% -49.2% 593.00 ± 51% interrupts.CPU45.CAL:Function_call_interrupts
59.80 ± 29% -61.5% 23.00 ± 3% interrupts.CPU45.IWI:IRQ_work_interrupts
106680 -3.1% 103357 interrupts.CPU45.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU45.MCP:Machine_check_polls
36324 ± 7% +10.3% 40065 interrupts.CPU45.NMI:Non-maskable_interrupts
36324 ± 7% +10.3% 40065 interrupts.CPU45.PMI:Performance_monitoring_interrupts
133.60 ± 80% -57.1% 57.33 ± 46% interrupts.CPU45.RES:Rescheduling_interrupts
1.60 ±170% -79.2% 0.33 ±141% interrupts.CPU45.TLB:TLB_shootdowns
1176 ± 42% -42.9% 671.67 ± 31% interrupts.CPU46.CAL:Function_call_interrupts
45.80 ± 68% -18.5% 37.33 ± 27% interrupts.CPU46.IWI:IRQ_work_interrupts
104406 -0.7% 103655 interrupts.CPU46.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU46.MCP:Machine_check_polls
36585 ± 10% +13.4% 41474 interrupts.CPU46.NMI:Non-maskable_interrupts
36585 ± 10% +13.4% 41474 interrupts.CPU46.PMI:Performance_monitoring_interrupts
74.00 ± 38% +1.4% 75.00 ± 87% interrupts.CPU46.RES:Rescheduling_interrupts
0.00 +2.3e+103% 22.67 ± 12% interrupts.CPU46.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU47.389:PCI-MSI.69206028-edge.nvme1q12
0.00 -100.0% 0.00 interrupts.CPU47.409:PCI-MSI.69206028-edge.nvme1q12
1415 ± 53% -57.3% 604.00 ± 43% interrupts.CPU47.CAL:Function_call_interrupts
55.00 ± 53% -57.6% 23.33 ± 7% interrupts.CPU47.IWI:IRQ_work_interrupts
105299 -1.1% 104095 interrupts.CPU47.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU47.MCP:Machine_check_polls
37302 ± 10% +9.1% 40692 interrupts.CPU47.NMI:Non-maskable_interrupts
37302 ± 10% +9.1% 40692 interrupts.CPU47.PMI:Performance_monitoring_interrupts
87.00 ± 58% -15.7% 73.33 ± 46% interrupts.CPU47.RES:Rescheduling_interrupts
2.20 ±200% +218.2% 7.00 ± 34% interrupts.CPU47.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU48.356:PCI-MSI.376832-edge.ahci[0000:00:17.0]
0.00 -100.0% 0.00 interrupts.CPU48.356:PCI-MSI.79872-edge.ioat-msix
0.00 -100.0% 0.00 interrupts.CPU48.358:PCI-MSI.376832-edge.ahci[0000:00:17.0]
0.00 -100.0% 0.00 interrupts.CPU48.360:PCI-MSI.376832-edge.ahci[0000:00:17.0]
0.00 -100.0% 0.00 interrupts.CPU48.364:PCI-MSI.376832-edge.ahci[0000:00:17.0]
2.00 ±200% -100.0% 0.00 interrupts.CPU48.366:PCI-MSI.376832-edge.ahci[0000:00:17.0]
0.00 -100.0% 0.00 interrupts.CPU48.55:PCI-MSI.12584961-edge
1280 ± 37% -55.3% 573.00 ± 47% interrupts.CPU48.CAL:Function_call_interrupts
58.00 ± 58% -42.0% 33.67 ± 30% interrupts.CPU48.IWI:IRQ_work_interrupts
106107 -1.5% 104468 interrupts.CPU48.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU48.MCP:Machine_check_polls
35915 ± 8% +14.9% 41254 interrupts.CPU48.NMI:Non-maskable_interrupts
35915 ± 8% +14.9% 41254 interrupts.CPU48.PMI:Performance_monitoring_interrupts
59.60 ± 43% -16.7% 49.67 ± 39% interrupts.CPU48.RES:Rescheduling_interrupts
0.20 ±200% -100.0% 0.00 interrupts.CPU48.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU49.345:PCI-MSI.68681728-edge.nvme0q0
0.00 -100.0% 0.00 interrupts.CPU49.345:PCI-MSI.69206016-edge.nvme1q0
0.00 -100.0% 0.00 interrupts.CPU49.346:PCI-MSI.69206016-edge.nvme1q0
0.00 -100.0% 0.00 interrupts.CPU49.348:PCI-MSI.68681728-edge.nvme0q0
0.00 -100.0% 0.00 interrupts.CPU49.349:PCI-MSI.69206016-edge.nvme1q0
0.00 -100.0% 0.00 interrupts.CPU49.350:PCI-MSI.69206016-edge.nvme1q0
0.00 -100.0% 0.00 interrupts.CPU49.351:PCI-MSI.68681728-edge.nvme0q0
0.00 -100.0% 0.00 interrupts.CPU49.351:PCI-MSI.69206016-edge.nvme1q0
0.00 -100.0% 0.00 interrupts.CPU49.3:IO-APIC.3-edge
0.00 -100.0% 0.00 interrupts.CPU49.56:PCI-MSI.12584962-edge
1304 ± 38% -49.3% 661.67 ± 41% interrupts.CPU49.CAL:Function_call_interrupts
46.20 ± 67% -38.7% 28.33 ± 34% interrupts.CPU49.IWI:IRQ_work_interrupts
104634 -0.2% 104379 interrupts.CPU49.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU49.MCP:Machine_check_polls
36454 ± 8% +13.7% 41435 ± 2% interrupts.CPU49.NMI:Non-maskable_interrupts
36454 ± 8% +13.7% 41435 ± 2% interrupts.CPU49.PMI:Performance_monitoring_interrupts
94.40 ± 9% -8.5% 86.33 ± 23% interrupts.CPU49.RES:Rescheduling_interrupts
1.60 ±170% +1191.7% 20.67 ±141% interrupts.CPU49.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU5.31:PCI-MSI.48791552-edge.PCIe.PME,pciehp
0.00 -100.0% 0.00 interrupts.CPU5.415:PCI-MSI.69206034-edge.nvme1q18
1202 ± 44% -41.0% 709.67 ± 38% interrupts.CPU5.CAL:Function_call_interrupts
73.80 ± 22% -63.4% 27.00 ± 29% interrupts.CPU5.IWI:IRQ_work_interrupts
106142 -1.9% 104111 interrupts.CPU5.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU5.MCP:Machine_check_polls
36562 ± 3% +8.9% 39809 ± 4% interrupts.CPU5.NMI:Non-maskable_interrupts
36562 ± 3% +8.9% 39809 ± 4% interrupts.CPU5.PMI:Performance_monitoring_interrupts
58.40 ± 35% +120.3% 128.67 ± 40% interrupts.CPU5.RES:Rescheduling_interrupts
0.00 +8.7e+102% 8.67 ±141% interrupts.CPU5.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU50.345:PCI-MSI.68681728-edge.nvme0q0
0.00 -100.0% 0.00 interrupts.CPU50.346:PCI-MSI.68681728-edge.nvme0q0
0.00 -100.0% 0.00 interrupts.CPU50.347:PCI-MSI.69206016-edge.nvme1q0
0.00 -100.0% 0.00 interrupts.CPU50.348:PCI-MSI.68681728-edge.nvme0q0
0.00 -100.0% 0.00 interrupts.CPU50.349:PCI-MSI.68681728-edge.nvme0q0
0.00 -100.0% 0.00 interrupts.CPU50.350:PCI-MSI.69206016-edge.nvme1q0
0.00 -100.0% 0.00 interrupts.CPU50.353:PCI-MSI.69206016-edge.nvme1q0
0.00 -100.0% 0.00 interrupts.CPU50.57:PCI-MSI.12584963-edge
1194 ± 41% -45.9% 646.67 ± 42% interrupts.CPU50.CAL:Function_call_interrupts
45.80 ± 40% -35.2% 29.67 ± 32% interrupts.CPU50.IWI:IRQ_work_interrupts
105279 -0.9% 104357 interrupts.CPU50.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU50.MCP:Machine_check_polls
37528 ± 2% +9.9% 41248 interrupts.CPU50.NMI:Non-maskable_interrupts
37528 ± 2% +9.9% 41248 interrupts.CPU50.PMI:Performance_monitoring_interrupts
93.00 ± 73% -15.1% 79.00 ± 12% interrupts.CPU50.RES:Rescheduling_interrupts
0.20 ±200% +66.7% 0.33 ±141% interrupts.CPU50.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU51.58:PCI-MSI.12584964-edge
1176 ± 42% -45.6% 640.67 ± 50% interrupts.CPU51.CAL:Function_call_interrupts
60.20 ± 40% -56.8% 26.00 ± 17% interrupts.CPU51.IWI:IRQ_work_interrupts
106157 -2.0% 104034 interrupts.CPU51.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU51.MCP:Machine_check_polls
36877 ± 6% +10.3% 40679 interrupts.CPU51.NMI:Non-maskable_interrupts
36877 ± 6% +10.3% 40679 interrupts.CPU51.PMI:Performance_monitoring_interrupts
62.40 ± 41% -18.3% 51.00 ± 40% interrupts.CPU51.RES:Rescheduling_interrupts
0.20 ±200% -100.0% 0.00 interrupts.CPU51.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU52.384:PCI-MSI.68681746-edge.nvme0q18
0.00 -100.0% 0.00 interrupts.CPU52.59:PCI-MSI.12584965-edge
1136 ± 44% -48.9% 580.33 ± 43% interrupts.CPU52.CAL:Function_call_interrupts
50.20 ± 67% -28.3% 36.00 ± 27% interrupts.CPU52.IWI:IRQ_work_interrupts
105211 -0.3% 104883 interrupts.CPU52.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU52.MCP:Machine_check_polls
37420 ± 10% +10.0% 41179 interrupts.CPU52.NMI:Non-maskable_interrupts
37420 ± 10% +10.0% 41179 interrupts.CPU52.PMI:Performance_monitoring_interrupts
87.60 ± 67% -46.0% 47.33 ± 36% interrupts.CPU52.RES:Rescheduling_interrupts
205.00 ±199% -99.8% 0.33 ±141% interrupts.CPU52.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU53.60:PCI-MSI.12584966-edge
1166 ± 44% -49.9% 584.33 ± 46% interrupts.CPU53.CAL:Function_call_interrupts
22.60 ± 9% -4.1% 21.67 ± 2% interrupts.CPU53.IWI:IRQ_work_interrupts
106072 -0.5% 105555 interrupts.CPU53.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU53.MCP:Machine_check_polls
35894 ± 7% +14.6% 41127 interrupts.CPU53.NMI:Non-maskable_interrupts
35894 ± 7% +14.6% 41127 interrupts.CPU53.PMI:Performance_monitoring_interrupts
65.40 ± 69% +5.5% 69.00 ± 19% interrupts.CPU53.RES:Rescheduling_interrupts
0.20 ±200% +66.7% 0.33 ±141% interrupts.CPU53.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU54.385:PCI-MSI.69206035-edge.nvme1q19
0.00 -100.0% 0.00 interrupts.CPU54.61:PCI-MSI.12584967-edge
1145 ± 42% -48.0% 595.00 ± 48% interrupts.CPU54.CAL:Function_call_interrupts
49.80 ± 51% -33.7% 33.00 ± 29% interrupts.CPU54.IWI:IRQ_work_interrupts
104362 +0.2% 104559 interrupts.CPU54.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU54.MCP:Machine_check_polls
36865 ± 4% +11.9% 41254 interrupts.CPU54.NMI:Non-maskable_interrupts
36865 ± 4% +11.9% 41254 interrupts.CPU54.PMI:Performance_monitoring_interrupts
50.00 ± 33% +11.3% 55.67 ± 34% interrupts.CPU54.RES:Rescheduling_interrupts
3.40 ±185% -100.0% 0.00 interrupts.CPU54.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU55.62:PCI-MSI.12584968-edge
1137 ± 42% -51.3% 554.00 ± 47% interrupts.CPU55.CAL:Function_call_interrupts
45.80 ± 63% -41.0% 27.00 ± 28% interrupts.CPU55.IWI:IRQ_work_interrupts
105488 -0.5% 105005 interrupts.CPU55.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU55.MCP:Machine_check_polls
34680 ± 7% +18.1% 40941 interrupts.CPU55.NMI:Non-maskable_interrupts
34680 ± 7% +18.1% 40941 interrupts.CPU55.PMI:Performance_monitoring_interrupts
56.60 ± 39% -2.8% 55.00 ± 73% interrupts.CPU55.RES:Rescheduling_interrupts
33.80 ±194% -99.0% 0.33 ±141% interrupts.CPU55.TLB:TLB_shootdowns
1155 ± 44% -50.9% 567.67 ± 48% interrupts.CPU56.CAL:Function_call_interrupts
30.40 ± 52% +10.7% 33.67 ± 26% interrupts.CPU56.IWI:IRQ_work_interrupts
106025 ± 2% -1.3% 104695 interrupts.CPU56.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU56.MCP:Machine_check_polls
35806 ± 6% +14.8% 41111 interrupts.CPU56.NMI:Non-maskable_interrupts
35806 ± 6% +14.8% 41111 interrupts.CPU56.PMI:Performance_monitoring_interrupts
60.60 ± 38% -13.6% 52.33 ± 50% interrupts.CPU56.RES:Rescheduling_interrupts
0.40 ±122% -100.0% 0.00 interrupts.CPU56.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU57.65:PCI-MSI.12587009-edge
1217 ± 45% -49.9% 610.33 ± 42% interrupts.CPU57.CAL:Function_call_interrupts
53.20 ± 54% -49.2% 27.00 ± 21% interrupts.CPU57.IWI:IRQ_work_interrupts
105073 +0.1% 105170 interrupts.CPU57.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU57.MCP:Machine_check_polls
36927 ± 6% +11.0% 40972 interrupts.CPU57.NMI:Non-maskable_interrupts
36927 ± 6% +11.0% 40972 interrupts.CPU57.PMI:Performance_monitoring_interrupts
67.40 ± 32% +28.1% 86.33 ± 29% interrupts.CPU57.RES:Rescheduling_interrupts
0.00 -100.0% 0.00 interrupts.CPU57.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU58.66:PCI-MSI.12587010-edge
1163 ± 43% -51.6% 562.33 ± 48% interrupts.CPU58.CAL:Function_call_interrupts
43.60 ± 48% -38.1% 27.00 ± 19% interrupts.CPU58.IWI:IRQ_work_interrupts
107719 -2.7% 104802 interrupts.CPU58.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU58.MCP:Machine_check_polls
37449 ± 7% +8.8% 40746 interrupts.CPU58.NMI:Non-maskable_interrupts
37449 ± 7% +8.8% 40746 interrupts.CPU58.PMI:Performance_monitoring_interrupts
72.00 ± 72% -56.9% 31.00 ± 44% interrupts.CPU58.RES:Rescheduling_interrupts
0.40 ±122% -100.0% 0.00 interrupts.CPU58.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU59.67:PCI-MSI.12587011-edge
1149 ± 44% -50.2% 572.33 ± 43% interrupts.CPU59.CAL:Function_call_interrupts
60.60 ± 37% -60.4% 24.00 ± 12% interrupts.CPU59.IWI:IRQ_work_interrupts
104343 +0.7% 105107 interrupts.CPU59.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU59.MCP:Machine_check_polls
35970 ± 4% +14.3% 41123 interrupts.CPU59.NMI:Non-maskable_interrupts
35970 ± 4% +14.3% 41123 interrupts.CPU59.PMI:Performance_monitoring_interrupts
49.00 ± 70% +57.8% 77.33 ± 78% interrupts.CPU59.RES:Rescheduling_interrupts
0.20 ±200% -100.0% 0.00 interrupts.CPU59.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU6.32:PCI-MSI.48807936-edge.PCIe.PME,pciehp
1160 ± 43% -48.9% 593.00 ± 47% interrupts.CPU6.CAL:Function_call_interrupts
63.80 ± 32% -62.9% 23.67 ± 13% interrupts.CPU6.IWI:IRQ_work_interrupts
106868 -1.4% 105362 interrupts.CPU6.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU6.MCP:Machine_check_polls
37563 ± 2% +9.0% 40946 interrupts.CPU6.NMI:Non-maskable_interrupts
37563 ± 2% +9.0% 40946 interrupts.CPU6.PMI:Performance_monitoring_interrupts
55.20 ± 37% -38.4% 34.00 ± 12% interrupts.CPU6.RES:Rescheduling_interrupts
0.40 ±122% +483.3% 2.33 ±141% interrupts.CPU6.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU60.68:PCI-MSI.12587012-edge
1179 ± 44% -53.6% 547.33 ± 47% interrupts.CPU60.CAL:Function_call_interrupts
37.80 ± 52% -25.0% 28.33 ± 31% interrupts.CPU60.IWI:IRQ_work_interrupts
106971 -2.0% 104804 interrupts.CPU60.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU60.MCP:Machine_check_polls
35528 ± 7% +15.7% 41121 interrupts.CPU60.NMI:Non-maskable_interrupts
35528 ± 7% +15.7% 41121 interrupts.CPU60.PMI:Performance_monitoring_interrupts
113.60 ± 66% -63.0% 42.00 ± 79% interrupts.CPU60.RES:Rescheduling_interrupts
0.60 ±133% -100.0% 0.00 interrupts.CPU60.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU61.69:PCI-MSI.12587013-edge
1188 ± 43% -52.7% 562.33 ± 47% interrupts.CPU61.CAL:Function_call_interrupts
46.80 ± 68% -53.0% 22.00 ± 12% interrupts.CPU61.IWI:IRQ_work_interrupts
105340 ± 2% -0.5% 104852 interrupts.CPU61.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU61.MCP:Machine_check_polls
37616 ± 2% +1.3% 38087 ± 9% interrupts.CPU61.NMI:Non-maskable_interrupts
37616 ± 2% +1.3% 38087 ± 9% interrupts.CPU61.PMI:Performance_monitoring_interrupts
62.60 ± 61% -67.0% 20.67 ± 18% interrupts.CPU61.RES:Rescheduling_interrupts
0.20 ±200% -100.0% 0.00 interrupts.CPU61.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU62.412:PCI-MSI.68681751-edge.nvme0q23
0.00 -100.0% 0.00 interrupts.CPU62.70:PCI-MSI.12587014-edge
1148 ± 43% -51.9% 552.67 ± 47% interrupts.CPU62.CAL:Function_call_interrupts
49.40 ± 62% -58.2% 20.67 ± 6% interrupts.CPU62.IWI:IRQ_work_interrupts
106207 -0.6% 105550 interrupts.CPU62.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU62.MCP:Machine_check_polls
36231 ± 7% +10.7% 40102 ± 3% interrupts.CPU62.NMI:Non-maskable_interrupts
36231 ± 7% +10.7% 40102 ± 3% interrupts.CPU62.PMI:Performance_monitoring_interrupts
77.60 ± 46% -57.9% 32.67 ± 35% interrupts.CPU62.RES:Rescheduling_interrupts
1.00 ±154% -33.3% 0.67 ±141% interrupts.CPU62.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU63.71:PCI-MSI.12587015-edge
1152 ± 42% -51.3% 560.67 ± 46% interrupts.CPU63.CAL:Function_call_interrupts
33.00 ± 43% -16.2% 27.67 ± 31% interrupts.CPU63.IWI:IRQ_work_interrupts
106674 -1.5% 105114 interrupts.CPU63.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU63.MCP:Machine_check_polls
36221 ± 5% +13.2% 41009 interrupts.CPU63.NMI:Non-maskable_interrupts
36221 ± 5% +13.2% 41009 interrupts.CPU63.PMI:Performance_monitoring_interrupts
72.60 ± 98% -16.0% 61.00 ± 22% interrupts.CPU63.RES:Rescheduling_interrupts
0.40 ±200% -100.0% 0.00 interrupts.CPU63.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU64.390:PCI-MSI.69206040-edge.nvme1q24
0.00 -100.0% 0.00 interrupts.CPU64.72:PCI-MSI.12587016-edge
1289 ± 49% -57.2% 552.33 ± 48% interrupts.CPU64.CAL:Function_call_interrupts
53.40 ± 53% -61.3% 20.67 ± 2% interrupts.CPU64.IWI:IRQ_work_interrupts
106599 -1.3% 105265 interrupts.CPU64.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU64.MCP:Machine_check_polls
37543 ± 3% +7.8% 40471 ± 2% interrupts.CPU64.NMI:Non-maskable_interrupts
37543 ± 3% +7.8% 40471 ± 2% interrupts.CPU64.PMI:Performance_monitoring_interrupts
42.80 ± 58% -26.8% 31.33 ± 62% interrupts.CPU64.RES:Rescheduling_interrupts
0.80 ± 93% -100.0% 0.00 interrupts.CPU64.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU65.74:PCI-MSI.12589056-edge.eth3
1150 ± 43% -52.1% 551.00 ± 47% interrupts.CPU65.CAL:Function_call_interrupts
45.40 ± 67% -55.9% 20.00 ± 4% interrupts.CPU65.IWI:IRQ_work_interrupts
105239 -0.0% 105223 interrupts.CPU65.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU65.MCP:Machine_check_polls
35937 ± 5% +11.7% 40147 ± 2% interrupts.CPU65.NMI:Non-maskable_interrupts
35937 ± 5% +11.7% 40147 ± 2% interrupts.CPU65.PMI:Performance_monitoring_interrupts
48.80 ± 33% -44.0% 27.33 ± 45% interrupts.CPU65.RES:Rescheduling_interrupts
0.20 ±200% +1233.3% 2.67 ±141% interrupts.CPU65.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU66.391:PCI-MSI.69206041-edge.nvme1q25
1189 ±161% -91.2% 104.33 ± 10% interrupts.CPU66.75:PCI-MSI.12589057-edge.eth3-TxRx-0
1151 ± 42% -50.3% 572.00 ± 46% interrupts.CPU66.CAL:Function_call_interrupts
59.00 ± 60% -50.8% 29.00 ± 36% interrupts.CPU66.IWI:IRQ_work_interrupts
106152 -0.5% 105648 interrupts.CPU66.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU66.MCP:Machine_check_polls
38444 ± 10% +8.2% 41610 ± 2% interrupts.CPU66.NMI:Non-maskable_interrupts
38444 ± 10% +8.2% 41610 ± 2% interrupts.CPU66.PMI:Performance_monitoring_interrupts
62.80 ± 27% -30.5% 43.67 ± 54% interrupts.CPU66.RES:Rescheduling_interrupts
0.40 ±200% +66.7% 0.67 ±141% interrupts.CPU66.TLB:TLB_shootdowns
1628 ±187% -94.1% 96.33 ± 24% interrupts.CPU67.76:PCI-MSI.12589058-edge.eth3-TxRx-1
1176 ± 43% -52.4% 560.33 ± 47% interrupts.CPU67.CAL:Function_call_interrupts
32.40 ± 46% -35.2% 21.00 interrupts.CPU67.IWI:IRQ_work_interrupts
106022 -0.3% 105733 interrupts.CPU67.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU67.MCP:Machine_check_polls
35901 ± 5% +14.3% 41043 interrupts.CPU67.NMI:Non-maskable_interrupts
35901 ± 5% +14.3% 41043 interrupts.CPU67.PMI:Performance_monitoring_interrupts
57.80 ± 49% -29.1% 41.00 ± 50% interrupts.CPU67.RES:Rescheduling_interrupts
568.60 ±199% -100.0% 0.00 interrupts.CPU67.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU68.392:PCI-MSI.69206042-edge.nvme1q26
3389 ±194% -93.4% 223.00 ± 95% interrupts.CPU68.77:PCI-MSI.12589059-edge.eth3-TxRx-2
1172 ± 43% -53.0% 550.33 ± 47% interrupts.CPU68.CAL:Function_call_interrupts
60.60 ± 42% -70.8% 17.67 ± 2% interrupts.CPU68.IWI:IRQ_work_interrupts
107459 -2.8% 104481 interrupts.CPU68.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU68.MCP:Machine_check_polls
35334 ± 5% +5.6% 37302 interrupts.CPU68.NMI:Non-maskable_interrupts
35334 ± 5% +5.6% 37302 interrupts.CPU68.PMI:Performance_monitoring_interrupts
65.20 ± 54% -52.5% 31.00 ± 54% interrupts.CPU68.RES:Rescheduling_interrupts
0.80 ± 93% -100.0% 0.00 interrupts.CPU68.TLB:TLB_shootdowns
432.80 ±120% -74.7% 109.67 ± 18% interrupts.CPU69.78:PCI-MSI.12589060-edge.eth3-TxRx-3
1159 ± 43% -53.1% 543.67 ± 47% interrupts.CPU69.CAL:Function_call_interrupts
24.20 ± 3% +21.2% 29.33 ± 29% interrupts.CPU69.IWI:IRQ_work_interrupts
106381 -1.6% 104721 interrupts.CPU69.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU69.MCP:Machine_check_polls
38373 +6.5% 40885 interrupts.CPU69.NMI:Non-maskable_interrupts
38373 +6.5% 40885 interrupts.CPU69.PMI:Performance_monitoring_interrupts
59.20 ± 74% -41.4% 34.67 ± 56% interrupts.CPU69.RES:Rescheduling_interrupts
0.40 ±122% -100.0% 0.00 interrupts.CPU69.TLB:TLB_shootdowns
1157 ± 42% -45.4% 632.00 ± 50% interrupts.CPU7.CAL:Function_call_interrupts
48.40 ± 61% -55.2% 21.67 ± 2% interrupts.CPU7.IWI:IRQ_work_interrupts
105610 ± 2% -0.9% 104711 interrupts.CPU7.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU7.MCP:Machine_check_polls
36345 ± 4% +13.3% 41174 interrupts.CPU7.NMI:Non-maskable_interrupts
36345 ± 4% +13.3% 41174 interrupts.CPU7.PMI:Performance_monitoring_interrupts
70.80 ± 74% -15.7% 59.67 ± 41% interrupts.CPU7.RES:Rescheduling_interrupts
0.20 ±200% +1233.3% 2.67 ±141% interrupts.CPU7.TLB:TLB_shootdowns
502.60 ±167% +21.2% 609.00 ±120% interrupts.CPU70.79:PCI-MSI.12589061-edge.eth3-TxRx-4
1142 ± 43% -51.8% 550.67 ± 46% interrupts.CPU70.CAL:Function_call_interrupts
46.20 ± 67% -56.0% 20.33 ± 8% interrupts.CPU70.IWI:IRQ_work_interrupts
105503 ± 2% -0.5% 105005 interrupts.CPU70.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU70.MCP:Machine_check_polls
37743 ± 4% +5.7% 39879 ± 4% interrupts.CPU70.NMI:Non-maskable_interrupts
37743 ± 4% +5.7% 39879 ± 4% interrupts.CPU70.PMI:Performance_monitoring_interrupts
48.80 ± 45% -18.0% 40.00 ± 72% interrupts.CPU70.RES:Rescheduling_interrupts
0.60 ±133% -44.4% 0.33 ±141% interrupts.CPU70.TLB:TLB_shootdowns
241.60 ± 72% +662.3% 1841 ±100% interrupts.CPU71.80:PCI-MSI.12589062-edge.eth3-TxRx-5
1174 ± 41% -52.7% 555.33 ± 46% interrupts.CPU71.CAL:Function_call_interrupts
48.00 ± 61% -34.7% 31.33 ± 27% interrupts.CPU71.IWI:IRQ_work_interrupts
105211 -0.3% 104865 interrupts.CPU71.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU71.MCP:Machine_check_polls
36912 ± 6% +11.3% 41085 interrupts.CPU71.NMI:Non-maskable_interrupts
36912 ± 6% +11.3% 41085 interrupts.CPU71.PMI:Performance_monitoring_interrupts
43.60 ± 65% -36.5% 27.67 ± 44% interrupts.CPU71.RES:Rescheduling_interrupts
0.80 ± 93% -58.3% 0.33 ±141% interrupts.CPU71.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU72.398:PCI-MSI.68681729-edge.nvme0q1
0.00 -100.0% 0.00 interrupts.CPU72.398:PCI-MSI.69206017-edge.nvme1q1
1281 ± 34% -54.2% 586.33 ± 44% interrupts.CPU72.CAL:Function_call_interrupts
38.00 ± 49% -37.7% 23.67 ± 15% interrupts.CPU72.IWI:IRQ_work_interrupts
104488 -0.4% 104077 interrupts.CPU72.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU72.MCP:Machine_check_polls
36746 ± 7% +11.6% 41024 interrupts.CPU72.NMI:Non-maskable_interrupts
36746 ± 7% +11.6% 41024 interrupts.CPU72.PMI:Performance_monitoring_interrupts
228.40 ±102% -82.8% 39.33 ± 30% interrupts.CPU72.RES:Rescheduling_interrupts
0.80 ±122% -100.0% 0.00 interrupts.CPU72.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU73.367:PCI-MSI.68681729-edge.nvme0q1
0.00 -100.0% 0.00 interrupts.CPU73.367:PCI-MSI.69206017-edge.nvme1q1
1294 ± 37% -51.4% 629.00 ± 42% interrupts.CPU73.CAL:Function_call_interrupts
29.80 ± 40% +4.0% 31.00 ± 35% interrupts.CPU73.IWI:IRQ_work_interrupts
106119 ± 2% -1.7% 104362 interrupts.CPU73.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU73.MCP:Machine_check_polls
38003 ± 5% +7.7% 40939 interrupts.CPU73.NMI:Non-maskable_interrupts
38003 ± 5% +7.7% 40939 interrupts.CPU73.PMI:Performance_monitoring_interrupts
75.40 ± 29% -3.6% 72.67 ± 28% interrupts.CPU73.RES:Rescheduling_interrupts
0.60 ±133% -100.0% 0.00 interrupts.CPU73.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU74.399:PCI-MSI.68681730-edge.nvme0q2
0.00 -100.0% 0.00 interrupts.CPU74.399:PCI-MSI.69206018-edge.nvme1q2
1228 ± 42% -48.3% 635.33 ± 40% interrupts.CPU74.CAL:Function_call_interrupts
61.00 ± 54% -53.6% 28.33 ± 34% interrupts.CPU74.IWI:IRQ_work_interrupts
106208 -2.4% 103690 interrupts.CPU74.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU74.MCP:Machine_check_polls
38616 ± 3% +5.6% 40768 interrupts.CPU74.NMI:Non-maskable_interrupts
38616 ± 3% +5.6% 40768 interrupts.CPU74.PMI:Performance_monitoring_interrupts
111.60 ± 73% -51.6% 54.00 ± 50% interrupts.CPU74.RES:Rescheduling_interrupts
86.80 ±198% -99.6% 0.33 ±141% interrupts.CPU74.TLB:TLB_shootdowns
1168 ± 41% -46.2% 629.00 ± 41% interrupts.CPU75.CAL:Function_call_interrupts
53.40 ± 47% -42.6% 30.67 ± 22% interrupts.CPU75.IWI:IRQ_work_interrupts
106626 -1.9% 104620 interrupts.CPU75.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU75.MCP:Machine_check_polls
38362 ± 2% +7.0% 41048 interrupts.CPU75.NMI:Non-maskable_interrupts
38362 ± 2% +7.0% 41048 interrupts.CPU75.PMI:Performance_monitoring_interrupts
68.60 ± 41% -23.7% 52.33 ± 36% interrupts.CPU75.RES:Rescheduling_interrupts
0.60 ±133% -100.0% 0.00 interrupts.CPU75.TLB:TLB_shootdowns
1129 ± 43% -44.0% 633.00 ± 31% interrupts.CPU76.CAL:Function_call_interrupts
37.60 ± 51% -7.8% 34.67 ± 29% interrupts.CPU76.IWI:IRQ_work_interrupts
106500 -2.2% 104158 interrupts.CPU76.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU76.MCP:Machine_check_polls
34992 ± 9% +16.9% 40889 interrupts.CPU76.NMI:Non-maskable_interrupts
34992 ± 9% +16.9% 40889 interrupts.CPU76.PMI:Performance_monitoring_interrupts
45.20 ± 38% -24.0% 34.33 ± 9% interrupts.CPU76.RES:Rescheduling_interrupts
0.80 ± 93% -58.3% 0.33 ±141% interrupts.CPU76.TLB:TLB_shootdowns
1146 ± 42% -44.8% 633.00 ± 46% interrupts.CPU77.CAL:Function_call_interrupts
49.40 ± 31% -56.1% 21.67 ± 2% interrupts.CPU77.IWI:IRQ_work_interrupts
106334 -1.7% 104531 interrupts.CPU77.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU77.MCP:Machine_check_polls
37297 ± 2% +7.7% 40176 interrupts.CPU77.NMI:Non-maskable_interrupts
37297 ± 2% +7.7% 40176 interrupts.CPU77.PMI:Performance_monitoring_interrupts
33.00 ± 33% +37.4% 45.33 ± 55% interrupts.CPU77.RES:Rescheduling_interrupts
0.80 ± 50% -100.0% 0.00 interrupts.CPU77.TLB:TLB_shootdowns
1140 ± 43% -49.4% 577.33 ± 45% interrupts.CPU78.CAL:Function_call_interrupts
23.40 ± 6% +55.3% 36.33 ± 24% interrupts.CPU78.IWI:IRQ_work_interrupts
106173 -2.4% 103635 interrupts.CPU78.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU78.MCP:Machine_check_polls
36301 ± 6% +12.8% 40936 interrupts.CPU78.NMI:Non-maskable_interrupts
36301 ± 6% +12.8% 40936 interrupts.CPU78.PMI:Performance_monitoring_interrupts
42.80 ± 33% -11.2% 38.00 ± 57% interrupts.CPU78.RES:Rescheduling_interrupts
0.80 ± 93% -100.0% 0.00 interrupts.CPU78.TLB:TLB_shootdowns
1163 ± 43% -51.3% 567.00 ± 46% interrupts.CPU79.CAL:Function_call_interrupts
45.20 ± 43% -45.4% 24.67 ± 11% interrupts.CPU79.IWI:IRQ_work_interrupts
105861 -1.4% 104332 interrupts.CPU79.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU79.MCP:Machine_check_polls
35532 ± 4% +17.1% 41611 ± 3% interrupts.CPU79.NMI:Non-maskable_interrupts
35532 ± 4% +17.1% 41611 ± 3% interrupts.CPU79.PMI:Performance_monitoring_interrupts
62.40 ± 66% -43.4% 35.33 ± 22% interrupts.CPU79.RES:Rescheduling_interrupts
1.00 ± 63% -66.7% 0.33 ±141% interrupts.CPU79.TLB:TLB_shootdowns
1145 ± 43% -17.6% 943.33 ± 76% interrupts.CPU8.CAL:Function_call_interrupts
44.80 ± 43% -52.4% 21.33 ± 9% interrupts.CPU8.IWI:IRQ_work_interrupts
106298 -1.6% 104562 interrupts.CPU8.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU8.MCP:Machine_check_polls
36034 ± 3% +11.6% 40213 interrupts.CPU8.NMI:Non-maskable_interrupts
36034 ± 3% +11.6% 40213 interrupts.CPU8.PMI:Performance_monitoring_interrupts
45.20 ± 50% -15.2% 38.33 ± 71% interrupts.CPU8.RES:Rescheduling_interrupts
0.60 ±200% +177.8% 1.67 ±141% interrupts.CPU8.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU80.371:PCI-MSI.69206021-edge.nvme1q5
0.00 -100.0% 0.00 interrupts.CPU80.376:PCI-MSI.68681733-edge.nvme0q5
1149 ± 43% -47.1% 607.67 ± 34% interrupts.CPU80.CAL:Function_call_interrupts
21.80 ± 7% +69.7% 37.00 ± 17% interrupts.CPU80.IWI:IRQ_work_interrupts
106798 -2.3% 104393 interrupts.CPU80.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU80.MCP:Machine_check_polls
35077 ± 6% +18.0% 41383 interrupts.CPU80.NMI:Non-maskable_interrupts
35077 ± 6% +18.0% 41383 interrupts.CPU80.PMI:Performance_monitoring_interrupts
77.80 ±104% -69.6% 23.67 ± 53% interrupts.CPU80.RES:Rescheduling_interrupts
5.40 ±163% -93.8% 0.33 ±141% interrupts.CPU80.TLB:TLB_shootdowns
1140 ± 43% -46.9% 605.00 ± 33% interrupts.CPU81.CAL:Function_call_interrupts
39.20 ± 80% +8.0% 42.33 ± 6% interrupts.CPU81.IWI:IRQ_work_interrupts
105037 -1.9% 103020 interrupts.CPU81.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU81.MCP:Machine_check_polls
37562 ± 8% +11.1% 41714 ± 2% interrupts.CPU81.NMI:Non-maskable_interrupts
37562 ± 8% +11.1% 41714 ± 2% interrupts.CPU81.PMI:Performance_monitoring_interrupts
43.60 ± 61% -40.4% 26.00 ± 21% interrupts.CPU81.RES:Rescheduling_interrupts
7.80 ±181% -95.7% 0.33 ±141% interrupts.CPU81.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU82.372:PCI-MSI.69206022-edge.nvme1q6
1162 ± 42% -45.8% 630.00 ± 52% interrupts.CPU82.CAL:Function_call_interrupts
47.20 ± 64% -39.3% 28.67 ± 35% interrupts.CPU82.IWI:IRQ_work_interrupts
106938 -2.3% 104486 interrupts.CPU82.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU82.MCP:Machine_check_polls
37757 ± 6% +9.0% 41157 ± 3% interrupts.CPU82.NMI:Non-maskable_interrupts
37757 ± 6% +9.0% 41157 ± 3% interrupts.CPU82.PMI:Performance_monitoring_interrupts
55.80 ± 82% -34.3% 36.67 ± 49% interrupts.CPU82.RES:Rescheduling_interrupts
1.80 ± 41% -81.5% 0.33 ±141% interrupts.CPU82.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU83.379:PCI-MSI.69206029-edge.nvme1q13
1182 ± 44% -47.4% 622.00 ± 42% interrupts.CPU83.CAL:Function_call_interrupts
37.80 ± 52% -1.2% 37.33 ± 27% interrupts.CPU83.IWI:IRQ_work_interrupts
105482 -0.7% 104697 interrupts.CPU83.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU83.MCP:Machine_check_polls
34286 ± 4% +20.7% 41384 interrupts.CPU83.NMI:Non-maskable_interrupts
34286 ± 4% +20.7% 41384 interrupts.CPU83.PMI:Performance_monitoring_interrupts
47.60 ± 69% -24.4% 36.00 ± 45% interrupts.CPU83.RES:Rescheduling_interrupts
3.60 ±147% -7.4% 3.33 ± 86% interrupts.CPU83.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU84.373:PCI-MSI.68681735-edge.nvme0q7
1185 ± 41% -53.1% 556.00 ± 48% interrupts.CPU84.CAL:Function_call_interrupts
45.00 ± 68% -34.8% 29.33 ± 33% interrupts.CPU84.IWI:IRQ_work_interrupts
106133 -1.3% 104799 interrupts.CPU84.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU84.MCP:Machine_check_polls
35125 ± 7% +17.1% 41123 interrupts.CPU84.NMI:Non-maskable_interrupts
35125 ± 7% +17.1% 41123 interrupts.CPU84.PMI:Performance_monitoring_interrupts
49.80 ± 56% -66.5% 16.67 ± 37% interrupts.CPU84.RES:Rescheduling_interrupts
0.60 ±133% +66.7% 1.00 ±141% interrupts.CPU84.TLB:TLB_shootdowns
1150 ± 44% -47.8% 600.33 ± 50% interrupts.CPU85.CAL:Function_call_interrupts
38.60 ± 79% -46.5% 20.67 ± 2% interrupts.CPU85.IWI:IRQ_work_interrupts
106438 -2.0% 104319 interrupts.CPU85.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU85.MCP:Machine_check_polls
37591 ± 6% +9.0% 40969 interrupts.CPU85.NMI:Non-maskable_interrupts
37591 ± 6% +9.0% 40969 interrupts.CPU85.PMI:Performance_monitoring_interrupts
45.60 ± 34% +33.8% 61.00 ± 30% interrupts.CPU85.RES:Rescheduling_interrupts
1.00 ± 63% +33.3% 1.33 ± 93% interrupts.CPU85.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU86.374:PCI-MSI.69206024-edge.nvme1q8
1184 ± 43% -45.9% 641.00 ± 51% interrupts.CPU86.CAL:Function_call_interrupts
23.40 ± 8% +56.7% 36.67 ± 26% interrupts.CPU86.IWI:IRQ_work_interrupts
104643 -1.6% 103010 interrupts.CPU86.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU86.MCP:Machine_check_polls
36619 ± 7% +12.2% 41099 interrupts.CPU86.NMI:Non-maskable_interrupts
36619 ± 7% +12.2% 41099 interrupts.CPU86.PMI:Performance_monitoring_interrupts
96.20 ± 69% -55.0% 43.33 ± 38% interrupts.CPU86.RES:Rescheduling_interrupts
0.80 ± 93% -58.3% 0.33 ±141% interrupts.CPU86.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU87.411:PCI-MSI.68681742-edge.nvme0q14
1161 ± 43% -45.8% 629.67 ± 51% interrupts.CPU87.CAL:Function_call_interrupts
38.40 ± 79% -45.3% 21.00 ± 3% interrupts.CPU87.IWI:IRQ_work_interrupts
105294 -0.4% 104894 interrupts.CPU87.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU87.MCP:Machine_check_polls
36854 ± 8% +11.0% 40894 ± 2% interrupts.CPU87.NMI:Non-maskable_interrupts
36854 ± 8% +11.0% 40894 ± 2% interrupts.CPU87.PMI:Performance_monitoring_interrupts
93.60 ± 62% -76.9% 21.67 ± 30% interrupts.CPU87.RES:Rescheduling_interrupts
13.60 ±192% -85.3% 2.00 ±108% interrupts.CPU87.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU88.375:PCI-MSI.69206025-edge.nvme1q9
1178 ± 43% -51.6% 569.67 ± 48% interrupts.CPU88.CAL:Function_call_interrupts
56.80 ± 56% -41.9% 33.00 ± 13% interrupts.CPU88.IWI:IRQ_work_interrupts
105543 -1.2% 104283 interrupts.CPU88.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU88.MCP:Machine_check_polls
36893 ± 6% +11.8% 41248 interrupts.CPU88.NMI:Non-maskable_interrupts
36893 ± 6% +11.8% 41248 interrupts.CPU88.PMI:Performance_monitoring_interrupts
68.60 ± 66% -44.1% 38.33 ± 19% interrupts.CPU88.RES:Rescheduling_interrupts
0.80 ± 93% +25.0% 1.00 interrupts.CPU88.TLB:TLB_shootdowns
1228 ± 35% -54.7% 557.00 ± 47% interrupts.CPU89.CAL:Function_call_interrupts
38.20 ± 53% -9.2% 34.67 ± 23% interrupts.CPU89.IWI:IRQ_work_interrupts
105435 -2.6% 102696 interrupts.CPU89.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU89.MCP:Machine_check_polls
36102 ± 10% +15.0% 41527 ± 2% interrupts.CPU89.NMI:Non-maskable_interrupts
36102 ± 10% +15.0% 41527 ± 2% interrupts.CPU89.PMI:Performance_monitoring_interrupts
92.80 ± 48% -70.9% 27.00 ± 42% interrupts.CPU89.RES:Rescheduling_interrupts
129.40 ±198% -99.2% 1.00 interrupts.CPU89.TLB:TLB_shootdowns
1172 ± 41% -42.2% 677.67 ± 35% interrupts.CPU9.CAL:Function_call_interrupts
68.00 ± 42% -57.8% 28.67 ± 40% interrupts.CPU9.IWI:IRQ_work_interrupts
106676 -2.3% 104256 interrupts.CPU9.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU9.MCP:Machine_check_polls
38201 ± 2% +6.9% 40850 ± 2% interrupts.CPU9.NMI:Non-maskable_interrupts
38201 ± 2% +6.9% 40850 ± 2% interrupts.CPU9.PMI:Performance_monitoring_interrupts
73.00 ± 32% -0.5% 72.67 ± 43% interrupts.CPU9.RES:Rescheduling_interrupts
0.20 ±200% +1900.0% 4.00 ±141% interrupts.CPU9.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU90.376:PCI-MSI.68681738-edge.nvme0q10
1178 ± 44% -52.5% 559.33 ± 49% interrupts.CPU90.CAL:Function_call_interrupts
30.20 ± 53% +1.5% 30.67 ± 30% interrupts.CPU90.IWI:IRQ_work_interrupts
103590 +0.1% 103729 interrupts.CPU90.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU90.MCP:Machine_check_polls
36362 ± 8% +13.2% 41148 interrupts.CPU90.NMI:Non-maskable_interrupts
36362 ± 8% +13.2% 41148 interrupts.CPU90.PMI:Performance_monitoring_interrupts
51.40 ± 51% -48.8% 26.33 ± 51% interrupts.CPU90.RES:Rescheduling_interrupts
1.40 ±106% -28.6% 1.00 interrupts.CPU90.TLB:TLB_shootdowns
1124 ± 43% -46.9% 597.67 ± 50% interrupts.CPU91.CAL:Function_call_interrupts
38.00 ± 51% +0.0% 38.00 ± 22% interrupts.CPU91.IWI:IRQ_work_interrupts
106425 ± 2% -1.8% 104491 interrupts.CPU91.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU91.MCP:Machine_check_polls
36002 ± 5% +15.3% 41517 ± 2% interrupts.CPU91.NMI:Non-maskable_interrupts
36002 ± 5% +15.3% 41517 ± 2% interrupts.CPU91.PMI:Performance_monitoring_interrupts
77.80 ± 59% -33.6% 51.67 ± 43% interrupts.CPU91.RES:Rescheduling_interrupts
0.40 ±122% +1650.0% 7.00 ±112% interrupts.CPU91.TLB:TLB_shootdowns
1166 ± 41% -48.7% 598.00 ± 39% interrupts.CPU92.CAL:Function_call_interrupts
31.00 ± 50% +4.3% 32.33 ± 27% interrupts.CPU92.IWI:IRQ_work_interrupts
105500 -1.5% 103918 interrupts.CPU92.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU92.MCP:Machine_check_polls
37063 ± 8% +12.1% 41532 ± 3% interrupts.CPU92.NMI:Non-maskable_interrupts
37063 ± 8% +12.1% 41532 ± 3% interrupts.CPU92.PMI:Performance_monitoring_interrupts
61.60 ± 35% -48.1% 32.00 ± 73% interrupts.CPU92.RES:Rescheduling_interrupts
1.00 ±109% +33.3% 1.33 ± 35% interrupts.CPU92.TLB:TLB_shootdowns
1178 ± 44% -51.8% 568.00 ± 46% interrupts.CPU93.CAL:Function_call_interrupts
70.80 ± 40% -68.9% 22.00 ± 7% interrupts.CPU93.IWI:IRQ_work_interrupts
104633 -0.5% 104066 interrupts.CPU93.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU93.MCP:Machine_check_polls
38032 ± 2% +7.7% 40967 ± 2% interrupts.CPU93.NMI:Non-maskable_interrupts
38032 ± 2% +7.7% 40967 ± 2% interrupts.CPU93.PMI:Performance_monitoring_interrupts
82.60 ± 62% -65.7% 28.33 ± 11% interrupts.CPU93.RES:Rescheduling_interrupts
1.00 -66.7% 0.33 ±141% interrupts.CPU93.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU94.378:PCI-MSI.68681740-edge.nvme0q12
0.00 -100.0% 0.00 interrupts.CPU94.378:PCI-MSI.69206028-edge.nvme1q12
1170 ± 43% -47.3% 617.33 ± 45% interrupts.CPU94.CAL:Function_call_interrupts
29.40 ± 55% +14.5% 33.67 ± 25% interrupts.CPU94.IWI:IRQ_work_interrupts
105992 -2.5% 103340 interrupts.CPU94.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU94.MCP:Machine_check_polls
34659 ± 5% +19.5% 41429 ± 2% interrupts.CPU94.NMI:Non-maskable_interrupts
34659 ± 5% +19.5% 41429 ± 2% interrupts.CPU94.PMI:Performance_monitoring_interrupts
69.80 ± 54% -65.1% 24.33 ± 10% interrupts.CPU94.RES:Rescheduling_interrupts
1.20 ± 62% -44.4% 0.67 ± 70% interrupts.CPU94.TLB:TLB_shootdowns
0.00 -100.0% 0.00 interrupts.CPU95.381:PCI-MSI.69206031-edge.nvme1q15
1255 ± 38% -51.1% 613.33 ± 40% interrupts.CPU95.CAL:Function_call_interrupts
35.80 ± 38% +0.6% 36.00 ± 27% interrupts.CPU95.IWI:IRQ_work_interrupts
106774 -2.7% 103921 interrupts.CPU95.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.CPU95.MCP:Machine_check_polls
42106 ± 9% -1.4% 41536 interrupts.CPU95.NMI:Non-maskable_interrupts
42106 ± 9% -1.4% 41536 interrupts.CPU95.PMI:Performance_monitoring_interrupts
181.80 ± 39% -60.0% 72.67 ± 16% interrupts.CPU95.RES:Rescheduling_interrupts
7.40 ± 13% -23.4% 5.67 ± 29% interrupts.CPU95.TLB:TLB_shootdowns
4228 ± 4% -33.5% 2813 ± 2% interrupts.IWI:IRQ_work_interrupts
10161513 -1.4% 10021491 interrupts.LOC:Local_timer_interrupts
0.00 -100.0% 0.00 interrupts.MCP:Machine_check_polls
3501659 +12.0% 3922994 interrupts.NMI:Non-maskable_interrupts
3501659 +12.0% 3922994 interrupts.PMI:Performance_monitoring_interrupts
7870 ± 19% -34.8% 5128 ± 9% interrupts.RES:Rescheduling_interrupts
0.00 -100.0% 0.00 interrupts.RTR:APIC_ICR_read_retries
1317 ±130% -77.8% 292.67 ± 12% interrupts.TLB:TLB_shootdowns
43.97 ± 12% -43.8 0.20 ±141% perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
43.89 ± 12% -43.7 0.20 ±141% perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
42.03 ± 14% -42.0 0.00 perf-profile.calltrace.cycles-pp.__mm_populate.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
41.95 ± 14% -42.0 0.00 perf-profile.calltrace.cycles-pp.populate_vma_page_range.__mm_populate.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
41.95 ± 14% -42.0 0.00 perf-profile.calltrace.cycles-pp.__get_user_pages.populate_vma_page_range.__mm_populate.vm_mmap_pgoff.ksys_mmap_pgoff
41.60 ± 14% -41.6 0.00 perf-profile.calltrace.cycles-pp.follow_page_pte.__get_user_pages.populate_vma_page_range.__mm_populate.vm_mmap_pgoff
49.03 ± 7% -40.1 8.92 ± 13% perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
49.00 ± 7% -40.1 8.89 ± 14% perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
48.85 ± 7% -40.0 8.81 ± 14% perf-profile.calltrace.cycles-pp.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
27.05 ± 9% -25.3 1.71 ± 44% perf-profile.calltrace.cycles-pp.__munlock_pagevec.munlock_vma_pages_range.__do_munmap.__vm_munmap.__x64_sys_munmap
27.29 ± 9% -25.3 1.98 ± 39% perf-profile.calltrace.cycles-pp.munlock_vma_pages_range.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
23.70 ± 16% -23.7 0.00 perf-profile.calltrace.cycles-pp.lru_add_drain.follow_page_pte.__get_user_pages.populate_vma_page_range.__mm_populate
17.65 ± 11% -17.6 0.00 perf-profile.calltrace.cycles-pp.lru_add_drain_cpu.lru_add_drain.follow_page_pte.__get_user_pages.populate_vma_page_range
17.64 ± 11% -17.6 0.00 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.follow_page_pte.__get_user_pages
17.56 ± 11% -17.6 0.00 perf-profile.calltrace.cycles-pp.mlock_vma_page.follow_page_pte.__get_user_pages.populate_vma_page_range.__mm_populate
17.53 ± 11% -17.5 0.00 perf-profile.calltrace.cycles-pp.isolate_lru_page.mlock_vma_page.follow_page_pte.__get_user_pages.populate_vma_page_range
17.26 ± 11% -17.3 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.isolate_lru_page.mlock_vma_page.follow_page_pte.__get_user_pages
17.18 ± 11% -17.2 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.isolate_lru_page.mlock_vma_page.follow_page_pte
17.12 ± 11% -17.1 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.follow_page_pte
17.02 ± 11% -17.0 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain
17.27 ± 6% -13.6 3.65 ± 15% perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
16.45 ± 6% -13.4 3.09 ± 18% perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
16.21 ± 5% -13.3 2.87 ± 20% perf-profile.calltrace.cycles-pp.zap_pte_range.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
16.36 ± 5% -13.3 3.04 ± 19% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.__do_munmap.__vm_munmap
13.84 ± 3% -13.0 0.87 ± 78% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain.free_pages_and_swap_cache.tlb_flush_mmu
13.75 ± 3% -13.0 0.80 ± 79% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain.free_pages_and_swap_cache
13.58 ± 9% -12.9 0.70 ± 80% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__munlock_pagevec.munlock_vma_pages_range.__do_munmap.__vm_munmap
14.59 ± 4% -12.9 1.72 ± 34% perf-profile.calltrace.cycles-pp.tlb_flush_mmu.zap_pte_range.unmap_page_range.unmap_vmas.unmap_region
13.19 ± 9% -12.8 0.35 ±141% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__munlock_pagevec.munlock_vma_pages_range.__do_munmap
14.46 ± 4% -12.8 1.65 ± 36% perf-profile.calltrace.cycles-pp.free_pages_and_swap_cache.tlb_flush_mmu.zap_pte_range.unmap_page_range.unmap_vmas
14.42 ± 4% -12.8 1.63 ± 37% perf-profile.calltrace.cycles-pp.lru_add_drain.free_pages_and_swap_cache.tlb_flush_mmu.zap_pte_range.unmap_page_range
13.13 ± 9% -12.8 0.34 ±141% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__munlock_pagevec.munlock_vma_pages_range
13.14 ± 8% -12.8 0.36 ±141% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.__munlock_pagevec.munlock_vma_pages_range.__do_munmap.__vm_munmap
14.39 ± 3% -12.8 1.61 ± 37% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.lru_add_drain.free_pages_and_swap_cache.tlb_flush_mmu.zap_pte_range
13.06 ± 8% -12.7 0.33 ±141% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.__munlock_pagevec.munlock_vma_pages_range.__do_munmap
6.06 ± 33% -6.1 0.00 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.lru_add_drain.follow_page_pte.__get_user_pages.populate_vma_page_range
5.85 ± 33% -5.8 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain.follow_page_pte.__get_user_pages
5.82 ± 33% -5.8 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain.follow_page_pte
1.69 ± 32% -1.7 0.00 perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.60 ± 32% -1.6 0.00 perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
0.93 ± 33% -0.9 0.00 perf-profile.calltrace.cycles-pp.asm_exc_page_fault
0.92 ±125% -0.9 0.00 perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
0.75 ± 24% -0.8 0.00 perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault
0.73 ± 24% -0.7 0.00 perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
1.29 ± 25% -0.6 0.74 ± 6% perf-profile.calltrace.cycles-pp.remove_vma.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
1.22 ± 25% -0.5 0.70 ± 5% perf-profile.calltrace.cycles-pp.kmem_cache_free.remove_vma.__do_munmap.__vm_munmap.__x64_sys_munmap
1.65 ± 20% -0.5 1.17 ± 5% perf-profile.calltrace.cycles-pp.__split_vma.do_madvise.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.45 ± 85% -0.5 0.00 perf-profile.calltrace.cycles-pp.free_pgtables.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
1.89 ± 20% -0.4 1.44 ± 6% perf-profile.calltrace.cycles-pp.__split_vma.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
0.43 ± 84% -0.4 0.00 perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.35 ± 82% -0.3 0.00 perf-profile.calltrace.cycles-pp.vm_area_dup.__split_vma.__do_munmap.__vm_munmap.__x64_sys_munmap
0.30 ±122% -0.3 0.00 perf-profile.calltrace.cycles-pp.unlink_file_vma.free_pgtables.unmap_region.__do_munmap.__vm_munmap
0.27 ±122% -0.3 0.00 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.23 ±122% -0.2 0.00 perf-profile.calltrace.cycles-pp.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
1.16 ± 20% -0.2 0.93 ± 5% perf-profile.calltrace.cycles-pp.__vma_adjust.__split_vma.do_madvise.__x64_sys_madvise.do_syscall_64
0.21 ±122% -0.2 0.00 perf-profile.calltrace.cycles-pp.find_vma.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
0.21 ±122% -0.2 0.00 perf-profile.calltrace.cycles-pp.flush_tlb_mm_range.zap_pte_range.unmap_page_range.unmap_vmas.unmap_region
1.27 ± 20% -0.2 1.09 ± 6% perf-profile.calltrace.cycles-pp.__vma_adjust.__split_vma.__do_munmap.__vm_munmap.__x64_sys_munmap
0.12 ±200% -0.1 0.00 perf-profile.calltrace.cycles-pp.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
0.10 ±200% -0.1 0.00 perf-profile.calltrace.cycles-pp.vm_area_dup.__split_vma.do_madvise.__x64_sys_madvise.do_syscall_64
0.10 ±200% -0.1 0.00 perf-profile.calltrace.cycles-pp.vma_interval_tree_remove.unlink_file_vma.free_pgtables.unmap_region.__do_munmap
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.syscall
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.syscall
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.do_init_module.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.do_one_initcall.do_init_module.load_module.__do_sys_finit_module.do_syscall_64
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.driver_register.do_one_initcall.do_init_module.load_module.__do_sys_finit_module
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.bus_add_driver.driver_register.do_one_initcall.do_init_module.load_module
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.bus_for_each_dev.bus_add_driver.driver_register.do_one_initcall.do_init_module
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__driver_attach.bus_for_each_dev.bus_add_driver.driver_register.do_one_initcall
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.device_driver_attach.__driver_attach.bus_for_each_dev.bus_add_driver.driver_register
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.driver_probe_device.device_driver_attach.__driver_attach.bus_for_each_dev.bus_add_driver
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.really_probe.driver_probe_device.device_driver_attach.__driver_attach.bus_for_each_dev
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.nvdimm_bus_probe.really_probe.driver_probe_device.device_driver_attach.__driver_attach
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.dax_pmem_compat_probe.nvdimm_bus_probe.really_probe.driver_probe_device.device_driver_attach
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.dev_dax_probe.dax_pmem_compat_probe.nvdimm_bus_probe.really_probe.driver_probe_device
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.devm_memremap_pages.dev_dax_probe.dax_pmem_compat_probe.nvdimm_bus_probe.really_probe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.memremap_pages.devm_memremap_pages.dev_dax_probe.dax_pmem_compat_probe.nvdimm_bus_probe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.pagemap_range.memremap_pages.devm_memremap_pages.dev_dax_probe.dax_pmem_compat_probe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.memmap_init_zone_device.pagemap_range.memremap_pages.devm_memremap_pages.dev_dax_probe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__libc_write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.generic_file_write_iter.new_sync_write.vfs_write.ksys_write.do_syscall_64
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.ksys_write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.ret_from_fork
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.drm_fb_helper_dirty_work.process_one_work.worker_thread.kthread.ret_from_fork
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.memcpy_erms.drm_fb_helper_dirty_work.process_one_work.worker_thread.kthread
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.iov_iter_copy_from_user_atomic.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.copyin.iov_iter_copy_from_user_atomic.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.finished_loading.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__mutex_lock.finished_loading.load_module.__do_sys_finit_module.do_syscall_64
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin.iov_iter_copy_from_user_atomic.generic_perform_write.__generic_file_write_iter
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.osq_lock.__mutex_lock.finished_loading.load_module.__do_sys_finit_module
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.shmem_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.prepare_to_wait_event.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.prepare_to_wait_event.load_module.__do_sys_finit_module.do_syscall_64
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.prepare_to_wait_event.load_module.__do_sys_finit_module
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.shmem_write_begin.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_write_begin.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.khugepaged.kthread.ret_from_fork
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.khugepaged_scan_mm_slot.khugepaged.kthread.ret_from_fork
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.khugepaged_scan_pmd.khugepaged_scan_mm_slot.khugepaged.kthread.ret_from_fork
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.collapse_huge_page.khugepaged_scan_pmd.khugepaged_scan_mm_slot.khugepaged.kthread
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.iov_iter_fault_in_readable.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__get_user_nocheck_1.iov_iter_fault_in_readable.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.execve
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.execve
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.asm_call_sysvec_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__softirqentry_text_start.asm_call_sysvec_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.unlock_page.shmem_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.bprm_execve.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.copy_page.collapse_huge_page.khugepaged_scan_pmd.khugepaged_scan_mm_slot.khugepaged
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.ktime_get.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__libc_fork
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.asm_exc_page_fault.__get_user_nocheck_1.iov_iter_fault_in_readable.generic_perform_write.__generic_file_write_iter
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.exec_binprm.bprm_execve.do_execveat_common.__x64_sys_execve.do_syscall_64
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__libc_start_main
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.main.__libc_start_main
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.run_builtin.main.__libc_start_main
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.cmd_record.run_builtin.main.__libc_start_main
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.__get_user_nocheck_1.iov_iter_fault_in_readable.generic_perform_write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.__get_user_nocheck_1.iov_iter_fault_in_readable
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.load_elf_binary.exec_binprm.bprm_execve.do_execveat_common.__x64_sys_execve
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.ktime_get_update_offsets_now.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.find_lock_entry.shmem_getpage_gfp.shmem_write_begin.generic_perform_write.__generic_file_write_iter
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_fork
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fork
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fork
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp._do_fork.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fork
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.change_page_attr_set_clr.set_memory_ro.module_enable_ro.load_module.__do_sys_finit_module
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.module_enable_ro.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.set_memory_ro.module_enable_ro.load_module.__do_sys_finit_module.do_syscall_64
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.record__mmap_read_evlist.cmd_record.run_builtin.main.__libc_start_main
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.read
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.perf_mmap__push.record__mmap_read_evlist.cmd_record.run_builtin.main
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.finish_wait.load_module.__do_sys_finit_module.do_syscall_64
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.finish_wait.load_module.__do_sys_finit_module
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.finish_wait.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.__get_user_nocheck_1
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.perf_mmap__read_head.perf_mmap__push.record__mmap_read_evlist.cmd_record.run_builtin
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.update_load_avg.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.copy_process._do_fork.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.task_work_run.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.task_numa_work.task_work_run.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__pthread_enable_asynccancel
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.vprintk_emit.devkmsg_emit.devkmsg_write.cold.new_sync_write.vfs_write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.devkmsg_write.cold.new_sync_write.vfs_write.ksys_write.do_syscall_64
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.devkmsg_emit.devkmsg_write.cold.new_sync_write.vfs_write.ksys_write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.up_write.generic_file_write_iter.new_sync_write.vfs_write.ksys_write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.rcu_gp_kthread.kthread.ret_from_fork
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.change_protection.change_prot_numa.task_numa_work.task_work_run.exit_to_user_mode_prepare
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.change_prot_numa.task_numa_work.task_work_run.exit_to_user_mode_prepare.irqentry_exit_to_user_mode
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.on_each_cpu.cpa_flush.change_page_attr_set_clr.set_memory_ro.module_enable_ro
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.smp_call_function_many_cond.on_each_cpu.cpa_flush.change_page_attr_set_clr.set_memory_ro
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.cpa_flush.change_page_attr_set_clr.set_memory_ro.module_enable_ro.load_module
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.secondary_startup_64
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__fget_files.__fget_light.__fdget_pos.ksys_write.do_syscall_64
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.hrtimer_active.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__pthread_disable_asynccancel
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.mutex_unlock.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__init_single_page.memmap_init_zone_device.pagemap_range.memremap_pages.devm_memremap_pages
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_write_begin.generic_perform_write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write.cold.new_sync_write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.begin_new_exec.load_elf_binary.exec_binprm.bprm_execve.do_execveat_common
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.rebalance_domains.__softirqentry_text_start.asm_call_sysvec_on_stack.do_softirq_own_stack.irq_exit_rcu
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.run_timer_softirq.__softirqentry_text_start.asm_call_sysvec_on_stack.do_softirq_own_stack.irq_exit_rcu
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__do_fault.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.shmem_add_to_page_cache.shmem_getpage_gfp.shmem_write_begin.generic_perform_write.__generic_file_write_iter
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.call_timer_fn.run_timer_softirq.__softirqentry_text_start.asm_call_sysvec_on_stack.do_softirq_own_stack
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.serial8250_console_write.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write.cold
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.rwsem_down_read_slowpath.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_read_slowpath.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_optimistic_spin.rwsem_down_read_slowpath.do_user_addr_fault.exc_page_fault
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit.devkmsg_emit
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.io_serial_in.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__x64_sys_exit_group
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.rcu_core.__softirqentry_text_start.asm_call_sysvec_on_stack.do_softirq_own_stack.irq_exit_rcu
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__mutex_lock.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.dup_mm.copy_process._do_fork.__do_sys_clone.do_syscall_64
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.osq_lock.__mutex_lock.load_module.__do_sys_finit_module.do_syscall_64
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.mmput.begin_new_exec.load_elf_binary.exec_binprm.bprm_execve
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.exit_mmap.mmput.begin_new_exec.load_elf_binary.exec_binprm
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.load_balance.rebalance_domains.__softirqentry_text_start.asm_call_sysvec_on_stack.do_softirq_own_stack
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.dup_mmap.dup_mm.copy_process._do_fork.__do_sys_clone
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.filemap_map_pages.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.update_curr.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.memcpy_erms.drm_fb_helper_dirty_work.process_one_work.worker_thread
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.flush_tlb_mm_range.pmdp_huge_clear_flush.collapse_huge_page.khugepaged_scan_pmd.khugepaged_scan_mm_slot
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.pmdp_huge_clear_flush.collapse_huge_page.khugepaged_scan_pmd.khugepaged_scan_mm_slot.khugepaged
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.on_each_cpu_cond_mask.flush_tlb_mm_range.pmdp_huge_clear_flush.collapse_huge_page.khugepaged_scan_pmd
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.wait4
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.wait4
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.setlocale
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.seq_read.proc_reg_read.vfs_read.ksys_read.do_syscall_64
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.proc_reg_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.seq_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.copy_page.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.do_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait4
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait4
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.kernel_wait4.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait4
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.smp_call_function_many_cond.on_each_cpu.__change_page_attr_set_clr.__change_page_attr_set_clr.change_page_attr_set_clr
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.on_each_cpu.__change_page_attr_set_clr.__change_page_attr_set_clr.change_page_attr_set_clr.set_memory_ro
0.00 +0.0 0.00 perf-profile.calltrace.cycles-pp.__change_page_attr_set_clr.__change_page_attr_set_clr.change_page_attr_set_clr.set_memory_ro.module_enable_ro
0.55 ± 53% +0.1 0.63 ± 7% perf-profile.calltrace.cycles-pp.vma_interval_tree_insert.__vma_adjust.__split_vma.__do_munmap.__vm_munmap
0.42 ± 82% +0.2 0.63 ± 8% perf-profile.calltrace.cycles-pp.vma_interval_tree_insert.__vma_adjust.__split_vma.do_madvise.__x64_sys_madvise
1.36 ± 24% +0.3 1.68 ± 9% perf-profile.calltrace.cycles-pp.find_vma_prev.do_madvise.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.34 ± 24% +0.3 1.67 ± 9% perf-profile.calltrace.cycles-pp.find_vma.find_vma_prev.do_madvise.__x64_sys_madvise.do_syscall_64
0.00 +0.3 0.35 ± 70% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.00 +0.5 0.53 ± 2% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.do_madvise
0.00 +0.5 0.53 ± 2% perf-profile.calltrace.cycles-pp.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.do_madvise.__x64_sys_madvise
0.00 +0.6 0.58 ± 2% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.do_madvise.__x64_sys_madvise.do_syscall_64
0.00 +0.6 0.62 ± 3% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.do_madvise.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe
97.91 +1.7 99.63 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
96.78 +2.4 99.22 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +9.1 9.05 perf-profile.calltrace.cycles-pp.xas_find.do_madvise.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.64 ± 22% +85.5 89.18 perf-profile.calltrace.cycles-pp.do_madvise.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.67 ± 22% +86.1 89.78 perf-profile.calltrace.cycles-pp.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe
80.11 ± 5% -78.0 2.12 ± 54% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
51.71 ± 5% -49.2 2.49 ± 39% perf-profile.children.cycles-pp.pagevec_lru_move_fn
50.07 ± 6% -48.4 1.62 ± 51% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
43.97 ± 12% -43.5 0.48 ± 25% perf-profile.children.cycles-pp.ksys_mmap_pgoff
43.89 ± 12% -43.4 0.47 ± 25% perf-profile.children.cycles-pp.vm_mmap_pgoff
42.03 ± 14% -41.9 0.08 ± 20% perf-profile.children.cycles-pp.__mm_populate
41.95 ± 14% -41.9 0.07 ± 20% perf-profile.children.cycles-pp.populate_vma_page_range
41.95 ± 14% -41.9 0.07 ± 20% perf-profile.children.cycles-pp.__get_user_pages
41.77 ± 14% -41.6 0.20 ± 11% perf-profile.children.cycles-pp.follow_page_pte
49.04 ± 7% -40.1 8.92 ± 13% perf-profile.children.cycles-pp.__x64_sys_munmap
49.00 ± 7% -40.1 8.90 ± 13% perf-profile.children.cycles-pp.__vm_munmap
48.87 ± 7% -40.0 8.82 ± 14% perf-profile.children.cycles-pp.__do_munmap
38.17 ± 9% -36.5 1.66 ± 36% perf-profile.children.cycles-pp.lru_add_drain
30.43 ± 5% -29.8 0.62 ± 54% perf-profile.children.cycles-pp._raw_spin_lock_irq
27.06 ± 9% -25.3 1.72 ± 43% perf-profile.children.cycles-pp.__munlock_pagevec
27.29 ± 9% -25.3 1.98 ± 39% perf-profile.children.cycles-pp.munlock_vma_pages_range
17.70 ± 11% -17.7 0.00 perf-profile.children.cycles-pp.lru_add_drain_cpu
17.57 ± 11% -17.6 0.00 perf-profile.children.cycles-pp.isolate_lru_page
17.56 ± 11% -17.6 0.00 perf-profile.children.cycles-pp.mlock_vma_page
17.28 ± 6% -13.6 3.66 ± 15% perf-profile.children.cycles-pp.unmap_region
16.45 ± 6% -13.4 3.10 ± 18% perf-profile.children.cycles-pp.unmap_vmas
16.22 ± 5% -13.3 2.87 ± 20% perf-profile.children.cycles-pp.zap_pte_range
16.37 ± 5% -13.3 3.04 ± 19% perf-profile.children.cycles-pp.unmap_page_range
14.62 ± 4% -12.9 1.75 ± 34% perf-profile.children.cycles-pp.tlb_flush_mmu
14.46 ± 4% -12.8 1.65 ± 36% perf-profile.children.cycles-pp.free_pages_and_swap_cache
1.69 ± 32% -1.3 0.37 ± 27% perf-profile.children.cycles-pp.do_mmap
1.60 ± 32% -1.3 0.35 ± 26% perf-profile.children.cycles-pp.mmap_region
3.54 ± 20% -0.9 2.61 ± 6% perf-profile.children.cycles-pp.__split_vma
0.94 ± 33% -0.9 0.02 ±141% perf-profile.children.cycles-pp.asm_exc_page_fault
0.88 ± 20% -0.8 0.07 ± 25% perf-profile.children.cycles-pp.handle_mm_fault
0.76 ± 24% -0.7 0.02 ±141% perf-profile.children.cycles-pp.exc_page_fault
0.77 ± 20% -0.7 0.05 ± 72% perf-profile.children.cycles-pp.__handle_mm_fault
0.74 ± 24% -0.7 0.02 ±141% perf-profile.children.cycles-pp.do_user_addr_fault
1.06 ± 99% -0.7 0.38 ± 6% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.68 ± 19% -0.7 0.02 ±141% perf-profile.children.cycles-pp.do_fault
1.30 ± 25% -0.5 0.75 ± 6% perf-profile.children.cycles-pp.remove_vma
0.93 ± 26% -0.5 0.39 ± 4% perf-profile.children.cycles-pp.kmem_cache_alloc
1.23 ± 25% -0.5 0.70 ± 5% perf-profile.children.cycles-pp.kmem_cache_free
0.92 ± 21% -0.4 0.48 ± 4% perf-profile.children.cycles-pp.vm_area_dup
2.43 ± 20% -0.4 2.03 ± 6% perf-profile.children.cycles-pp.__vma_adjust
0.48 ± 6% -0.3 0.15 ± 19% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.39 ± 30% -0.3 0.08 ± 30% perf-profile.children.cycles-pp.vma_link
0.62 ± 9% -0.3 0.33 ± 9% perf-profile.children.cycles-pp.release_pages
0.26 ± 19% -0.3 0.00 perf-profile.children.cycles-pp.alloc_set_pte
0.30 ± 37% -0.2 0.06 ± 19% perf-profile.children.cycles-pp.vm_area_alloc
0.36 ± 19% -0.2 0.14 ± 9% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.22 ± 23% -0.2 0.00 perf-profile.children.cycles-pp.native_irq_return_iret
0.25 ± 31% -0.2 0.04 ± 76% perf-profile.children.cycles-pp.perf_event_mmap
0.21 ± 15% -0.2 0.00 perf-profile.children.cycles-pp.filemap_map_pages
0.20 ±102% -0.2 0.00 perf-profile.children.cycles-pp.irqentry_exit_to_user_mode
0.65 ± 27% -0.2 0.45 ± 6% perf-profile.children.cycles-pp.free_pgtables
0.20 ± 21% -0.2 0.00 perf-profile.children.cycles-pp.find_get_entry
0.19 ± 24% -0.2 0.00 perf-profile.children.cycles-pp.__do_fault
0.18 ± 19% -0.2 0.00 perf-profile.children.cycles-pp.shmem_getpage_gfp
0.18 ± 18% -0.2 0.00 perf-profile.children.cycles-pp.page_add_file_rmap
0.42 ± 21% -0.2 0.25 ± 5% perf-profile.children.cycles-pp.flush_tlb_mm_range
0.18 ± 22% -0.2 0.00 perf-profile.children.cycles-pp.shmem_fault
0.17 ± 24% -0.2 0.00 perf-profile.children.cycles-pp.finish_fault
0.57 ± 27% -0.2 0.40 ± 7% perf-profile.children.cycles-pp.unlink_file_vma
0.30 ± 24% -0.2 0.14 ± 8% perf-profile.children.cycles-pp.refill_obj_stock
0.34 ± 29% -0.2 0.18 ± 2% perf-profile.children.cycles-pp.__vma_link_rb
0.17 ± 3% -0.2 0.02 ±141% perf-profile.children.cycles-pp.mem_cgroup_page_lruvec
0.15 ± 26% -0.2 0.00 perf-profile.children.cycles-pp.sync_regs
0.15 ± 2% -0.1 0.00 perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.24 ± 21% -0.1 0.10 ± 4% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.13 ± 25% -0.1 0.00 perf-profile.children.cycles-pp.find_lock_entry
0.13 ± 12% -0.1 0.00 perf-profile.children.cycles-pp.__mod_node_page_state
0.55 ± 26% -0.1 0.42 ± 5% perf-profile.children.cycles-pp.vma_interval_tree_remove
0.31 ± 21% -0.1 0.17 ± 5% perf-profile.children.cycles-pp.flush_tlb_func_common
0.24 ± 4% -0.1 0.11 ± 22% perf-profile.children.cycles-pp.mem_cgroup_update_lru_size
0.14 ± 20% -0.1 0.02 ±141% perf-profile.children.cycles-pp.xas_load
0.16 ± 11% -0.1 0.03 ± 70% perf-profile.children.cycles-pp.__mod_lruvec_state
0.27 ± 21% -0.1 0.15 ± 5% perf-profile.children.cycles-pp.native_flush_tlb_one_user
0.19 ± 19% -0.1 0.07 ± 6% perf-profile.children.cycles-pp.__rb_insert_augmented
0.12 ± 3% -0.1 0.00 perf-profile.children.cycles-pp.lru_cache_add
0.20 ± 23% -0.1 0.09 ± 5% perf-profile.children.cycles-pp.___might_sleep
0.18 ± 20% -0.1 0.07 perf-profile.children.cycles-pp.__mod_memcg_state
0.21 ± 27% -0.1 0.11 ± 8% perf-profile.children.cycles-pp.drain_obj_stock
0.11 ± 4% -0.1 0.00 perf-profile.children.cycles-pp.putback_lru_page
0.18 ± 21% -0.1 0.07 ± 6% perf-profile.children.cycles-pp.down_write_killable
0.17 ± 32% -0.1 0.07 ± 6% perf-profile.children.cycles-pp.get_obj_cgroup_from_current
0.21 ± 23% -0.1 0.11 ± 4% perf-profile.children.cycles-pp.page_remove_rmap
1.39 ± 21% -0.1 1.29 ± 7% perf-profile.children.cycles-pp.vma_interval_tree_insert
0.20 ± 26% -0.1 0.11 ± 8% perf-profile.children.cycles-pp.page_counter_uncharge
0.09 ± 32% -0.1 0.00 perf-profile.children.cycles-pp.d_path
0.39 ± 11% -0.1 0.30 ± 14% perf-profile.children.cycles-pp.mark_page_accessed
0.20 ± 24% -0.1 0.11 ± 8% perf-profile.children.cycles-pp.page_counter_cancel
0.09 ± 25% -0.1 0.00 perf-profile.children.cycles-pp.security_mmap_file
0.14 ± 22% -0.1 0.05 ± 8% perf-profile.children.cycles-pp.obj_cgroup_charge
0.14 ± 20% -0.1 0.05 perf-profile.children.cycles-pp.__entry_text_start
0.09 ± 20% -0.1 0.00 perf-profile.children.cycles-pp.__might_sleep
0.13 ± 22% -0.1 0.05 perf-profile.children.cycles-pp.up_write
0.08 ± 22% -0.1 0.00 perf-profile.children.cycles-pp.fault_dirty_shared_page
0.09 ± 19% -0.1 0.02 ±141% perf-profile.children.cycles-pp._cond_resched
0.19 ± 29% -0.1 0.12 ± 6% perf-profile.children.cycles-pp.__rb_erase_color
0.19 ± 33% -0.1 0.12 ± 6% perf-profile.children.cycles-pp.__slab_alloc
0.07 ± 23% -0.1 0.00 perf-profile.children.cycles-pp.syscall_enter_from_user_mode
0.07 ± 5% -0.1 0.00 perf-profile.children.cycles-pp.__libc_write
0.09 ± 11% -0.1 0.02 ±141% perf-profile.children.cycles-pp.ksys_write
0.09 ± 11% -0.1 0.02 ±141% perf-profile.children.cycles-pp.vfs_write
0.07 ± 24% -0.1 0.00 perf-profile.children.cycles-pp.tlb_finish_mmu
0.07 ± 18% -0.1 0.00 perf-profile.children.cycles-pp.__slab_free
0.07 ± 20% -0.1 0.00 perf-profile.children.cycles-pp.vma_interval_tree_augment_rotate
0.14 ± 23% -0.1 0.07 ± 6% perf-profile.children.cycles-pp.__memcg_kmem_uncharge
0.08 ± 12% -0.1 0.02 ±141% perf-profile.children.cycles-pp.new_sync_write
0.06 ± 7% -0.1 0.00 perf-profile.children.cycles-pp.generic_file_write_iter
0.06 ± 7% -0.1 0.00 perf-profile.children.cycles-pp.__generic_file_write_iter
0.18 ± 4% -0.1 0.12 ± 18% perf-profile.children.cycles-pp.__list_add_valid
0.06 ± 18% -0.1 0.00 perf-profile.children.cycles-pp.cpumask_any_but
0.06 ± 6% -0.1 0.00 perf-profile.children.cycles-pp.generic_perform_write
0.12 ± 21% -0.1 0.06 ± 7% perf-profile.children.cycles-pp.down_write
0.11 ± 22% -0.1 0.05 ± 8% perf-profile.children.cycles-pp.vmacache_find
0.06 ± 51% -0.1 0.00 perf-profile.children.cycles-pp.ret_from_fork
0.06 ± 51% -0.1 0.00 perf-profile.children.cycles-pp.kthread
0.17 ± 36% -0.1 0.12 ± 4% perf-profile.children.cycles-pp.___slab_alloc
0.11 ± 28% -0.0 0.06 ± 16% perf-profile.children.cycles-pp.vma_merge
0.05 ± 53% -0.0 0.00 perf-profile.children.cycles-pp.page_mapping
0.28 ± 7% -0.0 0.24 ± 14% perf-profile.children.cycles-pp.workingset_age_nonresident
0.11 ± 15% -0.0 0.06 perf-profile.children.cycles-pp.free_unref_page_list
0.30 ± 7% -0.0 0.26 ± 15% perf-profile.children.cycles-pp.workingset_activation
0.04 ± 51% -0.0 0.00 perf-profile.children.cycles-pp.follow_page_mask
0.04 ± 87% -0.0 0.00 perf-profile.children.cycles-pp.prepend_path
0.44 ± 6% -0.0 0.40 ± 19% perf-profile.children.cycles-pp.__activate_page
0.04 ± 85% -0.0 0.00 perf-profile.children.cycles-pp.perf_iterate_sb
0.04 ± 83% -0.0 0.00 perf-profile.children.cycles-pp.userfaultfd_unmap_prep
0.04 ± 82% -0.0 0.00 perf-profile.children.cycles-pp.syscall
0.04 ± 82% -0.0 0.00 perf-profile.children.cycles-pp.__do_sys_finit_module
0.04 ± 82% -0.0 0.00 perf-profile.children.cycles-pp.load_module
0.04 ± 83% -0.0 0.00 perf-profile.children.cycles-pp.vma_gap_callbacks_rotate
0.04 ± 83% -0.0 0.00 perf-profile.children.cycles-pp.__memcg_kmem_charge
0.03 ± 82% -0.0 0.00 perf-profile.children.cycles-pp.up_read
0.03 ± 82% -0.0 0.00 perf-profile.children.cycles-pp.tlb_gather_mmu
0.03 ±123% -0.0 0.00 perf-profile.children.cycles-pp.get_unmapped_area
0.13 ± 20% -0.0 0.10 ± 16% perf-profile.children.cycles-pp.__list_del_entry_valid
0.13 ± 13% -0.0 0.11 ± 12% perf-profile.children.cycles-pp.__munlock_isolate_lru_page
0.02 ±125% -0.0 0.00 perf-profile.children.cycles-pp.memcpy_erms
0.02 ±125% -0.0 0.00 perf-profile.children.cycles-pp.common_file_perm
0.02 ±122% -0.0 0.00 perf-profile.children.cycles-pp.rcu_all_qs
0.02 ±122% -0.0 0.00 perf-profile.children.cycles-pp.page_counter_try_charge
0.02 ±122% -0.0 0.00 perf-profile.children.cycles-pp.__count_memcg_events
0.02 ±122% -0.0 0.00 perf-profile.children.cycles-pp.downgrade_write
0.02 ±123% -0.0 0.00 perf-profile.children.cycles-pp.fput_many
0.02 ±123% -0.0 0.00 perf-profile.children.cycles-pp.put_cpu_partial
0.02 ±123% -0.0 0.00 perf-profile.children.cycles-pp.set_page_dirty
0.02 ±122% -0.0 0.00 perf-profile.children.cycles-pp.__set_page_dirty_no_writeback
0.01 ±200% -0.0 0.00 perf-profile.children.cycles-pp.vprintk_emit
0.01 ±200% -0.0 0.00 perf-profile.children.cycles-pp.console_unlock
0.01 ±200% -0.0 0.00 perf-profile.children.cycles-pp.prepend_name
0.01 ±200% -0.0 0.00 perf-profile.children.cycles-pp.__fget_files
0.01 ±200% -0.0 0.00 perf-profile.children.cycles-pp.unlock_page
0.01 ±200% -0.0 0.00 perf-profile.children.cycles-pp.activate_page
0.01 ±200% -0.0 0.00 perf-profile.children.cycles-pp.serial8250_console_write
0.01 ±200% -0.0 0.00 perf-profile.children.cycles-pp.uart_console_write
0.01 ±200% -0.0 0.00 perf-profile.children.cycles-pp._find_next_bit
0.01 ±200% -0.0 0.00 perf-profile.children.cycles-pp.khugepaged
0.01 ±200% -0.0 0.00 perf-profile.children.cycles-pp.khugepaged_scan_mm_slot
0.01 ±200% -0.0 0.00 perf-profile.children.cycles-pp.khugepaged_scan_pmd
0.01 ±200% -0.0 0.00 perf-profile.children.cycles-pp.collapse_huge_page
0.01 ±200% -0.0 0.00 perf-profile.children.cycles-pp.__remove_shared_vm_struct
0.08 ± 13% -0.0 0.07 ± 11% perf-profile.children.cycles-pp.try_grab_page
0.08 ± 24% -0.0 0.07 ± 6% perf-profile.children.cycles-pp.vmacache_update
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_init_module
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_one_initcall
0.00 +0.0 0.00 perf-profile.children.cycles-pp.driver_register
0.00 +0.0 0.00 perf-profile.children.cycles-pp.bus_add_driver
0.00 +0.0 0.00 perf-profile.children.cycles-pp.bus_for_each_dev
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__driver_attach
0.00 +0.0 0.00 perf-profile.children.cycles-pp.device_driver_attach
0.00 +0.0 0.00 perf-profile.children.cycles-pp.driver_probe_device
0.00 +0.0 0.00 perf-profile.children.cycles-pp.really_probe
0.00 +0.0 0.00 perf-profile.children.cycles-pp.nvdimm_bus_probe
0.00 +0.0 0.00 perf-profile.children.cycles-pp.dax_pmem_compat_probe
0.00 +0.0 0.00 perf-profile.children.cycles-pp.dev_dax_probe
0.00 +0.0 0.00 perf-profile.children.cycles-pp.devm_memremap_pages
0.00 +0.0 0.00 perf-profile.children.cycles-pp.memremap_pages
0.00 +0.0 0.00 perf-profile.children.cycles-pp.pagemap_range
0.00 +0.0 0.00 perf-profile.children.cycles-pp.memmap_init_zone_device
0.00 +0.0 0.00 perf-profile.children.cycles-pp.worker_thread
0.00 +0.0 0.00 perf-profile.children.cycles-pp.process_one_work
0.00 +0.0 0.00 perf-profile.children.cycles-pp.drm_fb_helper_dirty_work
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__mutex_lock
0.00 +0.0 0.00 perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.00 +0.0 0.00 perf-profile.children.cycles-pp.iov_iter_copy_from_user_atomic
0.00 +0.0 0.00 perf-profile.children.cycles-pp.copyin
0.00 +0.0 0.00 perf-profile.children.cycles-pp.osq_lock
0.00 +0.0 0.00 perf-profile.children.cycles-pp.finished_loading
0.00 +0.0 0.00 perf-profile.children.cycles-pp.shmem_write_end
0.00 +0.0 0.00 perf-profile.children.cycles-pp.prepare_to_wait_event
0.00 +0.0 0.00 perf-profile.children.cycles-pp.copy_page
0.00 +0.0 0.00 perf-profile.children.cycles-pp.shmem_write_begin
0.00 +0.0 0.00 perf-profile.children.cycles-pp.iov_iter_fault_in_readable
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__get_user_nocheck_1
0.00 +0.0 0.00 perf-profile.children.cycles-pp.irq_exit_rcu
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__x64_sys_execve
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_execveat_common
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__softirqentry_text_start
0.00 +0.0 0.00 perf-profile.children.cycles-pp.execve
0.00 +0.0 0.00 perf-profile.children.cycles-pp.on_each_cpu
0.00 +0.0 0.00 perf-profile.children.cycles-pp.smp_call_function_many_cond
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_softirq_own_stack
0.00 +0.0 0.00 perf-profile.children.cycles-pp.bprm_execve
0.00 +0.0 0.00 perf-profile.children.cycles-pp.change_page_attr_set_clr
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__libc_fork
0.00 +0.0 0.00 perf-profile.children.cycles-pp.exec_binprm
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__libc_start_main
0.00 +0.0 0.00 perf-profile.children.cycles-pp.main
0.00 +0.0 0.00 perf-profile.children.cycles-pp.run_builtin
0.00 +0.0 0.00 perf-profile.children.cycles-pp.cmd_record
0.00 +0.0 0.00 perf-profile.children.cycles-pp.load_elf_binary
0.00 +0.0 0.00 perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.00 +0.0 0.00 perf-profile.children.cycles-pp.write
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__sched_text_start
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__fdget_pos
0.00 +0.0 0.00 perf-profile.children.cycles-pp.cpa_flush
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__do_sys_clone
0.00 +0.0 0.00 perf-profile.children.cycles-pp._do_fork
0.00 +0.0 0.00 perf-profile.children.cycles-pp.try_to_wake_up
0.00 +0.0 0.00 perf-profile.children.cycles-pp.module_enable_ro
0.00 +0.0 0.00 perf-profile.children.cycles-pp.set_memory_ro
0.00 +0.0 0.00 perf-profile.children.cycles-pp.ksys_read
0.00 +0.0 0.00 perf-profile.children.cycles-pp.vfs_read
0.00 +0.0 0.00 perf-profile.children.cycles-pp.record__mmap_read_evlist
0.00 +0.0 0.00 perf-profile.children.cycles-pp.schedule
0.00 +0.0 0.00 perf-profile.children.cycles-pp.read
0.00 +0.0 0.00 perf-profile.children.cycles-pp.mmput
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_filp_open
0.00 +0.0 0.00 perf-profile.children.cycles-pp.path_openat
0.00 +0.0 0.00 perf-profile.children.cycles-pp.perf_mmap__push
0.00 +0.0 0.00 perf-profile.children.cycles-pp.finish_wait
0.00 +0.0 0.00 perf-profile.children.cycles-pp.exit_mmap
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_sys_open
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_sys_openat2
0.00 +0.0 0.00 perf-profile.children.cycles-pp.perf_mmap__read_head
0.00 +0.0 0.00 perf-profile.children.cycles-pp.select_task_rq_fair
0.00 +0.0 0.00 perf-profile.children.cycles-pp._vm_unmap_aliases
0.00 +0.0 0.00 perf-profile.children.cycles-pp.copy_process
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__alloc_pages_nodemask
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__x64_sys_exit_group
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_group_exit
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_exit
0.00 +0.0 0.00 perf-profile.children.cycles-pp.seq_read
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__purge_vmap_area_lazy
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__pthread_enable_asynccancel
0.00 +0.0 0.00 perf-profile.children.cycles-pp.devkmsg_write.cold
0.00 +0.0 0.00 perf-profile.children.cycles-pp.devkmsg_emit
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__init_single_page
0.00 +0.0 0.00 perf-profile.children.cycles-pp.wp_page_copy
0.00 +0.0 0.00 perf-profile.children.cycles-pp.change_protection
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__fget_light
0.00 +0.0 0.00 perf-profile.children.cycles-pp.rcu_gp_kthread
0.00 +0.0 0.00 perf-profile.children.cycles-pp.get_page_from_freelist
0.00 +0.0 0.00 perf-profile.children.cycles-pp.load_balance
0.00 +0.0 0.00 perf-profile.children.cycles-pp.change_prot_numa
0.00 +0.0 0.00 perf-profile.children.cycles-pp.mutex_unlock
0.00 +0.0 0.00 perf-profile.children.cycles-pp.secondary_startup_64
0.00 +0.0 0.00 perf-profile.children.cycles-pp.start_secondary
0.00 +0.0 0.00 perf-profile.children.cycles-pp.cpu_startup_entry
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_idle
0.00 +0.0 0.00 perf-profile.children.cycles-pp.rebalance_domains
0.00 +0.0 0.00 perf-profile.children.cycles-pp.hrtimer_active
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__pthread_disable_asynccancel
0.00 +0.0 0.00 perf-profile.children.cycles-pp.cpuidle_enter
0.00 +0.0 0.00 perf-profile.children.cycles-pp.cpuidle_enter_state
0.00 +0.0 0.00 perf-profile.children.cycles-pp.intel_idle
0.00 +0.0 0.00 perf-profile.children.cycles-pp.begin_new_exec
0.00 +0.0 0.00 perf-profile.children.cycles-pp.mutex_lock
0.00 +0.0 0.00 perf-profile.children.cycles-pp.run_timer_softirq
0.00 +0.0 0.00 perf-profile.children.cycles-pp.io_serial_in
0.00 +0.0 0.00 perf-profile.children.cycles-pp.shmem_add_to_page_cache
0.00 +0.0 0.00 perf-profile.children.cycles-pp.call_timer_fn
0.00 +0.0 0.00 perf-profile.children.cycles-pp.walk_component
0.00 +0.0 0.00 perf-profile.children.cycles-pp.wait_for_xmitr
0.00 +0.0 0.00 perf-profile.children.cycles-pp.clear_page_erms
0.00 +0.0 0.00 perf-profile.children.cycles-pp.dup_mm
0.00 +0.0 0.00 perf-profile.children.cycles-pp.link_path_walk
0.00 +0.0 0.00 perf-profile.children.cycles-pp.rcu_core
0.00 +0.0 0.00 perf-profile.children.cycles-pp.serial8250_console_putchar
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__vunmap
0.00 +0.0 0.00 perf-profile.children.cycles-pp.change_p4d_range
0.00 +0.0 0.00 perf-profile.children.cycles-pp.set_pfnblock_flags_mask
0.00 +0.0 0.00 perf-profile.children.cycles-pp.find_busiest_group
0.00 +0.0 0.00 perf-profile.children.cycles-pp.perf_mmap_fault
0.00 +0.0 0.00 perf-profile.children.cycles-pp.dup_mmap
0.00 +0.0 0.00 perf-profile.children.cycles-pp.set_memory_x
0.00 +0.0 0.00 perf-profile.children.cycles-pp.update_sd_lb_stats
0.00 +0.0 0.00 perf-profile.children.cycles-pp.select_idle_sibling
0.00 +0.0 0.00 perf-profile.children.cycles-pp.prep_new_page
0.00 +0.0 0.00 perf-profile.children.cycles-pp.proc_reg_read
0.00 +0.0 0.00 perf-profile.children.cycles-pp.enqueue_task_fair
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_anonymous_page
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__update_load_avg_se
0.00 +0.0 0.00 perf-profile.children.cycles-pp.ttwu_do_activate
0.00 +0.0 0.00 perf-profile.children.cycles-pp.setlocale
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__wake_up_common_lock
0.00 +0.0 0.00 perf-profile.children.cycles-pp.find_idlest_group
0.00 +0.0 0.00 perf-profile.children.cycles-pp.pick_next_task_fair
0.00 +0.0 0.00 perf-profile.children.cycles-pp.alloc_pages_vma
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__wake_up_common
0.00 +0.0 0.00 perf-profile.children.cycles-pp.enqueue_entity
0.00 +0.0 0.00 perf-profile.children.cycles-pp.flush_tlb_kernel_range
0.00 +0.0 0.00 perf-profile.children.cycles-pp.autoremove_wake_function
0.00 +0.0 0.00 perf-profile.children.cycles-pp.rcu_do_batch
0.00 +0.0 0.00 perf-profile.children.cycles-pp.schedule_timeout
0.00 +0.0 0.00 perf-profile.children.cycles-pp.force_qs_rnp
0.00 +0.0 0.00 perf-profile.children.cycles-pp.lapic_next_deadline
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_huge_pmd_numa_page
0.00 +0.0 0.00 perf-profile.children.cycles-pp.migrate_misplaced_transhuge_page
0.00 +0.0 0.00 perf-profile.children.cycles-pp.down_read_trylock
0.00 +0.0 0.00 perf-profile.children.cycles-pp.update_rq_clock
0.00 +0.0 0.00 perf-profile.children.cycles-pp.migrate_page_copy
0.00 +0.0 0.00 perf-profile.children.cycles-pp.irqtime_account_irq
0.00 +0.0 0.00 perf-profile.children.cycles-pp.irq_work_run_list
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__open64_nocancel
0.00 +0.0 0.00 perf-profile.children.cycles-pp.pipe_write
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__lookup_slow
0.00 +0.0 0.00 perf-profile.children.cycles-pp.mem_cgroup_charge
0.00 +0.0 0.00 perf-profile.children.cycles-pp.irq_work_single
0.00 +0.0 0.00 perf-profile.children.cycles-pp.newidle_balance
0.00 +0.0 0.00 perf-profile.children.cycles-pp.rmqueue
0.00 +0.0 0.00 perf-profile.children.cycles-pp.new_sync_read
0.00 +0.0 0.00 perf-profile.children.cycles-pp.sched_exec
0.00 +0.0 0.00 perf-profile.children.cycles-pp.update_cfs_group
0.00 +0.0 0.00 perf-profile.children.cycles-pp.free_module
0.00 +0.0 0.00 perf-profile.children.cycles-pp.mutex_spin_on_owner
0.00 +0.0 0.00 perf-profile.children.cycles-pp.wake_up_new_task
0.00 +0.0 0.00 perf-profile.children.cycles-pp.run_local_timers
0.00 +0.0 0.00 perf-profile.children.cycles-pp.shmem_alloc_and_acct_page
0.00 +0.0 0.00 perf-profile.children.cycles-pp.asm_sysvec_irq_work
0.00 +0.0 0.00 perf-profile.children.cycles-pp.sysvec_irq_work
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__sysvec_irq_work
0.00 +0.0 0.00 perf-profile.children.cycles-pp.arch_scale_freq_tick
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__account_scheduler_latency
0.00 +0.0 0.00 perf-profile.children.cycles-pp.diskstats_show
0.00 +0.0 0.00 perf-profile.children.cycles-pp.smpboot_thread_fn
0.00 +0.0 0.00 perf-profile.children.cycles-pp.copy_page_range
0.00 +0.0 0.00 perf-profile.children.cycles-pp.filename_lookup
0.00 +0.0 0.00 perf-profile.children.cycles-pp.available_idle_cpu
0.00 +0.0 0.00 perf-profile.children.cycles-pp.path_lookupat
0.00 +0.0 0.00 perf-profile.children.cycles-pp.irq_work_run
0.00 +0.0 0.00 perf-profile.children.cycles-pp.printk
0.00 +0.0 0.00 perf-profile.children.cycles-pp.unlink_anon_vmas
0.00 +0.0 0.00 perf-profile.children.cycles-pp.wait4
0.00 +0.0 0.00 perf-profile.children.cycles-pp.alloc_empty_file
0.00 +0.0 0.00 perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__alloc_file
0.00 +0.0 0.00 perf-profile.children.cycles-pp._raw_spin_trylock
0.00 +0.0 0.00 perf-profile.children.cycles-pp.update_irq_load_avg
0.00 +0.0 0.00 perf-profile.children.cycles-pp.sched_setaffinity
0.00 +0.0 0.00 perf-profile.children.cycles-pp.lookup_fast
0.00 +0.0 0.00 perf-profile.children.cycles-pp.step_into
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__clear_user
0.00 +0.0 0.00 perf-profile.children.cycles-pp.shmem_alloc_page
0.00 +0.0 0.00 perf-profile.children.cycles-pp.d_alloc_parallel
0.00 +0.0 0.00 perf-profile.children.cycles-pp.sched_clock_cpu
0.00 +0.0 0.00 perf-profile.children.cycles-pp.get_arg_page
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__get_user_pages_remote
0.00 +0.0 0.00 perf-profile.children.cycles-pp.dput
0.00 +0.0 0.00 perf-profile.children.cycles-pp.sysfs_kf_seq_show
0.00 +0.0 0.00 perf-profile.children.cycles-pp.sum_zone_numa_state
0.00 +0.0 0.00 perf-profile.children.cycles-pp.update_blocked_averages
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__do_sys_wait4
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__x64_sys_mprotect
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_mprotect_pkey
0.00 +0.0 0.00 perf-profile.children.cycles-pp.dev_attr_show
0.00 +0.0 0.00 perf-profile.children.cycles-pp.run_rebalance_domains
0.00 +0.0 0.00 perf-profile.children.cycles-pp.kernel_wait4
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_wait
0.00 +0.0 0.00 perf-profile.children.cycles-pp.perf_mmap_to_page
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__get_free_pages
0.00 +0.0 0.00 perf-profile.children.cycles-pp.vmstat_start
0.00 +0.0 0.00 perf-profile.children.cycles-pp.sum_vm_events
0.00 +0.0 0.00 perf-profile.children.cycles-pp.pte_alloc_one
0.00 +0.0 0.00 perf-profile.children.cycles-pp.account_user_time
0.00 +0.0 0.00 perf-profile.children.cycles-pp.asm_sysvec_reschedule_ipi
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__switch_to
0.00 +0.0 0.00 perf-profile.children.cycles-pp.mprotect_fixup
0.00 +0.0 0.00 perf-profile.children.cycles-pp.evlist__disable
0.00 +0.0 0.00 perf-profile.children.cycles-pp.elf_map
0.00 +0.0 0.00 perf-profile.children.cycles-pp.finish_task_switch
0.00 +0.0 0.00 perf-profile.children.cycles-pp.stack_trace_save_tsk
0.00 +0.0 0.00 perf-profile.children.cycles-pp.sched_clock
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_open_execat
0.00 +0.0 0.00 perf-profile.children.cycles-pp.release_pte_page
0.00 +0.0 0.00 perf-profile.children.cycles-pp.open64
0.00 +0.0 0.00 perf-profile.children.cycles-pp._dl_addr
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__memcg_kmem_charge_page
0.00 +0.0 0.00 perf-profile.children.cycles-pp.read_tsc
0.00 +0.0 0.00 perf-profile.children.cycles-pp.try_charge
0.00 +0.0 0.00 perf-profile.children.cycles-pp.blocking_notifier_call_chain
0.00 +0.0 0.00 perf-profile.children.cycles-pp.trace_module_notify
0.00 +0.0 0.00 perf-profile.children.cycles-pp.rmqueue_bulk
0.00 +0.0 0.00 perf-profile.children.cycles-pp.asm_sysvec_call_function_single
0.00 +0.0 0.00 perf-profile.children.cycles-pp.delay_tsc
0.00 +0.0 0.00 perf-profile.children.cycles-pp.update_nohz_stats
0.00 +0.0 0.00 perf-profile.children.cycles-pp.switch_mm_irqs_off
0.00 +0.0 0.00 perf-profile.children.cycles-pp.load_elf_interp
0.00 +0.0 0.00 perf-profile.children.cycles-pp.d_alloc
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_dentry_open
0.00 +0.0 0.00 perf-profile.children.cycles-pp.pipe_read
0.00 +0.0 0.00 perf-profile.children.cycles-pp.native_sched_clock
0.00 +0.0 0.00 perf-profile.children.cycles-pp.show_stat
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__do_sys_newstat
0.00 +0.0 0.00 perf-profile.children.cycles-pp.vfs_statx
0.00 +0.0 0.00 perf-profile.children.cycles-pp.calc_global_load_tick
0.00 +0.0 0.00 perf-profile.children.cycles-pp.asm_common_interrupt
0.00 +0.0 0.00 perf-profile.children.cycles-pp.reweight_entity
0.00 +0.0 0.00 perf-profile.children.cycles-pp.copy_strings
0.00 +0.0 0.00 perf-profile.children.cycles-pp.common_interrupt
0.00 +0.0 0.00 perf-profile.children.cycles-pp.run_ksoftirqd
0.00 +0.0 0.00 perf-profile.children.cycles-pp.copy_pte_range
0.00 +0.0 0.00 perf-profile.children.cycles-pp.dequeue_task_fair
0.00 +0.0 0.00 perf-profile.children.cycles-pp.llist_add_batch
0.00 +0.0 0.00 perf-profile.children.cycles-pp.part_stat_read_all
0.00 +0.0 0.00 perf-profile.children.cycles-pp.check_preempt_curr
0.00 +0.0 0.00 perf-profile.children.cycles-pp.wait_task_zombie
0.00 +0.0 0.00 perf-profile.children.cycles-pp.ttwu_do_wakeup
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__mmap
0.00 +0.0 0.00 perf-profile.children.cycles-pp.anon_vma_fork
0.00 +0.0 0.00 perf-profile.children.cycles-pp.copy_string_kernel
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__pte_alloc
0.00 +0.0 0.00 perf-profile.children.cycles-pp.irq_enter_rcu
0.00 +0.0 0.00 perf-profile.children.cycles-pp.unlazy_walk
0.00 +0.0 0.00 perf-profile.children.cycles-pp.switch_fpu_return
0.00 +0.0 0.00 perf-profile.children.cycles-pp.check_preempt_wakeup
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__fput
0.00 +0.0 0.00 perf-profile.children.cycles-pp.arch_stack_walk
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__legitimize_path
0.00 +0.0 0.00 perf-profile.children.cycles-pp.alloc_bprm
0.00 +0.0 0.00 perf-profile.children.cycles-pp.release_task
0.00 +0.0 0.00 perf-profile.children.cycles-pp.vmstat_update
0.00 +0.0 0.00 perf-profile.children.cycles-pp.refresh_cpu_vm_stats
0.00 +0.0 0.00 perf-profile.children.cycles-pp.generic_file_buffered_read
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__pmd_alloc
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__close
0.00 +0.0 0.00 perf-profile.children.cycles-pp.blk_mq_queue_tag_busy_iter
0.00 +0.0 0.00 perf-profile.children.cycles-pp.mm_init
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_faccessat
0.00 +0.0 0.00 perf-profile.children.cycles-pp.blk_mq_in_flight
0.00 +0.0 0.00 perf-profile.children.cycles-pp.sysvec_call_function_single
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__vmalloc_node_range
0.00 +0.0 0.00 perf-profile.children.cycles-pp.malloc
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__dentry_kill
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__x64_sys_sched_setaffinity
0.00 +0.0 0.00 perf-profile.children.cycles-pp.vt_console_print
0.00 +0.0 0.00 perf-profile.children.cycles-pp.lf
0.00 +0.0 0.00 perf-profile.children.cycles-pp.con_scroll
0.00 +0.0 0.00 perf-profile.children.cycles-pp.fbcon_scroll
0.00 +0.0 0.00 perf-profile.children.cycles-pp.node_read_numastat
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__queue_work
0.00 +0.0 0.00 perf-profile.children.cycles-pp.fbcon_redraw
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__pud_alloc
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__sysvec_call_function_single
0.00 +0.0 0.00 perf-profile.children.cycles-pp.sched_ttwu_pending
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__x64_sys_poll
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_sys_poll
0.00 +0.0 0.00 perf-profile.children.cycles-pp.node_read_vmstat
0.00 +0.0 0.00 perf-profile.children.cycles-pp.schedule_tail
0.00 +0.0 0.00 perf-profile.children.cycles-pp.setup_arg_pages
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__poll
0.00 +0.0 0.00 perf-profile.children.cycles-pp.xas_find_conflict
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__accumulate_pelt_segments
0.00 +0.0 0.00 perf-profile.children.cycles-pp.anon_vma_clone
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__put_user_nocheck_4
0.00 +0.0 0.00 perf-profile.children.cycles-pp.dequeue_entity
0.00 +0.0 0.00 perf-profile.children.cycles-pp.call_rcu
0.00 +0.0 0.00 perf-profile.children.cycles-pp.pgd_alloc
0.00 +0.0 0.00 perf-profile.children.cycles-pp.fbcon_putcs
0.00 +0.0 0.00 perf-profile.children.cycles-pp.bit_putcs
0.00 +0.0 0.00 perf-profile.children.cycles-pp.cpuacct_charge
0.00 +0.0 0.00 perf-profile.children.cycles-pp.rcu_segcblist_enqueue
0.00 +0.0 0.00 perf-profile.children.cycles-pp.free_pcppages_bulk
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__legitimize_mnt
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__d_lookup_rcu
0.00 +0.0 0.00 perf-profile.children.cycles-pp.d_add
0.00 +0.0 0.00 perf-profile.children.cycles-pp.net_rx_action
0.00 +0.0 0.00 perf-profile.children.cycles-pp.touch_atime
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__x64_sys_readlink
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_readlinkat
0.00 +0.0 0.00 perf-profile.children.cycles-pp.simple_lookup
0.00 +0.0 0.00 perf-profile.children.cycles-pp.swake_up_locked
0.00 +0.0 0.00 perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__xstat64
0.00 +0.0 0.00 perf-profile.children.cycles-pp.resched_curr
0.00 +0.0 0.00 perf-profile.children.cycles-pp.tick_sched_do_timer
0.00 +0.0 0.00 perf-profile.children.cycles-pp.irqtime_account_process_tick
0.00 +0.0 0.00 perf-profile.children.cycles-pp.rcu_cblist_dequeue
0.00 +0.0 0.00 perf-profile.children.cycles-pp.sys_imageblit
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__kernel_read
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__d_alloc
0.00 +0.0 0.00 perf-profile.children.cycles-pp.security_file_permission
0.00 +0.0 0.00 perf-profile.children.cycles-pp.drm_fb_helper_sys_imageblit
0.00 +0.0 0.00 perf-profile.children.cycles-pp.igb_poll
0.00 +0.0 0.00 perf-profile.children.cycles-pp.shift_arg_pages
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__close_nocancel
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__fsnotify_parent
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__remove_hrtimer
0.00 +0.0 0.00 perf-profile.children.cycles-pp.unwind_next_frame
0.00 +0.0 0.00 perf-profile.children.cycles-pp.file_update_time
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__x64_sys_close
0.00 +0.0 0.00 perf-profile.children.cycles-pp.mod_sysfs_setup
0.00 +0.0 0.00 perf-profile.children.cycles-pp.pick_link
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__send_signal
0.00 +0.0 0.00 perf-profile.children.cycles-pp.arch_do_signal
0.00 +0.0 0.00 perf-profile.children.cycles-pp.arch_dup_task_struct
0.00 +0.0 0.00 perf-profile.children.cycles-pp.open_exec
0.00 +0.0 0.00 perf-profile.children.cycles-pp.rcu_implicit_dynticks_qs
0.00 +0.0 0.00 perf-profile.children.cycles-pp.trigger_load_balance
0.00 +0.0 0.00 perf-profile.children.cycles-pp.lockref_put_or_lock
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__cgroup_account_cputime_field
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__mmdrop
0.00 +0.0 0.00 perf-profile.children.cycles-pp.ep_poll_callback
0.00 +0.0 0.00 perf-profile.children.cycles-pp.napi_complete_done
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_notify_parent
0.00 +0.0 0.00 perf-profile.children.cycles-pp.create_elf_tables
0.00 +0.0 0.00 perf-profile.children.cycles-pp.swake_up_one
0.00 +0.0 0.00 perf-profile.children.cycles-pp.anon_vma_interval_tree_remove
0.00 +0.0 0.00 perf-profile.children.cycles-pp.strnlen_user
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__hrtimer_next_event_base
0.00 +0.0 0.00 perf-profile.children.cycles-pp.kstat_irqs
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_wp_page
0.00 +0.0 0.00 perf-profile.children.cycles-pp.inode_permission
0.00 +0.0 0.00 perf-profile.children.cycles-pp.allocate_slab
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__kmalloc_node
0.00 +0.0 0.00 perf-profile.children.cycles-pp.getname_flags
0.00 +0.0 0.00 perf-profile.children.cycles-pp.set_pageblock_migratetype
0.00 +0.0 0.00 perf-profile.children.cycles-pp.handle_edge_irq
0.00 +0.0 0.00 perf-profile.children.cycles-pp.kstat_irqs_usr
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__switch_to_asm
0.00 +0.0 0.00 perf-profile.children.cycles-pp.record__pushfn
0.00 +0.0 0.00 perf-profile.children.cycles-pp.put_task_stack
0.00 +0.0 0.00 perf-profile.children.cycles-pp.put_prev_entity
0.00 +0.0 0.00 perf-profile.children.cycles-pp.current_time
0.00 +0.0 0.00 perf-profile.children.cycles-pp._copy_to_user
0.00 +0.0 0.00 perf-profile.children.cycles-pp.security_bprm_creds_for_exec
0.00 +0.0 0.00 perf-profile.children.cycles-pp._IO_setvbuf
0.00 +0.0 0.00 perf-profile.children.cycles-pp.asm_sysvec_call_function
0.00 +0.0 0.00 perf-profile.children.cycles-pp.gro_normal_list
0.00 +0.0 0.00 perf-profile.children.cycles-pp.netif_receive_skb_list_internal
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__netif_receive_skb_list_core
0.00 +0.0 0.00 perf-profile.children.cycles-pp.ip_list_rcv
0.00 +0.0 0.00 perf-profile.children.cycles-pp.signal_wake_up_state
0.00 +0.0 0.00 perf-profile.children.cycles-pp.sock_sendmsg
0.00 +0.0 0.00 perf-profile.children.cycles-pp.wake_up_q
0.00 +0.0 0.00 perf-profile.children.cycles-pp.rwsem_down_read_slowpath
0.00 +0.0 0.00 perf-profile.children.cycles-pp.rwsem_optimistic_spin
0.00 +0.0 0.00 perf-profile.children.cycles-pp.rwsem_spin_on_owner
0.00 +0.0 0.00 perf-profile.children.cycles-pp.pmdp_huge_clear_flush
0.00 +0.0 0.00 perf-profile.children.cycles-pp.on_each_cpu_cond_mask
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__change_page_attr_set_clr
0.00 +0.0 0.00 perf-profile.children.cycles-pp.smp_call_function_single
0.00 +0.0 0.00 perf-profile.children.cycles-pp.pgd_free
0.00 +0.0 0.00 perf-profile.children.cycles-pp.schedule_idle
0.00 +0.0 0.00 perf-profile.children.cycles-pp.resolve_symbol
0.00 +0.0 0.00 perf-profile.children.cycles-pp.set_next_entity
0.00 +0.0 0.00 perf-profile.children.cycles-pp.flush_smp_call_function_queue
0.00 +0.0 0.00 perf-profile.children.cycles-pp.sysvec_call_function
0.00 +0.0 0.00 perf-profile.children.cycles-pp.timekeeping_advance
0.00 +0.0 0.00 perf-profile.children.cycles-pp.note_gp_changes
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__sysvec_call_function
0.00 +0.0 0.00 perf-profile.children.cycles-pp.vsnprintf
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_task_dead
0.00 +0.0 0.00 perf-profile.children.cycles-pp.devkmsg_read
0.00 +0.0 0.00 perf-profile.children.cycles-pp.purge_fragmented_blocks_allcpus
0.00 +0.0 0.00 perf-profile.children.cycles-pp.prepare_creds
0.00 +0.0 0.00 perf-profile.children.cycles-pp.sendmsg
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__sys_sendmsg
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__acct_update_integrals
0.00 +0.0 0.00 perf-profile.children.cycles-pp.___sys_sendmsg
0.00 +0.0 0.00 perf-profile.children.cycles-pp.____sys_sendmsg
0.00 +0.0 0.00 perf-profile.children.cycles-pp.kobject_uevent_env
0.00 +0.0 0.00 perf-profile.children.cycles-pp.mem_cgroup_uncharge_list
0.00 +0.0 0.00 perf-profile.children.cycles-pp.lockref_get_not_dead
0.00 +0.0 0.00 perf-profile.children.cycles-pp.update_cfs_rq_h_load
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__alloc_fd
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__anon_vma_prepare
0.00 +0.0 0.00 perf-profile.children.cycles-pp.security_file_alloc
0.00 +0.0 0.00 perf-profile.children.cycles-pp.xas_create_range
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__calc_delta
0.00 +0.0 0.00 perf-profile.children.cycles-pp.apparmor_file_alloc_security
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__check_object_size
0.00 +0.0 0.00 perf-profile.children.cycles-pp.xas_create
0.00 +0.0 0.00 perf-profile.children.cycles-pp.uncharge_batch
0.00 +0.0 0.00 perf-profile.children.cycles-pp.preempt_schedule_common
0.00 +0.0 0.00 perf-profile.children.cycles-pp.epoll_wait
0.00 +0.0 0.00 perf-profile.children.cycles-pp.file_free_rcu
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__d_lookup_done
0.00 +0.0 0.00 perf-profile.children.cycles-pp.strncpy_from_user
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__x64_sys_ioctl
0.00 +0.0 0.00 perf-profile.children.cycles-pp.queue_work_on
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__memcg_kmem_uncharge_page
0.00 +0.0 0.00 perf-profile.children.cycles-pp.set_memory_nx
0.00 +0.0 0.00 perf-profile.children.cycles-pp.osq_unlock
0.00 +0.0 0.00 perf-profile.children.cycles-pp.update_min_vruntime
0.00 +0.0 0.00 perf-profile.children.cycles-pp.kfree
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__kmalloc
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__setup_rt_frame
0.00 +0.0 0.00 perf-profile.children.cycles-pp.proc_invalidate_siblings_dcache
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__close_fd
0.00 +0.0 0.00 perf-profile.children.cycles-pp.netlink_sendmsg
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__free_pages_ok
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__x64_sys_epoll_wait
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_epoll_wait
0.00 +0.0 0.00 perf-profile.children.cycles-pp.ep_poll
0.00 +0.0 0.00 perf-profile.children.cycles-pp.netlink_broadcast_filtered
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__munmap
0.00 +0.0 0.00 perf-profile.children.cycles-pp.ioctl
0.00 +0.0 0.00 perf-profile.children.cycles-pp.down_read
0.00 +0.0 0.00 perf-profile.children.cycles-pp.perf_event_task_tick
0.00 +0.0 0.00 perf-profile.children.cycles-pp.strcmp
0.00 +0.0 0.00 perf-profile.children.cycles-pp.put_cred_rcu
0.00 +0.0 0.00 perf-profile.children.cycles-pp.pick_file
0.00 +0.0 0.00 perf-profile.children.cycles-pp.security_task_getsecid
0.00 +0.0 0.00 perf-profile.children.cycles-pp.enqueue_hrtimer
0.00 +0.0 0.00 perf-profile.children.cycles-pp.generic_update_time
0.00 +0.0 0.00 perf-profile.children.cycles-pp.find_symbol
0.00 +0.0 0.00 perf-profile.children.cycles-pp.each_symbol_section
0.00 +0.0 0.00 perf-profile.children.cycles-pp.find_exported_symbol_in_section
0.00 +0.0 0.00 perf-profile.children.cycles-pp.bsearch
0.00 +0.0 0.00 perf-profile.children.cycles-pp.mod_sysfs_teardown
0.00 +0.0 0.00 perf-profile.children.cycles-pp._exit
0.00 +0.0 0.00 perf-profile.children.cycles-pp.recvmsg
0.00 +0.0 0.00 perf-profile.children.cycles-pp.account_process_tick
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__mark_inode_dirty
0.00 +0.0 0.00 perf-profile.children.cycles-pp.vfprintf
0.00 +0.0 0.00 perf-profile.children.cycles-pp.timerqueue_add
0.00 +0.0 0.00 perf-profile.children.cycles-pp.timerqueue_del
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__put_anon_vma
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__put_task_struct
0.00 +0.0 0.00 perf-profile.children.cycles-pp.sock_def_readable
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__sys_recvmsg
0.00 +0.0 0.00 perf-profile.children.cycles-pp.___sys_recvmsg
0.00 +0.0 0.00 perf-profile.children.cycles-pp.____sys_recvmsg
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__x64_sys_pipe
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__do_pipe_flags
0.00 +0.0 0.00 perf-profile.children.cycles-pp.complete
0.00 +0.0 0.00 perf-profile.children.cycles-pp.d_invalidate
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_pipe2
0.00 +0.0 0.00 perf-profile.children.cycles-pp.fifo_open
0.00 +0.0 0.00 perf-profile.children.cycles-pp.kernfs_remove_by_name_ns
0.00 +0.0 0.00 perf-profile.children.cycles-pp.ptep_clear_flush
0.00 +0.0 0.00 perf-profile.children.cycles-pp.shrink_dcache_parent
0.00 +0.0 0.00 perf-profile.children.cycles-pp.xas_alloc
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__pipe
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__hrtimer_get_next_event
0.00 +0.0 0.00 perf-profile.children.cycles-pp.path_init
0.00 +0.0 0.00 perf-profile.children.cycles-pp.apparmor_task_getsecid
0.00 +0.0 0.00 perf-profile.children.cycles-pp.perf_poll
0.00 +0.0 0.00 perf-profile.children.cycles-pp.create_pipe_files
0.00 +0.0 0.00 perf-profile.children.cycles-pp.wait_for_completion
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__libc_read
0.00 +0.0 0.00 perf-profile.children.cycles-pp.kmsg_read
0.00 +0.0 0.00 perf-profile.children.cycles-pp.do_syslog
0.00 +0.0 0.00 perf-profile.children.cycles-pp.it_real_fn
0.00 +0.0 0.00 perf-profile.children.cycles-pp.kill_pid_info
0.00 +0.0 0.00 perf-profile.children.cycles-pp.sysfs_remove_group
0.00 +0.0 0.00 perf-profile.children.cycles-pp.access
0.00 +0.0 0.00 perf-profile.children.cycles-pp.memset_erms
0.00 +0.0 0.00 perf-profile.children.cycles-pp.rb_erase
0.00 +0.0 0.00 perf-profile.children.cycles-pp.vma_policy_mof
0.00 +0.0 0.00 perf-profile.children.cycles-pp.copy_fpstate_to_sigframe
0.00 +0.0 0.00 perf-profile.children.cycles-pp.page_add_new_anon_rmap
0.00 +0.0 0.00 perf-profile.children.cycles-pp.copy_page_to_iter
0.00 +0.0 0.00 perf-profile.children.cycles-pp.page_get_link
0.00 +0.0 0.00 perf-profile.children.cycles-pp.queue_delayed_work_on
0.00 +0.0 0.00 perf-profile.children.cycles-pp.security_file_open
0.00 +0.0 0.00 perf-profile.children.cycles-pp.perf_ioctl
0.00 +0.0 0.00 perf-profile.children.cycles-pp.wait_for_partner
0.00 +0.0 0.00 perf-profile.children.cycles-pp.__do_sys_newfstat
0.00 +0.0 0.00 perf-profile.children.cycles-pp.pagecache_get_page
0.00 +0.0 0.00 perf-profile.children.cycles-pp.remove_files
0.00 +0.0 0.00 perf-profile.children.cycles-pp.netlink_broadcast
0.05 ± 88% +0.0 0.07 ± 14% perf-profile.children.cycles-pp.get_partial_node
0.26 ± 25% +0.0 0.29 ± 7% perf-profile.children.cycles-pp.__vma_rb_erase
0.20 ± 15% +0.0 0.24 ± 9% perf-profile.children.cycles-pp.follow_page
0.02 ±123% +0.0 0.06 ± 19% perf-profile.children.cycles-pp.clockevents_program_event
0.00 +0.1 0.05 perf-profile.children.cycles-pp.update_curr
0.00 +0.1 0.06 ± 13% perf-profile.children.cycles-pp.follow_pmd_mask
0.01 ±200% +0.1 0.07 ± 17% perf-profile.children.cycles-pp.ktime_get
0.00 +0.1 0.07 ± 6% perf-profile.children.cycles-pp.update_load_avg
0.29 ± 21% +0.1 0.37 ± 5% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.23 ± 23% +0.1 0.32 perf-profile.children.cycles-pp._raw_spin_lock
0.21 ± 22% +0.1 0.31 ± 5% perf-profile.children.cycles-pp.vma_migratable
0.25 ± 22% +0.1 0.35 ± 6% perf-profile.children.cycles-pp.task_work_run
0.25 ± 20% +0.1 0.35 ± 6% perf-profile.children.cycles-pp.task_numa_work
0.08 ± 13% +0.1 0.20 ± 2% perf-profile.children.cycles-pp.task_tick_fair
0.10 ± 11% +0.2 0.29 ± 3% perf-profile.children.cycles-pp.scheduler_tick
1.86 ± 24% +0.2 2.09 ± 7% perf-profile.children.cycles-pp.find_vma
0.14 ± 13% +0.3 0.44 ± 2% perf-profile.children.cycles-pp.update_process_times
0.14 ± 13% +0.3 0.44 ± 2% perf-profile.children.cycles-pp.tick_sched_handle
1.36 ± 24% +0.3 1.68 ± 9% perf-profile.children.cycles-pp.find_vma_prev
0.15 ± 13% +0.3 0.48 ± 2% perf-profile.children.cycles-pp.tick_sched_timer
0.16 ± 12% +0.4 0.54 ± 2% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.23 ± 16% +0.5 0.69 ± 2% perf-profile.children.cycles-pp.hrtimer_interrupt
0.24 ± 16% +0.5 0.70 ± 3% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.35 ± 18% +0.5 0.83 ± 2% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.26 ± 14% +0.5 0.77 ± 2% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.19 ± 10% +0.5 0.70 ± 2% perf-profile.children.cycles-pp.asm_call_sysvec_on_stack
98.08 +1.6 99.73 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
96.94 +2.4 99.32 perf-profile.children.cycles-pp.do_syscall_64
0.00 +8.6 8.60 perf-profile.children.cycles-pp.xas_find
3.68 ± 22% +86.1 89.79 perf-profile.children.cycles-pp.__x64_sys_madvise
3.66 ± 22% +86.1 89.78 perf-profile.children.cycles-pp.do_madvise
80.11 ± 5% -78.0 2.12 ± 53% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.83 ±123% -0.8 0.00 perf-profile.self.cycles-pp.syscall_exit_to_user_mode
0.57 ± 33% -0.4 0.14 ± 29% perf-profile.self.cycles-pp.mmap_region
0.81 ± 25% -0.3 0.49 ± 5% perf-profile.self.cycles-pp.kmem_cache_free
0.55 ± 23% -0.3 0.28 ± 11% perf-profile.self.cycles-pp.zap_pte_range
0.35 ± 23% -0.2 0.12 ± 4% perf-profile.self.cycles-pp.kmem_cache_alloc
0.48 ± 8% -0.2 0.26 ± 11% perf-profile.self.cycles-pp.release_pages
0.22 ± 23% -0.2 0.00 perf-profile.self.cycles-pp.native_irq_return_iret
0.17 ± 5% -0.2 0.00 perf-profile.self.cycles-pp.mem_cgroup_page_lruvec
0.34 ± 29% -0.2 0.18 ± 2% perf-profile.self.cycles-pp.__vma_link_rb
0.48 ± 20% -0.2 0.33 ± 4% perf-profile.self.cycles-pp.__vma_adjust
0.15 ± 24% -0.1 0.00 perf-profile.self.cycles-pp.sync_regs
0.24 ± 21% -0.1 0.10 ± 4% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.14 ±129% -0.1 0.00 perf-profile.self.cycles-pp.irqentry_exit_to_user_mode
0.20 ± 4% -0.1 0.06 ± 19% perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.54 ± 26% -0.1 0.41 ± 6% perf-profile.self.cycles-pp.vma_interval_tree_remove
0.26 ± 4% -0.1 0.13 ± 24% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.24 ± 4% -0.1 0.11 ± 22% perf-profile.self.cycles-pp.mem_cgroup_update_lru_size
0.13 ± 9% -0.1 0.00 perf-profile.self.cycles-pp.__mod_node_page_state
0.13 ± 20% -0.1 0.00 perf-profile.self.cycles-pp.xas_load
0.27 ± 17% -0.1 0.14 ± 6% perf-profile.self.cycles-pp.vm_area_dup
0.27 ± 21% -0.1 0.15 ± 5% perf-profile.self.cycles-pp.native_flush_tlb_one_user
0.18 ± 22% -0.1 0.06 ± 7% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
0.11 ± 13% -0.1 0.00 perf-profile.self.cycles-pp.follow_page_pte
0.18 ± 18% -0.1 0.07 ± 7% perf-profile.self.cycles-pp.__mod_memcg_state
0.19 ± 22% -0.1 0.09 ± 5% perf-profile.self.cycles-pp.___might_sleep
0.17 ± 20% -0.1 0.06 ± 7% perf-profile.self.cycles-pp.__rb_insert_augmented
1.39 ± 21% -0.1 1.28 ± 7% perf-profile.self.cycles-pp.vma_interval_tree_insert
0.16 ± 6% -0.1 0.06 ± 23% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.16 ± 33% -0.1 0.06 ± 7% perf-profile.self.cycles-pp.get_obj_cgroup_from_current
0.09 ± 21% -0.1 0.00 perf-profile.self.cycles-pp.__handle_mm_fault
0.09 ± 4% -0.1 0.00 perf-profile.self.cycles-pp.lru_cache_add
0.09 ± 7% -0.1 0.00 perf-profile.self.cycles-pp.isolate_lru_page
0.14 ± 20% -0.1 0.05 perf-profile.self.cycles-pp.__entry_text_start
0.19 ± 26% -0.1 0.10 ± 8% perf-profile.self.cycles-pp.page_counter_cancel
0.09 ± 18% -0.1 0.00 perf-profile.self.cycles-pp.refill_obj_stock
0.08 ± 20% -0.1 0.00 perf-profile.self.cycles-pp.down_write_killable
0.13 ± 22% -0.1 0.05 perf-profile.self.cycles-pp.up_write
0.08 ± 33% -0.1 0.00 perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.08 ± 22% -0.1 0.00 perf-profile.self.cycles-pp.page_remove_rmap
0.08 ± 13% -0.1 0.00 perf-profile.self.cycles-pp.filemap_map_pages
0.08 ± 20% -0.1 0.00 perf-profile.self.cycles-pp.__might_sleep
0.17 ± 21% -0.1 0.10 ± 9% perf-profile.self.cycles-pp.__split_vma
0.07 ± 18% -0.1 0.00 perf-profile.self.cycles-pp.handle_mm_fault
0.07 ± 18% -0.1 0.00 perf-profile.self.cycles-pp.page_add_file_rmap
0.07 ± 23% -0.1 0.00 perf-profile.self.cycles-pp.find_get_entry
0.07 ± 24% -0.1 0.00 perf-profile.self.cycles-pp.syscall_enter_from_user_mode
0.07 ± 18% -0.1 0.00 perf-profile.self.cycles-pp.__slab_free
0.10 ± 29% -0.1 0.03 ± 70% perf-profile.self.cycles-pp.___slab_alloc
0.10 ± 23% -0.1 0.03 ± 70% perf-profile.self.cycles-pp.vmacache_find
0.18 ± 4% -0.1 0.12 ± 18% perf-profile.self.cycles-pp.__list_add_valid
0.06 ± 18% -0.1 0.00 perf-profile.self.cycles-pp.vma_interval_tree_augment_rotate
0.06 ± 14% -0.1 0.00 perf-profile.self.cycles-pp.obj_cgroup_charge
0.16 ± 27% -0.1 0.10 ± 8% perf-profile.self.cycles-pp.__rb_erase_color
0.16 ± 17% -0.1 0.10 ± 19% perf-profile.self.cycles-pp.__munlock_pagevec
0.05 ± 53% -0.1 0.00 perf-profile.self.cycles-pp.mark_page_accessed
0.28 ± 7% -0.1 0.23 ± 13% perf-profile.self.cycles-pp.workingset_age_nonresident
0.11 ± 15% -0.0 0.06 ± 8% perf-profile.self.cycles-pp.free_unref_page_list
0.05 ± 52% -0.0 0.00 perf-profile.self.cycles-pp.page_mapping
0.10 ± 28% -0.0 0.06 ± 16% perf-profile.self.cycles-pp.vma_merge
0.04 ± 51% -0.0 0.00 perf-profile.self.cycles-pp.lru_add_drain_cpu
0.04 ± 83% -0.0 0.00 perf-profile.self.cycles-pp.down_write
0.04 ± 82% -0.0 0.00 perf-profile.self.cycles-pp.flush_tlb_mm_range
0.04 ± 50% -0.0 0.00 perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.04 ± 83% -0.0 0.00 perf-profile.self.cycles-pp.userfaultfd_unmap_prep
0.23 ± 24% -0.0 0.19 ± 4% perf-profile.self.cycles-pp.__do_munmap
0.03 ± 82% -0.0 0.00 perf-profile.self.cycles-pp.up_read
0.03 ± 82% -0.0 0.00 perf-profile.self.cycles-pp.tlb_gather_mmu
0.03 ± 81% -0.0 0.00 perf-profile.self.cycles-pp.follow_page_mask
0.03 ±123% -0.0 0.00 perf-profile.self.cycles-pp.vma_gap_callbacks_rotate
0.13 ± 20% -0.0 0.10 ± 16% perf-profile.self.cycles-pp.__list_del_entry_valid
0.02 ±125% -0.0 0.00 perf-profile.self.cycles-pp.perf_iterate_sb
0.02 ±125% -0.0 0.00 perf-profile.self.cycles-pp.memcpy_erms
0.02 ±122% -0.0 0.00 perf-profile.self.cycles-pp.downgrade_write
0.02 ±123% -0.0 0.00 perf-profile.self.cycles-pp._cond_resched
0.02 ±123% -0.0 0.00 perf-profile.self.cycles-pp.fput_many
0.02 ±123% -0.0 0.00 perf-profile.self.cycles-pp.page_counter_try_charge
0.02 ±123% -0.0 0.00 perf-profile.self.cycles-pp.__count_memcg_events
0.02 ±122% -0.0 0.00 perf-profile.self.cycles-pp.alloc_set_pte
0.02 ±122% -0.0 0.00 perf-profile.self.cycles-pp.__set_page_dirty_no_writeback
0.15 ± 9% -0.0 0.13 ± 16% perf-profile.self.cycles-pp.__activate_page
0.01 ±200% -0.0 0.00 perf-profile.self.cycles-pp.common_file_perm
0.01 ±200% -0.0 0.00 perf-profile.self.cycles-pp.do_mmap
0.01 ±200% -0.0 0.00 perf-profile.self.cycles-pp.asm_exc_page_fault
0.01 ±200% -0.0 0.00 perf-profile.self.cycles-pp.do_user_addr_fault
0.01 ±200% -0.0 0.00 perf-profile.self.cycles-pp.exit_to_user_mode_prepare
0.01 ±200% -0.0 0.00 perf-profile.self.cycles-pp.perf_event_mmap
0.01 ±200% -0.0 0.00 perf-profile.self.cycles-pp.prepend_name
0.01 ±200% -0.0 0.00 perf-profile.self.cycles-pp.__fget_files
0.01 ±200% -0.0 0.00 perf-profile.self.cycles-pp.unlock_page
0.01 ±200% -0.0 0.00 perf-profile.self.cycles-pp.flush_tlb_func_common
0.01 ±200% -0.0 0.00 perf-profile.self.cycles-pp.tlb_finish_mmu
0.01 ±200% -0.0 0.00 perf-profile.self.cycles-pp._find_next_bit
0.01 ±200% -0.0 0.00 perf-profile.self.cycles-pp.__remove_shared_vm_struct
0.08 ± 28% -0.0 0.07 perf-profile.self.cycles-pp.vmacache_update
0.08 ± 13% -0.0 0.07 ± 11% perf-profile.self.cycles-pp.try_grab_page
0.00 +0.0 0.00 perf-profile.self.cycles-pp.memmap_init_zone_device
0.00 +0.0 0.00 perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.00 +0.0 0.00 perf-profile.self.cycles-pp.osq_lock
0.00 +0.0 0.00 perf-profile.self.cycles-pp.shmem_write_end
0.00 +0.0 0.00 perf-profile.self.cycles-pp.copy_page
0.00 +0.0 0.00 perf-profile.self.cycles-pp.smp_call_function_many_cond
0.00 +0.0 0.00 perf-profile.self.cycles-pp.ktime_get_update_offsets_now
0.00 +0.0 0.00 perf-profile.self.cycles-pp.perf_mmap__read_head
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__pthread_enable_asynccancel
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__get_user_nocheck_1
0.00 +0.0 0.00 perf-profile.self.cycles-pp.mutex_unlock
0.00 +0.0 0.00 perf-profile.self.cycles-pp.hrtimer_active
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__pthread_disable_asynccancel
0.00 +0.0 0.00 perf-profile.self.cycles-pp.intel_idle
0.00 +0.0 0.00 perf-profile.self.cycles-pp.mutex_lock
0.00 +0.0 0.00 perf-profile.self.cycles-pp.io_serial_in
0.00 +0.0 0.00 perf-profile.self.cycles-pp.find_lock_entry
0.00 +0.0 0.00 perf-profile.self.cycles-pp.clear_page_erms
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__init_single_page
0.00 +0.0 0.00 perf-profile.self.cycles-pp.set_pfnblock_flags_mask
0.00 +0.0 0.00 perf-profile.self.cycles-pp.update_curr
0.00 +0.0 0.00 perf-profile.self.cycles-pp.vprintk_emit
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__update_load_avg_se
0.00 +0.0 0.00 perf-profile.self.cycles-pp.find_idlest_group
0.00 +0.0 0.00 perf-profile.self.cycles-pp.update_load_avg
0.00 +0.0 0.00 perf-profile.self.cycles-pp.lapic_next_deadline
0.00 +0.0 0.00 perf-profile.self.cycles-pp.perf_mmap_fault
0.00 +0.0 0.00 perf-profile.self.cycles-pp.down_read_trylock
0.00 +0.0 0.00 perf-profile.self.cycles-pp.change_p4d_range
0.00 +0.0 0.00 perf-profile.self.cycles-pp.update_sd_lb_stats
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.00 +0.0 0.00 perf-profile.self.cycles-pp.task_tick_fair
0.00 +0.0 0.00 perf-profile.self.cycles-pp.change_protection
0.00 +0.0 0.00 perf-profile.self.cycles-pp.tick_sched_timer
0.00 +0.0 0.00 perf-profile.self.cycles-pp.update_cfs_group
0.00 +0.0 0.00 perf-profile.self.cycles-pp.mutex_spin_on_owner
0.00 +0.0 0.00 perf-profile.self.cycles-pp.arch_scale_freq_tick
0.00 +0.0 0.00 perf-profile.self.cycles-pp.available_idle_cpu
0.00 +0.0 0.00 perf-profile.self.cycles-pp.run_local_timers
0.00 +0.0 0.00 perf-profile.self.cycles-pp.select_idle_sibling
0.00 +0.0 0.00 perf-profile.self.cycles-pp._raw_spin_trylock
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__sched_text_start
0.00 +0.0 0.00 perf-profile.self.cycles-pp.shmem_add_to_page_cache
0.00 +0.0 0.00 perf-profile.self.cycles-pp.rcu_sched_clock_irq
0.00 +0.0 0.00 perf-profile.self.cycles-pp.update_irq_load_avg
0.00 +0.0 0.00 perf-profile.self.cycles-pp.sum_zone_numa_state
0.00 +0.0 0.00 perf-profile.self.cycles-pp.perf_mmap_to_page
0.00 +0.0 0.00 perf-profile.self.cycles-pp.sum_vm_events
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__switch_to
0.00 +0.0 0.00 perf-profile.self.cycles-pp._dl_addr
0.00 +0.0 0.00 perf-profile.self.cycles-pp.irqtime_account_irq
0.00 +0.0 0.00 perf-profile.self.cycles-pp.read_tsc
0.00 +0.0 0.00 perf-profile.self.cycles-pp.trace_module_notify
0.00 +0.0 0.00 perf-profile.self.cycles-pp.delay_tsc
0.00 +0.0 0.00 perf-profile.self.cycles-pp.update_nohz_stats
0.00 +0.0 0.00 perf-profile.self.cycles-pp.native_sched_clock
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__alloc_file
0.00 +0.0 0.00 perf-profile.self.cycles-pp.rmqueue_bulk
0.00 +0.0 0.00 perf-profile.self.cycles-pp.switch_mm_irqs_off
0.00 +0.0 0.00 perf-profile.self.cycles-pp.calc_global_load_tick
0.00 +0.0 0.00 perf-profile.self.cycles-pp.llist_add_batch
0.00 +0.0 0.00 perf-profile.self.cycles-pp.part_stat_read_all
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__do_fault
0.00 +0.0 0.00 perf-profile.self.cycles-pp.update_blocked_averages
0.00 +0.0 0.00 perf-profile.self.cycles-pp.hrtimer_interrupt
0.00 +0.0 0.00 perf-profile.self.cycles-pp.load_balance
0.00 +0.0 0.00 perf-profile.self.cycles-pp.reweight_entity
0.00 +0.0 0.00 perf-profile.self.cycles-pp.switch_fpu_return
0.00 +0.0 0.00 perf-profile.self.cycles-pp.blk_mq_queue_tag_busy_iter
0.00 +0.0 0.00 perf-profile.self.cycles-pp.update_rq_clock
0.00 +0.0 0.00 perf-profile.self.cycles-pp.account_user_time
0.00 +0.0 0.00 perf-profile.self.cycles-pp.try_charge
0.00 +0.0 0.00 perf-profile.self.cycles-pp.xas_find_conflict
0.00 +0.0 0.00 perf-profile.self.cycles-pp.cpuacct_charge
0.00 +0.0 0.00 perf-profile.self.cycles-pp.rcu_segcblist_enqueue
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__legitimize_mnt
0.00 +0.0 0.00 perf-profile.self.cycles-pp.collapse_huge_page
0.00 +0.0 0.00 perf-profile.self.cycles-pp.record__mmap_read_evlist
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__d_lookup_rcu
0.00 +0.0 0.00 perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.00 +0.0 0.00 perf-profile.self.cycles-pp.resched_curr
0.00 +0.0 0.00 perf-profile.self.cycles-pp.irqtime_account_process_tick
0.00 +0.0 0.00 perf-profile.self.cycles-pp.rcu_cblist_dequeue
0.00 +0.0 0.00 perf-profile.self.cycles-pp.sys_imageblit
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__fsnotify_parent
0.00 +0.0 0.00 perf-profile.self.cycles-pp.asm_sysvec_apic_timer_interrupt
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__hrtimer_run_queues
0.00 +0.0 0.00 perf-profile.self.cycles-pp.scheduler_tick
0.00 +0.0 0.00 perf-profile.self.cycles-pp.khugepaged_scan_pmd
0.00 +0.0 0.00 perf-profile.self.cycles-pp.enqueue_entity
0.00 +0.0 0.00 perf-profile.self.cycles-pp.refresh_cpu_vm_stats
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__accumulate_pelt_segments
0.00 +0.0 0.00 perf-profile.self.cycles-pp.rcu_implicit_dynticks_qs
0.00 +0.0 0.00 perf-profile.self.cycles-pp.trigger_load_balance
0.00 +0.0 0.00 perf-profile.self.cycles-pp.anon_vma_interval_tree_remove
0.00 +0.0 0.00 perf-profile.self.cycles-pp.strnlen_user
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__hrtimer_next_event_base
0.00 +0.0 0.00 perf-profile.self.cycles-pp.kstat_irqs
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__switch_to_asm
0.00 +0.0 0.00 perf-profile.self.cycles-pp.rwsem_spin_on_owner
0.00 +0.0 0.00 perf-profile.self.cycles-pp.smp_call_function_single
0.00 +0.0 0.00 perf-profile.self.cycles-pp._vm_unmap_aliases
0.00 +0.0 0.00 perf-profile.self.cycles-pp.prep_new_page
0.00 +0.0 0.00 perf-profile.self.cycles-pp.purge_fragmented_blocks_allcpus
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__acct_update_integrals
0.00 +0.0 0.00 perf-profile.self.cycles-pp.lockref_get_not_dead
0.00 +0.0 0.00 perf-profile.self.cycles-pp.update_cfs_rq_h_load
0.00 +0.0 0.00 perf-profile.self.cycles-pp.do_dentry_open
0.00 +0.0 0.00 perf-profile.self.cycles-pp.copy_pte_range
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__calc_delta
0.00 +0.0 0.00 perf-profile.self.cycles-pp.apparmor_file_alloc_security
0.00 +0.0 0.00 perf-profile.self.cycles-pp.select_task_rq_fair
0.00 +0.0 0.00 perf-profile.self.cycles-pp.file_free_rcu
0.00 +0.0 0.00 perf-profile.self.cycles-pp.osq_unlock
0.00 +0.0 0.00 perf-profile.self.cycles-pp.update_min_vruntime
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__alloc_pages_nodemask
0.00 +0.0 0.00 perf-profile.self.cycles-pp.stack_trace_save_tsk
0.00 +0.0 0.00 perf-profile.self.cycles-pp.timekeeping_advance
0.00 +0.0 0.00 perf-profile.self.cycles-pp.down_read
0.00 +0.0 0.00 perf-profile.self.cycles-pp.perf_event_task_tick
0.00 +0.0 0.00 perf-profile.self.cycles-pp.strcmp
0.00 +0.0 0.00 perf-profile.self.cycles-pp.account_process_tick
0.00 +0.0 0.00 perf-profile.self.cycles-pp.get_page_from_freelist
0.00 +0.0 0.00 perf-profile.self.cycles-pp.rebalance_domains
0.00 +0.0 0.00 perf-profile.self.cycles-pp.task_numa_work
0.00 +0.0 0.00 perf-profile.self.cycles-pp.pick_next_task_fair
0.00 +0.0 0.00 perf-profile.self.cycles-pp.sched_clock
0.00 +0.0 0.00 perf-profile.self.cycles-pp.flush_smp_call_function_queue
0.00 +0.0 0.00 perf-profile.self.cycles-pp.prepare_creds
0.00 +0.0 0.00 perf-profile.self.cycles-pp.lockref_put_or_lock
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__mark_inode_dirty
0.00 +0.0 0.00 perf-profile.self.cycles-pp.vfprintf
0.00 +0.0 0.00 perf-profile.self.cycles-pp.__hrtimer_get_next_event
0.00 +0.0 0.00 perf-profile.self.cycles-pp.memset_erms
0.26 ± 25% +0.0 0.28 ± 7% perf-profile.self.cycles-pp.__vma_rb_erase
0.11 ± 20% +0.0 0.15 ± 8% perf-profile.self.cycles-pp.unmap_page_range
0.19 ± 21% +0.1 0.24 perf-profile.self.cycles-pp._raw_spin_lock
0.01 ±200% +0.1 0.07 ± 14% perf-profile.self.cycles-pp.ktime_get
0.00 +0.1 0.06 ± 8% perf-profile.self.cycles-pp.follow_pmd_mask
0.21 ± 23% +0.1 0.30 ± 6% perf-profile.self.cycles-pp.vma_migratable
1.67 ± 23% +0.3 1.95 ± 8% perf-profile.self.cycles-pp.find_vma
0.00 +7.9 7.95 perf-profile.self.cycles-pp.xas_find
0.30 ± 23% +77.8 78.12 perf-profile.self.cycles-pp.do_madvise


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Rong Chen


Attachments:
(No filename) (388.65 kB)
config-5.9.0-02740-ge6e88712e43b7 (173.22 kB)
reproduce (761.00 B)
job.yaml (5.50 kB)
job-script (7.88 kB)

2020-10-30 13:21:46

by Matthew Wilcox

Subject: Re: [mm] e6e88712e4: stress-ng.tmpfs.ops_per_sec -69.7% regression

On Fri, Oct 30, 2020 at 03:17:15PM +0800, kernel test robot wrote:
> Details are as below:
> -------------------------------------------------------------------------------------------------->
>
>
> To reproduce:
>
> git clone https://github.com/intel/lkp-tests.git
> cd lkp-tests
> bin/lkp install job.yaml # job file is attached in this email
> bin/lkp run job.yaml

Do you actually test these instructions before you send them out?

hdd_partitions: "/dev/disk/by-id/ata-WDC_WD2500BEKT-00PVMT0_WD-WX11A23L4840-part1"
ssd_partitions: "/dev/nvme1n1p1 /dev/nvme0n1p1"
rootfs_partition: "/dev/disk/by-id/ata-INTEL_SSDSC2CW240A3_CVCV204303WP240CGN-part1"

That's _very_ specific to a given machine. I'm not familiar with
this test, so I don't know what I need to change.

[snipped 4000 lines of gunk]

2020-10-30 14:04:59

by Chen, Rong A

Subject: Re: [mm] e6e88712e4: stress-ng.tmpfs.ops_per_sec -69.7% regression



On 10/30/2020 9:17 PM, Matthew Wilcox wrote:
> On Fri, Oct 30, 2020 at 03:17:15PM +0800, kernel test robot wrote:
>> Details are as below:
>> -------------------------------------------------------------------------------------------------->
>>
>>
>> To reproduce:
>>
>> git clone https://github.com/intel/lkp-tests.git
>> cd lkp-tests
>> bin/lkp install job.yaml # job file is attached in this email
>> bin/lkp run job.yaml
>
> Do you actually test these instructions before you send them out?
>
> hdd_partitions: "/dev/disk/by-id/ata-WDC_WD2500BEKT-00PVMT0_WD-WX11A23L4840-part1"
> ssd_partitions: "/dev/nvme1n1p1 /dev/nvme0n1p1"
> rootfs_partition: "/dev/disk/by-id/ata-INTEL_SSDSC2CW240A3_CVCV204303WP240CGN-part1"
>
> That's _very_ specific to a given machine. I'm not familiar with
> this test, so I don't know what I need to change.


Hi Matthew,

Sorry about that, I copied the job.yaml file from the server; the right
way is to set your own disk partitions in the yaml, please see
https://github.com/intel/lkp-tests#run-your-own-disk-partitions.

There is also a reproduce script attached to the original mail for your
reference.
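
For example, something like this in your job.yaml (the device paths below
are placeholders for illustration only, not values from our setup):

# point these at your own partitions before running "bin/lkp run job.yaml"
hdd_partitions: "/dev/sdb1"
ssd_partitions: "/dev/nvme0n1p1"
rootfs_partition: "/dev/sda1"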

Best Regards,
Rong Chen



>
> [snipped 4000 lines of gunk]
>

2020-10-30 15:06:29

by Matthew Wilcox

Subject: Re: [mm] e6e88712e4: stress-ng.tmpfs.ops_per_sec -69.7% regression

On Fri, Oct 30, 2020 at 10:02:45PM +0800, Chen, Rong A wrote:
> On 10/30/2020 9:17 PM, Matthew Wilcox wrote:
> > On Fri, Oct 30, 2020 at 03:17:15PM +0800, kernel test robot wrote:
> > > Details are as below:
> > > -------------------------------------------------------------------------------------------------->
> > >
> > >
> > > To reproduce:
> > >
> > > git clone https://github.com/intel/lkp-tests.git
> > > cd lkp-tests
> > > bin/lkp install job.yaml # job file is attached in this email
> > > bin/lkp run job.yaml
> >
> > Do you actually test these instructions before you send them out?
> >
> > hdd_partitions: "/dev/disk/by-id/ata-WDC_WD2500BEKT-00PVMT0_WD-WX11A23L4840-part1"
> > ssd_partitions: "/dev/nvme1n1p1 /dev/nvme0n1p1"
> > rootfs_partition: "/dev/disk/by-id/ata-INTEL_SSDSC2CW240A3_CVCV204303WP240CGN-part1"
> >
> > That's _very_ specific to a given machine. I'm not familiar with
> > this test, so I don't know what I need to change.
>
>
> Hi Matthew,
>
> Sorry about that, I copied the job.yaml file from the server; the right
> way is to set your own disk partitions in the yaml, please see
> https://github.com/intel/lkp-tests#run-your-own-disk-partitions.
>
> There is also a reproduce script attached to the original mail for your
> reference.

Can you reproduce this? Here's my results:

# stress-ng "--timeout" "100" "--times" "--verify" "--metrics-brief" "--sequential" "96" "--class" "memory" "--minimize" "--exclude" "spawn,exec,swap,stack,atomic,bad-altstack,bsearch,context,full,heapsort,hsearch,judy,lockbus,lsearch,malloc,matrix-3d,matrix,mcontend,membarrier,memcpy,memfd,memrate,memthrash,mergesort,mincore,null,numa,pipe,pipeherd,qsort,radixsort,remap,resources,rmap,shellsort,skiplist,stackmmap,str,stream,tlb-shootdown,tree,tsearch,vm-addr,vm-rw,vm-segv,vm,wcs,zero,zlib"
stress-ng: info: [7670] disabled 'oom-pipe' as it may hang or reboot the machine (enable it with the --pathological option)
stress-ng: info: [7670] dispatching hogs: 96 tmpfs
stress-ng: info: [7670] successful run completed in 100.23s (1 min, 40.23 secs)
stress-ng: info: [7670] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s
stress-ng: info: [7670] (secs) (secs) (secs) (real time) (usr+sys time)
stress-ng: info: [7670] tmpfs 8216 100.10 368.02 230.89 82.08 13.72
stress-ng: info: [7670] for a 100.23s run time:
stress-ng: info: [7670] 601.38s available CPU time
stress-ng: info: [7670] 368.71s user time ( 61.31%)
stress-ng: info: [7670] 231.55s system time ( 38.50%)
stress-ng: info: [7670] 600.26s total time ( 99.81%)
stress-ng: info: [7670] load average: 78.32 27.87 10.10

2020-10-31 06:16:09

by Li, Philip

Subject: Re: [LKP] Re: [mm] e6e88712e4: stress-ng.tmpfs.ops_per_sec -69.7% regression

On Fri, Oct 30, 2020 at 02:58:35PM +0000, Matthew Wilcox wrote:
> On Fri, Oct 30, 2020 at 10:02:45PM +0800, Chen, Rong A wrote:
> > On 10/30/2020 9:17 PM, Matthew Wilcox wrote:
> > > On Fri, Oct 30, 2020 at 03:17:15PM +0800, kernel test robot wrote:
> > > > Details are as below:
> > > > -------------------------------------------------------------------------------------------------->
> > > >
> > > >
> > > > To reproduce:
> > > >
> > > > git clone https://github.com/intel/lkp-tests.git
> > > > cd lkp-tests
> > > > bin/lkp install job.yaml # job file is attached in this email
> > > > bin/lkp run job.yaml
> > >
> > > Do you actually test these instructions before you send them out?
> > >
> > > hdd_partitions: "/dev/disk/by-id/ata-WDC_WD2500BEKT-00PVMT0_WD-WX11A23L4840-part1"
> > > ssd_partitions: "/dev/nvme1n1p1 /dev/nvme0n1p1"
> > > rootfs_partition: "/dev/disk/by-id/ata-INTEL_SSDSC2CW240A3_CVCV204303WP240CGN-part1"
> > >
> > > That's _very_ specific to a given machine. I'm not familiar with
> > > this test, so I don't know what I need to change.
> >
> >
> > Hi Matthew,
> >
> > Sorry about that, I copied the job.yaml file from the server,
> > the right way to do is to set your disk partitions in the yaml,
> > please see https://github.com/intel/lkp-tests#run-your-own-disk-partitions.
> >
> > there is another reproduce script attached in the original mail
> > for your reference.
>
> Can you reproduce this? Here's my results:
Thanks for the quick check, we will provide an update right after the
weekend. Sorry for any inconvenience on the reproduction side so far; we
need to improve this part.

>
> # stress-ng "--timeout" "100" "--times" "--verify" "--metrics-brief" "--sequential" "96" "--class" "memory" "--minimize" "--exclude" "spawn,exec,swap,stack,atomic,bad-altstack,bsearch,context,full,heapsort,hsearch,judy,lockbus,lsearch,malloc,matrix-3d,matrix,mcontend,membarrier,memcpy,memfd,memrate,memthrash,mergesort,mincore,null,numa,pipe,pipeherd,qsort,radixsort,remap,resources,rmap,shellsort,skiplist,stackmmap,str,stream,tlb-shootdown,tree,tsearch,vm-addr,vm-rw,vm-segv,vm,wcs,zero,zlib"
> stress-ng: info: [7670] disabled 'oom-pipe' as it may hang or reboot the machine (enable it with the --pathological option)
> stress-ng: info: [7670] dispatching hogs: 96 tmpfs
> stress-ng: info: [7670] successful run completed in 100.23s (1 min, 40.23 secs)
> stress-ng: info: [7670] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s
> stress-ng: info: [7670] (secs) (secs) (secs) (real time) (usr+sys time)
> stress-ng: info: [7670] tmpfs 8216 100.10 368.02 230.89 82.08 13.72
> stress-ng: info: [7670] for a 100.23s run time:
> stress-ng: info: [7670] 601.38s available CPU time
> stress-ng: info: [7670] 368.71s user time ( 61.31%)
> stress-ng: info: [7670] 231.55s system time ( 38.50%)
> stress-ng: info: [7670] 600.26s total time ( 99.81%)
> stress-ng: info: [7670] load average: 78.32 27.87 10.10

2020-11-02 05:24:39

by Chen, Rong A

Subject: Re: [mm] e6e88712e4: stress-ng.tmpfs.ops_per_sec -69.7% regression



On 10/30/20 10:58 PM, Matthew Wilcox wrote:
> On Fri, Oct 30, 2020 at 10:02:45PM +0800, Chen, Rong A wrote:
>> On 10/30/2020 9:17 PM, Matthew Wilcox wrote:
>>> On Fri, Oct 30, 2020 at 03:17:15PM +0800, kernel test robot wrote:
>>>> Details are as below:
>>>> -------------------------------------------------------------------------------------------------->
>>>>
>>>>
>>>> To reproduce:
>>>>
>>>> git clone https://github.com/intel/lkp-tests.git
>>>> cd lkp-tests
>>>> bin/lkp install job.yaml # job file is attached in this email
>>>> bin/lkp run job.yaml
>>> Do you actually test these instructions before you send them out?
>>>
>>> hdd_partitions: "/dev/disk/by-id/ata-WDC_WD2500BEKT-00PVMT0_WD-WX11A23L4840-part1"
>>> ssd_partitions: "/dev/nvme1n1p1 /dev/nvme0n1p1"
>>> rootfs_partition: "/dev/disk/by-id/ata-INTEL_SSDSC2CW240A3_CVCV204303WP240CGN-part1"
>>>
>>> That's _very_ specific to a given machine. I'm not familiar with
>>> this test, so I don't know what I need to change.
>>
>> Hi Matthew,
>>
>> Sorry about that, I copied the job.yaml file from the server; the right
>> way is to set your own disk partitions in the yaml, please see
>> https://github.com/intel/lkp-tests#run-your-own-disk-partitions.
>>
>> There is also a reproduce script attached to the original mail for your
>> reference.
> Can you reproduce this? Here's my results:
>
> # stress-ng "--timeout" "100" "--times" "--verify" "--metrics-brief" "--sequential" "96" "--class" "memory" "--minimize" "--exclude" "spawn,exec,swap,stack,atomic,bad-altstack,bsearch,context,full,heapsort,hsearch,judy,lockbus,lsearch,malloc,matrix-3d,matrix,mcontend,membarrier,memcpy,memfd,memrate,memthrash,mergesort,mincore,null,numa,pipe,pipeherd,qsort,radixsort,remap,resources,rmap,shellsort,skiplist,stackmmap,str,stream,tlb-shootdown,tree,tsearch,vm-addr,vm-rw,vm-segv,vm,wcs,zero,zlib"
> stress-ng: info: [7670] disabled 'oom-pipe' as it may hang or reboot the machine (enable it with the --pathological option)
> stress-ng: info: [7670] dispatching hogs: 96 tmpfs
> stress-ng: info: [7670] successful run completed in 100.23s (1 min, 40.23 secs)
> stress-ng: info: [7670] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s
> stress-ng: info: [7670] (secs) (secs) (secs) (real time) (usr+sys time)
> stress-ng: info: [7670] tmpfs 8216 100.10 368.02 230.89 82.08 13.72
> stress-ng: info: [7670] for a 100.23s run time:
> stress-ng: info: [7670] 601.38s available CPU time
> stress-ng: info: [7670] 368.71s user time ( 61.31%)
> stress-ng: info: [7670] 231.55s system time ( 38.50%)
> stress-ng: info: [7670] 600.26s total time ( 99.81%)
> stress-ng: info: [7670] load average: 78.32 27.87 10.10
>

Hi Matthew,

IIUC, yes, we can reproduce it, here is the result from the server:

$ stress-ng --timeout 100 --times --verify --metrics-brief --sequential 96 --class memory --minimize --exclude spawn,exec,swap,stack,atomic,bad-altstack,bsearch,context,full,heapsort,hsearch,judy,lockbus,lsearch,malloc,matrix-3d,matrix,mcontend,membarrier,memcpy,memfd,memrate,memthrash,mergesort,mincore,null,numa,pipe,pipeherd,qsort,radixsort,remap,resources,rmap,shellsort,skiplist,stackmmap,str,stream,tlb-shootdown,tree,tsearch,vm-addr,vm-rw,vm-segv,vm,wcs,zero,zlib

stress-ng: info: [2765] disabled 'oom-pipe' as it may hang or reboot the machine (enable it with the --pathological option)
stress-ng: info: [2765] dispatching hogs: 96 tmpfs
stress-ng: info: [2765] successful run completed in 104.67s (1 min, 44.67 secs)
stress-ng: info: [2765] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s
stress-ng: info: [2765] (secs) (secs) (secs) (real time) (usr+sys time)
stress-ng: info: [2765] tmpfs 363 103.02 622.07 7289.85 3.52 0.05
stress-ng: info: [2765] for a 104.67s run time:
stress-ng: info: [2765] 10047.98s available CPU time
stress-ng: info: [2765] 622.46s user time ( 6.19%)
stress-ng: info: [2765] 7290.66s system time ( 72.56%)
stress-ng: info: [2765] 7913.12s total time ( 78.75%)
stress-ng: info: [2765] load average: 79.62 28.89 10.45

we compared the tmpfs.ops_per_sec: (363 / 103.02) between this commit
and parent commit.

Best Regards,
Rong Chen

2020-11-02 14:26:10

by Matthew Wilcox

Subject: Re: [mm] e6e88712e4: stress-ng.tmpfs.ops_per_sec -69.7% regression

On Mon, Nov 02, 2020 at 01:21:39PM +0800, Rong Chen wrote:
> On 10/30/20 10:58 PM, Matthew Wilcox wrote:
> > Can you reproduce this? Here's my results:
[snipped]
>
> Hi Matthew,
>
> IIUC, yes, we can reproduce it, here is the result from the server:
>
> $ stress-ng --timeout 100 --times --verify --metrics-brief --sequential 96 --class memory --minimize --exclude spawn,exec,swap,stack,atomic,bad-altstack,bsearch,context,full,heapsort,hsearch,judy,lockbus,lsearch,malloc,matrix-3d,matrix,mcontend,membarrier,memcpy,memfd,memrate,memthrash,mergesort,mincore,null,numa,pipe,pipeherd,qsort,radixsort,remap,resources,rmap,shellsort,skiplist,stackmmap,str,stream,tlb-shootdown,tree,tsearch,vm-addr,vm-rw,vm-segv,vm,wcs,zero,zlib
>
> stress-ng: info: [2765] disabled 'oom-pipe' as it may hang or reboot the machine (enable it with the --pathological option)
> stress-ng: info: [2765] dispatching hogs: 96 tmpfs
> stress-ng: info: [2765] successful run completed in 104.67s (1 min, 44.67 secs)
> stress-ng: info: [2765] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s
> stress-ng: info: [2765] (secs) (secs) (secs) (real time) (usr+sys time)
> stress-ng: info: [2765] tmpfs 363 103.02 622.07 7289.85 3.52 0.05
> stress-ng: info: [2765] for a 104.67s run time:
> stress-ng: info: [2765] 10047.98s available CPU time
> stress-ng: info: [2765] 622.46s user time ( 6.19%)
> stress-ng: info: [2765] 7290.66s system time ( 72.56%)
> stress-ng: info: [2765] 7913.12s total time ( 78.75%)
> stress-ng: info: [2765] load average: 79.62 28.89 10.45
>
> we compared the tmpfs.ops_per_sec: (363 / 103.02) between this commit and
> parent commit.

Ah, so this was the 60-70% regression reported in the subject, not the 100%
reported in the body. I'd assumed the latter was the intended message.

I'll have another look at this later today. At first glance, I don't
see why it _should_ regress. It would seem to be doing fewer atomic
operations (avoiding getting the page reference) and walks the XArray more
efficiently. I wonder if it's walking the XArray _too_ efficiently --
holding the rcu read lock for too long. I'll try putting a rescheduling
point in the middle and see what happens.
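
Such a rescheduling point could look roughly like the sketch below
(untested; the 4096 batch size is an arbitrary illustration, and the rest
of the loop body in force_shm_swapin_readahead() is elided):

	unsigned long scanned = 0;

	rcu_read_lock();
	xas_for_each(&xas, page, end_index) {
		/* Periodically drop the RCU read lock so a long walk
		 * does not block rescheduling. */
		if (++scanned % 4096 == 0) {
			xas_pause(&xas);	/* remember the walk position */
			rcu_read_unlock();
			cond_resched();
			rcu_read_lock();
		}
		if (!xa_is_value(page))
			continue;
		/* ... existing swap-entry readahead ... */
	}
	rcu_read_unlock();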

2020-11-06 20:59:56

by Matthew Wilcox

Subject: Re: [mm] e6e88712e4: stress-ng.tmpfs.ops_per_sec -69.7% regression

On Mon, Nov 02, 2020 at 01:21:39PM +0800, Rong Chen wrote:
> we compared the tmpfs.ops_per_sec: (363 / 103.02) between this commit and
> parent commit.

Thanks! I see about a 50% hit on my system, and this patch restores the
performance. Can you verify this works for you?

diff --git a/mm/madvise.c b/mm/madvise.c
index 9b065d412e5f..e602333f8c0d 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -225,7 +225,7 @@ static void force_shm_swapin_readahead(struct vm_area_struct *vma,
 		struct address_space *mapping)
 {
 	XA_STATE(xas, &mapping->i_pages, linear_page_index(vma, start));
-	pgoff_t end_index = end / PAGE_SIZE;
+	pgoff_t end_index = linear_page_index(vma, end + PAGE_SIZE - 1);
 	struct page *page;
 
 	rcu_read_lock();
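
For context: "end" here is a userspace virtual address, so end / PAGE_SIZE
is an index into the address space rather than into the file, and for a
typical mmap address it lies far past the last page of the mapping; the
xas_for_each() walk then covers every page cached for the file rather than
just the madvised range. linear_page_index() turns the address into a
file-relative page-cache index; roughly (paraphrased from
include/linux/pagemap.h, ignoring the hugetlb case):

	static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
						unsigned long address)
	{
		pgoff_t pgoff = (address - vma->vm_start) >> PAGE_SHIFT;

		return pgoff + vma->vm_pgoff;
	}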

2020-11-09 08:14:48

by Xing Zhengjun

Subject: Re: [LKP] Re: [mm] e6e88712e4: stress-ng.tmpfs.ops_per_sec -69.7% regression



On 11/7/2020 4:55 AM, Matthew Wilcox wrote:
> On Mon, Nov 02, 2020 at 01:21:39PM +0800, Rong Chen wrote:
>> we compared the tmpfs.ops_per_sec: (363 / 103.02) between this commit and
>> parent commit.
>
> Thanks! I see about a 50% hit on my system, and this patch restores the
> performance. Can you verify this works for you?
>
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 9b065d412e5f..e602333f8c0d 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -225,7 +225,7 @@ static void force_shm_swapin_readahead(struct vm_area_struct *vma,
>  		struct address_space *mapping)
>  {
>  	XA_STATE(xas, &mapping->i_pages, linear_page_index(vma, start));
> -	pgoff_t end_index = end / PAGE_SIZE;
> +	pgoff_t end_index = linear_page_index(vma, end + PAGE_SIZE - 1);
>  	struct page *page;
>  
>  	rcu_read_lock();
>
I tested the patch; the regression has disappeared.

=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_threads/disk/testtime/class/cpufreq_governor/ucode:

lkp-csl-2sp3/stress-ng/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3/gcc-9/100%/1HDD/100s/memory/performance/0x400002c

commit:
f5df8635c5a3c912919c91be64aa198554b0f9ed
e6e88712e43b7942df451508aafc2f083266f56b
6bc25f0c5e0d55145f7ef087adea2693802a80f3 (this test patch)

f5df8635c5a3c912 e6e88712e43b7942df451508aaf 6bc25f0c5e0d55145f7ef087ade
---------------- --------------------------- ---------------------------
%stddev %change %stddev %change %stddev
\ | \ | \
1198 ± 4% -69.7% 362.67 +3.3% 1238 ± 3% stress-ng.tmpfs.ops
11.62 ± 4% -69.7% 3.52 +3.4% 12.02 ± 3% stress-ng.tmpfs.ops_per_sec



--
Zhengjun Xing