Hi John Johansen,
we reported
"[linux-next:master] [apparmor] 90c436a64a: stress-ng.kcmp.ops_per_sec 383.0% improvement"
in
https://lore.kernel.org/all/[email protected]/
while this commit was in linux-next/master.
Now that the commit has landed in mainline, we have captured a small
regression in another stress-ng test. FYI.
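For context on where the cycles moved: the perf-profile section at the
end of this report shows aa_get_task_label() dropping out of
apparmor_task_kill() while get_task_cred() appears in its place. The
fragment below is only our rough reading of that before/after call path
(paraphrased for illustration, not the literal diff of the commit;
aa_get_newest_cred_label() on the "after" side is our assumption):

	/* before the commit, roughly: look up the target's label directly */
	tl = aa_get_task_label(target);
	/* ... aa_may_signal() permission check ... */
	aa_put_label(tl);

	/* after the commit, roughly: take a cred reference first, then derive the label */
	tc = get_task_cred(target);		/* atomic refcount bump on the target's cred */
	tl = aa_get_newest_cred_label(tc);	/* assumed helper name, not quoted from the tree */
	/* ... aa_may_signal() permission check ... */
	aa_put_label(tl);
	put_cred(tc);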
Hello,
kernel test robot noticed a -8.0% regression of stress-ng.kill.ops_per_sec on:
commit: 90c436a64a6e20482a9a613c47eb4af2e8a5328e ("apparmor: pass cred through to audit info.")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
testcase: stress-ng
test machine: 64 threads 2 sockets Intel(R) Xeon(R) Gold 6346 CPU @ 3.10GHz (Ice Lake) with 256G memory
parameters:
nr_threads: 100%
disk: 1HDD
testtime: 60s
class: interrupt
test: kill
cpufreq_governor: performance
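To get a rough feel for the hot path this stressor hammers, here is a
hypothetical standalone loop (an approximation for illustration only,
not the LKP reproducer; the real setup is in the materials linked
below). Every kill(2) in it enters kill_something_info() ->
security_task_kill() -> apparmor_task_kill(), the chain that dominates
the perf-profile data at the end of this report:

	/* approx_kill_loop.c -- hypothetical micro-loop, not stress-ng itself */
	#include <signal.h>
	#include <unistd.h>

	int main(void)
	{
		pid_t self = getpid();

		/*
		 * Ignore the signal so delivery stays cheap; the LSM
		 * permission check still runs on every call.
		 */
		signal(SIGUSR1, SIG_IGN);
		for (unsigned long i = 0; i < 10000000UL; i++)
			kill(self, SIGUSR1);
		return 0;
	}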
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-lkp/[email protected]
Details are as below:
-------------------------------------------------------------------------------------------------->
The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20231116/[email protected]
=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime:
interrupt/gcc-12/performance/1HDD/x86_64-rhel-8.3/100%/debian-11.1-x86_64-20220510.cgz/lkp-icl-2sp8/kill/stress-ng/60s
commit:
d20f5a1a6e ("apparmor: rename audit_data->label to audit_data->subj_label")
90c436a64a ("apparmor: pass cred through to audit info.")
d20f5a1a6e792d22 90c436a64a6e20482a9a613c47e
---------------- ---------------------------
         %stddev     %change         %stddev
             \          |                \
112.01 +21438.4% 24124 ±200% meminfo.Active(file)
62.80 +4.5% 65.65 turbostat.RAMWatt
15.04 ±100% +4879.0% 748.88 ±114% vmstat.io.bo
82957 ± 2% -7.4% 76795 ± 3% vmstat.system.cs
28.00 +21403.0% 6021 ±200% proc-vmstat.nr_active_file
28.00 +21403.0% 6021 ±200% proc-vmstat.nr_zone_active_file
129482 -4.4% 123826 proc-vmstat.pgactivate
1032 ±100% +4641.1% 48928 ±113% proc-vmstat.pgpgout
16.90 ± 14% +25.5% 21.21 ± 20% sched_debug.cfs_rq:/.removed.runnable_avg.avg
16.89 ± 14% +25.5% 21.21 ± 20% sched_debug.cfs_rq:/.removed.util_avg.avg
247.17 ± 6% +14.0% 281.82 ± 8% sched_debug.cfs_rq:/.util_est_enqueued.avg
34402 ± 2% -21.9% 26867 ± 6% sched_debug.cpu.nr_switches.min
181.33 ± 7% +129.4% 416.00 ± 2% perf-c2c.DRAM.local
1766 ± 4% +140.5% 4249 ± 4% perf-c2c.DRAM.remote
3189 ± 5% +119.0% 6984 perf-c2c.HITM.local
538.00 ± 10% +273.7% 2010 ± 7% perf-c2c.HITM.remote
3727 ± 6% +141.3% 8995 ± 2% perf-c2c.HITM.total
738.58 -8.1% 678.88 stress-ng.kill.kill_calls_per_sec
912564 -8.0% 839201 stress-ng.kill.ops
15208 -8.0% 13985 stress-ng.kill.ops_per_sec
2741058 -7.8% 2527111 stress-ng.time.involuntary_context_switches
2721296 -7.9% 2505879 stress-ng.time.voluntary_context_switches
0.58 ± 7% +35.9% 0.79 ± 6% perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
21.33 ± 18% -50.0% 10.67 ± 32% perf-sched.wait_and_delay.count.__cond_resched.generic_perform_write.generic_file_write_iter.vfs_write.ksys_write
0.01 ± 26% +54.5% 0.01 ± 25% perf-sched.wait_time.avg.ms.__cond_resched.__kmem_cache_alloc_node.__kmalloc_node.proc_sys_call_handler.vfs_read
1.68 ±214% -99.5% 0.01 ± 64% perf-sched.wait_time.avg.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt
0.01 ± 36% +66.0% 0.01 ± 28% perf-sched.wait_time.max.ms.__cond_resched.__kmem_cache_alloc_node.__kmalloc_node.proc_sys_call_handler.vfs_read
8.10 ± 43% -29.7% 5.69 ± 12% perf-sched.wait_time.max.ms.__cond_resched.dput.__fput.task_work_run.exit_to_user_mode_loop
1.64 ± 2% +152.0% 4.13 ± 3% perf-stat.i.MPKI
11.98 +6.8 18.75 ± 2% perf-stat.i.cache-miss-rate%
13072329 ± 2% +145.9% 32141491 ± 4% perf-stat.i.cache-misses
1.082e+08 ± 3% +53.1% 1.657e+08 ± 4% perf-stat.i.cache-references
86654 ± 2% -8.4% 79368 ± 4% perf-stat.i.context-switches
15915 -59.2% 6487 perf-stat.i.cycles-between-cache-misses
0.01 ± 15% -0.0 0.01 ± 5% perf-stat.i.dTLB-load-miss-rate%
338311 ± 15% -21.7% 264729 ± 8% perf-stat.i.dTLB-load-misses
149.20 ± 5% +115.1% 320.97 ± 2% perf-stat.i.metric.K/sec
86.96 +3.9 90.83 perf-stat.i.node-load-miss-rate%
2659602 ± 3% +205.8% 8133328 ± 4% perf-stat.i.node-load-misses
398586 ± 5% +69.2% 674468 ± 5% perf-stat.i.node-loads
4272013 ± 2% +102.4% 8646615 ± 5% perf-stat.i.node-store-misses
444659 ± 3% +117.1% 965253 ± 5% perf-stat.i.node-stores
1.63 ± 2% +149.4% 4.06 ± 2% perf-stat.overall.MPKI
12.08 +7.3 19.41 perf-stat.overall.cache-miss-rate%
16313 -59.8% 6554 perf-stat.overall.cycles-between-cache-misses
0.02 ± 15% -0.0 0.01 ± 7% perf-stat.overall.dTLB-load-miss-rate%
86.84 +5.5 92.33 perf-stat.overall.node-load-miss-rate%
12940552 ± 2% +146.4% 31886577 ± 3% perf-stat.ps.cache-misses
1.072e+08 ± 2% +53.3% 1.643e+08 ± 3% perf-stat.ps.cache-references
85805 ± 2% -8.3% 78664 ± 3% perf-stat.ps.context-switches
341154 ± 16% -21.3% 268598 ± 9% perf-stat.ps.dTLB-load-misses
2632824 ± 2% +206.4% 8067907 ± 4% perf-stat.ps.node-load-misses
398939 ± 5% +68.2% 670849 ± 5% perf-stat.ps.node-loads
4233610 ± 2% +102.6% 8578379 ± 4% perf-stat.ps.node-store-misses
434918 ± 3% +119.3% 953796 ± 4% perf-stat.ps.node-stores
38.66 -38.7 0.00 perf-profile.calltrace.cycles-pp.aa_get_task_label.apparmor_task_kill.security_task_kill.kill_something_info.__x64_sys_kill
9.04 ± 6% -2.4 6.67 perf-profile.calltrace.cycles-pp.aa_may_signal.apparmor_task_kill.security_task_kill.kill_something_info.__x64_sys_kill
0.74 -0.0 0.68 perf-profile.calltrace.cycles-pp.profile_signal_perm.aa_may_signal.apparmor_task_kill.security_task_kill.kill_something_info
0.76 -0.0 0.72 perf-profile.calltrace.cycles-pp.kill_pid_info.kill_something_info.__x64_sys_kill.do_syscall_64.entry_SYSCALL_64_after_hwframe
98.37 +0.1 98.46 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.kill
98.38 +0.1 98.47 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.kill
98.40 +0.1 98.49 perf-profile.calltrace.cycles-pp.kill
98.24 +0.1 98.36 perf-profile.calltrace.cycles-pp.__x64_sys_kill.do_syscall_64.entry_SYSCALL_64_after_hwframe.kill
98.20 +0.1 98.32 perf-profile.calltrace.cycles-pp.kill_something_info.__x64_sys_kill.do_syscall_64.entry_SYSCALL_64_after_hwframe.kill
0.00 +0.7 0.68 perf-profile.calltrace.cycles-pp.check_kill_permission.kill_something_info.__x64_sys_kill.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.7 0.74 perf-profile.calltrace.cycles-pp.get_task_cred.apparmor_task_kill.security_task_kill.kill_something_info.__x64_sys_kill
38.96 -39.0 0.00 perf-profile.children.cycles-pp.aa_get_task_label
9.14 ± 6% -2.4 6.74 perf-profile.children.cycles-pp.aa_may_signal
0.47 ± 4% -0.1 0.41 ± 2% perf-profile.children.cycles-pp.__x64_sys_openat
0.47 ± 3% -0.1 0.41 ± 2% perf-profile.children.cycles-pp.do_sys_openat2
0.46 ± 3% -0.1 0.41 perf-profile.children.cycles-pp.open64
0.76 -0.1 0.71 perf-profile.children.cycles-pp.profile_signal_perm
0.45 ± 4% -0.1 0.40 perf-profile.children.cycles-pp.do_filp_open
0.45 ± 4% -0.1 0.40 ± 2% perf-profile.children.cycles-pp.path_openat
0.20 ± 2% -0.0 0.15 ± 3% perf-profile.children.cycles-pp.__schedule
0.20 ± 3% -0.0 0.16 ± 3% perf-profile.children.cycles-pp.schedule
0.32 ± 6% -0.0 0.28 ± 4% perf-profile.children.cycles-pp._raw_spin_lock
0.76 -0.0 0.72 perf-profile.children.cycles-pp.kill_pid_info
0.07 ± 7% -0.0 0.03 ± 70% perf-profile.children.cycles-pp.complete_signal
0.10 ± 5% -0.0 0.07 perf-profile.children.cycles-pp.do_send_sig_info
0.09 -0.0 0.06 ± 7% perf-profile.children.cycles-pp.__send_signal_locked
0.19 ± 3% -0.0 0.17 ± 3% perf-profile.children.cycles-pp.apparmor_capable
0.07 ± 5% -0.0 0.05 ± 7% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.07 ± 5% -0.0 0.05 ± 7% perf-profile.children.cycles-pp.try_to_wake_up
0.19 ± 3% -0.0 0.17 ± 2% perf-profile.children.cycles-pp.ns_capable
0.19 ± 3% -0.0 0.17 ± 2% perf-profile.children.cycles-pp.security_capable
0.09 -0.0 0.07 ± 5% perf-profile.children.cycles-pp.__close
0.07 ± 7% -0.0 0.05 perf-profile.children.cycles-pp.__x64_sys_pause
0.07 -0.0 0.06 ± 8% perf-profile.children.cycles-pp.task_work_run
0.09 ± 5% -0.0 0.07 ± 6% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.09 ± 8% -0.0 0.07 ± 6% perf-profile.children.cycles-pp.lookup_fast
0.07 ± 5% -0.0 0.06 perf-profile.children.cycles-pp.proc_sys_call_handler
98.24 +0.1 98.36 perf-profile.children.cycles-pp.__x64_sys_kill
98.23 +0.1 98.35 perf-profile.children.cycles-pp.kill_something_info
0.05 +0.1 0.17 ± 2% perf-profile.children.cycles-pp.audit_signal_info
0.07 ± 7% +0.2 0.27 ± 2% perf-profile.children.cycles-pp.audit_signal_info_syscall
0.42 +0.3 0.70 perf-profile.children.cycles-pp.check_kill_permission
0.00 +0.8 0.76 perf-profile.children.cycles-pp.get_task_cred
38.83 -38.8 0.00 perf-profile.self.cycles-pp.aa_get_task_label
8.36 ± 6% -2.3 6.03 perf-profile.self.cycles-pp.aa_may_signal
0.74 -0.1 0.68 perf-profile.self.cycles-pp.profile_signal_perm
0.10 ± 4% -0.0 0.08 ± 4% perf-profile.self.cycles-pp.security_task_kill
0.09 ± 4% -0.0 0.07 ± 5% perf-profile.self.cycles-pp._raw_spin_lock
0.07 ± 5% -0.0 0.05 perf-profile.self.cycles-pp.switch_mm_irqs_off
0.12 ± 4% -0.0 0.11 ± 5% perf-profile.self.cycles-pp.__kill_pgrp_info
0.09 ± 5% -0.0 0.08 perf-profile.self.cycles-pp.kill_something_info
0.11 ± 4% -0.0 0.10 perf-profile.self.cycles-pp.check_kill_permission
0.00 +0.2 0.16 ± 2% perf-profile.self.cycles-pp.audit_signal_info
0.06 ± 8% +0.2 0.26 ± 2% perf-profile.self.cycles-pp.audit_signal_info_syscall
0.00 +0.8 0.75 perf-profile.self.cycles-pp.get_task_cred
46.37 +40.2 86.58 perf-profile.self.cycles-pp.apparmor_task_kill
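One plausible reading of the profile (our interpretation, not something
this data alone settles): the contention that used to be concentrated
in aa_get_task_label() (38.8% self time) appears to have moved into
apparmor_task_kill() itself (46% -> 87% self time), while
node-load-misses roughly triple and perf-c2c HITM counts more than
double, all consistent with a shared cache line bouncing between cores.
get_task_cred() takes a reference on the target task's credentials, so
with 64 threads signalling concurrently its atomic refcount update is a
natural candidate for that contention. A condensed sketch of the helper
(simplified from kernel/cred.c; check the tree for the exact code):

	/* simplified from kernel/cred.c -- illustrative, not verbatim */
	const struct cred *get_task_cred(struct task_struct *task)
	{
		const struct cred *cred;

		rcu_read_lock();
		do {
			cred = __task_cred(task);
		} while (!get_cred_rcu(cred));	/* atomic inc-not-zero on cred->usage */
		rcu_read_unlock();

		return cred;
	}

The removed aa_get_task_label() path took its reference on the AppArmor
label rather than on the cred; why the cred-based reference behaves
slightly worse for this workload would need further investigation.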
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki