2022-01-21 21:02:33

by kernel test robot

Subject: [net] 91a760b269: stress-ng.sockfd.ops_per_sec 11.1% improvement



Greetings,

FYI, we noticed an 11.1% improvement of stress-ng.sockfd.ops_per_sec due to commit:


commit: 91a760b26926265a60c77ddf016529bcf3e17a04 ("net: bpf: Handle return value of BPF_CGROUP_RUN_PROG_INET{4,6}_POST_BIND()")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master

in testcase: stress-ng
on test machine: 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz with 128G memory
with the following parameters (a short sketch of what the sockfd test exercises follows the list):

nr_threads: 100%
testtime: 60s
class: network
test: sockfd
cpufreq_governor: performance
ucode: 0xd000280
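
The sockfd stressor passes open file descriptors between processes over an
AF_UNIX socket using SCM_RIGHTS ancillary data, which is why the profile
below is dominated by unix_attach_fds()/unix_inflight() on the send side and
unix_detach_fds()/unix_notinflight() on the receive side. The following is a
minimal, self-contained sketch of that pattern (illustrative only, written
for this report, not stress-ng's actual source):

/*
 * Sketch: pass an open fd from parent to child over an AF_UNIX socketpair
 * using sendmsg()/recvmsg() with SCM_RIGHTS.  All names below are ours.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

int main(void)
{
	union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } u;
	char byte = 0;
	struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
	struct msghdr msg = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = u.buf, .msg_controllen = sizeof(u.buf),
	};
	struct cmsghdr *cmsg;
	int sv[2], fd;

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
		return 1;

	if (fork() == 0) {
		/* child: receive the descriptor and read through it */
		char out[64] = "";

		if (recvmsg(sv[1], &msg, 0) < 0)
			return 1;
		memcpy(&fd, CMSG_DATA(CMSG_FIRSTHDR(&msg)), sizeof(fd));
		read(fd, out, sizeof(out) - 1);
		printf("child read via passed fd: %s", out);
		return 0;
	}

	/* parent: attach an open fd as SCM_RIGHTS ancillary data and send it */
	fd = open("/proc/version", O_RDONLY);
	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(fd));
	memcpy(CMSG_DATA(cmsg), &fd, sizeof(fd));
	if (sendmsg(sv[0], &msg, 0) < 0)
		return 1;

	wait(NULL);
	return 0;
}

stress-ng runs this kind of send/receive loop at high rate across many
workers, so the per-fd bookkeeping taken under a global spinlock
(unix_inflight()/unix_notinflight()) becomes the contention point seen in
the profile data.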






Details are as below:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file

# if you come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.

=========================================================================================
class/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime/ucode:
network/gcc-9/performance/x86_64-rhel-8.3/100%/debian-10.4-x86_64-20200603.cgz/lkp-icl-2sp6/sockfd/stress-ng/60s/0xd000280

commit:
44bab87d8c ("bpf/selftests: Test bpf_d_path on rdonly_mem.")
91a760b269 ("net: bpf: Handle return value of BPF_CGROUP_RUN_PROG_INET{4,6}_POST_BIND()")

44bab87d8ca6f054 91a760b26926265a60c77ddf016
---------------- ---------------------------
old value (±%stddev) %change new value (±%stddev) metric
62606757 +11.1% 69543502 stress-ng.sockfd.ops
1042700 +11.1% 1158219 stress-ng.sockfd.ops_per_sec
39649320 ± 3% +16.9% 46347365 ± 2% stress-ng.time.involuntary_context_switches
40863134 ± 3% +16.6% 47645007 ± 2% stress-ng.time.voluntary_context_switches
1218291 ± 7% +15.0% 1400874 ± 3% vmstat.system.cs
1531051 +8.7% 1664219 proc-vmstat.numa_hit
1415336 +9.4% 1548835 proc-vmstat.numa_local
1533329 +8.7% 1666172 proc-vmstat.pgalloc_normal
1312762 +10.1% 1445416 proc-vmstat.pgfree
3.91 ± 9% +44.6% 5.65 ± 23% perf-stat.i.MPKI
0.38 ± 6% +0.2 0.58 ± 23% perf-stat.i.branch-miss-rate%
34907752 ± 7% +15.9% 40452634 ± 2% perf-stat.i.branch-misses
58903061 ± 4% +7.5% 63345388 perf-stat.i.cache-misses
1.816e+08 ± 5% +11.9% 2.032e+08 ± 2% perf-stat.i.cache-references
1269255 ± 7% +15.6% 1467498 ± 3% perf-stat.i.context-switches
1256 ± 2% +17.2% 1473 ± 3% perf-stat.i.cpu-migrations
6575 -6.6% 6139 perf-stat.i.cycles-between-cache-misses
0.01 ± 44% +0.0 0.03 ± 48% perf-stat.i.dTLB-load-miss-rate%
4.035e+09 ± 5% +10.9% 4.475e+09 ± 2% perf-stat.i.dTLB-stores
255.11 ± 3% +10.2% 281.05 perf-stat.i.metric.K/sec
12204006 ± 4% +8.3% 13217107 ± 2% perf-stat.i.node-load-misses
1942843 ± 4% +13.3% 2201701 ± 2% perf-stat.i.node-loads
12401363 ± 4% +10.3% 13678576 perf-stat.i.node-store-misses
3.74 +7.5% 4.02 perf-stat.overall.MPKI
0.31 ± 2% +0.0 0.35 ± 2% perf-stat.overall.branch-miss-rate%
32.41 -1.2 31.18 perf-stat.overall.cache-miss-rate%
8.01 -4.5% 7.65 perf-stat.overall.cpi
6612 -7.6% 6108 perf-stat.overall.cycles-between-cache-misses
0.01 ± 9% +0.0 0.01 ± 6% perf-stat.overall.dTLB-load-miss-rate%
0.12 +4.7% 0.13 perf-stat.overall.ipc
85.41 +1.6 87.00 perf-stat.overall.node-store-miss-rate%
34371265 ± 6% +15.9% 39833821 ± 2% perf-stat.ps.branch-misses
58139497 ± 3% +7.6% 62545220 perf-stat.ps.cache-misses
1.795e+08 ± 4% +11.8% 2.007e+08 ± 2% perf-stat.ps.cache-references
1251886 ± 7% +15.6% 1447062 ± 3% perf-stat.ps.context-switches
1239 ± 2% +17.4% 1454 ± 3% perf-stat.ps.cpu-migrations
1200950 ± 12% +21.5% 1459523 ± 7% perf-stat.ps.dTLB-load-misses
3.98e+09 ± 5% +10.9% 4.415e+09 ± 2% perf-stat.ps.dTLB-stores
12039917 ± 3% +8.3% 13043647 perf-stat.ps.node-load-misses
1927780 ± 4% +13.3% 2184856 ± 2% perf-stat.ps.node-loads
12234839 ± 4% +10.3% 13498939 perf-stat.ps.node-store-misses
3.127e+12 +5.4% 3.295e+12 perf-stat.total.instructions
19250 ± 4% +17.0% 22532 ± 8% softirqs.CPU1.RCU
19195 ± 5% +10.7% 21248 ± 2% softirqs.CPU10.RCU
18204 ± 2% +14.3% 20809 softirqs.CPU100.RCU
18565 ± 4% +12.6% 20897 ± 2% softirqs.CPU102.RCU
18396 ± 3% +12.7% 20728 softirqs.CPU103.RCU
18343 ± 2% +14.3% 20957 ± 2% softirqs.CPU104.RCU
18219 +13.4% 20666 ± 5% softirqs.CPU105.RCU
18599 ± 4% +13.8% 21171 ± 2% softirqs.CPU107.RCU
18222 ± 3% +13.7% 20714 softirqs.CPU108.RCU
18416 ± 2% +11.3% 20496 ± 4% softirqs.CPU109.RCU
18865 ± 2% +9.5% 20655 ± 3% softirqs.CPU11.RCU
18319 +12.3% 20565 ± 2% softirqs.CPU110.RCU
18314 ± 2% +12.8% 20663 ± 3% softirqs.CPU111.RCU
18326 +13.2% 20745 ± 3% softirqs.CPU113.RCU
18515 ± 3% +12.0% 20733 ± 3% softirqs.CPU115.RCU
18080 +12.4% 20330 ± 3% softirqs.CPU116.RCU
18384 +11.4% 20477 ± 4% softirqs.CPU117.RCU
18310 ± 5% +10.3% 20189 ± 3% softirqs.CPU118.RCU
18515 ± 2% +12.5% 20822 ± 2% softirqs.CPU119.RCU
18642 ± 3% +11.9% 20852 softirqs.CPU120.RCU
18121 ± 3% +12.9% 20463 ± 2% softirqs.CPU121.RCU
18926 ± 3% +9.5% 20733 ± 4% softirqs.CPU122.RCU
18274 ± 3% +12.5% 20555 ± 3% softirqs.CPU123.RCU
18230 ± 5% +13.8% 20740 ± 4% softirqs.CPU124.RCU
18462 ± 2% +12.8% 20823 ± 5% softirqs.CPU125.RCU
17987 ± 2% +15.0% 20685 ± 5% softirqs.CPU126.RCU
16788 ± 2% +14.8% 19269 ± 3% softirqs.CPU127.RCU
18544 ± 2% +16.6% 21622 ± 7% softirqs.CPU14.RCU
18921 ± 2% +10.0% 20808 ± 3% softirqs.CPU15.RCU
18674 ± 2% +10.9% 20708 ± 3% softirqs.CPU16.RCU
18471 +12.9% 20855 softirqs.CPU17.RCU
18661 ± 3% +13.0% 21093 ± 4% softirqs.CPU18.RCU
18942 ± 2% +15.1% 21812 ± 5% softirqs.CPU19.RCU
18757 ± 2% +10.4% 20706 ± 2% softirqs.CPU20.RCU
18686 ± 3% +13.5% 21208 ± 2% softirqs.CPU21.RCU
18959 ± 2% +11.8% 21193 softirqs.CPU22.RCU
18882 ± 2% +12.0% 21141 ± 3% softirqs.CPU24.RCU
18758 ± 2% +12.5% 21100 ± 2% softirqs.CPU25.RCU
18486 ± 4% +15.8% 21406 ± 3% softirqs.CPU26.RCU
18597 +11.0% 20651 ± 4% softirqs.CPU28.RCU
19322 +13.3% 21895 ± 9% softirqs.CPU3.RCU
18805 ± 2% +11.9% 21052 ± 5% softirqs.CPU30.RCU
18642 ± 2% +13.0% 21063 ± 2% softirqs.CPU32.RCU
18504 ± 2% +12.8% 20863 softirqs.CPU33.RCU
18648 ± 3% +11.2% 20730 ± 2% softirqs.CPU34.RCU
18778 +11.6% 20951 ± 3% softirqs.CPU35.RCU
18595 ± 3% +13.9% 21182 ± 6% softirqs.CPU36.RCU
18541 ± 3% +14.1% 21148 ± 4% softirqs.CPU37.RCU
18482 ± 2% +12.0% 20693 ± 3% softirqs.CPU38.RCU
18078 ± 2% +17.6% 21255 ± 4% softirqs.CPU39.RCU
19108 ± 2% +14.9% 21960 ± 4% softirqs.CPU4.RCU
18751 +11.4% 20890 ± 2% softirqs.CPU41.RCU
18632 ± 2% +13.0% 21055 ± 3% softirqs.CPU42.RCU
18300 ± 2% +13.9% 20850 ± 3% softirqs.CPU43.RCU
18647 ± 2% +13.4% 21140 ± 4% softirqs.CPU44.RCU
18327 +14.3% 20940 ± 3% softirqs.CPU45.RCU
18272 ± 3% +14.3% 20882 ± 2% softirqs.CPU46.RCU
18359 ± 2% +13.7% 20874 softirqs.CPU47.RCU
18550 ± 2% +11.7% 20718 ± 3% softirqs.CPU48.RCU
18418 ± 2% +11.7% 20576 ± 3% softirqs.CPU49.RCU
18753 ± 4% +13.2% 21229 ± 3% softirqs.CPU5.RCU
18450 +11.4% 20545 ± 2% softirqs.CPU50.RCU
18442 ± 3% +12.1% 20673 ± 2% softirqs.CPU51.RCU
18402 ± 2% +13.6% 20896 ± 5% softirqs.CPU53.RCU
18664 ± 2% +18.9% 22195 ± 10% softirqs.CPU54.RCU
18502 ± 2% +10.2% 20389 ± 2% softirqs.CPU55.RCU
18421 ± 3% +12.8% 20787 ± 3% softirqs.CPU56.RCU
18544 ± 3% +12.0% 20763 ± 3% softirqs.CPU57.RCU
17891 +16.1% 20770 ± 3% softirqs.CPU58.RCU
18934 +15.4% 21842 ± 8% softirqs.CPU6.RCU
18276 ± 2% +16.3% 21257 ± 3% softirqs.CPU61.RCU
19277 ± 3% +10.8% 21367 softirqs.CPU62.RCU
18512 ± 2% +14.0% 21096 softirqs.CPU63.RCU
18081 +15.9% 20948 ± 3% softirqs.CPU66.RCU
18550 ± 3% +12.3% 20826 softirqs.CPU67.RCU
18111 +17.7% 21322 softirqs.CPU68.RCU
19092 ± 3% +12.0% 21383 ± 3% softirqs.CPU69.RCU
18334 ± 3% +16.8% 21415 ± 2% softirqs.CPU7.RCU
18770 ± 5% +11.7% 20972 ± 3% softirqs.CPU70.RCU
18500 ± 3% +13.2% 20946 ± 3% softirqs.CPU72.RCU
18748 ± 3% +12.8% 21142 ± 3% softirqs.CPU73.RCU
19066 ± 3% +11.5% 21261 softirqs.CPU74.RCU
18789 ± 4% +12.3% 21098 ± 3% softirqs.CPU75.RCU
18516 +14.7% 21244 ± 3% softirqs.CPU76.RCU
18825 ± 2% +12.1% 21105 ± 3% softirqs.CPU78.RCU
18320 +16.9% 21423 ± 3% softirqs.CPU79.RCU
18691 ± 2% +16.8% 21828 ± 7% softirqs.CPU8.RCU
18657 ± 4% +12.8% 21040 softirqs.CPU80.RCU
18694 ± 2% +10.0% 20569 ± 2% softirqs.CPU81.RCU
18670 ± 2% +12.8% 21058 ± 3% softirqs.CPU82.RCU
18548 ± 2% +13.8% 21103 ± 2% softirqs.CPU84.RCU
18485 ± 2% +14.3% 21125 ± 3% softirqs.CPU85.RCU
18412 ± 3% +12.3% 20680 ± 3% softirqs.CPU86.RCU
18635 ± 2% +12.9% 21041 ± 4% softirqs.CPU87.RCU
18272 ± 2% +15.9% 21186 ± 2% softirqs.CPU88.RCU
17968 ± 3% +14.5% 20566 ± 3% softirqs.CPU89.RCU
18656 ± 3% +14.5% 21365 ± 4% softirqs.CPU9.RCU
18629 ± 3% +13.0% 21059 softirqs.CPU90.RCU
18415 ± 2% +13.7% 20940 ± 2% softirqs.CPU91.RCU
18711 ± 3% +13.6% 21252 ± 3% softirqs.CPU92.RCU
18206 ± 4% +16.0% 21113 softirqs.CPU93.RCU
18633 ± 2% +15.6% 21541 ± 2% softirqs.CPU94.RCU
18756 ± 5% +11.1% 20839 ± 3% softirqs.CPU96.RCU
18441 ± 2% +13.4% 20913 ± 2% softirqs.CPU98.RCU
18316 ± 2% +14.2% 20914 softirqs.CPU99.RCU
2383757 +12.6% 2683774 ± 2% softirqs.RCU
46.59 -1.0 45.57 perf-profile.calltrace.cycles-pp._raw_spin_lock.unix_inflight.unix_attach_fds.unix_scm_to_skb.unix_stream_sendmsg
46.45 -1.0 45.43 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.unix_inflight.unix_attach_fds.unix_scm_to_skb
46.66 -1.0 45.66 perf-profile.calltrace.cycles-pp.unix_inflight.unix_attach_fds.unix_scm_to_skb.unix_stream_sendmsg.sock_sendmsg
46.96 -0.9 46.07 perf-profile.calltrace.cycles-pp.unix_scm_to_skb.unix_stream_sendmsg.sock_sendmsg.____sys_sendmsg.___sys_sendmsg
46.96 -0.9 46.06 perf-profile.calltrace.cycles-pp.unix_attach_fds.unix_scm_to_skb.unix_stream_sendmsg.sock_sendmsg.____sys_sendmsg
46.06 -0.7 45.34 perf-profile.calltrace.cycles-pp._raw_spin_lock.unix_notinflight.unix_detach_fds.unix_stream_read_generic.unix_stream_recvmsg
45.90 -0.7 45.20 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.unix_notinflight.unix_detach_fds.unix_stream_read_generic
46.14 -0.7 45.45 perf-profile.calltrace.cycles-pp.unix_notinflight.unix_detach_fds.unix_stream_read_generic.unix_stream_recvmsg.____sys_recvmsg
46.15 -0.7 45.46 perf-profile.calltrace.cycles-pp.unix_detach_fds.unix_stream_read_generic.unix_stream_recvmsg.____sys_recvmsg.___sys_recvmsg
48.81 -0.5 48.33 perf-profile.calltrace.cycles-pp.unix_stream_sendmsg.sock_sendmsg.____sys_sendmsg.___sys_sendmsg.__sys_sendmsg
47.52 -0.5 47.04 perf-profile.calltrace.cycles-pp.unix_stream_read_generic.unix_stream_recvmsg.____sys_recvmsg.___sys_recvmsg.__sys_recvmsg
47.54 -0.5 47.06 perf-profile.calltrace.cycles-pp.unix_stream_recvmsg.____sys_recvmsg.___sys_recvmsg.__sys_recvmsg.do_syscall_64
47.65 -0.4 47.21 perf-profile.calltrace.cycles-pp.____sys_recvmsg.___sys_recvmsg.__sys_recvmsg.do_syscall_64.entry_SYSCALL_64_after_hwframe
47.74 -0.4 47.31 perf-profile.calltrace.cycles-pp.___sys_recvmsg.__sys_recvmsg.do_syscall_64.entry_SYSCALL_64_after_hwframe.recvmsg
47.76 -0.4 47.34 perf-profile.calltrace.cycles-pp.__sys_recvmsg.do_syscall_64.entry_SYSCALL_64_after_hwframe.recvmsg.stress_run
47.81 -0.4 47.39 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.recvmsg.stress_run
47.83 -0.4 47.41 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.recvmsg.stress_run
47.87 -0.4 47.46 perf-profile.calltrace.cycles-pp.recvmsg.stress_run
49.00 -0.4 48.58 perf-profile.calltrace.cycles-pp.sock_sendmsg.____sys_sendmsg.___sys_sendmsg.__sys_sendmsg.do_syscall_64
49.07 -0.4 48.67 perf-profile.calltrace.cycles-pp.____sys_sendmsg.___sys_sendmsg.__sys_sendmsg.do_syscall_64.entry_SYSCALL_64_after_hwframe
49.23 -0.4 48.86 perf-profile.calltrace.cycles-pp.___sys_sendmsg.__sys_sendmsg.do_syscall_64.entry_SYSCALL_64_after_hwframe.sendmsg
49.26 -0.4 48.90 perf-profile.calltrace.cycles-pp.__sys_sendmsg.do_syscall_64.entry_SYSCALL_64_after_hwframe.sendmsg.stress_run
49.45 -0.3 49.15 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.sendmsg.stress_run
49.47 -0.3 49.18 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.sendmsg.stress_run
49.55 -0.3 49.27 perf-profile.calltrace.cycles-pp.sendmsg.stress_run
99.59 -0.1 99.51 perf-profile.calltrace.cycles-pp.stress_run
0.51 +0.1 0.61 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close.stress_run
0.52 +0.1 0.62 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__close.stress_run
0.58 +0.1 0.69 perf-profile.calltrace.cycles-pp.__close.stress_run
0.53 ± 2% +0.2 0.70 perf-profile.calltrace.cycles-pp.__scm_send.unix_stream_sendmsg.sock_sendmsg.____sys_sendmsg.___sys_sendmsg
0.70 ± 2% +0.3 1.00 perf-profile.calltrace.cycles-pp.do_open.path_openat.do_filp_open.do_sys_openat2.do_sys_open
1.24 +0.4 1.68 perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.23 +0.4 1.67 perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64
1.37 +0.5 1.84 perf-profile.calltrace.cycles-pp.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
1.39 +0.5 1.86 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64.stress_run
1.37 +0.5 1.85 perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64.stress_run
1.39 +0.5 1.87 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.open64.stress_run
1.43 +0.5 1.91 perf-profile.calltrace.cycles-pp.open64.stress_run
0.00 +0.6 0.58 ± 2% perf-profile.calltrace.cycles-pp.do_dentry_open.do_open.path_openat.do_filp_open.do_sys_openat2
92.42 -1.7 90.73 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
92.84 -1.7 91.16 perf-profile.children.cycles-pp._raw_spin_lock
46.66 -1.0 45.66 perf-profile.children.cycles-pp.unix_inflight
46.96 -0.9 46.07 perf-profile.children.cycles-pp.unix_scm_to_skb
46.96 -0.9 46.07 perf-profile.children.cycles-pp.unix_attach_fds
46.14 -0.7 45.45 perf-profile.children.cycles-pp.unix_notinflight
46.15 -0.7 45.46 perf-profile.children.cycles-pp.unix_detach_fds
48.82 -0.5 48.34 perf-profile.children.cycles-pp.unix_stream_sendmsg
47.59 -0.5 47.13 perf-profile.children.cycles-pp.unix_stream_recvmsg
47.58 -0.5 47.12 perf-profile.children.cycles-pp.unix_stream_read_generic
47.71 -0.4 47.28 perf-profile.children.cycles-pp.____sys_recvmsg
47.80 -0.4 47.38 perf-profile.children.cycles-pp.___sys_recvmsg
47.82 -0.4 47.41 perf-profile.children.cycles-pp.__sys_recvmsg
49.00 -0.4 48.59 perf-profile.children.cycles-pp.sock_sendmsg
49.07 -0.4 48.68 perf-profile.children.cycles-pp.____sys_sendmsg
47.99 -0.4 47.62 perf-profile.children.cycles-pp.recvmsg
49.23 -0.4 48.87 perf-profile.children.cycles-pp.___sys_sendmsg
49.26 -0.4 48.90 perf-profile.children.cycles-pp.__sys_sendmsg
49.68 -0.2 49.44 perf-profile.children.cycles-pp.sendmsg
99.33 -0.1 99.22 perf-profile.children.cycles-pp.do_syscall_64
99.38 -0.1 99.28 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
99.59 -0.1 99.51 perf-profile.children.cycles-pp.stress_run
0.22 ± 2% -0.0 0.20 perf-profile.children.cycles-pp.skb_unlink
0.10 +0.0 0.11 perf-profile.children.cycles-pp.copy_user_generic_unrolled
0.05 +0.0 0.06 perf-profile.children.cycles-pp.switch_fpu_return
0.07 +0.0 0.08 perf-profile.children.cycles-pp.recvmsg_copy_msghdr
0.07 +0.0 0.08 perf-profile.children.cycles-pp.__skb_datagram_iter
0.09 +0.0 0.10 perf-profile.children.cycles-pp.ioctl
0.06 +0.0 0.07 perf-profile.children.cycles-pp.kmem_cache_alloc_node
0.05 +0.0 0.06 ± 6% perf-profile.children.cycles-pp.dequeue_entity
0.13 ± 5% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.rcu_core
0.07 ± 6% +0.0 0.09 ± 8% perf-profile.children.cycles-pp.task_tick_fair
0.08 ± 4% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.link_path_walk
0.12 ± 5% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.ret_from_fork
0.12 ± 5% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.kthread
0.12 ± 3% +0.0 0.13 ± 2% perf-profile.children.cycles-pp.__x64_sys_close
0.11 ± 3% +0.0 0.13 ± 3% perf-profile.children.cycles-pp.kmem_cache_free
0.06 ± 7% +0.0 0.08 ± 8% perf-profile.children.cycles-pp.kobject_put
0.08 ± 6% +0.0 0.09 perf-profile.children.cycles-pp.kmalloc_reserve
0.09 ± 5% +0.0 0.11 perf-profile.children.cycles-pp.__copy_msghdr_from_user
0.07 ± 6% +0.0 0.09 perf-profile.children.cycles-pp.getname_flags
0.07 ± 5% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.__kmalloc_node_track_caller
0.07 ± 7% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.cdev_put
0.06 ± 9% +0.0 0.07 ± 5% perf-profile.children.cycles-pp.enqueue_entity
0.08 ± 5% +0.0 0.10 perf-profile.children.cycles-pp.iovec_from_user
0.28 ± 3% +0.0 0.30 ± 2% perf-profile.children.cycles-pp.hrtimer_interrupt
0.14 ± 6% +0.0 0.16 ± 3% perf-profile.children.cycles-pp.tick_sched_handle
0.10 +0.0 0.12 ± 3% perf-profile.children.cycles-pp.__import_iovec
0.11 +0.0 0.13 ± 2% perf-profile.children.cycles-pp.__entry_text_start
0.08 ± 4% +0.0 0.10 perf-profile.children.cycles-pp.__check_object_size
0.06 ± 7% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.security_file_free
0.11 ± 4% +0.0 0.13 ± 3% perf-profile.children.cycles-pp.update_load_avg
0.11 ± 4% +0.0 0.13 ± 3% perf-profile.children.cycles-pp.import_iovec
0.06 +0.0 0.08 perf-profile.children.cycles-pp.apparmor_file_free_security
0.17 ± 4% +0.0 0.19 ± 3% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.12 +0.0 0.14 perf-profile.children.cycles-pp.syscall_return_via_sysret
0.09 ± 4% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.common_file_perm
0.06 ± 6% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.select_idle_cpu
0.06 ± 9% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.lockref_get
0.12 ± 3% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.__might_resched
0.09 ± 5% +0.0 0.12 ± 4% perf-profile.children.cycles-pp.security_file_receive
0.14 ± 2% +0.0 0.16 ± 2% perf-profile.children.cycles-pp.kmem_cache_alloc
0.10 ± 7% +0.0 0.12 ± 5% perf-profile.children.cycles-pp.__legitimize_path
0.08 ± 9% +0.0 0.10 ± 3% perf-profile.children.cycles-pp.update_curr
0.09 ± 8% +0.0 0.11 ± 6% perf-profile.children.cycles-pp.lockref_get_not_dead
0.08 ± 4% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.load_new_mm_cr3
0.10 ± 4% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.complete_walk
0.14 ± 2% +0.0 0.17 perf-profile.children.cycles-pp.sendmsg_copy_msghdr
0.09 ± 5% +0.0 0.12 ± 4% perf-profile.children.cycles-pp.pick_next_task_fair
0.10 ± 8% +0.0 0.13 ± 4% perf-profile.children.cycles-pp.ttwu_do_activate
0.08 +0.0 0.11 ± 3% perf-profile.children.cycles-pp.sock_recvmsg
0.08 +0.0 0.11 ± 3% perf-profile.children.cycles-pp.security_socket_recvmsg
0.16 ± 3% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.__receive_fd
0.10 ± 5% +0.0 0.12 ± 4% perf-profile.children.cycles-pp.select_idle_sibling
0.10 ± 6% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.enqueue_task_fair
0.11 ± 6% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.dequeue_task_fair
0.11 ± 4% +0.0 0.14 ± 6% perf-profile.children.cycles-pp.select_task_rq_fair
0.10 ± 5% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.try_to_unlazy
0.09 ± 5% +0.0 0.13 ± 3% perf-profile.children.cycles-pp.terminate_walk
0.08 ± 5% +0.0 0.12 ± 8% perf-profile.children.cycles-pp.propagate_protected_usage
0.18 ± 2% +0.0 0.21 ± 3% perf-profile.children.cycles-pp.sock_alloc_send_pskb
0.15 ± 3% +0.0 0.19 perf-profile.children.cycles-pp.__alloc_skb
0.11 ± 3% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.19 +0.0 0.22 ± 2% perf-profile.children.cycles-pp._copy_from_user
0.16 ± 3% +0.0 0.19 ± 3% perf-profile.children.cycles-pp.alloc_skb_with_frags
0.22 ± 3% +0.0 0.26 perf-profile.children.cycles-pp.scm_detach_fds
0.21 +0.0 0.25 perf-profile.children.cycles-pp.copy_msghdr_from_user
0.02 ±141% +0.0 0.06 ± 8% perf-profile.children.cycles-pp.cdev_get
0.14 ± 5% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.page_counter_cancel
0.09 ± 4% +0.0 0.14 ± 2% perf-profile.children.cycles-pp.aa_get_task_label
0.01 ±223% +0.0 0.06 ± 9% perf-profile.children.cycles-pp.__switch_to
0.13 ± 2% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.apparmor_file_alloc_security
0.00 +0.1 0.05 perf-profile.children.cycles-pp.kobject_get_unless_zero
0.00 +0.1 0.05 perf-profile.children.cycles-pp.__update_load_avg_se
0.00 +0.1 0.05 perf-profile.children.cycles-pp.__kmalloc_track_caller
0.17 ± 2% +0.1 0.22 perf-profile.children.cycles-pp.security_file_alloc
0.17 ± 5% +0.1 0.22 ± 4% perf-profile.children.cycles-pp.lockref_put_or_lock
0.00 +0.1 0.06 ± 9% perf-profile.children.cycles-pp.available_idle_cpu
0.19 ± 5% +0.1 0.25 ± 3% perf-profile.children.cycles-pp.dput
0.18 ± 4% +0.1 0.24 perf-profile.children.cycles-pp.page_counter_uncharge
0.14 ± 6% +0.1 0.20 ± 4% perf-profile.children.cycles-pp.chrdev_open
0.18 ± 2% +0.1 0.24 perf-profile.children.cycles-pp.scm_fp_dup
0.20 ± 4% +0.1 0.26 perf-profile.children.cycles-pp.obj_cgroup_uncharge_pages
0.18 ± 2% +0.1 0.24 perf-profile.children.cycles-pp.security_socket_sendmsg
0.20 ± 2% +0.1 0.27 ± 3% perf-profile.children.cycles-pp.page_counter_charge
0.22 ± 2% +0.1 0.29 ± 4% perf-profile.children.cycles-pp.obj_cgroup_charge_pages
0.32 ± 2% +0.1 0.39 ± 2% perf-profile.children.cycles-pp.__fput
0.26 +0.1 0.34 ± 2% perf-profile.children.cycles-pp.kmem_cache_alloc_trace
0.25 ± 2% +0.1 0.32 ± 3% perf-profile.children.cycles-pp.obj_cgroup_charge
0.28 ± 2% +0.1 0.36 perf-profile.children.cycles-pp.kfree
0.36 ± 2% +0.1 0.44 ± 2% perf-profile.children.cycles-pp.task_work_run
0.16 ± 3% +0.1 0.24 perf-profile.children.cycles-pp.ima_file_check
0.15 +0.1 0.23 ± 2% perf-profile.children.cycles-pp.security_task_getsecid_subj
0.15 ± 3% +0.1 0.23 perf-profile.children.cycles-pp.apparmor_task_getsecid
0.32 +0.1 0.40 perf-profile.children.cycles-pp.alloc_empty_file
0.31 +0.1 0.40 perf-profile.children.cycles-pp.__alloc_file
0.18 ± 2% +0.1 0.26 ± 2% perf-profile.children.cycles-pp.security_file_open
0.24 ± 2% +0.1 0.33 perf-profile.children.cycles-pp.aa_sk_perm
0.17 ± 3% +0.1 0.26 ± 4% perf-profile.children.cycles-pp.apparmor_file_open
0.32 ± 4% +0.1 0.42 ± 2% perf-profile.children.cycles-pp.schedule_timeout
0.32 ± 4% +0.1 0.42 ± 3% perf-profile.children.cycles-pp.__wake_up_common
0.31 ± 5% +0.1 0.41 ± 4% perf-profile.children.cycles-pp.try_to_wake_up
0.32 ± 5% +0.1 0.41 ± 3% perf-profile.children.cycles-pp.autoremove_wake_function
0.33 ± 4% +0.1 0.43 ± 3% perf-profile.children.cycles-pp.__wake_up_common_lock
0.34 ± 5% +0.1 0.44 ± 4% perf-profile.children.cycles-pp.sock_def_readable
0.60 +0.1 0.72 perf-profile.children.cycles-pp.__close
0.41 +0.1 0.53 perf-profile.children.cycles-pp.refcount_dec_not_one
0.42 +0.1 0.54 perf-profile.children.cycles-pp.free_uid
0.42 +0.1 0.54 perf-profile.children.cycles-pp.refcount_dec_and_lock_irqsave
0.44 +0.1 0.57 perf-profile.children.cycles-pp.__scm_destroy
0.48 ± 4% +0.1 0.62 ± 2% perf-profile.children.cycles-pp.__schedule
0.48 ± 4% +0.1 0.63 ± 2% perf-profile.children.cycles-pp.schedule
0.62 ± 2% +0.2 0.77 perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.65 ± 2% +0.2 0.81 perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.53 ± 2% +0.2 0.70 perf-profile.children.cycles-pp.__scm_send
0.40 ± 2% +0.2 0.58 ± 2% perf-profile.children.cycles-pp.do_dentry_open
0.70 ± 2% +0.3 1.00 perf-profile.children.cycles-pp.do_open
1.24 +0.4 1.68 perf-profile.children.cycles-pp.do_filp_open
1.23 +0.4 1.68 perf-profile.children.cycles-pp.path_openat
1.38 +0.5 1.85 perf-profile.children.cycles-pp.do_sys_open
1.37 +0.5 1.84 perf-profile.children.cycles-pp.do_sys_openat2
1.44 +0.5 1.93 perf-profile.children.cycles-pp.open64
92.10 -1.7 90.39 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.09 +0.0 0.10 ± 3% perf-profile.self.cycles-pp.copy_user_generic_unrolled
0.05 +0.0 0.06 ± 6% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.07 +0.0 0.08 ± 5% perf-profile.self.cycles-pp.kfree
0.06 +0.0 0.07 ± 6% perf-profile.self.cycles-pp.unix_stream_read_generic
0.07 +0.0 0.08 ± 5% perf-profile.self.cycles-pp.unix_notinflight
0.06 ± 7% +0.0 0.08 ± 8% perf-profile.self.cycles-pp.kobject_put
0.06 ± 7% +0.0 0.08 perf-profile.self.cycles-pp.__alloc_file
0.12 +0.0 0.14 ± 2% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.42 +0.0 0.44 perf-profile.self.cycles-pp._raw_spin_lock
0.06 ± 7% +0.0 0.08 ± 5% perf-profile.self.cycles-pp.__schedule
0.06 +0.0 0.08 perf-profile.self.cycles-pp.apparmor_file_free_security
0.07 ± 7% +0.0 0.09 ± 4% perf-profile.self.cycles-pp.unix_inflight
0.06 ± 9% +0.0 0.08 ± 6% perf-profile.self.cycles-pp.lockref_get
0.11 +0.0 0.13 ± 2% perf-profile.self.cycles-pp.__might_resched
0.08 ± 4% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.common_file_perm
0.09 ± 8% +0.0 0.11 ± 6% perf-profile.self.cycles-pp.lockref_get_not_dead
0.08 ± 4% +0.0 0.11 ± 4% perf-profile.self.cycles-pp.load_new_mm_cr3
0.08 ± 4% +0.0 0.12 ± 5% perf-profile.self.cycles-pp.propagate_protected_usage
0.05 ± 8% +0.0 0.09 perf-profile.self.cycles-pp.apparmor_task_getsecid
0.14 ± 5% +0.0 0.18 ± 2% perf-profile.self.cycles-pp.page_counter_cancel
0.12 +0.0 0.16 ± 2% perf-profile.self.cycles-pp.unix_attach_fds
0.13 ± 3% +0.0 0.17 ± 2% perf-profile.self.cycles-pp.apparmor_file_alloc_security
0.09 +0.0 0.14 ± 3% perf-profile.self.cycles-pp.aa_get_task_label
0.16 ± 3% +0.0 0.20 ± 2% perf-profile.self.cycles-pp.page_counter_charge
0.17 ± 5% +0.0 0.22 ± 5% perf-profile.self.cycles-pp.lockref_put_or_lock
0.00 +0.1 0.05 perf-profile.self.cycles-pp.__check_object_size
0.00 +0.1 0.05 ± 7% perf-profile.self.cycles-pp.__switch_to
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.available_idle_cpu
0.13 ± 3% +0.1 0.19 ± 2% perf-profile.self.cycles-pp.scm_fp_dup
0.16 +0.1 0.22 perf-profile.self.cycles-pp.__scm_send
0.22 +0.1 0.31 perf-profile.self.cycles-pp.aa_sk_perm
0.17 ± 2% +0.1 0.26 ± 3% perf-profile.self.cycles-pp.apparmor_file_open
0.41 +0.1 0.53 perf-profile.self.cycles-pp.refcount_dec_not_one




Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation

Thanks,
Oliver Sang


Attachments:
config-5.16.0-rc7-02009-g91a760b26926 (176.28 kB)
job-script (7.99 kB)
job.yaml (5.53 kB)
reproduce (350.00 B)