Hello,
kernel test robot noticed a -8.0% regression of stress-ng.af-alg.ops_per_sec on:
commit: bb897c55042e9330bcf88b4b13cbdd6f9fabdd5e ("crypto: jitter - replace LFSR with SHA3-256")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
testcase: stress-ng
test machine: 36 threads, 1 socket, Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz (Cascade Lake) with 128G memory
parameters:
nr_threads: 1
disk: 1HDD
testtime: 60s
fs: ext4
class: os
test: af-alg
cpufreq_governor: performance
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-lkp/[email protected]
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
=========================================================================================
class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime:
os/gcc-12/performance/1HDD/ext4/x86_64-rhel-8.3/1/debian-11.1-x86_64-20220510.cgz/lkp-csl-d02/af-alg/stress-ng/60s
commit:
3908edf868 ("crypto: hash - Make crypto_ahash_alg helper available")
bb897c5504 ("crypto: jitter - replace LFSR with SHA3-256")
3908edf868c34ed4 bb897c55042e9330bcf88b4b13c
---------------- ---------------------------
%stddev %change %stddev
\ | \
39589 -1.7% 38899 vmstat.system.cs
2.28 +2.8% 2.34 iostat.cpu.system
1.35 -6.2% 1.26 iostat.cpu.user
6.79 -0.3 6.48 turbostat.C1E%
0.36 +10.4% 0.40 turbostat.IPC
28.01 ± 24% -50.8% 13.79 ± 79% sched_debug.cfs_rq:/.removed.runnable_avg.avg
109.55 ± 13% -44.0% 61.34 ± 69% sched_debug.cfs_rq:/.removed.runnable_avg.stddev
27.99 ± 24% -50.7% 13.79 ± 79% sched_debug.cfs_rq:/.removed.util_avg.avg
109.51 ± 13% -44.0% 61.33 ± 69% sched_debug.cfs_rq:/.removed.util_avg.stddev
27891 -8.0% 25654 stress-ng.af-alg.ops
464.85 -8.0% 427.56 stress-ng.af-alg.ops_per_sec
21.80 +23.9% 27.00 stress-ng.time.percent_of_cpu_this_job_got
53625 -7.9% 49363 stress-ng.time.voluntary_context_switches
9582 -2.8% 9318 proc-vmstat.nr_shmem
7063402 -7.6% 6525652 proc-vmstat.numa_hit
7061755 -7.6% 6523785 proc-vmstat.numa_local
44336 -2.8% 43116 proc-vmstat.pgactivate
7230579 -7.6% 6679596 proc-vmstat.pgalloc_normal
8348623 -7.8% 7698238 proc-vmstat.pgfault
7198629 -7.7% 6647859 proc-vmstat.pgfree
7.60 -21.3% 5.98 perf-stat.i.MPKI
9.814e+08 -1.8% 9.639e+08 perf-stat.i.branch-instructions
2.35 -0.1 2.25 perf-stat.i.branch-miss-rate%
24221407 -5.4% 22904633 perf-stat.i.branch-misses
3.57 +0.1 3.67 perf-stat.i.cache-miss-rate%
45429613 -7.0% 42237643 perf-stat.i.cache-references
40881 -1.2% 40392 perf-stat.i.context-switches
0.85 -15.3% 0.72 perf-stat.i.cpi
255.50 ± 3% -5.1% 242.58 perf-stat.i.cpu-migrations
5338 +4.3% 5566 ± 2% perf-stat.i.cycles-between-cache-misses
0.03 ± 6% -0.0 0.03 ± 4% perf-stat.i.dTLB-load-miss-rate%
1.67e+09 +10.1% 1.839e+09 perf-stat.i.dTLB-loads
0.06 -0.0 0.04 perf-stat.i.dTLB-store-miss-rate%
424579 -7.5% 392578 perf-stat.i.dTLB-store-misses
7.639e+08 +19.3% 9.113e+08 perf-stat.i.dTLB-stores
1211462 -4.7% 1154311 perf-stat.i.iTLB-load-misses
1070640 -4.0% 1028337 perf-stat.i.iTLB-loads
6.075e+09 +17.4% 7.131e+09 perf-stat.i.instructions
5080 +23.5% 6276 perf-stat.i.instructions-per-iTLB-miss
1.18 +17.6% 1.39 perf-stat.i.ipc
96.10 +8.5% 104.31 perf-stat.i.metric.M/sec
130643 -7.6% 120711 perf-stat.i.minor-faults
190301 ± 3% -4.5% 181655 perf-stat.i.node-loads
323242 ± 2% -5.3% 305961 perf-stat.i.node-stores
130643 -7.6% 120711 perf-stat.i.page-faults
7.48 -20.8% 5.92 perf-stat.overall.MPKI
2.47 -0.1 2.38 perf-stat.overall.branch-miss-rate%
3.39 +0.2 3.59 ± 2% perf-stat.overall.cache-miss-rate%
0.84 -14.6% 0.72 perf-stat.overall.cpi
0.03 ± 5% -0.0 0.03 ± 3% perf-stat.overall.dTLB-load-miss-rate%
0.06 -0.0 0.04 perf-stat.overall.dTLB-store-miss-rate%
5014 +23.2% 6178 perf-stat.overall.instructions-per-iTLB-miss
1.19 +17.1% 1.39 perf-stat.overall.ipc
9.66e+08 -1.8% 9.486e+08 perf-stat.ps.branch-instructions
23845672 -5.5% 22542502 perf-stat.ps.branch-misses
44715015 -7.0% 41568298 perf-stat.ps.cache-references
40236 -1.2% 39751 perf-stat.ps.context-switches
251.46 ± 3% -5.1% 238.73 perf-stat.ps.cpu-migrations
1.643e+09 +10.1% 1.81e+09 perf-stat.ps.dTLB-loads
417889 -7.5% 386355 perf-stat.ps.dTLB-store-misses
7.519e+08 +19.3% 8.969e+08 perf-stat.ps.dTLB-stores
1192387 -4.7% 1136001 perf-stat.ps.iTLB-load-misses
1053757 -4.0% 1012023 perf-stat.ps.iTLB-loads
5.98e+09 +17.4% 7.018e+09 perf-stat.ps.instructions
128585 -7.6% 118798 perf-stat.ps.minor-faults
187390 ± 3% -4.6% 178793 perf-stat.ps.node-loads
318166 ± 2% -5.4% 301116 perf-stat.ps.node-stores
128585 -7.6% 118798 perf-stat.ps.page-faults
3.799e+11 +16.6% 4.431e+11 perf-stat.total.instructions
5.44 ± 11% -5.4 0.00 perf-profile.calltrace.cycles-pp.jent_lfsr_time.jent_measure_jitter.jent_gen_entropy.jent_read_entropy.jent_kcapi_random
0.67 ± 15% -0.2 0.45 ± 50% perf-profile.calltrace.cycles-pp.rcu_core.__do_softirq.__irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
1.18 ± 8% -0.2 0.97 ± 12% perf-profile.calltrace.cycles-pp.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.19 ± 8% -0.2 0.97 ± 12% perf-profile.calltrace.cycles-pp.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.51 ± 4% -0.2 1.32 ± 9% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.__mmput.exit_mm.do_exit
1.38 ± 4% -0.2 1.21 ± 8% perf-profile.calltrace.cycles-pp.zap_pmd_range.unmap_page_range.unmap_vmas.exit_mmap.__mmput
1.42 ± 4% -0.2 1.24 ± 8% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.__mmput.exit_mm
1.33 ± 4% -0.2 1.17 ± 8% perf-profile.calltrace.cycles-pp.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas.exit_mmap
0.33 ± 82% +0.8 1.10 ± 16% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.bind
0.33 ± 82% +0.8 1.09 ± 17% perf-profile.calltrace.cycles-pp.__x64_sys_bind.do_syscall_64.entry_SYSCALL_64_after_hwframe.bind
0.33 ± 82% +0.8 1.09 ± 17% perf-profile.calltrace.cycles-pp.__sys_bind.__x64_sys_bind.do_syscall_64.entry_SYSCALL_64_after_hwframe.bind
0.33 ± 82% +0.8 1.10 ± 16% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.bind
0.34 ± 82% +0.8 1.11 ± 16% perf-profile.calltrace.cycles-pp.bind
0.00 +0.8 0.80 ± 13% perf-profile.calltrace.cycles-pp.jent_gen_entropy.jent_entropy_collector_alloc.jent_kcapi_init.crypto_create_tfm_node.crypto_alloc_tfm_node
0.00 +0.8 0.80 ± 13% perf-profile.calltrace.cycles-pp.jent_measure_jitter.jent_gen_entropy.jent_entropy_collector_alloc.jent_kcapi_init.crypto_create_tfm_node
0.00 +0.8 0.80 ± 13% perf-profile.calltrace.cycles-pp.crypto_alloc_tfm_node.rng_bind.alg_bind.__sys_bind.__x64_sys_bind
0.00 +0.8 0.80 ± 13% perf-profile.calltrace.cycles-pp.jent_entropy_collector_alloc.jent_kcapi_init.crypto_create_tfm_node.crypto_alloc_tfm_node.rng_bind
0.00 +0.8 0.81 ± 13% perf-profile.calltrace.cycles-pp.crypto_create_tfm_node.crypto_alloc_tfm_node.rng_bind.alg_bind.__sys_bind
0.00 +0.8 0.81 ± 13% perf-profile.calltrace.cycles-pp.rng_bind.alg_bind.__sys_bind.__x64_sys_bind.do_syscall_64
0.00 +0.8 0.81 ± 13% perf-profile.calltrace.cycles-pp.jent_kcapi_init.crypto_create_tfm_node.crypto_alloc_tfm_node.rng_bind.alg_bind
0.22 ±123% +0.9 1.09 ± 16% perf-profile.calltrace.cycles-pp.alg_bind.__sys_bind.__x64_sys_bind.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.95 ± 16% +1.9 3.88 ± 11% perf-profile.calltrace.cycles-pp.jent_memaccess.jent_measure_jitter.jent_gen_entropy.jent_read_entropy.jent_kcapi_random
14.75 ± 12% +5.1 19.87 ± 10% perf-profile.calltrace.cycles-pp.read
14.66 ± 12% +5.1 19.79 ± 10% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
14.64 ± 12% +5.1 19.77 ± 10% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
14.59 ± 12% +5.1 19.73 ± 10% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
14.16 ± 12% +5.2 19.33 ± 11% perf-profile.calltrace.cycles-pp.sock_recvmsg.sock_read_iter.vfs_read.ksys_read.do_syscall_64
14.55 ± 12% +5.2 19.71 ± 10% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
14.18 ± 12% +5.2 19.35 ± 11% perf-profile.calltrace.cycles-pp.sock_read_iter.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.42 ± 12% +5.8 13.19 ± 12% perf-profile.calltrace.cycles-pp.jent_measure_jitter.jent_gen_entropy.jent_read_entropy.jent_kcapi_random._rng_recvmsg
7.42 ± 12% +5.8 13.19 ± 12% perf-profile.calltrace.cycles-pp.jent_gen_entropy.jent_read_entropy.jent_kcapi_random._rng_recvmsg.sock_recvmsg
7.42 ± 12% +5.8 13.20 ± 12% perf-profile.calltrace.cycles-pp.jent_kcapi_random._rng_recvmsg.sock_recvmsg.sock_read_iter.vfs_read
7.42 ± 12% +5.8 13.20 ± 12% perf-profile.calltrace.cycles-pp.jent_read_entropy.jent_kcapi_random._rng_recvmsg.sock_recvmsg.sock_read_iter
7.42 ± 12% +5.8 13.20 ± 12% perf-profile.calltrace.cycles-pp._rng_recvmsg.sock_recvmsg.sock_read_iter.vfs_read.ksys_read
0.00 +7.6 7.56 ± 15% perf-profile.calltrace.cycles-pp.keccakf_round.crypto_sha3_final.shash_final_unaligned.jent_hash_time.jent_condition_data
0.00 +8.4 8.36 ± 15% perf-profile.calltrace.cycles-pp.crypto_sha3_final.shash_final_unaligned.jent_hash_time.jent_condition_data.jent_measure_jitter
0.00 +8.7 8.66 ± 15% perf-profile.calltrace.cycles-pp.shash_final_unaligned.jent_hash_time.jent_condition_data.jent_measure_jitter.jent_gen_entropy
0.00 +9.2 9.23 ± 12% perf-profile.calltrace.cycles-pp.jent_hash_time.jent_condition_data.jent_measure_jitter.jent_gen_entropy.jent_read_entropy
0.00 +9.2 9.25 ± 12% perf-profile.calltrace.cycles-pp.jent_condition_data.jent_measure_jitter.jent_gen_entropy.jent_read_entropy.jent_kcapi_random
5.63 ± 12% -5.6 0.00 perf-profile.children.cycles-pp.jent_lfsr_time
6.43 ± 5% -0.7 5.75 ± 7% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
5.85 ± 4% -0.6 5.23 ± 7% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
4.09 ± 6% -0.4 3.67 ± 6% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
4.07 ± 6% -0.4 3.65 ± 6% perf-profile.children.cycles-pp.hrtimer_interrupt
3.36 ± 5% -0.4 3.01 ± 6% perf-profile.children.cycles-pp.__hrtimer_run_queues
2.48 ± 5% -0.2 2.23 ± 5% perf-profile.children.cycles-pp.tick_sched_timer
1.68 ± 7% -0.2 1.44 ± 8% perf-profile.children.cycles-pp.zap_pte_range
1.18 ± 8% -0.2 0.97 ± 12% perf-profile.children.cycles-pp.do_mprotect_pkey
1.19 ± 8% -0.2 0.97 ± 12% perf-profile.children.cycles-pp.__x64_sys_mprotect
1.90 ± 4% -0.2 1.69 ± 6% perf-profile.children.cycles-pp.scheduler_tick
0.53 ± 19% -0.1 0.39 ± 11% perf-profile.children.cycles-pp.ecb_encrypt
0.51 ± 13% -0.1 0.37 ± 13% perf-profile.children.cycles-pp.clockevents_program_event
0.58 ± 14% -0.1 0.45 ± 13% perf-profile.children.cycles-pp.mas_walk
0.36 ± 23% -0.1 0.25 ± 26% perf-profile.children.cycles-pp.get_unmapped_area
0.61 ± 9% -0.1 0.50 ± 5% perf-profile.children.cycles-pp.page_remove_rmap
0.40 ± 11% -0.1 0.30 ± 12% perf-profile.children.cycles-pp.lapic_next_deadline
0.36 ± 22% -0.1 0.26 ± 12% perf-profile.children.cycles-pp.__mod_lruvec_page_state
0.41 ± 8% -0.1 0.31 ± 6% perf-profile.children.cycles-pp.ktime_get
0.54 ± 10% -0.1 0.44 ± 12% perf-profile.children.cycles-pp.free_pgtables
0.27 ± 19% -0.1 0.19 ± 27% perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
0.43 ± 5% -0.1 0.35 ± 13% perf-profile.children.cycles-pp.flush_tlb_func
0.24 ± 28% -0.1 0.16 ± 18% perf-profile.children.cycles-pp.xts_encrypt
0.25 ± 9% -0.1 0.18 ± 12% perf-profile.children.cycles-pp.read_tsc
0.10 ± 25% -0.1 0.04 ± 85% perf-profile.children.cycles-pp.af_alg_make_sg
0.16 ± 14% -0.0 0.12 ± 20% perf-profile.children.cycles-pp.mas_wmb_replace
0.11 ± 11% -0.0 0.06 ± 12% perf-profile.children.cycles-pp.mas_descend_adopt
0.14 ± 24% -0.0 0.09 ± 12% perf-profile.children.cycles-pp._raw_spin_trylock
0.07 ± 5% -0.0 0.03 ± 81% perf-profile.children.cycles-pp.update_cfs_group
0.09 ± 18% -0.0 0.05 ± 54% perf-profile.children.cycles-pp.__wake_up_common_lock
0.14 ± 7% -0.0 0.10 ± 14% perf-profile.children.cycles-pp.dput
0.13 ± 18% -0.0 0.09 perf-profile.children.cycles-pp.free_unref_page
0.09 ± 16% -0.0 0.06 ± 14% perf-profile.children.cycles-pp.memcg_slab_post_alloc_hook
0.23 ± 14% +0.1 0.30 ± 7% perf-profile.children.cycles-pp.jent_loop_shuffle
0.29 ± 19% +0.1 0.42 ± 14% perf-profile.children.cycles-pp.memset_orig
0.00 +0.1 0.15 ± 30% perf-profile.children.cycles-pp.crypto_sha3_init
0.18 ± 17% +0.2 0.41 ± 14% perf-profile.children.cycles-pp.memcpy_orig
0.00 +0.4 0.40 ± 12% perf-profile.children.cycles-pp.shash_update_unaligned
0.00 +0.4 0.42 ± 7% perf-profile.children.cycles-pp.crypto_sha3_update
0.00 +0.4 0.42 ± 12% perf-profile.children.cycles-pp.shash_finup_unaligned
0.23 ± 23% +0.6 0.81 ± 13% perf-profile.children.cycles-pp.rng_bind
0.24 ± 27% +0.6 0.82 ± 13% perf-profile.children.cycles-pp.crypto_create_tfm_node
0.23 ± 23% +0.6 0.80 ± 13% perf-profile.children.cycles-pp.jent_entropy_collector_alloc
0.23 ± 23% +0.6 0.81 ± 13% perf-profile.children.cycles-pp.jent_kcapi_init
0.43 ± 19% +0.6 1.03 ± 16% perf-profile.children.cycles-pp.crypto_alloc_tfm_node
0.47 ± 19% +0.6 1.09 ± 17% perf-profile.children.cycles-pp.__x64_sys_bind
0.47 ± 19% +0.6 1.09 ± 17% perf-profile.children.cycles-pp.__sys_bind
0.47 ± 19% +0.6 1.09 ± 16% perf-profile.children.cycles-pp.alg_bind
0.49 ± 17% +0.6 1.11 ± 16% perf-profile.children.cycles-pp.bind
2.00 ± 15% +2.1 4.12 ± 10% perf-profile.children.cycles-pp.jent_memaccess
30.16 ± 5% +4.3 34.46 ± 8% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
30.06 ± 5% +4.3 34.37 ± 7% perf-profile.children.cycles-pp.do_syscall_64
14.78 ± 12% +5.1 19.89 ± 10% perf-profile.children.cycles-pp.read
14.17 ± 12% +5.2 19.33 ± 11% perf-profile.children.cycles-pp.sock_recvmsg
14.18 ± 12% +5.2 19.35 ± 11% perf-profile.children.cycles-pp.sock_read_iter
14.71 ± 12% +5.2 19.89 ± 10% perf-profile.children.cycles-pp.ksys_read
14.67 ± 12% +5.2 19.86 ± 10% perf-profile.children.cycles-pp.vfs_read
7.42 ± 12% +5.8 13.20 ± 12% perf-profile.children.cycles-pp.jent_kcapi_random
7.42 ± 12% +5.8 13.20 ± 12% perf-profile.children.cycles-pp.jent_read_entropy
7.42 ± 12% +5.8 13.20 ± 12% perf-profile.children.cycles-pp._rng_recvmsg
7.65 ± 12% +6.3 13.99 ± 12% perf-profile.children.cycles-pp.jent_measure_jitter
7.65 ± 12% +6.4 14.00 ± 12% perf-profile.children.cycles-pp.jent_gen_entropy
0.00 +8.0 8.04 ± 13% perf-profile.children.cycles-pp.keccakf_round
0.00 +8.6 8.62 ± 13% perf-profile.children.cycles-pp.crypto_sha3_final
0.00 +8.8 8.83 ± 13% perf-profile.children.cycles-pp.shash_final_unaligned
0.00 +9.8 9.80 ± 12% perf-profile.children.cycles-pp.jent_hash_time
0.00 +9.8 9.81 ± 12% perf-profile.children.cycles-pp.jent_condition_data
5.45 ± 12% -5.4 0.00 perf-profile.self.cycles-pp.jent_lfsr_time
0.73 ± 16% -0.2 0.57 ± 14% perf-profile.self.cycles-pp.mtree_range_walk
0.40 ± 11% -0.1 0.30 ± 12% perf-profile.self.cycles-pp.lapic_next_deadline
0.45 ± 8% -0.1 0.38 ± 6% perf-profile.self.cycles-pp.page_remove_rmap
0.30 ± 10% -0.1 0.23 ± 21% perf-profile.self.cycles-pp.Q
0.25 ± 10% -0.1 0.18 ± 11% perf-profile.self.cycles-pp.read_tsc
0.28 ± 11% -0.1 0.21 ± 19% perf-profile.self.cycles-pp.native_sched_clock
0.11 ± 22% -0.1 0.05 ± 53% perf-profile.self.cycles-pp.cpuidle_idle_call
0.10 ± 14% -0.0 0.06 ± 12% perf-profile.self.cycles-pp.mas_descend_adopt
0.07 ± 9% -0.0 0.03 ± 81% perf-profile.self.cycles-pp.update_cfs_group
0.07 ± 13% -0.0 0.04 ± 51% perf-profile.self.cycles-pp.mas_walk
0.08 ± 14% -0.0 0.06 ± 14% perf-profile.self.cycles-pp.memcg_slab_post_alloc_hook
0.05 ± 52% +0.0 0.08 ± 7% perf-profile.self.cycles-pp.do_dentry_open
0.02 ±122% +0.0 0.06 ± 15% perf-profile.self.cycles-pp.__serpent_dec_blk16
0.22 ± 13% +0.1 0.30 ± 8% perf-profile.self.cycles-pp.jent_loop_shuffle
0.00 +0.1 0.12 ± 38% perf-profile.self.cycles-pp.shash_final_unaligned
0.29 ± 18% +0.1 0.41 ± 15% perf-profile.self.cycles-pp.memset_orig
0.00 +0.1 0.14 ± 30% perf-profile.self.cycles-pp.crypto_sha3_init
0.00 +0.2 0.18 ± 26% perf-profile.self.cycles-pp.crypto_sha3_update
0.00 +0.2 0.19 ± 13% perf-profile.self.cycles-pp.shash_update_unaligned
0.17 ± 15% +0.2 0.38 ± 13% perf-profile.self.cycles-pp.memcpy_orig
0.00 +0.6 0.64 ± 13% perf-profile.self.cycles-pp.crypto_sha3_final
1.92 ± 14% +1.9 3.81 ± 10% perf-profile.self.cycles-pp.jent_memaccess
0.00 +7.9 7.93 ± 13% perf-profile.self.cycles-pp.keccakf_round
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
On Friday, 9 June 2023, 10:02:32 CEST, kernel test robot wrote:
Hi,
> kernel test robot noticed a -8.0% regression of stress-ng.af-alg.ops_per_sec
> on:
>
> commit: bb897c55042e9330bcf88b4b13cbdd6f9fabdd5e ("crypto: jitter - replace
> LFSR with SHA3-256")
> [...]
Thank you for the report, but this change in performance is expected to ensure
proper entropy collection. I assume that the jitterentropy_rng is queried via
AF_ALG. Considering that the amount of data to be generated (and thus the
effect of the performance degradation) is small, there should be no noticeable
impact on users.
Ciao
Stephan
Hi Stephan,
On Fri, Jun 09, 2023 at 01:57:27PM +0200, Stephan Mueller wrote:
> On Friday, 9 June 2023, 10:02:32 CEST, kernel test robot wrote:
> > [...]
>
> Thank you for the report, but this change in performance is expected to ensure
> proper entropy collection. I assume that the jitterentropy_rng is queried via
> AF_ALG. Considering that the amount of data to be generated (and thus the
> effect of the performance degradation) is small, there should be no noticeable
> impact on users.
Got it. Thanks a lot for the information!