From: johnnyaiai <[email protected]>
virt_spin_lock() is preferred over the native qspinlock when
the vCPU is preempted, but it causes a large regression while
the vCPU is not preempted. Provide an early parameter
'novirtspin' so users can opt out.
will-it-scale/lock2_threads -s 10 -t 64

  baseline    after patch
  559938      2166135
Signed-off-by: johnnyaiai <[email protected]>
---
 arch/x86/kernel/paravirt.c | 7 +++++++
 1 file changed, 7 insertions(+)
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 59d3d2763..529cf23fe 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -107,6 +107,13 @@ static unsigned paravirt_patch_jmp(void *insn_buff, const void *target,
DEFINE_STATIC_KEY_TRUE(virt_spin_lock_key);
+static __init int parse_novirtspin(char *arg)
+{
+ static_branch_disable(&virt_spin_lock_key);
+ return 0;
+}
+early_param("novirtspin", parse_novirtspin);
+
void __init native_pv_lock_init(void)
{
if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
--
2.27.0
On 03.11.22 14:58, johnnyaiai wrote:
> Thanks for the reply! I think the nopvspin parameter controls
> pvspinlock vs. native spinlock. A VM guest always runs on
> virt_spin_lock(), whether nopvspin is set or not, when EXITS_HALT
> is not supported by the hypervisor. So provide the missing
> parameter 'novirtspin'.
Your patch is doing:
static_branch_disable(&virt_spin_lock_key);
in case your parameter has been specified.
When running as a Xen guest the related code is doing:
static_branch_disable(&virt_spin_lock_key);
Only when running as a KVM guest is there a difference, and that could be
addressed by modifying kvm_spinlock_init().
Having two parameters with the same or very similar semantics is not a good
idea IMO.
Juergen