2019-10-22 09:10:42

by Zhenzhong Duan

Subject: [PATCH v7 0/5] Add a unified parameter "nopvspin"

All the patches have Reviewed-by tags now; I think v7 could be the
final version.

There are cases where people want to disable the spinlock optimization
for debugging/testing purposes. Xen and Hyper-V already have the
parameters "xen_nopvspin" and "hv_nopvspin" to support that, but KVM
doesn't.

Patch 3 adds that feature to KVM guests with a new "nopvspin" parameter.

For compatibility reasons, the original parameters "xen_nopvspin" and
"hv_nopvspin" are retained and marked obsolete.

v7:
PATCH3: update comment and use goto, add Reviewed-by [Vitaly Kuznetsov]

v6:
PATCH1: add Reviewed-by [Vitaly Kuznetsov]
PATCH2: change 'pv' to 'PV', add Reviewed-by [Vitaly Kuznetsov]
PATCH3: refactor 'if' branch in kvm_spinlock_init() [Vitaly Kuznetsov]

v5:
PATCH1: new patch to revert a currently unnecessary commit;
the code is a bit simpler after that change. [Boris Ostrovsky]
PATCH3: fold 'if' statement, add comments on virt_spin_lock_key,
reorder with PATCH2 to better reflect the dependency
PATCH4: fold 'if' statement, add Reviewed-by [Boris Ostrovsky]
PATCH5: add Reviewed-by [Michael Kelley]

v4:
PATCH1: use variable name nopvspin instead of pvspin and
define it as __initdata, change print message,
update patch description [Sean Christopherson]
PATCH2: remove Suggested-by, use "kvm-guest:" prefix [Sean Christopherson]
PATCH3: make variables nopvspin and xen_pvspin coexist
remove Reviewed-by due to code change [Sean Christopherson]
PATCH4: make variables nopvspin and hv_pvspin coexist [Sean Christopherson]

v3:
PATCH2: Fix indentation

v2:
PATCH1: split the print code change into a separate PATCH2,
update patch description [Vitaly Kuznetsov]
PATCH2: new patch with print code change [Vitaly Kuznetsov]
PATCH3: add Reviewed-by [Juergen Gross]

Zhenzhong Duan (5):
Revert "KVM: X86: Fix setup the virt_spin_lock_key before static key
get initialized"
x86/kvm: Change print code to use pr_*() format
x86/kvm: Add "nopvspin" parameter to disable PV spinlocks
xen: Mark "xen_nopvspin" parameter obsolete
x86/hyperv: Mark "hv_nopvspin" parameter obsolete

Documentation/admin-guide/kernel-parameters.txt | 14 ++++-
arch/x86/hyperv/hv_spinlock.c | 4 ++
arch/x86/include/asm/qspinlock.h | 1 +
arch/x86/kernel/kvm.c | 74 +++++++++++++++----------
arch/x86/xen/spinlock.c | 4 +-
kernel/locking/qspinlock.c | 7 +++
6 files changed, 71 insertions(+), 33 deletions(-)

--
1.8.3.1


2019-10-22 09:11:30

by Zhenzhong Duan

Subject: [PATCH v7 1/5] Revert "KVM: X86: Fix setup the virt_spin_lock_key before static key get initialized"

This reverts commit 34226b6b70980a8f81fff3c09a2c889f77edeeff.

Commit 8990cac6e5ea ("x86/jump_label: Initialize static branching
early") added a jump_label_init() call in setup_arch() so that static
keys are initialized early, which lets us use the original, simpler
code again.

A similar change for Xen was made in commit 090d54bcbc54 ("Revert
"x86/paravirt: Set up the virt_spin_lock_key after static keys get
initialized"").

Signed-off-by: Zhenzhong Duan <[email protected]>
Reviewed-by: Vitaly Kuznetsov <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Radim Krcmar <[email protected]>
Cc: Sean Christopherson <[email protected]>
Cc: Vitaly Kuznetsov <[email protected]>
Cc: Wanpeng Li <[email protected]>
Cc: Jim Mattson <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
---
arch/x86/kernel/kvm.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index e820568..3bc6a266 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -527,13 +527,6 @@ static void kvm_smp_send_call_func_ipi(const struct cpumask *mask)
}
}

-static void __init kvm_smp_prepare_cpus(unsigned int max_cpus)
-{
- native_smp_prepare_cpus(max_cpus);
- if (kvm_para_has_hint(KVM_HINTS_REALTIME))
- static_branch_disable(&virt_spin_lock_key);
-}
-
static void __init kvm_smp_prepare_boot_cpu(void)
{
/*
@@ -633,7 +626,6 @@ static void __init kvm_guest_init(void)
apic_set_eoi_write(kvm_guest_apic_eoi_write);

#ifdef CONFIG_SMP
- smp_ops.smp_prepare_cpus = kvm_smp_prepare_cpus;
smp_ops.smp_prepare_boot_cpu = kvm_smp_prepare_boot_cpu;
if (kvm_para_has_feature(KVM_FEATURE_PV_SCHED_YIELD) &&
!kvm_para_has_hint(KVM_HINTS_REALTIME) &&
@@ -835,8 +827,10 @@ void __init kvm_spinlock_init(void)
if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
return;

- if (kvm_para_has_hint(KVM_HINTS_REALTIME))
+ if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
+ static_branch_disable(&virt_spin_lock_key);
return;
+ }

/* Don't use the pvqspinlock code if there is only 1 vCPU. */
if (num_possible_cpus() == 1)
--
1.8.3.1

2019-10-22 09:12:58

by Zhenzhong Duan

Subject: [PATCH v7 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks

There are cases where a guest tries to switch spinlocks to bare metal
behavior (e.g. by setting "xen_nopvspin" on XEN platform and
"hv_nopvspin" on HYPER_V).

That feature is missing on KVM, so add a new parameter "nopvspin" to
disable PV spinlocks for KVM guests.

The new 'nopvspin' parameter will also replace Xen and Hyper-V specific
parameters in future patches.

Define the variable nopvspin as global because it will be used in the
future patches mentioned above.
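
As an illustration of how a hypervisor-specific init path can consume the
shared flag (sketch only; the real Xen and Hyper-V wiring is in patches 4
and 5):

	/* e.g. in xen_init_spinlocks() -- illustrative, not part of this patch */
	if (nopvspin)
		xen_pvspin = false;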

Signed-off-by: Zhenzhong Duan <[email protected]>
Reviewed-by: Vitaly Kuznetsov <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Radim Krcmar <[email protected]>
Cc: Sean Christopherson <[email protected]>
Cc: Vitaly Kuznetsov <[email protected]>
Cc: Wanpeng Li <[email protected]>
Cc: Jim Mattson <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
Documentation/admin-guide/kernel-parameters.txt | 5 ++++
arch/x86/include/asm/qspinlock.h | 1 +
arch/x86/kernel/kvm.c | 34 ++++++++++++++++++++-----
kernel/locking/qspinlock.c | 7 +++++
4 files changed, 40 insertions(+), 7 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index a84a83f..bd49ed2 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5334,6 +5334,11 @@
as generic guest with no PV drivers. Currently support
XEN HVM, KVM, HYPER_V and VMWARE guest.

+ nopvspin [X86,KVM]
+ Disables the qspinlock slow path using PV optimizations
+ which allow the hypervisor to 'idle' the guest on lock
+ contention.
+
xirc2ps_cs= [NET,PCMCIA]
Format:
<irq>,<irq_mask>,<io>,<full_duplex>,<do_sound>,<lockup_hack>[,<irq2>[,<irq3>[,<irq4>]]]
diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 444d6fd..d86ab94 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -32,6 +32,7 @@ static __always_inline u32 queued_fetch_set_pending_acquire(struct qspinlock *lo
extern void __pv_init_lock_hash(void);
extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
extern void __raw_callee_save___pv_queued_spin_unlock(struct qspinlock *lock);
+extern bool nopvspin;

#define queued_spin_unlock queued_spin_unlock
/**
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 249f14a..3945aa5 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -825,18 +825,36 @@ __visible bool __kvm_vcpu_is_preempted(long cpu)
*/
void __init kvm_spinlock_init(void)
{
- /* Does host kernel support KVM_FEATURE_PV_UNHALT? */
- if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
+ /*
+ * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
+ * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
+ * preferred over native qspinlock when vCPU is preempted.
+ */
+ if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
+ pr_info("PV spinlocks disabled, no host support.\n");
return;
+ }

+ /*
+ * Disable PV qspinlock and use native qspinlock when dedicated pCPUs
+ * are available.
+ */
if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
- static_branch_disable(&virt_spin_lock_key);
- return;
+ pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints.\n");
+ goto out;
}

- /* Don't use the pvqspinlock code if there is only 1 vCPU. */
- if (num_possible_cpus() == 1)
- return;
+ if (num_possible_cpus() == 1) {
+ pr_info("PV spinlocks disabled, single CPU.\n");
+ goto out;
+ }
+
+ if (nopvspin) {
+ pr_info("PV spinlocks disabled, forced by \"nopvspin\" parameter.\n");
+ goto out;
+ }
+
+ pr_info("PV spinlocks enabled\n");

__pv_init_lock_hash();
pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
@@ -849,6 +867,8 @@ void __init kvm_spinlock_init(void)
pv_ops.lock.vcpu_is_preempted =
PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);
}
+out:
+ static_branch_disable(&virt_spin_lock_key);
}

#endif /* CONFIG_PARAVIRT_SPINLOCKS */
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 2473f10..75193d6 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -580,4 +580,11 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
#include "qspinlock_paravirt.h"
#include "qspinlock.c"

+bool nopvspin __initdata;
+static __init int parse_nopvspin(char *arg)
+{
+ nopvspin = true;
+ return 0;
+}
+early_param("nopvspin", parse_nopvspin);
#endif
--
1.8.3.1

2019-10-22 09:13:20

by Zhenzhong Duan

Subject: [PATCH v7 5/5] x86/hyperv: Mark "hv_nopvspin" parameter obsolete

Map "hv_nopvspin" to "nopvspin".

Signed-off-by: Zhenzhong Duan <[email protected]>
Reviewed-by: Michael Kelley <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: "K. Y. Srinivasan" <[email protected]>
Cc: Haiyang Zhang <[email protected]>
Cc: Stephen Hemminger <[email protected]>
Cc: Sasha Levin <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
---
Documentation/admin-guide/kernel-parameters.txt | 6 +++++-
arch/x86/hyperv/hv_spinlock.c | 4 ++++
2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 85059dd..78648bb 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1436,6 +1436,10 @@
hv_nopvspin [X86,HYPER_V] Disables the paravirt spinlock optimizations
which allow the hypervisor to 'idle' the
guest on lock contention.
+ This parameter is obsoleted by "nopvspin"
+ parameter, which has equivalent effect for
+ HYPER_V platform.
+

keep_bootcon [KNL]
Do not unregister boot console at start. This is only
@@ -5335,7 +5339,7 @@
as generic guest with no PV drivers. Currently support
XEN HVM, KVM, HYPER_V and VMWARE guest.

- nopvspin [X86,XEN,KVM]
+ nopvspin [X86,XEN,KVM,HYPER_V]
Disables the qspinlock slow path using PV optimizations
which allow the hypervisor to 'idle' the guest on lock
contention.
diff --git a/arch/x86/hyperv/hv_spinlock.c b/arch/x86/hyperv/hv_spinlock.c
index 07f21a0..47c7d6c 100644
--- a/arch/x86/hyperv/hv_spinlock.c
+++ b/arch/x86/hyperv/hv_spinlock.c
@@ -64,6 +64,9 @@ __visible bool hv_vcpu_is_preempted(int vcpu)

void __init hv_init_spinlocks(void)
{
+ if (nopvspin)
+ hv_pvspin = false;
+
if (!hv_pvspin || !apic ||
!(ms_hyperv.hints & HV_X64_CLUSTER_IPI_RECOMMENDED) ||
!(ms_hyperv.features & HV_X64_MSR_GUEST_IDLE_AVAILABLE)) {
@@ -82,6 +85,7 @@ void __init hv_init_spinlocks(void)

static __init int hv_parse_nopvspin(char *arg)
{
+ pr_notice("\"hv_nopvspin\" is deprecated, please use \"nopvspin\" instead\n");
hv_pvspin = false;
return 0;
}
--
1.8.3.1

2019-10-22 13:24:24

by Vitaly Kuznetsov

Subject: Re: [PATCH v7 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks

Zhenzhong Duan <[email protected]> writes:

> There are cases where a guest tries to switch spinlocks to bare metal
> behavior (e.g. by setting "xen_nopvspin" on XEN platform and
> "hv_nopvspin" on HYPER_V).
>
> That feature is missing on KVM, so add a new parameter "nopvspin" to
> disable PV spinlocks for KVM guests.
>
> The new 'nopvspin' parameter will also replace Xen and Hyper-V specific
> parameters in future patches.
>
> Define the variable nopvspin as global because it will be used in the
> future patches mentioned above.
>
> Signed-off-by: Zhenzhong Duan <[email protected]>
> Reviewed-by: Vitaly Kuznetsov <[email protected]>
> Cc: Jonathan Corbet <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Borislav Petkov <[email protected]>
> Cc: "H. Peter Anvin" <[email protected]>
> Cc: Paolo Bonzini <[email protected]>
> Cc: Radim Krcmar <[email protected]>
> Cc: Sean Christopherson <[email protected]>
> Cc: Vitaly Kuznetsov <[email protected]>
> Cc: Wanpeng Li <[email protected]>
> Cc: Jim Mattson <[email protected]>
> Cc: Joerg Roedel <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Will Deacon <[email protected]>
> ---
> Documentation/admin-guide/kernel-parameters.txt | 5 ++++
> arch/x86/include/asm/qspinlock.h | 1 +
> arch/x86/kernel/kvm.c | 34 ++++++++++++++++++++-----
> kernel/locking/qspinlock.c | 7 +++++
> 4 files changed, 40 insertions(+), 7 deletions(-)
>
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index a84a83f..bd49ed2 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -5334,6 +5334,11 @@
> as generic guest with no PV drivers. Currently support
> XEN HVM, KVM, HYPER_V and VMWARE guest.
>
> + nopvspin [X86,KVM]
> + Disables the qspinlock slow path using PV optimizations
> + which allow the hypervisor to 'idle' the guest on lock
> + contention.
> +
> xirc2ps_cs= [NET,PCMCIA]
> Format:
> <irq>,<irq_mask>,<io>,<full_duplex>,<do_sound>,<lockup_hack>[,<irq2>[,<irq3>[,<irq4>]]]
> diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
> index 444d6fd..d86ab94 100644
> --- a/arch/x86/include/asm/qspinlock.h
> +++ b/arch/x86/include/asm/qspinlock.h
> @@ -32,6 +32,7 @@ static __always_inline u32 queued_fetch_set_pending_acquire(struct qspinlock *lo
> extern void __pv_init_lock_hash(void);
> extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
> extern void __raw_callee_save___pv_queued_spin_unlock(struct qspinlock *lock);
> +extern bool nopvspin;
>
> #define queued_spin_unlock queued_spin_unlock
> /**
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 249f14a..3945aa5 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -825,18 +825,36 @@ __visible bool __kvm_vcpu_is_preempted(long cpu)
> */
> void __init kvm_spinlock_init(void)
> {
> - /* Does host kernel support KVM_FEATURE_PV_UNHALT? */
> - if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
> + /*
> + * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
> + * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
> + * preferred over native qspinlock when vCPU is preempted.
> + */
> + if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
> + pr_info("PV spinlocks disabled, no host support.\n");
> return;
> + }
>
> + /*
> + * Disable PV qspinlock and use native qspinlock when dedicated pCPUs
> + * are available.
> + */
> if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
> - static_branch_disable(&virt_spin_lock_key);
> - return;
> + pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints.\n");
> + goto out;
> }
>
> - /* Don't use the pvqspinlock code if there is only 1 vCPU. */
> - if (num_possible_cpus() == 1)
> - return;
> + if (num_possible_cpus() == 1) {
> + pr_info("PV spinlocks disabled, single CPU.\n");
> + goto out;
> + }
> +
> + if (nopvspin) {
> + pr_info("PV spinlocks disabled, forced by \"nopvspin\" parameter.\n");
> + goto out;
> + }
> +
> + pr_info("PV spinlocks enabled\n");
>
> __pv_init_lock_hash();
> pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
> @@ -849,6 +867,8 @@ void __init kvm_spinlock_init(void)
> pv_ops.lock.vcpu_is_preempted =
> PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);
> }
> +out:
> + static_branch_disable(&virt_spin_lock_key);

You probably need to add 'return' before 'out:' as it seems you're
disabling virt_spin_lock_key in all cases now.

> }
>
> #endif /* CONFIG_PARAVIRT_SPINLOCKS */
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index 2473f10..75193d6 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -580,4 +580,11 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> #include "qspinlock_paravirt.h"
> #include "qspinlock.c"
>
> +bool nopvspin __initdata;
> +static __init int parse_nopvspin(char *arg)
> +{
> + nopvspin = true;
> + return 0;
> +}
> +early_param("nopvspin", parse_nopvspin);
> #endif

--
Vitaly

2019-10-22 13:43:15

by Zhenzhong Duan

Subject: Re: [PATCH v7 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks

Hi Vitaly,

On 2019/10/22 19:36, Vitaly Kuznetsov wrote:

> Zhenzhong Duan<[email protected]> writes:
>
...snip

>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>> index 249f14a..3945aa5 100644
>> --- a/arch/x86/kernel/kvm.c
>> +++ b/arch/x86/kernel/kvm.c
>> @@ -825,18 +825,36 @@ __visible bool __kvm_vcpu_is_preempted(long cpu)
>> */
>> void __init kvm_spinlock_init(void)
>> {
>> - /* Does host kernel support KVM_FEATURE_PV_UNHALT? */
>> - if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
>> + /*
>> + * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
>> + * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
>> + * preferred over native qspinlock when vCPU is preempted.
>> + */
>> + if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
>> + pr_info("PV spinlocks disabled, no host support.\n");
>> return;
>> + }
>>
>> + /*
>> + * Disable PV qspinlock and use native qspinlock when dedicated pCPUs
>> + * are available.
>> + */
>> if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
>> - static_branch_disable(&virt_spin_lock_key);
>> - return;
>> + pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints.\n");
>> + goto out;
>> }
>>
>> - /* Don't use the pvqspinlock code if there is only 1 vCPU. */
>> - if (num_possible_cpus() == 1)
>> - return;
>> + if (num_possible_cpus() == 1) {
>> + pr_info("PV spinlocks disabled, single CPU.\n");
>> + goto out;
>> + }
>> +
>> + if (nopvspin) {
>> + pr_info("PV spinlocks disabled, forced by \"nopvspin\" parameter.\n");
>> + goto out;
>> + }
>> +
>> + pr_info("PV spinlocks enabled\n");
>>
>> __pv_init_lock_hash();
>> pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
>> @@ -849,6 +867,8 @@ void __init kvm_spinlock_init(void)
>> pv_ops.lock.vcpu_is_preempted =
>> PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);
>> }
>> +out:
>> + static_branch_disable(&virt_spin_lock_key);
> You probably need to add 'return' before 'out:' as it seems you're
> disabling virt_spin_lock_key in all cases now.

virt_spin_lock_key is kept enabled in the !kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)
case, which is the only case where the virt_spin_lock() optimization is used.

When PV qspinlock is enabled, virt_spin_lock() isn't called in
__pv_queued_spin_lock_slowpath(), in which case we don't care about
virt_spin_lock_key's value.

So adding 'return' or not are both OK; I chose to save a line.
Let me know if you prefer to add a 'return' and I'll change it.

btw: __pv_queued_spin_lock_slowpath() is an alias of queued_spin_lock_slowpath()
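
For reference, the gating in question looks roughly like this (simplified
from arch/x86/include/asm/qspinlock.h, shown only for illustration):

static inline bool virt_spin_lock(struct qspinlock *lock)
{
	/* Key disabled: PV qspinlock or native qspinlock is used instead */
	if (!static_branch_likely(&virt_spin_lock_key))
		return false;

	/*
	 * No KVM_FEATURE_PV_UNHALT: fall back to a test-and-set spinlock,
	 * since fair queued locks suffer from lock-holder preemption.
	 */
	do {
		while (atomic_read(&lock->val) != 0)
			cpu_relax();
	} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);

	return true;
}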

Thanks
Zhenzhong

2019-10-22 13:55:38

by Vitaly Kuznetsov

Subject: Re: [PATCH v7 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks

Zhenzhong Duan <[email protected]> writes:

> Hi Vitaly,
>
> On 2019/10/22 19:36, Vitaly Kuznetsov wrote:
>
>> Zhenzhong Duan<[email protected]> writes:
>>
> ...snip
>
>>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>>> index 249f14a..3945aa5 100644
>>> --- a/arch/x86/kernel/kvm.c
>>> +++ b/arch/x86/kernel/kvm.c
>>> @@ -825,18 +825,36 @@ __visible bool __kvm_vcpu_is_preempted(long cpu)
>>> */
>>> void __init kvm_spinlock_init(void)
>>> {
>>> - /* Does host kernel support KVM_FEATURE_PV_UNHALT? */
>>> - if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
>>> + /*
>>> + * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
>>> + * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
>>> + * preferred over native qspinlock when vCPU is preempted.
>>> + */
>>> + if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
>>> + pr_info("PV spinlocks disabled, no host support.\n");
>>> return;
>>> + }
>>>
>>> + /*
>>> + * Disable PV qspinlock and use native qspinlock when dedicated pCPUs
>>> + * are available.
>>> + */
>>> if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
>>> - static_branch_disable(&virt_spin_lock_key);
>>> - return;
>>> + pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints.\n");
>>> + goto out;
>>> }
>>>
>>> - /* Don't use the pvqspinlock code if there is only 1 vCPU. */
>>> - if (num_possible_cpus() == 1)
>>> - return;
>>> + if (num_possible_cpus() == 1) {
>>> + pr_info("PV spinlocks disabled, single CPU.\n");
>>> + goto out;
>>> + }
>>> +
>>> + if (nopvspin) {
>>> + pr_info("PV spinlocks disabled, forced by \"nopvspin\" parameter.\n");
>>> + goto out;
>>> + }
>>> +
>>> + pr_info("PV spinlocks enabled\n");
>>>
>>> __pv_init_lock_hash();
>>> pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
>>> @@ -849,6 +867,8 @@ void __init kvm_spinlock_init(void)
>>> pv_ops.lock.vcpu_is_preempted =
>>> PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);
>>> }
>>> +out:
>>> + static_branch_disable(&virt_spin_lock_key);
>> You probably need to add 'return' before 'out:' as it seems you're
>> disabling virt_spin_lock_key in all cases now.
>
> virt_spin_lock_key is kept enabled in the !kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)
> case, which is the only case where the virt_spin_lock() optimization is used.
>
> When PV qspinlock is enabled, virt_spin_lock() isn't called in
> __pv_queued_spin_lock_slowpath(), in which case we don't care about
> virt_spin_lock_key's value.
>

True, my bad: I thought we still needed it enabled for something.

> So adding 'return' or not are both OK; I chose to save a line.
> Let me know if you prefer to add a 'return' and I'll change it.

No, please ignore.

>
> btw: __pv_queued_spin_lock_slowpath() is an alias of queued_spin_lock_slowpath()
>
> Thanks
> Zhenzhong
>

--
Vitaly

2019-10-22 23:58:30

by Sean Christopherson

Subject: Re: [PATCH v7 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks

On Tue, Oct 22, 2019 at 08:46:46PM +0800, Zhenzhong Duan wrote:
> Hi Vitaly,
>
> On 2019/10/22 19:36, Vitaly Kuznetsov wrote:
>
> >Zhenzhong Duan<[email protected]> writes:
> >
> ...snip
>
> >>diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> >>index 249f14a..3945aa5 100644
> >>--- a/arch/x86/kernel/kvm.c
> >>+++ b/arch/x86/kernel/kvm.c
> >>@@ -825,18 +825,36 @@ __visible bool __kvm_vcpu_is_preempted(long cpu)
> >> */
> >> void __init kvm_spinlock_init(void)
> >> {
> >>- /* Does host kernel support KVM_FEATURE_PV_UNHALT? */
> >>- if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
> >>+ /*
> >>+ * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
> >>+ * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
> >>+ * preferred over native qspinlock when vCPU is preempted.
> >>+ */
> >>+ if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
> >>+ pr_info("PV spinlocks disabled, no host support.\n");
> >> return;
> >>+ }
> >>+ /*
> >>+ * Disable PV qspinlock and use native qspinlock when dedicated pCPUs
> >>+ * are available.
> >>+ */
> >> if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
> >>- static_branch_disable(&virt_spin_lock_key);
> >>- return;
> >>+ pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints.\n");
> >>+ goto out;
> >> }
> >>- /* Don't use the pvqspinlock code if there is only 1 vCPU. */
> >>- if (num_possible_cpus() == 1)
> >>- return;
> >>+ if (num_possible_cpus() == 1) {
> >>+ pr_info("PV spinlocks disabled, single CPU.\n");
> >>+ goto out;
> >>+ }
> >>+
> >>+ if (nopvspin) {
> >>+ pr_info("PV spinlocks disabled, forced by \"nopvspin\" parameter.\n");
> >>+ goto out;
> >>+ }
> >>+
> >>+ pr_info("PV spinlocks enabled\n");
> >> __pv_init_lock_hash();
> >> pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
> >>@@ -849,6 +867,8 @@ void __init kvm_spinlock_init(void)
> >> pv_ops.lock.vcpu_is_preempted =
> >> PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);
> >> }
> >>+out:
> >>+ static_branch_disable(&virt_spin_lock_key);
> >You probably need to add 'return' before 'out:' as it seems you're
> >disabling virt_spin_lock_key in all cases now.
>
> virt_spin_lock_key is kept enabled in the !kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)
> case, which is the only case where the virt_spin_lock() optimization is used.
>
> When PV qspinlock is enabled, virt_spin_lock() isn't called in
> __pv_queued_spin_lock_slowpath(), in which case we don't care about
> virt_spin_lock_key's value.
>
> So adding 'return' or not are both OK; I chose to save a line.
> Let me know if you prefer to add a 'return' and I'll change it.

It'd be worth adding a comment here if you end up spinning another version
to change the logging prefix. The logic is sound and I like the end
result, but I had the same knee jerk "this can't be right!?!?" reaction as
Vitaly.
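
Something along these lines, maybe (wording is only a sketch):

	/*
	 * virt_spin_lock() is only preferred when the host lacks
	 * KVM_FEATURE_PV_UNHALT, and that case returned early above.  On
	 * every path reaching this label the key is either unused (PV
	 * qspinlock enabled) or unwanted (native qspinlock preferred), so
	 * disabling it unconditionally is fine.
	 */
out:
	static_branch_disable(&virt_spin_lock_key);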

2019-10-23 03:27:00

by Zhenzhong Duan

Subject: Re: [PATCH v7 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks


On 2019/10/23 5:03, Sean Christopherson wrote:
> On Tue, Oct 22, 2019 at 08:46:46PM +0800, Zhenzhong Duan wrote:
>> Hi Vitaly,
>>
>> On 2019/10/22 19:36, Vitaly Kuznetsov wrote:
>>
>>> Zhenzhong Duan<[email protected]> writes:
>>>
>> ...snip
>>
>>>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>>>> index 249f14a..3945aa5 100644
>>>> --- a/arch/x86/kernel/kvm.c
>>>> +++ b/arch/x86/kernel/kvm.c
>>>> @@ -825,18 +825,36 @@ __visible bool __kvm_vcpu_is_preempted(long cpu)
>>>> */
>>>> void __init kvm_spinlock_init(void)
>>>> {
>>>> - /* Does host kernel support KVM_FEATURE_PV_UNHALT? */
>>>> - if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
>>>> + /*
>>>> + * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
>>>> + * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
>>>> + * preferred over native qspinlock when vCPU is preempted.
>>>> + */
>>>> + if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
>>>> + pr_info("PV spinlocks disabled, no host support.\n");
>>>> return;
>>>> + }
>>>> + /*
>>>> + * Disable PV qspinlock and use native qspinlock when dedicated pCPUs
>>>> + * are available.
>>>> + */
>>>> if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
>>>> - static_branch_disable(&virt_spin_lock_key);
>>>> - return;
>>>> + pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints.\n");
>>>> + goto out;
>>>> }
>>>> - /* Don't use the pvqspinlock code if there is only 1 vCPU. */
>>>> - if (num_possible_cpus() == 1)
>>>> - return;
>>>> + if (num_possible_cpus() == 1) {
>>>> + pr_info("PV spinlocks disabled, single CPU.\n");
>>>> + goto out;
>>>> + }
>>>> +
>>>> + if (nopvspin) {
>>>> + pr_info("PV spinlocks disabled, forced by \"nopvspin\" parameter.\n");
>>>> + goto out;
>>>> + }
>>>> +
>>>> + pr_info("PV spinlocks enabled\n");
>>>> __pv_init_lock_hash();
>>>> pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
>>>> @@ -849,6 +867,8 @@ void __init kvm_spinlock_init(void)
>>>> pv_ops.lock.vcpu_is_preempted =
>>>> PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);
>>>> }
>>>> +out:
>>>> + static_branch_disable(&virt_spin_lock_key);
>>> You probably need to add 'return' before 'out:' as it seems you're
>>> disabling virt_spin_lock_key in all cases now.
>> virt_spin_lock_key is kept enabled in the !kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)
>> case, which is the only case where the virt_spin_lock() optimization is used.
>>
>> When PV qspinlock is enabled, virt_spin_lock() isn't called in
>> __pv_queued_spin_lock_slowpath(), in which case we don't care about
>> virt_spin_lock_key's value.
>>
>> So adding 'return' or not are both OK; I chose to save a line.
>> Let me know if you prefer to add a 'return' and I'll change it.
> It'd be worth adding a comment here if you end up spinning another version
> to change the logging prefix. The logic is sound and I like the end
> result, but I had the same knee jerk "this can't be right!?!?" reaction as
> Vitaly.

Sure, will do in next version.

Thanks

Zhenzhong