2018-09-27 06:28:07

by Yi Sun

Subject: [PATCH v3 0/2] Enable PV qspinlock for Hyper-V

v2->v3:
- use "Hyper-V: " as the message prefix
- remove unnecessary header files
- remove unnecessary check in 'hv_qlock_wait'
- fix compilation error on different platforms

v1->v2:
- compile hv_spinlock.c only when CONFIG_PARAVIRT_SPINLOCKS is enabled
- merge v1 patch 2/3 into a single patch
- remove part of the boilerplate in hv_spinlock.c
- declare hv_pvspin as __initdata
- remove spin_wait_info and hv_notify_long_spin_wait because
SpinWaitInfo is a standalone feature.
- add comments for reading HV_X64_MSR_GUEST_IDLE
- replace pr_warn with pr_info
- use pr_fmt instead of the 'hv:' prefix
- register a callback function for smp_ops.smp_prepare_boot_cpu
to initialize the Hyper-V spinlock

This patch set adds the necessary Hyper-V specific code to allow
PV qspinlock to work on Hyper-V.

In the wait callback function, read the HV_X64_MSR_GUEST_IDLE MSR
to trigger the guest's transition to the idle power state, which
can be exited by an IPI even if the IF flag is disabled.

In the kick callback function, just send a platform IPI to make the
waiting vCPU exit the idle state.

In the vcpu_is_preempted callback function, return false directly
because Hyper-V does not provide such an interface so far.
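
For quick reference, a condensed sketch of how these callbacks end up
wired into pv_lock_ops; this only mirrors patch 2/2 below, see
arch/x86/hyperv/hv_spinlock.c there for the complete code:

	static void hv_qlock_wait(u8 *byte, u8 val)
	{
		unsigned long msr_val;

		if (READ_ONCE(*byte) != val)
			return;
		/* Reading the MSR parks the vCPU until an IPI arrives. */
		rdmsrl(HV_X64_MSR_GUEST_IDLE, msr_val);
	}

	static void hv_qlock_kick(int cpu)
	{
		/* Wake the waiting vCPU out of the idle state. */
		apic->send_IPI(cpu, X86_PLATFORM_IPI_VECTOR);
	}

	/* Registered from hv_init_spinlocks(): */
	pv_lock_ops.wait = hv_qlock_wait;
	pv_lock_ops.kick = hv_qlock_kick;
	pv_lock_ops.vcpu_is_preempted = PV_CALLEE_SAVE(hv_vcpu_is_preempted);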


Cc: "K. Y. Srinivasan" <[email protected]>
Cc: Haiyang Zhang <[email protected]>
Cc: Stephen Hemminger <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Michael Kelley (EOSG) <[email protected]>

Yi Sun (2):
X86/Hyper-V: Add Guest IDLE MSR support
locking/pvqspinlock, hv: Enable PV qspinlock for Hyper-V

Documentation/admin-guide/kernel-parameters.txt | 5 ++
arch/x86/hyperv/Makefile | 4 ++
arch/x86/hyperv/hv_spinlock.c | 76 +++++++++++++++++++++++++
arch/x86/include/asm/hyperv-tlfs.h | 5 ++
arch/x86/include/asm/mshyperv.h | 1 +
arch/x86/kernel/cpu/mshyperv.c | 14 +++++
6 files changed, 105 insertions(+)
create mode 100644 arch/x86/hyperv/hv_spinlock.c

--
1.9.1



2018-09-27 06:27:01

by Yi Sun

Subject: [PATCH v3 1/2] X86/Hyper-V: Add Guest IDLE MSR support

Hyper-V may expose a HV_X64_MSR_GUEST_IDLE MSR. We can read
HYPERV_CPUID_FEATURES eax to check for it. Reading this MSR can
trigger the guest's transition to the idle power state, which can
be exited by an IPI even if the IF flag is disabled.

Cc: "K. Y. Srinivasan" <[email protected]>
Cc: Haiyang Zhang <[email protected]>
Cc: Stephen Hemminger <[email protected]>
Reviewed-by: Michael Kelley <[email protected]>
Signed-off-by: Yi Sun <[email protected]>
---
arch/x86/include/asm/hyperv-tlfs.h | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
index 00e01d2..4139f76 100644
--- a/arch/x86/include/asm/hyperv-tlfs.h
+++ b/arch/x86/include/asm/hyperv-tlfs.h
@@ -38,6 +38,8 @@
#define HV_MSR_TIME_REF_COUNT_AVAILABLE (1 << 1)
/* Partition reference TSC MSR is available */
#define HV_MSR_REFERENCE_TSC_AVAILABLE (1 << 9)
+/* Partition Guest IDLE MSR is available */
+#define HV_X64_MSR_GUEST_IDLE_AVAILABLE (1 << 10)

/* A partition's reference time stamp counter (TSC) page */
#define HV_X64_MSR_REFERENCE_TSC 0x40000021
@@ -246,6 +248,9 @@
#define HV_X64_MSR_STIMER3_CONFIG 0x400000B6
#define HV_X64_MSR_STIMER3_COUNT 0x400000B7

+/* Hyper-V guest idle MSR */
+#define HV_X64_MSR_GUEST_IDLE 0x400000F0
+
/* Hyper-V guest crash notification MSR's */
#define HV_X64_MSR_CRASH_P0 0x40000100
#define HV_X64_MSR_CRASH_P1 0x40000101
--
1.9.1


2018-09-27 06:27:02

by Yi Sun

Subject: [PATCH v3 2/2] locking/pvqspinlock, hv: Enable PV qspinlock for Hyper-V

Follow the PV spinlock mechanism to implement the callback functions
that allow the CPU idling and kicking operations on Hyper-V.

Cc: "K. Y. Srinivasan" <[email protected]>
Cc: Haiyang Zhang <[email protected]>
Cc: Stephen Hemminger <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Michael Kelley (EOSG) <[email protected]>
Signed-off-by: Yi Sun <[email protected]>
---
v2->v3:
- use "Hyper-V: " as the message prefix
(suggested by Michael Kelley)
- remove unnecessary header files
(suggested by Michael Kelley)
- remove unnecessary check in 'hv_qlock_wait'
(suggested by Michael Kelley)
- fix compilation error on different platforms
(suggested by Thomas Gleixner)
---
Documentation/admin-guide/kernel-parameters.txt | 5 ++
arch/x86/hyperv/Makefile | 4 ++
arch/x86/hyperv/hv_spinlock.c | 76 +++++++++++++++++++++++++
arch/x86/include/asm/mshyperv.h | 1 +
arch/x86/kernel/cpu/mshyperv.c | 14 +++++
5 files changed, 100 insertions(+)
create mode 100644 arch/x86/hyperv/hv_spinlock.c

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 92eb1f4..0fc8448 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1385,6 +1385,11 @@
hvc_iucv_allow= [S390] Comma-separated list of z/VM user IDs.
If specified, z/VM IUCV HVC accepts connections
from listed z/VM user IDs only.
+
+ hv_nopvspin [X86,HYPER_V]
+ Disables the ticketlock slowpath using HYPER-V PV
+ optimizations.
+
keep_bootcon [KNL]
Do not unregister boot console at start. This is only
useful for debugging when something happens in the window
diff --git a/arch/x86/hyperv/Makefile b/arch/x86/hyperv/Makefile
index b21ee65..1c11f94 100644
--- a/arch/x86/hyperv/Makefile
+++ b/arch/x86/hyperv/Makefile
@@ -1,2 +1,6 @@
obj-y := hv_init.o mmu.o nested.o
obj-$(CONFIG_X86_64) += hv_apic.o
+
+ifdef CONFIG_X86_64
+obj-$(CONFIG_PARAVIRT_SPINLOCKS) += hv_spinlock.o
+endif
diff --git a/arch/x86/hyperv/hv_spinlock.c b/arch/x86/hyperv/hv_spinlock.c
new file mode 100644
index 0000000..e8fe997
--- /dev/null
+++ b/arch/x86/hyperv/hv_spinlock.c
@@ -0,0 +1,76 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Hyper-V specific spinlock code.
+ *
+ * Copyright (C) 2018, Intel, Inc.
+ *
+ * Author : Yi Sun <[email protected]>
+ */
+
+#define pr_fmt(fmt) "Hyper-V: " fmt
+
+#include <linux/spinlock.h>
+
+#include <asm/mshyperv.h>
+#include <asm/hyperv-tlfs.h>
+#include <asm/paravirt.h>
+#include <asm/qspinlock.h>
+#include <asm/apic.h>
+
+static bool __initdata hv_pvspin = true;
+
+static void hv_qlock_kick(int cpu)
+{
+	apic->send_IPI(cpu, X86_PLATFORM_IPI_VECTOR);
+}
+
+static void hv_qlock_wait(u8 *byte, u8 val)
+{
+	unsigned long msr_val;
+
+	if (READ_ONCE(*byte) != val)
+		return;
+
+	/*
+	 * Read HV_X64_MSR_GUEST_IDLE MSR can trigger the guest's
+	 * transition to the idle power state which can be exited
+	 * by an IPI even if IF flag is disabled.
+	 */
+	rdmsrl(HV_X64_MSR_GUEST_IDLE, msr_val);
+}
+
+/*
+ * Hyper-V does not support this so far.
+ */
+bool hv_vcpu_is_preempted(int vcpu)
+{
+	return false;
+}
+PV_CALLEE_SAVE_REGS_THUNK(hv_vcpu_is_preempted);
+
+void __init hv_init_spinlocks(void)
+{
+	if (!hv_pvspin ||
+	    !apic ||
+	    !(ms_hyperv.hints & HV_X64_CLUSTER_IPI_RECOMMENDED) ||
+	    !(ms_hyperv.features & HV_X64_MSR_GUEST_IDLE_AVAILABLE)) {
+		pr_info("PV spinlocks disabled\n");
+		return;
+	}
+	pr_info("PV spinlocks enabled\n");
+
+	__pv_init_lock_hash();
+	pv_lock_ops.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
+	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
+	pv_lock_ops.wait = hv_qlock_wait;
+	pv_lock_ops.kick = hv_qlock_kick;
+	pv_lock_ops.vcpu_is_preempted = PV_CALLEE_SAVE(hv_vcpu_is_preempted);
+}
+
+static __init int hv_parse_nopvspin(char *arg)
+{
+	hv_pvspin = false;
+	return 0;
+}
+early_param("hv_nopvspin", hv_parse_nopvspin);
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index f377044..759cfd2 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -351,6 +351,7 @@ static inline int cpumask_to_vpset(struct hv_vpset *vpset,

#ifdef CONFIG_X86_64
void hv_apic_init(void);
+void __init hv_init_spinlocks(void);
#else
static inline void hv_apic_init(void) {}
#endif
diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index ad12733..a5cc219 100644
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -199,6 +199,16 @@ static unsigned long hv_get_tsc_khz(void)
return freq / 1000;
}

+#if defined(CONFIG_SMP) && IS_ENABLED(CONFIG_HYPERV)
+static void __init hv_smp_prepare_boot_cpu(void)
+{
+	native_smp_prepare_boot_cpu();
+#if defined(CONFIG_X86_64) && defined(CONFIG_PARAVIRT_SPINLOCKS)
+	hv_init_spinlocks();
+#endif
+}
+#endif
+
static void __init ms_hyperv_init_platform(void)
{
int hv_host_info_eax;
@@ -303,6 +313,10 @@ static void __init ms_hyperv_init_platform(void)
if (ms_hyperv.misc_features & HV_STIMER_DIRECT_MODE_AVAILABLE)
alloc_intr_gate(HYPERV_STIMER0_VECTOR,
hv_stimer0_callback_vector);
+
+#if defined(CONFIG_SMP)
+	smp_ops.smp_prepare_boot_cpu = hv_smp_prepare_boot_cpu;
+#endif
#endif
}

--
1.9.1


2018-09-28 22:12:03

by Michael Kelley (EOSG)

Subject: RE: [PATCH v3 2/2] locking/pvqspinlock, hv: Enable PV qspinlock for Hyper-V

From: Yi Sun <[email protected]> Sent: Wednesday, September 26, 2018 11:02 PM

> Follow the PV spinlock mechanism to implement the callback functions
> that allow the CPU idling and kicking operations on Hyper-V.
>
> Cc: "K. Y. Srinivasan" <[email protected]>
> Cc: Haiyang Zhang <[email protected]>
> Cc: Stephen Hemminger <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Michael Kelley (EOSG) <[email protected]>
> Signed-off-by: Yi Sun <[email protected]>
> ---

Reviewed-by: Michael Kelley <[email protected]>

Subject: [tip:x86/hyperv] x86/hyperv: Add GUEST_IDLE_MSR support

Commit-ID: 10d02e13385ce1e90abb49ef1e9a366a5d968157
Gitweb: https://git.kernel.org/tip/10d02e13385ce1e90abb49ef1e9a366a5d968157
Author: Yi Sun <[email protected]>
AuthorDate: Thu, 27 Sep 2018 14:01:43 +0800
Committer: Thomas Gleixner <[email protected]>
CommitDate: Tue, 2 Oct 2018 13:21:52 +0200

x86/hyperv: Add GUEST_IDLE_MSR support

Hyper-V may expose a HV_X64_MSR_GUEST_IDLE MSR via HYPERV_CPUID_FEATURES.

Reading this MSR triggers the host to transition the guest vCPU into an
idle state. This state can be exited via an IPI even if the read in the
guest happened from an interrupt disabled section.

Signed-off-by: Yi Sun <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Michael Kelley <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: "K. Y. Srinivasan" <[email protected]>
Cc: Haiyang Zhang <[email protected]>
Cc: Stephen Hemminger <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

---
arch/x86/include/asm/hyperv-tlfs.h | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
index 00e01d215f74..4139f7650fe5 100644
--- a/arch/x86/include/asm/hyperv-tlfs.h
+++ b/arch/x86/include/asm/hyperv-tlfs.h
@@ -38,6 +38,8 @@
#define HV_MSR_TIME_REF_COUNT_AVAILABLE (1 << 1)
/* Partition reference TSC MSR is available */
#define HV_MSR_REFERENCE_TSC_AVAILABLE (1 << 9)
+/* Partition Guest IDLE MSR is available */
+#define HV_X64_MSR_GUEST_IDLE_AVAILABLE (1 << 10)

/* A partition's reference time stamp counter (TSC) page */
#define HV_X64_MSR_REFERENCE_TSC 0x40000021
@@ -246,6 +248,9 @@
#define HV_X64_MSR_STIMER3_CONFIG 0x400000B6
#define HV_X64_MSR_STIMER3_COUNT 0x400000B7

+/* Hyper-V guest idle MSR */
+#define HV_X64_MSR_GUEST_IDLE 0x400000F0
+
/* Hyper-V guest crash notification MSR's */
#define HV_X64_MSR_CRASH_P0 0x40000100
#define HV_X64_MSR_CRASH_P1 0x40000101

Subject: [tip:x86/hyperv] x86/hyperv: Enable PV qspinlock for Hyper-V

Commit-ID: aaa7fc34c003bd8133a49f7634480cef6288ad55
Gitweb: https://git.kernel.org/tip/aaa7fc34c003bd8133a49f7634480cef6288ad55
Author: Yi Sun <[email protected]>
AuthorDate: Thu, 27 Sep 2018 14:01:44 +0800
Committer: Thomas Gleixner <[email protected]>
CommitDate: Tue, 2 Oct 2018 13:22:06 +0200

x86/hyperv: Enable PV qspinlock for Hyper-V

Implement the necessary callbacks for PV spinlocks which allow vCPU idling
and kicking operations when running as a guest on Hyper-V

Signed-off-by: Yi Sun <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Michael Kelley <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: "K. Y. Srinivasan" <[email protected]>
Cc: Haiyang Zhang <[email protected]>
Cc: Stephen Hemminger <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---
Documentation/admin-guide/kernel-parameters.txt | 5 ++
arch/x86/hyperv/Makefile | 4 ++
arch/x86/hyperv/hv_spinlock.c | 75 +++++++++++++++++++++++++
arch/x86/include/asm/mshyperv.h | 1 +
arch/x86/kernel/cpu/mshyperv.c | 14 +++++
5 files changed, 99 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 92eb1f42240d..0fc8448a85ea 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1385,6 +1385,11 @@
hvc_iucv_allow= [S390] Comma-separated list of z/VM user IDs.
If specified, z/VM IUCV HVC accepts connections
from listed z/VM user IDs only.
+
+ hv_nopvspin [X86,HYPER_V]
+ Disables the ticketlock slowpath using HYPER-V PV
+ optimizations.
+
keep_bootcon [KNL]
Do not unregister boot console at start. This is only
useful for debugging when something happens in the window
diff --git a/arch/x86/hyperv/Makefile b/arch/x86/hyperv/Makefile
index b21ee65c4101..1c11f9420a82 100644
--- a/arch/x86/hyperv/Makefile
+++ b/arch/x86/hyperv/Makefile
@@ -1,2 +1,6 @@
obj-y := hv_init.o mmu.o nested.o
obj-$(CONFIG_X86_64) += hv_apic.o
+
+ifdef CONFIG_X86_64
+obj-$(CONFIG_PARAVIRT_SPINLOCKS) += hv_spinlock.o
+endif
diff --git a/arch/x86/hyperv/hv_spinlock.c b/arch/x86/hyperv/hv_spinlock.c
new file mode 100644
index 000000000000..6d3221322d0d
--- /dev/null
+++ b/arch/x86/hyperv/hv_spinlock.c
@@ -0,0 +1,75 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Hyper-V specific spinlock code.
+ *
+ * Copyright (C) 2018, Intel, Inc.
+ *
+ * Author : Yi Sun <[email protected]>
+ */
+
+#define pr_fmt(fmt) "Hyper-V: " fmt
+
+#include <linux/spinlock.h>
+
+#include <asm/mshyperv.h>
+#include <asm/hyperv-tlfs.h>
+#include <asm/paravirt.h>
+#include <asm/qspinlock.h>
+#include <asm/apic.h>
+
+static bool __initdata hv_pvspin = true;
+
+static void hv_qlock_kick(int cpu)
+{
+	apic->send_IPI(cpu, X86_PLATFORM_IPI_VECTOR);
+}
+
+static void hv_qlock_wait(u8 *byte, u8 val)
+{
+	unsigned long msr_val;
+
+	if (READ_ONCE(*byte) != val)
+		return;
+
+	/*
+	 * Read HV_X64_MSR_GUEST_IDLE MSR can trigger the guest's
+	 * transition to the idle power state which can be exited
+	 * by an IPI even if IF flag is disabled.
+	 */
+	rdmsrl(HV_X64_MSR_GUEST_IDLE, msr_val);
+}
+
+/*
+ * Hyper-V does not support this so far.
+ */
+static bool hv_vcpu_is_preempted(int vcpu)
+{
+	return false;
+}
+PV_CALLEE_SAVE_REGS_THUNK(hv_vcpu_is_preempted);
+
+void __init hv_init_spinlocks(void)
+{
+	if (!hv_pvspin || !apic ||
+	    !(ms_hyperv.hints & HV_X64_CLUSTER_IPI_RECOMMENDED) ||
+	    !(ms_hyperv.features & HV_X64_MSR_GUEST_IDLE_AVAILABLE)) {
+		pr_info("PV spinlocks disabled\n");
+		return;
+	}
+	pr_info("PV spinlocks enabled\n");
+
+	__pv_init_lock_hash();
+	pv_lock_ops.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
+	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
+	pv_lock_ops.wait = hv_qlock_wait;
+	pv_lock_ops.kick = hv_qlock_kick;
+	pv_lock_ops.vcpu_is_preempted = PV_CALLEE_SAVE(hv_vcpu_is_preempted);
+}
+
+static __init int hv_parse_nopvspin(char *arg)
+{
+	hv_pvspin = false;
+	return 0;
+}
+early_param("hv_nopvspin", hv_parse_nopvspin);
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index f37704497d8f..759cfd214eb9 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -351,6 +351,7 @@ int hyperv_flush_guest_mapping(u64 as);

#ifdef CONFIG_X86_64
void hv_apic_init(void);
+void __init hv_init_spinlocks(void);
#else
static inline void hv_apic_init(void) {}
#endif
diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index ad12733f6058..1c72f3819eb1 100644
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -199,6 +199,16 @@ static unsigned long hv_get_tsc_khz(void)
return freq / 1000;
}

+#if defined(CONFIG_SMP) && IS_ENABLED(CONFIG_HYPERV)
+static void __init hv_smp_prepare_boot_cpu(void)
+{
+	native_smp_prepare_boot_cpu();
+#if defined(CONFIG_X86_64) && defined(CONFIG_PARAVIRT_SPINLOCKS)
+	hv_init_spinlocks();
+#endif
+}
+#endif
+
static void __init ms_hyperv_init_platform(void)
{
int hv_host_info_eax;
@@ -303,6 +313,10 @@ static void __init ms_hyperv_init_platform(void)
if (ms_hyperv.misc_features & HV_STIMER_DIRECT_MODE_AVAILABLE)
alloc_intr_gate(HYPERV_STIMER0_VECTOR,
hv_stimer0_callback_vector);
+
+# ifdef CONFIG_SMP
+	smp_ops.smp_prepare_boot_cpu = hv_smp_prepare_boot_cpu;
+# endif
#endif
}


2018-10-02 11:39:20

by Jürgen Groß

Subject: Re: [tip:x86/hyperv] x86/hyperv: Enable PV qspinlock for Hyper-V

Sorry for noticing this only now, but I have been fighting with
Xen PV qspinlocks last weekend:

On 02/10/2018 13:28, tip-bot for Yi Sun wrote:
> Commit-ID: aaa7fc34c003bd8133a49f7634480cef6288ad55
> Gitweb: https://git.kernel.org/tip/aaa7fc34c003bd8133a49f7634480cef6288ad55
> Author: Yi Sun <[email protected]>
> AuthorDate: Thu, 27 Sep 2018 14:01:44 +0800
> Committer: Thomas Gleixner <[email protected]>
> CommitDate: Tue, 2 Oct 2018 13:22:06 +0200
>
> x86/hyperv: Enable PV qspinlock for Hyper-V
>
> Implement the necessary callbacks for PV spinlocks which allow vCPU idling
> and kicking operations when running as a guest on Hyper-V
>
> Signed-off-by: Yi Sun <[email protected]>
> Signed-off-by: Thomas Gleixner <[email protected]>
> Reviewed-by: Michael Kelley <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: "K. Y. Srinivasan" <[email protected]>
> Cc: Haiyang Zhang <[email protected]>
> Cc: Stephen Hemminger <[email protected]>
> Link: https://lkml.kernel.org/r/[email protected]
> ---
> Documentation/admin-guide/kernel-parameters.txt | 5 ++
> arch/x86/hyperv/Makefile | 4 ++
> arch/x86/hyperv/hv_spinlock.c | 75 +++++++++++++++++++++++++
> arch/x86/include/asm/mshyperv.h | 1 +
> arch/x86/kernel/cpu/mshyperv.c | 14 +++++
> 5 files changed, 99 insertions(+)
>

...

> diff --git a/arch/x86/hyperv/hv_spinlock.c b/arch/x86/hyperv/hv_spinlock.c
> new file mode 100644
> index 000000000000..6d3221322d0d
> --- /dev/null
> +++ b/arch/x86/hyperv/hv_spinlock.c
> @@ -0,0 +1,75 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +/*
> + * Hyper-V specific spinlock code.
> + *
> + * Copyright (C) 2018, Intel, Inc.
> + *
> + * Author : Yi Sun <[email protected]>
> + */
> +
> +#define pr_fmt(fmt) "Hyper-V: " fmt
> +
> +#include <linux/spinlock.h>
> +
> +#include <asm/mshyperv.h>
> +#include <asm/hyperv-tlfs.h>
> +#include <asm/paravirt.h>
> +#include <asm/qspinlock.h>
> +#include <asm/apic.h>
> +
> +static bool __initdata hv_pvspin = true;
> +
> +static void hv_qlock_kick(int cpu)
> +{
> + apic->send_IPI(cpu, X86_PLATFORM_IPI_VECTOR);
> +}
> +
> +static void hv_qlock_wait(u8 *byte, u8 val)
> +{
> + unsigned long msr_val;
> +
> + if (READ_ONCE(*byte) != val)
> + return;
> +
> + /*
> + * Read HV_X64_MSR_GUEST_IDLE MSR can trigger the guest's
> + * transition to the idle power state which can be exited
> + * by an IPI even if IF flag is disabled.
> + */

What if interrupts are enabled? Won't a kick happening here just
interrupt and then the following rdmsr result in a hang?

I believe the correct way would be to:

- disable interrupts before above READ_ONCE() and restore them
after the rdmsrl()

- return early if in_nmi()

similar as the kvm specific variant is doing it.


Juergen
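
For illustration, a minimal sketch of the kind of hv_qlock_wait() change
suggested above, assuming the same structure as the KVM variant
(kvm_wait); the code is illustrative only, not the actual follow-up patch:

	static void hv_qlock_wait(u8 *byte, u8 val)
	{
		unsigned long flags;
		unsigned long msr_val;

		if (in_nmi())
			return;

		/*
		 * With interrupts disabled, a kick IPI arriving after the
		 * byte check stays pending and terminates the idle state
		 * entered by the rdmsrl(), instead of being consumed
		 * before the MSR read.
		 */
		local_irq_save(flags);

		if (READ_ONCE(*byte) == val)
			rdmsrl(HV_X64_MSR_GUEST_IDLE, msr_val);

		local_irq_restore(flags);
	}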

2018-10-02 12:15:12

by Thomas Gleixner

Subject: Re: [tip:x86/hyperv] x86/hyperv: Enable PV qspinlock for Hyper-V

On Tue, 2 Oct 2018, Juergen Gross wrote:

> Sorry for noticing this only now, but I have been fighting with
> Xen PV qspinlocks last weekend:
>
> > + /*
> > + * Read HV_X64_MSR_GUEST_IDLE MSR can trigger the guest's
> > + * transition to the idle power state which can be exited
> > + * by an IPI even if IF flag is disabled.
> > + */
>
> What if interrupts are enabled? Won't a kick happening here just
> interrupt and then the following rdmsr result in a hang?
>
> I believe the correct way would be to:
>
> - disable interrupts before above READ_ONCE() and restore them
> after the rdmsrl()
>
> - return early if in_nmi()
>
> similar as the kvm specific variant is doing it.

Interesting question. I zapped the last commit for now until this is
clarified.

Thanks,

tglx

2018-10-08 08:16:49

by Yi Sun

Subject: Re: [tip:x86/hyperv] x86/hyperv: Enable PV qspinlock for Hyper-V

On 18-10-02 13:38:55, Juergen Gross wrote:
> > +static void hv_qlock_wait(u8 *byte, u8 val)
> > +{
> > + unsigned long msr_val;
> > +
> > + if (READ_ONCE(*byte) != val)
> > + return;
> > +
> > + /*
> > + * Read HV_X64_MSR_GUEST_IDLE MSR can trigger the guest's
> > + * transition to the idle power state which can be exited
> > + * by an IPI even if IF flag is disabled.
> > + */
>
> What if interrupts are enabled? Won't a kick happening here just
> interrupt and then the following rdmsr result in a hang?
>
> I believe the correct way would be to:
>
> - disable interrupts before above READ_ONCE() and restore them
> after the rdmsrl()
>
> - return early if in_nmi()
>
> similar as the kvm specific variant is doing it.
>
>
> Juergen

Thank you for the suggestion! That is a possible case. I will submit
a new version soon.

BRs,
Yi Sun

Subject: [tip:x86/paravirt] x86/hyperv: Add GUEST_IDLE_MSR support

Commit-ID: f726c4620df39055f060537a8ed183c18a2c504b
Gitweb: https://git.kernel.org/tip/f726c4620df39055f060537a8ed183c18a2c504b
Author: Yi Sun <[email protected]>
AuthorDate: Thu, 27 Sep 2018 14:01:43 +0800
Committer: Thomas Gleixner <[email protected]>
CommitDate: Tue, 9 Oct 2018 14:14:49 +0200

x86/hyperv: Add GUEST_IDLE_MSR support

Hyper-V may expose a HV_X64_MSR_GUEST_IDLE MSR via HYPERV_CPUID_FEATURES.

Reading this MSR triggers the host to transition the guest vCPU into an
idle state. This state can be exited via an IPI even if the read in the
guest happened from an interrupt disabled section.

Signed-off-by: Yi Sun <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Michael Kelley <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: "K. Y. Srinivasan" <[email protected]>
Cc: Haiyang Zhang <[email protected]>
Cc: Stephen Hemminger <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

---
arch/x86/include/asm/hyperv-tlfs.h | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
index e977b6b3a538..2a2fa170caf1 100644
--- a/arch/x86/include/asm/hyperv-tlfs.h
+++ b/arch/x86/include/asm/hyperv-tlfs.h
@@ -38,6 +38,8 @@
#define HV_MSR_TIME_REF_COUNT_AVAILABLE (1 << 1)
/* Partition reference TSC MSR is available */
#define HV_MSR_REFERENCE_TSC_AVAILABLE (1 << 9)
+/* Partition Guest IDLE MSR is available */
+#define HV_X64_MSR_GUEST_IDLE_AVAILABLE (1 << 10)

/* A partition's reference time stamp counter (TSC) page */
#define HV_X64_MSR_REFERENCE_TSC 0x40000021
@@ -246,6 +248,9 @@
#define HV_X64_MSR_STIMER3_CONFIG 0x400000B6
#define HV_X64_MSR_STIMER3_COUNT 0x400000B7

+/* Hyper-V guest idle MSR */
+#define HV_X64_MSR_GUEST_IDLE 0x400000F0
+
/* Hyper-V guest crash notification MSR's */
#define HV_X64_MSR_CRASH_P0 0x40000100
#define HV_X64_MSR_CRASH_P1 0x40000101