2020-01-16 12:48:54

by yezengruan

Subject: [PATCH v3 0/8] KVM: arm64: vCPU preempted check support

This patch set aims to support the vcpu_is_preempted() functionality
under KVM/arm64, which allows the guest to check whether a vCPU is
currently running or not. This improves lock performance on
overcommitted hosts (more runnable vCPUs than physical CPUs in the
system), since busy-waiting on a preempted vCPU hurts system
performance far more than yielding early.
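
For reference, the guest-side check at the heart of the series reduces
to reading the preempted field of a per-vCPU shared structure. Below is
a minimal sketch of that check; the per-CPU pointer and function names
are illustrative, not the final ones from the patches:

#include <linux/percpu.h>
#include <asm/pvlock-abi.h>

/* Illustrative per-CPU pointer to the shared region, mapped at boot. */
static DEFINE_PER_CPU(struct pvlock_vcpu_state *, pv_lock_state);

static bool __pv_vcpu_is_preempted(int cpu)
{
	struct pvlock_vcpu_state *st = per_cpu(pv_lock_state, cpu);

	/* Non-zero means the host currently has this vCPU scheduled out. */
	return st && !!le64_to_cpu(READ_ONCE(st->preempted));
}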

We have observed some performance improvements in UnixBench tests.

UnixBench results:
host: kernel 5.5.0-rc5, HiSilicon Kunpeng920, 8 CPUs
guest: kernel 5.5.0-rc5, 16 vCPUs

test-case                               | after-patch       | before-patch
----------------------------------------+-------------------+------------------
Dhrystone 2 using register variables    |   334600751.0 lps |  335319028.3 lps
Double-Precision Whetstone              |     32856.1 MWIPS |    32849.6 MWIPS
Execl Throughput                        |        3662.1 lps |       2718.0 lps
File Copy 1024 bufsize 2000 maxblocks   |     432906.4 KBps |    158011.8 KBps
File Copy 256 bufsize 500 maxblocks     |     116023.0 KBps |     37664.0 KBps
File Copy 4096 bufsize 8000 maxblocks   |    1432769.8 KBps |    441108.8 KBps
Pipe Throughput                         |     6405029.6 lps |    6021457.6 lps
Pipe-based Context Switching            |      185872.7 lps |     184255.3 lps
Process Creation                        |        4025.7 lps |       3706.6 lps
Shell Scripts (1 concurrent)            |        6745.6 lpm |       6436.1 lpm
Shell Scripts (8 concurrent)            |         998.7 lpm |        931.1 lpm
System Call Overhead                    |     3913363.1 lps |    3883287.8 lps
----------------------------------------+-------------------+------------------
System Benchmarks Index Score           |            1835.1 |           1327.6

Changes from v2:
https://lore.kernel.org/lkml/[email protected]/
* Post Will's patches as part of this series [1][2], and add the
probing logic for checking whether the hypervisor is KVM or not
* Clarify the PV-lock interface documentation
* Remove preempted state field
* Fix build error when CONFIG_PARAVIRT is not set
* Bunch of typo fixes.

Changes from v1:
https://lore.kernel.org/lkml/[email protected]/
* Guest kernel no longer allocates the PV lock structure; instead it
  is allocated by user space to avoid lifetime issues around kexec.
* Provide vCPU attributes for PV lock.
* Update the SMC number of the PV lock feature.
* Perform some basic validation at PV lock init.
* Document the preempted field.
* Bunch of typo fixes.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git/commit/?h=kvm/hvc&id=464f5a1741e5959c3e4d2be1966ae0093b4dce06

[2] https://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git/commit/?h=kvm/hvc&id=6597490e005d0eeca8ed8c1c1d7b4318ee014681

Will Deacon (2):
arm64: Probe for the presence of KVM hypervisor services during boot
arm/arm64: KVM: Advertise KVM UID to guests via SMCCC

Zengruan Ye (6):
KVM: arm64: Document PV-lock interface
KVM: arm64: Add SMCCC paravirtualised lock calls
KVM: arm64: Support pvlock preempted via shared structure
KVM: arm64: Provide vCPU attributes for PV lock
KVM: arm64: Add interface to support vCPU preempted check
KVM: arm64: Support the vCPU preemption check

Documentation/virt/kvm/arm/pvlock.rst | 68 +++++++++++++
Documentation/virt/kvm/devices/vcpu.txt | 14 +++
arch/arm/include/asm/kvm_host.h | 18 ++++
arch/arm64/include/asm/hypervisor.h | 11 ++
arch/arm64/include/asm/kvm_host.h | 27 +++++
arch/arm64/include/asm/paravirt.h | 15 +++
arch/arm64/include/asm/pvlock-abi.h | 16 +++
arch/arm64/include/asm/spinlock.h | 9 ++
arch/arm64/include/uapi/asm/kvm.h | 2 +
arch/arm64/kernel/Makefile | 2 +-
arch/arm64/kernel/paravirt-spinlocks.c | 13 +++
arch/arm64/kernel/paravirt.c | 129 +++++++++++++++++++++++-
arch/arm64/kernel/setup.c | 37 +++++++
arch/arm64/kvm/Makefile | 1 +
arch/arm64/kvm/guest.c | 9 ++
include/linux/arm-smccc.h | 36 +++++++
include/linux/cpuhotplug.h | 1 +
include/uapi/linux/kvm.h | 2 +
virt/kvm/arm/arm.c | 8 ++
virt/kvm/arm/hypercalls.c | 54 +++++++---
virt/kvm/arm/pvlock.c | 102 +++++++++++++++++++
21 files changed, 559 insertions(+), 15 deletions(-)
create mode 100644 Documentation/virt/kvm/arm/pvlock.rst
create mode 100644 arch/arm64/include/asm/pvlock-abi.h
create mode 100644 arch/arm64/kernel/paravirt-spinlocks.c
create mode 100644 virt/kvm/arm/pvlock.c

--
2.19.1



2020-01-16 12:49:04

by yezengruan

Subject: [PATCH v3 4/8] KVM: arm64: Add SMCCC paravirtualised lock calls

Add a new SMCCC compatible hypercall for PV lock features, with two
sub-function IDs:
* ARM_SMCCC_VENDOR_HYP_KVM_PV_LOCK_FUNC_ID: 0x86000001
  - KVM_PV_LOCK_FEATURES: 0
  - KVM_PV_LOCK_PREEMPTED: 1

Also add the header file which defines the ABI for the paravirtualized
lock features we're about to add.

Signed-off-by: Zengruan Ye <[email protected]>
---
arch/arm64/include/asm/pvlock-abi.h | 16 ++++++++++++++++
include/linux/arm-smccc.h | 10 ++++++++++
2 files changed, 26 insertions(+)
create mode 100644 arch/arm64/include/asm/pvlock-abi.h
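
Not part of the patch itself, but as a usage sketch: a guest that has
already discovered an SMCCC 1.1 conduit could query the feature and
fetch the IPA of its pvlock_vcpu_state structure roughly as follows
(pv_lock_get_base() is a hypothetical helper, not code from this
series):

#include <linux/arm-smccc.h>

/* Returns the IPA of this vCPU's shared structure, or 0 on failure. */
static u64 pv_lock_get_base(void)
{
	struct arm_smccc_res res;

	/* First check that the hypervisor implements the PV lock feature. */
	arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_KVM_PV_LOCK_FUNC_ID,
			     KVM_PV_LOCK_FEATURES, &res);
	if ((long)res.a0 != SMCCC_RET_SUCCESS)
		return 0;

	/* Then ask for the base address of the shared structure. */
	arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_KVM_PV_LOCK_FUNC_ID,
			     KVM_PV_LOCK_PREEMPTED, &res);
	return res.a0;
}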

diff --git a/arch/arm64/include/asm/pvlock-abi.h b/arch/arm64/include/asm/pvlock-abi.h
new file mode 100644
index 000000000000..06e0c3d7710a
--- /dev/null
+++ b/arch/arm64/include/asm/pvlock-abi.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright(c) 2019 Huawei Technologies Co., Ltd
+ * Author: Zengruan Ye <[email protected]>
+ */
+
+#ifndef __ASM_PVLOCK_ABI_H
+#define __ASM_PVLOCK_ABI_H
+
+struct pvlock_vcpu_state {
+ __le64 preempted;
+ /* Structure must be 64 byte aligned, pad to that size */
+ u8 padding[56];
+} __packed;
+
+#endif
diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index 2b2c295c9109..081be5f6a6be 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -394,14 +394,24 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,

/* KVM "vendor specific" services */
#define ARM_SMCCC_KVM_FUNC_FEATURES 0
+#define ARM_SMCCC_KVM_FUNC_PV_LOCK 1
#define ARM_SMCCC_KVM_FUNC_FEATURES_2 127
#define ARM_SMCCC_KVM_NUM_FUNCS 128

+#define KVM_PV_LOCK_FEATURES 0
+#define KVM_PV_LOCK_PREEMPTED 1
+
#define ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
ARM_SMCCC_SMC_32, \
ARM_SMCCC_OWNER_VENDOR_HYP, \
ARM_SMCCC_KVM_FUNC_FEATURES)

+#define ARM_SMCCC_VENDOR_HYP_KVM_PV_LOCK_FUNC_ID \
+ ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
+ ARM_SMCCC_SMC_32, \
+ ARM_SMCCC_OWNER_VENDOR_HYP, \
+ ARM_SMCCC_KVM_FUNC_PV_LOCK)
+
#endif /*__ASSEMBLY__*/
#endif /*__LINUX_ARM_SMCCC_H*/
--
2.19.1


2020-01-16 12:50:05

by yezengruan

Subject: [PATCH v3 3/8] arm/arm64: KVM: Advertise KVM UID to guests via SMCCC

From: Will Deacon <[email protected]>

We can advertise ourselves to guests as KVM and provide a basic features
bitmap for discoverability of future hypervisor services.

Signed-off-by: Will Deacon <[email protected]>
[[email protected]: rebased]
---
virt/kvm/arm/hypercalls.c | 37 ++++++++++++++++++++++++-------------
1 file changed, 24 insertions(+), 13 deletions(-)
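
As a sketch of the guest-side counterpart (the probing logic added
earlier in this series), the guest invokes the standard "call UID"
function and compares the four returned words against the KVM UID; the
constants below are the ones used in the diff, the function itself is
illustrative:

#include <linux/arm-smccc.h>

/* Illustrative probe, assuming an SMCCC 1.1 conduit is available. */
static bool kvm_hyp_present(void)
{
	struct arm_smccc_res res;

	arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID, &res);

	return res.a0 == ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_0 &&
	       res.a1 == ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_1 &&
	       res.a2 == ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_2 &&
	       res.a3 == ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_3;
}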

diff --git a/virt/kvm/arm/hypercalls.c b/virt/kvm/arm/hypercalls.c
index 550dfa3e53cd..bdbab9ef6d2d 100644
--- a/virt/kvm/arm/hypercalls.c
+++ b/virt/kvm/arm/hypercalls.c
@@ -12,26 +12,28 @@
int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
{
u32 func_id = smccc_get_function(vcpu);
- long val = SMCCC_RET_NOT_SUPPORTED;
- u32 feature;
+ long val[4] = {};
+ u32 option;
gpa_t gpa;

+ val[0] = SMCCC_RET_NOT_SUPPORTED;
+
switch (func_id) {
case ARM_SMCCC_VERSION_FUNC_ID:
- val = ARM_SMCCC_VERSION_1_1;
+ val[0] = ARM_SMCCC_VERSION_1_1;
break;
case ARM_SMCCC_ARCH_FEATURES_FUNC_ID:
- feature = smccc_get_arg1(vcpu);
- switch (feature) {
+ option = smccc_get_arg1(vcpu);
+ switch (option) {
case ARM_SMCCC_ARCH_WORKAROUND_1:
switch (kvm_arm_harden_branch_predictor()) {
case KVM_BP_HARDEN_UNKNOWN:
break;
case KVM_BP_HARDEN_WA_NEEDED:
- val = SMCCC_RET_SUCCESS;
+ val[0] = SMCCC_RET_SUCCESS;
break;
case KVM_BP_HARDEN_NOT_REQUIRED:
- val = SMCCC_RET_NOT_REQUIRED;
+ val[0] = SMCCC_RET_NOT_REQUIRED;
break;
}
break;
@@ -41,31 +43,40 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
case KVM_SSBD_UNKNOWN:
break;
case KVM_SSBD_KERNEL:
- val = SMCCC_RET_SUCCESS;
+ val[0] = SMCCC_RET_SUCCESS;
break;
case KVM_SSBD_FORCE_ENABLE:
case KVM_SSBD_MITIGATED:
- val = SMCCC_RET_NOT_REQUIRED;
+ val[0] = SMCCC_RET_NOT_REQUIRED;
break;
}
break;
case ARM_SMCCC_HV_PV_TIME_FEATURES:
- val = SMCCC_RET_SUCCESS;
+ val[0] = SMCCC_RET_SUCCESS;
break;
}
break;
case ARM_SMCCC_HV_PV_TIME_FEATURES:
- val = kvm_hypercall_pv_features(vcpu);
+ val[0] = kvm_hypercall_pv_features(vcpu);
break;
case ARM_SMCCC_HV_PV_TIME_ST:
gpa = kvm_init_stolen_time(vcpu);
if (gpa != GPA_INVALID)
- val = gpa;
+ val[0] = gpa;
+ break;
+ case ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID:
+ val[0] = ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_0;
+ val[1] = ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_1;
+ val[2] = ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_2;
+ val[3] = ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_3;
+ break;
+ case ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID:
+ val[0] = BIT(ARM_SMCCC_KVM_FUNC_FEATURES);
break;
default:
return kvm_psci_call(vcpu);
}

- smccc_set_retval(vcpu, val, 0, 0, 0);
+ smccc_set_retval(vcpu, val[0], val[1], val[2], val[3]);
return 1;
}
--
2.19.1


2020-01-16 12:50:05

by yezengruan

Subject: [PATCH v3 5/8] KVM: arm64: Support pvlock preempted via shared structure

Implement the service call for configuring a shared structure between a
vCPU and the hypervisor, through which the hypervisor can tell the guest
whether the vCPU is running or not.

Signed-off-by: Zengruan Ye <[email protected]>
---
arch/arm/include/asm/kvm_host.h | 18 +++++++++++++
arch/arm64/include/asm/kvm_host.h | 18 +++++++++++++
arch/arm64/kvm/Makefile | 1 +
virt/kvm/arm/arm.c | 8 ++++++
virt/kvm/arm/hypercalls.c | 17 ++++++++++++
virt/kvm/arm/pvlock.c | 45 +++++++++++++++++++++++++++++++
6 files changed, 107 insertions(+)
create mode 100644 virt/kvm/arm/pvlock.c
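
For context, a sketch of what the guest side (added later in the
series) might do with the IPA returned by KVM_PV_LOCK_PREEMPTED:
discover the base and map the shared structure. pv_lock_get_base() is
the hypothetical helper sketched under patch 4, and pv_lock_state the
illustrative per-CPU pointer from the cover letter:

#include <linux/errno.h>
#include <linux/io.h>
#include <linux/percpu.h>
#include <asm/pvlock-abi.h>

static int pv_lock_init_cpu(int cpu)
{
	struct pvlock_vcpu_state *st;
	u64 base = pv_lock_get_base();	/* hypothetical helper, see patch 4 */

	if (!base)
		return -ENODEV;

	/* Map the 64-byte shared structure as normal cacheable memory. */
	st = memremap(base, sizeof(*st), MEMREMAP_WB);
	if (!st)
		return -ENOMEM;

	per_cpu(pv_lock_state, cpu) = st;
	return 0;
}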

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 556cd818eccf..dfeaf9204875 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -356,6 +356,24 @@ static inline bool kvm_arm_is_pvtime_enabled(struct kvm_vcpu_arch *vcpu_arch)
return false;
}

+static inline void kvm_arm_pvlock_preempted_init(struct kvm_vcpu_arch *vcpu_arch)
+{
+}
+
+static inline bool kvm_arm_is_pvlock_preempted_ready(struct kvm_vcpu_arch *vcpu_arch)
+{
+ return false;
+}
+
+static inline gpa_t kvm_init_pvlock(struct kvm_vcpu *vcpu)
+{
+ return GPA_INVALID;
+}
+
+static inline void kvm_update_pvlock_preempted(struct kvm_vcpu *vcpu, u64 preempted)
+{
+}
+
void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);

struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index c61260cf63c5..10f8c4bbf97e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -354,6 +354,11 @@ struct kvm_vcpu_arch {
u64 last_steal;
gpa_t base;
} steal;
+
+ /* Guest PV lock state */
+ struct {
+ gpa_t base;
+ } pv;
};

/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
@@ -515,6 +520,19 @@ static inline bool kvm_arm_is_pvtime_enabled(struct kvm_vcpu_arch *vcpu_arch)
return (vcpu_arch->steal.base != GPA_INVALID);
}

+static inline void kvm_arm_pvlock_preempted_init(struct kvm_vcpu_arch *vcpu_arch)
+{
+ vcpu_arch->pv.base = GPA_INVALID;
+}
+
+static inline bool kvm_arm_is_pvlock_preempted_ready(struct kvm_vcpu_arch *vcpu_arch)
+{
+ return (vcpu_arch->pv.base != GPA_INVALID);
+}
+
+gpa_t kvm_init_pvlock(struct kvm_vcpu *vcpu);
+void kvm_update_pvlock_preempted(struct kvm_vcpu *vcpu, u64 preempted);
+
void kvm_set_sei_esr(struct kvm_vcpu *vcpu, u64 syndrome);

struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 5ffbdc39e780..e4591f56d5f1 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -15,6 +15,7 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arm.o $(KVM)/arm/mmu.o $(KVM)/arm/mmio.
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/psci.o $(KVM)/arm/perf.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hypercalls.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/pvtime.o
+kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/pvlock.o

kvm-$(CONFIG_KVM_ARM_HOST) += inject_fault.o regmap.o va_layout.o
kvm-$(CONFIG_KVM_ARM_HOST) += hyp.o hyp-init.o handle_exit.o
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 8de4daf25097..36d57e77d3c4 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -383,6 +383,8 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)

kvm_arm_pvtime_vcpu_init(&vcpu->arch);

+ kvm_arm_pvlock_preempted_init(&vcpu->arch);
+
return kvm_vgic_vcpu_init(vcpu);
}

@@ -421,6 +423,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
vcpu_set_wfx_traps(vcpu);

vcpu_ptrauth_setup_lazy(vcpu);
+
+ if (kvm_arm_is_pvlock_preempted_ready(&vcpu->arch))
+ kvm_update_pvlock_preempted(vcpu, 0);
}

void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
@@ -434,6 +439,9 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
vcpu->cpu = -1;

kvm_arm_set_running_vcpu(NULL);
+
+ if (kvm_arm_is_pvlock_preempted_ready(&vcpu->arch))
+ kvm_update_pvlock_preempted(vcpu, 1);
}

static void vcpu_power_off(struct kvm_vcpu *vcpu)
diff --git a/virt/kvm/arm/hypercalls.c b/virt/kvm/arm/hypercalls.c
index bdbab9ef6d2d..7f90b413641c 100644
--- a/virt/kvm/arm/hypercalls.c
+++ b/virt/kvm/arm/hypercalls.c
@@ -72,6 +72,23 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
break;
case ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID:
val[0] = BIT(ARM_SMCCC_KVM_FUNC_FEATURES);
+ if (!vcpu_el1_is_32bit(vcpu))
+ val[0] |= BIT(ARM_SMCCC_KVM_FUNC_PV_LOCK);
+ break;
+ case ARM_SMCCC_VENDOR_HYP_KVM_PV_LOCK_FUNC_ID:
+ if (vcpu_el1_is_32bit(vcpu))
+ break;
+ option = smccc_get_arg1(vcpu);
+ switch (option) {
+ case KVM_PV_LOCK_FEATURES:
+ val[0] = SMCCC_RET_SUCCESS;
+ break;
+ case KVM_PV_LOCK_PREEMPTED:
+ gpa = kvm_init_pvlock(vcpu);
+ if (gpa != GPA_INVALID)
+ val[0] = gpa;
+ break;
+ }
break;
default:
return kvm_psci_call(vcpu);
diff --git a/virt/kvm/arm/pvlock.c b/virt/kvm/arm/pvlock.c
new file mode 100644
index 000000000000..0644b23be51e
--- /dev/null
+++ b/virt/kvm/arm/pvlock.c
@@ -0,0 +1,45 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright(c) 2019 Huawei Technologies Co., Ltd
+ * Author: Zengruan Ye <[email protected]>
+ */
+
+#include <linux/arm-smccc.h>
+#include <linux/kvm_host.h>
+
+#include <asm/pvlock-abi.h>
+
+#include <kvm/arm_hypercalls.h>
+
+gpa_t kvm_init_pvlock(struct kvm_vcpu *vcpu)
+{
+ struct pvlock_vcpu_state init_values = {};
+ struct kvm *kvm = vcpu->kvm;
+ u64 base = vcpu->arch.pv.base;
+ int idx;
+
+ if (base == GPA_INVALID)
+ return base;
+
+ idx = srcu_read_lock(&kvm->srcu);
+ kvm_write_guest(kvm, base, &init_values, sizeof(init_values));
+ srcu_read_unlock(&kvm->srcu, idx);
+
+ return base;
+}
+
+void kvm_update_pvlock_preempted(struct kvm_vcpu *vcpu, u64 preempted)
+{
+ int idx;
+ u64 offset;
+ __le64 preempted_le;
+ struct kvm *kvm = vcpu->kvm;
+ u64 base = vcpu->arch.pv.base;
+
+ preempted_le = cpu_to_le64(preempted);
+
+ idx = srcu_read_lock(&kvm->srcu);
+ offset = offsetof(struct pvlock_vcpu_state, preempted);
+ kvm_put_guest(kvm, base + offset, preempted_le, u64);
+ srcu_read_unlock(&kvm->srcu, idx);
+}
--
2.19.1