2019-03-29 13:55:43

by Xiaoyao Li

Subject: [PATCH v4 0/2] Switch MSR_MISC_FEATURES_ENABLES and one optimization

Patch 1 switches MSR_MISC_FEATURES_ENABLES between host and guest in
vmx_prepare_switch_to_{guest,host} so that the host's cpuid faulting and
ring3mwait settings do not leak to the guest. Cpuid faulting enabled in the
host can cause guest boot failure, and KVM does not expose ring3mwait to the
guest yet, so neither feature should leak to the guest.

Patch 2 optimizes the switch of MSR_MISC_FEATURES_ENABLES by avoiding WRMSR
whenever possible to save cycles.

==changelog==
v3->v4:
- remove the helper function and refine some code, per Sean Christopherson's
comments.

v2->v3:
- use msr_misc_features_shadow instead of reading the hardware MSR, from Sean
Christopherson
- avoid WRMSR whenever possible, from Sean Christopherson.

v1->v2:
- move the save/restore of the cpuid faulting bit to
vmx_prepare_switch_to_guest/vmx_prepare_switch_to_host to avoid an RDMSR on
every vmentry, based on Paolo's comment.

Xiaoyao Li (2):
kvm/vmx: Switch MSR_MISC_FEATURES_ENABLES between host and guest
x86/vmx: optimize MSR_MISC_FEATURES_ENABLES switch

arch/x86/include/asm/kvm_host.h | 2 ++
arch/x86/kernel/process.c | 1 +
arch/x86/kvm/vmx/vmx.c | 17 +++++++++++++++++
arch/x86/kvm/x86.c | 11 ++++++++---
4 files changed, 28 insertions(+), 3 deletions(-)

--
2.19.1



2019-03-29 13:55:47

by Xiaoyao Li

Subject: [PATCH v4 1/2] kvm/vmx: Switch MSR_MISC_FEATURES_ENABLES between host and guest

There are two defined bits in MSR_MISC_FEATURES_ENABLES, bit 0 for cpuid
faulting and bit 1 for ring3mwait.
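
For reference, the relevant definitions in arch/x86/include/asm/msr-index.h
look roughly like this (reference sketch only, not part of this patch):

#define MSR_MISC_FEATURES_ENABLES			0x00000140
#define MSR_MISC_FEATURES_ENABLES_CPUID_FAULT_BIT	0	/* CPUID at CPL > 0 raises #GP */
#define MSR_MISC_FEATURES_ENABLES_CPUID_FAULT		BIT_ULL(MSR_MISC_FEATURES_ENABLES_CPUID_FAULT_BIT)
#define MSR_MISC_FEATURES_ENABLES_RING3MWAIT_BIT	1	/* MONITOR/MWAIT allowed at CPL > 0 */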

== cpuid faulting ==
cpuid faulting is a feature of the CPUID instruction. When cpuid faulting is
enabled, executing CPUID outside system-management mode (SMM) at CPL > 0
causes a general-protection fault (#GP).

Detailed information about this feature can be found at
https://www.intel.com/content/dam/www/public/us/en/documents/application-notes/virtualization-technology-flexmigration-application-note.pdf
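
On the host side, Linux drives cpuid faulting per task. As a purely
illustrative user-space sketch (not part of this patch; it assumes a kernel
with ARCH_SET_CPUID support), a task that disables CPUID for itself is killed
with SIGSEGV on its next CPUID:

#include <asm/prctl.h>		/* ARCH_SET_CPUID */
#include <sys/syscall.h>
#include <unistd.h>
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* Ask the kernel to enable cpuid faulting for this task. */
	if (syscall(SYS_arch_prctl, ARCH_SET_CPUID, 0))
		return 1;

	/* This CPUID now raises #GP; the kernel delivers SIGSEGV. */
	__get_cpuid(0, &eax, &ebx, &ecx, &edx);

	printf("max basic leaf: %u\n", eax);	/* never reached */
	return 0;
}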

KVM currently provides software emulation of this feature for the guest.
However, because cpuid faulting takes priority over the CPUID VM exit (Intel
SDM vol. 3, 25.1.1), the host's cpuid faulting setting can leak to the guest.
If the host enables cpuid faulting by setting bit 0 of
MSR_MISC_FEATURES_ENABLES, the setting carries over to the guest since
MSR_MISC_FEATURES_ENABLES is not switched yet. As a result, when the guest
executes CPUID at CPL > 0, it gets a #GP instead of a CPUID VM exit.

This issue causes guest boot failure when the guest uses *modprobe* to load
modules: *modprobe* executes the CPUID instruction, which triggers a #GP in
the guest. Since the guest's #GP handler does not handle cpuid faulting, the
guest fails to boot.

== ring3mwait ==
Ring3mwait is a feature specific to the Xeon Phi Product Family x200 series
that allows the MONITOR and MWAIT instructions to be executed in rings other
than ring 0. The feature is enabled by setting bit 1 in
MSR_MISC_FEATURES_ENABLES; the register can also be read to determine whether
the instructions are enabled outside ring 0.

A description of this feature can be found at
https://software.intel.com/en-us/blogs/2016/10/06/intel-xeon-phi-product-family-x200-knl-user-mode-ring-3-monitor-and-mwait
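
As a rough illustration only (not part of this patch), once bit 1 is set a
ring-3 task on such a CPU could issue MONITOR/MWAIT directly, e.g.:

static inline void ring3_monitor(const void *addr)
{
	/* MONITOR: address in RAX, extensions in RCX, hints in RDX. */
	asm volatile("monitor" : : "a"(addr), "c"(0), "d"(0) : "memory");
}

static inline void ring3_mwait(void)
{
	/* MWAIT: hints in RAX, extensions in RCX. */
	asm volatile("mwait" : : "a"(0), "c"(0));
}

Without the enable bit, these instructions fault when executed outside ring 0.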

KVM currently does not expose ring3mwait to the guest. However, ring3mwait
can also leak to the guest if the host enables it, since
MSR_MISC_FEATURES_ENABLES is not switched.

== solution ==
From the above analysis, both cpuid faulting and ring3mwait can leak to the
guest. To fix this, MSR_MISC_FEATURES_ENABLES should be switched between host
and guest. Since MSR_MISC_FEATURES_ENABLES is Intel-specific, this patch
implements the switching only in vmx.

Because KVM provides software emulation of cpuid faulting and does not expose
ring3mwait to the guest, MSR_MISC_FEATURES_ENABLES can simply be cleared to
zero for the guest whenever either feature is enabled in the host.

Signed-off-by: Xiaoyao Li <[email protected]>
---
arch/x86/kernel/process.c | 1 +
arch/x86/kvm/vmx/vmx.c | 8 ++++++++
2 files changed, 9 insertions(+)

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 1bba1a3c0b01..94a566e79b6c 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -191,6 +191,7 @@ int set_tsc_mode(unsigned int val)
}

DEFINE_PER_CPU(u64, msr_misc_features_shadow);
+EXPORT_PER_CPU_SYMBOL_GPL(msr_misc_features_shadow);

static void set_cpuid_faulting(bool on)
{
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 270c6566fd5a..cb0f63879a25 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1064,6 +1064,9 @@ void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
vmx->loaded_cpu_state = vmx->loaded_vmcs;
host_state = &vmx->loaded_cpu_state->host_state;

+ if (this_cpu_read(msr_misc_features_shadow))
+ wrmsrl(MSR_MISC_FEATURES_ENABLES, 0ULL);
+
/*
* Set host fs and gs selectors. Unfortunately, 22.2.3 does not
* allow segment selectors with cpl > 0 or ti == 1.
@@ -1123,6 +1126,7 @@ void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
static void vmx_prepare_switch_to_host(struct vcpu_vmx *vmx)
{
struct vmcs_host_state *host_state;
+ u64 msrval;

if (!vmx->loaded_cpu_state)
return;
@@ -1133,6 +1137,10 @@ static void vmx_prepare_switch_to_host(struct vcpu_vmx *vmx)
++vmx->vcpu.stat.host_state_reload;
vmx->loaded_cpu_state = NULL;

+ msrval = this_cpu_read(msr_misc_features_shadow);
+ if (msrval)
+ wrmsrl(MSR_MISC_FEATURES_ENABLES, msrval);
+
#ifdef CONFIG_X86_64
rdmsrl(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base);
#endif
--
2.19.1


2019-03-29 13:55:55

by Xiaoyao Li

Subject: [PATCH v4 2/2] x86/vmx: optimize MSR_MISC_FEATURES_ENABLES switch

KVM needs to switch MSR_MISC_FEATURES_ENABLES between host and guest on every
pcpu/vcpu context switch. Since WRMSR is expensive, this patch saves cycles
by avoiding the WRMSR of MSR_MISC_FEATURES_ENABLES whenever possible.

If the host's value is zero, nothing needs to be done, since the guest can
use KVM's emulated cpuid faulting.

If the host's value is non-zero, MSR_MISC_FEATURES_ENABLES need not be
cleared unconditionally. When the guest's value equals the host's value,
hardware cpuid faulting can be used as-is, avoiding the WRMSR of
MSR_MISC_FEATURES_ENABLES.

Since hardware cpuid faulting takes priority over the CPUID VM exit, the
guest's value must be propagated to hardware on a guest WRMSR whenever
hardware cpuid faulting is in use for the guest.

Note that MSR_MISC_FEATURES_ENABLES exists only on Intel CPUs, so this
optimization applies only to vmx.
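
In summary, the switching policy implemented below can be sketched as follows
(illustration only, mirroring the hunks in this patch):

/*
 * host value == 0:            never touch the MSR; the guest relies on
 *                             KVM's software emulation of cpuid faulting.
 * host value != 0, == guest:  skip the WRMSR on both switch-to-guest and
 *                             switch-to-host; hardware already matches.
 * host value != 0, != guest:  WRMSR the guest value on switch-to-guest and
 *                             restore the host value on switch-to-host.
 *
 * A guest WRMSR to MSR_MISC_FEATURES_ENABLES while the vCPU state is loaded
 * and the host value is non-zero is written straight to hardware so that
 * hardware cpuid faulting stays in sync.
 */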

Signed-off-by: Xiaoyao Li <[email protected]>
---
arch/x86/include/asm/kvm_host.h | 2 ++
arch/x86/kvm/vmx/vmx.c | 15 ++++++++++++---
arch/x86/kvm/x86.c | 11 ++++++++---
3 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 2c53df4a5a2a..105691e069d9 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1343,6 +1343,8 @@ void kvm_lmsw(struct kvm_vcpu *vcpu, unsigned long msw);
void kvm_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l);
int kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr);

+bool kvm_misc_features_enables_msr_invalid(struct kvm_vcpu *vcpu, u64 data);
+
int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr);
int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr);

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index cb0f63879a25..07a598663ace 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1041,6 +1041,7 @@ void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
unsigned long fs_base, gs_base;
u16 fs_sel, gs_sel;
int i;
+ u64 msrval;

vmx->req_immediate_exit = false;

@@ -1064,8 +1065,9 @@ void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
vmx->loaded_cpu_state = vmx->loaded_vmcs;
host_state = &vmx->loaded_cpu_state->host_state;

- if (this_cpu_read(msr_misc_features_shadow))
- wrmsrl(MSR_MISC_FEATURES_ENABLES, 0ULL);
+ msrval = this_cpu_read(msr_misc_features_shadow);
+ if (msrval && msrval != vcpu->arch.msr_misc_features_enables)
+ wrmsrl(MSR_MISC_FEATURES_ENABLES, vcpu->arch.msr_misc_features_enables);

/*
* Set host fs and gs selectors. Unfortunately, 22.2.3 does not
@@ -1138,7 +1140,7 @@ static void vmx_prepare_switch_to_host(struct vcpu_vmx *vmx)
vmx->loaded_cpu_state = NULL;

msrval = this_cpu_read(msr_misc_features_shadow);
- if (msrval)
+ if (msrval && msrval != vmx->vcpu.arch.msr_misc_features_enables)
wrmsrl(MSR_MISC_FEATURES_ENABLES, msrval);

#ifdef CONFIG_X86_64
@@ -2027,6 +2029,13 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
else
vmx->pt_desc.guest.addr_a[index / 2] = data;
break;
+ case MSR_MISC_FEATURES_ENABLES:
+ if (kvm_misc_features_enables_msr_invalid(vcpu, data))
+ return 1;
+ if (vmx->loaded_cpu_state && this_cpu_read(msr_misc_features_shadow))
+ wrmsrl(MSR_MISC_FEATURES_ENABLES, data);
+ vcpu->arch.msr_misc_features_enables = data;
+ break;
case MSR_TSC_AUX:
if (!msr_info->host_initiated &&
!guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP))
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ad1df965574e..749aa4c9437a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2449,6 +2449,13 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
&vcpu->arch.st.steal, sizeof(struct kvm_steal_time));
}

+bool kvm_misc_features_enables_msr_invalid(struct kvm_vcpu *vcpu, u64 data)
+{
+ return (data & ~MSR_MISC_FEATURES_ENABLES_CPUID_FAULT) ||
+ (data && !supports_cpuid_fault(vcpu));
+}
+EXPORT_SYMBOL_GPL(kvm_misc_features_enables_msr_invalid);
+
int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
{
bool pr = false;
@@ -2669,9 +2676,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
vcpu->arch.msr_platform_info = data;
break;
case MSR_MISC_FEATURES_ENABLES:
- if (data & ~MSR_MISC_FEATURES_ENABLES_CPUID_FAULT ||
- (data & MSR_MISC_FEATURES_ENABLES_CPUID_FAULT &&
- !supports_cpuid_fault(vcpu)))
+ if (kvm_misc_features_enables_msr_invalid(vcpu, data))
return 1;
vcpu->arch.msr_misc_features_enables = data;
break;
--
2.19.1