2024-05-23 12:19:20

by Ravi Bangoria

Subject: [PATCH v3 0/3] KVM: SEV-ES: Fix KVM_{GET|SET}_MSRS and LBRV handling

Fix a couple of interrelated issues:

o KVM currently allows userspace to access MSRs even after the VMSA is
encrypted. This can result in issues if an MSR update has side effects on
the VM configuration. Patch 1 fixes that by preventing KVM_{GET|SET}_MSRS
for SEV-ES guests once the VMSA is encrypted (a short snippet after this
list illustrates the problematic access).

o As documented in the APM, LBR Virtualization must be enabled for SEV-ES
guests. However, KVM currently enables LBRV unconditionally without
checking the feature bit, which is wrong. Patch 2 prevents SEV-ES guest
creation when LBRV support is missing.

o Although LBRV is enabled for SEV-ES guests, MSR_IA32_DEBUGCTLMSR is
still intercepted, which can crash an SEV-ES guest if the MSR is written
inadvertently. Patch 3 fixes it.
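
For illustration, the first issue boils down to userspace being able to
do something like this against a vCPU whose VMSA is already encrypted
(hypothetical minimal snippet; assumes an existing vCPU fd, error
handling mostly omitted):

  #include <linux/kvm.h>
  #include <stdlib.h>
  #include <sys/ioctl.h>

  /*
   * Read MSR_IA32_DEBUGCTLMSR (0x1d9) via KVM_GET_MSRS. KVM currently
   * services this even after KVM_SEV_LAUNCH_UPDATE_VMSA, and the
   * corresponding KVM_SET_MSRS can have side effects (e.g. toggling
   * LBRV) behind the guest's back.
   */
  static int read_debugctl(int vcpu_fd)
  {
          struct kvm_msrs *msrs;
          int ret;

          msrs = calloc(1, sizeof(*msrs) + sizeof(msrs->entries[0]));
          if (!msrs)
                  return -1;
          msrs->nmsrs = 1;
          msrs->entries[0].index = 0x1d9;  /* MSR_IA32_DEBUGCTLMSR */

          /* returns the number of MSRs successfully read */
          ret = ioctl(vcpu_fd, KVM_GET_MSRS, msrs);
          free(msrs);
          return ret;
  }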

Patches prepared on kvm/next (6f627b425378)

v2: https://lore.kernel.org/r/[email protected]
v2->v3: Fix all related issues along with the MSR_IA32_DEBUGCTLMSR
interception:
- Block KVM_{GET|SET}_MSRS for SEV-ES guests post VMSA encryption (new)
- Block SEV-ES guest creation when LBRV is disabled/not supported (new)
- Move the LBRV enablement code to after VMSA encryption to avoid a scenario
where LBRV can be disabled inadvertently through an MSR_IA32_DEBUGCTLMSR
write. (new)
- Removed all 'Reviewed-by' since the code has changed quite a bit.

Nikunj A Dadhania (1):
KVM: SEV-ES: Prevent MSR access post VMSA encryption

Ravi Bangoria (2):
KVM: SEV-ES: Disallow SEV-ES guests when X86_FEATURE_LBRV is absent
KVM: SEV-ES: Fix LBRV code

arch/x86/kvm/svm/sev.c | 21 +++++++++++++++------
arch/x86/kvm/svm/svm.c | 42 ++++++++++++++++++++++++++++++++----------
arch/x86/kvm/svm/svm.h | 7 ++++---
3 files changed, 51 insertions(+), 19 deletions(-)

--
2.45.1



2024-05-23 12:19:38

by Ravi Bangoria

Subject: [PATCH v3 1/3] KVM: SEV-ES: Prevent MSR access post VMSA encryption

From: Nikunj A Dadhania <[email protected]>

KVM currently allows userspace to read/write MSRs even after the VMSA is
encrypted. This can cause unintended issues if the MSR access has side
effects. For example, while migrating a guest, userspace could attempt to
migrate MSR_IA32_DEBUGCTLMSR and end up unintentionally disabling LBRV on
the target. Fix this by preventing access to those MSRs which are context
switched via the VMSA, once the VMSA is encrypted.

Suggested-by: Sean Christopherson <[email protected]>
Signed-off-by: Nikunj A Dadhania <[email protected]>
Signed-off-by: Ravi Bangoria <[email protected]>
---
arch/x86/kvm/svm/svm.c | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 3d0549ca246f..489b0183f37d 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2834,10 +2834,24 @@ static int svm_get_msr_feature(struct kvm_msr_entry *msr)
return 0;
}

+static bool
+sev_es_prevent_msr_access(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+{
+ return sev_es_guest(vcpu->kvm) &&
+ vcpu->arch.guest_state_protected &&
+ svm_msrpm_offset(msr_info->index) != MSR_INVALID &&
+ !msr_write_intercepted(vcpu, msr_info->index);
+}
+
static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
{
struct vcpu_svm *svm = to_svm(vcpu);

+ if (sev_es_prevent_msr_access(vcpu, msr_info)) {
+ msr_info->data = 0;
+ return 0;
+ }
+
switch (msr_info->index) {
case MSR_AMD64_TSC_RATIO:
if (!msr_info->host_initiated &&
@@ -2988,6 +3002,10 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)

u32 ecx = msr->index;
u64 data = msr->data;
+
+ if (sev_es_prevent_msr_access(vcpu, msr))
+ return 0;
+
switch (ecx) {
case MSR_AMD64_TSC_RATIO:

--
2.45.1


2024-05-23 12:19:53

by Ravi Bangoria

Subject: [PATCH v3 3/3] KVM: SEV-ES: Fix LBRV code

As documented in APM[1], LBR Virtualization must be enabled for SEV-ES
guests. Although KVM currently enforces LBRV for SEV-ES guests, there
are multiple issues with it:

o MSR_IA32_DEBUGCTLMSR is still intercepted. Since MSR_IA32_DEBUGCTLMSR
interception is used to dynamically toggle LBRV for performance reasons,
this can be fatal for SEV-ES guests. For example, an SEV-ES guest on Zen3:

[guest ~]# wrmsr 0x1d9 0x4
KVM: entry failed, hardware error 0xffffffff
EAX=00000004 EBX=00000000 ECX=000001d9 EDX=00000000

Fix this by never intercepting MSR_IA32_DEBUGCTLMSR for SEV-ES guests.
No additional save/restore logic is required since MSR_IA32_DEBUGCTLMSR
is of swap type A (the hardware saves and restores it via the VMSA on
VMRUN/#VMEXIT).

o KVM will disable LBRV if userspace sets MSR_IA32_DEBUGCTLMSR before the
VMSA is encrypted. Fix this by moving the LBRV enablement code to after
VMSA encryption.

[1]: AMD64 Architecture Programmer's Manual Pub. 40332, Rev. 4.07 - June
2023, Vol 2, 15.35.2 Enabling SEV-ES.
https://bugzilla.kernel.org/attachment.cgi?id=304653

Co-developed-by: Nikunj A Dadhania <[email protected]>
Signed-off-by: Nikunj A Dadhania <[email protected]>
Signed-off-by: Ravi Bangoria <[email protected]>
---
arch/x86/kvm/svm/sev.c | 13 ++++++++-----
arch/x86/kvm/svm/svm.c | 8 +++++++-
arch/x86/kvm/svm/svm.h | 3 ++-
3 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 1a2bde579727..3f0c3dbce0c5 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -851,6 +851,14 @@ static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu,
*/
fpstate_set_confidential(&vcpu->arch.guest_fpu);
vcpu->arch.guest_state_protected = true;
+
+ /*
+ * SEV-ES guest mandates LBR Virtualization to be _always_ ON. Enable it
+ * only after setting guest_state_protected because KVM_SET_MSRS allows
+ * dynamic toggling of LBRV (for performance reason) on write access to
+ * MSR_IA32_DEBUGCTLMSR when guest_state_protected is not set.
+ */
+ svm_enable_lbrv(vcpu);
return 0;
}

@@ -4279,7 +4287,6 @@ static void sev_es_init_vmcb(struct vcpu_svm *svm)
struct kvm_vcpu *vcpu = &svm->vcpu;

svm->vmcb->control.nested_ctl |= SVM_NESTED_CTL_SEV_ES_ENABLE;
- svm->vmcb->control.virt_ext |= LBR_CTL_ENABLE_MASK;

/*
* An SEV-ES guest requires a VMSA area that is a separate from the
@@ -4331,10 +4338,6 @@ static void sev_es_init_vmcb(struct vcpu_svm *svm)
/* Clear intercepts on selected MSRs */
set_msr_interception(vcpu, svm->msrpm, MSR_EFER, 1, 1);
set_msr_interception(vcpu, svm->msrpm, MSR_IA32_CR_PAT, 1, 1);
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHFROMIP, 1, 1);
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHTOIP, 1, 1);
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTFROMIP, 1, 1);
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTTOIP, 1, 1);
}

void sev_init_vmcb(struct vcpu_svm *svm)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index dcb5eb00a4f5..011e8e6c5c53 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -99,6 +99,7 @@ static const struct svm_direct_access_msrs {
{ .index = MSR_IA32_SPEC_CTRL, .always = false },
{ .index = MSR_IA32_PRED_CMD, .always = false },
{ .index = MSR_IA32_FLUSH_CMD, .always = false },
+ { .index = MSR_IA32_DEBUGCTLMSR, .always = false },
{ .index = MSR_IA32_LASTBRANCHFROMIP, .always = false },
{ .index = MSR_IA32_LASTBRANCHTOIP, .always = false },
{ .index = MSR_IA32_LASTINTFROMIP, .always = false },
@@ -990,7 +991,7 @@ void svm_copy_lbrs(struct vmcb *to_vmcb, struct vmcb *from_vmcb)
vmcb_mark_dirty(to_vmcb, VMCB_LBR);
}

-static void svm_enable_lbrv(struct kvm_vcpu *vcpu)
+void svm_enable_lbrv(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);

@@ -1000,6 +1001,9 @@ static void svm_enable_lbrv(struct kvm_vcpu *vcpu)
set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTFROMIP, 1, 1);
set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTTOIP, 1, 1);

+ if (sev_es_guest(vcpu->kvm))
+ set_msr_interception(vcpu, svm->msrpm, MSR_IA32_DEBUGCTLMSR, 1, 1);
+
/* Move the LBR msrs to the vmcb02 so that the guest can see them. */
if (is_guest_mode(vcpu))
svm_copy_lbrs(svm->vmcb, svm->vmcb01.ptr);
@@ -1009,6 +1013,8 @@ static void svm_disable_lbrv(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);

+ KVM_BUG_ON(sev_es_guest(vcpu->kvm), vcpu->kvm);
+
svm->vmcb->control.virt_ext &= ~LBR_CTL_ENABLE_MASK;
set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHFROMIP, 0, 0);
set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHTOIP, 0, 0);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 2d7fd09c08c9..c483d7149420 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -30,7 +30,7 @@
#define IOPM_SIZE PAGE_SIZE * 3
#define MSRPM_SIZE PAGE_SIZE * 2

-#define MAX_DIRECT_ACCESS_MSRS 47
+#define MAX_DIRECT_ACCESS_MSRS 48
#define MSRPM_OFFSETS 32
extern u32 msrpm_offsets[MSRPM_OFFSETS] __read_mostly;
extern bool npt_enabled;
@@ -582,6 +582,7 @@ u32 *svm_vcpu_alloc_msrpm(void);
void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, u32 *msrpm);
void svm_vcpu_free_msrpm(u32 *msrpm);
void svm_copy_lbrs(struct vmcb *to_vmcb, struct vmcb *from_vmcb);
+void svm_enable_lbrv(struct kvm_vcpu *vcpu);
void svm_update_lbrv(struct kvm_vcpu *vcpu);

int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer);
--
2.45.1


2024-05-23 12:20:09

by Ravi Bangoria

Subject: [PATCH v3 2/3] KVM: SEV-ES: Disallow SEV-ES guests when X86_FEATURE_LBRV is absent

As documented in APM[1], LBR Virtualization must be enabled for SEV-ES
guests. So, prevent SEV-ES guest creation when LBRV support is missing.

[1]: AMD64 Architecture Programmer's Manual Pub. 40332, Rev. 4.07 - June
2023, Vol 2, 15.35.2 Enabling SEV-ES.
https://bugzilla.kernel.org/attachment.cgi?id=304653

Fixes: 376c6d285017 ("KVM: SVM: Provide support for SEV-ES vCPU creation/loading")
Signed-off-by: Ravi Bangoria <[email protected]>
---
arch/x86/kvm/svm/sev.c | 8 +++++++-
arch/x86/kvm/svm/svm.c | 16 +++++++---------
arch/x86/kvm/svm/svm.h | 4 ++--
3 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 176ba117413a..1a2bde579727 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2856,7 +2856,7 @@ void __init sev_set_cpu_caps(void)
}
}

-void __init sev_hardware_setup(void)
+void __init sev_hardware_setup(int lbrv)
{
unsigned int eax, ebx, ecx, edx, sev_asid_count, sev_es_asid_count;
bool sev_snp_supported = false;
@@ -2933,6 +2933,12 @@ void __init sev_hardware_setup(void)
if (!boot_cpu_has(X86_FEATURE_SEV_ES))
goto out;

+ if (!lbrv) {
+ WARN_ONCE(!boot_cpu_has(X86_FEATURE_LBRV),
+ "LBRV must be present for SEV-ES support");
+ goto out;
+ }
+
/* Has the system been allocated ASIDs for SEV-ES? */
if (min_sev_asid == 1)
goto out;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 489b0183f37d..dcb5eb00a4f5 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -5308,11 +5308,17 @@ static __init int svm_hardware_setup(void)

nrips = nrips && boot_cpu_has(X86_FEATURE_NRIPS);

+ if (lbrv) {
+ if (!boot_cpu_has(X86_FEATURE_LBRV))
+ lbrv = false;
+ else
+ pr_info("LBR virtualization supported\n");
+ }
/*
* Note, SEV setup consumes npt_enabled and enable_mmio_caching (which
* may be modified by svm_adjust_mmio_mask()), as well as nrips.
*/
- sev_hardware_setup();
+ sev_hardware_setup(lbrv);

svm_hv_hardware_setup();

@@ -5361,14 +5367,6 @@ static __init int svm_hardware_setup(void)
svm_x86_ops.set_vnmi_pending = NULL;
}

-
- if (lbrv) {
- if (!boot_cpu_has(X86_FEATURE_LBRV))
- lbrv = false;
- else
- pr_info("LBR virtualization supported\n");
- }
-
if (!enable_pmu)
pr_info("PMU virtualization is disabled\n");

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 555c55f50298..2d7fd09c08c9 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -728,7 +728,7 @@ struct page *snp_safe_alloc_page(struct kvm_vcpu *vcpu);
void sev_free_vcpu(struct kvm_vcpu *vcpu);
void sev_vm_destroy(struct kvm *kvm);
void __init sev_set_cpu_caps(void);
-void __init sev_hardware_setup(void);
+void __init sev_hardware_setup(int lbrv);
void sev_hardware_unsetup(void);
int sev_cpu_init(struct svm_cpu_data *sd);
int sev_dev_get_attr(u32 group, u64 attr, u64 *val);
@@ -747,7 +747,7 @@ static inline struct page *snp_safe_alloc_page(struct kvm_vcpu *vcpu) {
static inline void sev_free_vcpu(struct kvm_vcpu *vcpu) {}
static inline void sev_vm_destroy(struct kvm *kvm) {}
static inline void __init sev_set_cpu_caps(void) {}
-static inline void __init sev_hardware_setup(void) {}
+static inline void __init sev_hardware_setup(int lbrv) {}
static inline void sev_hardware_unsetup(void) {}
static inline int sev_cpu_init(struct svm_cpu_data *sd) { return 0; }
static inline int sev_dev_get_attr(u32 group, u64 attr, u64 *val) { return -ENXIO; }
--
2.45.1


2024-05-28 16:32:00

by Paolo Bonzini

Subject: Re: [PATCH v3 1/3] KVM: SEV-ES: Prevent MSR access post VMSA encryption

On 5/23/24 14:18, Ravi Bangoria wrote:
> From: Nikunj A Dadhania <[email protected]>
>
> KVM currently allows userspace to read/write MSRs even after the VMSA is
> encrypted. This can cause unintended issues if the MSR access has side
> effects. For example, while migrating a guest, userspace could attempt to
> migrate MSR_IA32_DEBUGCTLMSR and end up unintentionally disabling LBRV on
> the target. Fix this by preventing access to those MSRs which are context
> switched via the VMSA, once the VMSA is encrypted.
>
> Suggested-by: Sean Christopherson <[email protected]>
> Signed-off-by: Nikunj A Dadhania <[email protected]>
> Signed-off-by: Ravi Bangoria <[email protected]>
> ---
> arch/x86/kvm/svm/svm.c | 18 ++++++++++++++++++
> 1 file changed, 18 insertions(+)
>
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 3d0549ca246f..489b0183f37d 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -2834,10 +2834,24 @@ static int svm_get_msr_feature(struct kvm_msr_entry *msr)
> return 0;
> }
>
> +static bool
> +sev_es_prevent_msr_access(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> +{
> + return sev_es_guest(vcpu->kvm) &&
> + vcpu->arch.guest_state_protected &&
> + svm_msrpm_offset(msr_info->index) != MSR_INVALID &&
> + !msr_write_intercepted(vcpu, msr_info->index);
> +}
> +
> static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> {
> struct vcpu_svm *svm = to_svm(vcpu);
>
> + if (sev_es_prevent_msr_access(vcpu, msr_info)) {
> + msr_info->data = 0;
> + return 0;

This should return -EINVAL, not 0. Likewise below in svm_set_msr().
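
i.e. something along the lines of:

	if (sev_es_prevent_msr_access(vcpu, msr_info)) {
		msr_info->data = 0;
		return -EINVAL;
	}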

Paolo

> + }
> +
> switch (msr_info->index) {
> case MSR_AMD64_TSC_RATIO:
> if (!msr_info->host_initiated &&
> @@ -2988,6 +3002,10 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
>
> u32 ecx = msr->index;
> u64 data = msr->data;
> +
> + if (sev_es_prevent_msr_access(vcpu, msr))
> + return 0;
> +
> switch (ecx) {
> case MSR_AMD64_TSC_RATIO:
>


2024-05-28 16:34:44

by Paolo Bonzini

Subject: Re: [PATCH v3 2/3] KVM: SEV-ES: Disallow SEV-ES guests when X86_FEATURE_LBRV is absent

On 5/23/24 14:18, Ravi Bangoria wrote:
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 489b0183f37d..dcb5eb00a4f5 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -5308,11 +5308,17 @@ static __init int svm_hardware_setup(void)
>
> nrips = nrips && boot_cpu_has(X86_FEATURE_NRIPS);
>
> + if (lbrv) {
> + if (!boot_cpu_has(X86_FEATURE_LBRV))
> + lbrv = false;
> + else
> + pr_info("LBR virtualization supported\n");
> + }
> /*
> * Note, SEV setup consumes npt_enabled and enable_mmio_caching (which
> * may be modified by svm_adjust_mmio_mask()), as well as nrips.
> */

Since it consumes nrips, just make lbrv non-static:

/* enable/disable Next RIP Save */
int nrips = true;
module_param(nrips, int, 0444);
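
The same pattern would then work for lbrv, e.g. (sketch):

	/* enable/disable LBR virtualization */
	int lbrv = true;
	module_param(lbrv, int, 0444);

together with a matching "extern int lbrv;" declaration in svm.h, so that
sev_hardware_setup() can read lbrv directly instead of taking it as a
parameter.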

Paolo


2024-05-29 10:44:28

by Ravi Bangoria

Subject: Re: [PATCH v3 2/3] KVM: SEV-ES: Disallow SEV-ES guests when X86_FEATURE_LBRV is absent

On 5/28/2024 10:03 PM, Paolo Bonzini wrote:
> On 5/23/24 14:18, Ravi Bangoria wrote:
>> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
>> index 489b0183f37d..dcb5eb00a4f5 100644
>> --- a/arch/x86/kvm/svm/svm.c
>> +++ b/arch/x86/kvm/svm/svm.c
>> @@ -5308,11 +5308,17 @@ static __init int svm_hardware_setup(void)
>>         nrips = nrips && boot_cpu_has(X86_FEATURE_NRIPS);
>>   +    if (lbrv) {
>> +        if (!boot_cpu_has(X86_FEATURE_LBRV))
>> +            lbrv = false;
>> +        else
>> +            pr_info("LBR virtualization supported\n");
>> +    }
>>       /*
>>        * Note, SEV setup consumes npt_enabled and enable_mmio_caching (which
>>        * may be modified by svm_adjust_mmio_mask()), as well as nrips.
>>        */
>
> Since it consumes nrips, just make lbrv non-static:

Sure.

Thanks,
Ravi

2024-05-29 10:44:38

by Ravi Bangoria

Subject: Re: [PATCH v3 1/3] KVM: SEV-ES: Prevent MSR access post VMSA encryption

On 5/28/2024 10:01 PM, Paolo Bonzini wrote:
> On 5/23/24 14:18, Ravi Bangoria wrote:
>> From: Nikunj A Dadhania <[email protected]>
>>
>> KVM currently allows userspace to read/write MSRs even after the VMSA is
>> encrypted. This can cause unintended issues if the MSR access has side
>> effects. For example, while migrating a guest, userspace could attempt to
>> migrate MSR_IA32_DEBUGCTLMSR and end up unintentionally disabling LBRV on
>> the target. Fix this by preventing access to those MSRs which are context
>> switched via the VMSA, once the VMSA is encrypted.
>>
>> Suggested-by: Sean Christopherson <[email protected]>
>> Signed-off-by: Nikunj A Dadhania <[email protected]>
>> Signed-off-by: Ravi Bangoria <[email protected]>
>> ---
>>   arch/x86/kvm/svm/svm.c | 18 ++++++++++++++++++
>>   1 file changed, 18 insertions(+)
>>
>> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
>> index 3d0549ca246f..489b0183f37d 100644
>> --- a/arch/x86/kvm/svm/svm.c
>> +++ b/arch/x86/kvm/svm/svm.c
>> @@ -2834,10 +2834,24 @@ static int svm_get_msr_feature(struct kvm_msr_entry *msr)
>>       return 0;
>>   }
>>   +static bool
>> +sev_es_prevent_msr_access(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>> +{
>> +    return sev_es_guest(vcpu->kvm) &&
>> +           vcpu->arch.guest_state_protected &&
>> +           svm_msrpm_offset(msr_info->index) != MSR_INVALID &&
>> +           !msr_write_intercepted(vcpu, msr_info->index);
>> +}
>> +
>>   static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>>   {
>>       struct vcpu_svm *svm = to_svm(vcpu);
>>   +    if (sev_es_prevent_msr_access(vcpu, msr_info)) {
>> +        msr_info->data = 0;
>> +        return 0;
>
> This should return -EINVAL, not 0.  Likewise below in svm_set_msr().

Sure.

Thanks,
Ravi

2024-05-30 14:04:00

by Tom Lendacky

Subject: Re: [PATCH v3 3/3] KVM: SEV-ES: Fix LBRV code

On 5/23/24 07:18, Ravi Bangoria wrote:
> As documented in APM[1], LBR Virtualization must be enabled for SEV-ES
> guests. Although KVM currently enforces LBRV for SEV-ES guests, there
> are multiple issues with it:
>
> o MSR_IA32_DEBUGCTLMSR is still intercepted. Since MSR_IA32_DEBUGCTLMSR
> interception is used to dynamically toggle LBRV for performance reasons,
> this can be fatal for SEV-ES guests. For example, an SEV-ES guest on Zen3:
>
> [guest ~]# wrmsr 0x1d9 0x4
> KVM: entry failed, hardware error 0xffffffff
> EAX=00000004 EBX=00000000 ECX=000001d9 EDX=00000000
>
> Fix this by never intercepting MSR_IA32_DEBUGCTLMSR for SEV-ES guests.
> No additional save/restore logic is required since MSR_IA32_DEBUGCTLMSR
> is of swap type A (the hardware saves and restores it via the VMSA on
> VMRUN/#VMEXIT).
>
> o KVM will disable LBRV if userspace sets MSR_IA32_DEBUGCTLMSR before the
> VMSA is encrypted. Fix this by moving the LBRV enablement code to after
> VMSA encryption.
>
> [1]: AMD64 Architecture Programmer's Manual Pub. 40332, Rev. 4.07 - June
> 2023, Vol 2, 15.35.2 Enabling SEV-ES.
> https://bugzilla.kernel.org/attachment.cgi?id=304653
>
> Co-developed-by: Nikunj A Dadhania <[email protected]>
> Signed-off-by: Nikunj A Dadhania <[email protected]>
> Signed-off-by: Ravi Bangoria <[email protected]>

Should this have a Fixes: tag, too?

Thanks,
Tom

> ---
> arch/x86/kvm/svm/sev.c | 13 ++++++++-----
> arch/x86/kvm/svm/svm.c | 8 +++++++-
> arch/x86/kvm/svm/svm.h | 3 ++-
> 3 files changed, 17 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 1a2bde579727..3f0c3dbce0c5 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -851,6 +851,14 @@ static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu,
> */
> fpstate_set_confidential(&vcpu->arch.guest_fpu);
> vcpu->arch.guest_state_protected = true;
> +
> + /*
> + * SEV-ES guest mandates LBR Virtualization to be _always_ ON. Enable it
> + * only after setting guest_state_protected because KVM_SET_MSRS allows
> + * dynamic toggling of LBRV (for performance reason) on write access to
> + * MSR_IA32_DEBUGCTLMSR when guest_state_protected is not set.
> + */
> + svm_enable_lbrv(vcpu);
> return 0;
> }
>
> @@ -4279,7 +4287,6 @@ static void sev_es_init_vmcb(struct vcpu_svm *svm)
> struct kvm_vcpu *vcpu = &svm->vcpu;
>
> svm->vmcb->control.nested_ctl |= SVM_NESTED_CTL_SEV_ES_ENABLE;
> - svm->vmcb->control.virt_ext |= LBR_CTL_ENABLE_MASK;
>
> /*
> * An SEV-ES guest requires a VMSA area that is a separate from the
> @@ -4331,10 +4338,6 @@ static void sev_es_init_vmcb(struct vcpu_svm *svm)
> /* Clear intercepts on selected MSRs */
> set_msr_interception(vcpu, svm->msrpm, MSR_EFER, 1, 1);
> set_msr_interception(vcpu, svm->msrpm, MSR_IA32_CR_PAT, 1, 1);
> - set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHFROMIP, 1, 1);
> - set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHTOIP, 1, 1);
> - set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTFROMIP, 1, 1);
> - set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTTOIP, 1, 1);
> }
>
> void sev_init_vmcb(struct vcpu_svm *svm)
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index dcb5eb00a4f5..011e8e6c5c53 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -99,6 +99,7 @@ static const struct svm_direct_access_msrs {
> { .index = MSR_IA32_SPEC_CTRL, .always = false },
> { .index = MSR_IA32_PRED_CMD, .always = false },
> { .index = MSR_IA32_FLUSH_CMD, .always = false },
> + { .index = MSR_IA32_DEBUGCTLMSR, .always = false },
> { .index = MSR_IA32_LASTBRANCHFROMIP, .always = false },
> { .index = MSR_IA32_LASTBRANCHTOIP, .always = false },
> { .index = MSR_IA32_LASTINTFROMIP, .always = false },
> @@ -990,7 +991,7 @@ void svm_copy_lbrs(struct vmcb *to_vmcb, struct vmcb *from_vmcb)
> vmcb_mark_dirty(to_vmcb, VMCB_LBR);
> }
>
> -static void svm_enable_lbrv(struct kvm_vcpu *vcpu)
> +void svm_enable_lbrv(struct kvm_vcpu *vcpu)
> {
> struct vcpu_svm *svm = to_svm(vcpu);
>
> @@ -1000,6 +1001,9 @@ static void svm_enable_lbrv(struct kvm_vcpu *vcpu)
> set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTFROMIP, 1, 1);
> set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTTOIP, 1, 1);
>
> + if (sev_es_guest(vcpu->kvm))
> + set_msr_interception(vcpu, svm->msrpm, MSR_IA32_DEBUGCTLMSR, 1, 1);
> +
> /* Move the LBR msrs to the vmcb02 so that the guest can see them. */
> if (is_guest_mode(vcpu))
> svm_copy_lbrs(svm->vmcb, svm->vmcb01.ptr);
> @@ -1009,6 +1013,8 @@ static void svm_disable_lbrv(struct kvm_vcpu *vcpu)
> {
> struct vcpu_svm *svm = to_svm(vcpu);
>
> + KVM_BUG_ON(sev_es_guest(vcpu->kvm), vcpu->kvm);
> +
> svm->vmcb->control.virt_ext &= ~LBR_CTL_ENABLE_MASK;
> set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHFROMIP, 0, 0);
> set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHTOIP, 0, 0);
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index 2d7fd09c08c9..c483d7149420 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -30,7 +30,7 @@
> #define IOPM_SIZE PAGE_SIZE * 3
> #define MSRPM_SIZE PAGE_SIZE * 2
>
> -#define MAX_DIRECT_ACCESS_MSRS 47
> +#define MAX_DIRECT_ACCESS_MSRS 48
> #define MSRPM_OFFSETS 32
> extern u32 msrpm_offsets[MSRPM_OFFSETS] __read_mostly;
> extern bool npt_enabled;
> @@ -582,6 +582,7 @@ u32 *svm_vcpu_alloc_msrpm(void);
> void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, u32 *msrpm);
> void svm_vcpu_free_msrpm(u32 *msrpm);
> void svm_copy_lbrs(struct vmcb *to_vmcb, struct vmcb *from_vmcb);
> +void svm_enable_lbrv(struct kvm_vcpu *vcpu);
> void svm_update_lbrv(struct kvm_vcpu *vcpu);
>
> int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer);

2024-05-31 04:07:13

by Ravi Bangoria

Subject: Re: [PATCH v3 3/3] KVM: SEV-ES: Fix LBRV code

On 5/30/2024 7:33 PM, Tom Lendacky wrote:
> On 5/23/24 07:18, Ravi Bangoria wrote:
>> As documented in APM[1], LBR Virtualization must be enabled for SEV-ES
>> guests. Although KVM currently enforces LBRV for SEV-ES guests, there
>> are multiple issues with it:
>>
>> o MSR_IA32_DEBUGCTLMSR is still intercepted. Since MSR_IA32_DEBUGCTLMSR
>>    interception is used to dynamically toggle LBRV for performance reasons,
>>    this can be fatal for SEV-ES guests. For example, an SEV-ES guest on Zen3:
>>
>>    [guest ~]# wrmsr 0x1d9 0x4
>>    KVM: entry failed, hardware error 0xffffffff
>>    EAX=00000004 EBX=00000000 ECX=000001d9 EDX=00000000
>>
>>    Fix this by never intercepting MSR_IA32_DEBUGCTLMSR for SEV-ES guests.
>>    No additional save/restore logic is required since MSR_IA32_DEBUGCTLMSR
>>    is of swap type A (the hardware saves and restores it via the VMSA on
>>    VMRUN/#VMEXIT).
>>
>> o KVM will disable LBRV if userspace sets MSR_IA32_DEBUGCTLMSR before the
>>    VMSA is encrypted. Fix this by moving the LBRV enablement code to after
>>    VMSA encryption.
>>
>> [1]: AMD64 Architecture Programmer's Manual Pub. 40332, Rev. 4.07 - June
>>       2023, Vol 2, 15.35.2 Enabling SEV-ES.
>>       https://bugzilla.kernel.org/attachment.cgi?id=304653
>>
>> Co-developed-by: Nikunj A Dadhania <[email protected]>
>> Signed-off-by: Nikunj A Dadhania <[email protected]>
>> Signed-off-by: Ravi Bangoria <[email protected]>
>
> Should this have a Fixes: tag, too?

Yeah, will add
Fixes: 376c6d285017 ("KVM: SVM: Provide support for SEV-ES vCPU creation/loading")

Thanks,
Ravi

2024-06-04 23:10:40

by Michael Roth

Subject: Re: [PATCH v3 1/3] KVM: SEV-ES: Prevent MSR access post VMSA encryption

On Wed, May 29, 2024 at 04:14:10PM +0530, Ravi Bangoria wrote:
> On 5/28/2024 10:01 PM, Paolo Bonzini wrote:
> > On 5/23/24 14:18, Ravi Bangoria wrote:
> >> From: Nikunj A Dadhania <[email protected]>
> >>
> >> KVM currently allows userspace to read/write MSRs even after the VMSA is
> >> encrypted. This can cause unintended issues if the MSR access has side
> >> effects. For example, while migrating a guest, userspace could attempt to
> >> migrate MSR_IA32_DEBUGCTLMSR and end up unintentionally disabling LBRV on
> >> the target. Fix this by preventing access to those MSRs which are context
> >> switched via the VMSA, once the VMSA is encrypted.
> >>
> >> Suggested-by: Sean Christopherson <[email protected]>
> >> Signed-off-by: Nikunj A Dadhania <[email protected]>
> >> Signed-off-by: Ravi Bangoria <[email protected]>
> >> ---
> >>   arch/x86/kvm/svm/svm.c | 18 ++++++++++++++++++
> >>   1 file changed, 18 insertions(+)
> >>
> >> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> >> index 3d0549ca246f..489b0183f37d 100644
> >> --- a/arch/x86/kvm/svm/svm.c
> >> +++ b/arch/x86/kvm/svm/svm.c
> >> @@ -2834,10 +2834,24 @@ static int svm_get_msr_feature(struct kvm_msr_entry *msr)
> >>       return 0;
> >>   }
> >>   +static bool
> >> +sev_es_prevent_msr_access(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >> +{
> >> +    return sev_es_guest(vcpu->kvm) &&
> >> +           vcpu->arch.guest_state_protected &&
> >> +           svm_msrpm_offset(msr_info->index) != MSR_INVALID &&
> >> +           !msr_write_intercepted(vcpu, msr_info->index);
> >> +}
> >> +
> >>   static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >>   {
> >>       struct vcpu_svm *svm = to_svm(vcpu);
> >>   +    if (sev_es_prevent_msr_access(vcpu, msr_info)) {
> >> +        msr_info->data = 0;
> >> +        return 0;
> >
> > This should return -EINVAL, not 0. Likewise below in svm_set_msr().
>
> Sure.

One consequence of this change is that it can break older VMMs that
might still call svm_get_msr()/svm_set_msr() for SEV-ES guests.

Newer VMMs that are aware of KVM_SEV_INIT2, however, already know about
the stricter limitations on what vCPU state can be sync'd during guest
run-time, so newer QEMU for instance will work with both the legacy
KVM_SEV_ES_INIT interface and KVM_SEV_INIT2.

So when using KVM_SEV_INIT2 it's okay to assume userspace can deal with
-EINVAL, whereas for legacy KVM_SEV_ES_INIT we sort of have to assume the
VMM does not have the necessary changes to deal with -EINVAL, so in that
case it's probably more appropriate to return 0 and just silently noop.

We had a similar situation with stricter limitations on fpstate sync'ing
for KVM_SEV_INIT2, and that was the approach taken there:

https://lore.kernel.org/kvm/[email protected]/

so I'll submit a patch that takes the same approach.
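
i.e. an (untested) sketch, reusing the kvm->arch.has_protected_state flag
that KVM_SEV_INIT2 sets:

	if (sev_es_prevent_msr_access(vcpu, msr_info)) {
		msr_info->data = 0;
		return vcpu->kvm->arch.has_protected_state ? -EINVAL : 0;
	}

with svm_set_msr() returning has_protected_state ? -EINVAL : 0 as well.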

-Mike

>
> Thanks,
> Ravi