2024-02-26 21:33:51

by John Allen

Subject: [PATCH v2 0/9] SVM guest shadow stack support

AMD Zen3 and newer processors support shadow stack, a feature designed
to protect against ROP (return-oriented programming) attacks, in which
an attacker manipulates return addresses on the call stack in order to
execute arbitrary code. To prevent this, a shadow stack can be
allocated that is used only by control-transfer and return
instructions. When a CALL instruction executes, it writes the return
address to both the program stack and the shadow stack. When the
subsequent RET executes, it pops the return address from both stacks
and compares them; if the addresses don't match, a control-protection
(#CP) exception is raised.
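
As a rough conceptual illustration only (not real hardware code;
raise_cp() is a hypothetical stand-in for delivering #CP), the paired
stack operations behave like this:

  /* Conceptual model of CALL/RET with shadow stack enabled. */
  typedef unsigned long long u64;

  extern void raise_cp(void);             /* hypothetical: deliver #CP */

  static void call_insn(u64 **rsp, u64 **ssp, u64 ret_addr)
  {
          *--(*rsp) = ret_addr;           /* push to the program stack */
          *--(*ssp) = ret_addr;           /* push to the shadow stack */
  }

  static void ret_insn(u64 **rsp, u64 **ssp)
  {
          u64 from_stack  = *(*rsp)++;    /* pop the program stack */
          u64 from_shadow = *(*ssp)++;    /* pop the shadow stack */

          if (from_stack != from_shadow)
                  raise_cp();             /* control-protection exception */
  }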

Shadow stack and a related feature, Indirect Branch Tracking (IBT), are
collectively referred to as Control-flow Enforcement Technology (CET).
However, current AMD processors only support shadow stack and not IBT.

This series adds support for shadow stack in SVM guests and builds upon
the support added in the CET guest support patch series [1]. Additional
patches are required to support shadow-stack-enabled guests in QEMU [2].

[1]: CET guest support patches (v10)
https://lore.kernel.org/all/[email protected]/

[2]: CET qemu patches
https://lore.kernel.org/all/[email protected]/

[3]: Initial SVM support series
https://lore.kernel.org/all/[email protected]/

---

RFC v2:
- Rebased on v3 of the Intel CET virtualization series, dropping the
patch that moved cet_is_msr_accessible to common code as that has
been pulled into the Intel series.
- Minor change removing curly brackets around if statement introduced
in patch 6/6.
RFC v3:
- Rebased on v5 of the Intel CET virtualization series.
- Add patch changing the name of vmplX_ssp SEV-ES save area fields to
plX_ssp.
- Merge this series intended for KVM with the separate guest kernel
patch (now patch 7/8).
- Update MSR passthrough code to conditionally pass through shadow
stack MSRs based on both host and guest support.
- Don't save PL0_SSP, PL1_SSP, and PL2_SSP MSRs on SEV-ES VMRUN as
these are currently unused.
v1:
- Remove RFC tag from series
- Rebase on v6 of the Intel CET virtualization series
- Use KVM-governed feature to track SHSTK for SVM
v2:
- Add new patch renaming boot_*msr to raw_*msr. Utilize raw_rdmsr when
reading XSS on SEV-ES cpuid instructions.
- Omit unnecessary patch for saving shadow stack MSRs on SEV-ES VMRUN
- Omit passing through of XSS for SEV-ES as support has already been
properly implemented in a26b7cd22546 ("KVM: SEV: Do not intercept
accesses to MSR_IA32_XSS for SEV-ES guests")

John Allen (9):
x86/boot: Move boot_*msr helpers to asm/shared/msr.h
KVM: x86: SVM: Emulate reads and writes to shadow stack MSRs
KVM: x86: SVM: Update dump_vmcb with shadow stack save area additions
KVM: x86: SVM: Pass through shadow stack MSRs
KVM: SVM: Rename vmplX_ssp -> plX_ssp
KVM: SVM: Add MSR_IA32_XSS to the GHCB for hypervisor kernel
x86/sev-es: Include XSS value in GHCB CPUID request
KVM: SVM: Use KVM-governed features to track SHSTK
KVM: SVM: Add CET features to supported_xss

 arch/x86/boot/compressed/sev.c    | 10 +++---
 arch/x86/boot/cpucheck.c          | 16 +++++-----
 arch/x86/boot/msr.h               | 26 ---------------
 arch/x86/include/asm/shared/msr.h | 15 +++++++++
 arch/x86/include/asm/svm.h        |  9 +++---
 arch/x86/kernel/sev-shared.c      |  7 ++++
 arch/x86/kvm/svm/sev.c            |  9 ++++--
 arch/x86/kvm/svm/svm.c            | 53 +++++++++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.h            |  3 +-
 9 files changed, 102 insertions(+), 46 deletions(-)
delete mode 100644 arch/x86/boot/msr.h

--
2.40.1



2024-02-26 21:35:40

by John Allen

Subject: [PATCH v2 5/9] KVM: SVM: Rename vmplX_ssp -> plX_ssp

Rename SEV-ES save area SSP fields to be consistent with the APM.

Signed-off-by: John Allen <[email protected]>
---
arch/x86/include/asm/svm.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 87a7b917d30e..728c98175b9c 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -358,10 +358,10 @@ struct sev_es_save_area {
         struct vmcb_seg ldtr;
         struct vmcb_seg idtr;
         struct vmcb_seg tr;
-        u64 vmpl0_ssp;
-        u64 vmpl1_ssp;
-        u64 vmpl2_ssp;
-        u64 vmpl3_ssp;
+        u64 pl0_ssp;
+        u64 pl1_ssp;
+        u64 pl2_ssp;
+        u64 pl3_ssp;
         u64 u_cet;
         u8 reserved_0xc8[2];
         u8 vmpl;
--
2.40.1


2024-02-26 21:36:57

by John Allen

Subject: [PATCH v2 6/9] KVM: SVM: Add MSR_IA32_XSS to the GHCB for hypervisor kernel

When a guest issues a cpuid instruction for Fn0000000D_x0B
(CetUserOffset), KVM will intercept it and need to access the guest's
MSR_IA32_XSS value. For SEV-ES guests, this value is encrypted and
therefore needs to be included in the GHCB to be visible to the
hypervisor.
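
For context, ghcb_set_xss() and friends come from the existing
DEFINE_GHCB_ACCESSORS() macro, which expands roughly as follows
(simplified sketch; the real macro in asm/svm.h also generates a
getter and an *_if_valid variant):

  static __always_inline bool ghcb_xss_is_valid(const struct ghcb *ghcb)
  {
          return test_bit(GHCB_BITMAP_IDX(xss),
                          (unsigned long *)&ghcb->save.valid_bitmap);
  }

  static __always_inline void ghcb_set_xss(struct ghcb *ghcb, u64 value)
  {
          __set_bit(GHCB_BITMAP_IDX(xss),
                    (unsigned long *)&ghcb->save.valid_bitmap);
          ghcb->save.xss = value;
  }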

Signed-off-by: John Allen <[email protected]>
---
v2:
- Omit passing through XSS as this has already been properly
implemented in a26b7cd22546 ("KVM: SEV: Do not intercept
accesses to MSR_IA32_XSS for SEV-ES guests")
---
 arch/x86/include/asm/svm.h | 1 +
 arch/x86/kvm/svm/sev.c     | 9 +++++++--
 arch/x86/kvm/svm/svm.h     | 1 +
 3 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 728c98175b9c..44cd41e2fb68 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -673,5 +673,6 @@ DEFINE_GHCB_ACCESSORS(sw_exit_info_1)
 DEFINE_GHCB_ACCESSORS(sw_exit_info_2)
 DEFINE_GHCB_ACCESSORS(sw_scratch)
 DEFINE_GHCB_ACCESSORS(xcr0)
+DEFINE_GHCB_ACCESSORS(xss)
 
 #endif
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index f06f9e51ad9d..c3060d2068eb 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2458,8 +2458,13 @@ static void sev_es_sync_from_ghcb(struct vcpu_svm *svm)
 
         svm->vmcb->save.cpl = kvm_ghcb_get_cpl_if_valid(svm, ghcb);
 
-        if (kvm_ghcb_xcr0_is_valid(svm)) {
-                vcpu->arch.xcr0 = ghcb_get_xcr0(ghcb);
+        if (kvm_ghcb_xcr0_is_valid(svm) || kvm_ghcb_xss_is_valid(svm)) {
+                if (kvm_ghcb_xcr0_is_valid(svm))
+                        vcpu->arch.xcr0 = ghcb_get_xcr0(ghcb);
+
+                if (kvm_ghcb_xss_is_valid(svm))
+                        vcpu->arch.ia32_xss = ghcb_get_xss(ghcb);
+
                 kvm_update_cpuid_runtime(vcpu);
         }
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 0741fa049fd7..eb9c9e337c43 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -723,5 +723,6 @@ DEFINE_KVM_GHCB_ACCESSORS(sw_exit_info_1)
 DEFINE_KVM_GHCB_ACCESSORS(sw_exit_info_2)
 DEFINE_KVM_GHCB_ACCESSORS(sw_scratch)
 DEFINE_KVM_GHCB_ACCESSORS(xcr0)
+DEFINE_KVM_GHCB_ACCESSORS(xss)
 
 #endif
--
2.40.1


2024-02-26 21:37:12

by John Allen

Subject: [PATCH v2 8/9] KVM: SVM: Use KVM-governed features to track SHSTK

Use the KVM-governed features framework to track whether SHSTK can be
used by both userspace and guest for SVM.
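
Once the governed bit is set, other KVM code can query it cheaply; an
illustrative use of the framework's helper:

  if (guest_can_use(vcpu, X86_FEATURE_SHSTK)) {
          /* guest is allowed to use shadow stack state */
  }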

Signed-off-by: John Allen <[email protected]>
---
arch/x86/kvm/svm/svm.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 68da482713cf..1181f017c173 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4386,6 +4386,7 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
         kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_PFTHRESHOLD);
         kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_VGIF);
         kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_VNMI);
+        kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_SHSTK);
 
         svm_recalc_instruction_intercepts(vcpu, svm);
 
--
2.40.1


2024-02-26 21:37:17

by John Allen

Subject: [PATCH v2 7/9] x86/sev-es: Include XSS value in GHCB CPUID request

When a guest issues a cpuid instruction for Fn0000000D_x0B
(CetUserOffset), the hypervisor may intercept it and access the guest
XSS value. For SEV-ES, this value is encrypted and needs to be included
in the GHCB to be visible to the hypervisor. The rdmsr instruction
needs to be called directly here because this code can run during early
boot, where the usual rdmsr wrappers must be avoided: they are
incompatible with the decompression boot phase.

Signed-off-by: John Allen <[email protected]>
---
v2:
- Use raw_rdmsr instead of calling rdmsr directly.
---
arch/x86/kernel/sev-shared.c | 7 +++++++
1 file changed, 7 insertions(+)

diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index 1d24ec679915..10ac130cc953 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -966,6 +966,13 @@ static enum es_result vc_handle_cpuid(struct ghcb *ghcb,
         /* xgetbv will cause #GP - use reset value for xcr0 */
         ghcb_set_xcr0(ghcb, 1);
 
+        if (has_cpuflag(X86_FEATURE_SHSTK) && regs->ax == 0xd && regs->cx <= 1) {
+                struct msr m;
+
+                raw_rdmsr(MSR_IA32_XSS, &m);
+                ghcb_set_xss(ghcb, m.q);
+        }
+
         ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_CPUID, 0, 0);
         if (ret != ES_OK)
                 return ret;
--
2.40.1


2024-02-26 21:37:35

by John Allen

Subject: [PATCH v2 9/9] KVM: SVM: Add CET features to supported_xss

If the CPU supports CET, add CET XSAVES feature bits to the
supported_xss mask.
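
For reference, the two xfeature bits involved are defined along these
lines (sketch of the existing definitions; CET user state is xfeature
11, CET supervisor state is xfeature 12):

  #define XFEATURE_MASK_CET_USER   BIT_ULL(11) /* U_CET, PL3_SSP */
  #define XFEATURE_MASK_CET_KERNEL BIT_ULL(12) /* PL0_SSP..PL2_SSP */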

Signed-off-by: John Allen <[email protected]>
---
arch/x86/kvm/svm/svm.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 1181f017c173..d97d82ebec4a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -5177,6 +5177,10 @@ static __init void svm_set_cpu_caps(void)
             boot_cpu_has(X86_FEATURE_AMD_SSBD))
                 kvm_cpu_cap_set(X86_FEATURE_VIRT_SSBD);
 
+        if (kvm_cpu_cap_has(X86_FEATURE_SHSTK))
+                kvm_caps.supported_xss |= XFEATURE_MASK_CET_USER |
+                                          XFEATURE_MASK_CET_KERNEL;
+
         if (enable_pmu) {
                 /*
                  * Enumerate support for PERFCTR_CORE if and only if KVM has
--
2.40.1


2024-02-26 21:42:09

by John Allen

Subject: [PATCH v2 2/9] KVM: x86: SVM: Emulate reads and writes to shadow stack MSRs

Set up interception of shadow stack MSRs. In the event that shadow
stack is unsupported on the host or the MSRs are otherwise
inaccessible, the interception code will return an error. In certain
circumstances, such as host-initiated MSR reads or writes, the
interception code will get or set the requested MSR value.
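
The error case is handled by the accessibility check that, per the
cover letter, was pulled into the Intel CET series' common code
(cet_is_msr_accessible()); conceptually (a hypothetical sketch, not
the exact call site):

  /* before the vendor-specific get/set below is allowed to run */
  if (!cet_is_msr_accessible(vcpu, msr_info))
          return 1;       /* unsupported/inaccessible MSR: error */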

Signed-off-by: John Allen <[email protected]>
---
arch/x86/kvm/svm/svm.c | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index e90b429c84f1..70f6fb1a166b 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2864,6 +2864,15 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
                 if (guest_cpuid_is_intel(vcpu))
                         msr_info->data |= (u64)svm->sysenter_esp_hi << 32;
                 break;
+        case MSR_IA32_S_CET:
+                msr_info->data = svm->vmcb->save.s_cet;
+                break;
+        case MSR_IA32_INT_SSP_TAB:
+                msr_info->data = svm->vmcb->save.isst_addr;
+                break;
+        case MSR_KVM_SSP:
+                msr_info->data = svm->vmcb->save.ssp;
+                break;
         case MSR_TSC_AUX:
                 msr_info->data = svm->tsc_aux;
                 break;
@@ -3090,6 +3099,15 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
                 svm->vmcb01.ptr->save.sysenter_esp = (u32)data;
                 svm->sysenter_esp_hi = guest_cpuid_is_intel(vcpu) ? (data >> 32) : 0;
                 break;
+        case MSR_IA32_S_CET:
+                svm->vmcb->save.s_cet = data;
+                break;
+        case MSR_IA32_INT_SSP_TAB:
+                svm->vmcb->save.isst_addr = data;
+                break;
+        case MSR_KVM_SSP:
+                svm->vmcb->save.ssp = data;
+                break;
         case MSR_TSC_AUX:
                 /*
                  * TSC_AUX is always virtualized for SEV-ES guests when the
--
2.40.1


2024-02-26 21:42:33

by John Allen

Subject: [PATCH v2 3/9] KVM: x86: SVM: Update dump_vmcb with shadow stack save area additions

Add shadow stack VMCB save area fields to dump_vmcb. Only include
S_CET, SSP, and ISST_ADDR. Since there is currently no support for
decrypting and dumping the SEV-ES save area, exclude PL0_SSP, PL1_SSP,
PL2_SSP, PL3_SSP, and U_CET, which are only included in the SEV-ES save
area.

Signed-off-by: John Allen <[email protected]>
---
arch/x86/kvm/svm/svm.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 70f6fb1a166b..0b8b346a470a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3431,6 +3431,10 @@ static void dump_vmcb(struct kvm_vcpu *vcpu)
                "rip:", save->rip, "rflags:", save->rflags);
         pr_err("%-15s %016llx %-13s %016llx\n",
                "rsp:", save->rsp, "rax:", save->rax);
+        pr_err("%-15s %016llx %-13s %016llx\n",
+               "s_cet:", save->s_cet, "ssp:", save->ssp);
+        pr_err("%-15s %016llx\n",
+               "isst_addr:", save->isst_addr);
         pr_err("%-15s %016llx %-13s %016llx\n",
                "star:", save01->star, "lstar:", save01->lstar);
         pr_err("%-15s %016llx %-13s %016llx\n",
--
2.40.1


2024-02-26 21:42:49

by John Allen

Subject: [PATCH v2 1/9] x86/boot: Move boot_*msr helpers to asm/shared/msr.h

The boot_rdmsr and boot_wrmsr helpers, introduced to reduce the need for
inline assembly in the boot kernel, can also be useful in code shared
between the boot kernel and the run-time kernel. Move these helpers to
asm/shared/msr.h and rename them to raw_rdmsr and raw_wrmsr to indicate
that they may also be used outside of the boot kernel.
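
A minimal usage sketch of the renamed helpers, mirroring the
sev_get_status() hunk below (safe in both the decompression kernel and
the run-time kernel):

  #include <asm/msr-index.h>
  #include <asm/shared/msr.h>

  static u64 read_sev_status(void)
  {
          struct msr m;

          raw_rdmsr(MSR_AMD64_SEV, &m);   /* plain rdmsr, no tracepoints */
          return m.q;
  }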

Signed-off-by: John Allen <[email protected]>
---
v2:
- New in v2
---
 arch/x86/boot/compressed/sev.c    | 10 +++++-----
 arch/x86/boot/cpucheck.c          | 16 ++++++++--------
 arch/x86/boot/msr.h               | 26 --------------------------
 arch/x86/include/asm/shared/msr.h | 15 +++++++++++++++
 4 files changed, 28 insertions(+), 39 deletions(-)
 delete mode 100644 arch/x86/boot/msr.h

diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index 454acd7a2daf..743b9eb8b7c3 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -13,6 +13,7 @@
 #include "misc.h"
 
 #include <asm/pgtable_types.h>
+#include <asm/shared/msr.h>
 #include <asm/sev.h>
 #include <asm/trapnr.h>
 #include <asm/trap_pf.h>
@@ -23,7 +24,6 @@
 #include <asm/cpuid.h>
 
 #include "error.h"
-#include "../msr.h"
 
 static struct ghcb boot_ghcb_page __aligned(PAGE_SIZE);
 struct ghcb *boot_ghcb;
@@ -60,7 +60,7 @@ static inline u64 sev_es_rd_ghcb_msr(void)
 {
         struct msr m;
 
-        boot_rdmsr(MSR_AMD64_SEV_ES_GHCB, &m);
+        raw_rdmsr(MSR_AMD64_SEV_ES_GHCB, &m);
 
         return m.q;
 }
@@ -70,7 +70,7 @@ static inline void sev_es_wr_ghcb_msr(u64 val)
         struct msr m;
 
         m.q = val;
-        boot_wrmsr(MSR_AMD64_SEV_ES_GHCB, &m);
+        raw_wrmsr(MSR_AMD64_SEV_ES_GHCB, &m);
 }
 
 static enum es_result vc_decode_insn(struct es_em_ctxt *ctxt)
@@ -482,7 +482,7 @@ void sev_enable(struct boot_params *bp)
         }
 
         /* Set the SME mask if this is an SEV guest. */
-        boot_rdmsr(MSR_AMD64_SEV, &m);
+        raw_rdmsr(MSR_AMD64_SEV, &m);
         sev_status = m.q;
         if (!(sev_status & MSR_AMD64_SEV_ENABLED))
                 return;
@@ -523,7 +523,7 @@ u64 sev_get_status(void)
         if (sev_check_cpu_support() < 0)
                 return 0;
 
-        boot_rdmsr(MSR_AMD64_SEV, &m);
+        raw_rdmsr(MSR_AMD64_SEV, &m);
         return m.q;
 }
 
diff --git a/arch/x86/boot/cpucheck.c b/arch/x86/boot/cpucheck.c
index fed8d13ce252..bb5c28d0a1f1 100644
--- a/arch/x86/boot/cpucheck.c
+++ b/arch/x86/boot/cpucheck.c
@@ -25,9 +25,9 @@
 #include <asm/intel-family.h>
 #include <asm/processor-flags.h>
 #include <asm/required-features.h>
+#include <asm/shared/msr.h>
 #include <asm/msr-index.h>
 #include "string.h"
-#include "msr.h"
 
 static u32 err_flags[NCAPINTS];
 
@@ -133,9 +133,9 @@ int check_cpu(int *cpu_level_ptr, int *req_level_ptr, u32 **err_flags_ptr)
 
                 struct msr m;
 
-                boot_rdmsr(MSR_K7_HWCR, &m);
+                raw_rdmsr(MSR_K7_HWCR, &m);
                 m.l &= ~(1 << 15);
-                boot_wrmsr(MSR_K7_HWCR, &m);
+                raw_wrmsr(MSR_K7_HWCR, &m);
 
                 get_cpuflags();	/* Make sure it really did something */
                 err = check_cpuflags();
@@ -147,9 +147,9 @@ int check_cpu(int *cpu_level_ptr, int *req_level_ptr, u32 **err_flags_ptr)
 
                 struct msr m;
 
-                boot_rdmsr(MSR_VIA_FCR, &m);
+                raw_rdmsr(MSR_VIA_FCR, &m);
                 m.l |= (1 << 1) | (1 << 7);
-                boot_wrmsr(MSR_VIA_FCR, &m);
+                raw_wrmsr(MSR_VIA_FCR, &m);
 
                 set_bit(X86_FEATURE_CX8, cpu.flags);
                 err = check_cpuflags();
@@ -159,14 +159,14 @@ int check_cpu(int *cpu_level_ptr, int *req_level_ptr, u32 **err_flags_ptr)
                 struct msr m, m_tmp;
                 u32 level = 1;
 
-                boot_rdmsr(0x80860004, &m);
+                raw_rdmsr(0x80860004, &m);
                 m_tmp = m;
                 m_tmp.l = ~0;
-                boot_wrmsr(0x80860004, &m_tmp);
+                raw_wrmsr(0x80860004, &m_tmp);
                 asm("cpuid"
                     : "+a" (level), "=d" (cpu.flags[0])
                     : : "ecx", "ebx");
-                boot_wrmsr(0x80860004, &m);
+                raw_wrmsr(0x80860004, &m);
 
                 err = check_cpuflags();
         } else if (err == 0x01 &&
diff --git a/arch/x86/boot/msr.h b/arch/x86/boot/msr.h
deleted file mode 100644
index aed66f7ae199..000000000000
--- a/arch/x86/boot/msr.h
+++ /dev/null
@@ -1,26 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Helpers/definitions related to MSR access.
- */
-
-#ifndef BOOT_MSR_H
-#define BOOT_MSR_H
-
-#include <asm/shared/msr.h>
-
-/*
- * The kernel proper already defines rdmsr()/wrmsr(), but they are not for the
- * boot kernel since they rely on tracepoint/exception handling infrastructure
- * that's not available here.
- */
-static inline void boot_rdmsr(unsigned int reg, struct msr *m)
-{
-        asm volatile("rdmsr" : "=a" (m->l), "=d" (m->h) : "c" (reg));
-}
-
-static inline void boot_wrmsr(unsigned int reg, const struct msr *m)
-{
-        asm volatile("wrmsr" : : "c" (reg), "a"(m->l), "d" (m->h) : "memory");
-}
-
-#endif /* BOOT_MSR_H */
diff --git a/arch/x86/include/asm/shared/msr.h b/arch/x86/include/asm/shared/msr.h
index 1e6ec10b3a15..a20b1c08c99f 100644
--- a/arch/x86/include/asm/shared/msr.h
+++ b/arch/x86/include/asm/shared/msr.h
@@ -12,4 +12,19 @@ struct msr {
         };
 };
 
+/*
+ * The kernel proper already defines rdmsr()/wrmsr(), but they are not for the
+ * boot kernel since they rely on tracepoint/exception handling infrastructure
+ * that's not available here.
+ */
+static inline void raw_rdmsr(unsigned int reg, struct msr *m)
+{
+        asm volatile("rdmsr" : "=a" (m->l), "=d" (m->h) : "c" (reg));
+}
+
+static inline void raw_wrmsr(unsigned int reg, const struct msr *m)
+{
+        asm volatile("wrmsr" : : "c" (reg), "a"(m->l), "d" (m->h) : "memory");
+}
+
 #endif /* _ASM_X86_SHARED_MSR_H */
--
2.40.1


2024-02-26 21:43:34

by John Allen

Subject: [PATCH v2 4/9] KVM: x86: SVM: Pass through shadow stack MSRs

If KVM supports shadow stack, pass shadow stack MSRs through to the
guest to improve performance.
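
In KVM's SVM code, set_msr_interception() takes separate read/write
flags where a non-zero value disables the corresponding intercept, so
the hunk below only passes the MSRs through when the guest actually
enumerates SHSTK. Illustrative call:

  /* 1 = direct guest access (no intercept), 0 = intercept */
  set_msr_interception(vcpu, svm->msrpm, MSR_IA32_U_CET, 1, 1);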

Signed-off-by: John Allen <[email protected]>
---
 arch/x86/kvm/svm/svm.c | 26 ++++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.h |  2 +-
 2 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 0b8b346a470a..68da482713cf 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -140,6 +140,13 @@ static const struct svm_direct_access_msrs {
         { .index = X2APIC_MSR(APIC_TMICT),              .always = false },
         { .index = X2APIC_MSR(APIC_TMCCT),              .always = false },
         { .index = X2APIC_MSR(APIC_TDCR),               .always = false },
+        { .index = MSR_IA32_U_CET,                      .always = false },
+        { .index = MSR_IA32_S_CET,                      .always = false },
+        { .index = MSR_IA32_INT_SSP_TAB,                .always = false },
+        { .index = MSR_IA32_PL0_SSP,                    .always = false },
+        { .index = MSR_IA32_PL1_SSP,                    .always = false },
+        { .index = MSR_IA32_PL2_SSP,                    .always = false },
+        { .index = MSR_IA32_PL3_SSP,                    .always = false },
         { .index = MSR_INVALID,                         .always = false },
 };
 
@@ -1222,6 +1229,25 @@ static inline void init_vmcb_after_set_cpuid(struct kvm_vcpu *vcpu)
                 set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_EIP, 1, 1);
                 set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_ESP, 1, 1);
         }
+
+        if (kvm_cpu_cap_has(X86_FEATURE_SHSTK)) {
+                bool shstk_enabled = guest_cpuid_has(vcpu, X86_FEATURE_SHSTK);
+
+                set_msr_interception(vcpu, svm->msrpm, MSR_IA32_U_CET,
+                                     shstk_enabled, shstk_enabled);
+                set_msr_interception(vcpu, svm->msrpm, MSR_IA32_S_CET,
+                                     shstk_enabled, shstk_enabled);
+                set_msr_interception(vcpu, svm->msrpm, MSR_IA32_INT_SSP_TAB,
+                                     shstk_enabled, shstk_enabled);
+                set_msr_interception(vcpu, svm->msrpm, MSR_IA32_PL0_SSP,
+                                     shstk_enabled, shstk_enabled);
+                set_msr_interception(vcpu, svm->msrpm, MSR_IA32_PL1_SSP,
+                                     shstk_enabled, shstk_enabled);
+                set_msr_interception(vcpu, svm->msrpm, MSR_IA32_PL2_SSP,
+                                     shstk_enabled, shstk_enabled);
+                set_msr_interception(vcpu, svm->msrpm, MSR_IA32_PL3_SSP,
+                                     shstk_enabled, shstk_enabled);
+        }
 }
 
 static void init_vmcb(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 8ef95139cd24..0741fa049fd7 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -30,7 +30,7 @@
 #define IOPM_SIZE PAGE_SIZE * 3
 #define MSRPM_SIZE PAGE_SIZE * 2
 
-#define MAX_DIRECT_ACCESS_MSRS 47
+#define MAX_DIRECT_ACCESS_MSRS 54
 #define MSRPM_OFFSETS 32
 extern u32 msrpm_offsets[MSRPM_OFFSETS] __read_mostly;
 extern bool npt_enabled;
--
2.40.1


2024-02-27 18:14:36

by Sean Christopherson

Subject: Re: [PATCH v2 5/9] KVM: SVM: Rename vmplX_ssp -> plX_ssp

On Mon, Feb 26, 2024, John Allen wrote:
> Rename SEV-ES save area SSP fields to be consistent with the APM.
>
> Signed-off-by: John Allen <[email protected]>
> ---
> arch/x86/include/asm/svm.h | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
> index 87a7b917d30e..728c98175b9c 100644
> --- a/arch/x86/include/asm/svm.h
> +++ b/arch/x86/include/asm/svm.h
> @@ -358,10 +358,10 @@ struct sev_es_save_area {
>  	struct vmcb_seg ldtr;
>  	struct vmcb_seg idtr;
>  	struct vmcb_seg tr;
> -	u64 vmpl0_ssp;
> -	u64 vmpl1_ssp;
> -	u64 vmpl2_ssp;
> -	u64 vmpl3_ssp;
> +	u64 pl0_ssp;
> +	u64 pl1_ssp;
> +	u64 pl2_ssp;
> +	u64 pl3_ssp;

Are these CPL fields, or VMPL fields? Presumably it's the former since this is
a single save area. If so, the changelog should call that out, i.e. make it clear
that the current names are outright bugs. If these somehow really are VMPL fields,
I would prefer to diverge from the APM, because pl[0..3] is way too ambiguous in
that case.

It's borderline if they're CPL fields, but Intel calls them PL[0..3]_SSP, so I'm
much less inclined to diverge from two other things in that case.

2024-02-27 19:19:49

by John Allen

Subject: Re: [PATCH v2 5/9] KVM: SVM: Rename vmplX_ssp -> plX_ssp

On Tue, Feb 27, 2024 at 01:15:09PM -0600, Tom Lendacky wrote:
> On 2/27/24 12:14, Sean Christopherson wrote:
> > On Mon, Feb 26, 2024, John Allen wrote:
> > > Rename SEV-ES save area SSP fields to be consistent with the APM.
> > >
> > > Signed-off-by: John Allen <[email protected]>
> > > ---
> > > arch/x86/include/asm/svm.h | 8 ++++----
> > > 1 file changed, 4 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
> > > index 87a7b917d30e..728c98175b9c 100644
> > > --- a/arch/x86/include/asm/svm.h
> > > +++ b/arch/x86/include/asm/svm.h
> > > @@ -358,10 +358,10 @@ struct sev_es_save_area {
> > >  	struct vmcb_seg ldtr;
> > >  	struct vmcb_seg idtr;
> > >  	struct vmcb_seg tr;
> > > -	u64 vmpl0_ssp;
> > > -	u64 vmpl1_ssp;
> > > -	u64 vmpl2_ssp;
> > > -	u64 vmpl3_ssp;
> > > +	u64 pl0_ssp;
> > > +	u64 pl1_ssp;
> > > +	u64 pl2_ssp;
> > > +	u64 pl3_ssp;
> >
> > Are these CPL fields, or VMPL fields? Presumably it's the former since this is
> > a single save area. If so, the changelog should call that out, i.e. make it clear
> > that the current names are outright bugs. If these somehow really are VMPL fields,
> > I would prefer to diverge from the APM, because pl[0..3] is way too ambiguous in
> > that case.
>
> Definitely not VMPL fields... I guess I had VMPL levels on my mind when I
> was typing those names.

FWIW, the patch that accessed these fields has been omitted in this
version so if we just want to correct the names of these fields, this
patch can be pulled in separately from this series.

Thanks,
John

>
> Thanks,
> Tom
>
> >
> > It's borderline if they're CPL fields, but Intel calls them PL[0..3]_SSP, so I'm
> > much less inclined to diverge from two other things in that case.

2024-02-27 19:28:49

by John Allen

Subject: Re: [PATCH v2 5/9] KVM: SVM: Rename vmplX_ssp -> plX_ssp

On Tue, Feb 27, 2024 at 11:23:33AM -0800, Sean Christopherson wrote:
> On Tue, Feb 27, 2024, John Allen wrote:
> > On Tue, Feb 27, 2024 at 01:15:09PM -0600, Tom Lendacky wrote:
> > > On 2/27/24 12:14, Sean Christopherson wrote:
> > > > On Mon, Feb 26, 2024, John Allen wrote:
> > > > > Rename SEV-ES save area SSP fields to be consistent with the APM.
> > > > >
> > > > > Signed-off-by: John Allen <[email protected]>
> > > > > ---
> > > > > arch/x86/include/asm/svm.h | 8 ++++----
> > > > > 1 file changed, 4 insertions(+), 4 deletions(-)
> > > > >
> > > > > diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
> > > > > index 87a7b917d30e..728c98175b9c 100644
> > > > > --- a/arch/x86/include/asm/svm.h
> > > > > +++ b/arch/x86/include/asm/svm.h
> > > > > @@ -358,10 +358,10 @@ struct sev_es_save_area {
> > > > >  	struct vmcb_seg ldtr;
> > > > >  	struct vmcb_seg idtr;
> > > > >  	struct vmcb_seg tr;
> > > > > -	u64 vmpl0_ssp;
> > > > > -	u64 vmpl1_ssp;
> > > > > -	u64 vmpl2_ssp;
> > > > > -	u64 vmpl3_ssp;
> > > > > +	u64 pl0_ssp;
> > > > > +	u64 pl1_ssp;
> > > > > +	u64 pl2_ssp;
> > > > > +	u64 pl3_ssp;
> > > >
> > > > Are these CPL fields, or VMPL fields? Presumably it's the former since this is
> > > > a single save area. If so, the changelog should call that out, i.e. make it clear
> > > > that the current names are outright bugs. If these somehow really are VMPL fields,
> > > > I would prefer to diverge from the APM, because pl[0..3] is way too ambiguous in
> > > > that case.
> > >
> > > Definitely not VMPL fields... I guess I had VMPL levels on my mind when I
> > > was typing those names.
> >
> > FWIW, the patch that accessed these fields has been omitted in this
> > version so if we just want to correct the names of these fields, this
> > patch can be pulled in separately from this series.
>
> Nice! Can you post this as a standalone patch, with a massaged changelog to
> explain that the vmpl prefix was just a braino?
>
> Thanks!

Will do.

Thanks,
John

2024-02-27 19:46:43

by Borislav Petkov

Subject: Re: [PATCH v2 1/9] x86/boot: Move boot_*msr helpers to asm/shared/msr.h

On Mon, Feb 26, 2024 at 09:32:36PM +0000, John Allen wrote:
> The boot_rdmsr and boot_wrmsr helpers, introduced to reduce the need for
> inline assembly in the boot kernel, can also be useful in code shared
> between the boot kernel and the run-time kernel. Move these helpers to
> asm/shared/msr.h and rename them to raw_rdmsr and raw_wrmsr to indicate
> that they may also be used outside of the boot kernel.
>
> Signed-off-by: John Allen <[email protected]>
> ---
> v2:
> - New in v2
> ---
> arch/x86/boot/compressed/sev.c    | 10 +++++-----
> arch/x86/boot/cpucheck.c          | 16 ++++++++--------
> arch/x86/boot/msr.h               | 26 --------------------------
> arch/x86/include/asm/shared/msr.h | 15 +++++++++++++++
> 4 files changed, 28 insertions(+), 39 deletions(-)
> delete mode 100644 arch/x86/boot/msr.h

Acked-by: Borislav Petkov (AMD) <[email protected]>

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2024-02-27 19:48:15

by Borislav Petkov

Subject: Re: [PATCH v2 7/9] x86/sev-es: Include XSS value in GHCB CPUID request

On Mon, Feb 26, 2024 at 09:32:42PM +0000, John Allen wrote:
> When a guest issues a cpuid instruction for Fn0000000D_x0B
> (CetUserOffset), the hypervisor may intercept it and access the guest
> XSS value. For SEV-ES, this value is encrypted and needs to be included
> in the GHCB to be visible to the hypervisor. The rdmsr instruction
> needs to be called directly here because this code can run during early
> boot, where the usual rdmsr wrappers must be avoided: they are
> incompatible with the decompression boot phase.
>
> Signed-off-by: John Allen <[email protected]>
> ---
> v2:
> - Use raw_rdmsr instead of calling rdmsr directly.
> ---
> arch/x86/kernel/sev-shared.c | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
> index 1d24ec679915..10ac130cc953 100644
> --- a/arch/x86/kernel/sev-shared.c
> +++ b/arch/x86/kernel/sev-shared.c
> @@ -966,6 +966,13 @@ static enum es_result vc_handle_cpuid(struct ghcb *ghcb,
>  	/* xgetbv will cause #GP - use reset value for xcr0 */
>  	ghcb_set_xcr0(ghcb, 1);
> 
> +	if (has_cpuflag(X86_FEATURE_SHSTK) && regs->ax == 0xd && regs->cx <= 1) {
> +		struct msr m;
> +
> +		raw_rdmsr(MSR_IA32_XSS, &m);
> +		ghcb_set_xss(ghcb, m.q);
> +	}
> +
>  	ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_CPUID, 0, 0);
>  	if (ret != ES_OK)
>  		return ret;
> --

Acked-by: Borislav Petkov (AMD) <[email protected]>

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2024-02-27 19:58:45

by Tom Lendacky

Subject: Re: [PATCH v2 5/9] KVM: SVM: Rename vmplX_ssp -> plX_ssp

On 2/27/24 12:14, Sean Christopherson wrote:
> On Mon, Feb 26, 2024, John Allen wrote:
>> Rename SEV-ES save area SSP fields to be consistent with the APM.
>>
>> Signed-off-by: John Allen <[email protected]>
>> ---
>> arch/x86/include/asm/svm.h | 8 ++++----
>> 1 file changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
>> index 87a7b917d30e..728c98175b9c 100644
>> --- a/arch/x86/include/asm/svm.h
>> +++ b/arch/x86/include/asm/svm.h
>> @@ -358,10 +358,10 @@ struct sev_es_save_area {
>>  	struct vmcb_seg ldtr;
>>  	struct vmcb_seg idtr;
>>  	struct vmcb_seg tr;
>> -	u64 vmpl0_ssp;
>> -	u64 vmpl1_ssp;
>> -	u64 vmpl2_ssp;
>> -	u64 vmpl3_ssp;
>> +	u64 pl0_ssp;
>> +	u64 pl1_ssp;
>> +	u64 pl2_ssp;
>> +	u64 pl3_ssp;
>
> Are these CPL fields, or VMPL fields? Presumably it's the former since this is
> a single save area. If so, the changelog should call that out, i.e. make it clear
> that the current names are outright bugs. If these somehow really are VMPL fields,
> I would prefer to diverge from the APM, because pl[0..3] is way too ambiguous in
> that case.

Definitely not VMPL fields... I guess I had VMPL levels on my mind when I
was typing those names.

Thanks,
Tom

>
> It's borderline if they're CPL fields, but Intel calls them PL[0..3]_SSP, so I'm
> much less inclined to diverge from two other things in that case.

2024-02-27 20:16:25

by Sean Christopherson

Subject: Re: [PATCH v2 5/9] KVM: SVM: Rename vmplX_ssp -> plX_ssp

On Tue, Feb 27, 2024, John Allen wrote:
> On Tue, Feb 27, 2024 at 01:15:09PM -0600, Tom Lendacky wrote:
> > On 2/27/24 12:14, Sean Christopherson wrote:
> > > On Mon, Feb 26, 2024, John Allen wrote:
> > > > Rename SEV-ES save area SSP fields to be consistent with the APM.
> > > >
> > > > Signed-off-by: John Allen <[email protected]>
> > > > ---
> > > > arch/x86/include/asm/svm.h | 8 ++++----
> > > > 1 file changed, 4 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
> > > > index 87a7b917d30e..728c98175b9c 100644
> > > > --- a/arch/x86/include/asm/svm.h
> > > > +++ b/arch/x86/include/asm/svm.h
> > > > @@ -358,10 +358,10 @@ struct sev_es_save_area {
> > > >  	struct vmcb_seg ldtr;
> > > >  	struct vmcb_seg idtr;
> > > >  	struct vmcb_seg tr;
> > > > -	u64 vmpl0_ssp;
> > > > -	u64 vmpl1_ssp;
> > > > -	u64 vmpl2_ssp;
> > > > -	u64 vmpl3_ssp;
> > > > +	u64 pl0_ssp;
> > > > +	u64 pl1_ssp;
> > > > +	u64 pl2_ssp;
> > > > +	u64 pl3_ssp;
> > >
> > > Are these CPL fields, or VMPL fields? Presumably it's the former since this is
> > > a single save area. If so, the changelog should call that out, i.e. make it clear
> > > that the current names are outright bugs. If these somehow really are VMPL fields,
> > > I would prefer to diverge from the APM, because pl[0..3] is way too ambiguous in
> > > that case.
> >
> > Definitely not VMPL fields... I guess I had VMPL levels on my mind when I
> > was typing those names.
>
> FWIW, the patch that accessed these fields has been omitted in this
> version so if we just want to correct the names of these fields, this
> patch can be pulled in separately from this series.

Nice! Can you post this as a standalone patch, with a massaged changelog to
explain that the vmpl prefix was just a braino?

Thanks!