The AMD Zen4 core supports a new feature called Automatic IBRS
(Indirect Branch Restricted Speculation).

Enable Automatic IBRS by default if the CPU feature is present.
It typically provides greater performance than the incumbent
generic retpoline mitigation.
Patches 1-3 take the existing CPUID 0x80000021 EAX feature bits
that are being propagated to the guest and define scattered
versions of them, for use by patch 4.

Patch 4 moves the CPUID 0x80000021 EAX feature bit propagation code
to kvm_set_cpu_caps().

Patch 5 defines the AutoIBRS feature bit.

Patch 6 adds support for AutoIBRS by turning on its EFER
enablement bit at startup if the feature is available.

Patch 7 adds support for propagating AutoIBRS to the guest.

Thanks to Babu Moger for helping debug guest propagation.

Babu, feel free to add your Co-developed-by and Signed-off-by tags
to patches 4 and/or 7.
v3: Addressed v2 comments:
- Remove Co-developed-bys. They require Signed-off-bys,
  so co-developers need to add them themselves.
- Update check_null_seg_clears_base() [Boris]
- Make the feature bit additions separate patches,
  since the v2 patch was clearly doing too many things at once.
v2: https://lkml.org/lkml/2022/11/23/1690
- Use synthetic/scattered bits instead of introducing new leaf [Boris]
- Combine the rest of the leaf's bits being used [Paolo]
Note: Bits not used by the host can be moved to kvm/cpuid.c if
maintainers do not want them in cpufeatures.h.
- Hoist the bit-setting code to kvm_set_cpu_caps(), and use
  cpuid_entry_override() in __do_cpuid_func() [Paolo]
- Reuse SPECTRE_V2_EIBRS spectre_v2_mitigation enum [Boris, PeterZ, D.Hansen]
- Change from Boris' diff:
Moved setting X86_FEATURE_IBRS_ENHANCED to after BUG_EIBRS_PBRSB
so PBRSB mitigations wouldn't be enabled.
- Allow users to specify "autoibrs,lfence/retpoline" instead
  of actively preventing the extra protections. AutoIBRS doesn't
  require the extra protection, but we allow it anyway.
v1: https://lore.kernel.org/lkml/[email protected]/, and
https://lore.kernel.org/lkml/[email protected]/, and
https://lore.kernel.org/lkml/[email protected]/
Signed-off-by: Kim Phillips <[email protected]>
Cc: Babu Moger <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Boris Ostrovsky <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Joao Martins <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Sean Christopherson <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: David Woodhouse <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Juergen Gross <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Tom Lendacky <[email protected]>
Cc: Alexey Kardashevskiy <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Kim Phillips (7):
x86/cpu, kvm: Define a scattered No Nested Data Breakpoints feature
bit
x86/cpu, kvm: Define a scattered Null Selector Clears Base feature bit
x86/cpu, kvm: Make X86_FEATURE_LFENCE_RDTSC a scattered feature bit
x86/cpu, kvm: Move CPUID 0x80000021 EAX feature bits propagation to
kvm_set_cpu_caps
x86/cpu, kvm: Define a scattered AMD Automatic IBRS feature bit
x86/cpu, kvm: Support AMD Automatic IBRS
x86/cpu, kvm: Propagate the AMD Automatic IBRS feature to the guest
.../admin-guide/kernel-parameters.txt | 9 +++--
arch/x86/include/asm/cpufeatures.h | 4 ++-
arch/x86/include/asm/msr-index.h | 2 ++
arch/x86/kernel/cpu/bugs.c | 23 +++++++-----
arch/x86/kernel/cpu/common.c | 11 ++++--
arch/x86/kernel/cpu/scattered.c | 4 +++
arch/x86/kvm/cpuid.c | 35 +++++++++++--------
arch/x86/kvm/reverse_cpuid.h | 24 +++++++++----
arch/x86/kvm/svm/svm.c | 3 ++
arch/x86/kvm/x86.c | 3 ++
10 files changed, 83 insertions(+), 35 deletions(-)
--
2.34.1
It's part of the CPUID 0x80000021 leaf, and defining it as a
scattered bit allows it and the other CPUID 0x80000021 EAX feature
bits to be propagated via kvm_set_cpu_caps() instead of being
open-coded in __do_cpuid_func().
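Once init_scattered_cpuid_features() has populated the synthetic
bit at boot, it can be queried like any other feature flag. A
minimal, hypothetical usage sketch (the pr_info() consumer below is
illustrative only, not part of this patch):

	/* Illustrative only: query the new synthetic flag. */
	if (boot_cpu_has(X86_FEATURE_NO_NESTED_DATA_BP))
		pr_info("CPU ignores nested data breakpoints\n");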
Signed-off-by: Kim Phillips <[email protected]>
---
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/kernel/cpu/scattered.c | 1 +
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index b6525491a41b..b16fdcedc2b5 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -306,8 +306,8 @@
#define X86_FEATURE_RSB_VMEXIT_LITE (11*32+17) /* "" Fill RSB on VM exit when EIBRS is enabled */
#define X86_FEATURE_SGX_EDECCSSA (11*32+18) /* "" SGX EDECCSSA user leaf function */
#define X86_FEATURE_CALL_DEPTH (11*32+19) /* "" Call depth tracking for RSB stuffing */
-
#define X86_FEATURE_MSR_TSX_CTRL (11*32+20) /* "" MSR IA32_TSX_CTRL (Intel) implemented */
+#define X86_FEATURE_NO_NESTED_DATA_BP (11*32+21) /* "" AMD No Nested Data Breakpoints */
/* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
#define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */
diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
index f53944fb8f7f..079e253e1049 100644
--- a/arch/x86/kernel/cpu/scattered.c
+++ b/arch/x86/kernel/cpu/scattered.c
@@ -45,6 +45,7 @@ static const struct cpuid_bit cpuid_bits[] = {
{ X86_FEATURE_CPB, CPUID_EDX, 9, 0x80000007, 0 },
{ X86_FEATURE_PROC_FEEDBACK, CPUID_EDX, 11, 0x80000007, 0 },
{ X86_FEATURE_MBA, CPUID_EBX, 6, 0x80000008, 0 },
+ { X86_FEATURE_NO_NESTED_DATA_BP,CPUID_EAX, 0, 0x80000021, 0 },
{ X86_FEATURE_PERFMON_V2, CPUID_EAX, 0, 0x80000022, 0 },
{ X86_FEATURE_AMD_LBR_V2, CPUID_EAX, 1, 0x80000022, 0 },
{ 0, 0, 0, 0, 0 }
--
2.34.1
Add the AMD Automatic IBRS feature bit to those being
propagated to the guest.
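With the bit propagated, KVM can gate guest-visible behaviour on
guest_cpuid_has(). For example, the EFER validity check added by the
AutoIBRS support patch in this series relies on it:

	/* Guests may set EFER.AIBRSE only if their CPUID advertises AutoIBRS. */
	if (efer & EFER_AUTOIBRS && !guest_cpuid_has(vcpu, X86_FEATURE_AUTOIBRS))
		return false;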
Signed-off-by: Kim Phillips <[email protected]>
---
arch/x86/kvm/cpuid.c | 2 +-
arch/x86/kvm/reverse_cpuid.h | 2 ++
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 8e37760cea1b..acda3883a905 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -743,7 +743,7 @@ void kvm_set_cpu_caps(void)
*/
kvm_cpu_cap_init_scattered(CPUID_8000_0021_EAX,
SF(NO_NESTED_DATA_BP) | SF(LFENCE_RDTSC) |
- SF(NULL_SEL_CLR_BASE));
+ SF(NULL_SEL_CLR_BASE) | SF(AUTOIBRS));
if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
kvm_cpu_cap_set(X86_FEATURE_LFENCE_RDTSC);
if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
diff --git a/arch/x86/kvm/reverse_cpuid.h b/arch/x86/kvm/reverse_cpuid.h
index 184614e27d5b..0bf02c02bb0a 100644
--- a/arch/x86/kvm/reverse_cpuid.h
+++ b/arch/x86/kvm/reverse_cpuid.h
@@ -30,6 +30,7 @@ enum kvm_only_cpuid_leafs {
#define KVM_X86_FEATURE_NO_NESTED_DATA_BP KVM_X86_FEATURE(CPUID_8000_0021_EAX, 0)
#define KVM_X86_FEATURE_LFENCE_RDTSC KVM_X86_FEATURE(CPUID_8000_0021_EAX, 2)
#define KVM_X86_FEATURE_NULL_SEL_CLR_BASE KVM_X86_FEATURE(CPUID_8000_0021_EAX, 6)
+#define KVM_X86_FEATURE_AUTOIBRS KVM_X86_FEATURE(CPUID_8000_0021_EAX, 8)
struct cpuid_reg {
u32 function;
@@ -89,6 +90,7 @@ static __always_inline u32 __feature_translate(int x86_feature)
case X86_FEATURE_NO_NESTED_DATA_BP: return KVM_X86_FEATURE_NO_NESTED_DATA_BP;
case X86_FEATURE_LFENCE_RDTSC: return KVM_X86_FEATURE_LFENCE_RDTSC;
case X86_FEATURE_NULL_SEL_CLR_BASE: return KVM_X86_FEATURE_NULL_SEL_CLR_BASE;
+ case X86_FEATURE_AUTOIBRS: return KVM_X86_FEATURE_AUTOIBRS;
default: break;
}
--
2.34.1
It's part of the CPUID 0x80000021 leaf, and defining it as a
scattered bit allows it and the other CPUID 0x80000021 EAX feature
bits to be propagated via kvm_set_cpu_caps() instead of being
open-coded in __do_cpuid_func().

Also use the feature bit definition in check_null_seg_clears_base()
instead of open-coding it.
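For reference, the before/after forms of that check (illustrative
comparison only; the real change is in the hunk below):

	/* Before: open-coded raw CPUID query. */
	if (c->extended_cpuid_level >= 0x80000021 &&
	    cpuid_eax(0x80000021) & BIT(6))
		return;

	/* After: the same test via the scattered feature bit. */
	if (cpu_has(c, X86_FEATURE_NULL_SEL_CLR_BASE))
		return;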
Signed-off-by: Kim Phillips <[email protected]>
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/kernel/cpu/common.c | 3 +--
arch/x86/kernel/cpu/scattered.c | 1 +
3 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index b16fdcedc2b5..5ddde18c1ae8 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -308,6 +308,7 @@
#define X86_FEATURE_CALL_DEPTH (11*32+19) /* "" Call depth tracking for RSB stuffing */
#define X86_FEATURE_MSR_TSX_CTRL (11*32+20) /* "" MSR IA32_TSX_CTRL (Intel) implemented */
#define X86_FEATURE_NO_NESTED_DATA_BP (11*32+21) /* "" AMD No Nested Data Breakpoints */
+#define X86_FEATURE_NULL_SEL_CLR_BASE (11*32+22) /* "" AMD Null Selector Clears Base */
/* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
#define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 73cc546e024d..8d28cd7c9072 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1683,8 +1683,7 @@ void check_null_seg_clears_base(struct cpuinfo_x86 *c)
return;
/* Zen3 CPUs advertise Null Selector Clears Base in CPUID. */
- if (c->extended_cpuid_level >= 0x80000021 &&
- cpuid_eax(0x80000021) & BIT(6))
+ if (cpu_has(c, X86_FEATURE_NULL_SEL_CLR_BASE))
return;
/*
diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
index 079e253e1049..d0734cc19d37 100644
--- a/arch/x86/kernel/cpu/scattered.c
+++ b/arch/x86/kernel/cpu/scattered.c
@@ -46,6 +46,7 @@ static const struct cpuid_bit cpuid_bits[] = {
{ X86_FEATURE_PROC_FEEDBACK, CPUID_EDX, 11, 0x80000007, 0 },
{ X86_FEATURE_MBA, CPUID_EBX, 6, 0x80000008, 0 },
{ X86_FEATURE_NO_NESTED_DATA_BP,CPUID_EAX, 0, 0x80000021, 0 },
+ { X86_FEATURE_NULL_SEL_CLR_BASE,CPUID_EAX, 6, 0x80000021, 0 },
{ X86_FEATURE_PERFMON_V2, CPUID_EAX, 0, 0x80000022, 0 },
{ X86_FEATURE_AMD_LBR_V2, CPUID_EAX, 1, 0x80000022, 0 },
{ 0, 0, 0, 0, 0 }
--
2.34.1
Since they're now all scattered, move CPUID 0x80000021 EAX feature
bit propagation to kvm_set_cpu_caps() instead of open-coding it in
__do_cpuid_func().
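Note that kvm_cpu_cap_init_scattered() relies on KVM's SF() macro,
which advertises a scattered bit only when the host actually has it.
Roughly (a sketch, not necessarily the exact kernel definition):

	/* Sketch: scattered-feature helper - advertise only if the host has it. */
	#define SF(name) (boot_cpu_has(X86_FEATURE_##name) ? F(name) : 0)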
Signed-off-by: Kim Phillips <[email protected]>
---
arch/x86/kvm/cpuid.c | 35 ++++++++++++++++++++---------------
arch/x86/kvm/reverse_cpuid.h | 22 ++++++++++++++++------
2 files changed, 36 insertions(+), 21 deletions(-)
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index c92c49a0b35b..8e37760cea1b 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -730,6 +730,25 @@ void kvm_set_cpu_caps(void)
0 /* SME */ | F(SEV) | 0 /* VM_PAGE_FLUSH */ | F(SEV_ES) |
F(SME_COHERENT));
+ /*
+ * Pass down these bits:
+ * EAX 0 NNDBP, Processor ignores nested data breakpoints
+ * EAX 2 LAS, LFENCE always serializing
+ * EAX 6 NSCB, Null selector clear base
+ * EAX 8 Automatic IBRS
+ *
+ * Other defined bits are for MSRs that KVM does not expose:
+ * EAX 3 SPCL, SMM page configuration lock
+ * EAX 13 PCMSR, Prefetch control MSR
+ */
+ kvm_cpu_cap_init_scattered(CPUID_8000_0021_EAX,
+ SF(NO_NESTED_DATA_BP) | SF(LFENCE_RDTSC) |
+ SF(NULL_SEL_CLR_BASE));
+ if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
+ kvm_cpu_cap_set(X86_FEATURE_LFENCE_RDTSC);
+ if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
+ kvm_cpu_cap_set(X86_FEATURE_NULL_SEL_CLR_BASE);
+
kvm_cpu_cap_mask(CPUID_C000_0001_EDX,
F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |
F(ACE2) | F(ACE2_EN) | F(PHE) | F(PHE_EN) |
@@ -1211,21 +1230,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
break;
case 0x80000021:
entry->ebx = entry->ecx = entry->edx = 0;
- /*
- * Pass down these bits:
- * EAX 0 NNDBP, Processor ignores nested data breakpoints
- * EAX 2 LAS, LFENCE always serializing
- * EAX 6 NSCB, Null selector clear base
- *
- * Other defined bits are for MSRs that KVM does not expose:
- * EAX 3 SPCL, SMM page configuration lock
- * EAX 13 PCMSR, Prefetch control MSR
- */
- entry->eax &= BIT(0) | BIT(2) | BIT(6);
- if (static_cpu_has(X86_FEATURE_LFENCE_RDTSC))
- entry->eax |= BIT(2);
- if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
- entry->eax |= BIT(6);
+ cpuid_entry_override(entry, CPUID_8000_0021_EAX);
break;
/*Add support for Centaur's CPUID instruction*/
case 0xC0000000:
diff --git a/arch/x86/kvm/reverse_cpuid.h b/arch/x86/kvm/reverse_cpuid.h
index 4e5b8444f161..184614e27d5b 100644
--- a/arch/x86/kvm/reverse_cpuid.h
+++ b/arch/x86/kvm/reverse_cpuid.h
@@ -13,6 +13,7 @@
*/
enum kvm_only_cpuid_leafs {
CPUID_12_EAX = NCAPINTS,
+ CPUID_8000_0021_EAX,
NR_KVM_CPU_CAPS,
NKVMCAPINTS = NR_KVM_CPU_CAPS - NCAPINTS,
@@ -25,6 +26,11 @@ enum kvm_only_cpuid_leafs {
#define KVM_X86_FEATURE_SGX2 KVM_X86_FEATURE(CPUID_12_EAX, 1)
#define KVM_X86_FEATURE_SGX_EDECCSSA KVM_X86_FEATURE(CPUID_12_EAX, 11)
+/* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX) */
+#define KVM_X86_FEATURE_NO_NESTED_DATA_BP KVM_X86_FEATURE(CPUID_8000_0021_EAX, 0)
+#define KVM_X86_FEATURE_LFENCE_RDTSC KVM_X86_FEATURE(CPUID_8000_0021_EAX, 2)
+#define KVM_X86_FEATURE_NULL_SEL_CLR_BASE KVM_X86_FEATURE(CPUID_8000_0021_EAX, 6)
+
struct cpuid_reg {
u32 function;
u32 index;
@@ -49,6 +55,7 @@ static const struct cpuid_reg reverse_cpuid[] = {
[CPUID_7_1_EAX] = { 7, 1, CPUID_EAX},
[CPUID_12_EAX] = {0x00000012, 0, CPUID_EAX},
[CPUID_8000_001F_EAX] = {0x8000001f, 0, CPUID_EAX},
+ [CPUID_8000_0021_EAX] = {0x80000021, 0, CPUID_EAX},
};
/*
@@ -75,12 +82,15 @@ static __always_inline void reverse_cpuid_check(unsigned int x86_leaf)
*/
static __always_inline u32 __feature_translate(int x86_feature)
{
- if (x86_feature == X86_FEATURE_SGX1)
- return KVM_X86_FEATURE_SGX1;
- else if (x86_feature == X86_FEATURE_SGX2)
- return KVM_X86_FEATURE_SGX2;
- else if (x86_feature == X86_FEATURE_SGX_EDECCSSA)
- return KVM_X86_FEATURE_SGX_EDECCSSA;
+ switch (x86_feature) {
+ case X86_FEATURE_SGX1: return KVM_X86_FEATURE_SGX1;
+ case X86_FEATURE_SGX2: return KVM_X86_FEATURE_SGX2;
+ case X86_FEATURE_SGX_EDECCSSA: return KVM_X86_FEATURE_SGX_EDECCSSA;
+ case X86_FEATURE_NO_NESTED_DATA_BP: return KVM_X86_FEATURE_NO_NESTED_DATA_BP;
+ case X86_FEATURE_LFENCE_RDTSC: return KVM_X86_FEATURE_LFENCE_RDTSC;
+ case X86_FEATURE_NULL_SEL_CLR_BASE: return KVM_X86_FEATURE_NULL_SEL_CLR_BASE;
+ default: break;
+ }
return x86_feature;
}
--
2.34.1
The AMD Zen4 core supports a new feature called Automatic IBRS.

It is a "set-and-forget" feature, meaning that, like Intel's
Enhanced IBRS, the hardware manages its IBRS mitigation
resources automatically across CPL transitions.

The feature is advertised by CPUID_Fn80000021_EAX bit 8 and is
enabled by setting MSR C000_0080 (EFER) bit 21.

Enable Automatic IBRS by default if the CPU feature is present.
It typically provides greater performance than the incumbent
generic retpoline mitigation.

Reuse the SPECTRE_V2_EIBRS spectre_v2_mitigation enum, since
AMD Automatic IBRS and Intel Enhanced IBRS have similar
enablement in bugs.c.

Also allow spectre_v2=autoibrs on the kernel command line.
'spectre_v2=autoibrs,retpoline' and 'autoibrs,lfence' are
honoured but not required. AutoIBRS will also be enabled if
the =eibrs[,{lfence,retpoline}] variants are specified.
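The enable path boils down to the following (condensed from the
bugs.c hunk below):

	/* Prefer AutoIBRS when available; otherwise fall back to SPEC_CTRL.IBRS. */
	if (boot_cpu_has(X86_FEATURE_AUTOIBRS)) {
		msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);	/* EFER bit 21 */
	} else {
		x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
		write_spec_ctrl_current(x86_spec_ctrl_base, true);
	}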
Signed-off-by: Kim Phillips <[email protected]>
---
.../admin-guide/kernel-parameters.txt | 9 +++++---
arch/x86/include/asm/msr-index.h | 2 ++
arch/x86/kernel/cpu/bugs.c | 23 ++++++++++++-------
arch/x86/kernel/cpu/common.c | 8 +++++++
arch/x86/kvm/svm/svm.c | 3 +++
arch/x86/kvm/x86.c | 3 +++
6 files changed, 37 insertions(+), 11 deletions(-)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index a465d5242774..880016d06a8a 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5698,9 +5698,12 @@
retpoline,generic - Retpolines
retpoline,lfence - LFENCE; indirect branch
retpoline,amd - alias for retpoline,lfence
- eibrs - enhanced IBRS
- eibrs,retpoline - enhanced IBRS + Retpolines
- eibrs,lfence - enhanced IBRS + LFENCE
+ eibrs - Enhanced/Auto IBRS
+ autoibrs - Enhanced/Auto IBRS
+ eibrs,retpoline - Enhanced/Auto IBRS + Retpolines
+ autoibrs,retpoline- Enhanced/Auto IBRS + Retpolines
+ eibrs,lfence - Enhanced/Auto IBRS + LFENCE
+ autoibrs,lfence - Enhanced/Auto IBRS + LFENCE
ibrs - use IBRS to protect kernel
Not specifying this option is equivalent to
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 8519191c6409..88fdd75f6a2f 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -30,6 +30,7 @@
#define _EFER_SVME 12 /* Enable virtualization */
#define _EFER_LMSLE 13 /* Long Mode Segment Limit Enable */
#define _EFER_FFXSR 14 /* Enable Fast FXSAVE/FXRSTOR */
+#define _EFER_AUTOIBRS 21 /* Enable Automatic IBRS */
#define EFER_SCE (1<<_EFER_SCE)
#define EFER_LME (1<<_EFER_LME)
@@ -38,6 +39,7 @@
#define EFER_SVME (1<<_EFER_SVME)
#define EFER_LMSLE (1<<_EFER_LMSLE)
#define EFER_FFXSR (1<<_EFER_FFXSR)
+#define EFER_AUTOIBRS (1<<_EFER_AUTOIBRS)
/* Intel MSRs. Some also available on other CPUs */
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index aa0819252c88..5f48dd4dbc48 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1222,9 +1222,9 @@ static const char * const spectre_v2_strings[] = {
[SPECTRE_V2_NONE] = "Vulnerable",
[SPECTRE_V2_RETPOLINE] = "Mitigation: Retpolines",
[SPECTRE_V2_LFENCE] = "Mitigation: LFENCE",
- [SPECTRE_V2_EIBRS] = "Mitigation: Enhanced IBRS",
- [SPECTRE_V2_EIBRS_LFENCE] = "Mitigation: Enhanced IBRS + LFENCE",
- [SPECTRE_V2_EIBRS_RETPOLINE] = "Mitigation: Enhanced IBRS + Retpolines",
+ [SPECTRE_V2_EIBRS] = "Mitigation: Enhanced / Automatic IBRS",
+ [SPECTRE_V2_EIBRS_LFENCE] = "Mitigation: Enhanced / Automatic IBRS + LFENCE",
+ [SPECTRE_V2_EIBRS_RETPOLINE] = "Mitigation: Enhanced / Automatic IBRS + Retpolines",
[SPECTRE_V2_IBRS] = "Mitigation: IBRS",
};
@@ -1240,8 +1240,11 @@ static const struct {
{ "retpoline,lfence", SPECTRE_V2_CMD_RETPOLINE_LFENCE, false },
{ "retpoline,generic", SPECTRE_V2_CMD_RETPOLINE_GENERIC, false },
{ "eibrs", SPECTRE_V2_CMD_EIBRS, false },
+ { "autoibrs", SPECTRE_V2_CMD_EIBRS, false },
{ "eibrs,lfence", SPECTRE_V2_CMD_EIBRS_LFENCE, false },
+ { "autoibrs,lfence", SPECTRE_V2_CMD_EIBRS_LFENCE, false },
{ "eibrs,retpoline", SPECTRE_V2_CMD_EIBRS_RETPOLINE, false },
+ { "autoibrs,retpoline", SPECTRE_V2_CMD_EIBRS_RETPOLINE, false },
{ "auto", SPECTRE_V2_CMD_AUTO, false },
{ "ibrs", SPECTRE_V2_CMD_IBRS, false },
};
@@ -1293,7 +1296,7 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
cmd == SPECTRE_V2_CMD_EIBRS_LFENCE ||
cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) &&
!boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
- pr_err("%s selected but CPU doesn't have eIBRS. Switching to AUTO select\n",
+ pr_err("%s selected but CPU doesn't have Enhanced or Automatic IBRS. Switching to AUTO select\n",
mitigation_options[i].option);
return SPECTRE_V2_CMD_AUTO;
}
@@ -1479,8 +1482,12 @@ static void __init spectre_v2_select_mitigation(void)
pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
if (spectre_v2_in_ibrs_mode(mode)) {
- x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
- write_spec_ctrl_current(x86_spec_ctrl_base, true);
+ if (boot_cpu_has(X86_FEATURE_AUTOIBRS)) {
+ msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);
+ } else {
+ x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
+ write_spec_ctrl_current(x86_spec_ctrl_base, true);
+ }
}
switch (mode) {
@@ -1564,8 +1571,8 @@ static void __init spectre_v2_select_mitigation(void)
/*
* Retpoline protects the kernel, but doesn't protect firmware. IBRS
* and Enhanced IBRS protect firmware too, so enable IBRS around
- * firmware calls only when IBRS / Enhanced IBRS aren't otherwise
- * enabled.
+ * firmware calls only when IBRS / Enhanced / Automatic IBRS aren't
+ * otherwise enabled.
*
* Use "mode" to check Enhanced IBRS instead of boot_cpu_has(), because
* the user might select retpoline on the kernel command line and if
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 8d28cd7c9072..0af7b963f2e4 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1406,6 +1406,14 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
!(ia32_cap & ARCH_CAP_PBRSB_NO))
setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
+ /*
+ * AMD's AutoIBRS is equivalent to Intel's eIBRS - use the Intel flag only
+ * after IBRS_ENHANCED bugs such as BUG_EIBRS_PBRSB above have been
+ * determined.
+ */
+ if (cpu_has(c, X86_FEATURE_AUTOIBRS))
+ setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
+
if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
return;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4b6d2b050e57..3ac3d4cfce24 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4960,6 +4960,9 @@ static __init int svm_hardware_setup(void)
tsc_aux_uret_slot = kvm_add_user_return_msr(MSR_TSC_AUX);
+ if (boot_cpu_has(X86_FEATURE_AUTOIBRS))
+ kvm_enable_efer_bits(EFER_AUTOIBRS);
+
/* Check for pause filtering support */
if (!boot_cpu_has(X86_FEATURE_PAUSEFILTER)) {
pause_filter_count = 0;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 490ec23c8450..db0f522fd597 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1682,6 +1682,9 @@ static int do_get_msr_feature(struct kvm_vcpu *vcpu, unsigned index, u64 *data)
static bool __kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer)
{
+ if (efer & EFER_AUTOIBRS && !guest_cpuid_has(vcpu, X86_FEATURE_AUTOIBRS))
+ return false;
+
if (efer & EFER_FFXSR && !guest_cpuid_has(vcpu, X86_FEATURE_FXSR_OPT))
return false;
--
2.34.1
On Tue, Nov 29, 2022 at 05:58:15PM -0600, Kim Phillips wrote:
>--- a/arch/x86/kernel/cpu/common.c
>+++ b/arch/x86/kernel/cpu/common.c
>@@ -1406,6 +1406,14 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
> !(ia32_cap & ARCH_CAP_PBRSB_NO))
> setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
>
>+ /*
>+ * AMD's AutoIBRS is equivalent to Intel's eIBRS - use the Intel flag only
>+ * after IBRS_ENHANCED bugs such as BUG_EIBRS_PBRSB above have been
>+ * determined.
>+ */
Minor comment: setting NO_EIBRS_PBRSB in cpu_vuln_whitelist for
non-eIBRS CPUs as well (AMD and others) would remove this ordering
dependency.
>+ if (cpu_has(c, X86_FEATURE_AUTOIBRS))
>+ setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);