2023-01-24 16:33:42

by Kim Phillips

Subject: [PATCH v9 0/8] x86/cpu, kvm: Support AMD Automatic IBRS

The AMD Zen4 core supports a new feature called Automatic IBRS
(Indirect Branch Restricted Speculation).

Enable Automatic IBRS by default if the CPU feature is present.
It typically provides greater performance than the incumbent
generic retpolines mitigation.

Patch 1 [unchanged from v7] adds support for the leaf that
contains the AutoIBRS feature bit.

Patch 2 moves the leaf's open-coded code from __do_cpuid_func()
to kvm_set_cpu_caps() in preparation for adding the features in
their native leaf.

Patches 3-6 introduce the new leaf's supported bits one by one.

Patch 7 [unchanged from v7] adds support for AutoIBRS by turning
its EFER enablement bit on at startup if the feature is available.

Patch 8 [unchanged from v7] adds support for propagating AutoIBRS
to the guest.

v9: Address comments from Sean Christopherson:
- Patch 1: "KVM guests", not "VM guests"
- Patches 2 & 8:
- Subject prefix change to "KVM: x86: "
- Patch 2:
- Added Sean's blurb about the synthetic features to the commit message
- Added BIT(2) /* LFENCE always serializing */ to
kvm_cpu_cap_mask() in kvm_set_cpu_caps() to mirror true code movement
- Patch 4: Removed the kvm_cpu_cap_set() for LFENCE_RDTSC since the
previous set_cpu_cap()s and the presence of kvm_cpu_cap_mask() will
effectively synthesize the feature now that the feature bit is in its
proper place.

v8: https://lore.kernel.org/lkml/[email protected]/
Address comments from Sean Christopherson, Boris:
- Move open-coded cpuid leaf 0x80000021 EAX bit propagation code
in a single step/patch to avoid CPUID bits getting cleared
by the open-coded ANDs coming after cpuid_entry_override().
- Includes changes Boris made when committing v7 to tip/x86-cpu, i.e.:
- Removing "AMD" prefix from feature comment text in cpufeatures.h
- Convert test in check_null_seg_clears_base() too.
- commit message changes

v7: https://lore.kernel.org/lkml/[email protected]/
- Add Dave Hansen's Acked-by to unchanged patch 6/7
- Change patch 3/7 to not bother to set MSR DE_CFG[1]
if X86_FEATURE_LFENCE_RDTSC is already set [Boris]
- v6 went out with two 1/1's, try to not do that again

v6: https://lore.kernel.org/lkml/[email protected]/
Address v5 comment from Boris:
- Move CPUID leaf 0x80000021 EAX feature bits from scattered
to the new whole leaf since the majority of the features
will be used in the kernel and thus a separate leaf is
appropriate.

v5: https://lore.kernel.org/lkml/[email protected]/
Address v4 comments from Dave Hansen, Pawan Gupta, and Boris:
- Don't add new user-visible 'autoibrs' command line
options that have to be documented: reuse 'eibrs'
- Update Documentation/admin-guide/hw-vuln/spectre.rst
- Add NO_EIBRS_PBRSB to Hygon as well
- Re-word commit texts to not use words like 'us'

v4: https://lore.kernel.org/lkml/[email protected]/
Moved some kvm bits that had crept into patch 6/7 back into 7/7,
and addressed v3 comments:
- Don't put ", kvm" in titles of patches that don't touch kvm. [SeanC]
- () after function names, i.e. kvm_set_cpu_caps(). [SeanC]
- follow the established kvm_cpu_cap_init_scattered() style [SeanC]
- Add using cpu_feature_enabled() instead of static_cpu_has() to
commit text [SeanC]
- Pawan Gupta mentioned that the ordering of enabling the Intel
feature bit past Intel EIBRS bug detection could be avoided
by adding NO_EIBRS_PBRSB to cpu_vuln_whitelist, so did that
which allowed regrouping all EIBRS related code to one place
in cpu_set_bug_bits().

v3: https://lore.kernel.org/lkml/[email protected]/
- Remove Co-developed-bys. They require signed-off-bys,
so co-developers need to add them themselves.
- update check_null_seg_clears_base() [Boris]
- Made the feature bit additions separate patches
because v2 patch was clearly doing too many things at once.

v2: https://lore.kernel.org/lkml/[email protected]/
https://lkml.org/lkml/2022/11/23/1690
- Use synthetic/scattered bits instead of introducing new leaf [Boris]
- Combine the rest of the leaf's bits being used [Paolo]
Note: Bits not used by the host can be moved to kvm/cpuid.c if
maintainers do not want them in cpufeatures.h.
- Hoist bitsetting code to kvm_set_cpu_caps(), and use
cpuid_entry_override() in __do_cpuid_func() [Paolo]
- Reuse SPECTRE_V2_EIBRS spectre_v2_mitigation enum [Boris, PeterZ, D.Hansen]
- Change from Boris' diff:
Moved setting X86_FEATURE_IBRS_ENHANCED to after BUG_EIBRS_PBRSB
so PBRSB mitigations wouldn't be enabled.
- Allow for users to specify "autoibrs,lfence/retpoline" instead
of actively preventing the extra protections. AutoIBRS doesn't
require the extra protection, but allow it anyway.

v1: https://lore.kernel.org/lkml/[email protected]/

Signed-off-by: Kim Phillips <[email protected]>
Cc: Borislav Petkov (AMD) <[email protected]>
Cc: Boris Ostrovsky <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Joao Martins <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Sean Christopherson <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: David Woodhouse <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Juergen Gross <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Tom Lendacky <[email protected]>
Cc: Alexey Kardashevskiy <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]

Kim Phillips (8):
x86/cpu, kvm: Add support for CPUID_80000021_EAX
KVM: x86: Move open-coded cpuid leaf 0x80000021 EAX bit propagation
code
x86/cpu, kvm: Add the NO_NESTED_DATA_BP feature
x86/cpu, kvm: Move X86_FEATURE_LFENCE_RDTSC to its native leaf
x86/cpu, kvm: Add the Null Selector Clears Base feature
x86/cpu, kvm: Add the SMM_CTL MSR not present feature
x86/cpu: Support AMD Automatic IBRS
KVM: x86: Propagate the AMD Automatic IBRS feature to the guest

Documentation/admin-guide/hw-vuln/spectre.rst | 6 ++--
.../admin-guide/kernel-parameters.txt | 6 ++--
arch/x86/include/asm/cpufeature.h | 7 +++--
arch/x86/include/asm/cpufeatures.h | 11 ++++++--
arch/x86/include/asm/disabled-features.h | 3 +-
arch/x86/include/asm/msr-index.h | 2 ++
arch/x86/include/asm/required-features.h | 3 +-
arch/x86/kernel/cpu/amd.c | 2 +-
arch/x86/kernel/cpu/bugs.c | 20 +++++++------
arch/x86/kernel/cpu/common.c | 26 +++++++++--------
arch/x86/kvm/cpuid.c | 28 ++++++-------------
arch/x86/kvm/reverse_cpuid.h | 1 +
arch/x86/kvm/svm/svm.c | 3 ++
arch/x86/kvm/x86.c | 3 ++
14 files changed, 70 insertions(+), 51 deletions(-)

--
2.34.1



2023-01-24 16:34:03

by Kim Phillips

Subject: [PATCH v9 1/8] x86/cpu, kvm: Add support for CPUID_80000021_EAX

Add support for CPUID leaf 0x80000021 EAX. The majority of the features
will be used in the kernel and thus a separate leaf is appropriate.

Include KVM's reverse_cpuid entry because features are used by KVM guests, too.

[ bp: Massage commit message. ]

Signed-off-by: Kim Phillips <[email protected]>
---
arch/x86/include/asm/cpufeature.h | 7 +++++--
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/disabled-features.h | 3 ++-
arch/x86/include/asm/required-features.h | 3 ++-
arch/x86/kernel/cpu/common.c | 3 +++
arch/x86/kvm/reverse_cpuid.h | 1 +
6 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index 1a85e1fb0922..ce0c8f7d3218 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -32,6 +32,7 @@ enum cpuid_leafs
CPUID_8000_0007_EBX,
CPUID_7_EDX,
CPUID_8000_001F_EAX,
+ CPUID_8000_0021_EAX,
};

#define X86_CAP_FMT_NUM "%d:%d"
@@ -94,8 +95,9 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 17, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 18, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 19, feature_bit) || \
+ CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 20, feature_bit) || \
REQUIRED_MASK_CHECK || \
- BUILD_BUG_ON_ZERO(NCAPINTS != 20))
+ BUILD_BUG_ON_ZERO(NCAPINTS != 21))

#define DISABLED_MASK_BIT_SET(feature_bit) \
( CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 0, feature_bit) || \
@@ -118,8 +120,9 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 17, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 18, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 19, feature_bit) || \
+ CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 20, feature_bit) || \
DISABLED_MASK_CHECK || \
- BUILD_BUG_ON_ZERO(NCAPINTS != 20))
+ BUILD_BUG_ON_ZERO(NCAPINTS != 21))

#define cpu_has(c, bit) \
(__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 : \
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 123613cdae93..552bd3943370 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -13,7 +13,7 @@
/*
* Defines x86 CPU feature bits
*/
-#define NCAPINTS 20 /* N 32-bit words worth of info */
+#define NCAPINTS 21 /* N 32-bit words worth of info */
#define NBUGINTS 1 /* N 32-bit bug flags */

/*
diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
index c44b56f7ffba..5dfa4fb76f4b 100644
--- a/arch/x86/include/asm/disabled-features.h
+++ b/arch/x86/include/asm/disabled-features.h
@@ -124,6 +124,7 @@
#define DISABLED_MASK17 0
#define DISABLED_MASK18 0
#define DISABLED_MASK19 0
-#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 20)
+#define DISABLED_MASK20 0
+#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 21)

#endif /* _ASM_X86_DISABLED_FEATURES_H */
diff --git a/arch/x86/include/asm/required-features.h b/arch/x86/include/asm/required-features.h
index aff774775c67..7ba1726b71c7 100644
--- a/arch/x86/include/asm/required-features.h
+++ b/arch/x86/include/asm/required-features.h
@@ -98,6 +98,7 @@
#define REQUIRED_MASK17 0
#define REQUIRED_MASK18 0
#define REQUIRED_MASK19 0
-#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 20)
+#define REQUIRED_MASK20 0
+#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 21)

#endif /* _ASM_X86_REQUIRED_FEATURES_H */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 5fe56f0ec9d7..094dbcd63f2a 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1093,6 +1093,9 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
if (c->extended_cpuid_level >= 0x8000001f)
c->x86_capability[CPUID_8000_001F_EAX] = cpuid_eax(0x8000001f);

+ if (c->extended_cpuid_level >= 0x80000021)
+ c->x86_capability[CPUID_8000_0021_EAX] = cpuid_eax(0x80000021);
+
init_scattered_cpuid_features(c);
init_speculation_control(c);

diff --git a/arch/x86/kvm/reverse_cpuid.h b/arch/x86/kvm/reverse_cpuid.h
index 042d0aca3c92..81f4e9ce0c77 100644
--- a/arch/x86/kvm/reverse_cpuid.h
+++ b/arch/x86/kvm/reverse_cpuid.h
@@ -68,6 +68,7 @@ static const struct cpuid_reg reverse_cpuid[] = {
[CPUID_12_EAX] = {0x00000012, 0, CPUID_EAX},
[CPUID_8000_001F_EAX] = {0x8000001f, 0, CPUID_EAX},
[CPUID_7_1_EDX] = { 7, 1, CPUID_EDX},
+ [CPUID_8000_0021_EAX] = {0x80000021, 0, CPUID_EAX},
};

/*
--
2.34.1


2023-01-24 16:34:19

by Kim Phillips

Subject: [PATCH v9 2/8] KVM: x86: Move open-coded cpuid leaf 0x80000021 EAX bit propagation code

Move code from __do_cpuid_func() to kvm_set_cpu_caps() in preparation
for adding the features in their native leaf.

Also drop the bit description comments as the code will be more
self-describing once the individual features are added.

Whilst there, switch to using the more efficient cpu_feature_enabled()
instead of static_cpu_has().

Note, LFENCE_RDTSC and "NULL selector clears base" are currently
synthetic, Linux-defined feature flags as Linux tracking of the features
predates AMD's definition. Keep the manual propagation of the flags from
their synthetic counterparts until the kernel fully converts to AMD's
definition, otherwise KVM would stop synthesizing the flags as intended.
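
As a rough illustration of why that manual propagation is needed, here
is a toy user-space model (a sketch only; cap_mask(), host_eax and
kvm_caps are made-up stand-ins, not KVM symbols): the mask step ANDs
against the raw host CPUID, so a kernel-synthesized flag that the
hardware doesn't report would otherwise be dropped.

#include <stdint.h>
#include <stdio.h>

static uint32_t host_eax;	/* raw host CPUID 0x80000021 EAX */
static uint32_t kvm_caps;	/* KVM's supported-feature word */

/* Stand-in for kvm_cpu_cap_mask(): AND the mask with host CPUID. */
static void cap_mask(uint32_t mask)
{
	kvm_caps = host_eax & mask;
}

int main(void)
{
	host_eax = 0;		/* CPU predates leaf 0x80000021 */
	cap_mask(1u << 2);	/* try to advertise "LFENCE serializing" */
	printf("after mask: %#x\n", kvm_caps);	/* 0: bit was dropped */

	/*
	 * ... so the flag still has to be set explicitly afterwards,
	 * which is what the hunk below keeps doing for the synthetic
	 * LFENCE_RDTSC and NULL_SEL_CLR_BASE flags.
	 */
	kvm_caps |= 1u << 2;
	printf("after synthesis: %#x\n", kvm_caps);	/* 0x4 */
	return 0;
}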

Signed-off-by: Kim Phillips <[email protected]>
---
arch/x86/kvm/cpuid.c | 31 ++++++++++++-------------------
1 file changed, 12 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 596061c1610e..a1047763fdd3 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -741,6 +741,17 @@ void kvm_set_cpu_caps(void)
0 /* SME */ | F(SEV) | 0 /* VM_PAGE_FLUSH */ | F(SEV_ES) |
F(SME_COHERENT));

+ kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
+ BIT(0) /* NO_NESTED_DATA_BP */ |
+ BIT(2) /* LFENCE Always serializing */ | 0 /* SmmPgCfgLock */ |
+ BIT(6) /* NULL_SEL_CLR_BASE */ | 0 /* PrefetchCtlMsr */
+ );
+ if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
+ kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(2) /* LFENCE Always serializing */;
+ if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
+ kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(6) /* NULL_SEL_CLR_BASE */;
+ kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(9) /* NO_SMM_CTL_MSR */;
+
kvm_cpu_cap_mask(CPUID_C000_0001_EDX,
F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |
F(ACE2) | F(ACE2_EN) | F(PHE) | F(PHE_EN) |
@@ -1222,25 +1233,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
break;
case 0x80000021:
entry->ebx = entry->ecx = entry->edx = 0;
- /*
- * Pass down these bits:
- * EAX 0 NNDBP, Processor ignores nested data breakpoints
- * EAX 2 LAS, LFENCE always serializing
- * EAX 6 NSCB, Null selector clear base
- *
- * Other defined bits are for MSRs that KVM does not expose:
- * EAX 3 SPCL, SMM page configuration lock
- * EAX 13 PCMSR, Prefetch control MSR
- *
- * KVM doesn't support SMM_CTL.
- * EAX 9 SMM_CTL MSR is not supported
- */
- entry->eax &= BIT(0) | BIT(2) | BIT(6);
- entry->eax |= BIT(9);
- if (static_cpu_has(X86_FEATURE_LFENCE_RDTSC))
- entry->eax |= BIT(2);
- if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
- entry->eax |= BIT(6);
+ cpuid_entry_override(entry, CPUID_8000_0021_EAX);
break;
/*Add support for Centaur's CPUID instruction*/
case 0xC0000000:
--
2.34.1


2023-01-24 16:34:43

by Kim Phillips

Subject: [PATCH v9 3/8] x86/cpu, kvm: Add the NO_NESTED_DATA_BP feature

The "Processor ignores nested data breakpoints" feature was being
open-coded for KVM. Add the feature to its newly introduced
CPUID leaf 0x80000021 EAX proper.

Signed-off-by: Kim Phillips <[email protected]>
---
arch/x86/include/asm/cpufeatures.h | 3 +++
arch/x86/kvm/cpuid.c | 2 +-
2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 552bd3943370..4637fd7a84d6 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -430,6 +430,9 @@
#define X86_FEATURE_V_TSC_AUX (19*32+ 9) /* "" Virtual TSC_AUX */
#define X86_FEATURE_SME_COHERENT (19*32+10) /* "" AMD hardware-enforced cache coherency */

+/* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */
+#define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data Breakpoints */
+
/*
* BUG word(s)
*/
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index a1047763fdd3..9764499acce2 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -742,7 +742,7 @@ void kvm_set_cpu_caps(void)
F(SME_COHERENT));

kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
- BIT(0) /* NO_NESTED_DATA_BP */ |
+ F(NO_NESTED_DATA_BP) |
BIT(2) /* LFENCE Always serializing */ | 0 /* SmmPgCfgLock */ |
BIT(6) /* NULL_SEL_CLR_BASE */ | 0 /* PrefetchCtlMsr */
);
--
2.34.1


2023-01-24 16:35:05

by Kim Phillips

Subject: [PATCH v9 4/8] x86/cpu, kvm: Move X86_FEATURE_LFENCE_RDTSC to its native leaf

The LFENCE always serializing feature bit was defined as scattered
LFENCE_RDTSC and its native leaf bit position open-coded for KVM.
Add it to its newly added CPUID leaf 0x80000021 EAX proper.
With LFENCE_RDTSC in its proper place, the kernel's set_cpu_cap()
will effectively synthesize the feature for KVM going forward.

Drop the bit description comments now that the code is more self-describing.

Also, in init_amd(), don't bother setting DE_CFG[1] anymore.

Signed-off-by: Kim Phillips <[email protected]>
---
arch/x86/include/asm/cpufeatures.h | 3 ++-
arch/x86/kernel/cpu/amd.c | 2 +-
arch/x86/kvm/cpuid.c | 5 +----
3 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 4637fd7a84d6..e975822951b2 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -97,7 +97,7 @@
#define X86_FEATURE_SYSENTER32 ( 3*32+15) /* "" sysenter in IA32 userspace */
#define X86_FEATURE_REP_GOOD ( 3*32+16) /* REP microcode works well */
#define X86_FEATURE_AMD_LBR_V2 ( 3*32+17) /* AMD Last Branch Record Extension Version 2 */
-#define X86_FEATURE_LFENCE_RDTSC ( 3*32+18) /* "" LFENCE synchronizes RDTSC */
+/* FREE, was #define X86_FEATURE_LFENCE_RDTSC ( 3*32+18) "" LFENCE synchronizes RDTSC */
#define X86_FEATURE_ACC_POWER ( 3*32+19) /* AMD Accumulated Power Mechanism */
#define X86_FEATURE_NOPL ( 3*32+20) /* The NOPL (0F 1F) instructions */
#define X86_FEATURE_ALWAYS ( 3*32+21) /* "" Always-present feature */
@@ -432,6 +432,7 @@

/* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */
#define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data Breakpoints */
+#define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* "" LFENCE always serializing / synchronizes RDTSC */

/*
* BUG word(s)
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index f769d6d08b43..208c2ce8598a 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -956,7 +956,7 @@ static void init_amd(struct cpuinfo_x86 *c)

init_amd_cacheinfo(c);

- if (cpu_has(c, X86_FEATURE_XMM2)) {
+ if (!cpu_has(c, X86_FEATURE_LFENCE_RDTSC) && cpu_has(c, X86_FEATURE_XMM2)) {
/*
* Use LFENCE for execution serialization. On families which
* don't have that MSR, LFENCE is already serializing.
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 9764499acce2..448b5de98b8f 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -742,12 +742,9 @@ void kvm_set_cpu_caps(void)
F(SME_COHERENT));

kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
- F(NO_NESTED_DATA_BP) |
- BIT(2) /* LFENCE Always serializing */ | 0 /* SmmPgCfgLock */ |
+ F(NO_NESTED_DATA_BP) | F(LFENCE_RDTSC) | 0 /* SmmPgCfgLock */ |
BIT(6) /* NULL_SEL_CLR_BASE */ | 0 /* PrefetchCtlMsr */
);
- if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
- kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(2) /* LFENCE Always serializing */;
if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(6) /* NULL_SEL_CLR_BASE */;
kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(9) /* NO_SMM_CTL_MSR */;
--
2.34.1


2023-01-24 16:35:17

by Kim Phillips

Subject: [PATCH v9 5/8] x86/cpu, kvm: Add the Null Selector Clears Base feature

The Null Selector Clears Base feature was being open-coded for KVM.
Add it to its newly added native CPUID leaf 0x80000021 EAX proper.

Also drop the bit description comments now that the code is more self-describing.

[ bp: Convert test in check_null_seg_clears_base() too. ]

Signed-off-by: Kim Phillips <[email protected]>
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/kernel/cpu/common.c | 4 +---
arch/x86/kvm/cpuid.c | 4 ++--
3 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index e975822951b2..bb0b483dcfd1 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -433,6 +433,7 @@
/* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */
#define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data Breakpoints */
#define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* "" LFENCE always serializing / synchronizes RDTSC */
+#define X86_FEATURE_NULL_SEL_CLR_BASE (20*32+ 6) /* "" Null Selector Clears Base */

/*
* BUG word(s)
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 094dbcd63f2a..162352d42ce0 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1685,9 +1685,7 @@ void check_null_seg_clears_base(struct cpuinfo_x86 *c)
if (!IS_ENABLED(CONFIG_X86_64))
return;

- /* Zen3 CPUs advertise Null Selector Clears Base in CPUID. */
- if (c->extended_cpuid_level >= 0x80000021 &&
- cpuid_eax(0x80000021) & BIT(6))
+ if (cpu_has(c, X86_FEATURE_NULL_SEL_CLR_BASE))
return;

/*
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 448b5de98b8f..e2c403cd33f1 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -743,10 +743,10 @@ void kvm_set_cpu_caps(void)

kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
F(NO_NESTED_DATA_BP) | F(LFENCE_RDTSC) | 0 /* SmmPgCfgLock */ |
- BIT(6) /* NULL_SEL_CLR_BASE */ | 0 /* PrefetchCtlMsr */
+ F(NULL_SEL_CLR_BASE) | 0 /* PrefetchCtlMsr */
);
if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
- kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(6) /* NULL_SEL_CLR_BASE */;
+ kvm_cpu_cap_set(X86_FEATURE_NULL_SEL_CLR_BASE);
kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(9) /* NO_SMM_CTL_MSR */;

kvm_cpu_cap_mask(CPUID_C000_0001_EDX,
--
2.34.1


2023-01-24 16:35:34

by Kim Phillips

Subject: [PATCH v9 6/8] x86/cpu, kvm: Add the SMM_CTL MSR not present feature

The SMM_CTL MSR not present feature was being open-coded for KVM.
Add it to its newly added CPUID leaf 0x80000021 EAX proper.

Also drop the bit description comments now that the code is more
self-describing.

Signed-off-by: Kim Phillips <[email protected]>
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/kvm/cpuid.c | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index bb0b483dcfd1..8ef89d595771 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -434,6 +434,7 @@
#define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data Breakpoints */
#define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* "" LFENCE always serializing / synchronizes RDTSC */
#define X86_FEATURE_NULL_SEL_CLR_BASE (20*32+ 6) /* "" Null Selector Clears Base */
+#define X86_FEATURE_NO_SMM_CTL_MSR (20*32+ 9) /* "" SMM_CTL MSR is not present */

/*
* BUG word(s)
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index e2c403cd33f1..8519f4a993f7 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -747,7 +747,7 @@ void kvm_set_cpu_caps(void)
);
if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
kvm_cpu_cap_set(X86_FEATURE_NULL_SEL_CLR_BASE);
- kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(9) /* NO_SMM_CTL_MSR */;
+ kvm_cpu_cap_set(X86_FEATURE_NO_SMM_CTL_MSR);

kvm_cpu_cap_mask(CPUID_C000_0001_EDX,
F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |
--
2.34.1


2023-01-24 16:35:58

by Kim Phillips

Subject: [PATCH v9 7/8] x86/cpu: Support AMD Automatic IBRS

The AMD Zen4 core supports a new feature called Automatic IBRS.

It is a "set-and-forget" feature: like Intel's Enhanced IBRS, the hardware
manages its IBRS mitigation resources automatically across CPL transitions.

The feature is advertised by CPUID_Fn80000021_EAX bit 8 and is enabled by
setting MSR C000_0080 (EFER) bit 21.

Enable Automatic IBRS by default if the CPU feature is present. It typically
provides greater performance than the incumbent generic retpolines mitigation.

Reuse the SPECTRE_V2_EIBRS spectre_v2_mitigation enum. AMD Automatic IBRS and
Intel Enhanced IBRS have similar enablement. Add NO_EIBRS_PBRSB to
cpu_vuln_whitelist, since AMD Automatic IBRS isn't affected by PBRSB-eIBRS.

The kernel command line option spectre_v2=eibrs is used to select AMD Automatic
IBRS, if available.
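
For illustration only (this is not part of the patch), a minimal
user-space sketch of the detection side could look like the following.
Enabling the feature, i.e. setting EFER bit 21, is ring-0 work, which
the patch below does via msr_set_bit(MSR_EFER, _EFER_AUTOIBRS).

#include <cpuid.h>	/* GCC/Clang __get_cpuid() helper */
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* __get_cpuid() returns 0 if leaf 0x80000021 is out of range. */
	if (!__get_cpuid(0x80000021, &eax, &ebx, &ecx, &edx)) {
		puts("CPUID leaf 0x80000021 not supported");
		return 0;
	}

	/* CPUID_Fn80000021_EAX bit 8 advertises Automatic IBRS. */
	printf("AutoIBRS %ssupported\n", (eax & (1u << 8)) ? "" : "not ");
	return 0;
}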

Signed-off-by: Kim Phillips <[email protected]>
Acked-by: Dave Hansen <[email protected]>
---
Documentation/admin-guide/hw-vuln/spectre.rst | 6 +++---
.../admin-guide/kernel-parameters.txt | 6 +++---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/msr-index.h | 2 ++
arch/x86/kernel/cpu/bugs.c | 20 +++++++++++--------
arch/x86/kernel/cpu/common.c | 19 ++++++++++--------
6 files changed, 32 insertions(+), 22 deletions(-)

diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
index c4dcdb3d0d45..3fe6511c5405 100644
--- a/Documentation/admin-guide/hw-vuln/spectre.rst
+++ b/Documentation/admin-guide/hw-vuln/spectre.rst
@@ -610,9 +610,9 @@ kernel command line.
retpoline,generic Retpolines
retpoline,lfence LFENCE; indirect branch
retpoline,amd alias for retpoline,lfence
- eibrs enhanced IBRS
- eibrs,retpoline enhanced IBRS + Retpolines
- eibrs,lfence enhanced IBRS + LFENCE
+ eibrs Enhanced/Auto IBRS
+ eibrs,retpoline Enhanced/Auto IBRS + Retpolines
+ eibrs,lfence Enhanced/Auto IBRS + LFENCE
ibrs use IBRS to protect kernel

Not specifying this option is equivalent to
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 0ee891133d76..1d2f92edb5a1 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5729,9 +5729,9 @@
retpoline,generic - Retpolines
retpoline,lfence - LFENCE; indirect branch
retpoline,amd - alias for retpoline,lfence
- eibrs - enhanced IBRS
- eibrs,retpoline - enhanced IBRS + Retpolines
- eibrs,lfence - enhanced IBRS + LFENCE
+ eibrs - Enhanced/Auto IBRS
+ eibrs,retpoline - Enhanced/Auto IBRS + Retpolines
+ eibrs,lfence - Enhanced/Auto IBRS + LFENCE
ibrs - use IBRS to protect kernel

Not specifying this option is equivalent to
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 8ef89d595771..fdb8e09234ba 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -434,6 +434,7 @@
#define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data Breakpoints */
#define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* "" LFENCE always serializing / synchronizes RDTSC */
#define X86_FEATURE_NULL_SEL_CLR_BASE (20*32+ 6) /* "" Null Selector Clears Base */
+#define X86_FEATURE_AUTOIBRS (20*32+ 8) /* "" Automatic IBRS */
#define X86_FEATURE_NO_SMM_CTL_MSR (20*32+ 9) /* "" SMM_CTL MSR is not present */

/*
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 4e0a7ad17083..ad35355ee43e 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -25,6 +25,7 @@
#define _EFER_SVME 12 /* Enable virtualization */
#define _EFER_LMSLE 13 /* Long Mode Segment Limit Enable */
#define _EFER_FFXSR 14 /* Enable Fast FXSAVE/FXRSTOR */
+#define _EFER_AUTOIBRS 21 /* Enable Automatic IBRS */

#define EFER_SCE (1<<_EFER_SCE)
#define EFER_LME (1<<_EFER_LME)
@@ -33,6 +34,7 @@
#define EFER_SVME (1<<_EFER_SVME)
#define EFER_LMSLE (1<<_EFER_LMSLE)
#define EFER_FFXSR (1<<_EFER_FFXSR)
+#define EFER_AUTOIBRS (1<<_EFER_AUTOIBRS)

/* Intel MSRs. Some also available on other CPUs */

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 4a0add86c182..cf81848b72f4 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1238,9 +1238,9 @@ static const char * const spectre_v2_strings[] = {
[SPECTRE_V2_NONE] = "Vulnerable",
[SPECTRE_V2_RETPOLINE] = "Mitigation: Retpolines",
[SPECTRE_V2_LFENCE] = "Mitigation: LFENCE",
- [SPECTRE_V2_EIBRS] = "Mitigation: Enhanced IBRS",
- [SPECTRE_V2_EIBRS_LFENCE] = "Mitigation: Enhanced IBRS + LFENCE",
- [SPECTRE_V2_EIBRS_RETPOLINE] = "Mitigation: Enhanced IBRS + Retpolines",
+ [SPECTRE_V2_EIBRS] = "Mitigation: Enhanced / Automatic IBRS",
+ [SPECTRE_V2_EIBRS_LFENCE] = "Mitigation: Enhanced / Automatic IBRS + LFENCE",
+ [SPECTRE_V2_EIBRS_RETPOLINE] = "Mitigation: Enhanced / Automatic IBRS + Retpolines",
[SPECTRE_V2_IBRS] = "Mitigation: IBRS",
};

@@ -1309,7 +1309,7 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
cmd == SPECTRE_V2_CMD_EIBRS_LFENCE ||
cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) &&
!boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
- pr_err("%s selected but CPU doesn't have eIBRS. Switching to AUTO select\n",
+ pr_err("%s selected but CPU doesn't have Enhanced or Automatic IBRS. Switching to AUTO select\n",
mitigation_options[i].option);
return SPECTRE_V2_CMD_AUTO;
}
@@ -1495,8 +1495,12 @@ static void __init spectre_v2_select_mitigation(void)
pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);

if (spectre_v2_in_ibrs_mode(mode)) {
- x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
- update_spec_ctrl(x86_spec_ctrl_base);
+ if (boot_cpu_has(X86_FEATURE_AUTOIBRS)) {
+ msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);
+ } else {
+ x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
+ update_spec_ctrl(x86_spec_ctrl_base);
+ }
}

switch (mode) {
@@ -1580,8 +1584,8 @@ static void __init spectre_v2_select_mitigation(void)
/*
* Retpoline protects the kernel, but doesn't protect firmware. IBRS
* and Enhanced IBRS protect firmware too, so enable IBRS around
- * firmware calls only when IBRS / Enhanced IBRS aren't otherwise
- * enabled.
+ * firmware calls only when IBRS / Enhanced / Automatic IBRS aren't
+ * otherwise enabled.
*
* Use "mode" to check Enhanced IBRS instead of boot_cpu_has(), because
* the user might select retpoline on the kernel command line and if
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 162352d42ce0..8ce67a8a61a6 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1229,8 +1229,8 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
VULNWL_AMD(0x12, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),

/* FAMILY_ANY must be last, otherwise 0x0f - 0x12 matches won't work */
- VULNWL_AMD(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
- VULNWL_HYGON(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
+ VULNWL_AMD(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB),
+ VULNWL_HYGON(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB),

/* Zhaoxin Family 7 */
VULNWL(CENTAUR, 7, X86_MODEL_ANY, NO_SPECTRE_V2 | NO_SWAPGS | NO_MMIO),
@@ -1341,8 +1341,16 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
!cpu_has(c, X86_FEATURE_AMD_SSB_NO))
setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);

- if (ia32_cap & ARCH_CAP_IBRS_ALL)
+ /*
+ * AMD's AutoIBRS is equivalent to Intel's eIBRS - use the Intel feature
+ * flag and protect from vendor-specific bugs via the whitelist.
+ */
+ if ((ia32_cap & ARCH_CAP_IBRS_ALL) || cpu_has(c, X86_FEATURE_AUTOIBRS)) {
setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
+ if (!cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB) &&
+ !(ia32_cap & ARCH_CAP_PBRSB_NO))
+ setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
+ }

if (!cpu_matches(cpu_vuln_whitelist, NO_MDS) &&
!(ia32_cap & ARCH_CAP_MDS_NO)) {
@@ -1404,11 +1412,6 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
setup_force_cpu_bug(X86_BUG_RETBLEED);
}

- if (cpu_has(c, X86_FEATURE_IBRS_ENHANCED) &&
- !cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB) &&
- !(ia32_cap & ARCH_CAP_PBRSB_NO))
- setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
-
if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
return;

--
2.34.1


2023-01-24 16:36:20

by Kim Phillips

Subject: [PATCH v9 8/8] KVM: x86: Propagate the AMD Automatic IBRS feature to the guest

Add the AMD Automatic IBRS feature bit to those being propagated to the guest,
and enable the guest EFER bit.

Signed-off-by: Kim Phillips <[email protected]>
---
arch/x86/kvm/cpuid.c | 2 +-
arch/x86/kvm/svm/svm.c | 3 +++
arch/x86/kvm/x86.c | 3 +++
3 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 8519f4a993f7..f29d35c20c7e 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -743,7 +743,7 @@ void kvm_set_cpu_caps(void)

kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
F(NO_NESTED_DATA_BP) | F(LFENCE_RDTSC) | 0 /* SmmPgCfgLock */ |
- F(NULL_SEL_CLR_BASE) | 0 /* PrefetchCtlMsr */
+ F(NULL_SEL_CLR_BASE) | F(AUTOIBRS) | 0 /* PrefetchCtlMsr */
);
if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
kvm_cpu_cap_set(X86_FEATURE_NULL_SEL_CLR_BASE);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 9a194aa1a75a..60c7c880266b 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4969,6 +4969,9 @@ static __init int svm_hardware_setup(void)

tsc_aux_uret_slot = kvm_add_user_return_msr(MSR_TSC_AUX);

+ if (boot_cpu_has(X86_FEATURE_AUTOIBRS))
+ kvm_enable_efer_bits(EFER_AUTOIBRS);
+
/* Check for pause filtering support */
if (!boot_cpu_has(X86_FEATURE_PAUSEFILTER)) {
pause_filter_count = 0;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index da4bbd043a7b..8dd0cb230ef5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1685,6 +1685,9 @@ static int do_get_msr_feature(struct kvm_vcpu *vcpu, unsigned index, u64 *data)

static bool __kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer)
{
+ if (efer & EFER_AUTOIBRS && !guest_cpuid_has(vcpu, X86_FEATURE_AUTOIBRS))
+ return false;
+
if (efer & EFER_FFXSR && !guest_cpuid_has(vcpu, X86_FEATURE_FXSR_OPT))
return false;

--
2.34.1


2023-01-24 21:33:06

by Sean Christopherson

Subject: Re: [PATCH v9 4/8] x86/cpu, kvm: Move X86_FEATURE_LFENCE_RDTSC to its native leaf

On Tue, Jan 24, 2023, Kim Phillips wrote:
> The LFENCE always serializing feature bit was defined as scattered
> LFENCE_RDTSC and its native leaf bit position open-coded for KVM.
> Add it to its newly added CPUID leaf 0x80000021 EAX proper.
> With LFENCE_RDTSC in its proper place, the kernel's set_cpu_cap()
> will effectively synthesize the feature for KVM going forward.
>
> Drop the bit description comments now that the code is more self-describing.
>
> Also, in init_amd(), don't bother setting DE_CFG[1] anymore.
>
> Signed-off-by: Kim Phillips <[email protected]>
> ---
> arch/x86/include/asm/cpufeatures.h | 3 ++-
> arch/x86/kernel/cpu/amd.c | 2 +-
> arch/x86/kvm/cpuid.c | 5 +----
> 3 files changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index 4637fd7a84d6..e975822951b2 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -97,7 +97,7 @@
> #define X86_FEATURE_SYSENTER32 ( 3*32+15) /* "" sysenter in IA32 userspace */
> #define X86_FEATURE_REP_GOOD ( 3*32+16) /* REP microcode works well */
> #define X86_FEATURE_AMD_LBR_V2 ( 3*32+17) /* AMD Last Branch Record Extension Version 2 */
> -#define X86_FEATURE_LFENCE_RDTSC ( 3*32+18) /* "" LFENCE synchronizes RDTSC */
> +/* FREE, was #define X86_FEATURE_LFENCE_RDTSC ( 3*32+18) "" LFENCE synchronizes RDTSC */
> #define X86_FEATURE_ACC_POWER ( 3*32+19) /* AMD Accumulated Power Mechanism */
> #define X86_FEATURE_NOPL ( 3*32+20) /* The NOPL (0F 1F) instructions */
> #define X86_FEATURE_ALWAYS ( 3*32+21) /* "" Always-present feature */
> @@ -432,6 +432,7 @@
>
> /* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */
> #define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data Breakpoints */
> +#define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* "" LFENCE always serializing / synchronizes RDTSC */
>
> /*
> * BUG word(s)
> diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
> index f769d6d08b43..208c2ce8598a 100644
> --- a/arch/x86/kernel/cpu/amd.c
> +++ b/arch/x86/kernel/cpu/amd.c
> @@ -956,7 +956,7 @@ static void init_amd(struct cpuinfo_x86 *c)
>
> init_amd_cacheinfo(c);
>
> - if (cpu_has(c, X86_FEATURE_XMM2)) {
> + if (!cpu_has(c, X86_FEATURE_LFENCE_RDTSC) && cpu_has(c, X86_FEATURE_XMM2)) {
> /*
> * Use LFENCE for execution serialization. On families which
> * don't have that MSR, LFENCE is already serializing.
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 9764499acce2..448b5de98b8f 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -742,12 +742,9 @@ void kvm_set_cpu_caps(void)
> F(SME_COHERENT));
>
> kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
> - F(NO_NESTED_DATA_BP) |
> - BIT(2) /* LFENCE Always serializing */ | 0 /* SmmPgCfgLock */ |
> + F(NO_NESTED_DATA_BP) | F(LFENCE_RDTSC) | 0 /* SmmPgCfgLock */ |
> BIT(6) /* NULL_SEL_CLR_BASE */ | 0 /* PrefetchCtlMsr */
> );
> - if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
> - kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(2) /* LFENCE Always serializing */;

Gah, I was wrong. I lost track of the fact that kvm_cpu_cap_mask() does an
actual CPUID, i.e. the oddball code is necessary to manually synthesize the flag.

Boris, can you fold this in?

---
arch/x86/kvm/cpuid.c | 13 +++++++++++++
1 file changed, 13 insertions(+)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index fb32e084a40f..12455dc5afe5 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -745,6 +745,19 @@ void kvm_set_cpu_caps(void)
F(NO_NESTED_DATA_BP) | F(LFENCE_RDTSC) | 0 /* SmmPgCfgLock */ |
BIT(6) /* NULL_SEL_CLR_BASE */ | 0 /* PrefetchCtlMsr */
);
+
+ /*
+ * Synthesize "LFENCE is serializing" into the AMD-defined entry in
+ * KVM's supported CPUID if the feature is reported as supported by the
+ * kernel. LFENCE_RDTSC was a Linux-defined synthetic feature long
+ * before AMD joined the bandwagon, e.g. LFENCE is serializing on most
+ * CPUs that support SSE2. On CPUs that don't support AMD's leaf,
+ * kvm_cpu_cap_mask() will unfortunately drop the flag due to ANDing
+ * the mask with the raw host CPUID, and reporting support in AMD's
+ * leaf can make it easier for userspace to detect the feature.
+ */
+ if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
+ kvm_cpu_cap_set(X86_FEATURE_LFENCE_RDTSC);
if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(6) /* NULL_SEL_CLR_BASE */;
kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(9) /* NO_SMM_CTL_MSR */;

base-commit: f607476ee37397a72a2abb687bc170ce0bbec780
--

I.e. end up with this as of this patch:

kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
F(NO_NESTED_DATA_BP) | F(LFENCE_RDTSC) | 0 /* SmmPgCfgLock */ |
BIT(6) /* NULL_SEL_CLR_BASE */ | 0 /* PrefetchCtlMsr */
);

/*
* Synthesize "LFENCE is serializing" into the AMD-defined entry in
* KVM's supported CPUID if the feature is reported as supported by the
* kernel. LFENCE_RDTSC was a Linux-defined synthetic feature long
* before AMD joined the bandwagon, e.g. LFENCE is serializing on most
* CPUs that support SSE2. On CPUs that don't support AMD's leaf,
* kvm_cpu_cap_mask() will unfortunately drop the flag due to ANDing
* the mask with the raw host CPUID, and reporting support in AMD's
* leaf can make it easier for userspace to detect the feature.
*/
if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
kvm_cpu_cap_set(X86_FEATURE_LFENCE_RDTSC);
if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(6) /* NULL_SEL_CLR_BASE */;
kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(9) /* NO_SMM_CTL_MSR */;


2023-01-24 21:37:48

by Sean Christopherson

Subject: Re: [PATCH v9 0/8] x86/cpu, kvm: Support AMD Automatic IBRS

On Tue, Jan 24, 2023, Kim Phillips wrote:
> Kim Phillips (8):
> x86/cpu, kvm: Add support for CPUID_80000021_EAX
> KVM: x86: Move open-coded cpuid leaf 0x80000021 EAX bit propagation
> code
> x86/cpu, kvm: Add the NO_NESTED_DATA_BP feature
> x86/cpu, kvm: Move X86_FEATURE_LFENCE_RDTSC to its native leaf
> x86/cpu, kvm: Add the Null Selector Clears Base feature
> x86/cpu, kvm: Add the SMM_CTL MSR not present feature
> x86/cpu: Support AMD Automatic IBRS
> KVM: x86: Propagate the AMD Automatic IBRS feature to the guest
>
> Documentation/admin-guide/hw-vuln/spectre.rst | 6 ++--
> .../admin-guide/kernel-parameters.txt | 6 ++--
> arch/x86/include/asm/cpufeature.h | 7 +++--
> arch/x86/include/asm/cpufeatures.h | 11 ++++++--
> arch/x86/include/asm/disabled-features.h | 3 +-
> arch/x86/include/asm/msr-index.h | 2 ++
> arch/x86/include/asm/required-features.h | 3 +-
> arch/x86/kernel/cpu/amd.c | 2 +-
> arch/x86/kernel/cpu/bugs.c | 20 +++++++------
> arch/x86/kernel/cpu/common.c | 26 +++++++++--------
> arch/x86/kvm/cpuid.c | 28 ++++++-------------
> arch/x86/kvm/reverse_cpuid.h | 1 +
> arch/x86/kvm/svm/svm.c | 3 ++
> arch/x86/kvm/x86.c | 3 ++
> 14 files changed, 70 insertions(+), 51 deletions(-)
>
> --

With my goof in the LFENCE_RDTSC patch fixed,

Acked-by: Sean Christopherson <[email protected]>

2023-01-25 12:09:52

by Borislav Petkov

Subject: Re: [PATCH v9 4/8] x86/cpu, kvm: Move X86_FEATURE_LFENCE_RDTSC to its native leaf

On Tue, Jan 24, 2023 at 09:32:55PM +0000, Sean Christopherson wrote:
> Boris, can you fold this in?

Sure, see below.

---
From: Kim Phillips <[email protected]>
Date: Tue, 24 Jan 2023 10:33:15 -0600
Subject: [PATCH] x86/cpu, kvm: Move X86_FEATURE_LFENCE_RDTSC to its native leaf

The LFENCE always serializing feature bit was defined as scattered
LFENCE_RDTSC and its native leaf bit position open-coded for KVM. Add
it to its newly added CPUID leaf 0x80000021 EAX proper. With
LFENCE_RDTSC in its proper place, the kernel's set_cpu_cap() will
effectively synthesize the feature for KVM going forward.

Also, DE_CFG[1] doesn't need to be set on such CPUs anymore.

[ bp: Massage and merge diff from Sean. ]

Signed-off-by: Kim Phillips <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Acked-by: Sean Christopherson <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/include/asm/cpufeatures.h | 3 ++-
arch/x86/kernel/cpu/amd.c | 2 +-
arch/x86/kvm/cpuid.c | 16 +++++++++++++---
3 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 1b2d40a96b97..901128ed4c7a 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -97,7 +97,7 @@
#define X86_FEATURE_SYSENTER32 ( 3*32+15) /* "" sysenter in IA32 userspace */
#define X86_FEATURE_REP_GOOD ( 3*32+16) /* REP microcode works well */
#define X86_FEATURE_AMD_LBR_V2 ( 3*32+17) /* AMD Last Branch Record Extension Version 2 */
-#define X86_FEATURE_LFENCE_RDTSC ( 3*32+18) /* "" LFENCE synchronizes RDTSC */
+/* FREE, was #define X86_FEATURE_LFENCE_RDTSC ( 3*32+18) "" LFENCE synchronizes RDTSC */
#define X86_FEATURE_ACC_POWER ( 3*32+19) /* AMD Accumulated Power Mechanism */
#define X86_FEATURE_NOPL ( 3*32+20) /* The NOPL (0F 1F) instructions */
#define X86_FEATURE_ALWAYS ( 3*32+21) /* "" Always-present feature */
@@ -429,6 +429,7 @@

/* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */
#define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data Breakpoints */
+#define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* "" LFENCE always serializing / synchronizes RDTSC */

/*
* BUG word(s)
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index f769d6d08b43..208c2ce8598a 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -956,7 +956,7 @@ static void init_amd(struct cpuinfo_x86 *c)

init_amd_cacheinfo(c);

- if (cpu_has(c, X86_FEATURE_XMM2)) {
+ if (!cpu_has(c, X86_FEATURE_LFENCE_RDTSC) && cpu_has(c, X86_FEATURE_XMM2)) {
/*
* Use LFENCE for execution serialization. On families which
* don't have that MSR, LFENCE is already serializing.
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index aa3a6dc74e95..12455dc5afe5 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -742,12 +742,22 @@ void kvm_set_cpu_caps(void)
F(SME_COHERENT));

kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
- F(NO_NESTED_DATA_BP) |
- BIT(2) /* LFENCE Always serializing */ | 0 /* SmmPgCfgLock */ |
+ F(NO_NESTED_DATA_BP) | F(LFENCE_RDTSC) | 0 /* SmmPgCfgLock */ |
BIT(6) /* NULL_SEL_CLR_BASE */ | 0 /* PrefetchCtlMsr */
);
+
+ /*
+ * Synthesize "LFENCE is serializing" into the AMD-defined entry in
+ * KVM's supported CPUID if the feature is reported as supported by the
+ * kernel. LFENCE_RDTSC was a Linux-defined synthetic feature long
+ * before AMD joined the bandwagon, e.g. LFENCE is serializing on most
+ * CPUs that support SSE2. On CPUs that don't support AMD's leaf,
+ * kvm_cpu_cap_mask() will unfortunately drop the flag due to ANDing
+ * the mask with the raw host CPUID, and reporting support in AMD's
+ * leaf can make it easier for userspace to detect the feature.
+ */
if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
- kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(2) /* LFENCE Always serializing */;
+ kvm_cpu_cap_set(X86_FEATURE_LFENCE_RDTSC);
if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(6) /* NULL_SEL_CLR_BASE */;
kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(9) /* NO_SMM_CTL_MSR */;
--
2.35.1

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

Subject: [tip: x86/cpu] x86/cpu, kvm: Add the Null Selector Clears Base feature

The following commit has been merged into the x86/cpu branch of tip:

Commit-ID: 5b909d4ae59aedc711b7a432da021be0e82c95a0
Gitweb: https://git.kernel.org/tip/5b909d4ae59aedc711b7a432da021be0e82c95a0
Author: Kim Phillips <[email protected]>
AuthorDate: Tue, 24 Jan 2023 10:33:16 -06:00
Committer: Borislav Petkov (AMD) <[email protected]>
CommitterDate: Wed, 25 Jan 2023 16:25:46 +01:00

x86/cpu, kvm: Add the Null Selector Clears Base feature

The Null Selector Clears Base feature was being open-coded for KVM.
Add it to its newly added native CPUID leaf 0x80000021 EAX proper.

Also drop the bit description comments now that the code is more self-describing.

[ bp: Convert test in check_null_seg_clears_base() too. ]

Signed-off-by: Kim Phillips <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Acked-by: Sean Christopherson <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/kernel/cpu/common.c | 4 +---
arch/x86/kvm/cpuid.c | 4 ++--
3 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 901128e..6bed80c 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -430,6 +430,7 @@
/* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */
#define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data Breakpoints */
#define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* "" LFENCE always serializing / synchronizes RDTSC */
+#define X86_FEATURE_NULL_SEL_CLR_BASE (20*32+ 6) /* "" Null Selector Clears Base */

/*
* BUG word(s)
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index e6f3234..e6bf9b1 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1685,9 +1685,7 @@ void check_null_seg_clears_base(struct cpuinfo_x86 *c)
if (!IS_ENABLED(CONFIG_X86_64))
return;

- /* Zen3 CPUs advertise Null Selector Clears Base in CPUID. */
- if (c->extended_cpuid_level >= 0x80000021 &&
- cpuid_eax(0x80000021) & BIT(6))
+ if (cpu_has(c, X86_FEATURE_NULL_SEL_CLR_BASE))
return;

/*
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 12455dc..dde8d6b 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -743,7 +743,7 @@ void kvm_set_cpu_caps(void)

kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
F(NO_NESTED_DATA_BP) | F(LFENCE_RDTSC) | 0 /* SmmPgCfgLock */ |
- BIT(6) /* NULL_SEL_CLR_BASE */ | 0 /* PrefetchCtlMsr */
+ F(NULL_SEL_CLR_BASE) | 0 /* PrefetchCtlMsr */
);

/*
@@ -759,7 +759,7 @@ void kvm_set_cpu_caps(void)
if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
kvm_cpu_cap_set(X86_FEATURE_LFENCE_RDTSC);
if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
- kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(6) /* NULL_SEL_CLR_BASE */;
+ kvm_cpu_cap_set(X86_FEATURE_NULL_SEL_CLR_BASE);
kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(9) /* NO_SMM_CTL_MSR */;

kvm_cpu_cap_mask(CPUID_C000_0001_EDX,

Subject: [tip: x86/cpu] KVM: x86: Propagate the AMD Automatic IBRS feature to the guest

The following commit has been merged into the x86/cpu branch of tip:

Commit-ID: 8c19b6f257fa71ed3a7a9df6ce466c6be31ca04c
Gitweb: https://git.kernel.org/tip/8c19b6f257fa71ed3a7a9df6ce466c6be31ca04c
Author: Kim Phillips <[email protected]>
AuthorDate: Tue, 24 Jan 2023 10:33:19 -06:00
Committer: Borislav Petkov (AMD) <[email protected]>
CommitterDate: Wed, 25 Jan 2023 17:21:40 +01:00

KVM: x86: Propagate the AMD Automatic IBRS feature to the guest

Add the AMD Automatic IBRS feature bit to those being propagated to the guest,
and enable the guest EFER bit.

Signed-off-by: Kim Phillips <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Acked-by: Sean Christopherson <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/kvm/cpuid.c | 2 +-
arch/x86/kvm/svm/svm.c | 3 +++
arch/x86/kvm/x86.c | 3 +++
3 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 28071e9..f1f4fe8 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -743,7 +743,7 @@ void kvm_set_cpu_caps(void)

kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
F(NO_NESTED_DATA_BP) | F(LFENCE_RDTSC) | 0 /* SmmPgCfgLock */ |
- F(NULL_SEL_CLR_BASE) | 0 /* PrefetchCtlMsr */
+ F(NULL_SEL_CLR_BASE) | F(AUTOIBRS) | 0 /* PrefetchCtlMsr */
);

/*
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 9a194aa..60c7c88 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4969,6 +4969,9 @@ static __init int svm_hardware_setup(void)

tsc_aux_uret_slot = kvm_add_user_return_msr(MSR_TSC_AUX);

+ if (boot_cpu_has(X86_FEATURE_AUTOIBRS))
+ kvm_enable_efer_bits(EFER_AUTOIBRS);
+
/* Check for pause filtering support */
if (!boot_cpu_has(X86_FEATURE_PAUSEFILTER)) {
pause_filter_count = 0;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index da4bbd0..8dd0cb2 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1685,6 +1685,9 @@ static int do_get_msr_feature(struct kvm_vcpu *vcpu, unsigned index, u64 *data)

static bool __kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer)
{
+ if (efer & EFER_AUTOIBRS && !guest_cpuid_has(vcpu, X86_FEATURE_AUTOIBRS))
+ return false;
+
if (efer & EFER_FFXSR && !guest_cpuid_has(vcpu, X86_FEATURE_FXSR_OPT))
return false;


Subject: [tip: x86/cpu] x86/cpu, kvm: Add the SMM_CTL MSR not present feature

The following commit has been merged into the x86/cpu branch of tip:

Commit-ID: faabfcb194a8d0686396e3fff6a5b42911f65191
Gitweb: https://git.kernel.org/tip/faabfcb194a8d0686396e3fff6a5b42911f65191
Author: Kim Phillips <[email protected]>
AuthorDate: Tue, 24 Jan 2023 10:33:17 -06:00
Committer: Borislav Petkov (AMD) <[email protected]>
CommitterDate: Wed, 25 Jan 2023 16:37:20 +01:00

x86/cpu, kvm: Add the SMM_CTL MSR not present feature

The SMM_CTL MSR not present feature was being open-coded for KVM.
Add it to its newly added CPUID leaf 0x80000021 EAX proper.

Also drop the bit description comments now that the code is more
self-describing.

Signed-off-by: Kim Phillips <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Acked-by: Sean Christopherson <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/kvm/cpuid.c | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 6bed80c..86e98bd 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -431,6 +431,7 @@
#define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data Breakpoints */
#define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* "" LFENCE always serializing / synchronizes RDTSC */
#define X86_FEATURE_NULL_SEL_CLR_BASE (20*32+ 6) /* "" Null Selector Clears Base */
+#define X86_FEATURE_NO_SMM_CTL_MSR (20*32+ 9) /* "" SMM_CTL MSR is not present */

/*
* BUG word(s)
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index dde8d6b..28071e9 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -760,7 +760,7 @@ void kvm_set_cpu_caps(void)
kvm_cpu_cap_set(X86_FEATURE_LFENCE_RDTSC);
if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
kvm_cpu_cap_set(X86_FEATURE_NULL_SEL_CLR_BASE);
- kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(9) /* NO_SMM_CTL_MSR */;
+ kvm_cpu_cap_set(X86_FEATURE_NO_SMM_CTL_MSR);

kvm_cpu_cap_mask(CPUID_C000_0001_EDX,
F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |

Subject: [tip: x86/cpu] x86/cpu, kvm: Move X86_FEATURE_LFENCE_RDTSC to its native leaf

The following commit has been merged into the x86/cpu branch of tip:

Commit-ID: 84168ae786f8a15a7eb0f79d34f20b8d261ce2f5
Gitweb: https://git.kernel.org/tip/84168ae786f8a15a7eb0f79d34f20b8d261ce2f5
Author: Kim Phillips <[email protected]>
AuthorDate: Tue, 24 Jan 2023 10:33:15 -06:00
Committer: Borislav Petkov (AMD) <[email protected]>
CommitterDate: Wed, 25 Jan 2023 13:06:13 +01:00

x86/cpu, kvm: Move X86_FEATURE_LFENCE_RDTSC to its native leaf

The LFENCE always serializing feature bit was defined as scattered
LFENCE_RDTSC and its native leaf bit position open-coded for KVM. Add
it to its newly added CPUID leaf 0x80000021 EAX proper. With
LFENCE_RDTSC in its proper place, the kernel's set_cpu_cap() will
effectively synthesize the feature for KVM going forward.

Also, DE_CFG[1] doesn't need to be set on such CPUs anymore.

[ bp: Massage and merge diff from Sean. ]

Signed-off-by: Kim Phillips <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Acked-by: Sean Christopherson <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/include/asm/cpufeatures.h | 3 ++-
arch/x86/kernel/cpu/amd.c | 2 +-
arch/x86/kvm/cpuid.c | 16 +++++++++++++---
3 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 1b2d40a..901128e 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -97,7 +97,7 @@
#define X86_FEATURE_SYSENTER32 ( 3*32+15) /* "" sysenter in IA32 userspace */
#define X86_FEATURE_REP_GOOD ( 3*32+16) /* REP microcode works well */
#define X86_FEATURE_AMD_LBR_V2 ( 3*32+17) /* AMD Last Branch Record Extension Version 2 */
-#define X86_FEATURE_LFENCE_RDTSC ( 3*32+18) /* "" LFENCE synchronizes RDTSC */
+/* FREE, was #define X86_FEATURE_LFENCE_RDTSC ( 3*32+18) "" LFENCE synchronizes RDTSC */
#define X86_FEATURE_ACC_POWER ( 3*32+19) /* AMD Accumulated Power Mechanism */
#define X86_FEATURE_NOPL ( 3*32+20) /* The NOPL (0F 1F) instructions */
#define X86_FEATURE_ALWAYS ( 3*32+21) /* "" Always-present feature */
@@ -429,6 +429,7 @@

/* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */
#define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data Breakpoints */
+#define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* "" LFENCE always serializing / synchronizes RDTSC */

/*
* BUG word(s)
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index f769d6d..208c2ce 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -956,7 +956,7 @@ static void init_amd(struct cpuinfo_x86 *c)

init_amd_cacheinfo(c);

- if (cpu_has(c, X86_FEATURE_XMM2)) {
+ if (!cpu_has(c, X86_FEATURE_LFENCE_RDTSC) && cpu_has(c, X86_FEATURE_XMM2)) {
/*
* Use LFENCE for execution serialization. On families which
* don't have that MSR, LFENCE is already serializing.
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index aa3a6dc..12455dc 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -742,12 +742,22 @@ void kvm_set_cpu_caps(void)
F(SME_COHERENT));

kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
- F(NO_NESTED_DATA_BP) |
- BIT(2) /* LFENCE Always serializing */ | 0 /* SmmPgCfgLock */ |
+ F(NO_NESTED_DATA_BP) | F(LFENCE_RDTSC) | 0 /* SmmPgCfgLock */ |
BIT(6) /* NULL_SEL_CLR_BASE */ | 0 /* PrefetchCtlMsr */
);
+
+ /*
+ * Synthesize "LFENCE is serializing" into the AMD-defined entry in
+ * KVM's supported CPUID if the feature is reported as supported by the
+ * kernel. LFENCE_RDTSC was a Linux-defined synthetic feature long
+ * before AMD joined the bandwagon, e.g. LFENCE is serializing on most
+ * CPUs that support SSE2. On CPUs that don't support AMD's leaf,
+ * kvm_cpu_cap_mask() will unfortunately drop the flag due to ANDing
+ * the mask with the raw host CPUID, and reporting support in AMD's
+ * leaf can make it easier for userspace to detect the feature.
+ */
if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
- kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(2) /* LFENCE Always serializing */;
+ kvm_cpu_cap_set(X86_FEATURE_LFENCE_RDTSC);
if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(6) /* NULL_SEL_CLR_BASE */;
kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(9) /* NO_SMM_CTL_MSR */;

Subject: [tip: x86/cpu] x86/cpu: Support AMD Automatic IBRS

The following commit has been merged into the x86/cpu branch of tip:

Commit-ID: e7862eda309ecfccc36bb5558d937ed3ace07f3f
Gitweb: https://git.kernel.org/tip/e7862eda309ecfccc36bb5558d937ed3ace07f3f
Author: Kim Phillips <[email protected]>
AuthorDate: Tue, 24 Jan 2023 10:33:18 -06:00
Committer: Borislav Petkov (AMD) <[email protected]>
CommitterDate: Wed, 25 Jan 2023 17:16:01 +01:00

x86/cpu: Support AMD Automatic IBRS

The AMD Zen4 core supports a new feature called Automatic IBRS.

It is a "set-and-forget" feature that means that, like Intel's Enhanced IBRS,
h/w manages its IBRS mitigation resources automatically across CPL transitions.

The feature is advertised by CPUID_Fn80000021_EAX bit 8 and is enabled by
setting MSR C000_0080 (EFER) bit 21.

Enable Automatic IBRS by default if the CPU feature is present. It typically
provides greater performance than the incumbent generic retpolines mitigation.

Reuse the SPECTRE_V2_EIBRS spectre_v2_mitigation enum. AMD Automatic IBRS and
Intel Enhanced IBRS have similar enablement. Add NO_EIBRS_PBRSB to
cpu_vuln_whitelist, since AMD Automatic IBRS isn't affected by PBRSB-eIBRS.

The kernel command line option spectre_v2=eibrs is used to select AMD Automatic
IBRS, if available.

Signed-off-by: Kim Phillips <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Acked-by: Sean Christopherson <[email protected]>
Acked-by: Dave Hansen <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
Documentation/admin-guide/hw-vuln/spectre.rst | 6 ++---
Documentation/admin-guide/kernel-parameters.txt | 6 ++---
arch/x86/include/asm/cpufeatures.h | 1 +-
arch/x86/include/asm/msr-index.h | 2 ++-
arch/x86/kernel/cpu/bugs.c | 20 +++++++++-------
arch/x86/kernel/cpu/common.c | 19 ++++++++-------
6 files changed, 32 insertions(+), 22 deletions(-)

diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
index c4dcdb3..3fe6511 100644
--- a/Documentation/admin-guide/hw-vuln/spectre.rst
+++ b/Documentation/admin-guide/hw-vuln/spectre.rst
@@ -610,9 +610,9 @@ kernel command line.
retpoline,generic Retpolines
retpoline,lfence LFENCE; indirect branch
retpoline,amd alias for retpoline,lfence
- eibrs enhanced IBRS
- eibrs,retpoline enhanced IBRS + Retpolines
- eibrs,lfence enhanced IBRS + LFENCE
+ eibrs Enhanced/Auto IBRS
+ eibrs,retpoline Enhanced/Auto IBRS + Retpolines
+ eibrs,lfence Enhanced/Auto IBRS + LFENCE
ibrs use IBRS to protect kernel

Not specifying this option is equivalent to
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 6cfa6e3..839fa0f 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5729,9 +5729,9 @@
retpoline,generic - Retpolines
retpoline,lfence - LFENCE; indirect branch
retpoline,amd - alias for retpoline,lfence
- eibrs - enhanced IBRS
- eibrs,retpoline - enhanced IBRS + Retpolines
- eibrs,lfence - enhanced IBRS + LFENCE
+ eibrs - Enhanced/Auto IBRS
+ eibrs,retpoline - Enhanced/Auto IBRS + Retpolines
+ eibrs,lfence - Enhanced/Auto IBRS + LFENCE
ibrs - use IBRS to protect kernel

Not specifying this option is equivalent to
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 86e98bd..06909dc 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -431,6 +431,7 @@
#define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data Breakpoints */
#define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* "" LFENCE always serializing / synchronizes RDTSC */
#define X86_FEATURE_NULL_SEL_CLR_BASE (20*32+ 6) /* "" Null Selector Clears Base */
+#define X86_FEATURE_AUTOIBRS (20*32+ 8) /* "" Automatic IBRS */
#define X86_FEATURE_NO_SMM_CTL_MSR (20*32+ 9) /* "" SMM_CTL MSR is not present */

/*
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index cb359d6..617b29a 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -25,6 +25,7 @@
#define _EFER_SVME 12 /* Enable virtualization */
#define _EFER_LMSLE 13 /* Long Mode Segment Limit Enable */
#define _EFER_FFXSR 14 /* Enable Fast FXSAVE/FXRSTOR */
+#define _EFER_AUTOIBRS 21 /* Enable Automatic IBRS */

#define EFER_SCE (1<<_EFER_SCE)
#define EFER_LME (1<<_EFER_LME)
@@ -33,6 +34,7 @@
#define EFER_SVME (1<<_EFER_SVME)
#define EFER_LMSLE (1<<_EFER_LMSLE)
#define EFER_FFXSR (1<<_EFER_FFXSR)
+#define EFER_AUTOIBRS (1<<_EFER_AUTOIBRS)

/* Intel MSRs. Some also available on other CPUs */

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 5f33704..b41486a 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1238,9 +1238,9 @@ static const char * const spectre_v2_strings[] = {
[SPECTRE_V2_NONE] = "Vulnerable",
[SPECTRE_V2_RETPOLINE] = "Mitigation: Retpolines",
[SPECTRE_V2_LFENCE] = "Mitigation: LFENCE",
- [SPECTRE_V2_EIBRS] = "Mitigation: Enhanced IBRS",
- [SPECTRE_V2_EIBRS_LFENCE] = "Mitigation: Enhanced IBRS + LFENCE",
- [SPECTRE_V2_EIBRS_RETPOLINE] = "Mitigation: Enhanced IBRS + Retpolines",
+ [SPECTRE_V2_EIBRS] = "Mitigation: Enhanced / Automatic IBRS",
+ [SPECTRE_V2_EIBRS_LFENCE] = "Mitigation: Enhanced / Automatic IBRS + LFENCE",
+ [SPECTRE_V2_EIBRS_RETPOLINE] = "Mitigation: Enhanced / Automatic IBRS + Retpolines",
[SPECTRE_V2_IBRS] = "Mitigation: IBRS",
};

@@ -1309,7 +1309,7 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
cmd == SPECTRE_V2_CMD_EIBRS_LFENCE ||
cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) &&
!boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
- pr_err("%s selected but CPU doesn't have eIBRS. Switching to AUTO select\n",
+ pr_err("%s selected but CPU doesn't have Enhanced or Automatic IBRS. Switching to AUTO select\n",
mitigation_options[i].option);
return SPECTRE_V2_CMD_AUTO;
}
@@ -1495,8 +1495,12 @@ static void __init spectre_v2_select_mitigation(void)
pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);

if (spectre_v2_in_ibrs_mode(mode)) {
- x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
- update_spec_ctrl(x86_spec_ctrl_base);
+ if (boot_cpu_has(X86_FEATURE_AUTOIBRS)) {
+ msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);
+ } else {
+ x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
+ update_spec_ctrl(x86_spec_ctrl_base);
+ }
}

switch (mode) {
@@ -1580,8 +1584,8 @@ static void __init spectre_v2_select_mitigation(void)
/*
* Retpoline protects the kernel, but doesn't protect firmware. IBRS
* and Enhanced IBRS protect firmware too, so enable IBRS around
- * firmware calls only when IBRS / Enhanced IBRS aren't otherwise
- * enabled.
+ * firmware calls only when IBRS / Enhanced / Automatic IBRS aren't
+ * otherwise enabled.
*
* Use "mode" to check Enhanced IBRS instead of boot_cpu_has(), because
* the user might select retpoline on the kernel command line and if
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index e6bf9b1..62c73c5 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1229,8 +1229,8 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
VULNWL_AMD(0x12, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),

/* FAMILY_ANY must be last, otherwise 0x0f - 0x12 matches won't work */
- VULNWL_AMD(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
- VULNWL_HYGON(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
+ VULNWL_AMD(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB),
+ VULNWL_HYGON(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB),

/* Zhaoxin Family 7 */
VULNWL(CENTAUR, 7, X86_MODEL_ANY, NO_SPECTRE_V2 | NO_SWAPGS | NO_MMIO),
@@ -1341,8 +1341,16 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
!cpu_has(c, X86_FEATURE_AMD_SSB_NO))
setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);

- if (ia32_cap & ARCH_CAP_IBRS_ALL)
+ /*
+ * AMD's AutoIBRS is equivalent to Intel's eIBRS - use the Intel feature
+ * flag and protect from vendor-specific bugs via the whitelist.
+ */
+ if ((ia32_cap & ARCH_CAP_IBRS_ALL) || cpu_has(c, X86_FEATURE_AUTOIBRS)) {
setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
+ if (!cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB) &&
+ !(ia32_cap & ARCH_CAP_PBRSB_NO))
+ setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
+ }

if (!cpu_matches(cpu_vuln_whitelist, NO_MDS) &&
!(ia32_cap & ARCH_CAP_MDS_NO)) {
@@ -1404,11 +1412,6 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
setup_force_cpu_bug(X86_BUG_RETBLEED);
}

- if (cpu_has(c, X86_FEATURE_IBRS_ENHANCED) &&
- !cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB) &&
- !(ia32_cap & ARCH_CAP_PBRSB_NO))
- setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
-
if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
return;


Subject: [tip: x86/cpu] x86/cpu, kvm: Add the NO_NESTED_DATA_BP feature

The following commit has been merged into the x86/cpu branch of tip:

Commit-ID: a9dc9ec5a1fafc3d2fe7a7b594eefaeaccf89a6b
Gitweb: https://git.kernel.org/tip/a9dc9ec5a1fafc3d2fe7a7b594eefaeaccf89a6b
Author: Kim Phillips <[email protected]>
AuthorDate: Tue, 24 Jan 2023 10:33:14 -06:00
Committer: Borislav Petkov (AMD) <[email protected]>
CommitterDate: Wed, 25 Jan 2023 12:36:34 +01:00

x86/cpu, kvm: Add the NO_NESTED_DATA_BP feature

The "Processor ignores nested data breakpoints" feature was being
open-coded for KVM. Add the feature to its newly introduced CPUID leaf
0x80000021 EAX proper.

Signed-off-by: Kim Phillips <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Acked-by: Sean Christopherson <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/include/asm/cpufeatures.h | 3 +++
arch/x86/kvm/cpuid.c | 2 +-
2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index b890058..1b2d40a 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -427,6 +427,9 @@
#define X86_FEATURE_V_TSC_AUX (19*32+ 9) /* "" Virtual TSC_AUX */
#define X86_FEATURE_SME_COHERENT (19*32+10) /* "" AMD hardware-enforced cache coherency */

+/* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */
+#define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data Breakpoints */
+
/*
* BUG word(s)
*/
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index f3edc35..aa3a6dc 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -742,7 +742,7 @@ void kvm_set_cpu_caps(void)
F(SME_COHERENT));

kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
- BIT(0) /* NO_NESTED_DATA_BP */ |
+ F(NO_NESTED_DATA_BP) |
BIT(2) /* LFENCE Always serializing */ | 0 /* SmmPgCfgLock */ |
BIT(6) /* NULL_SEL_CLR_BASE */ | 0 /* PrefetchCtlMsr */
);

Subject: [tip: x86/cpu] KVM: x86: Move open-coded CPUID leaf 0x80000021 EAX bit propagation code

The following commit has been merged into the x86/cpu branch of tip:

Commit-ID: c35ac8c4bf600ee23bacb20f863aa7830efb23fb
Gitweb: https://git.kernel.org/tip/c35ac8c4bf600ee23bacb20f863aa7830efb23fb
Author: Kim Phillips <[email protected]>
AuthorDate: Tue, 24 Jan 2023 10:33:13 -06:00
Committer: Borislav Petkov (AMD) <[email protected]>
CommitterDate: Wed, 25 Jan 2023 12:33:13 +01:00

KVM: x86: Move open-coded CPUID leaf 0x80000021 EAX bit propagation code

Move code from __do_cpuid_func() to kvm_set_cpu_caps() in preparation for adding
the features in their native leaf.

Also drop the bit description comments as the code will be more self-describing
once the individual features are added.

Whilst there, switch to using the more efficient cpu_feature_enabled() instead
of static_cpu_has().

Note, LFENCE_RDTSC and "NULL selector clears base" are currently synthetic,
Linux-defined feature flags as Linux tracking of the features predates AMD's
definition. Keep the manual propagation of the flags from their synthetic
counterparts until the kernel fully converts to AMD's definition, otherwise KVM
would stop synthesizing the flags as intended.

Signed-off-by: Kim Phillips <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Acked-by: Sean Christopherson <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/kvm/cpuid.c | 31 ++++++++++++-------------------
1 file changed, 12 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index b14653b..f3edc35 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -741,6 +741,17 @@ void kvm_set_cpu_caps(void)
0 /* SME */ | F(SEV) | 0 /* VM_PAGE_FLUSH */ | F(SEV_ES) |
F(SME_COHERENT));

+ kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
+ BIT(0) /* NO_NESTED_DATA_BP */ |
+ BIT(2) /* LFENCE Always serializing */ | 0 /* SmmPgCfgLock */ |
+ BIT(6) /* NULL_SEL_CLR_BASE */ | 0 /* PrefetchCtlMsr */
+ );
+ if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
+ kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(2) /* LFENCE Always serializing */;
+ if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
+ kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(6) /* NULL_SEL_CLR_BASE */;
+ kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(9) /* NO_SMM_CTL_MSR */;
+
kvm_cpu_cap_mask(CPUID_C000_0001_EDX,
F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |
F(ACE2) | F(ACE2_EN) | F(PHE) | F(PHE_EN) |
@@ -1222,25 +1233,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
break;
case 0x80000021:
entry->ebx = entry->ecx = entry->edx = 0;
- /*
- * Pass down these bits:
- * EAX 0 NNDBP, Processor ignores nested data breakpoints
- * EAX 2 LAS, LFENCE always serializing
- * EAX 6 NSCB, Null selector clear base
- *
- * Other defined bits are for MSRs that KVM does not expose:
- * EAX 3 SPCL, SMM page configuration lock
- * EAX 13 PCMSR, Prefetch control MSR
- *
- * KVM doesn't support SMM_CTL.
- * EAX 9 SMM_CTL MSR is not supported
- */
- entry->eax &= BIT(0) | BIT(2) | BIT(6);
- entry->eax |= BIT(9);
- if (static_cpu_has(X86_FEATURE_LFENCE_RDTSC))
- entry->eax |= BIT(2);
- if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
- entry->eax |= BIT(6);
+ cpuid_entry_override(entry, CPUID_8000_0021_EAX);
break;
/*Add support for Centaur's CPUID instruction*/
case 0xC0000000:

Subject: [tip: x86/cpu] x86/cpu, kvm: Add support for CPUID_80000021_EAX

The following commit has been merged into the x86/cpu branch of tip:

Commit-ID: 8415a74852d7c24795007ee9862d25feb519007c
Gitweb: https://git.kernel.org/tip/8415a74852d7c24795007ee9862d25feb519007c
Author: Kim Phillips <[email protected]>
AuthorDate: Tue, 10 Jan 2023 16:46:37 -06:00
Committer: Borislav Petkov (AMD) <[email protected]>
CommitterDate: Wed, 25 Jan 2023 12:33:06 +01:00

x86/cpu, kvm: Add support for CPUID_80000021_EAX

Add support for CPUID leaf 0x80000021, EAX. The majority of the features will
be used in the kernel and thus a separate leaf is appropriate.

Include KVM's reverse_cpuid entry because features are used by KVM guests, too.

[ bp: Massage commit message. ]

Signed-off-by: Kim Phillips <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Acked-by: Sean Christopherson <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/include/asm/cpufeature.h | 7 +++++--
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/disabled-features.h | 3 ++-
arch/x86/include/asm/required-features.h | 3 ++-
arch/x86/kernel/cpu/common.c | 3 +++
arch/x86/kvm/reverse_cpuid.h | 1 +
6 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index 1a85e1f..ce0c8f7 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -32,6 +32,7 @@ enum cpuid_leafs
CPUID_8000_0007_EBX,
CPUID_7_EDX,
CPUID_8000_001F_EAX,
+ CPUID_8000_0021_EAX,
};

#define X86_CAP_FMT_NUM "%d:%d"
@@ -94,8 +95,9 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 17, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 18, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 19, feature_bit) || \
+ CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 20, feature_bit) || \
REQUIRED_MASK_CHECK || \
- BUILD_BUG_ON_ZERO(NCAPINTS != 20))
+ BUILD_BUG_ON_ZERO(NCAPINTS != 21))

#define DISABLED_MASK_BIT_SET(feature_bit) \
( CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 0, feature_bit) || \
@@ -118,8 +120,9 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 17, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 18, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 19, feature_bit) || \
+ CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 20, feature_bit) || \
DISABLED_MASK_CHECK || \
- BUILD_BUG_ON_ZERO(NCAPINTS != 20))
+ BUILD_BUG_ON_ZERO(NCAPINTS != 21))

#define cpu_has(c, bit) \
(__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 : \
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index b70111a..b890058 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -13,7 +13,7 @@
/*
* Defines x86 CPU feature bits
*/
-#define NCAPINTS 20 /* N 32-bit words worth of info */
+#define NCAPINTS 21 /* N 32-bit words worth of info */
#define NBUGINTS 1 /* N 32-bit bug flags */

/*
diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
index c44b56f..5dfa4fb 100644
--- a/arch/x86/include/asm/disabled-features.h
+++ b/arch/x86/include/asm/disabled-features.h
@@ -124,6 +124,7 @@
#define DISABLED_MASK17 0
#define DISABLED_MASK18 0
#define DISABLED_MASK19 0
-#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 20)
+#define DISABLED_MASK20 0
+#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 21)

#endif /* _ASM_X86_DISABLED_FEATURES_H */
diff --git a/arch/x86/include/asm/required-features.h b/arch/x86/include/asm/required-features.h
index aff7747..7ba1726 100644
--- a/arch/x86/include/asm/required-features.h
+++ b/arch/x86/include/asm/required-features.h
@@ -98,6 +98,7 @@
#define REQUIRED_MASK17 0
#define REQUIRED_MASK18 0
#define REQUIRED_MASK19 0
-#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 20)
+#define REQUIRED_MASK20 0
+#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 21)

#endif /* _ASM_X86_REQUIRED_FEATURES_H */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index b7ac85a..e6f3234 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1093,6 +1093,9 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
if (c->extended_cpuid_level >= 0x8000001f)
c->x86_capability[CPUID_8000_001F_EAX] = cpuid_eax(0x8000001f);

+ if (c->extended_cpuid_level >= 0x80000021)
+ c->x86_capability[CPUID_8000_0021_EAX] = cpuid_eax(0x80000021);
+
init_scattered_cpuid_features(c);
init_speculation_control(c);

diff --git a/arch/x86/kvm/reverse_cpuid.h b/arch/x86/kvm/reverse_cpuid.h
index 042d0ac..81f4e9c 100644
--- a/arch/x86/kvm/reverse_cpuid.h
+++ b/arch/x86/kvm/reverse_cpuid.h
@@ -68,6 +68,7 @@ static const struct cpuid_reg reverse_cpuid[] = {
[CPUID_12_EAX] = {0x00000012, 0, CPUID_EAX},
[CPUID_8000_001F_EAX] = {0x8000001f, 0, CPUID_EAX},
[CPUID_7_1_EDX] = { 7, 1, CPUID_EDX},
+ [CPUID_8000_0021_EAX] = {0x80000021, 0, CPUID_EAX},
};

/*

2023-02-24 18:53:06

by Josh Poimboeuf

[permalink] [raw]
Subject: Re: [PATCH v9 7/8] x86/cpu: Support AMD Automatic IBRS

On Tue, Jan 24, 2023 at 10:33:18AM -0600, Kim Phillips wrote:
> @@ -1495,8 +1495,12 @@ static void __init spectre_v2_select_mitigation(void)
> pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
>
> if (spectre_v2_in_ibrs_mode(mode)) {
> - x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
> - update_spec_ctrl(x86_spec_ctrl_base);
> + if (boot_cpu_has(X86_FEATURE_AUTOIBRS)) {
> + msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);

Doesn't this only enable it on the boot CPU?

--
Josh

2023-02-24 21:08:45

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v9 7/8] x86/cpu: Support AMD Automatic IBRS

On Fri, Feb 24, 2023 at 10:52:57AM -0800, Josh Poimboeuf wrote:
> Doesn't this only enable it on the boot CPU?

Whoops, you might be right.

Lemme fix it.

Thx!

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-02-24 21:35:34

by Josh Poimboeuf

[permalink] [raw]
Subject: Re: [PATCH v9 7/8] x86/cpu: Support AMD Automatic IBRS

On Fri, Feb 24, 2023 at 10:08:32PM +0100, Borislav Petkov wrote:
> On Fri, Feb 24, 2023 at 10:52:57AM -0800, Josh Poimboeuf wrote:
> > Doesn't this only enable it on the boot CPU?
>
> Whoops, you might be right.
>
> Lemme fix it.
>
> Thx!

BTW, I wasn't copied on the patch set, despite having dedicated years of
my life to that file ;-)

Can we add bugs.c and friends to MAINTAINERS?

---8<---

From: Josh Poimboeuf <[email protected]>
Subject: [PATCH] MAINTAINERS: Add x86 hardware vulnerabilities section

Signed-off-by: Josh Poimboeuf <[email protected]>
---
MAINTAINERS | 10 ++++++++++
1 file changed, 10 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index eb6f650c6c0b..338dc7469f80 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -22553,6 +22553,16 @@ S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/asm
F: arch/x86/entry/

+X86 HARDWARE VULNERABILITIES
+M: Thomas Gleixner <[email protected]>
+M: Borislav Petkov <[email protected]>
+M: Peter Zijlstra <[email protected]>
+M: Josh Poimboeuf <[email protected]>
+S: Maintained
+F: Documentation/admin-guide/hw-vuln/
+F: arch/x86/include/asm/nospec-branch.h
+F: arch/x86/kernel/cpu/bugs.c
+
X86 MCE INFRASTRUCTURE
M: Tony Luck <[email protected]>
M: Borislav Petkov <[email protected]>
--
2.39.1


2023-02-24 21:59:18

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v9 7/8] x86/cpu: Support AMD Automatic IBRS

On Fri, Feb 24, 2023 at 01:35:22PM -0800, Josh Poimboeuf wrote:
> BTW, I wasn't copied on the patch set, despite having dedicated years of
> my life to that file ;-)

... and yet, even after all that pain, you're still willing to
self-inflict moar. :-P

> Can we add bugs.c and friends to MAINTAINERS?

Sure, might as well.

Acked-by: Borislav Petkov (AMD) <[email protected]>

I'll queue it after the MW is over.

Thx.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-02-24 22:03:28

by Luck, Tony

[permalink] [raw]
Subject: RE: [PATCH v9 7/8] x86/cpu: Support AMD Automatic IBRS

>> Can we add bugs.c and friends to MAINTAINERS?
>
> Sure, might as well.
>
> Acked-by: Borislav Petkov (AMD) <[email protected]>
>
> I'll queue it after the MW is over.

Should also include Pawan as another unfortunate soul sucked
into keeping that file up to date with the latest wreckage. If not
as "M", at least as "R":

R: Pawan Gupta <[email protected]>

-Tony


2023-02-24 22:12:39

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v9 7/8] x86/cpu: Support AMD Automatic IBRS

On Fri, Feb 24, 2023 at 10:03:16PM +0000, Luck, Tony wrote:
> Should also include Pawan as another unfortunate soul sucked
> into keeping that file up to date with the latest wreckage. If not
> as "M", at least as "R":
>
> R: Pawan Gupta <[email protected]>

We probably should hear from him before you offer his soul into the
purgatory of hardware speculation.

:-P

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-02-24 22:51:36

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v9 7/8] x86/cpu: Support AMD Automatic IBRS

On Fri, Feb 24, 2023 at 10:08:32PM +0100, Borislav Petkov wrote:
> On Fri, Feb 24, 2023 at 10:52:57AM -0800, Josh Poimboeuf wrote:
> > Doesn't this only enable it on the boot CPU?
>
> Whoops, you might be right.

Actually, we stick that MSR - EFER - into the trampoline header and then
each AP gets it written to in arch/x86/realmode/rm/trampoline_64.S

But this is only from code staring - I'll confirm this tomorrow.

And if so, we should at least put comments in that trampoline code so
that people do not remove the MSR writes.

Or, actually, we should simply write it again because it is the init
path and not really a hot path but it should damn well make sure that
that bit gets set.

Thx.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-02-24 23:23:14

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v9 7/8] x86/cpu: Support AMD Automatic IBRS

On Fri, Feb 24, 2023 at 11:51:17PM +0100, Borislav Petkov wrote:
> Or, actually, we should simply write it again because it is the init
> path and not really a hot path but it should damn well make sure that
> that bit gets set.

Yeah, we have this fancy msr_set_bit() interface which saves us the MSR
write when not needed. And it also tells us that. :-)

So we can do:

diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 380753b14cab..2aa089aa23db 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -996,6 +996,12 @@ static void init_amd(struct cpuinfo_x86 *c)
msr_set_bit(MSR_K7_HWCR, MSR_K7_HWCR_IRPERF_EN_BIT);

check_null_seg_clears_base(c);
+
+ if (cpu_has(c, X86_FEATURE_AUTOIBRS)) {
+ int ret = msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);
+
+ pr_info("%s: CPU%d, ret: %d\n", __func__, smp_processor_id(), ret);
+ }
}

#ifdef CONFIG_X86_32

---

and the output looks like this:

[ 3.046607] x86: Booting SMP configuration:
[ 3.046609] .... node #0, CPUs: #1
[ 2.874768] init_amd: CPU1, ret: 0
[ 3.046873] #2
[ 2.874768] init_amd: CPU2, ret: 0
[ 3.049155] #3
[ 2.874768] init_amd: CPU3, ret: 0
[ 3.050834] #4
[ 2.874768] init_amd: CPU4, ret: 0
...

which says that the bit was already set - which confirms the
trampoline setting thing.

And doing the write again serves as a guard in case we decide in the future
not to set EFER anymore - I doubt it - but we can't allow ourselves to
not set the AutoIBRS bit, so one more RDMSR on init doesn't matter.
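
For anyone following along, the return-value contract being relied on here
is roughly the following - a sketch of how msr_set_bit() behaves as used in
this thread, not the helper's actual implementation:

	/*
	 * ret < 0  - the RDMSR/WRMSR itself failed
	 * ret == 0 - the bit was already set, no write was issued
	 * ret > 0  - the bit was clear and has been set now
	 */
	int ret = msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);

	if (ret < 0)
		pr_err("EFER[AIBRSE]: MSR access failed\n");
	else if (!ret)
		pr_info("EFER[AIBRSE] already set (by the trampoline)\n");
	else
		pr_info("EFER[AIBRSE] was clear, set it now\n");

which is why the "ret: 0" lines in the log above confirm that the trampoline
had already done the job.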

Proper patch tomorrow.

Thx.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-02-24 23:30:23

by Pawan Gupta

[permalink] [raw]
Subject: Re: [PATCH v9 7/8] x86/cpu: Support AMD Automatic IBRS

On Fri, Feb 24, 2023 at 11:12:29PM +0100, Borislav Petkov wrote:
> On Fri, Feb 24, 2023 at 10:03:16PM +0000, Luck, Tony wrote:
> > Should also include Pawan as another unfortunate soul sucked
> > into keeping that file up to date with the latest wreckage. If not
> > as "M", at least as "R":
> >
> > R: Pawan Gupta <[email protected]>
>
> We probably should hear from him before you offer his soul into the
> purgatory of hardware speculation.

I will be happy to review what I can.

Soulfully yours,
Pawan

2023-02-25 00:09:46

by Josh Poimboeuf

[permalink] [raw]
Subject: Re: [PATCH v9 7/8] x86/cpu: Support AMD Automatic IBRS

On Fri, Feb 24, 2023 at 11:51:17PM +0100, Borislav Petkov wrote:
> On Fri, Feb 24, 2023 at 10:08:32PM +0100, Borislav Petkov wrote:
> > On Fri, Feb 24, 2023 at 10:52:57AM -0800, Josh Poimboeuf wrote:
> > > Doesn't this only enable it on the boot CPU?
> >
> > Whoops, you might be right.
>
> Actually, we stick that MSR - EFER - into the trampoline header and then
> each AP gets it written to in arch/x86/realmode/rm/trampoline_64.S
>
> But this is only from code staring - I'll confirm this tomorrow.

Ah, I had to stare at that for a bit to figure out how it works.
setup_real_mode() reads MSR_EFER from the boot CPU and stores it in
trampoline_header->efer. Then the other CPUs read that stored value in
startup_32() and write it into their MSR.
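
In other words, the replication path is roughly this - a pseudocode sketch
of the mechanism described above, not the actual code, which lives in
arch/x86/realmode/init.c and arch/x86/realmode/rm/trampoline_64.S:

	/* BSP, during setup_real_mode(): stash its EFER for the APs */
	u64 efer;

	rdmsrl(MSR_EFER, efer);
	trampoline_header->efer = efer;	/* includes AIBRSE if already set */

	/* each AP, conceptually, in the trampoline's startup_32 path: */
	wrmsrl(MSR_EFER, trampoline_header->efer);	/* inherit BSP's EFER */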

> And if so, we should at least put comments in that trampoline code so
> that people do not remove the MSR writes.
>
> Or, actually, we should simply write it again because it is the init
> path and not really a hot path but it should damn well make sure that
> that bit gets set.

Yeah, I think that would be good. Otherwise it's rather magical. That
EFER MSR is a surprising place to put that bit.

--
Josh

2023-02-25 00:20:36

by Borislav Petkov

[permalink] [raw]
Subject: [PATCH] x86/CPU/AMD: Make sure EFER[AIBRSE] is set

On Fri, Feb 24, 2023 at 04:09:31PM -0800, Josh Poimboeuf wrote:
> Ah, I had to stare at that for a bit to figure out how it works.

Yeah, it is a bit "hidden". :)

> setup_real_mode() reads MSR_EFER from the boot CPU and stores it in
> trampoline_header->efer. Then the other CPUs read that stored value in
> startup_32() and write it into their MSR.

Exactly.

> Yeah, I think that would be good. Otherwise it's rather magical.

Yap, see below.

> That EFER MSR is a surprising place to put that bit.

That MSR is very important on AMD. Consider it AMD's CR4. :-)

Thx.

---
From: "Borislav Petkov (AMD)" <[email protected]>
Date: Sat, 25 Feb 2023 01:11:31 +0100
Subject: [PATCH] x86/CPU/AMD: Make sure EFER[AIBRSE] is set

The AutoIBRS bit gets set only on the BSP as part of determining which
mitigation to enable on AMD. Setting it on the APs relies on the
circumstance that the APs get booted through the trampoline and EFER
- the MSR which contains that bit - gets replicated on every AP from the
BSP.

However, this can change in the future and considering the security
implications of this bit not being set on every CPU, make sure it is set
by verifying EFER later in the boot process and on every AP.

Reported-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/20230224185257.o3mcmloei5zqu7wa@treble
---
arch/x86/kernel/cpu/amd.c | 10 ++++++++++
1 file changed, 10 insertions(+)

diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 380753b14cab..de624c1442c2 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -996,6 +996,16 @@ static void init_amd(struct cpuinfo_x86 *c)
msr_set_bit(MSR_K7_HWCR, MSR_K7_HWCR_IRPERF_EN_BIT);

check_null_seg_clears_base(c);
+
+ /*
+ * Make sure EFER[AIBRSE - Automatic IBRS Enable] is set. The APs are brought up
+ * using the trampoline code and as part of it, EFER gets prepared there in order
+ * to be replicated onto them. Regardless, set it here again, if not set, to protect
+ * against any future refactoring/code reorganization which might miss setting
+ * this important bit.
+ */
+ if (cpu_has(c, X86_FEATURE_AUTOIBRS))
+ msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);
}

#ifdef CONFIG_X86_32
--
2.35.1



--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-02-25 00:52:34

by Pawan Gupta

[permalink] [raw]
Subject: Re: [PATCH] x86/CPU/AMD: Make sure EFER[AIBRSE] is set

On Sat, Feb 25, 2023 at 01:20:24AM +0100, Borislav Petkov wrote:
> On Fri, Feb 24, 2023 at 04:09:31PM -0800, Josh Poimboeuf wrote:
> > Ah, I had to stare at that for a bit to figure out how it works.
>
> Yeah, it is a bit "hidden". :)
>
> > setup_real_mode() reads MSR_EFER from the boot CPU and stores it in
> > trampoline_header->efer. Then the other CPUs read that stored value in
> > startup_32() and write it into their MSR.
>
> Exactly.
>
> > Yeah, I think that would be good. Otherwise it's rather magical.
>
> Yap, see below.
>
> > That EFER MSR is a surprising place to put that bit.
>
> That MSR is very important on AMD. Consider it AMD's CR4. :-)
>
> Thx.
>
> ---
> From: "Borislav Petkov (AMD)" <[email protected]>
> Date: Sat, 25 Feb 2023 01:11:31 +0100
> Subject: [PATCH] x86/CPU/AMD: Make sure EFER[AIBRSE] is set
>
> The AutoIBRS bit gets set only on the BSP as part of determining which
> mitigation to enable on AMD. Setting it on the APs relies on the
> circumstance that the APs get booted through the trampoline and EFER
> - the MSR which contains that bit - gets replicated on every AP from the
> BSP.
>
> However, this can change in the future and considering the security
> implications of this bit not being set on every CPU, make sure it is set
> by verifying EFER later in the boot process and on every AP.
>
> Reported-by: Josh Poimboeuf <[email protected]>
> Signed-off-by: Borislav Petkov (AMD) <[email protected]>
> Link: https://lore.kernel.org/r/20230224185257.o3mcmloei5zqu7wa@treble
> ---
> arch/x86/kernel/cpu/amd.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
> index 380753b14cab..de624c1442c2 100644
> --- a/arch/x86/kernel/cpu/amd.c
> +++ b/arch/x86/kernel/cpu/amd.c
> @@ -996,6 +996,16 @@ static void init_amd(struct cpuinfo_x86 *c)
> msr_set_bit(MSR_K7_HWCR, MSR_K7_HWCR_IRPERF_EN_BIT);
>
> check_null_seg_clears_base(c);
> +
> + /*
> + * Make sure EFER[AIBRSE - Automatic IBRS Enable] is set. The APs are brought up
> + * using the trampoline code and as part of it, EFER gets prepared there in order
> + * to be replicated onto them. Regardless, set it here again, if not set, to protect
> + * against any future refactoring/code reorganization which might miss setting
> + * this important bit.
> + */
> + if (cpu_has(c, X86_FEATURE_AUTOIBRS))
> + msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);

Is it intended to be set regardless of the spectre_v2 mitigation status?

2023-02-25 01:32:10

by Josh Poimboeuf

[permalink] [raw]
Subject: Re: [PATCH] x86/CPU/AMD: Make sure EFER[AIBRSE] is set

On Fri, Feb 24, 2023 at 04:52:21PM -0800, Pawan Gupta wrote:
> On Sat, Feb 25, 2023 at 01:20:24AM +0100, Borislav Petkov wrote:
> > On Fri, Feb 24, 2023 at 04:09:31PM -0800, Josh Poimboeuf wrote:
> > > Ah, I had to stare at that for a bit to figure out how it works.
> >
> > Yeah, it is a bit "hidden". :)
> >
> > > setup_real_mode() reads MSR_EFER from the boot CPU and stores it in
> > > trampoline_header->efer. Then the other CPUs read that stored value in
> > > startup_32() and write it into their MSR.
> >
> > Exactly.
> >
> > > Yeah, I think that would be good. Otherwise it's rather magical.
> >
> > Yap, see below.
> >
> > > That EFER MSR is a surprising place to put that bit.
> >
> > That MSR is very important on AMD. Consider it AMD's CR4. :-)
> >
> > Thx.
> >
> > ---
> > From: "Borislav Petkov (AMD)" <[email protected]>
> > Date: Sat, 25 Feb 2023 01:11:31 +0100
> > Subject: [PATCH] x86/CPU/AMD: Make sure EFER[AIBRSE] is set
> >
> > The AutoIBRS bit gets set only on the BSP as part of determining which
> > mitigation to enable on AMD. Setting it on the APs relies on the
> > circumstance that the APs get booted through the trampoline and EFER
> > - the MSR which contains that bit - gets replicated on every AP from the
> > BSP.
> >
> > However, this can change in the future and considering the security
> > implications of this bit not being set on every CPU, make sure it is set
> > by verifying EFER later in the boot process and on every AP.
> >
> > Reported-by: Josh Poimboeuf <[email protected]>
> > Signed-off-by: Borislav Petkov (AMD) <[email protected]>
> > Link: https://lore.kernel.org/r/20230224185257.o3mcmloei5zqu7wa@treble
> > ---
> > arch/x86/kernel/cpu/amd.c | 10 ++++++++++
> > 1 file changed, 10 insertions(+)
> >
> > diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
> > index 380753b14cab..de624c1442c2 100644
> > --- a/arch/x86/kernel/cpu/amd.c
> > +++ b/arch/x86/kernel/cpu/amd.c
> > @@ -996,6 +996,16 @@ static void init_amd(struct cpuinfo_x86 *c)
> > msr_set_bit(MSR_K7_HWCR, MSR_K7_HWCR_IRPERF_EN_BIT);
> >
> > check_null_seg_clears_base(c);
> > +
> > + /*
> > + * Make sure EFER[AIBRSE - Automatic IBRS Enable] is set. The APs are brought up
> > + * using the trampoline code and as part of it, EFER gets prepared there in order
> > + * to be replicated onto them. Regardless, set it here again, if not set, to protect
> > + * against any future refactoring/code reorganization which might miss setting
> > + * this important bit.
> > + */
> > + if (cpu_has(c, X86_FEATURE_AUTOIBRS))
> > + msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);
>
> Is it intended to be set regardless of the spectre_v2 mitigation status?

Right, it needs to check spectre_v2_enabled.

Also, this code might be a better fit in identify_secondary_cpu() with
the other MSR-writing bug-related code.

--
Josh

2023-02-25 12:22:12

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH] x86/CPU/AMD: Make sure EFER[AIBRSE] is set

On Fri, Feb 24, 2023 at 05:32:02PM -0800, Josh Poimboeuf wrote:
> > Is it intended to be set regardless of the spectre_v2 mitigation status?
>
> Right, it needs to check spectre_v2_enabled.

Right, I realized this too this morning, while sleeping, so I made myself
a note on the nightstand to fix it... :-)

> Also, this code might be a better fit in identify_secondary_cpu() with
> the other MSR-writing bug-related code.

Same path:

identify_secondary_cpu->identify_cpu->this_cpu->c_init(c)->init_amd

Plus, it keeps the vendor code where it belongs.

v2 below, still untested.

---
From: "Borislav Petkov (AMD)" <[email protected]>
Date: Sat, 25 Feb 2023 01:11:31 +0100
Subject: [PATCH] x86/CPU/AMD: Make sure EFER[AIBRSE] is set

The AutoIBRS bit gets set only on the BSP as part of determining which
mitigation to enable on AMD. Setting it on the APs relies on the
circumstance that the APs get booted through the trampoline and EFER
- the MSR which contains that bit - gets replicated on every AP from the
BSP.

However, this can change in the future and considering the security
implications of this bit not being set on every CPU, make sure it is set
by verifying EFER later in the boot process and on every AP.

Reported-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/20230224185257.o3mcmloei5zqu7wa@treble
---
arch/x86/kernel/cpu/amd.c | 11 +++++++++++
arch/x86/kernel/cpu/bugs.c | 14 ++------------
arch/x86/kernel/cpu/cpu.h | 10 ++++++++++
3 files changed, 23 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 380753b14cab..aba1b43ed6fd 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -996,6 +996,17 @@ static void init_amd(struct cpuinfo_x86 *c)
msr_set_bit(MSR_K7_HWCR, MSR_K7_HWCR_IRPERF_EN_BIT);

check_null_seg_clears_base(c);
+
+ /*
+ * Make sure EFER[AIBRSE - Automatic IBRS Enable] is set. The APs are brought up
+ * using the trampoline code and as part of it, EFER gets prepared there in order
+ * to be replicated onto them. Regardless, set it here again, if not set, to protect
+ * against any future refactoring/code reorganization which might miss setting
+ * this important bit.
+ */
+ if (spectre_v2_in_ibrs_mode(spectre_v2_enabled) &&
+ cpu_has(c, X86_FEATURE_AUTOIBRS))
+ msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);
}

#ifdef CONFIG_X86_32
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 4fd43d25b483..407c73d3beb9 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -784,8 +784,7 @@ static int __init nospectre_v1_cmdline(char *str)
}
early_param("nospectre_v1", nospectre_v1_cmdline);

-static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
- SPECTRE_V2_NONE;
+enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = SPECTRE_V2_NONE;

#undef pr_fmt
#define pr_fmt(fmt) "RETBleed: " fmt
@@ -1133,16 +1132,7 @@ spectre_v2_parse_user_cmdline(void)
return SPECTRE_V2_USER_CMD_AUTO;
}

-static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
-{
- return mode == SPECTRE_V2_IBRS ||
- mode == SPECTRE_V2_EIBRS ||
- mode == SPECTRE_V2_EIBRS_RETPOLINE ||
- mode == SPECTRE_V2_EIBRS_LFENCE;
-}
-
-static void __init
-spectre_v2_user_select_mitigation(void)
+static void __init spectre_v2_user_select_mitigation(void)
{
enum spectre_v2_user_mitigation mode = SPECTRE_V2_USER_NONE;
bool smt_possible = IS_ENABLED(CONFIG_SMP);
diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
index 57a5349e6954..99c507c42901 100644
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -83,4 +83,14 @@ unsigned int aperfmperf_get_khz(int cpu);
extern void x86_spec_ctrl_setup_ap(void);
extern void update_srbds_msr(void);

+extern enum spectre_v2_mitigation spectre_v2_enabled;
+
+static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
+{
+ return mode == SPECTRE_V2_IBRS ||
+ mode == SPECTRE_V2_EIBRS ||
+ mode == SPECTRE_V2_EIBRS_RETPOLINE ||
+ mode == SPECTRE_V2_EIBRS_LFENCE;
+}
+
#endif /* ARCH_X86_CPU_H */
--
2.35.1

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-02-25 17:28:41

by Josh Poimboeuf

[permalink] [raw]
Subject: Re: [PATCH] x86/CPU/AMD: Make sure EFER[AIBRSE] is set

On Sat, Feb 25, 2023 at 01:21:49PM +0100, Borislav Petkov wrote:
> On Fri, Feb 24, 2023 at 05:32:02PM -0800, Josh Poimboeuf wrote:
> > > Is it intended to be set regardless of the spectre_v2 mitigation status?
> >
> > Right, it needs to check spectre_v2_enabled.
>
> Right, I realized this too this morning, while sleeping, so I made me
> a note on the nightstand to fix it... :-)
>
> > Also, this code might be a better fit in identify_secondary_cpu() with
> > the other MSR-writing bug-related code.
>
> Same path:
>
> identify_secondary_cpu->identify_cpu->this_cpu->c_init(c)->init_amd
>
> Plus, it keeps the vendor code where it belongs.

All the other "bug" code in identify_secondary_cpu() *is*
vendor-specific.

And for that matter, so is most of the code in bugs.c.

I'm thinking we should just move all this MSR-writing bug-related code
into a new cpu_init_bugs() function in bugs.c which can be called by
identify_secondary_cpu().

Then we have more "bug" code together and all the local
variables/functions like spectre_v2_in_ibrs_mode() can remain local.
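
Something like the following, say - a hypothetical sketch only;
cpu_init_bugs() and its call site are made-up names, not existing code:

	/* bugs.c: spectre_v2_enabled and the helpers stay file-local */
	void cpu_init_bugs(struct cpuinfo_x86 *c)
	{
		if (spectre_v2_in_ibrs_mode(spectre_v2_enabled) &&
		    cpu_has(c, X86_FEATURE_AUTOIBRS))
			msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);
	}

	/* common.c: */
	void identify_secondary_cpu(struct cpuinfo_x86 *c)
	{
		/* ... existing identify_secondary_cpu() work ... */
		cpu_init_bugs(c);
	}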

--
Josh

2023-02-25 22:57:00

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH] x86/CPU/AMD: Make sure EFER[AIBRSE] is set

On Sat, Feb 25, 2023 at 09:28:32AM -0800, Josh Poimboeuf wrote:
> All the other "bug" code in identify_secondary_cpu() *is*
> vendor-specific.

I meant "vendor-specific" in the sense that AMD code goes to amd.c, etc.

As to the identify_secondary_cpu() code - I didn't like it being
slapped there either but it got stuck in there hastily during the
mitigations upstreaming as back then we had bigger fish to fry than
paying too much attention to clean design...

> And for that matter, so is most of the code in bugs.c.
>
> I'm thinking we should just move all this MSR-writing bug-related code
> into a new cpu_init_bugs() function in bugs.c which can be called by
> identify_secondary_cpu().

I guess.

> Then we have more "bug" code together and all the local
> variables/functions like spectre_v2_in_ibrs_mode() can remain local.

They're still local, more or less. Note the special cpu.h header which
is private to arch/x86/kernel/cpu/

Thx.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-02-25 23:43:42

by Josh Poimboeuf

[permalink] [raw]
Subject: Re: [PATCH] x86/CPU/AMD: Make sure EFER[AIBRSE] is set

On Sat, Feb 25, 2023 at 11:56:37PM +0100, Borislav Petkov wrote:
> On Sat, Feb 25, 2023 at 09:28:32AM -0800, Josh Poimboeuf wrote:
> > All the other "bug" code in identify_secondary_cpu() *is*
> > vendor-specific.
>
> I meant "vendor-specific" in the sense that AMD code goes to amd.c, etc.

Hm? So code in bugs.c is not vendor-specific? That seems circular and
I don't get your point.

> As to the identify_secondary_cpu() code - I didn't like it being
> slapped there either but it got stuck in there hastily during the
> mitigations upstreaming as back then we had bigger fish to fry than
> paying too much attention to clean design...

Right, so rather than spreading all the bug-related MSR logic around,
just do it in one spot.

--
Josh

2023-02-26 11:18:28

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH] x86/CPU/AMD: Make sure EFER[AIBRSE] is set

On Sat, Feb 25, 2023 at 03:43:30PM -0800, Josh Poimboeuf wrote:
> Hm? So code in bugs.c is not vendor-specific? That seems circular and
> I don't get your point.

Lemme try that again...

So there's an obvious benefit of keeping vendor-specific CPU code in one
place: Intel stuff in cpu/intel*, AMD stuff in cpu/amd.c

The security stuff is still vendor-specific CPU code.

Now, if you wanna add a function pointer ->bugs_init or so, say, to

struct cpu_dev

and keep the respective code in amd.c or intel.c, then we get the best
of both worlds:

- vendor-specific code remains in the respective file

- you have a vendor-specific function which does hw vuln-specific work
*without* vendor checks and so on

> Right, so rather than spreading all the bug-related MSR logic around,
> just do it in one spot.

It is all CPU init code and I'm wondering if splitting stuff by vendor
wouldn't make all that maze in bugs.c a lot more palatable. And get rid
of

$ git grep VENDOR arch/x86/kernel/cpu/bugs.c | wc -l
11

those, for starters.

There's this trade-off of

1. keeping bugs setup code in one place - but then you need to do vendor
checks and the other CPU setup code is somewhere else and it is
probably related, MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT in amd.c for
example.

or

2. separating it into their respective files. Then the respective vendor
code is simple because you don't need vendor checks. It would need to
be done in a slick way, though, so that it remains maintainable.
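
For illustration, option 2 with the ->bugs_init pointer could look roughly
like this - a hypothetical sketch; struct cpu_dev exists, but the
c_bugs_init member and its wiring are made up here:

	struct cpu_dev {
		/* ... existing members such as c_init() ... */
		void (*c_bugs_init)(struct cpuinfo_x86 *c);
	};

	/* amd.c: AMD-only hw-vuln setup, no vendor checks needed */
	static void amd_bugs_init(struct cpuinfo_x86 *c)
	{
		/* spectre_v2_enabled would still need sharing via cpu.h */
		if (spectre_v2_in_ibrs_mode(spectre_v2_enabled) &&
		    cpu_has(c, X86_FEATURE_AUTOIBRS))
			msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);
	}

	/* common.c, somewhere in identify_cpu(): */
	if (this_cpu->c_bugs_init)
		this_cpu->c_bugs_init(c);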

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-02-26 17:27:36

by Josh Poimboeuf

[permalink] [raw]
Subject: Re: [PATCH] x86/CPU/AMD: Make sure EFER[AIBRSE] is set

On Sun, Feb 26, 2023 at 12:18:06PM +0100, Borislav Petkov wrote:
> > Right, so rather than spreading all the bug-related MSR logic around,
> > just do it in one spot.
>
> It is all CPU init code and I'm wondering if splitting stuff by vendor
> wouldn't make all that maze in bugs.c a lot more palatable. And get rid
> of
>
> $ git grep VENDOR arch/x86/kernel/cpu/bugs.c | wc -l
> 11
>
> those, for starters.
>
> There's this trade-off of
>
> 1. keeping bugs setup code in one place - but then you need to do vendor
> checks and the other CPU setup code is somewhere else and it is
> probably related, MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT in amd.c for
> example.
>
> or
>
> 2. separating it into their respective files. Then the respective vendor
> code is simple because you don't need vendor checks. It would need to
> be done in a slick way, though, so that it remains maintainable.

At least now it's a (mostly) self-contained hornet's nest. I'm not sure
we want to poke it :-)

And I'm not sure spreading the mess around would be an improvement.

--
Josh

2023-02-26 18:44:21

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH] x86/CPU/AMD: Make sure EFER[AIBRSE] is set

On Sun, Feb 26, 2023 at 09:27:26AM -0800, Josh Poimboeuf wrote:
> At least now it's a (mostly) self-contained hornet's nest. I'm not sure
> we want to poke it :-)
>
> And I'm not sure spreading the mess around would be an improvement.

Yah, if anything, I wanna see the change first and it has to be an
obvious and good one.

What I think we should finish doing, though, is documenting it. Because
there are aspects/mitigations that are missing, e.g., there's no
retbleed.rst in Documentation/admin-guide/hw-vuln/

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-02-27 15:25:06

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCH] x86/CPU/AMD: Make sure EFER[AIBRSE] is set

On 2/25/23 04:21, Borislav Petkov wrote:
> + /*
> + * Make sure EFER[AIBRSE - Automatic IBRS Enable] is set. The APs are brought up
> + * using the trampoline code and as part of it, EFER gets prepared there in order
> + * to be replicated onto them. Regardless, set it here again, if not set, to protect
> + * against any future refactoring/code reorganization which might miss setting
> + * this important bit.
> + */
> + if (spectre_v2_in_ibrs_mode(spectre_v2_enabled) &&
> + cpu_has(c, X86_FEATURE_AUTOIBRS))
> + msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);
> }

I guess the belt and suspenders could be justified here by how important
the bit is.

But if EFER[AIBRSE] gets cleared somehow, shouldn't we also dump a warning
out here so the fool who botched it can fix it? Even if AIBRSE is fixed
up, some less important bit could still be botched.

It will freak some users out, but it does seem like the kind of thing we
_want_ a bug report for.

2023-02-27 15:40:39

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH] x86/CPU/AMD: Make sure EFER[AIBRSE] is set

On Mon, Feb 27, 2023 at 07:25:00AM -0800, Dave Hansen wrote:
> It will freak some users out, but it does seem like the kind of thing we
> _want_ a bug report for.

You mean, something like:

if (spectre_v2_in_ibrs_mode(spectre_v2_enabled) &&
cpu_has(c, X86_FEATURE_AUTOIBRS))
WARN_ON_ONCE(msr_set_bit(MSR_EFER, _EFER_AUTOIBRS));

?

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-02-27 16:39:10

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCH] x86/CPU/AMD: Make sure EFER[AIBRSE] is set

On 2/27/23 07:40, Borislav Petkov wrote:
> On Mon, Feb 27, 2023 at 07:25:00AM -0800, Dave Hansen wrote:
>> It will freak some users out, but it does seem like the kind of thing we
>> _want_ a bug report for.
> You mean, something like:
>
> if (spectre_v2_in_ibrs_mode(spectre_v2_enabled) &&
> cpu_has(c, X86_FEATURE_AUTOIBRS))
> WARN_ON_ONCE(msr_set_bit(MSR_EFER, _EFER_AUTOIBRS));
>
> ?

Yep, that looks sane.

Subject: [tip: x86/misc] MAINTAINERS: Add x86 hardware vulnerabilities section

The following commit has been merged into the x86/misc branch of tip:

Commit-ID: 5910f06503aae3cc4890e562683abc3e38857ff9
Gitweb: https://git.kernel.org/tip/5910f06503aae3cc4890e562683abc3e38857ff9
Author: Josh Poimboeuf <[email protected]>
AuthorDate: Fri, 24 Feb 2023 13:35:22 -08:00
Committer: Borislav Petkov (AMD) <[email protected]>
CommitterDate: Fri, 10 Mar 2023 11:13:30 +01:00

MAINTAINERS: Add x86 hardware vulnerabilities section

Add the bunch of losers who have to deal with this to MAINTAINERS so
that they can get explicitly CCed on more hw nightmares.

[ bp: Add commit message. ]

Signed-off-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/20230224213522.nofavod2jzhn22wp@treble
---
MAINTAINERS | 11 +++++++++++
1 file changed, 11 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 8d5bc22..d95c6cc 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -22660,6 +22660,17 @@ S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/asm
F: arch/x86/entry/

+X86 HARDWARE VULNERABILITIES
+M: Thomas Gleixner <[email protected]>
+M: Borislav Petkov <[email protected]>
+M: Peter Zijlstra <[email protected]>
+M: Josh Poimboeuf <[email protected]>
+R: Pawan Gupta <[email protected]>
+S: Maintained
+F: Documentation/admin-guide/hw-vuln/
+F: arch/x86/include/asm/nospec-branch.h
+F: arch/x86/kernel/cpu/bugs.c
+
X86 MCE INFRASTRUCTURE
M: Tony Luck <[email protected]>
M: Borislav Petkov <[email protected]>

2023-03-10 16:27:37

by Borislav Petkov

[permalink] [raw]
Subject: [PATCH -v2] x86/CPU/AMD: Make sure EFER[AIBRSE] is set

v2, with feedback addressed, and rediffed on top of 6.3-rc1:

---
From: "Borislav Petkov (AMD)" <[email protected]>

The AutoIBRS bit gets set only on the BSP as part of determining which
mitigation to enable on AMD. Setting it on the APs relies on the
circumstance that the APs get booted through the trampoline and EFER
- the MSR which contains that bit - gets replicated on every AP from the
BSP.

However, this can change in the future and considering the security
implications of this bit not being set on every CPU, make sure it is set
by verifying EFER later in the boot process and on every AP.

Reported-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/20230224185257.o3mcmloei5zqu7wa@treble
---
arch/x86/kernel/cpu/amd.c | 11 +++++++++++
arch/x86/kernel/cpu/bugs.c | 10 +---------
arch/x86/kernel/cpu/cpu.h | 8 ++++++++
3 files changed, 20 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 380753b14cab..dd32dbc7c33e 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -996,6 +996,17 @@ static void init_amd(struct cpuinfo_x86 *c)
msr_set_bit(MSR_K7_HWCR, MSR_K7_HWCR_IRPERF_EN_BIT);

check_null_seg_clears_base(c);
+
+ /*
+ * Make sure EFER[AIBRSE - Automatic IBRS Enable] is set. The APs are brought up
+ * using the trampoline code and as part of it, MSR_EFER gets prepared there in
+ * order to be replicated onto them. Regardless, set it here again, if not set,
+ * to protect against any future refactoring/code reorganization which might
+ * miss setting this important bit.
+ */
+ if (spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
+ cpu_has(c, X86_FEATURE_AUTOIBRS))
+ WARN_ON_ONCE(msr_set_bit(MSR_EFER, _EFER_AUTOIBRS));
}

#ifdef CONFIG_X86_32
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index f9d060e71c3e..182af64387d0 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -784,8 +784,7 @@ static int __init nospectre_v1_cmdline(char *str)
}
early_param("nospectre_v1", nospectre_v1_cmdline);

-static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
- SPECTRE_V2_NONE;
+enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = SPECTRE_V2_NONE;

#undef pr_fmt
#define pr_fmt(fmt) "RETBleed: " fmt
@@ -1133,13 +1132,6 @@ spectre_v2_parse_user_cmdline(void)
return SPECTRE_V2_USER_CMD_AUTO;
}

-static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode)
-{
- return mode == SPECTRE_V2_EIBRS ||
- mode == SPECTRE_V2_EIBRS_RETPOLINE ||
- mode == SPECTRE_V2_EIBRS_LFENCE;
-}
-
static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
{
return spectre_v2_in_eibrs_mode(mode) || mode == SPECTRE_V2_IBRS;
diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
index 57a5349e6954..f97b0fe13da8 100644
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -83,4 +83,12 @@ unsigned int aperfmperf_get_khz(int cpu);
extern void x86_spec_ctrl_setup_ap(void);
extern void update_srbds_msr(void);

+extern enum spectre_v2_mitigation spectre_v2_enabled;
+
+static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode)
+{
+ return mode == SPECTRE_V2_EIBRS ||
+ mode == SPECTRE_V2_EIBRS_RETPOLINE ||
+ mode == SPECTRE_V2_EIBRS_LFENCE;
+}
#endif /* ARCH_X86_CPU_H */
--
2.35.1

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-03-13 15:42:21

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCH -v2] x86/CPU/AMD: Make sure EFER[AIBRSE] is set

On 3/10/23 08:22, Borislav Petkov wrote:
> The AutoIBRS bit gets set only on the BSP as part of determining which
> mitigation to enable on AMD. Setting it on the APs relies on the
> circumstance that the APs get booted through the trampoline and EFER
> - the MSR which contains that bit - gets replicated on every AP from the
> BSP.
>
> However, this can change in the future and considering the security
> implications of this bit not being set on every CPU, make sure it is set
> by verifying EFER later in the boot process and on every AP.
>
> Reported-by: Josh Poimboeuf <[email protected]>
> Signed-off-by: Borislav Petkov (AMD) <[email protected]>
> Link: https://lore.kernel.org/r/20230224185257.o3mcmloei5zqu7wa@treble

Looks sane, thanks for adding the warning:

Acked-by: Dave Hansen <[email protected]>


Subject: [tip: x86/cpu] x86/CPU/AMD: Make sure EFER[AIBRSE] is set

The following commit has been merged into the x86/cpu branch of tip:

Commit-ID: 8cc68c9c9e92dbaae51a711454c66eb668045508
Gitweb: https://git.kernel.org/tip/8cc68c9c9e92dbaae51a711454c66eb668045508
Author: Borislav Petkov (AMD) <[email protected]>
AuthorDate: Sat, 25 Feb 2023 01:11:31 +01:00
Committer: Borislav Petkov (AMD) <[email protected]>
CommitterDate: Thu, 16 Mar 2023 11:50:00 +01:00

x86/CPU/AMD: Make sure EFER[AIBRSE] is set

The AutoIBRS bit gets set only on the BSP as part of determining which
mitigation to enable on AMD. Setting it on the APs relies on the
circumstance that the APs get booted through the trampoline and EFER
- the MSR which contains that bit - gets replicated on every AP from the
BSP.

However, this can change in the future and considering the security
implications of this bit not being set on every CPU, make sure it is set
by verifying EFER later in the boot process and on every AP.

Reported-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Acked-by: Dave Hansen <[email protected]>
Link: https://lore.kernel.org/r/20230224185257.o3mcmloei5zqu7wa@treble
---
arch/x86/kernel/cpu/amd.c | 11 +++++++++++
arch/x86/kernel/cpu/bugs.c | 10 +---------
arch/x86/kernel/cpu/cpu.h | 8 ++++++++
3 files changed, 20 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 380753b..dd32dbc 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -996,6 +996,17 @@ static void init_amd(struct cpuinfo_x86 *c)
msr_set_bit(MSR_K7_HWCR, MSR_K7_HWCR_IRPERF_EN_BIT);

check_null_seg_clears_base(c);
+
+ /*
+ * Make sure EFER[AIBRSE - Automatic IBRS Enable] is set. The APs are brought up
+ * using the trampoline code and as part of it, MSR_EFER gets prepared there in
+ * order to be replicated onto them. Regardless, set it here again, if not set,
+ * to protect against any future refactoring/code reorganization which might
+ * miss setting this important bit.
+ */
+ if (spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
+ cpu_has(c, X86_FEATURE_AUTOIBRS))
+ WARN_ON_ONCE(msr_set_bit(MSR_EFER, _EFER_AUTOIBRS));
}

#ifdef CONFIG_X86_32
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index f9d060e..182af64 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -784,8 +784,7 @@ static int __init nospectre_v1_cmdline(char *str)
}
early_param("nospectre_v1", nospectre_v1_cmdline);

-static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
- SPECTRE_V2_NONE;
+enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = SPECTRE_V2_NONE;

#undef pr_fmt
#define pr_fmt(fmt) "RETBleed: " fmt
@@ -1133,13 +1132,6 @@ spectre_v2_parse_user_cmdline(void)
return SPECTRE_V2_USER_CMD_AUTO;
}

-static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode)
-{
- return mode == SPECTRE_V2_EIBRS ||
- mode == SPECTRE_V2_EIBRS_RETPOLINE ||
- mode == SPECTRE_V2_EIBRS_LFENCE;
-}
-
static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
{
return spectre_v2_in_eibrs_mode(mode) || mode == SPECTRE_V2_IBRS;
diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
index 57a5349..f97b0fe 100644
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -83,4 +83,12 @@ unsigned int aperfmperf_get_khz(int cpu);
extern void x86_spec_ctrl_setup_ap(void);
extern void update_srbds_msr(void);

+extern enum spectre_v2_mitigation spectre_v2_enabled;
+
+static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode)
+{
+ return mode == SPECTRE_V2_EIBRS ||
+ mode == SPECTRE_V2_EIBRS_RETPOLINE ||
+ mode == SPECTRE_V2_EIBRS_LFENCE;
+}
#endif /* ARCH_X86_CPU_H */