2023-01-23 22:57:19

by Kim Phillips

Subject: [PATCH v8 0/8] x86/cpu, kvm: Support AMD Automatic IBRS

The AMD Zen4 core supports a new feature called Automatic IBRS
(Indirect Branch Restricted Speculation).

Enable Automatic IBRS by default if the CPU feature is present.
It typically provides greater performance over the incumbent
generic retpolines mitigation.

Patch 1 [unchanged from v7] Adds support for the leaf that
contains the AutoIBRS feature bit.

Patch 2 moves the leaf's open-coded code from __do_cpuid_func()
to kvm_set_cpu_caps() in preparation for adding the features in
their native leaf.

Patches 3-6 introduce the new leaf's supported bits one by one.

Patch 7 [unchanged from v7] Adds support for AutoIBRS by turning
its EFER enablement bit on at startup if the feature is available.

Patch 8 [unchanged from v7] Adds support for propagating AutoIBRS
to the guest.

v8: Address comments from Sean Christopherson, Boris:
- Move open-coded cpuid leaf 0x80000021 EAX bit propagation code
in a single step/patch to avoid CPUID bits getting cleared
by the open-coded ANDs coming after cpuid_entry_override().
- Include changes Boris made when committing v7 to tip/x86-cpu, i.e.:
- Remove "AMD" prefix from feature comment text in cpufeatures.h
- Convert test in check_null_seg_clears_base() too.
- Commit message changes

v7: https://lore.kernel.org/lkml/[email protected]/
- Add Dave Hansen's Acked-by to unchanged patch 6/7
- Change patch 3/7 to not bother to set MSR DE_CFG[1]
if X86_FEATURE_LFENCE_RDTSC is already set [Boris]
- v6 went out with two 1/1's, try to not do that again

v6: https://lore.kernel.org/lkml/[email protected]/
Address v5 comment from Boris:
- Move CPUID leaf 0x80000021 EAX feature bits from scattered
to the new whole leaf since the majority of the features
will be used in the kernel and thus a separate leaf is
appropriate.

v5: https://lore.kernel.org/lkml/[email protected]/
Address v4 comments from Dave Hansen, Pawan Gupta, and Boris:
- Don't add new user-visible 'autoibrs' command line
options that have to be documented: reuse 'eibrs'
- Update Documentation/admin-guide/hw-vuln/spectre.rst
- Add NO_EIBRS_PBRSB to Hygon as well
- Re-word commit texts to not use words like 'us'

v4: https://lore.kernel.org/lkml/[email protected]/
Moved some kvm bits that had crept into patch 6/7 back into 7/7,
and addressed v3 comments:
- Don't put ", kvm" in titles of patches that don't touch kvm. [SeanC]
- () after function names, i.e. kvm_set_cpu_caps(). [SeanC]
- follow the established kvm_cpu_cap_init_scattered() style [SeanC]
- Mention switching to cpu_feature_enabled() instead of static_cpu_has()
in the commit text [SeanC]
- Pawan Gupta mentioned that having to order the Intel feature bit
enablement after Intel EIBRS bug detection could be avoided
by adding NO_EIBRS_PBRSB to cpu_vuln_whitelist, so did that,
which allowed regrouping all EIBRS-related code in one place
in cpu_set_bug_bits().

v3: https://lore.kernel.org/lkml/[email protected]/
- Remove Co-developed-bys. They require signed-off-bys,
so co-developers need to add them themselves.
- update check_null_seg_clears_base() [Boris]
- Made the feature bit additions separate patches
because v2 patch was clearly doing too many things at once.

v2: https://lore.kernel.org/lkml/[email protected]/
https://lkml.org/lkml/2022/11/23/1690
- Use synthetic/scattered bits instead of introducing new leaf [Boris]
- Combine the rest of the leaf's bits being used [Paolo]
Note: Bits not used by the host can be moved to kvm/cpuid.c if
maintainers do not want them in cpufeatures.h.
- Hoist bitsetting code to kvm_set_cpu_caps(), and use
cpuid_entry_override() in __do_cpuid_func() [Paolo]
- Reuse SPECTRE_V2_EIBRS spectre_v2_mitigation enum [Boris, PeterZ, D.Hansen]
- Change from Boris' diff:
Moved setting X86_FEATURE_IBRS_ENHANCED to after BUG_EIBRS_PBRSB
so PBRSB mitigations wouldn't be enabled.
- Allow for users to specify "autoibrs,lfence/retpoline" instead
of actively preventing the extra protections. AutoIBRS doesn't
require the extra protection, but allow it anyway.

v1: https://lore.kernel.org/lkml/[email protected]/

Signed-off-by: Kim Phillips <[email protected]>
Cc: Borislav Petkov (AMD) <[email protected]>
Cc: Boris Ostrovsky <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Joao Martins <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Sean Christopherson <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: David Woodhouse <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Juergen Gross <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Tom Lendacky <[email protected]>
Cc: Alexey Kardashevskiy <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]

Kim Phillips (8):
x86/cpu, kvm: Add support for CPUID_80000021_EAX
x86/cpu, kvm: Move open-coded cpuid leaf 0x80000021 EAX bit
propagation code
x86/cpu, kvm: Add the NO_NESTED_DATA_BP feature
x86/cpu, kvm: Move X86_FEATURE_LFENCE_RDTSC to its native leaf
x86/cpu, kvm: Add the Null Selector Clears Base feature
x86/cpu, kvm: Add the SMM_CTL MSR not present feature
x86/cpu: Support AMD Automatic IBRS
x86/cpu, kvm: Propagate the AMD Automatic IBRS feature to the guest

Documentation/admin-guide/hw-vuln/spectre.rst | 6 ++--
.../admin-guide/kernel-parameters.txt | 6 ++--
arch/x86/include/asm/cpufeature.h | 7 +++--
arch/x86/include/asm/cpufeatures.h | 11 +++++--
arch/x86/include/asm/disabled-features.h | 3 +-
arch/x86/include/asm/msr-index.h | 2 ++
arch/x86/include/asm/required-features.h | 3 +-
arch/x86/kernel/cpu/amd.c | 2 +-
arch/x86/kernel/cpu/bugs.c | 20 ++++++++-----
arch/x86/kernel/cpu/common.c | 26 +++++++++-------
arch/x86/kvm/cpuid.c | 30 +++++++------------
arch/x86/kvm/reverse_cpuid.h | 1 +
arch/x86/kvm/svm/svm.c | 3 ++
arch/x86/kvm/x86.c | 3 ++
14 files changed, 72 insertions(+), 51 deletions(-)

--
2.34.1



2023-01-23 22:57:32

by Kim Phillips

Subject: [PATCH v8 1/8] x86/cpu, kvm: Add support for CPUID_80000021_EAX

Add support for CPUID leaf 80000021, EAX. The majority of the features will be
used in the kernel and thus a separate leaf is appropriate.

Include KVM's reverse_cpuid entry because features are used by VM guests, too.

[ bp: Massage commit message. ]
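
For illustration only, and not part of the patch: a minimal user-space sketch
of querying the new leaf, assuming the GCC/Clang <cpuid.h> helpers are
available. The bit 8 check corresponds to the AutoIBRS bit used later in the
series.

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx;

        /* CPUID 0x80000000 EAX reports the highest supported extended leaf. */
        if (!__get_cpuid(0x80000000, &eax, &ebx, &ecx, &edx) || eax < 0x80000021)
                return 1;

        /* Read the new leaf; the feature bits of interest live in EAX. */
        __get_cpuid(0x80000021, &eax, &ebx, &ecx, &edx);
        printf("CPUID 0x80000021 EAX: 0x%08x (AutoIBRS: %s)\n",
               eax, (eax & (1u << 8)) ? "yes" : "no");

        return 0;
}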

Signed-off-by: Kim Phillips <[email protected]>
---
arch/x86/include/asm/cpufeature.h | 7 +++++--
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/disabled-features.h | 3 ++-
arch/x86/include/asm/required-features.h | 3 ++-
arch/x86/kernel/cpu/common.c | 3 +++
arch/x86/kvm/reverse_cpuid.h | 1 +
6 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index 1a85e1fb0922..ce0c8f7d3218 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -32,6 +32,7 @@ enum cpuid_leafs
CPUID_8000_0007_EBX,
CPUID_7_EDX,
CPUID_8000_001F_EAX,
+ CPUID_8000_0021_EAX,
};

#define X86_CAP_FMT_NUM "%d:%d"
@@ -94,8 +95,9 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 17, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 18, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 19, feature_bit) || \
+ CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 20, feature_bit) || \
REQUIRED_MASK_CHECK || \
- BUILD_BUG_ON_ZERO(NCAPINTS != 20))
+ BUILD_BUG_ON_ZERO(NCAPINTS != 21))

#define DISABLED_MASK_BIT_SET(feature_bit) \
( CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 0, feature_bit) || \
@@ -118,8 +120,9 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 17, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 18, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 19, feature_bit) || \
+ CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 20, feature_bit) || \
DISABLED_MASK_CHECK || \
- BUILD_BUG_ON_ZERO(NCAPINTS != 20))
+ BUILD_BUG_ON_ZERO(NCAPINTS != 21))

#define cpu_has(c, bit) \
(__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 : \
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 6cfa7143c316..a84536876794 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -13,7 +13,7 @@
/*
* Defines x86 CPU feature bits
*/
-#define NCAPINTS 20 /* N 32-bit words worth of info */
+#define NCAPINTS 21 /* N 32-bit words worth of info */
#define NBUGINTS 1 /* N 32-bit bug flags */

/*
diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
index c44b56f7ffba..5dfa4fb76f4b 100644
--- a/arch/x86/include/asm/disabled-features.h
+++ b/arch/x86/include/asm/disabled-features.h
@@ -124,6 +124,7 @@
#define DISABLED_MASK17 0
#define DISABLED_MASK18 0
#define DISABLED_MASK19 0
-#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 20)
+#define DISABLED_MASK20 0
+#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 21)

#endif /* _ASM_X86_DISABLED_FEATURES_H */
diff --git a/arch/x86/include/asm/required-features.h b/arch/x86/include/asm/required-features.h
index aff774775c67..7ba1726b71c7 100644
--- a/arch/x86/include/asm/required-features.h
+++ b/arch/x86/include/asm/required-features.h
@@ -98,6 +98,7 @@
#define REQUIRED_MASK17 0
#define REQUIRED_MASK18 0
#define REQUIRED_MASK19 0
-#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 20)
+#define REQUIRED_MASK20 0
+#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 21)

#endif /* _ASM_X86_REQUIRED_FEATURES_H */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 5fe56f0ec9d7..094dbcd63f2a 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1093,6 +1093,9 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
if (c->extended_cpuid_level >= 0x8000001f)
c->x86_capability[CPUID_8000_001F_EAX] = cpuid_eax(0x8000001f);

+ if (c->extended_cpuid_level >= 0x80000021)
+ c->x86_capability[CPUID_8000_0021_EAX] = cpuid_eax(0x80000021);
+
init_scattered_cpuid_features(c);
init_speculation_control(c);

diff --git a/arch/x86/kvm/reverse_cpuid.h b/arch/x86/kvm/reverse_cpuid.h
index 042d0aca3c92..81f4e9ce0c77 100644
--- a/arch/x86/kvm/reverse_cpuid.h
+++ b/arch/x86/kvm/reverse_cpuid.h
@@ -68,6 +68,7 @@ static const struct cpuid_reg reverse_cpuid[] = {
[CPUID_12_EAX] = {0x00000012, 0, CPUID_EAX},
[CPUID_8000_001F_EAX] = {0x8000001f, 0, CPUID_EAX},
[CPUID_7_1_EDX] = { 7, 1, CPUID_EDX},
+ [CPUID_8000_0021_EAX] = {0x80000021, 0, CPUID_EAX},
};

/*
--
2.34.1


2023-01-23 22:58:15

by Kim Phillips

Subject: [PATCH v8 2/8] x86/cpu, kvm: Move open-coded cpuid leaf 0x80000021 EAX bit propagation code

Move code from __do_cpuid_func() to kvm_set_cpu_caps() in preparation
for adding the features in their native leaf.

Also drop the bit description comments as it will be more self-
describing once the individual features are added.

Whilst there, switch to using the more efficient cpu_feature_enabled()
instead of static_cpu_has().

Signed-off-by: Kim Phillips <[email protected]>
---
arch/x86/kvm/cpuid.c | 30 +++++++++++-------------------
1 file changed, 11 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 596061c1610e..3930452bf06e 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -741,6 +741,16 @@ void kvm_set_cpu_caps(void)
0 /* SME */ | F(SEV) | 0 /* VM_PAGE_FLUSH */ | F(SEV_ES) |
F(SME_COHERENT));

+ kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
+ BIT(0) /* NO_NESTED_DATA_BP */ | 0 /* SmmPgCfgLock */ |
+ BIT(6) /* NULL_SEL_CLR_BASE */ | 0 /* PrefetchCtlMsr */
+ );
+ if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
+ kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(2) /* LFENCE Always serializing */;
+ if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
+ kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(6) /* NULL_SEL_CLR_BASE */;
+ kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(9) /* NO_SMM_CTL_MSR */;
+
kvm_cpu_cap_mask(CPUID_C000_0001_EDX,
F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |
F(ACE2) | F(ACE2_EN) | F(PHE) | F(PHE_EN) |
@@ -1222,25 +1232,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
break;
case 0x80000021:
entry->ebx = entry->ecx = entry->edx = 0;
- /*
- * Pass down these bits:
- * EAX 0 NNDBP, Processor ignores nested data breakpoints
- * EAX 2 LAS, LFENCE always serializing
- * EAX 6 NSCB, Null selector clear base
- *
- * Other defined bits are for MSRs that KVM does not expose:
- * EAX 3 SPCL, SMM page configuration lock
- * EAX 13 PCMSR, Prefetch control MSR
- *
- * KVM doesn't support SMM_CTL.
- * EAX 9 SMM_CTL MSR is not supported
- */
- entry->eax &= BIT(0) | BIT(2) | BIT(6);
- entry->eax |= BIT(9);
- if (static_cpu_has(X86_FEATURE_LFENCE_RDTSC))
- entry->eax |= BIT(2);
- if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
- entry->eax |= BIT(6);
+ cpuid_entry_override(entry, CPUID_8000_0021_EAX);
break;
/*Add support for Centaur's CPUID instruction*/
case 0xC0000000:
--
2.34.1


2023-01-23 22:58:23

by Kim Phillips

Subject: [PATCH v8 3/8] x86/cpu, kvm: Add the NO_NESTED_DATA_BP feature

The "Processor ignores nested data breakpoints" feature was being
open-coded for KVM. Add the feature to its newly introduced
CPUID leaf 0x80000021 EAX proper.

Signed-off-by: Kim Phillips <[email protected]>
---
arch/x86/include/asm/cpufeatures.h | 3 +++
arch/x86/kvm/cpuid.c | 2 +-
2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index a84536876794..7f0fb894e432 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -428,6 +428,9 @@
#define X86_FEATURE_V_TSC_AUX (19*32+ 9) /* "" Virtual TSC_AUX */
#define X86_FEATURE_SME_COHERENT (19*32+10) /* "" AMD hardware-enforced cache coherency */

+/* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */
+#define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data Breakpoints */
+
/*
* BUG word(s)
*/
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 3930452bf06e..13bd2769fa5a 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -742,7 +742,7 @@ void kvm_set_cpu_caps(void)
F(SME_COHERENT));

kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
- BIT(0) /* NO_NESTED_DATA_BP */ | 0 /* SmmPgCfgLock */ |
+ F(NO_NESTED_DATA_BP) | 0 /* SmmPgCfgLock */ |
BIT(6) /* NULL_SEL_CLR_BASE */ | 0 /* PrefetchCtlMsr */
);
if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
--
2.34.1


2023-01-23 22:58:26

by Kim Phillips

Subject: [PATCH v8 4/8] x86/cpu, kvm: Move X86_FEATURE_LFENCE_RDTSC to its native leaf

The LFENCE always serializing feature bit was defined as scattered
LFENCE_RDTSC and its native leaf bit position open-coded for KVM.
Add it to its newly added CPUID leaf 0x80000021 EAX proper.

Drop the bit description comments now it's more self-describing.

Also, in init_amd(), don't bother setting DE_CFG[1] when
X86_FEATURE_LFENCE_RDTSC is already set.

Signed-off-by: Kim Phillips <[email protected]>
---
arch/x86/include/asm/cpufeatures.h | 3 ++-
arch/x86/kernel/cpu/amd.c | 2 +-
arch/x86/kvm/cpuid.c | 4 ++--
3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 7f0fb894e432..4f22d828c753 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -97,7 +97,7 @@
#define X86_FEATURE_SYSENTER32 ( 3*32+15) /* "" sysenter in IA32 userspace */
#define X86_FEATURE_REP_GOOD ( 3*32+16) /* REP microcode works well */
#define X86_FEATURE_AMD_LBR_V2 ( 3*32+17) /* AMD Last Branch Record Extension Version 2 */
-#define X86_FEATURE_LFENCE_RDTSC ( 3*32+18) /* "" LFENCE synchronizes RDTSC */
+/* FREE, was #define X86_FEATURE_LFENCE_RDTSC ( 3*32+18) "" LFENCE synchronizes RDTSC */
#define X86_FEATURE_ACC_POWER ( 3*32+19) /* AMD Accumulated Power Mechanism */
#define X86_FEATURE_NOPL ( 3*32+20) /* The NOPL (0F 1F) instructions */
#define X86_FEATURE_ALWAYS ( 3*32+21) /* "" Always-present feature */
@@ -430,6 +430,7 @@

/* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */
#define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data Breakpoints */
+#define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* "" LFENCE always serializing / synchronizes RDTSC */

/*
* BUG word(s)
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index f769d6d08b43..208c2ce8598a 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -956,7 +956,7 @@ static void init_amd(struct cpuinfo_x86 *c)

init_amd_cacheinfo(c);

- if (cpu_has(c, X86_FEATURE_XMM2)) {
+ if (!cpu_has(c, X86_FEATURE_LFENCE_RDTSC) && cpu_has(c, X86_FEATURE_XMM2)) {
/*
* Use LFENCE for execution serialization. On families which
* don't have that MSR, LFENCE is already serializing.
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 13bd2769fa5a..601eeb03ebc9 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -742,11 +742,11 @@ void kvm_set_cpu_caps(void)
F(SME_COHERENT));

kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
- F(NO_NESTED_DATA_BP) | 0 /* SmmPgCfgLock */ |
+ F(NO_NESTED_DATA_BP) | F(LFENCE_RDTSC) | 0 /* SmmPgCfgLock */ |
BIT(6) /* NULL_SEL_CLR_BASE */ | 0 /* PrefetchCtlMsr */
);
if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
- kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(2) /* LFENCE Always serializing */;
+ kvm_cpu_cap_set(X86_FEATURE_LFENCE_RDTSC);
if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(6) /* NULL_SEL_CLR_BASE */;
kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(9) /* NO_SMM_CTL_MSR */;
--
2.34.1


2023-01-23 22:58:32

by Kim Phillips

Subject: [PATCH v8 5/8] x86/cpu, kvm: Add the Null Selector Clears Base feature

The Null Selector Clears Base feature was being open-coded for KVM.
Add it to its newly added native CPUID leaf 0x80000021 EAX proper.

Also drop the bit description comments now it's more self-describing.

[ bp: Convert test in check_null_seg_clears_base() too. ]

Signed-off-by: Kim Phillips <[email protected]>
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/kernel/cpu/common.c | 4 +---
arch/x86/kvm/cpuid.c | 4 ++--
3 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 4f22d828c753..403a534691cc 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -431,6 +431,7 @@
/* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */
#define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data Breakpoints */
#define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* "" LFENCE always serializing / synchronizes RDTSC */
+#define X86_FEATURE_NULL_SEL_CLR_BASE (20*32+ 6) /* "" Null Selector Clears Base */

/*
* BUG word(s)
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 094dbcd63f2a..162352d42ce0 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1685,9 +1685,7 @@ void check_null_seg_clears_base(struct cpuinfo_x86 *c)
if (!IS_ENABLED(CONFIG_X86_64))
return;

- /* Zen3 CPUs advertise Null Selector Clears Base in CPUID. */
- if (c->extended_cpuid_level >= 0x80000021 &&
- cpuid_eax(0x80000021) & BIT(6))
+ if (cpu_has(c, X86_FEATURE_NULL_SEL_CLR_BASE))
return;

/*
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 601eeb03ebc9..f1625a58b5ec 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -743,12 +743,12 @@ void kvm_set_cpu_caps(void)

kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
F(NO_NESTED_DATA_BP) | F(LFENCE_RDTSC) | 0 /* SmmPgCfgLock */ |
- BIT(6) /* NULL_SEL_CLR_BASE */ | 0 /* PrefetchCtlMsr */
+ F(NULL_SEL_CLR_BASE) | 0 /* PrefetchCtlMsr */
);
if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
kvm_cpu_cap_set(X86_FEATURE_LFENCE_RDTSC);
if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
- kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(6) /* NULL_SEL_CLR_BASE */;
+ kvm_cpu_cap_set(X86_FEATURE_NULL_SEL_CLR_BASE);
kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(9) /* NO_SMM_CTL_MSR */;

kvm_cpu_cap_mask(CPUID_C000_0001_EDX,
--
2.34.1


2023-01-23 22:59:05

by Kim Phillips

Subject: [PATCH v8 6/8] x86/cpu, kvm: Add the SMM_CTL MSR not present feature

The SMM_CTL MSR not present feature was being open-coded for KVM.
Add it to its newly added CPUID leaf 0x80000021 EAX proper.

Also drop the bit description comments now the code is more
self-describing.

Signed-off-by: Kim Phillips <[email protected]>
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/kvm/cpuid.c | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 403a534691cc..6dcc3c3d0c8d 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -432,6 +432,7 @@
#define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data Breakpoints */
#define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* "" LFENCE always serializing / synchronizes RDTSC */
#define X86_FEATURE_NULL_SEL_CLR_BASE (20*32+ 6) /* "" Null Selector Clears Base */
+#define X86_FEATURE_NO_SMM_CTL_MSR (20*32+ 9) /* "" SMM_CTL MSR is not present */

/*
* BUG word(s)
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index f1625a58b5ec..9ba75ad9d976 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -749,7 +749,7 @@ void kvm_set_cpu_caps(void)
kvm_cpu_cap_set(X86_FEATURE_LFENCE_RDTSC);
if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
kvm_cpu_cap_set(X86_FEATURE_NULL_SEL_CLR_BASE);
- kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(9) /* NO_SMM_CTL_MSR */;
+ kvm_cpu_cap_set(X86_FEATURE_NO_SMM_CTL_MSR);

kvm_cpu_cap_mask(CPUID_C000_0001_EDX,
F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |
--
2.34.1


2023-01-23 23:00:12

by Kim Phillips

Subject: [PATCH v8 8/8] x86/cpu, kvm: Propagate the AMD Automatic IBRS feature to the guest

Add the AMD Automatic IBRS feature bit to those being propagated to the guest,
and enable the guest EFER bit.

Signed-off-by: Kim Phillips <[email protected]>
---
arch/x86/kvm/cpuid.c | 2 +-
arch/x86/kvm/svm/svm.c | 3 +++
arch/x86/kvm/x86.c | 3 +++
3 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 9ba75ad9d976..293ef07b34c3 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -743,7 +743,7 @@ void kvm_set_cpu_caps(void)

kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
F(NO_NESTED_DATA_BP) | F(LFENCE_RDTSC) | 0 /* SmmPgCfgLock */ |
- F(NULL_SEL_CLR_BASE) | 0 /* PrefetchCtlMsr */
+ F(NULL_SEL_CLR_BASE) | F(AUTOIBRS) | 0 /* PrefetchCtlMsr */
);
if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
kvm_cpu_cap_set(X86_FEATURE_LFENCE_RDTSC);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 9a194aa1a75a..60c7c880266b 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4969,6 +4969,9 @@ static __init int svm_hardware_setup(void)

tsc_aux_uret_slot = kvm_add_user_return_msr(MSR_TSC_AUX);

+ if (boot_cpu_has(X86_FEATURE_AUTOIBRS))
+ kvm_enable_efer_bits(EFER_AUTOIBRS);
+
/* Check for pause filtering support */
if (!boot_cpu_has(X86_FEATURE_PAUSEFILTER)) {
pause_filter_count = 0;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index da4bbd043a7b..8dd0cb230ef5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1685,6 +1685,9 @@ static int do_get_msr_feature(struct kvm_vcpu *vcpu, unsigned index, u64 *data)

static bool __kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer)
{
+ if (efer & EFER_AUTOIBRS && !guest_cpuid_has(vcpu, X86_FEATURE_AUTOIBRS))
+ return false;
+
if (efer & EFER_FFXSR && !guest_cpuid_has(vcpu, X86_FEATURE_FXSR_OPT))
return false;

--
2.34.1


2023-01-23 23:00:15

by Kim Phillips

Subject: [PATCH v8 7/8] x86/cpu: Support AMD Automatic IBRS

The AMD Zen4 core supports a new feature called Automatic IBRS.

It is a "set-and-forget" feature that means that, like Intel's Enhanced IBRS,
h/w manages its IBRS mitigation resources automatically across CPL transitions.

The feature is advertised by CPUID_Fn80000021_EAX bit 8 and is enabled by
setting MSR C000_0080 (EFER) bit 21.

Enable Automatic IBRS by default if the CPU feature is present. It typically
provides greater performance over the incumbent generic retpolines mitigation.

Reuse the SPECTRE_V2_EIBRS spectre_v2_mitigation enum. AMD Automatic IBRS and
Intel Enhanced IBRS have similar enablement. Add NO_EIBRS_PBRSB to
cpu_vuln_whitelist, since AMD Automatic IBRS isn't affected by PBRSB-eIBRS.

The kernel command line option spectre_v2=eibrs is used to select AMD Automatic
IBRS, if available.
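
As a minimal sketch only, and not part of the diff below: how the
advertisement and enablement bits described above relate, expressed with
existing kernel helpers (cpuid_eax(), BIT(), msr_set_bit()); the patch itself
keys off X86_FEATURE_AUTOIBRS in spectre_v2_select_mitigation() rather than
raw CPUID.

        /* AutoIBRS advertised in CPUID 0x80000021 EAX[8]? Then set EFER[21]. */
        if (cpuid_eax(0x80000000) >= 0x80000021 &&
            (cpuid_eax(0x80000021) & BIT(8)))
                msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);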

Signed-off-by: Kim Phillips <[email protected]>
Acked-by: Dave Hansen <[email protected]>
---
Documentation/admin-guide/hw-vuln/spectre.rst | 6 +++---
.../admin-guide/kernel-parameters.txt | 6 +++---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/msr-index.h | 2 ++
arch/x86/kernel/cpu/bugs.c | 20 +++++++++++--------
arch/x86/kernel/cpu/common.c | 19 ++++++++++--------
6 files changed, 32 insertions(+), 22 deletions(-)

diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
index c4dcdb3d0d45..3fe6511c5405 100644
--- a/Documentation/admin-guide/hw-vuln/spectre.rst
+++ b/Documentation/admin-guide/hw-vuln/spectre.rst
@@ -610,9 +610,9 @@ kernel command line.
retpoline,generic Retpolines
retpoline,lfence LFENCE; indirect branch
retpoline,amd alias for retpoline,lfence
- eibrs enhanced IBRS
- eibrs,retpoline enhanced IBRS + Retpolines
- eibrs,lfence enhanced IBRS + LFENCE
+ eibrs Enhanced/Auto IBRS
+ eibrs,retpoline Enhanced/Auto IBRS + Retpolines
+ eibrs,lfence Enhanced/Auto IBRS + LFENCE
ibrs use IBRS to protect kernel

Not specifying this option is equivalent to
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 6cfa6e3996cf..839fa0fefb58 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5729,9 +5729,9 @@
retpoline,generic - Retpolines
retpoline,lfence - LFENCE; indirect branch
retpoline,amd - alias for retpoline,lfence
- eibrs - enhanced IBRS
- eibrs,retpoline - enhanced IBRS + Retpolines
- eibrs,lfence - enhanced IBRS + LFENCE
+ eibrs - Enhanced/Auto IBRS
+ eibrs,retpoline - Enhanced/Auto IBRS + Retpolines
+ eibrs,lfence - Enhanced/Auto IBRS + LFENCE
ibrs - use IBRS to protect kernel

Not specifying this option is equivalent to
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 6dcc3c3d0c8d..7b319acda31a 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -432,6 +432,7 @@
#define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data Breakpoints */
#define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* "" LFENCE always serializing / synchronizes RDTSC */
#define X86_FEATURE_NULL_SEL_CLR_BASE (20*32+ 6) /* "" Null Selector Clears Base */
+#define X86_FEATURE_AUTOIBRS (20*32+ 8) /* "" Automatic IBRS */
#define X86_FEATURE_NO_SMM_CTL_MSR (20*32+ 9) /* "" SMM_CTL MSR is not present */

/*
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 47878f048aa6..b78336599247 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -25,6 +25,7 @@
#define _EFER_SVME 12 /* Enable virtualization */
#define _EFER_LMSLE 13 /* Long Mode Segment Limit Enable */
#define _EFER_FFXSR 14 /* Enable Fast FXSAVE/FXRSTOR */
+#define _EFER_AUTOIBRS 21 /* Enable Automatic IBRS */

#define EFER_SCE (1<<_EFER_SCE)
#define EFER_LME (1<<_EFER_LME)
@@ -33,6 +34,7 @@
#define EFER_SVME (1<<_EFER_SVME)
#define EFER_LMSLE (1<<_EFER_LMSLE)
#define EFER_FFXSR (1<<_EFER_FFXSR)
+#define EFER_AUTOIBRS (1<<_EFER_AUTOIBRS)

/* Intel MSRs. Some also available on other CPUs */

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 4a0add86c182..cf81848b72f4 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1238,9 +1238,9 @@ static const char * const spectre_v2_strings[] = {
[SPECTRE_V2_NONE] = "Vulnerable",
[SPECTRE_V2_RETPOLINE] = "Mitigation: Retpolines",
[SPECTRE_V2_LFENCE] = "Mitigation: LFENCE",
- [SPECTRE_V2_EIBRS] = "Mitigation: Enhanced IBRS",
- [SPECTRE_V2_EIBRS_LFENCE] = "Mitigation: Enhanced IBRS + LFENCE",
- [SPECTRE_V2_EIBRS_RETPOLINE] = "Mitigation: Enhanced IBRS + Retpolines",
+ [SPECTRE_V2_EIBRS] = "Mitigation: Enhanced / Automatic IBRS",
+ [SPECTRE_V2_EIBRS_LFENCE] = "Mitigation: Enhanced / Automatic IBRS + LFENCE",
+ [SPECTRE_V2_EIBRS_RETPOLINE] = "Mitigation: Enhanced / Automatic IBRS + Retpolines",
[SPECTRE_V2_IBRS] = "Mitigation: IBRS",
};

@@ -1309,7 +1309,7 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
cmd == SPECTRE_V2_CMD_EIBRS_LFENCE ||
cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) &&
!boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
- pr_err("%s selected but CPU doesn't have eIBRS. Switching to AUTO select\n",
+ pr_err("%s selected but CPU doesn't have Enhanced or Automatic IBRS. Switching to AUTO select\n",
mitigation_options[i].option);
return SPECTRE_V2_CMD_AUTO;
}
@@ -1495,8 +1495,12 @@ static void __init spectre_v2_select_mitigation(void)
pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);

if (spectre_v2_in_ibrs_mode(mode)) {
- x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
- update_spec_ctrl(x86_spec_ctrl_base);
+ if (boot_cpu_has(X86_FEATURE_AUTOIBRS)) {
+ msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);
+ } else {
+ x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
+ update_spec_ctrl(x86_spec_ctrl_base);
+ }
}

switch (mode) {
@@ -1580,8 +1584,8 @@ static void __init spectre_v2_select_mitigation(void)
/*
* Retpoline protects the kernel, but doesn't protect firmware. IBRS
* and Enhanced IBRS protect firmware too, so enable IBRS around
- * firmware calls only when IBRS / Enhanced IBRS aren't otherwise
- * enabled.
+ * firmware calls only when IBRS / Enhanced / Automatic IBRS aren't
+ * otherwise enabled.
*
* Use "mode" to check Enhanced IBRS instead of boot_cpu_has(), because
* the user might select retpoline on the kernel command line and if
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 162352d42ce0..8ce67a8a61a6 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1229,8 +1229,8 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
VULNWL_AMD(0x12, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),

/* FAMILY_ANY must be last, otherwise 0x0f - 0x12 matches won't work */
- VULNWL_AMD(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
- VULNWL_HYGON(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
+ VULNWL_AMD(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB),
+ VULNWL_HYGON(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB),

/* Zhaoxin Family 7 */
VULNWL(CENTAUR, 7, X86_MODEL_ANY, NO_SPECTRE_V2 | NO_SWAPGS | NO_MMIO),
@@ -1341,8 +1341,16 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
!cpu_has(c, X86_FEATURE_AMD_SSB_NO))
setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);

- if (ia32_cap & ARCH_CAP_IBRS_ALL)
+ /*
+ * AMD's AutoIBRS is equivalent to Intel's eIBRS - use the Intel feature
+ * flag and protect from vendor-specific bugs via the whitelist.
+ */
+ if ((ia32_cap & ARCH_CAP_IBRS_ALL) || cpu_has(c, X86_FEATURE_AUTOIBRS)) {
setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
+ if (!cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB) &&
+ !(ia32_cap & ARCH_CAP_PBRSB_NO))
+ setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
+ }

if (!cpu_matches(cpu_vuln_whitelist, NO_MDS) &&
!(ia32_cap & ARCH_CAP_MDS_NO)) {
@@ -1404,11 +1412,6 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
setup_force_cpu_bug(X86_BUG_RETBLEED);
}

- if (cpu_has(c, X86_FEATURE_IBRS_ENHANCED) &&
- !cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB) &&
- !(ia32_cap & ARCH_CAP_PBRSB_NO))
- setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
-
if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
return;

--
2.34.1


2023-01-23 23:54:27

by Sean Christopherson

Subject: Re: [PATCH v8 1/8] x86/cpu, kvm: Add support for CPUID_80000021_EAX

On Mon, Jan 23, 2023, Kim Phillips wrote:
> Add support for CPUID leaf 80000021, EAX. The majority of the features will be
> used in the kernel and thus a separate leaf is appropriate.
>
> Include KVM's reverse_cpuid entry because features are used by VM guests, too.

Nit, "KVM guests", not "VM guests".

2023-01-24 01:15:00

by Sean Christopherson

Subject: Re: [PATCH v8 2/8] x86/cpu, kvm: Move open-coded cpuid leaf 0x80000021 EAX bit propagation code

Nit, shortlog for this should be

KVM: x86:

since this touches only KVM code.

On Mon, Jan 23, 2023, Kim Phillips wrote:
> Move code from __do_cpuid_func() to kvm_set_cpu_caps() in preparation
> for adding the features in their native leaf.

Huh, this wasn't what I was expecting, but this is better than what I had in mind.
Moving everything all at once wouldn't work well because of the kernel dependencies.

> Also drop the bit description comments as it will be more self-
> describing once the individual features are added.
>
> Whilst there, switch to using the more efficient cpu_feature_enabled()
> instead of static_cpu_has().

One more nit/request. Can you add a blurb about the synthetic features? That
part is easy to miss and will be confusing after the fact. E.g.

Note, LFENCE_RDTSC and "NULL selector clears base" are currently
synthetic, Linux-defined feature flags as Linux tracking of the features
predates AMD's definition. Keep the manual propagation of the flags from
their synthetic counterparts until the kernel fully converts to AMD's
definition, otherwise KVM would stop synthesizing the flags as intended.

> Signed-off-by: Kim Phillips <[email protected]>
> ---
> arch/x86/kvm/cpuid.c | 30 +++++++++++-------------------
> 1 file changed, 11 insertions(+), 19 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 596061c1610e..3930452bf06e 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -741,6 +741,16 @@ void kvm_set_cpu_caps(void)
> 0 /* SME */ | F(SEV) | 0 /* VM_PAGE_FLUSH */ | F(SEV_ES) |
> F(SME_COHERENT));
>
> + kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
> + BIT(0) /* NO_NESTED_DATA_BP */ | 0 /* SmmPgCfgLock */ |

Uber nit, to make this a bit closer to pure code movement, this should include
BIT(2) as well. Mainly because BIT(6) is also kept even though it too may be
synthesized by KVM.

> + BIT(6) /* NULL_SEL_CLR_BASE */ | 0 /* PrefetchCtlMsr */
> + );
> + if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
> + kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(2) /* LFENCE Always serializing */;
> + if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
> + kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(6) /* NULL_SEL_CLR_BASE */;
> + kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(9) /* NO_SMM_CTL_MSR */;
> +
> kvm_cpu_cap_mask(CPUID_C000_0001_EDX,
> F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |
> F(ACE2) | F(ACE2_EN) | F(PHE) | F(PHE_EN) |

2023-01-24 01:18:00

by Sean Christopherson

Subject: Re: [PATCH v8 4/8] x86/cpu, kvm: Move X86_FEATURE_LFENCE_RDTSC to its native leaf

On Mon, Jan 23, 2023, Kim Phillips wrote:
> The LFENCE always serializing feature bit was defined as scattered
> LFENCE_RDTSC and its native leaf bit position open-coded for KVM.
> Add it to its newly added CPUID leaf 0x80000021 EAX proper.
>
> Drop the bit description comments now it's more self-describing.
>
> Also, in init_amd(), don't bother setting DE_CFG[1] when
> X86_FEATURE_LFENCE_RDTSC is already set.
>
> Signed-off-by: Kim Phillips <[email protected]>
> ---
> arch/x86/include/asm/cpufeatures.h | 3 ++-
> arch/x86/kernel/cpu/amd.c | 2 +-
> arch/x86/kvm/cpuid.c | 4 ++--
> 3 files changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index 7f0fb894e432..4f22d828c753 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -97,7 +97,7 @@
> #define X86_FEATURE_SYSENTER32 ( 3*32+15) /* "" sysenter in IA32 userspace */
> #define X86_FEATURE_REP_GOOD ( 3*32+16) /* REP microcode works well */
> #define X86_FEATURE_AMD_LBR_V2 ( 3*32+17) /* AMD Last Branch Record Extension Version 2 */
> -#define X86_FEATURE_LFENCE_RDTSC ( 3*32+18) /* "" LFENCE synchronizes RDTSC */
> +/* FREE, was #define X86_FEATURE_LFENCE_RDTSC ( 3*32+18) "" LFENCE synchronizes RDTSC */
> #define X86_FEATURE_ACC_POWER ( 3*32+19) /* AMD Accumulated Power Mechanism */
> #define X86_FEATURE_NOPL ( 3*32+20) /* The NOPL (0F 1F) instructions */
> #define X86_FEATURE_ALWAYS ( 3*32+21) /* "" Always-present feature */
> @@ -430,6 +430,7 @@
>
> /* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */
> #define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data Breakpoints */
> +#define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* "" LFENCE always serializing / synchronizes RDTSC */
>
> /*
> * BUG word(s)
> diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
> index f769d6d08b43..208c2ce8598a 100644
> --- a/arch/x86/kernel/cpu/amd.c
> +++ b/arch/x86/kernel/cpu/amd.c
> @@ -956,7 +956,7 @@ static void init_amd(struct cpuinfo_x86 *c)
>
> init_amd_cacheinfo(c);
>
> - if (cpu_has(c, X86_FEATURE_XMM2)) {
> + if (!cpu_has(c, X86_FEATURE_LFENCE_RDTSC) && cpu_has(c, X86_FEATURE_XMM2)) {
> /*
> * Use LFENCE for execution serialization. On families which
> * don't have that MSR, LFENCE is already serializing.
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 13bd2769fa5a..601eeb03ebc9 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -742,11 +742,11 @@ void kvm_set_cpu_caps(void)
> F(SME_COHERENT));
>
> kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
> - F(NO_NESTED_DATA_BP) | 0 /* SmmPgCfgLock */ |
> + F(NO_NESTED_DATA_BP) | F(LFENCE_RDTSC) | 0 /* SmmPgCfgLock */ |
> BIT(6) /* NULL_SEL_CLR_BASE */ | 0 /* PrefetchCtlMsr */
> );
> if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
> - kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(2) /* LFENCE Always serializing */;
> + kvm_cpu_cap_set(X86_FEATURE_LFENCE_RDTSC);

Now that LFENCE_RDTSC is in its proper place, the kernel's set_cpu_cap() will
effectively synthesize the feature for KVM. I.e. this patch can simply delete the
synthesis of the feature.

> if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
> kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(6) /* NULL_SEL_CLR_BASE */;
> kvm_cpu_caps[CPUID_8000_0021_EAX] |= BIT(9) /* NO_SMM_CTL_MSR */;
> --
> 2.34.1
>

2023-01-24 01:19:08

by Sean Christopherson

Subject: Re: [PATCH v8 0/8] x86/cpu, kvm: Support AMD Automatic IBRS

On Mon, Jan 23, 2023, Kim Phillips wrote:
> The AMD Zen4 core supports a new feature called Automatic IBRS
> (Indirect Branch Restricted Speculation).
>
> Enable Automatic IBRS by default if the CPU feature is present.
> It typically provides greater performance over the incumbent
> generic retpolines mitigation.
>
> Patch 1 [unchanged from v7] Adds support for the leaf that
> contains the AutoIBRS feature bit.
>
> Patch 2 moves the leaf's open-coded code from __do_cpuid_func()
> to kvm_set_cpu_caps() in preparation for adding the features in
> their native leaf.
>
> Patches 3-6 introduce the new leaf's supported bits one by one.
>
> Patch 7 [unchanged from v7] Adds support for AutoIBRS by turning
> its EFER enablement bit on at startup if the feature is available.
>
> Patch 8 [unchanged from v7] Adds support for propagating AutoIBRS
> to the guest.

A few nits, but otherwise looks good. Thanks!