2023-08-21 06:22:06

by Josh Poimboeuf

Subject: [PATCH 00/22] SRSO fixes/cleanups

Here are several SRSO fixes and cleanups, based on tip/x86/urgent.

One of the patches also adds KVM support, though a corresponding patch
is still needed in QEMU. I have a working QEMU patch which I can post
to qemu-devel.

Josh Poimboeuf (22):
x86/srso: Fix srso_show_state() side effect
x86/srso: Set CPUID feature bits independently of bug or mitigation status
KVM: x86: Support IBPB_BRTYPE and SBPB
x86/srso: Fix SBPB enablement for spec_rstack_overflow=off
x86/srso: Fix SBPB enablement for mitigations=off
x86/srso: Print actual mitigation if requested mitigation isn't possible
x86/srso: Remove default case in srso_select_mitigation()
x86/srso: Downgrade retbleed IBPB warning to informational message
x86/srso: Simplify exit paths
x86/srso: Print mitigation for retbleed IBPB case
x86/srso: Slight simplification
x86/srso: Remove redundant X86_FEATURE_ENTRY_IBPB check
x86/srso: Fix vulnerability reporting for missing microcode
x86/srso: Fix unret validation dependencies
x86/alternatives: Remove faulty optimization
x86/srso: Unexport untraining functions
x86/srso: Disentangle rethunk-dependent options
x86/rethunk: Use SYM_CODE_START[_LOCAL]_NOALIGN macros
x86/srso: Improve i-cache locality for alias mitigation
x86/retpoline: Remove .text..__x86.return_thunk section
x86/nospec: Refactor UNTRAIN_RET[_*]
x86/calldepth: Rename __x86_return_skl() to call_depth_return_thunk()

 Documentation/admin-guide/hw-vuln/srso.rst |  22 ++-
 arch/x86/include/asm/nospec-branch.h       |  69 ++++-----
 arch/x86/include/asm/processor.h           |   2 -
 arch/x86/kernel/alternative.c              |   8 -
 arch/x86/kernel/cpu/amd.c                  |  28 ++--
 arch/x86/kernel/cpu/bugs.c                 |  87 +++++------
 arch/x86/kernel/vmlinux.lds.S              |  10 +-
 arch/x86/kvm/cpuid.c                       |   4 +
 arch/x86/kvm/x86.c                         |   9 +-
 arch/x86/lib/retpoline.S                   | 171 +++++++++++----------
 include/linux/objtool.h                    |   3 +-
 scripts/Makefile.vmlinux_o                 |   3 +-
 12 files changed, 199 insertions(+), 217 deletions(-)

--
2.41.0



2023-08-21 06:34:12

by Josh Poimboeuf

Subject: [PATCH 13/22] x86/srso: Fix vulnerability reporting for missing microcode

The SRSO default safe-ret mitigation is reported as "mitigated" even if
microcode hasn't been updated. That's wrong because userspace may still
be vulnerable to SRSO attacks due to IBPB not flushing branch type
predictions.

Report the safe-ret + !microcode case as vulnerable.

Also report the microcode-only case as vulnerable as it leaves the
kernel open to attacks.
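
For example, with the default safe-ret mitigation but without updated
microcode, the sysfs file would now read (illustrative output):

  # cat /sys/devices/system/cpu/vulnerabilities/spec_rstack_overflow
  Vulnerable: Safe RET, no microcode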

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Josh Poimboeuf <[email protected]>
---
 Documentation/admin-guide/hw-vuln/srso.rst | 22 +++++++++----
 arch/x86/kernel/cpu/bugs.c                 | 36 +++++++++++++---------
 2 files changed, 38 insertions(+), 20 deletions(-)

diff --git a/Documentation/admin-guide/hw-vuln/srso.rst b/Documentation/admin-guide/hw-vuln/srso.rst
index b6cfb51cb0b4..4516719e00b5 100644
--- a/Documentation/admin-guide/hw-vuln/srso.rst
+++ b/Documentation/admin-guide/hw-vuln/srso.rst
@@ -46,12 +46,22 @@ The possible values in this file are:

The processor is not vulnerable

- * 'Vulnerable: no microcode':
+ * 'Vulnerable':
+
+ The processor is vulnerable and no mitigations have been applied.
+
+ * 'Vulnerable: No microcode':

The processor is vulnerable, no microcode extending IBPB
functionality to address the vulnerability has been applied.

- * 'Mitigation: microcode':
+ * 'Vulnerable: Safe RET, no microcode':
+
+ The "Safe Ret" mitigation (see below) has been applied to protect the
+ kernel, but the IBPB-extending microcode has not been applied. User
+ space tasks may still be vulnerable.
+
+ * 'Vulnerable: Microcode, no safe RET':

Extended IBPB functionality microcode patch has been applied. It does
not address User->Kernel and Guest->Host transitions protection but it
@@ -72,11 +82,11 @@ The possible values in this file are:

(spec_rstack_overflow=microcode)

- * 'Mitigation: safe RET':
+ * 'Mitigation: Safe RET':

- Software-only mitigation. It complements the extended IBPB microcode
- patch functionality by addressing User->Kernel and Guest->Host
- transitions protection.
+ Combined microcode/software mitigation. It complements the
+ extended IBPB microcode patch functionality by addressing
+ User->Kernel and Guest->Host transitions protection.

Selected by default or by spec_rstack_overflow=safe-ret

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index aeddd5ce9f34..f24c0f7e3e8a 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2353,6 +2353,8 @@ early_param("l1tf", l1tf_cmdline);

enum srso_mitigation {
SRSO_MITIGATION_NONE,
+ SRSO_MITIGATION_UCODE_NEEDED,
+ SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED,
SRSO_MITIGATION_MICROCODE,
SRSO_MITIGATION_SAFE_RET,
SRSO_MITIGATION_IBPB,
@@ -2368,11 +2370,13 @@ enum srso_mitigation_cmd {
};

static const char * const srso_strings[] = {
- [SRSO_MITIGATION_NONE] = "Vulnerable",
- [SRSO_MITIGATION_MICROCODE] = "Mitigation: microcode",
- [SRSO_MITIGATION_SAFE_RET] = "Mitigation: safe RET",
- [SRSO_MITIGATION_IBPB] = "Mitigation: IBPB",
- [SRSO_MITIGATION_IBPB_ON_VMEXIT] = "Mitigation: IBPB on VMEXIT only"
+ [SRSO_MITIGATION_NONE] = "Vulnerable",
+ [SRSO_MITIGATION_UCODE_NEEDED] = "Vulnerable: No microcode",
+ [SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED] = "Vulnerable: Safe RET, no microcode",
+ [SRSO_MITIGATION_MICROCODE] = "Vulnerable: Microcode, no safe RET",
+ [SRSO_MITIGATION_SAFE_RET] = "Mitigation: Safe RET",
+ [SRSO_MITIGATION_IBPB] = "Mitigation: IBPB",
+ [SRSO_MITIGATION_IBPB_ON_VMEXIT] = "Mitigation: IBPB on VMEXIT only"
};

static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_NONE;
@@ -2406,13 +2410,10 @@ static void __init srso_select_mitigation(void)
{
bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);

- if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
+ if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off() || srso_cmd == SRSO_CMD_OFF)
goto out;

- if (!has_microcode) {
- pr_warn("IBPB-extending microcode not applied!\n");
- pr_warn(SRSO_NOTICE);
- } else {
+ if (has_microcode) {
/*
* Zen1/2 with SMT off aren't vulnerable after the right
* IBPB microcode has been applied.
@@ -2427,6 +2428,12 @@ static void __init srso_select_mitigation(void)
srso_mitigation = SRSO_MITIGATION_IBPB;
goto out_print;
}
+ } else {
+ pr_warn("IBPB-extending microcode not applied!\n");
+ pr_warn(SRSO_NOTICE);
+
+ /* may be overwritten by SRSO_CMD_SAFE_RET below */
+ srso_mitigation = SRSO_MITIGATION_UCODE_NEEDED;
}

switch (srso_cmd) {
@@ -2456,7 +2463,10 @@ static void __init srso_select_mitigation(void)
setup_force_cpu_cap(X86_FEATURE_SRSO);
x86_return_thunk = srso_return_thunk;
}
- srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+ if (has_microcode)
+ srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+ else
+ srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
} else {
pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
}
@@ -2696,9 +2706,7 @@ static ssize_t srso_show_state(char *buf)
if (boot_cpu_has(X86_FEATURE_SRSO_NO))
return sysfs_emit(buf, "Mitigation: SMT disabled\n");

- return sysfs_emit(buf, "%s%s\n",
- srso_strings[srso_mitigation],
- boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) ? "" : ", no microcode");
+ return sysfs_emit(buf, "%s\n", srso_strings[srso_mitigation]);
}

static ssize_t gds_show_state(char *buf)
--
2.41.0


2023-08-21 06:46:33

by Josh Poimboeuf

Subject: [PATCH 07/22] x86/srso: Remove default case in srso_select_mitigation()

Remove the default case so a compiler warning gets printed if we forget
one of the enums.
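
For reference, a minimal sketch of the warning this re-enables (gcc's
-Wswitch, which -Wall turns on; the example enum is hypothetical):

	enum fruit { APPLE, ORANGE };

	void eat(enum fruit f)
	{
		switch (f) {
		case APPLE:
			break;
		}	/* warning: enumeration value 'ORANGE' not handled in switch */
	}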

Signed-off-by: Josh Poimboeuf <[email protected]>
---
arch/x86/kernel/cpu/bugs.c | 3 ---
1 file changed, 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 579e06655613..cda4b5e6a362 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2485,9 +2485,6 @@ static void __init srso_select_mitigation(void)
pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
}
break;
-
- default:
- break;
}

pr_info("%s%s\n", srso_strings[srso_mitigation], (has_microcode ? "" : ", no microcode"));
--
2.41.0


2023-08-21 08:09:25

by Josh Poimboeuf

Subject: [PATCH 03/22] KVM: x86: Support IBPB_BRTYPE and SBPB

The IBPB_BRTYPE and SBPB CPUID bits aren't set by HW.

From the AMD SRSO whitepaper:

"Hypervisor software should synthesize the value of both the
IBPB_BRTYPE and SBPB CPUID bits on these platforms for use by guest
software."

These bits are already set during kernel boot. Manually propagate them
to the guest.

Also, propagate PRED_CMD_SBPB writes.

Signed-off-by: Josh Poimboeuf <[email protected]>
---
 arch/x86/kvm/cpuid.c | 4 ++++
 arch/x86/kvm/x86.c   | 9 +++++----
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index d3432687c9e6..cdf703eec42d 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -729,6 +729,10 @@ void kvm_set_cpu_caps(void)
F(NULL_SEL_CLR_BASE) | F(AUTOIBRS) | 0 /* PrefetchCtlMsr */
);

+ if (cpu_feature_enabled(X86_FEATURE_SBPB))
+ kvm_cpu_cap_set(X86_FEATURE_SBPB);
+ if (cpu_feature_enabled(X86_FEATURE_IBPB_BRTYPE))
+ kvm_cpu_cap_set(X86_FEATURE_IBPB_BRTYPE);
if (cpu_feature_enabled(X86_FEATURE_SRSO_NO))
kvm_cpu_cap_set(X86_FEATURE_SRSO_NO);

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c381770bcbf1..dd7472121142 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3676,12 +3676,13 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
if (!msr_info->host_initiated && !guest_has_pred_cmd_msr(vcpu))
return 1;

- if (!boot_cpu_has(X86_FEATURE_IBPB) || (data & ~PRED_CMD_IBPB))
+ if (boot_cpu_has(X86_FEATURE_IBPB) && data == PRED_CMD_IBPB)
+ wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
+ else if (boot_cpu_has(X86_FEATURE_SBPB) && data == PRED_CMD_SBPB)
+ wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_SBPB);
+ else if (data)
return 1;
- if (!data)
- break;

- wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
break;
case MSR_IA32_FLUSH_CMD:
if (!msr_info->host_initiated &&
--
2.41.0


2023-08-21 09:27:00

by Josh Poimboeuf

Subject: [PATCH 08/22] x86/srso: Downgrade retbleed IBPB warning to informational message

This warning is nothing to get excited over. Downgrade to pr_info().

Signed-off-by: Josh Poimboeuf <[email protected]>
---
arch/x86/kernel/cpu/bugs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index cda4b5e6a362..e59e09babf8f 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2425,7 +2425,7 @@ static void __init srso_select_mitigation(void)

if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
if (has_microcode) {
- pr_err("Retbleed IBPB mitigation enabled, using same for SRSO\n");
+ pr_info("Retbleed IBPB mitigation enabled, using same for SRSO\n");
srso_mitigation = SRSO_MITIGATION_IBPB;
goto pred_cmd;
}
--
2.41.0


2023-08-21 10:20:22

by Josh Poimboeuf

Subject: [PATCH 05/22] x86/srso: Fix SBPB enablement for mitigations=off

If the user has requested no mitigations with mitigations=off, let other
mitigations use the lighter-weight SBPB instead of IBPB.

Note that even with mitigations=off, IBPB/SBPB may still be used for
Spectre v2 user <-> user protection. Whether that makes sense is a
question for another day.

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Josh Poimboeuf <[email protected]>
---
arch/x86/kernel/cpu/bugs.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 10499bcd4e39..ff5bfe8f0ee9 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2496,8 +2496,7 @@ static void __init srso_select_mitigation(void)
pr_info("%s%s\n", srso_strings[srso_mitigation], (has_microcode ? "" : ", no microcode"));

pred_cmd:
- if ((boot_cpu_has(X86_FEATURE_SRSO_NO) || srso_cmd == SRSO_CMD_OFF) &&
- boot_cpu_has(X86_FEATURE_SBPB))
+ if (boot_cpu_has(X86_FEATURE_SBPB) && srso_mitigation == SRSO_MITIGATION_NONE)
x86_pred_cmd = PRED_CMD_SBPB;
}

--
2.41.0


2023-08-21 10:57:24

by Josh Poimboeuf

Subject: [PATCH 20/22] x86/retpoline: Remove .text..__x86.return_thunk section

The '.text..__x86.return_thunk' section has no purpose. Remove it and
let the return thunk code live in '.text..__x86.indirect_thunk'.

Signed-off-by: Josh Poimboeuf <[email protected]>
---
 arch/x86/kernel/vmlinux.lds.S | 3 ---
 arch/x86/lib/retpoline.S      | 2 --
 2 files changed, 5 deletions(-)

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 9188834e56c9..f1c3516d356d 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -132,10 +132,7 @@ SECTIONS
LOCK_TEXT
KPROBES_TEXT
SOFTIRQENTRY_TEXT
-#ifdef CONFIG_RETPOLINE
*(.text..__x86.indirect_thunk)
- *(.text..__x86.return_thunk)
-#endif
STATIC_CALL_TEXT

ALIGN_ENTRY_TEXT_BEGIN
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 415521dbe15e..49f2be7c7b35 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -129,8 +129,6 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)

#ifdef CONFIG_RETHUNK

- .section .text..__x86.return_thunk
-
#ifdef CONFIG_CPU_SRSO

/*
--
2.41.0


2023-08-21 13:42:12

by Josh Poimboeuf

Subject: [PATCH 11/22] x86/srso: Slight simplification

Simplify the code flow a bit by moving the retbleed IBPB check into the
existing 'has_microcode' block.

Signed-off-by: Josh Poimboeuf <[email protected]>
---
arch/x86/kernel/cpu/bugs.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 4e332707a343..b27aeb86ed7a 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2421,10 +2421,8 @@ static void __init srso_select_mitigation(void)
setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
goto out;
}
- }

- if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
- if (has_microcode) {
+ if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
pr_info("Retbleed IBPB mitigation enabled, using same for SRSO\n");
srso_mitigation = SRSO_MITIGATION_IBPB;
goto out_print;
--
2.41.0


2023-08-21 16:48:34

by Josh Poimboeuf

Subject: Re: [PATCH 03/22] KVM: x86: Support IBPB_BRTYPE and SBPB

On Mon, Aug 21, 2023 at 10:34:38AM +0100, Andrew Cooper wrote:
> On 21/08/2023 2:19 am, Josh Poimboeuf wrote:
> > The IBPB_BRTYPE and SBPB CPUID bits aren't set by HW.
>
> "Current hardware".
>
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index c381770bcbf1..dd7472121142 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -3676,12 +3676,13 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> > if (!msr_info->host_initiated && !guest_has_pred_cmd_msr(vcpu))
> > return 1;
> >
> > - if (!boot_cpu_has(X86_FEATURE_IBPB) || (data & ~PRED_CMD_IBPB))
> > + if (boot_cpu_has(X86_FEATURE_IBPB) && data == PRED_CMD_IBPB)
> > + wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
> > + else if (boot_cpu_has(X86_FEATURE_SBPB) && data == PRED_CMD_SBPB)
> > + wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_SBPB);
> > + else if (data)
> > return 1;
>
> SBPB | IBPB is an explicitly permitted combination, but will be rejected
> by this logic.

Ah yes, I see that now:

  If software writes PRED_CMD with both bits 0 and 7 set to 1, the
  processor performs an IBPB operation.
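
So the check should treat bits 0 and 7 as a mask rather than comparing
exact values. Something like this, maybe (untested sketch, reusing the
existing PRED_CMD_* defines and feature checks):

	if (data & ~(PRED_CMD_IBPB | PRED_CMD_SBPB))
		return 1;

	if (data & PRED_CMD_IBPB) {
		/* Bits 0+7 together still perform an IBPB. */
		if (!boot_cpu_has(X86_FEATURE_IBPB))
			return 1;
		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
	} else if (data & PRED_CMD_SBPB) {
		if (!boot_cpu_has(X86_FEATURE_SBPB))
			return 1;
		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_SBPB);
	}
	break;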

--
Josh

2023-08-21 17:28:36

by Josh Poimboeuf

Subject: [PATCH 02/22] x86/srso: Set CPUID feature bits independently of bug or mitigation status

Booting with mitigations=off incorrectly prevents the
X86_FEATURE_{IBPB_BRTYPE,SBPB} CPUID bits from getting set.

Also, future CPUs without X86_BUG_SRSO might still have IBPB with branch
type prediction flushing, in which case SBPB should be used instead of
IBPB. The current code doesn't allow for that.

Also, cpu_has_ibpb_brtype_microcode() has some surprising side effects,
and the setting of these feature bits really doesn't belong in the
mitigation code anyway. Move it earlier, into the AMD CPU init code.

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Josh Poimboeuf <[email protected]>
---
 arch/x86/include/asm/processor.h |  2 --
 arch/x86/kernel/cpu/amd.c        | 28 +++++++++-------------------
 arch/x86/kernel/cpu/bugs.c       | 13 +------------
 3 files changed, 10 insertions(+), 33 deletions(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index fd750247ca89..9e26294e415c 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -676,12 +676,10 @@ extern u16 get_llc_id(unsigned int cpu);
#ifdef CONFIG_CPU_SUP_AMD
extern u32 amd_get_nodes_per_socket(void);
extern u32 amd_get_highest_perf(void);
-extern bool cpu_has_ibpb_brtype_microcode(void);
extern void amd_clear_divider(void);
#else
static inline u32 amd_get_nodes_per_socket(void) { return 0; }
static inline u32 amd_get_highest_perf(void) { return 0; }
-static inline bool cpu_has_ibpb_brtype_microcode(void) { return false; }
static inline void amd_clear_divider(void) { }
#endif

diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 7eca6a8abbb1..b08af929135d 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -766,6 +766,15 @@ static void early_init_amd(struct cpuinfo_x86 *c)

if (cpu_has(c, X86_FEATURE_TOPOEXT))
smp_num_siblings = ((cpuid_ebx(0x8000001e) >> 8) & 0xff) + 1;
+
+ if (!cpu_has(c, X86_FEATURE_IBPB_BRTYPE)) {
+ if (c->x86 == 0x17 && boot_cpu_has(X86_FEATURE_AMD_IBPB))
+ setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
+ else if (c->x86 >= 0x19 && !wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB)) {
+ setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
+ setup_force_cpu_cap(X86_FEATURE_SBPB);
+ }
+ }
}

static void init_amd_k8(struct cpuinfo_x86 *c)
@@ -1301,25 +1310,6 @@ void amd_check_microcode(void)
on_each_cpu(zenbleed_check_cpu, NULL, 1);
}

-bool cpu_has_ibpb_brtype_microcode(void)
-{
- switch (boot_cpu_data.x86) {
- /* Zen1/2 IBPB flushes branch type predictions too. */
- case 0x17:
- return boot_cpu_has(X86_FEATURE_AMD_IBPB);
- case 0x19:
- /* Poke the MSR bit on Zen3/4 to check its presence. */
- if (!wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB)) {
- setup_force_cpu_cap(X86_FEATURE_SBPB);
- return true;
- } else {
- return false;
- }
- default:
- return false;
- }
-}
-
/*
* Issue a DIV 0/1 insn to clear any division data from previous DIV
* operations.
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index bdd3e296f72b..b0ae985aa6a4 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2404,26 +2404,15 @@ early_param("spec_rstack_overflow", srso_parse_cmdline);

static void __init srso_select_mitigation(void)
{
- bool has_microcode;
+ bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);

if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
goto pred_cmd;

- /*
- * The first check is for the kernel running as a guest in order
- * for guests to verify whether IBPB is a viable mitigation.
- */
- has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) || cpu_has_ibpb_brtype_microcode();
if (!has_microcode) {
pr_warn("IBPB-extending microcode not applied!\n");
pr_warn(SRSO_NOTICE);
} else {
- /*
- * Enable the synthetic (even if in a real CPUID leaf)
- * flags for guests.
- */
- setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
-
/*
* Zen1/2 with SMT off aren't vulnerable after the right
* IBPB microcode has been applied.
--
2.41.0


2023-08-21 18:16:50

by Josh Poimboeuf

Subject: [PATCH 10/22] x86/srso: Print mitigation for retbleed IBPB case

When overriding the requested mitigation with IBPB due to retbleed=ibpb,
print the actual mitigation.

Signed-off-by: Josh Poimboeuf <[email protected]>
---
arch/x86/kernel/cpu/bugs.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index da480c089739..4e332707a343 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2427,7 +2427,7 @@ static void __init srso_select_mitigation(void)
if (has_microcode) {
pr_info("Retbleed IBPB mitigation enabled, using same for SRSO\n");
srso_mitigation = SRSO_MITIGATION_IBPB;
- goto out;
+ goto out_print;
}
}

@@ -2487,7 +2487,8 @@ static void __init srso_select_mitigation(void)
break;
}

- pr_info("%s%s\n", srso_strings[srso_mitigation], (has_microcode ? "" : ", no microcode"));
+out_print:
+ pr_info("%s%s\n", srso_strings[srso_mitigation], has_microcode ? "" : ", no microcode");

out:
if (boot_cpu_has(X86_FEATURE_SBPB) && srso_mitigation == SRSO_MITIGATION_NONE)
--
2.41.0


2023-08-21 18:50:19

by Josh Poimboeuf

Subject: [PATCH 15/22] x86/alternatives: Remove faulty optimization

The following commit

095b8303f383 ("x86/alternative: Make custom return thunk unconditional")

made '__x86_return_thunk' a placeholder value. All code setting
X86_FEATURE_RETHUNK also changes the value of 'x86_return_thunk'. So
the optimization at the beginning of apply_returns() is dead code.

Also, before the above-mentioned commit, the optimization actually had a
bug It bypassed __static_call_fixup(), causing some raw returns to
remain unpatched in static call trampolines. Thus the 'Fixes' tag.

Fixes: d2408e043e72 ("x86/alternative: Optimize returns patching")
Signed-off-by: Josh Poimboeuf <[email protected]>
---
arch/x86/kernel/alternative.c | 8 --------
1 file changed, 8 deletions(-)

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 099d58d02a26..34be5fbaf41e 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -720,14 +720,6 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end)
{
s32 *s;

- /*
- * Do not patch out the default return thunks if those needed are the
- * ones generated by the compiler.
- */
- if (cpu_feature_enabled(X86_FEATURE_RETHUNK) &&
- (x86_return_thunk == __x86_return_thunk))
- return;
-
for (s = start; s < end; s++) {
void *dest = NULL, *addr = (void *)s + *s;
struct insn insn;
--
2.41.0