2024-04-12 18:10:47

by Josh Poimboeuf

Subject: [PATCH 0/3] x86/bugs: BHI fixes / improvements - round 2

BHI fixes round 2:

- An updated version of "Only harden syscalls when needed" with review
comments addressed

- A BHI retpoline check fix

- Remove the obsolete LFENCE "retpolines"

Josh Poimboeuf (3):
x86/bugs: Only harden syscalls when needed
x86/bugs: Fix BHI retpoline check
x86/bugs: Remove support for Spectre v2 LFENCE "retpolines"

arch/x86/Makefile | 1 -
arch/x86/entry/common.c | 15 +++-
arch/x86/entry/syscall_32.c | 11 +--
arch/x86/entry/syscall_64.c | 6 --
arch/x86/entry/syscall_x32.c | 7 +-
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/disabled-features.h | 3 +-
arch/x86/include/asm/nospec-branch.h | 18 +---
arch/x86/include/asm/syscall.h | 8 +-
arch/x86/kernel/alternative.c | 17 +---
arch/x86/kernel/cpu/bugs.c | 88 +++++++------------
arch/x86/kernel/cpu/cpu.h | 3 +-
arch/x86/lib/retpoline.S | 5 +-
arch/x86/net/bpf_jit_comp.c | 5 +-
tools/arch/x86/include/asm/cpufeatures.h | 1 -
.../arch/x86/include/asm/disabled-features.h | 3 +-
16 files changed, 69 insertions(+), 124 deletions(-)

--
2.44.0



2024-04-12 18:10:52

by Josh Poimboeuf

Subject: [PATCH v2 1/3] x86/bugs: Only harden syscalls when needed

Syscall hardening (i.e., converting the syscall indirect branch to a
series of direct branches) may cause performance regressions in certain
scenarios. Only use the syscall hardening when indirect branches are
considered unsafe.
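
For reference, x64_sys_call() is the direct-branch dispatcher that the
__SYSCALL() macros generate; conceptually it expands to something like
this (a simplified sketch, not the literal generated code):

	long x64_sys_call(const struct pt_regs *regs, unsigned int nr)
	{
		switch (nr) {
		/* one case per syscall, each a direct call: */
		case 0: return __x64_sys_read(regs);
		case 1: return __x64_sys_write(regs);
		/* ... */
		default: return __x64_sys_ni_syscall(regs);
		}
	}

With X86_FEATURE_INDIRECT_SAFE set, the entry code instead takes the
plain indirect call through sys_call_table[] and skips the big switch.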

Fixes: 1e3ad78334a6 ("x86/syscall: Don't force use of indirect calls for system calls")
Signed-off-by: Josh Poimboeuf <[email protected]>
---
arch/x86/entry/common.c | 15 ++++++++++++---
arch/x86/entry/syscall_32.c | 11 +----------
arch/x86/entry/syscall_64.c | 6 ------
arch/x86/entry/syscall_x32.c | 7 ++++++-
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/syscall.h | 8 +++++++-
arch/x86/kernel/cpu/bugs.c | 31 +++++++++++++++++++++++++++++-
7 files changed, 57 insertions(+), 22 deletions(-)

diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 6de50b80702e..c0f8116291e4 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -49,7 +49,10 @@ static __always_inline bool do_syscall_x64(struct pt_regs *regs, int nr)

if (likely(unr < NR_syscalls)) {
unr = array_index_nospec(unr, NR_syscalls);
- regs->ax = x64_sys_call(regs, unr);
+ if (likely(cpu_feature_enabled(X86_FEATURE_INDIRECT_SAFE)))
+ regs->ax = sys_call_table[unr](regs);
+ else
+ regs->ax = x64_sys_call(regs, unr);
return true;
}
return false;
@@ -66,7 +69,10 @@ static __always_inline bool do_syscall_x32(struct pt_regs *regs, int nr)

if (IS_ENABLED(CONFIG_X86_X32_ABI) && likely(xnr < X32_NR_syscalls)) {
xnr = array_index_nospec(xnr, X32_NR_syscalls);
- regs->ax = x32_sys_call(regs, xnr);
+ if (likely(cpu_feature_enabled(X86_FEATURE_INDIRECT_SAFE)))
+ regs->ax = x32_sys_call_table[xnr](regs);
+ else
+ regs->ax = x32_sys_call(regs, xnr);
return true;
}
return false;
@@ -162,7 +168,10 @@ static __always_inline void do_syscall_32_irqs_on(struct pt_regs *regs, int nr)

if (likely(unr < IA32_NR_syscalls)) {
unr = array_index_nospec(unr, IA32_NR_syscalls);
- regs->ax = ia32_sys_call(regs, unr);
+ if (likely(cpu_feature_enabled(X86_FEATURE_INDIRECT_SAFE)))
+ regs->ax = ia32_sys_call_table[unr](regs);
+ else
+ regs->ax = ia32_sys_call(regs, unr);
} else if (nr != -1) {
regs->ax = __ia32_sys_ni_syscall(regs);
}
diff --git a/arch/x86/entry/syscall_32.c b/arch/x86/entry/syscall_32.c
index c2235bae17ef..aab31760b4e3 100644
--- a/arch/x86/entry/syscall_32.c
+++ b/arch/x86/entry/syscall_32.c
@@ -14,25 +14,16 @@
#endif

#define __SYSCALL(nr, sym) extern long __ia32_##sym(const struct pt_regs *);
-
#include <asm/syscalls_32.h>
#undef __SYSCALL

-/*
- * The sys_call_table[] is no longer used for system calls, but
- * kernel/trace/trace_syscalls.c still wants to know the system
- * call address.
- */
-#ifdef CONFIG_X86_32
#define __SYSCALL(nr, sym) __ia32_##sym,
-const sys_call_ptr_t sys_call_table[] = {
+const sys_call_ptr_t ia32_sys_call_table[] = {
#include <asm/syscalls_32.h>
};
#undef __SYSCALL
-#endif

#define __SYSCALL(nr, sym) case nr: return __ia32_##sym(regs);
-
long ia32_sys_call(const struct pt_regs *regs, unsigned int nr)
{
switch (nr) {
diff --git a/arch/x86/entry/syscall_64.c b/arch/x86/entry/syscall_64.c
index 33b3f09e6f15..96ea1f8a1d3f 100644
--- a/arch/x86/entry/syscall_64.c
+++ b/arch/x86/entry/syscall_64.c
@@ -11,11 +11,6 @@
#include <asm/syscalls_64.h>
#undef __SYSCALL

-/*
- * The sys_call_table[] is no longer used for system calls, but
- * kernel/trace/trace_syscalls.c still wants to know the system
- * call address.
- */
#define __SYSCALL(nr, sym) __x64_##sym,
const sys_call_ptr_t sys_call_table[] = {
#include <asm/syscalls_64.h>
@@ -23,7 +18,6 @@ const sys_call_ptr_t sys_call_table[] = {
#undef __SYSCALL

#define __SYSCALL(nr, sym) case nr: return __x64_##sym(regs);
-
long x64_sys_call(const struct pt_regs *regs, unsigned int nr)
{
switch (nr) {
diff --git a/arch/x86/entry/syscall_x32.c b/arch/x86/entry/syscall_x32.c
index 03de4a932131..5aef4230faca 100644
--- a/arch/x86/entry/syscall_x32.c
+++ b/arch/x86/entry/syscall_x32.c
@@ -11,8 +11,13 @@
#include <asm/syscalls_x32.h>
#undef __SYSCALL

-#define __SYSCALL(nr, sym) case nr: return __x64_##sym(regs);
+#define __SYSCALL(nr, sym) __x64_##sym,
+const sys_call_ptr_t x32_sys_call_table[] = {
+#include <asm/syscalls_x32.h>
+};
+#undef __SYSCALL

+#define __SYSCALL(nr, sym) case nr: return __x64_##sym(regs);
long x32_sys_call(const struct pt_regs *regs, unsigned int nr)
{
switch (nr) {
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 3c7434329661..7c87fe80c696 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -470,6 +470,7 @@
#define X86_FEATURE_BHI_CTRL (21*32+ 2) /* "" BHI_DIS_S HW control available */
#define X86_FEATURE_CLEAR_BHB_HW (21*32+ 3) /* "" BHI_DIS_S HW control enabled */
#define X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT (21*32+ 4) /* "" Clear branch history at vmexit using SW loop */
+#define X86_FEATURE_INDIRECT_SAFE (21*32+ 4) /* "" Indirect branches aren't vulnerable to Spectre v2 */

/*
* BUG word(s)
diff --git a/arch/x86/include/asm/syscall.h b/arch/x86/include/asm/syscall.h
index 2fc7bc3863ff..dfb59521244c 100644
--- a/arch/x86/include/asm/syscall.h
+++ b/arch/x86/include/asm/syscall.h
@@ -16,14 +16,20 @@
#include <asm/thread_info.h> /* for TS_COMPAT */
#include <asm/unistd.h>

-/* This is used purely for kernel/trace/trace_syscalls.c */
typedef long (*sys_call_ptr_t)(const struct pt_regs *);
extern const sys_call_ptr_t sys_call_table[];

+#if defined(CONFIG_X86_32)
+#define ia32_sys_call_table sys_call_table
+#else
/*
* These may not exist, but still put the prototypes in so we
* can use IS_ENABLED().
*/
+extern const sys_call_ptr_t ia32_sys_call_table[];
+extern const sys_call_ptr_t x32_sys_call_table[];
+#endif
+
extern long ia32_sys_call(const struct pt_regs *, unsigned int nr);
extern long x32_sys_call(const struct pt_regs *, unsigned int nr);
extern long x64_sys_call(const struct pt_regs *, unsigned int nr);
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ca295b0c1eee..dcb97cc2758f 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1664,6 +1664,15 @@ static void __init bhi_select_mitigation(void)
if (!IS_ENABLED(CONFIG_X86_64))
return;

+ /*
+ * There's no hardware mitigation in place, so mark indirect branches
+ * as unsafe.
+ *
+ * One could argue the SW loop makes indirect branches safe again, but
+ * Linus prefers it this way.
+ */
+ setup_clear_cpu_cap(X86_FEATURE_INDIRECT_SAFE);
+
/* Mitigate KVM by default */
setup_force_cpu_cap(X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT);
pr_info("Spectre BHI mitigation: SW BHB clearing on vm exit\n");
@@ -1678,6 +1687,21 @@ static void __init spectre_v2_select_mitigation(void)
enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
enum spectre_v2_mitigation mode = SPECTRE_V2_NONE;

+ /*
+ * X86_FEATURE_INDIRECT_SAFE indicates whether indirect calls can be
+ * considered safe. That means either:
+ *
+ * - the CPU isn't vulnerable to Spectre v2 or its variants;
+ *
+ * - a hardware mitigation is in place (e.g., IBRS, BHI_DIS_S); or
+ *
+ * - the user turned off mitigations altogether.
+ *
+ * Assume innocence until proven guilty: set the cap bit now, then
+ * clear it later if/when needed.
+ */
+ setup_force_cpu_cap(X86_FEATURE_INDIRECT_SAFE);
+
/*
* If the CPU is not affected and the command line mode is NONE or AUTO
* then nothing to do.
@@ -1764,11 +1788,16 @@ static void __init spectre_v2_select_mitigation(void)
break;

case SPECTRE_V2_LFENCE:
+ setup_clear_cpu_cap(X86_FEATURE_INDIRECT_SAFE);
+ fallthrough;
case SPECTRE_V2_EIBRS_LFENCE:
setup_force_cpu_cap(X86_FEATURE_RETPOLINE_LFENCE);
- fallthrough;
+ setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
+ break;

case SPECTRE_V2_RETPOLINE:
+ setup_clear_cpu_cap(X86_FEATURE_INDIRECT_SAFE);
+ fallthrough;
case SPECTRE_V2_EIBRS_RETPOLINE:
setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
break;
--
2.44.0


2024-04-12 18:11:00

by Josh Poimboeuf

Subject: [PATCH 2/3] x86/bugs: Fix BHI retpoline check

Confusingly, X86_FEATURE_RETPOLINE doesn't mean retpolines are enabled,
as it also includes the original "AMD retpoline" which isn't a retpoline
at all.

Also replace cpu_feature_enabled() with boot_cpu_has() because this is
before alternatives are patched and cpu_feature_enabled()'s fallback
path is slower than plain old boot_cpu_has().
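
Roughly, and glossing over details, the distinction looks like this (a
sketch, not the exact kernel definitions):

	/* boot_cpu_has(): always a plain runtime bit test */
	#define boot_cpu_has(bit)	cpu_has(&boot_cpu_data, bit)

	/*
	 * cpu_feature_enabled(): compile-time disabled-features check,
	 * then static_cpu_has(), whose alternatives-patched fast path
	 * isn't available yet this early in boot, so it falls back to
	 * a slower out-of-line path.
	 */
	#define cpu_feature_enabled(bit)				  \
		(__builtin_constant_p(bit) && DISABLED_MASK_BIT_SET(bit) \
		 ? 0 : static_cpu_has(bit))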

Fixes: ec9404e40e8f ("x86/bhi: Add BHI mitigation knob")
Signed-off-by: Josh Poimboeuf <[email protected]>
---
arch/x86/kernel/cpu/bugs.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index dcb97cc2758f..e827613c9e39 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1652,7 +1652,8 @@ static void __init bhi_select_mitigation(void)
return;

/* Retpoline mitigates against BHI unless the CPU has RRSBA behavior */
- if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) {
+ if (boot_cpu_has(X86_FEATURE_RETPOLINE) &&
+ !boot_cpu_has(X86_FEATURE_RETPOLINE_LFENCE)) {
spec_ctrl_disable_kernel_rrsba();
if (rrsba_disabled)
return;
@@ -2833,11 +2834,13 @@ static const char *spectre_bhi_state(void)
{
if (!boot_cpu_has_bug(X86_BUG_BHI))
return "; BHI: Not affected";
- else if  (boot_cpu_has(X86_FEATURE_CLEAR_BHB_HW))
+ else if (boot_cpu_has(X86_FEATURE_CLEAR_BHB_HW))
return "; BHI: BHI_DIS_S";
- else if  (boot_cpu_has(X86_FEATURE_CLEAR_BHB_LOOP))
+ else if (boot_cpu_has(X86_FEATURE_CLEAR_BHB_LOOP))
return "; BHI: SW loop, KVM: SW loop";
- else if (boot_cpu_has(X86_FEATURE_RETPOLINE) && rrsba_disabled)
+ else if (boot_cpu_has(X86_FEATURE_RETPOLINE) &&
+ !boot_cpu_has(X86_FEATURE_RETPOLINE_LFENCE) &&
+ rrsba_disabled)
return "; BHI: Retpoline";
else if (boot_cpu_has(X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT))
return "; BHI: Vulnerable, KVM: SW loop";
--
2.44.0


2024-04-12 18:11:20

by Josh Poimboeuf

Subject: [PATCH 3/3] x86/bugs: Remove support for Spectre v2 LFENCE "retpolines"

I found several bugs where code assumes that X86_FEATURE_RETPOLINE
actually means retpolines (imagine that!). In fact that feature also
includes the original AMD LFENCE "retpolines", which aren't in fact
retpolines.

Really, those "retpolines" should just be removed. They're already
considered vulnerable due to the fact that the speculative window after
the indirect branch can still be long enough to do multiple dependent
loads. And recent tooling makes such gadgets easier to find.

Also, EIBRS_LFENCE tests worse in real-world benchmarks than the actual
BHI mitigations, so it's both slower and less secure.

Specifically this removes support for the following cmdline options:

- spectre_v2=retpoline,amd
- spectre_v2=retpoline,lfence
- spectre_v2=eibrs,lfence

Now when any of those options are used, it will print an error and fall
back to the defaults (spectre_v2=auto spectre_bhi=on).
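
For context, a simplified sketch of the difference (see
arch/x86/lib/retpoline.S for the real thunk):

	/*
	 * A real retpoline: the indirect branch is gone entirely and
	 * speculation is captured in the pause/lfence loop.
	 */
	call 1f
2:	pause
	lfence
	jmp 2b
1:	mov %rax, (%rsp)	/* replace return address with target */
	ret			/* architecturally jumps to *%rax */

	/*
	 * An LFENCE "retpoline": the indirect branch is still there;
	 * the LFENCE only shrinks (doesn't close) its speculation
	 * window.
	 */
	lfence
	jmp *%rax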

Signed-off-by: Josh Poimboeuf <[email protected]>
---
arch/x86/Makefile | 1 -
arch/x86/include/asm/cpufeatures.h | 1 -
arch/x86/include/asm/disabled-features.h | 3 +-
arch/x86/include/asm/nospec-branch.h | 18 ++---
arch/x86/kernel/alternative.c | 17 +----
arch/x86/kernel/cpu/bugs.c | 66 +------------------
arch/x86/kernel/cpu/cpu.h | 3 +-
arch/x86/lib/retpoline.S | 5 +-
arch/x86/net/bpf_jit_comp.c | 5 +-
tools/arch/x86/include/asm/cpufeatures.h | 1 -
.../arch/x86/include/asm/disabled-features.h | 3 +-
11 files changed, 15 insertions(+), 108 deletions(-)

diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 5ab93fcdd691..f0f91810a79e 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -20,7 +20,6 @@ ifdef CONFIG_CC_IS_CLANG
RETPOLINE_CFLAGS := -mretpoline-external-thunk
RETPOLINE_VDSO_CFLAGS := -mretpoline
endif
-RETPOLINE_CFLAGS += $(call cc-option,-mindirect-branch-cs-prefix)

ifdef CONFIG_MITIGATION_RETHUNK
RETHUNK_CFLAGS := -mfunction-return=thunk-extern
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 7c87fe80c696..fccc838ac8ff 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -296,7 +296,6 @@
#define X86_FEATURE_ENTRY_IBPB (11*32+10) /* "" Issue an IBPB on kernel entry */
#define X86_FEATURE_RRSBA_CTRL (11*32+11) /* "" RET prediction control */
#define X86_FEATURE_RETPOLINE (11*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
-#define X86_FEATURE_RETPOLINE_LFENCE (11*32+13) /* "" Use LFENCE for Spectre variant 2 */
#define X86_FEATURE_RETHUNK (11*32+14) /* "" Use REturn THUNK */
#define X86_FEATURE_UNRET (11*32+15) /* "" AMD BTB untrain return */
#define X86_FEATURE_USE_IBPB_FW (11*32+16) /* "" Use IBPB during runtime firmware calls */
diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
index c492bdc97b05..e1c421282a78 100644
--- a/arch/x86/include/asm/disabled-features.h
+++ b/arch/x86/include/asm/disabled-features.h
@@ -53,8 +53,7 @@
#ifdef CONFIG_MITIGATION_RETPOLINE
# define DISABLE_RETPOLINE 0
#else
-# define DISABLE_RETPOLINE ((1 << (X86_FEATURE_RETPOLINE & 31)) | \
- (1 << (X86_FEATURE_RETPOLINE_LFENCE & 31)))
+# define DISABLE_RETPOLINE (1 << (X86_FEATURE_RETPOLINE & 31)
#endif

#ifdef CONFIG_MITIGATION_RETHUNK
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index ff5f1ecc7d1e..b98fb18928f6 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -434,15 +434,11 @@ static inline void call_depth_return_thunk(void) {}
* which is ensured when CONFIG_MITIGATION_RETPOLINE is defined.
*/
# define CALL_NOSPEC \
- ALTERNATIVE_2( \
+ ALTERNATIVE( \
ANNOTATE_RETPOLINE_SAFE \
"call *%[thunk_target]\n", \
"call __x86_indirect_thunk_%V[thunk_target]\n", \
- X86_FEATURE_RETPOLINE, \
- "lfence;\n" \
- ANNOTATE_RETPOLINE_SAFE \
- "call *%[thunk_target]\n", \
- X86_FEATURE_RETPOLINE_LFENCE)
+ X86_FEATURE_RETPOLINE)

# define THUNK_TARGET(addr) [thunk_target] "r" (addr)

@@ -453,7 +449,7 @@ static inline void call_depth_return_thunk(void) {}
* here, anyway.
*/
# define CALL_NOSPEC \
- ALTERNATIVE_2( \
+ ALTERNATIVE( \
ANNOTATE_RETPOLINE_SAFE \
"call *%[thunk_target]\n", \
" jmp 904f;\n" \
@@ -468,11 +464,7 @@ static inline void call_depth_return_thunk(void) {}
" ret;\n" \
" .align 16\n" \
"904: call 901b;\n", \
- X86_FEATURE_RETPOLINE, \
- "lfence;\n" \
- ANNOTATE_RETPOLINE_SAFE \
- "call *%[thunk_target]\n", \
- X86_FEATURE_RETPOLINE_LFENCE)
+ X86_FEATURE_RETPOLINE)

# define THUNK_TARGET(addr) [thunk_target] "rm" (addr)
#endif
@@ -485,10 +477,8 @@ static inline void call_depth_return_thunk(void) {}
enum spectre_v2_mitigation {
SPECTRE_V2_NONE,
SPECTRE_V2_RETPOLINE,
- SPECTRE_V2_LFENCE,
SPECTRE_V2_EIBRS,
SPECTRE_V2_EIBRS_RETPOLINE,
- SPECTRE_V2_EIBRS_LFENCE,
SPECTRE_V2_IBRS,
};

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 45a280f2161c..a7eb8203a8cd 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -639,8 +639,6 @@ static int emit_call_track_retpoline(void *addr, struct insn *insn, int reg, u8
* into:
*
* CALL *%\reg
- *
- * It also tries to inline spectre_v2=retpoline,lfence when size permits.
*/
static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes)
{
@@ -657,8 +655,7 @@ static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes)
/* If anyone ever does: CALL/JMP *%rsp, we're in deep trouble. */
BUG_ON(reg == 4);

- if (cpu_feature_enabled(X86_FEATURE_RETPOLINE) &&
- !cpu_feature_enabled(X86_FEATURE_RETPOLINE_LFENCE)) {
+ if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) {
if (cpu_feature_enabled(X86_FEATURE_CALL_DEPTH))
return emit_call_track_retpoline(addr, insn, reg, bytes);

@@ -675,9 +672,8 @@ static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes)
* into:
*
* Jncc.d8 1f
- * [ LFENCE ]
* JMP *%\reg
- * [ NOP ]
+ * NOP
* 1:
*/
if (is_jcc32(insn)) {
@@ -691,15 +687,6 @@ static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes)
op = JMP32_INSN_OPCODE;
}

- /*
- * For RETPOLINE_LFENCE: prepend the indirect CALL/JMP with an LFENCE.
- */
- if (cpu_feature_enabled(X86_FEATURE_RETPOLINE_LFENCE)) {
- bytes[i++] = 0x0f;
- bytes[i++] = 0xae;
- bytes[i++] = 0xe8; /* LFENCE */
- }
-
ret = emit_indirect(op, reg, bytes + i);
if (ret < 0)
return ret;
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index e827613c9e39..2e71bacc8191 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1143,7 +1143,6 @@ static void __init retbleed_select_mitigation(void)
break;
case SPECTRE_V2_EIBRS:
case SPECTRE_V2_EIBRS_RETPOLINE:
- case SPECTRE_V2_EIBRS_LFENCE:
retbleed_mitigation = RETBLEED_MITIGATION_EIBRS;
break;
default:
@@ -1184,9 +1183,7 @@ static inline const char *spectre_v2_module_string(void)
static inline const char *spectre_v2_module_string(void) { return ""; }
#endif

-#define SPECTRE_V2_LFENCE_MSG "WARNING: LFENCE mitigation is not recommended for this CPU, data leaks possible!\n"
#define SPECTRE_V2_EIBRS_EBPF_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!\n"
-#define SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS+LFENCE mitigation and SMT, data leaks possible via Spectre v2 BHB attacks!\n"
#define SPECTRE_V2_IBRS_PERF_MSG "WARNING: IBRS mitigation selected on Enhanced IBRS CPU, this may cause unnecessary performance loss\n"

#ifdef CONFIG_BPF_SYSCALL
@@ -1201,10 +1198,6 @@ void unpriv_ebpf_notify(int new_state)
case SPECTRE_V2_EIBRS:
pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
break;
- case SPECTRE_V2_EIBRS_LFENCE:
- if (sched_smt_active())
- pr_err(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
- break;
default:
break;
}
@@ -1225,10 +1218,8 @@ enum spectre_v2_mitigation_cmd {
SPECTRE_V2_CMD_FORCE,
SPECTRE_V2_CMD_RETPOLINE,
SPECTRE_V2_CMD_RETPOLINE_GENERIC,
- SPECTRE_V2_CMD_RETPOLINE_LFENCE,
SPECTRE_V2_CMD_EIBRS,
SPECTRE_V2_CMD_EIBRS_RETPOLINE,
- SPECTRE_V2_CMD_EIBRS_LFENCE,
SPECTRE_V2_CMD_IBRS,
};

@@ -1414,9 +1405,7 @@ spectre_v2_user_select_mitigation(void)
static const char * const spectre_v2_strings[] = {
[SPECTRE_V2_NONE] = "Vulnerable",
[SPECTRE_V2_RETPOLINE] = "Mitigation: Retpolines",
- [SPECTRE_V2_LFENCE] = "Mitigation: LFENCE",
[SPECTRE_V2_EIBRS] = "Mitigation: Enhanced / Automatic IBRS",
- [SPECTRE_V2_EIBRS_LFENCE] = "Mitigation: Enhanced / Automatic IBRS + LFENCE",
[SPECTRE_V2_EIBRS_RETPOLINE] = "Mitigation: Enhanced / Automatic IBRS + Retpolines",
[SPECTRE_V2_IBRS] = "Mitigation: IBRS",
};
@@ -1429,11 +1418,8 @@ static const struct {
{ "off", SPECTRE_V2_CMD_NONE, false },
{ "on", SPECTRE_V2_CMD_FORCE, true },
{ "retpoline", SPECTRE_V2_CMD_RETPOLINE, false },
- { "retpoline,amd", SPECTRE_V2_CMD_RETPOLINE_LFENCE, false },
- { "retpoline,lfence", SPECTRE_V2_CMD_RETPOLINE_LFENCE, false },
{ "retpoline,generic", SPECTRE_V2_CMD_RETPOLINE_GENERIC, false },
{ "eibrs", SPECTRE_V2_CMD_EIBRS, false },
- { "eibrs,lfence", SPECTRE_V2_CMD_EIBRS_LFENCE, false },
{ "eibrs,retpoline", SPECTRE_V2_CMD_EIBRS_RETPOLINE, false },
{ "auto", SPECTRE_V2_CMD_AUTO, false },
{ "ibrs", SPECTRE_V2_CMD_IBRS, false },
@@ -1472,9 +1458,7 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
}

if ((cmd == SPECTRE_V2_CMD_RETPOLINE ||
- cmd == SPECTRE_V2_CMD_RETPOLINE_LFENCE ||
cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC ||
- cmd == SPECTRE_V2_CMD_EIBRS_LFENCE ||
cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) &&
!IS_ENABLED(CONFIG_MITIGATION_RETPOLINE)) {
pr_err("%s selected but not compiled in. Switching to AUTO select\n",
@@ -1483,7 +1467,6 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
}

if ((cmd == SPECTRE_V2_CMD_EIBRS ||
- cmd == SPECTRE_V2_CMD_EIBRS_LFENCE ||
cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) &&
!boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
pr_err("%s selected but CPU doesn't have Enhanced or Automatic IBRS. Switching to AUTO select\n",
@@ -1491,14 +1474,6 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
return SPECTRE_V2_CMD_AUTO;
}

- if ((cmd == SPECTRE_V2_CMD_RETPOLINE_LFENCE ||
- cmd == SPECTRE_V2_CMD_EIBRS_LFENCE) &&
- !boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) {
- pr_err("%s selected, but CPU doesn't have a serializing LFENCE. Switching to AUTO select\n",
- mitigation_options[i].option);
- return SPECTRE_V2_CMD_AUTO;
- }
-
if (cmd == SPECTRE_V2_CMD_IBRS && !IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY)) {
pr_err("%s selected but not compiled in. Switching to AUTO select\n",
mitigation_options[i].option);
@@ -1585,7 +1560,6 @@ static void __init spectre_v2_determine_rsb_fill_type_at_vmexit(enum spectre_v2_
case SPECTRE_V2_NONE:
return;

- case SPECTRE_V2_EIBRS_LFENCE:
case SPECTRE_V2_EIBRS:
if (boot_cpu_has_bug(X86_BUG_EIBRS_PBRSB)) {
setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT_LITE);
@@ -1595,7 +1569,6 @@ static void __init spectre_v2_determine_rsb_fill_type_at_vmexit(enum spectre_v2_

case SPECTRE_V2_EIBRS_RETPOLINE:
case SPECTRE_V2_RETPOLINE:
- case SPECTRE_V2_LFENCE:
case SPECTRE_V2_IBRS:
setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT);
pr_info("Spectre v2 / SpectreRSB : Filling RSB on VMEXIT\n");
@@ -1652,8 +1625,7 @@ static void __init bhi_select_mitigation(void)
return;

/* Retpoline mitigates against BHI unless the CPU has RRSBA behavior */
- if (boot_cpu_has(X86_FEATURE_RETPOLINE) &&
- !boot_cpu_has(X86_FEATURE_RETPOLINE_LFENCE)) {
+ if (boot_cpu_has(X86_FEATURE_RETPOLINE)) {
spec_ctrl_disable_kernel_rrsba();
if (rrsba_disabled)
return;
@@ -1735,11 +1707,6 @@ static void __init spectre_v2_select_mitigation(void)
mode = spectre_v2_select_retpoline();
break;

- case SPECTRE_V2_CMD_RETPOLINE_LFENCE:
- pr_err(SPECTRE_V2_LFENCE_MSG);
- mode = SPECTRE_V2_LFENCE;
- break;
-
case SPECTRE_V2_CMD_RETPOLINE_GENERIC:
mode = SPECTRE_V2_RETPOLINE;
break;
@@ -1756,10 +1723,6 @@ static void __init spectre_v2_select_mitigation(void)
mode = SPECTRE_V2_EIBRS;
break;

- case SPECTRE_V2_CMD_EIBRS_LFENCE:
- mode = SPECTRE_V2_EIBRS_LFENCE;
- break;
-
case SPECTRE_V2_CMD_EIBRS_RETPOLINE:
mode = SPECTRE_V2_EIBRS_RETPOLINE;
break;
@@ -1788,14 +1751,6 @@ static void __init spectre_v2_select_mitigation(void)
pr_warn(SPECTRE_V2_IBRS_PERF_MSG);
break;

- case SPECTRE_V2_LFENCE:
- setup_clear_cpu_cap(X86_FEATURE_INDIRECT_SAFE);
- fallthrough;
- case SPECTRE_V2_EIBRS_LFENCE:
- setup_force_cpu_cap(X86_FEATURE_RETPOLINE_LFENCE);
- setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
- break;
-
case SPECTRE_V2_RETPOLINE:
setup_clear_cpu_cap(X86_FEATURE_INDIRECT_SAFE);
fallthrough;
@@ -1809,9 +1764,7 @@ static void __init spectre_v2_select_mitigation(void)
* JMPs gets protection against BHI and Intramode-BTI, but RET
* prediction from a non-RSB predictor is still a risk.
*/
- if (mode == SPECTRE_V2_EIBRS_LFENCE ||
- mode == SPECTRE_V2_EIBRS_RETPOLINE ||
- mode == SPECTRE_V2_RETPOLINE)
+ if (boot_cpu_has(X86_FEATURE_RETPOLINE))
spec_ctrl_disable_kernel_rrsba();

if (boot_cpu_has(X86_BUG_BHI))
@@ -1958,10 +1911,6 @@ void cpu_bugs_smt_update(void)
{
mutex_lock(&spec_ctrl_mutex);

- if (sched_smt_active() && unprivileged_ebpf_enabled() &&
- spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
- pr_warn_once(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
-
switch (spectre_v2_user_stibp) {
case SPECTRE_V2_USER_NONE:
break;
@@ -2838,9 +2787,7 @@ static const char *spectre_bhi_state(void)
return "; BHI: BHI_DIS_S";
else if (boot_cpu_has(X86_FEATURE_CLEAR_BHB_LOOP))
return "; BHI: SW loop, KVM: SW loop";
- else if (boot_cpu_has(X86_FEATURE_RETPOLINE) &&
- !boot_cpu_has(X86_FEATURE_RETPOLINE_LFENCE) &&
- rrsba_disabled)
+ else if (boot_cpu_has(X86_FEATURE_RETPOLINE) && rrsba_disabled)
return "; BHI: Retpoline";
else if (boot_cpu_has(X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT))
return "; BHI: Vulnerable, KVM: SW loop";
@@ -2850,16 +2797,9 @@ static const char *spectre_bhi_state(void)

static ssize_t spectre_v2_show_state(char *buf)
{
- if (spectre_v2_enabled == SPECTRE_V2_LFENCE)
- return sysfs_emit(buf, "Vulnerable: LFENCE\n");
-
if (spectre_v2_enabled == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
return sysfs_emit(buf, "Vulnerable: eIBRS with unprivileged eBPF\n");

- if (sched_smt_active() && unprivileged_ebpf_enabled() &&
- spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
- return sysfs_emit(buf, "Vulnerable: eIBRS+LFENCE with unprivileged eBPF and SMT\n");
-
return sysfs_emit(buf, "%s%s%s%s%s%s%s%s\n",
spectre_v2_strings[spectre_v2_enabled],
ibpb_state(),
diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
index ea9e07d57c8d..602318b87bd9 100644
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -93,8 +93,7 @@ extern enum spectre_v2_mitigation spectre_v2_enabled;
static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode)
{
return mode == SPECTRE_V2_EIBRS ||
- mode == SPECTRE_V2_EIBRS_RETPOLINE ||
- mode == SPECTRE_V2_EIBRS_LFENCE;
+ mode == SPECTRE_V2_EIBRS_RETPOLINE;
}

#endif /* ARCH_X86_CPU_H */
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index e674ccf720b9..e32199f40c48 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -37,9 +37,8 @@ SYM_INNER_LABEL(__x86_indirect_thunk_\reg, SYM_L_GLOBAL)
UNWIND_HINT_UNDEFINED
ANNOTATE_NOENDBR

- ALTERNATIVE_2 __stringify(RETPOLINE \reg), \
- __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg; int3), X86_FEATURE_RETPOLINE_LFENCE, \
- __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), ALT_NOT(X86_FEATURE_RETPOLINE)
+ ALTERNATIVE __stringify(RETPOLINE \reg), \
+ __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), ALT_NOT(X86_FEATURE_RETPOLINE)

.endm

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index df5fac428408..39ed0107fb59 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -556,10 +556,7 @@ static void emit_indirect_jump(u8 **pprog, int reg, u8 *ip)
{
u8 *prog = *pprog;

- if (cpu_feature_enabled(X86_FEATURE_RETPOLINE_LFENCE)) {
- EMIT_LFENCE();
- EMIT2(0xFF, 0xE0 + reg);
- } else if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) {
+ if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) {
OPTIMIZER_HIDE_VAR(reg);
if (cpu_feature_enabled(X86_FEATURE_CALL_DEPTH))
emit_jump(&prog, &__x86_indirect_jump_thunk_array[reg], ip);
diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h
index 25160d26764b..6627e0d25425 100644
--- a/tools/arch/x86/include/asm/cpufeatures.h
+++ b/tools/arch/x86/include/asm/cpufeatures.h
@@ -298,7 +298,6 @@
#define X86_FEATURE_ENTRY_IBPB (11*32+10) /* "" Issue an IBPB on kernel entry */
#define X86_FEATURE_RRSBA_CTRL (11*32+11) /* "" RET prediction control */
#define X86_FEATURE_RETPOLINE (11*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
-#define X86_FEATURE_RETPOLINE_LFENCE (11*32+13) /* "" Use LFENCE for Spectre variant 2 */
#define X86_FEATURE_RETHUNK (11*32+14) /* "" Use REturn THUNK */
#define X86_FEATURE_UNRET (11*32+15) /* "" AMD BTB untrain return */
#define X86_FEATURE_USE_IBPB_FW (11*32+16) /* "" Use IBPB during runtime firmware calls */
diff --git a/tools/arch/x86/include/asm/disabled-features.h b/tools/arch/x86/include/asm/disabled-features.h
index 1f23960d2b06..74538147edc8 100644
--- a/tools/arch/x86/include/asm/disabled-features.h
+++ b/tools/arch/x86/include/asm/disabled-features.h
@@ -53,8 +53,7 @@
#ifdef CONFIG_MITIGATION_RETPOLINE
# define DISABLE_RETPOLINE 0
#else
-# define DISABLE_RETPOLINE ((1 << (X86_FEATURE_RETPOLINE & 31)) | \
- (1 << (X86_FEATURE_RETPOLINE_LFENCE & 31)))
+# define DISABLE_RETPOLINE (1 << (X86_FEATURE_RETPOLINE & 31)
#endif

#ifdef CONFIG_MITIGATION_RETHUNK
--
2.44.0


2024-04-12 18:22:19

by Josh Poimboeuf

Subject: Re: [PATCH 3/3] x86/bugs: Remove support for Spectre v2 LFENCE "retpolines"

On Fri, Apr 12, 2024 at 11:10:34AM -0700, Josh Poimboeuf wrote:
> ---
> arch/x86/Makefile | 1 -
> arch/x86/include/asm/cpufeatures.h | 1 -
> arch/x86/include/asm/disabled-features.h | 3 +-
> arch/x86/include/asm/nospec-branch.h | 18 ++---
> arch/x86/kernel/alternative.c | 17 +----
> arch/x86/kernel/cpu/bugs.c | 66 +------------------
> arch/x86/kernel/cpu/cpu.h | 3 +-
> arch/x86/lib/retpoline.S | 5 +-
> arch/x86/net/bpf_jit_comp.c | 5 +-
> tools/arch/x86/include/asm/cpufeatures.h | 1 -
> .../arch/x86/include/asm/disabled-features.h | 3 +-

Forgot the documentation updates:

diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
index 25a04cda4c2c..de780db82cd8 100644
--- a/Documentation/admin-guide/hw-vuln/spectre.rst
+++ b/Documentation/admin-guide/hw-vuln/spectre.rst
@@ -380,10 +380,8 @@ The possible values in this file are:
'Not affected' The processor is not vulnerable
'Mitigation: None' Vulnerable, no mitigation
'Mitigation: Retpolines' Use Retpoline thunks
- 'Mitigation: LFENCE' Use LFENCE instructions
'Mitigation: Enhanced IBRS' Hardware-focused mitigation
'Mitigation: Enhanced IBRS + Retpolines' Hardware-focused + Retpolines
- 'Mitigation: Enhanced IBRS + LFENCE' Hardware-focused + LFENCE
======================================== =================================

- Firmware status: Show if Indirect Branch Restricted Speculation (IBRS) is
@@ -640,13 +638,10 @@ kernel command line.

Specific mitigations can also be selected manually:

- retpoline auto pick between generic,lfence
+ retpoline Retpolines
retpoline,generic Retpolines
- retpoline,lfence LFENCE; indirect branch
- retpoline,amd alias for retpoline,lfence
eibrs Enhanced/Auto IBRS
eibrs,retpoline Enhanced/Auto IBRS + Retpolines
- eibrs,lfence Enhanced/Auto IBRS + LFENCE
ibrs use IBRS to protect kernel

Not specifying this option is equivalent to
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 902ecd92a29f..edbfba7299e7 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -6099,13 +6099,10 @@

Specific mitigations can also be selected manually:

- retpoline - replace indirect branches
+ retpoline - Retpolines
retpoline,generic - Retpolines
- retpoline,lfence - LFENCE; indirect branch
- retpoline,amd - alias for retpoline,lfence
eibrs - Enhanced/Auto IBRS
eibrs,retpoline - Enhanced/Auto IBRS + Retpolines
- eibrs,lfence - Enhanced/Auto IBRS + LFENCE
ibrs - use IBRS to protect kernel

Not specifying this option is equivalent to

2024-04-12 20:49:45

by Andrew Cooper

Subject: Re: [PATCH 3/3] x86/bugs: Remove support for Spectre v2 LFENCE "retpolines"

On 12/04/2024 7:10 pm, Josh Poimboeuf wrote:
> I found several bugs where code assumes that X86_FEATURE_RETPOLINE
> actually means retpolines (imagine that!).

Yeah :( One could also imagine a past where that was pointed out, or
just read about it in the archives.

> In fact that feature also
> includes the original AMD LFENCE "retpolines", which aren't in fact
> retpolines.
>
> Really, those "retpolines" should just be removed. They're already
> considered vulnerable due to the fact that the speculative window after
> the indirect branch can still be long enough to do multiple dependent
> loads. And recent tooling makes such gadgets easier to find.

There are two Atom CPUs which are not retpoline safe, and for which
Intel released a statement saying "use lfence/jmp".

I'm still trying to find it...

~Andrew

2024-04-12 22:42:50

by Pawan Gupta

Subject: Re: [PATCH v2 1/3] x86/bugs: Only harden syscalls when needed

On Fri, Apr 12, 2024 at 11:10:32AM -0700, Josh Poimboeuf wrote:
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index 3c7434329661..7c87fe80c696 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -470,6 +470,7 @@
> #define X86_FEATURE_BHI_CTRL (21*32+ 2) /* "" BHI_DIS_S HW control available */
> #define X86_FEATURE_CLEAR_BHB_HW (21*32+ 3) /* "" BHI_DIS_S HW control enabled */
> #define X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT (21*32+ 4) /* "" Clear branch history at vmexit using SW loop */
> +#define X86_FEATURE_INDIRECT_SAFE (21*32+ 4) /* "" Indirect branches aren't vulnerable to Spectre v2 */

This should be (21*32+ 5).

Other than that:

Reviewed-by: Pawan Gupta <[email protected]>

2024-04-12 22:58:35

by Josh Poimboeuf

Subject: Re: [PATCH v2 1/3] x86/bugs: Only harden syscalls when needed

On Fri, Apr 12, 2024 at 03:42:32PM -0700, Pawan Gupta wrote:
> On Fri, Apr 12, 2024 at 11:10:32AM -0700, Josh Poimboeuf wrote:
> > diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> > index 3c7434329661..7c87fe80c696 100644
> > --- a/arch/x86/include/asm/cpufeatures.h
> > +++ b/arch/x86/include/asm/cpufeatures.h
> > @@ -470,6 +470,7 @@
> > #define X86_FEATURE_BHI_CTRL (21*32+ 2) /* "" BHI_DIS_S HW control available */
> > #define X86_FEATURE_CLEAR_BHB_HW (21*32+ 3) /* "" BHI_DIS_S HW control enabled */
> > #define X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT (21*32+ 4) /* "" Clear branch history at vmexit using SW loop */
> > +#define X86_FEATURE_INDIRECT_SAFE (21*32+ 4) /* "" Indirect branches aren't vulnerable to Spectre v2 */
>
> This should be (21*32+ 5).

Argh :-/

> Other than that:
>
> Reviewed-by: Pawan Gupta <[email protected]>

Thanks!

--
Josh

2024-04-12 23:18:40

by Pawan Gupta

Subject: Re: [PATCH 2/3] x86/bugs: Fix BHI retpoline check

On Fri, Apr 12, 2024 at 11:10:33AM -0700, Josh Poimboeuf wrote:
> Confusingly, X86_FEATURE_RETPOLINE doesn't mean retpolines are enabled,
> as it also includes the original "AMD retpoline" which isn't a retpoline
> at all.
>
> Also replace cpu_feature_enabled() with boot_cpu_has() because this is
> before alternatives are patched and cpu_feature_enabled()'s fallback
> path is slower than plain old boot_cpu_has().
>
> Fixes: ec9404e40e8f ("x86/bhi: Add BHI mitigation knob")
> Signed-off-by: Josh Poimboeuf <[email protected]>

Reviewed-by: Pawan Gupta <[email protected]>

2024-04-13 04:21:59

by Josh Poimboeuf

Subject: Re: [PATCH 3/3] x86/bugs: Remove support for Spectre v2 LFENCE "retpolines"

On Fri, Apr 12, 2024 at 11:10:34AM -0700, Josh Poimboeuf wrote:
> I found several bugs where code assumes that X86_FEATURE_RETPOLINE
> actually means retpolines (imagine that!). In fact that feature also
> includes the original AMD LFENCE "retpolines", which aren't in fact
> retpolines.
>
> Really, those "retpolines" should just be removed. They're already
> considered vulnerable due to the fact that the speculative window after
> the indirect branch can still be long enough to do multiple dependent
> loads. And recent tooling makes such gadgets easier to find.
>
> Also, EIBRS_LFENCE tests worse in real-world benchmarks than the actual
> BHI mitigations, so it's both slower and less secure.
>
> Specifically this removes support for the following cmdline options:
>
> - spectre_v2=retpoline,amd
> - spectre_v2=retpoline,lfence
> - spectre_v2=eibrs,lfence
>
> Now when any of those options are used, it will print an error and fall
> back to the defaults (spectre_v2=auto spectre_bhi=on).
>
> Signed-off-by: Josh Poimboeuf <[email protected]>

Compile fix:

diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
index e1c421282a78..3a1349c0f225 100644
--- a/arch/x86/include/asm/disabled-features.h
+++ b/arch/x86/include/asm/disabled-features.h
@@ -53,7 +53,7 @@
#ifdef CONFIG_MITIGATION_RETPOLINE
# define DISABLE_RETPOLINE 0
#else
-# define DISABLE_RETPOLINE (1 << (X86_FEATURE_RETPOLINE & 31)
+# define DISABLE_RETPOLINE (1 << (X86_FEATURE_RETPOLINE & 31))
#endif

#ifdef CONFIG_MITIGATION_RETHUNK

by tip-bot2 for Josh Poimboeuf

Subject: [tip: x86/urgent] x86/bugs: Fix BHI retpoline check

The following commit has been merged into the x86/urgent branch of tip:

Commit-ID: 69129794d94c544810e68b2b4eaa7e44063f9bf2
Gitweb: https://git.kernel.org/tip/69129794d94c544810e68b2b4eaa7e44063f9bf2
Author: Josh Poimboeuf <[email protected]>
AuthorDate: Fri, 12 Apr 2024 11:10:33 -07:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Sun, 14 Apr 2024 11:10:05 +02:00

x86/bugs: Fix BHI retpoline check

Confusingly, X86_FEATURE_RETPOLINE doesn't mean retpolines are enabled,
as it also includes the original "AMD retpoline" which isn't a retpoline
at all.

Also replace cpu_feature_enabled() with boot_cpu_has() because this is
before alternatives are patched and cpu_feature_enabled()'s fallback
path is slower than plain old boot_cpu_has().

Fixes: ec9404e40e8f ("x86/bhi: Add BHI mitigation knob")
Signed-off-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Pawan Gupta <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: https://lore.kernel.org/r/ad3807424a3953f0323c011a643405619f2a4927.1712944776.git.jpoimboe@kernel.org
---
arch/x86/kernel/cpu/bugs.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ca295b0..ab18185 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1652,7 +1652,8 @@ static void __init bhi_select_mitigation(void)
return;

/* Retpoline mitigates against BHI unless the CPU has RRSBA behavior */
- if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) {
+ if (boot_cpu_has(X86_FEATURE_RETPOLINE) &&
+ !boot_cpu_has(X86_FEATURE_RETPOLINE_LFENCE)) {
spec_ctrl_disable_kernel_rrsba();
if (rrsba_disabled)
return;
@@ -2804,11 +2805,13 @@ static const char *spectre_bhi_state(void)
{
if (!boot_cpu_has_bug(X86_BUG_BHI))
return "; BHI: Not affected";
- else if  (boot_cpu_has(X86_FEATURE_CLEAR_BHB_HW))
+ else if (boot_cpu_has(X86_FEATURE_CLEAR_BHB_HW))
return "; BHI: BHI_DIS_S";
- else if  (boot_cpu_has(X86_FEATURE_CLEAR_BHB_LOOP))
+ else if (boot_cpu_has(X86_FEATURE_CLEAR_BHB_LOOP))
return "; BHI: SW loop, KVM: SW loop";
- else if (boot_cpu_has(X86_FEATURE_RETPOLINE) && rrsba_disabled)
+ else if (boot_cpu_has(X86_FEATURE_RETPOLINE) &&
+ !boot_cpu_has(X86_FEATURE_RETPOLINE_LFENCE) &&
+ rrsba_disabled)
return "; BHI: Retpoline";
else if (boot_cpu_has(X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT))
return "; BHI: Vulnerable, KVM: SW loop";

2024-04-15 07:54:01

by Nikolay Borisov

Subject: Re: [PATCH v2 1/3] x86/bugs: Only harden syscalls when needed


On 12.04.24 21:10, Josh Poimboeuf wrote:
> Syscall hardening (i.e., converting the syscall indirect branch to a
> series of direct branches) may cause performance regressions in certain
> scenarios. Only use the syscall hardening when indirect branches are
> considered unsafe.
>
> Fixes: 1e3ad78334a6 ("x86/syscall: Don't force use of indirect calls for system calls")
> Signed-off-by: Josh Poimboeuf <[email protected]>
> ---
> arch/x86/entry/common.c | 15 ++++++++++++---
> arch/x86/entry/syscall_32.c | 11 +----------
> arch/x86/entry/syscall_64.c | 6 ------
> arch/x86/entry/syscall_x32.c | 7 ++++++-
> arch/x86/include/asm/cpufeatures.h | 1 +
> arch/x86/include/asm/syscall.h | 8 +++++++-
> arch/x86/kernel/cpu/bugs.c | 31 +++++++++++++++++++++++++++++-
> 7 files changed, 57 insertions(+), 22 deletions(-)
>

To ask again, what do we gain by having this syscall hardening at the
same time as the always on BHB scrubbing sequence?


2024-04-15 15:17:10

by Linus Torvalds

Subject: Re: [PATCH v2 1/3] x86/bugs: Only harden syscalls when needed

On Mon, 15 Apr 2024 at 00:37, Nikolay Borisov <[email protected]> wrote:
>
> To ask again, what do we gain by having this syscall hardening at the
> same time as the always on BHB scrubbing sequence?

What happens the next time some indirect call problem comes up?

If we had had *one* hardware bug in this area, that would be one
thing. But this has been going on for a decade now.

Linus

2024-04-15 15:30:20

by Nikolay Borisov

Subject: Re: [PATCH v2 1/3] x86/bugs: Only harden syscalls when needed



On 15.04.24 18:16, Linus Torvalds wrote:
> On Mon, 15 Apr 2024 at 00:37, Nikolay Borisov <[email protected]> wrote:
>>
>> To ask again, what do we gain by having this syscall hardening at the
>> same time as the always on BHB scrubbing sequence?
>
> What happens the next time some indirect call problem comes up?

Same as with every issue - assess the problem and develop fixes. Let's
be honest, the indirect branches in the syscall handler aren't the
biggest problem; it's the stacked LSMs. And even if those get fixed,
chances are the security people will find some other avenue of
attack. I think even now the attack is somewhat hard to pull off.


So in any case this could have been a completely independent patch of
the BHI series.

>
> If we had had *one* hardware bug in this area, that would be one
> thing. But this has been going on for a decade now.

>
> Linus

2024-04-15 15:48:16

by Linus Torvalds

Subject: Re: [PATCH v2 1/3] x86/bugs: Only harden syscalls when needed

On Mon, 15 Apr 2024 at 08:27, Nikolay Borisov <[email protected]> wrote:
>
> Same as with every issue - assess the problem and develop fixes.

No. Let's have at least all the infrastructure in place to be a bit proactive.

> Let's be honest, the indirect branches in the syscall handler aren't the
> biggest problem

Oh, they have been.

> it's the stacked LSMs.

Hopefully those will get fixed too.

There's a few other fairly reachable ones (the timer indirection ones
are much too close, and VFS file ops aren't entirely out of reach).

But maybe some day we'll be in a situation where it's actually fairly
hard to reach indirect kernel calls from untrusted user space.

The system call ones are pretty much always the first ones, though.

> And even if those get fixed,
> chances are the security people will find some other avenue of
> attack. I think even now the attack is somewhat hard to pull off.

No disagreement about that. I think outright sw bugs are still the
99.9% thing. But let's learn from history instead of "assess the
problem" every time anew.

Linus

2024-04-15 23:44:23

by Josh Poimboeuf

Subject: Re: [PATCH 3/3] x86/bugs: Remove support for Spectre v2 LFENCE "retpolines"

On Fri, Apr 12, 2024 at 09:49:33PM +0100, Andrew Cooper wrote:
> On 12/04/2024 7:10 pm, Josh Poimboeuf wrote:
> > I found several bugs where code assumes that X86_FEATURE_RETPOLINE
> > actually means retpolines (imagine that!).
>
> Yeah :(   One could also imagine a past where that was pointed out, or
> just read about it in the archives.
>
> > In fact that feature also
> > includes the original AMD LFENCE "retpolines", which aren't in fact
> > retpolines.
> >
> > Really, those "retpolines" should just be removed. They're already
> > considered vulnerable due to the fact that the speculative window after
> > the indirect branch can still be long enough to do multiple dependent
> > loads. And recent tooling makes such gadgets easier to find.
>
> There are two Atom CPUs which are not retpoline safe, and for which
> Intel released a statement saying "use lfence/jmp".
>
> I'm still trying to find it...

Any luck finding it? The only thing I found [1] isn't exactly a ringing
endorsement of LFENCE;JMP over retpolines.

[1] https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/branch-history-injection.html

--
Josh