2023-11-21 17:12:28

by Breno Leitao

Subject: [PATCH v6 00/13] x86/bugs: Add a separate config for each mitigation

Currently, CONFIG_SPECULATION_MITIGATIONS is only halfway populated:
some mitigations have entries in Kconfig and can be modified, while
other mitigations do not have Kconfig entries and cannot be controlled
at build time.

Having fine-grained control can help in a few ways:

1) Users can choose and pick only mitigations that are important for
their workloads.

2) Users and developers can choose to disable mitigations that mangle
the assembly code generation, making it hard to read.

3) Separate configs for just source code readability,
so that we see *which* butt-ugly piece of crap code is for what
reason.

Note that even if a mitigation is disabled at compile time, it can
still be enabled at runtime using kernel command-line arguments.
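
As a concrete (hypothetical) example, using an option name introduced
later in this series: a kernel built with the retbleed mitigation
compiled out can still enable it from the boot command line:

    # .config fragment at build time
    # CONFIG_MITIGATION_RETBLEED is not set

    # kernel command line at boot time
    retbleed=auto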

Discussion about this approach:
https://lore.kernel.org/all/CAHk-=wjTHeQjsqtHcBGvy9TaJQ5uAm5HrCDuOD9v7qA9U1Xr4w@mail.gmail.com/
and
https://lore.kernel.org/lkml/20231011044252.42bplzjsam3qsasz@treble/

In order to add the missing mitigations, some cleanup was done:

1) Create a namespace for mitigations, prepending MITIGATION_ to the
Kconfig entries.

2) Add the missing mitigations, so that every mitigation has a Kconfig
entry that can be easily configured by the user.

With this patchset applied, all configs have an individual entry under
CONFIG_SPECULATION_MITIGATIONS, and all of them start with
CONFIG_MITIGATION_.
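
For illustration, a hypothetical .config fragment after this series
could look like this (entry names taken from the patches below):

    CONFIG_SPECULATION_MITIGATIONS=y
    CONFIG_MITIGATION_RETPOLINE=y
    CONFIG_MITIGATION_RETHUNK=y
    CONFIG_MITIGATION_UNRET_ENTRY=y
    CONFIG_MITIGATION_IBRS_ENTRY=y
    CONFIG_MITIGATION_IBPB_ENTRY=y
    CONFIG_MITIGATION_SRSO=y
    CONFIG_MITIGATION_GDS=y
    # CONFIG_MITIGATION_GDS_FORCE is not set
    # CONFIG_MITIGATION_SLS is not set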

Changelog
---------
V1:
* Create a way to mitigate all (or none) of the hardware bugs
V2:
* Create Kconfig entries for only some hardware bugs (MDS, TAA, MMIO)
V3:
* Expand the mitigation Kconfig entries to all hardware bugs that
Linux mitigates.
V4:
* Patch rebase.
* Better documentation about the reasons of this decision.
V5:
* Create a "MITIGATION" Kconfig namespace for the entries mitigating
hardware bugs.
* Add GDS to the set of mitigations that are being covered.
* Reduce the ifdefs in the code by leveraging conditionals with omitted
operands.
V6:
* Reference documentation RST files from Kconfig entries
* Fix some grammar mistakes and Kconfig dependencies
* Now spectre v2 user depends on CONFIG_MITIGATION_SPECTRE_V2. See
patch "spectre_v2_user default mode depends on Kconfig"

Breno Leitao (13):
x86/bugs: Rename GDS_FORCE_MITIGATION to MITIGATION_GDS_FORCE
x86/bugs: Rename CPU_IBPB_ENTRY to MITIGATION_IBPB_ENTRY
x86/bugs: Rename CALL_DEPTH_TRACKING to MITIGATION_CALL_DEPTH_TRACKING
x86/bugs: Rename PAGE_TABLE_ISOLATION to MITIGATION_PAGE_TABLE_ISOLATION
x86/bugs: Rename RETPOLINE to MITIGATION_RETPOLINE
x86/bugs: Rename SLS to CONFIG_MITIGATION_SLS
x86/bugs: Rename CPU_UNRET_ENTRY to MITIGATION_UNRET_ENTRY
x86/bugs: Rename CPU_IBRS_ENTRY to MITIGATION_IBRS_ENTRY
x86/bugs: Rename CPU_SRSO to MITIGATION_SRSO
x86/bugs: Rename RETHUNK to MITIGATION_RETHUNK
x86/bugs: Create a way to disable GDS mitigation
x86/bugs: spectre_v2_user default mode depends on Kconfig
x86/bugs: Add a separate config for missing mitigation

Documentation/admin-guide/hw-vuln/spectre.rst | 8 +-
.../admin-guide/kernel-parameters.txt | 4 +-
Documentation/arch/x86/pti.rst | 6 +-
arch/x86/Kconfig | 151 +++++++++++++++---
arch/x86/Makefile | 8 +-
arch/x86/boot/compressed/ident_map_64.c | 4 +-
arch/x86/configs/i386_defconfig | 2 +-
arch/x86/entry/calling.h | 8 +-
arch/x86/entry/entry_64.S | 2 +-
arch/x86/entry/vdso/Makefile | 4 +-
arch/x86/include/asm/current.h | 2 +-
arch/x86/include/asm/disabled-features.h | 10 +-
arch/x86/include/asm/linkage.h | 16 +-
arch/x86/include/asm/nospec-branch.h | 30 ++--
arch/x86/include/asm/pgalloc.h | 2 +-
arch/x86/include/asm/pgtable-3level.h | 2 +-
arch/x86/include/asm/pgtable.h | 18 +--
arch/x86/include/asm/pgtable_64.h | 3 +-
arch/x86/include/asm/processor-flags.h | 2 +-
arch/x86/include/asm/pti.h | 2 +-
arch/x86/include/asm/static_call.h | 2 +-
arch/x86/kernel/alternative.c | 14 +-
arch/x86/kernel/asm-offsets.c | 2 +-
arch/x86/kernel/cpu/amd.c | 2 +-
arch/x86/kernel/cpu/bugs.c | 98 +++++++-----
arch/x86/kernel/dumpstack.c | 2 +-
arch/x86/kernel/ftrace.c | 3 +-
arch/x86/kernel/head_32.S | 4 +-
arch/x86/kernel/head_64.S | 2 +-
arch/x86/kernel/kprobes/opt.c | 2 +-
arch/x86/kernel/ldt.c | 8 +-
arch/x86/kernel/static_call.c | 2 +-
arch/x86/kernel/vmlinux.lds.S | 10 +-
arch/x86/kvm/mmu/mmu.c | 2 +-
arch/x86/kvm/mmu/mmu_internal.h | 2 +-
arch/x86/kvm/svm/svm.c | 2 +-
arch/x86/kvm/svm/vmenter.S | 4 +-
arch/x86/kvm/vmx/vmx.c | 2 +-
arch/x86/lib/Makefile | 2 +-
arch/x86/lib/retpoline.S | 26 +--
arch/x86/mm/Makefile | 2 +-
arch/x86/mm/debug_pagetables.c | 4 +-
arch/x86/mm/dump_pagetables.c | 4 +-
arch/x86/mm/pgtable.c | 4 +-
arch/x86/mm/tlb.c | 10 +-
arch/x86/net/bpf_jit_comp.c | 4 +-
arch/x86/net/bpf_jit_comp32.c | 2 +-
arch/x86/purgatory/Makefile | 2 +-
include/linux/compiler-gcc.h | 2 +-
include/linux/indirect_call_wrapper.h | 2 +-
include/linux/module.h | 2 +-
include/linux/objtool.h | 2 +-
include/linux/pti.h | 2 +-
include/net/netfilter/nf_tables_core.h | 2 +-
include/net/tc_wrapper.h | 2 +-
kernel/trace/ring_buffer.c | 2 +-
net/netfilter/Makefile | 2 +-
net/netfilter/nf_tables_core.c | 6 +-
net/netfilter/nft_ct.c | 4 +-
net/netfilter/nft_lookup.c | 2 +-
net/sched/sch_api.c | 2 +-
scripts/Makefile.lib | 8 +-
scripts/Makefile.vmlinux_o | 2 +-
scripts/generate_rust_target.rs | 2 +-
scripts/mod/modpost.c | 2 +-
.../arch/x86/include/asm/disabled-features.h | 10 +-
66 files changed, 344 insertions(+), 219 deletions(-)

--
2.34.1


2023-11-21 17:13:03

by Breno Leitao

Subject: [PATCH v6 09/13] x86/bugs: Rename CPU_SRSO to MITIGATION_SRSO

CPU mitigation config entries are inconsistent, and their names are
hard to relate to one another. There are concrete benefits for both
users and developers in having all the mitigation config options live
in the same config namespace.

The mitigation options should be consistent and start with the
MITIGATION_ prefix.

Rename the Kconfig entry from CPU_SRSO to MITIGATION_SRSO.

Suggested-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Breno Leitao <[email protected]>
---
arch/x86/Kconfig | 2 +-
arch/x86/include/asm/nospec-branch.h | 6 +++---
arch/x86/kernel/cpu/bugs.c | 8 ++++----
arch/x86/kernel/vmlinux.lds.S | 4 ++--
arch/x86/lib/retpoline.S | 10 +++++-----
include/linux/objtool.h | 2 +-
scripts/Makefile.vmlinux_o | 2 +-
7 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 392e94fded3d..30c2f880caf9 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2573,7 +2573,7 @@ config MITIGATION_IBRS_ENTRY
This mitigates both spectre_v2 and retbleed at great cost to
performance.

-config CPU_SRSO
+config MITIGATION_SRSO
bool "Mitigate speculative RAS overflow on AMD"
depends on CPU_SUP_AMD && X86_64 && RETHUNK
default y
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index e25e98f012a3..9ea93a298a43 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -212,7 +212,7 @@
*/
.macro VALIDATE_UNRET_END
#if defined(CONFIG_NOINSTR_VALIDATION) && \
- (defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO))
+ (defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_MITIGATION_SRSO))
ANNOTATE_RETPOLINE_SAFE
nop
#endif
@@ -271,7 +271,7 @@
.Lskip_rsb_\@:
.endm

-#if defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)
+#if defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_MITIGATION_SRSO)
#define CALL_UNTRAIN_RET "call entry_untrain_ret"
#else
#define CALL_UNTRAIN_RET ""
@@ -340,7 +340,7 @@ extern void retbleed_return_thunk(void);
static inline void retbleed_return_thunk(void) {}
#endif

-#ifdef CONFIG_CPU_SRSO
+#ifdef CONFIG_MITIGATION_SRSO
extern void srso_return_thunk(void);
extern void srso_alias_return_thunk(void);
#else
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index e11bacbd8f39..f2775417bda2 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2458,7 +2458,7 @@ static void __init srso_select_mitigation(void)
break;

case SRSO_CMD_SAFE_RET:
- if (IS_ENABLED(CONFIG_CPU_SRSO)) {
+ if (IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
/*
* Enable the return thunk for generated code
* like ftrace, static_call, etc.
@@ -2478,7 +2478,7 @@ static void __init srso_select_mitigation(void)
else
srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
} else {
- pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
+ pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
}
break;

@@ -2494,13 +2494,13 @@ static void __init srso_select_mitigation(void)
break;

case SRSO_CMD_IBPB_ON_VMEXIT:
- if (IS_ENABLED(CONFIG_CPU_SRSO)) {
+ if (IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {
setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
}
} else {
- pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
+ pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
}
break;
}
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index d7ee79b6756f..8d1143ab05b7 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -142,7 +142,7 @@ SECTIONS
*(.text..__x86.rethunk_untrain)
ENTRY_TEXT

-#ifdef CONFIG_CPU_SRSO
+#ifdef CONFIG_MITIGATION_SRSO
/*
* See the comment above srso_alias_untrain_ret()'s
* definition.
@@ -521,7 +521,7 @@ INIT_PER_CPU(irq_stack_backing_store);
. = ASSERT((retbleed_return_thunk & 0x3f) == 0, "retbleed_return_thunk not cacheline-aligned");
#endif

-#ifdef CONFIG_CPU_SRSO
+#ifdef CONFIG_MITIGATION_SRSO
. = ASSERT((srso_safe_ret & 0x3f) == 0, "srso_safe_ret not cacheline-aligned");
/*
* GNU ld cannot do XOR until 2.41.
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 0ad67ccadd4c..67b52cbec648 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -138,7 +138,7 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
*/
.section .text..__x86.return_thunk

-#ifdef CONFIG_CPU_SRSO
+#ifdef CONFIG_MITIGATION_SRSO

/*
* srso_alias_untrain_ret() and srso_alias_safe_ret() are placed at
@@ -225,10 +225,10 @@ SYM_CODE_END(srso_return_thunk)

#define JMP_SRSO_UNTRAIN_RET "jmp srso_untrain_ret"
#define JMP_SRSO_ALIAS_UNTRAIN_RET "jmp srso_alias_untrain_ret"
-#else /* !CONFIG_CPU_SRSO */
+#else /* !CONFIG_MITIGATION_SRSO */
#define JMP_SRSO_UNTRAIN_RET "ud2"
#define JMP_SRSO_ALIAS_UNTRAIN_RET "ud2"
-#endif /* CONFIG_CPU_SRSO */
+#endif /* CONFIG_MITIGATION_SRSO */

#ifdef CONFIG_MITIGATION_UNRET_ENTRY

@@ -316,7 +316,7 @@ SYM_FUNC_END(retbleed_untrain_ret)
#define JMP_RETBLEED_UNTRAIN_RET "ud2"
#endif /* CONFIG_MITIGATION_UNRET_ENTRY */

-#if defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)
+#if defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_MITIGATION_SRSO)

SYM_FUNC_START(entry_untrain_ret)
ALTERNATIVE_2 JMP_RETBLEED_UNTRAIN_RET, \
@@ -325,7 +325,7 @@ SYM_FUNC_START(entry_untrain_ret)
SYM_FUNC_END(entry_untrain_ret)
__EXPORT_THUNK(entry_untrain_ret)

-#endif /* CONFIG_MITIGATION_UNRET_ENTRY || CONFIG_CPU_SRSO */
+#endif /* CONFIG_MITIGATION_UNRET_ENTRY || CONFIG_MITIGATION_SRSO */

#ifdef CONFIG_MITIGATION_CALL_DEPTH_TRACKING

diff --git a/include/linux/objtool.h b/include/linux/objtool.h
index d030671a4c49..b3b8d3dab52d 100644
--- a/include/linux/objtool.h
+++ b/include/linux/objtool.h
@@ -131,7 +131,7 @@
*/
.macro VALIDATE_UNRET_BEGIN
#if defined(CONFIG_NOINSTR_VALIDATION) && \
- (defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO))
+ (defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_MITIGATION_SRSO))
.Lhere_\@:
.pushsection .discard.validate_unret
.long .Lhere_\@ - .
diff --git a/scripts/Makefile.vmlinux_o b/scripts/Makefile.vmlinux_o
index 6277dbd730bb..6de297916ce6 100644
--- a/scripts/Makefile.vmlinux_o
+++ b/scripts/Makefile.vmlinux_o
@@ -38,7 +38,7 @@ objtool-enabled := $(or $(delay-objtool),$(CONFIG_NOINSTR_VALIDATION))
vmlinux-objtool-args-$(delay-objtool) += $(objtool-args-y)
vmlinux-objtool-args-$(CONFIG_GCOV_KERNEL) += --no-unreachable
vmlinux-objtool-args-$(CONFIG_NOINSTR_VALIDATION) += --noinstr \
- $(if $(or $(CONFIG_MITIGATION_UNRET_ENTRY),$(CONFIG_CPU_SRSO)), --unret)
+ $(if $(or $(CONFIG_MITIGATION_UNRET_ENTRY),$(CONFIG_MITIGATION_SRSO)), --unret)

objtool-args = $(vmlinux-objtool-args-y) --link

--
2.34.1

2023-11-21 17:13:34

by Breno Leitao

Subject: [PATCH v6 06/13] x86/bugs: Rename SLS to CONFIG_MITIGATION_SLS

CPU mitigation config entries are inconsistent, and their names are
hard to relate to one another. There are concrete benefits for both
users and developers in having all the mitigation config options live
in the same config namespace.

The mitigation options should be consistent and start with the
MITIGATION_ prefix.

Rename the Kconfig entry from SLS to MITIGATION_SLS.

Suggested-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Breno Leitao <[email protected]>
---
arch/x86/Kconfig | 2 +-
arch/x86/Makefile | 2 +-
arch/x86/include/asm/linkage.h | 4 ++--
arch/x86/kernel/alternative.c | 4 ++--
arch/x86/kernel/ftrace.c | 3 ++-
arch/x86/net/bpf_jit_comp.c | 4 ++--
scripts/Makefile.lib | 2 +-
7 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 862be9b3b216..fa246de60cdb 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2580,7 +2580,7 @@ config CPU_SRSO
help
Enable the SRSO mitigation needed on AMD Zen1-4 machines.

-config SLS
+config MITIGATION_SLS
bool "Mitigate Straight-Line-Speculation"
depends on CC_HAS_SLS && X86_64
select OBJTOOL if HAVE_OBJTOOL
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index b8d23ed059fb..5ce8c30e7701 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -205,7 +205,7 @@ ifdef CONFIG_MITIGATION_RETPOLINE
endif
endif

-ifdef CONFIG_SLS
+ifdef CONFIG_MITIGATION_SLS
KBUILD_CFLAGS += -mharden-sls=all
endif

diff --git a/arch/x86/include/asm/linkage.h b/arch/x86/include/asm/linkage.h
index c5165204c66f..09e2d026df33 100644
--- a/arch/x86/include/asm/linkage.h
+++ b/arch/x86/include/asm/linkage.h
@@ -43,7 +43,7 @@
#if defined(CONFIG_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
#define RET jmp __x86_return_thunk
#else /* CONFIG_MITIGATION_RETPOLINE */
-#ifdef CONFIG_SLS
+#ifdef CONFIG_MITIGATION_SLS
#define RET ret; int3
#else
#define RET ret
@@ -55,7 +55,7 @@
#if defined(CONFIG_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
#define ASM_RET "jmp __x86_return_thunk\n\t"
#else /* CONFIG_MITIGATION_RETPOLINE */
-#ifdef CONFIG_SLS
+#ifdef CONFIG_MITIGATION_SLS
#define ASM_RET "ret; int3\n\t"
#else
#define ASM_RET "ret\n\t"
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 5ec887d065ce..b01d49862497 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -637,8 +637,8 @@ static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes)
/*
* The compiler is supposed to EMIT an INT3 after every unconditional
* JMP instruction due to AMD BTC. However, if the compiler is too old
- * or SLS isn't enabled, we still need an INT3 after indirect JMPs
- * even on Intel.
+ * or MITIGATION_SLS isn't enabled, we still need an INT3 after
+ * indirect JMPs even on Intel.
*/
if (op == JMP32_INSN_OPCODE && i < insn->length)
bytes[i++] = INT3_INSN_OPCODE;
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 93bc52d4a472..70139d9d2e01 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -307,7 +307,8 @@ union ftrace_op_code_union {
} __attribute__((packed));
};

-#define RET_SIZE (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) ? 5 : 1 + IS_ENABLED(CONFIG_SLS))
+#define RET_SIZE \
+ (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) ? 5 : 1 + IS_ENABLED(CONFIG_MITIGATION_SLS))

static unsigned long
create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index ef732f323926..96a63c4386a9 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -469,7 +469,7 @@ static void emit_indirect_jump(u8 **pprog, int reg, u8 *ip)
emit_jump(&prog, &__x86_indirect_thunk_array[reg], ip);
} else {
EMIT2(0xFF, 0xE0 + reg); /* jmp *%\reg */
- if (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) || IS_ENABLED(CONFIG_SLS))
+ if (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) || IS_ENABLED(CONFIG_MITIGATION_SLS))
EMIT1(0xCC); /* int3 */
}

@@ -484,7 +484,7 @@ static void emit_return(u8 **pprog, u8 *ip)
emit_jump(&prog, x86_return_thunk, ip);
} else {
EMIT1(0xC3); /* ret */
- if (IS_ENABLED(CONFIG_SLS))
+ if (IS_ENABLED(CONFIG_MITIGATION_SLS))
EMIT1(0xCC); /* int3 */
}

diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index d6e157938b5f..0d5461276179 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -264,7 +264,7 @@ endif
objtool-args-$(CONFIG_UNWINDER_ORC) += --orc
objtool-args-$(CONFIG_MITIGATION_RETPOLINE) += --retpoline
objtool-args-$(CONFIG_RETHUNK) += --rethunk
-objtool-args-$(CONFIG_SLS) += --sls
+objtool-args-$(CONFIG_MITIGATION_SLS) += --sls
objtool-args-$(CONFIG_STACK_VALIDATION) += --stackval
objtool-args-$(CONFIG_HAVE_STATIC_CALL_INLINE) += --static-call
objtool-args-$(CONFIG_HAVE_UACCESS_VALIDATION) += --uaccess
--
2.34.1

2023-11-21 17:13:39

by Breno Leitao

Subject: [PATCH v6 10/13] x86/bugs: Rename RETHUNK to MITIGATION_RETHUNK

CPU mitigation config entries are inconsistent, and their names are
hard to relate to one another. There are concrete benefits for both
users and developers in having all the mitigation config options live
in the same config namespace.

The mitigation options should be consistent and start with the
MITIGATION_ prefix.

Rename the Kconfig entry from RETHUNK to MITIGATION_RETHUNK.

Suggested-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Breno Leitao <[email protected]>
---
arch/x86/Kconfig | 8 ++++----
arch/x86/Makefile | 2 +-
arch/x86/configs/i386_defconfig | 2 +-
arch/x86/include/asm/disabled-features.h | 2 +-
arch/x86/include/asm/linkage.h | 4 ++--
arch/x86/include/asm/nospec-branch.h | 4 ++--
arch/x86/include/asm/static_call.h | 2 +-
arch/x86/kernel/alternative.c | 4 ++--
arch/x86/kernel/static_call.c | 2 +-
arch/x86/lib/retpoline.S | 4 ++--
scripts/Makefile.lib | 2 +-
tools/arch/x86/include/asm/disabled-features.h | 2 +-
12 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 30c2f880caf9..ee939de1bb05 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2465,7 +2465,7 @@ config FINEIBT

config HAVE_CALL_THUNKS
def_bool y
- depends on CC_HAS_ENTRY_PADDING && RETHUNK && OBJTOOL
+ depends on CC_HAS_ENTRY_PADDING && MITIGATION_RETHUNK && OBJTOOL

config CALL_THUNKS
def_bool n
@@ -2508,7 +2508,7 @@ config MITIGATION_RETPOLINE
branches. Requires a compiler with -mindirect-branch=thunk-extern
support for full protection. The kernel may run slower.

-config RETHUNK
+config MITIGATION_RETHUNK
bool "Enable return-thunks"
depends on MITIGATION_RETPOLINE && CC_HAS_RETURN_THUNK
select OBJTOOL if HAVE_OBJTOOL
@@ -2521,7 +2521,7 @@ config RETHUNK

config MITIGATION_UNRET_ENTRY
bool "Enable UNRET on kernel entry"
- depends on CPU_SUP_AMD && RETHUNK && X86_64
+ depends on CPU_SUP_AMD && MITIGATION_RETHUNK && X86_64
default y
help
Compile the kernel with support for the retbleed=unret mitigation.
@@ -2575,7 +2575,7 @@ config MITIGATION_IBRS_ENTRY

config MITIGATION_SRSO
bool "Mitigate speculative RAS overflow on AMD"
- depends on CPU_SUP_AMD && X86_64 && RETHUNK
+ depends on CPU_SUP_AMD && X86_64 && MITIGATION_RETHUNK
default y
help
Enable the SRSO mitigation needed on AMD Zen1-4 machines.
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 5ce8c30e7701..ba046afb850e 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -22,7 +22,7 @@ RETPOLINE_VDSO_CFLAGS := -mretpoline
endif
RETPOLINE_CFLAGS += $(call cc-option,-mindirect-branch-cs-prefix)

-ifdef CONFIG_RETHUNK
+ifdef CONFIG_MITIGATION_RETHUNK
RETHUNK_CFLAGS := -mfunction-return=thunk-extern
RETPOLINE_CFLAGS += $(RETHUNK_CFLAGS)
endif
diff --git a/arch/x86/configs/i386_defconfig b/arch/x86/configs/i386_defconfig
index 73abbbdd26f8..91801138b10b 100644
--- a/arch/x86/configs/i386_defconfig
+++ b/arch/x86/configs/i386_defconfig
@@ -42,7 +42,7 @@ CONFIG_EFI_STUB=y
CONFIG_HZ_1000=y
CONFIG_KEXEC=y
CONFIG_CRASH_DUMP=y
-# CONFIG_RETHUNK is not set
+# CONFIG_MITIGATION_RETHUNK is not set
CONFIG_HIBERNATION=y
CONFIG_PM_DEBUG=y
CONFIG_PM_TRACE_RTC=y
diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
index 151f0d50e7e0..36d0c1e05e60 100644
--- a/arch/x86/include/asm/disabled-features.h
+++ b/arch/x86/include/asm/disabled-features.h
@@ -57,7 +57,7 @@
(1 << (X86_FEATURE_RETPOLINE_LFENCE & 31)))
#endif

-#ifdef CONFIG_RETHUNK
+#ifdef CONFIG_MITIGATION_RETHUNK
# define DISABLE_RETHUNK 0
#else
# define DISABLE_RETHUNK (1 << (X86_FEATURE_RETHUNK & 31))
diff --git a/arch/x86/include/asm/linkage.h b/arch/x86/include/asm/linkage.h
index 09e2d026df33..dc31b13b87a0 100644
--- a/arch/x86/include/asm/linkage.h
+++ b/arch/x86/include/asm/linkage.h
@@ -40,7 +40,7 @@

#ifdef __ASSEMBLY__

-#if defined(CONFIG_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
+#if defined(CONFIG_MITIGATION_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
#define RET jmp __x86_return_thunk
#else /* CONFIG_MITIGATION_RETPOLINE */
#ifdef CONFIG_MITIGATION_SLS
@@ -52,7 +52,7 @@

#else /* __ASSEMBLY__ */

-#if defined(CONFIG_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
+#if defined(CONFIG_MITIGATION_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
#define ASM_RET "jmp __x86_return_thunk\n\t"
#else /* CONFIG_MITIGATION_RETPOLINE */
#ifdef CONFIG_MITIGATION_SLS
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 9ea93a298a43..33f76848c838 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -289,7 +289,7 @@
* where we have a stack but before any RET instruction.
*/
.macro __UNTRAIN_RET ibpb_feature, call_depth_insns
-#if defined(CONFIG_RETHUNK) || defined(CONFIG_MITIGATION_IBPB_ENTRY)
+#if defined(CONFIG_MITIGATION_RETHUNK) || defined(CONFIG_MITIGATION_IBPB_ENTRY)
VALIDATE_UNRET_END
ALTERNATIVE_3 "", \
CALL_UNTRAIN_RET, X86_FEATURE_UNRET, \
@@ -328,7 +328,7 @@ extern retpoline_thunk_t __x86_indirect_thunk_array[];
extern retpoline_thunk_t __x86_indirect_call_thunk_array[];
extern retpoline_thunk_t __x86_indirect_jump_thunk_array[];

-#ifdef CONFIG_RETHUNK
+#ifdef CONFIG_MITIGATION_RETHUNK
extern void __x86_return_thunk(void);
#else
static inline void __x86_return_thunk(void) {}
diff --git a/arch/x86/include/asm/static_call.h b/arch/x86/include/asm/static_call.h
index 343b722ccaf2..125c407e2abe 100644
--- a/arch/x86/include/asm/static_call.h
+++ b/arch/x86/include/asm/static_call.h
@@ -46,7 +46,7 @@
#define ARCH_DEFINE_STATIC_CALL_TRAMP(name, func) \
__ARCH_DEFINE_STATIC_CALL_TRAMP(name, ".byte 0xe9; .long " #func " - (. + 4)")

-#ifdef CONFIG_RETHUNK
+#ifdef CONFIG_MITIGATION_RETHUNK
#define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name) \
__ARCH_DEFINE_STATIC_CALL_TRAMP(name, "jmp __x86_return_thunk")
#else
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index b01d49862497..f7c11bef19bb 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -698,7 +698,7 @@ void __init_or_module noinline apply_retpolines(s32 *start, s32 *end)
}
}

-#ifdef CONFIG_RETHUNK
+#ifdef CONFIG_MITIGATION_RETHUNK

/*
* Rewrite the compiler generated return thunk tail-calls.
@@ -771,7 +771,7 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end)
}
#else
void __init_or_module noinline apply_returns(s32 *start, s32 *end) { }
-#endif /* CONFIG_RETHUNK */
+#endif /* CONFIG_MITIGATION_RETHUNK */

#else /* !CONFIG_MITIGATION_RETPOLINE || !CONFIG_OBJTOOL */

diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c
index 77a9316da435..4eefaac64c6c 100644
--- a/arch/x86/kernel/static_call.c
+++ b/arch/x86/kernel/static_call.c
@@ -172,7 +172,7 @@ void arch_static_call_transform(void *site, void *tramp, void *func, bool tail)
}
EXPORT_SYMBOL_GPL(arch_static_call_transform);

-#ifdef CONFIG_RETHUNK
+#ifdef CONFIG_MITIGATION_RETHUNK
/*
* This is called by apply_returns() to fix up static call trampolines,
* specifically ARCH_DEFINE_STATIC_CALL_NULL_TRAMP which is recorded as
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 67b52cbec648..0045153ba222 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -127,7 +127,7 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
#undef GEN
#endif

-#ifdef CONFIG_RETHUNK
+#ifdef CONFIG_MITIGATION_RETHUNK

/*
* Be careful here: that label cannot really be removed because in
@@ -386,4 +386,4 @@ SYM_CODE_START(__x86_return_thunk)
SYM_CODE_END(__x86_return_thunk)
EXPORT_SYMBOL(__x86_return_thunk)

-#endif /* CONFIG_RETHUNK */
+#endif /* CONFIG_MITIGATION_RETHUNK */
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 0d5461276179..48a4a81edac1 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -263,7 +263,7 @@ objtool-args-$(CONFIG_HAVE_OBJTOOL_NOP_MCOUNT) += --mnop
endif
objtool-args-$(CONFIG_UNWINDER_ORC) += --orc
objtool-args-$(CONFIG_MITIGATION_RETPOLINE) += --retpoline
-objtool-args-$(CONFIG_RETHUNK) += --rethunk
+objtool-args-$(CONFIG_MITIGATION_RETHUNK) += --rethunk
objtool-args-$(CONFIG_MITIGATION_SLS) += --sls
objtool-args-$(CONFIG_STACK_VALIDATION) += --stackval
objtool-args-$(CONFIG_HAVE_STATIC_CALL_INLINE) += --static-call
diff --git a/tools/arch/x86/include/asm/disabled-features.h b/tools/arch/x86/include/asm/disabled-features.h
index 4b816f55c634..bd7071f34f6b 100644
--- a/tools/arch/x86/include/asm/disabled-features.h
+++ b/tools/arch/x86/include/asm/disabled-features.h
@@ -57,7 +57,7 @@
(1 << (X86_FEATURE_RETPOLINE_LFENCE & 31)))
#endif

-#ifdef CONFIG_RETHUNK
+#ifdef CONFIG_MITIGATION_RETHUNK
# define DISABLE_RETHUNK 0
#else
# define DISABLE_RETHUNK (1 << (X86_FEATURE_RETHUNK & 31))
--
2.34.1

2023-11-21 17:13:47

by Breno Leitao

Subject: [PATCH v6 08/13] x86/bugs: Rename CPU_IBRS_ENTRY to MITIGATION_IBRS_ENTRY

CPU mitigation config entries are inconsistent, and their names are
hard to relate to one another. There are concrete benefits for both
users and developers in having all the mitigation config options live
in the same config namespace.

The mitigation options should be consistent and start with the
MITIGATION_ prefix.

Rename the Kconfig entry from CPU_IBRS_ENTRY to MITIGATION_IBRS_ENTRY.

Suggested-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Breno Leitao <[email protected]>
---
arch/x86/Kconfig | 2 +-
arch/x86/entry/calling.h | 4 ++--
arch/x86/kernel/cpu/bugs.c | 4 ++--
3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index fa078d3655ff..392e94fded3d 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2564,7 +2564,7 @@ config MITIGATION_IBPB_ENTRY
help
Compile the kernel with support for the retbleed=ibpb mitigation.

-config CPU_IBRS_ENTRY
+config MITIGATION_IBRS_ENTRY
bool "Enable IBRS on kernel entry"
depends on CPU_SUP_INTEL && X86_64
default y
diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
index ace89d5c1ddd..2afdff42c107 100644
--- a/arch/x86/entry/calling.h
+++ b/arch/x86/entry/calling.h
@@ -297,7 +297,7 @@ For 32-bit we have the following conventions - kernel is built with
* Assumes x86_spec_ctrl_{base,current} to have SPEC_CTRL_IBRS set.
*/
.macro IBRS_ENTER save_reg
-#ifdef CONFIG_CPU_IBRS_ENTRY
+#ifdef CONFIG_MITIGATION_IBRS_ENTRY
ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_KERNEL_IBRS
movl $MSR_IA32_SPEC_CTRL, %ecx

@@ -326,7 +326,7 @@ For 32-bit we have the following conventions - kernel is built with
* regs. Must be called after the last RET.
*/
.macro IBRS_EXIT save_reg
-#ifdef CONFIG_CPU_IBRS_ENTRY
+#ifdef CONFIG_MITIGATION_IBRS_ENTRY
ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_KERNEL_IBRS
movl $MSR_IA32_SPEC_CTRL, %ecx

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 2580368c32d1..e11bacbd8f39 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1439,7 +1439,7 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
return SPECTRE_V2_CMD_AUTO;
}

- if (cmd == SPECTRE_V2_CMD_IBRS && !IS_ENABLED(CONFIG_CPU_IBRS_ENTRY)) {
+ if (cmd == SPECTRE_V2_CMD_IBRS && !IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY)) {
pr_err("%s selected but not compiled in. Switching to AUTO select\n",
mitigation_options[i].option);
return SPECTRE_V2_CMD_AUTO;
@@ -1565,7 +1565,7 @@ static void __init spectre_v2_select_mitigation(void)
break;
}

- if (IS_ENABLED(CONFIG_CPU_IBRS_ENTRY) &&
+ if (IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY) &&
boot_cpu_has_bug(X86_BUG_RETBLEED) &&
retbleed_cmd != RETBLEED_CMD_OFF &&
retbleed_cmd != RETBLEED_CMD_STUFF &&
--
2.34.1

2023-11-21 17:14:01

by Breno Leitao

Subject: [PATCH v6 12/13] x86/bugs: spectre_v2_user default mode depends on Kconfig

Change the default value of spectre v2 in user mode to respect the
CONFIG_MITIGATION_SPECTRE_V2 config option.

Currently, user mode spectre v2 is set to auto
(SPECTRE_V2_USER_CMD_AUTO) by default, even if
CONFIG_MITIGATION_SPECTRE_V2 is disabled.

Set the spectre_v2_user default to auto (SPECTRE_V2_USER_CMD_AUTO) if
the Spectre v2 config (CONFIG_MITIGATION_SPECTRE_V2) is enabled;
otherwise set it to none (SPECTRE_V2_USER_CMD_NONE).

Note that the "spectre_v2_user" command-line argument overrides the
default value in both cases.
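
For instance (hypothetical configuration), with
CONFIG_MITIGATION_SPECTRE_V2=n the default becomes
SPECTRE_V2_USER_CMD_NONE, but booting with:

    spectre_v2_user=on

still selects the requested mode.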

Signed-off-by: Breno Leitao <[email protected]>
---
arch/x86/kernel/cpu/bugs.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 0172bb0f61fe..a99f8c064682 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1216,8 +1216,11 @@ static __ro_after_init enum spectre_v2_mitigation_cmd spectre_v2_cmd;
static enum spectre_v2_user_cmd __init
spectre_v2_parse_user_cmdline(void)
{
+ int ret, i, mode;
char arg[20];
- int ret, i;
+
+ mode = IS_ENABLED(CONFIG_MITIGATION_SPECTRE_V2) ?
+ SPECTRE_V2_USER_CMD_AUTO : SPECTRE_V2_USER_CMD_NONE;

switch (spectre_v2_cmd) {
case SPECTRE_V2_CMD_NONE:
@@ -1231,7 +1234,7 @@ spectre_v2_parse_user_cmdline(void)
ret = cmdline_find_option(boot_command_line, "spectre_v2_user",
arg, sizeof(arg));
if (ret < 0)
- return SPECTRE_V2_USER_CMD_AUTO;
+ return mode;

for (i = 0; i < ARRAY_SIZE(v2_user_options); i++) {
if (match_option(arg, ret, v2_user_options[i].option)) {
@@ -1241,8 +1244,8 @@ spectre_v2_parse_user_cmdline(void)
}
}

- pr_err("Unknown user space protection option (%s). Switching to AUTO select\n", arg);
- return SPECTRE_V2_USER_CMD_AUTO;
+ pr_err("Unknown user space protection option (%s). Switching to default\n", arg);
+ return mode;
}

static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
--
2.34.1

2023-11-21 17:14:06

by Breno Leitao

Subject: [PATCH v6 11/13] x86/bugs: Create a way to disable GDS mitigation

Currently there is no way to disable the GDS mitigation at build time.
The existing config option (GDS_MITIGATION_FORCE) just enables a more
drastic mitigation.

Create a new kernel config that allows GDS to be completely disabled,
similarly to the "gather_data_sampling=off" or "mitigations=off" kernel
command-line options. Move GDS_MITIGATION_FORCE under this new option.
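
A rough build-time/boot-time correspondence with the new options (a
sketch; only the "force" equivalence is stated verbatim in the Kconfig
help text below):

    CONFIG_MITIGATION_GDS=n          ~ gather_data_sampling=off
    CONFIG_MITIGATION_GDS=y          ~ default (full) GDS mitigation
    CONFIG_MITIGATION_GDS_FORCE=y    ~ gather_data_sampling=force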

Suggested-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Breno Leitao <[email protected]>
---
arch/x86/Kconfig | 18 +++++++++++++-----
arch/x86/kernel/cpu/bugs.c | 7 ++++---
2 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index ee939de1bb05..b1a59e5d6fb6 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2590,15 +2590,23 @@ config MITIGATION_SLS
against straight line speculation. The kernel image might be slightly
larger.

+config MITIGATION_GDS
+ bool "Mitigate Gather Data Sampling"
+ depends on CPU_SUP_INTEL
+ default y
+ help
+ Enable mitigation for Gather Data Sampling (GDS). GDS is a hardware
+ vulnerability which allows unprivileged speculative access to data
+ which was previously stored in vector registers. The attacker uses
+ gather instructions to infer the stale vector register data.
+ See also
+ <file:Documentation/admin-guide/hw-vuln/gather_data_sampling.rst>
+
config MITIGATION_GDS_FORCE
bool "Force GDS Mitigation"
- depends on CPU_SUP_INTEL
+ depends on MITIGATION_GDS
default n
help
- Gather Data Sampling (GDS) is a hardware vulnerability which allows
- unprivileged speculative access to data which was previously stored in
- vector registers.
-
This option is equivalent to setting gather_data_sampling=force on the
command line. The microcode mitigation is used if present, otherwise
AVX is disabled as a mitigation. On affected systems that are missing
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index f2775417bda2..0172bb0f61fe 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -671,10 +671,11 @@ enum gds_mitigations {
GDS_MITIGATION_HYPERVISOR,
};

-#if IS_ENABLED(CONFIG_MITIGATION_GDS_FORCE)
-static enum gds_mitigations gds_mitigation __ro_after_init = GDS_MITIGATION_FORCE;
+#if IS_ENABLED(CONFIG_MITIGATION_GDS)
+static enum gds_mitigations gds_mitigation __ro_after_init =
+ IS_ENABLED(CONFIG_MITIGATION_GDS_FORCE) ? GDS_MITIGATION_FORCE : GDS_MITIGATION_FULL;
#else
-static enum gds_mitigations gds_mitigation __ro_after_init = GDS_MITIGATION_FULL;
+static enum gds_mitigations gds_mitigation __ro_after_init = GDS_MITIGATION_OFF;
#endif

static const char * const gds_strings[] = {
--
2.34.1

2023-11-21 17:14:09

by Breno Leitao

Subject: [PATCH v6 13/13] x86/bugs: Add a separate config for missing mitigation

Currently, CONFIG_SPECULATION_MITIGATIONS is only halfway populated:
some mitigations have entries in Kconfig and can be modified, while
other mitigations do not have Kconfig entries and cannot be controlled
at build time.

Create an entry for each CPU mitigation under
CONFIG_SPECULATION_MITIGATIONS. This allows users to enable or disable
them at compile time.
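
For example, a user who wants to opt out of the MDS and TAA mitigations
at build time (a hypothetical configuration) can now set:

    CONFIG_SPECULATION_MITIGATIONS=y
    # CONFIG_MITIGATION_MDS is not set
    # CONFIG_MITIGATION_TAA is not set

and the corresponding defaults in bugs.c fall back to the "off"
variants unless overridden on the kernel command line.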

Signed-off-by: Breno Leitao <[email protected]>
---
arch/x86/Kconfig | 101 +++++++++++++++++++++++++++++++++++++
arch/x86/kernel/cpu/bugs.c | 39 ++++++++------
2 files changed, 125 insertions(+), 15 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index b1a59e5d6fb6..def8561bc2a3 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2617,6 +2617,107 @@ config MITIGATION_GDS_FORCE

If in doubt, say N.

+config MITIGATION_MDS
+ bool "Mitigate Microarchitectural Data Sampling (MDS) hardware bug"
+ depends on CPU_SUP_INTEL
+ default y
+ help
+ Enable mitigation for Microarchitectural Data Sampling (MDS). MDS is
+ a hardware vulnerability which allows unprivileged speculative access
+ to data which is available in various CPU internal buffers.
+ See also <file:Documentation/admin-guide/hw-vuln/mds.rst>
+
+config MITIGATION_TAA
+ bool "Mitigate TSX Asynchronous Abort (TAA) hardware bug"
+ depends on CPU_SUP_INTEL
+ default y
+ help
+ Enable mitigation for TSX Asynchronous Abort (TAA). TAA is a hardware
+ vulnerability that allows unprivileged speculative access to data
+ which is available in various CPU internal buffers by using
+ asynchronous aborts within an Intel TSX transactional region.
+ See also <file:Documentation/admin-guide/hw-vuln/tsx_async_abort.rst>
+
+config MITIGATION_MMIO_STALE_DATA
+ bool "Mitigate MMIO Stale Data hardware bug"
+ depends on CPU_SUP_INTEL
+ default y
+ help
+ Enable mitigation for MMIO Stale Data hardware bugs. Processor MMIO
+ Stale Data Vulnerabilities are a class of memory-mapped I/O (MMIO)
+ vulnerabilities that can expose data. The vulnerabilities require the
+ attacker to have access to MMIO.
+ See also
+ <file:Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst>
+
+config MITIGATION_L1TF
+ bool "Mitigate L1 Terminal Fault (L1TF) hardware bug"
+ depends on CPU_SUP_INTEL
+ default y
+ help
+ Mitigate L1 Terminal Fault (L1TF) hardware bug. L1 Terminal Fault is a
+ hardware vulnerability which allows unprivileged speculative access to data
+ available in the Level 1 Data Cache.
+ See also <file:Documentation/admin-guide/hw-vuln/l1tf.rst>
+
+config MITIGATION_RETBLEED
+ bool "Mitigate RETBleed hardware bug"
+ depends on CPU_SUP_INTEL || (CPU_SUP_AMD && MITIGATION_UNRET_ENTRY)
+ default y
+ help
+ Enable mitigation for RETBleed (Arbitrary Speculative Code Execution
+ with Return Instructions) vulnerability. RETBleed is a speculative
+ execution attack which takes advantage of microarchitectural behavior
+ in many modern microprocessors, similar to Spectre v2. An
+ unprivileged attacker can use these flaws to bypass conventional
+ memory security restrictions to gain read access to privileged memory
+ that would otherwise be inaccessible.
+
+config MITIGATION_SPECTRE_V1
+ bool "Mitigate SPECTRE V1 hardware bug"
+ default y
+ help
+ Enable mitigation for Spectre V1 (Bounds Check Bypass). Spectre V1 is a
+ class of side channel attacks that takes advantage of speculative
+ execution that bypasses conditional branch instructions used for
+ memory access bounds check.
+ See also <file:Documentation/admin-guide/hw-vuln/spectre.rst>
+
+config MITIGATION_SPECTRE_V2
+ bool "Mitigate SPECTRE V2 hardware bug"
+ default y
+ help
+ Enable mitigation for Spectre V2 (Branch Target Injection). Spectre
+ V2 is a class of side channel attacks that takes advantage of
+ indirect branch predictors inside the processor. In Spectre variant 2
+ attacks, the attacker can steer speculative indirect branches in the
+ victim to gadget code by poisoning the branch target buffer of a CPU
+ used for predicting indirect branch addresses.
+ See also <file:Documentation/admin-guide/hw-vuln/spectre.rst>
+
+config MITIGATION_SRBDS
+ bool "Mitigate Special Register Buffer Data Sampling (SRBDS) hardware bug"
+ depends on CPU_SUP_INTEL
+ default y
+ help
+ Enable mitigation for Special Register Buffer Data Sampling (SRBDS).
+ SRBDS is a hardware vulnerability that allows Microarchitectural Data
+ Sampling (MDS) techniques to infer values returned from special
+ register accesses. An unprivileged user can extract values returned
+ from RDRAND and RDSEED executed on another core or sibling thread
+ using MDS techniques.
+ See also
+ <file:Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst>
+
+config MITIGATION_SSB
+ bool "Mitigate Speculative Store Bypass (SSB) hardware bug"
+ default y
+ help
+ Enable mitigation for Speculative Store Bypass (SSB). SSB is a
+ hardware security vulnerability and its exploitation takes advantage
+ of speculative execution in a similar way to the Meltdown and Spectre
+ security vulnerabilities.
+
endif

config ARCH_HAS_ADD_PAGES
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index a99f8c064682..4f1da92784c6 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -232,7 +232,8 @@ static void x86_amd_ssb_disable(void)
#define pr_fmt(fmt) "MDS: " fmt

/* Default mitigation for MDS-affected CPUs */
-static enum mds_mitigations mds_mitigation __ro_after_init = MDS_MITIGATION_FULL;
+static enum mds_mitigations mds_mitigation __ro_after_init =
+ IS_ENABLED(CONFIG_MITIGATION_MDS) ? MDS_MITIGATION_FULL : MDS_MITIGATION_OFF;
static bool mds_nosmt __ro_after_init = false;

static const char * const mds_strings[] = {
@@ -292,7 +293,8 @@ enum taa_mitigations {
};

/* Default mitigation for TAA-affected CPUs */
-static enum taa_mitigations taa_mitigation __ro_after_init = TAA_MITIGATION_VERW;
+static enum taa_mitigations taa_mitigation __ro_after_init =
+ IS_ENABLED(CONFIG_MITIGATION_TAA) ? TAA_MITIGATION_VERW : TAA_MITIGATION_OFF;
static bool taa_nosmt __ro_after_init;

static const char * const taa_strings[] = {
@@ -393,7 +395,8 @@ enum mmio_mitigations {
};

/* Default mitigation for Processor MMIO Stale Data vulnerabilities */
-static enum mmio_mitigations mmio_mitigation __ro_after_init = MMIO_MITIGATION_VERW;
+static enum mmio_mitigations mmio_mitigation __ro_after_init =
+ IS_ENABLED(CONFIG_MITIGATION_MMIO_STALE_DATA) ? MMIO_MITIGATION_VERW : MMIO_MITIGATION_OFF;
static bool mmio_nosmt __ro_after_init = false;

static const char * const mmio_strings[] = {
@@ -542,7 +545,8 @@ enum srbds_mitigations {
SRBDS_MITIGATION_HYPERVISOR,
};

-static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
+static enum srbds_mitigations srbds_mitigation __ro_after_init =
+ IS_ENABLED(CONFIG_MITIGATION_SRBDS) ? SRBDS_MITIGATION_FULL : SRBDS_MITIGATION_OFF;

static const char * const srbds_strings[] = {
[SRBDS_MITIGATION_OFF] = "Vulnerable",
@@ -812,7 +816,8 @@ enum spectre_v1_mitigation {
};

static enum spectre_v1_mitigation spectre_v1_mitigation __ro_after_init =
- SPECTRE_V1_MITIGATION_AUTO;
+ IS_ENABLED(CONFIG_MITIGATION_SPECTRE_V1) ?
+ SPECTRE_V1_MITIGATION_AUTO : SPECTRE_V1_MITIGATION_NONE;

static const char * const spectre_v1_strings[] = {
[SPECTRE_V1_MITIGATION_NONE] = "Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers",
@@ -927,7 +932,7 @@ static const char * const retbleed_strings[] = {
static enum retbleed_mitigation retbleed_mitigation __ro_after_init =
RETBLEED_MITIGATION_NONE;
static enum retbleed_mitigation_cmd retbleed_cmd __ro_after_init =
- RETBLEED_CMD_AUTO;
+ IS_ENABLED(CONFIG_MITIGATION_RETBLEED) ? RETBLEED_CMD_AUTO : RETBLEED_CMD_OFF;

static int __ro_after_init retbleed_nosmt = false;

@@ -1391,17 +1396,18 @@ static void __init spec_v2_print_cond(const char *reason, bool secure)

static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
{
- enum spectre_v2_mitigation_cmd cmd = SPECTRE_V2_CMD_AUTO;
+ enum spectre_v2_mitigation_cmd cmd;
char arg[20];
int ret, i;

+ cmd = IS_ENABLED(CONFIG_MITIGATION_SPECTRE_V2) ? SPECTRE_V2_CMD_AUTO : SPECTRE_V2_CMD_NONE;
if (cmdline_find_option_bool(boot_command_line, "nospectre_v2") ||
cpu_mitigations_off())
return SPECTRE_V2_CMD_NONE;

ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));
if (ret < 0)
- return SPECTRE_V2_CMD_AUTO;
+ return cmd;

for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) {
if (!match_option(arg, ret, mitigation_options[i].option))
@@ -1411,8 +1417,8 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
}

if (i >= ARRAY_SIZE(mitigation_options)) {
- pr_err("unknown option (%s). Switching to AUTO select\n", arg);
- return SPECTRE_V2_CMD_AUTO;
+ pr_err("unknown option (%s). Switching to default mode\n", arg);
+ return cmd;
}

if ((cmd == SPECTRE_V2_CMD_RETPOLINE ||
@@ -1885,10 +1891,12 @@ static const struct {

static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
{
- enum ssb_mitigation_cmd cmd = SPEC_STORE_BYPASS_CMD_AUTO;
+ enum ssb_mitigation_cmd cmd;
char arg[20];
int ret, i;

+ cmd = IS_ENABLED(CONFIG_MITIGATION_SSB) ?
+ SPEC_STORE_BYPASS_CMD_AUTO : SPEC_STORE_BYPASS_CMD_NONE;
if (cmdline_find_option_bool(boot_command_line, "nospec_store_bypass_disable") ||
cpu_mitigations_off()) {
return SPEC_STORE_BYPASS_CMD_NONE;
@@ -1896,7 +1904,7 @@ static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
ret = cmdline_find_option(boot_command_line, "spec_store_bypass_disable",
arg, sizeof(arg));
if (ret < 0)
- return SPEC_STORE_BYPASS_CMD_AUTO;
+ return cmd;

for (i = 0; i < ARRAY_SIZE(ssb_mitigation_options); i++) {
if (!match_option(arg, ret, ssb_mitigation_options[i].option))
@@ -1907,8 +1915,8 @@ static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
}

if (i >= ARRAY_SIZE(ssb_mitigation_options)) {
- pr_err("unknown option (%s). Switching to AUTO select\n", arg);
- return SPEC_STORE_BYPASS_CMD_AUTO;
+ pr_err("unknown option (%s). Switching to default mode\n", arg);
+ return cmd;
}
}

@@ -2235,7 +2243,8 @@ EXPORT_SYMBOL_GPL(itlb_multihit_kvm_mitigation);
#define pr_fmt(fmt) "L1TF: " fmt

/* Default mitigation for L1TF-affected CPUs */
-enum l1tf_mitigations l1tf_mitigation __ro_after_init = L1TF_MITIGATION_FLUSH;
+enum l1tf_mitigations l1tf_mitigation __ro_after_init =
+ IS_ENABLED(CONFIG_MITIGATION_L1TF) ? L1TF_MITIGATION_FLUSH : L1TF_MITIGATION_OFF;
#if IS_ENABLED(CONFIG_KVM_INTEL)
EXPORT_SYMBOL_GPL(l1tf_mitigation);
#endif
--
2.34.1

2023-11-21 17:14:27

by Breno Leitao

Subject: [PATCH v6 07/13] x86/bugs: Rename CPU_UNRET_ENTRY to MITIGATION_UNRET_ENTRY

CPU mitigation config entries are inconsistent, and their names are
hard to relate to one another. There are concrete benefits for both
users and developers in having all the mitigation config options live
in the same config namespace.

The mitigation options should be consistent and start with the
MITIGATION_ prefix.

Rename the Kconfig entry from CPU_UNRET_ENTRY to MITIGATION_UNRET_ENTRY.

Suggested-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Breno Leitao <[email protected]>
---
arch/x86/Kconfig | 2 +-
arch/x86/include/asm/disabled-features.h | 2 +-
arch/x86/include/asm/nospec-branch.h | 6 +++---
arch/x86/kernel/cpu/amd.c | 2 +-
arch/x86/kernel/cpu/bugs.c | 6 +++---
arch/x86/kernel/vmlinux.lds.S | 2 +-
arch/x86/lib/retpoline.S | 10 +++++-----
include/linux/objtool.h | 2 +-
scripts/Makefile.vmlinux_o | 2 +-
tools/arch/x86/include/asm/disabled-features.h | 2 +-
10 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index fa246de60cdb..fa078d3655ff 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2519,7 +2519,7 @@ config RETHUNK
Requires a compiler with -mfunction-return=thunk-extern
support for full protection. The kernel may run slower.

-config CPU_UNRET_ENTRY
+config MITIGATION_UNRET_ENTRY
bool "Enable UNRET on kernel entry"
depends on CPU_SUP_AMD && RETHUNK && X86_64
default y
diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
index 24e4010c33b6..151f0d50e7e0 100644
--- a/arch/x86/include/asm/disabled-features.h
+++ b/arch/x86/include/asm/disabled-features.h
@@ -63,7 +63,7 @@
# define DISABLE_RETHUNK (1 << (X86_FEATURE_RETHUNK & 31))
#endif

-#ifdef CONFIG_CPU_UNRET_ENTRY
+#ifdef CONFIG_MITIGATION_UNRET_ENTRY
# define DISABLE_UNRET 0
#else
# define DISABLE_UNRET (1 << (X86_FEATURE_UNRET & 31))
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index cab7c937c71b..e25e98f012a3 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -212,7 +212,7 @@
*/
.macro VALIDATE_UNRET_END
#if defined(CONFIG_NOINSTR_VALIDATION) && \
- (defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO))
+ (defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO))
ANNOTATE_RETPOLINE_SAFE
nop
#endif
@@ -271,7 +271,7 @@
.Lskip_rsb_\@:
.endm

-#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)
+#if defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)
#define CALL_UNTRAIN_RET "call entry_untrain_ret"
#else
#define CALL_UNTRAIN_RET ""
@@ -334,7 +334,7 @@ extern void __x86_return_thunk(void);
static inline void __x86_return_thunk(void) {}
#endif

-#ifdef CONFIG_CPU_UNRET_ENTRY
+#ifdef CONFIG_MITIGATION_UNRET_ENTRY
extern void retbleed_return_thunk(void);
#else
static inline void retbleed_return_thunk(void) {}
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index a7eab05e5f29..8d38299cec83 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -943,7 +943,7 @@ static void init_amd_bd(struct cpuinfo_x86 *c)

void init_spectral_chicken(struct cpuinfo_x86 *c)
{
-#ifdef CONFIG_CPU_UNRET_ENTRY
+#ifdef CONFIG_MITIGATION_UNRET_ENTRY
u64 value;

/*
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index fc46fd6447f9..2580368c32d1 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -982,10 +982,10 @@ static void __init retbleed_select_mitigation(void)
return;

case RETBLEED_CMD_UNRET:
- if (IS_ENABLED(CONFIG_CPU_UNRET_ENTRY)) {
+ if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY)) {
retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
} else {
- pr_err("WARNING: kernel not compiled with CPU_UNRET_ENTRY.\n");
+ pr_err("WARNING: kernel not compiled with MITIGATION_UNRET_ENTRY.\n");
goto do_cmd_auto;
}
break;
@@ -1021,7 +1021,7 @@ static void __init retbleed_select_mitigation(void)
case RETBLEED_CMD_AUTO:
if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
- if (IS_ENABLED(CONFIG_CPU_UNRET_ENTRY))
+ if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY))
retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY) &&
boot_cpu_has(X86_FEATURE_IBPB))
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 985984919d81..d7ee79b6756f 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -517,7 +517,7 @@ INIT_PER_CPU(irq_stack_backing_store);
"fixed_percpu_data is not at start of per-cpu area");
#endif

-#ifdef CONFIG_CPU_UNRET_ENTRY
+#ifdef CONFIG_MITIGATION_UNRET_ENTRY
. = ASSERT((retbleed_return_thunk & 0x3f) == 0, "retbleed_return_thunk not cacheline-aligned");
#endif

diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index ff46f48a0cc4..0ad67ccadd4c 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -230,7 +230,7 @@ SYM_CODE_END(srso_return_thunk)
#define JMP_SRSO_ALIAS_UNTRAIN_RET "ud2"
#endif /* CONFIG_CPU_SRSO */

-#ifdef CONFIG_CPU_UNRET_ENTRY
+#ifdef CONFIG_MITIGATION_UNRET_ENTRY

/*
* Some generic notes on the untraining sequences:
@@ -312,11 +312,11 @@ SYM_CODE_END(retbleed_return_thunk)
SYM_FUNC_END(retbleed_untrain_ret)

#define JMP_RETBLEED_UNTRAIN_RET "jmp retbleed_untrain_ret"
-#else /* !CONFIG_CPU_UNRET_ENTRY */
+#else /* !CONFIG_MITIGATION_UNRET_ENTRY */
#define JMP_RETBLEED_UNTRAIN_RET "ud2"
-#endif /* CONFIG_CPU_UNRET_ENTRY */
+#endif /* CONFIG_MITIGATION_UNRET_ENTRY */

-#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)
+#if defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)

SYM_FUNC_START(entry_untrain_ret)
ALTERNATIVE_2 JMP_RETBLEED_UNTRAIN_RET, \
@@ -325,7 +325,7 @@ SYM_FUNC_START(entry_untrain_ret)
SYM_FUNC_END(entry_untrain_ret)
__EXPORT_THUNK(entry_untrain_ret)

-#endif /* CONFIG_CPU_UNRET_ENTRY || CONFIG_CPU_SRSO */
+#endif /* CONFIG_MITIGATION_UNRET_ENTRY || CONFIG_CPU_SRSO */

#ifdef CONFIG_MITIGATION_CALL_DEPTH_TRACKING

diff --git a/include/linux/objtool.h b/include/linux/objtool.h
index 33212e93f4a6..d030671a4c49 100644
--- a/include/linux/objtool.h
+++ b/include/linux/objtool.h
@@ -131,7 +131,7 @@
*/
.macro VALIDATE_UNRET_BEGIN
#if defined(CONFIG_NOINSTR_VALIDATION) && \
- (defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO))
+ (defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO))
.Lhere_\@:
.pushsection .discard.validate_unret
.long .Lhere_\@ - .
diff --git a/scripts/Makefile.vmlinux_o b/scripts/Makefile.vmlinux_o
index 25b3b587d37c..6277dbd730bb 100644
--- a/scripts/Makefile.vmlinux_o
+++ b/scripts/Makefile.vmlinux_o
@@ -38,7 +38,7 @@ objtool-enabled := $(or $(delay-objtool),$(CONFIG_NOINSTR_VALIDATION))
vmlinux-objtool-args-$(delay-objtool) += $(objtool-args-y)
vmlinux-objtool-args-$(CONFIG_GCOV_KERNEL) += --no-unreachable
vmlinux-objtool-args-$(CONFIG_NOINSTR_VALIDATION) += --noinstr \
- $(if $(or $(CONFIG_CPU_UNRET_ENTRY),$(CONFIG_CPU_SRSO)), --unret)
+ $(if $(or $(CONFIG_MITIGATION_UNRET_ENTRY),$(CONFIG_CPU_SRSO)), --unret)

objtool-args = $(vmlinux-objtool-args-y) --link

diff --git a/tools/arch/x86/include/asm/disabled-features.h b/tools/arch/x86/include/asm/disabled-features.h
index d05158d8fe5f..4b816f55c634 100644
--- a/tools/arch/x86/include/asm/disabled-features.h
+++ b/tools/arch/x86/include/asm/disabled-features.h
@@ -63,7 +63,7 @@
# define DISABLE_RETHUNK (1 << (X86_FEATURE_RETHUNK & 31))
#endif

-#ifdef CONFIG_CPU_UNRET_ENTRY
+#ifdef CONFIG_MITIGATION_UNRET_ENTRY
# define DISABLE_UNRET 0
#else
# define DISABLE_UNRET (1 << (X86_FEATURE_UNRET & 31))
--
2.34.1

2023-11-21 21:40:15

by Andrew Cooper

Subject: Re: [PATCH v6 10/13] x86/bugs: Rename RETHUNK to MITIGATION_RETHUNK

On 21/11/2023 4:07 pm, Breno Leitao wrote:
> CPU mitigation config entries are inconsistent, and their names are
> hard to relate to one another. There are concrete benefits for both
> users and developers in having all the mitigation config options live
> in the same config namespace.
>
> The mitigation options should be consistent and start with the
> MITIGATION_ prefix.
>
> Rename the Kconfig entry from RETHUNK to MITIGATION_RETHUNK.
>
> Suggested-by: Josh Poimboeuf <[email protected]>
> Signed-off-by: Breno Leitao <[email protected]>

(I'm CC'd on only this single patch so I can't see what's going on, but)

Really?  Rethunk[sic] isn't a mitigation.  It's just a compiler
transformation for return instructions upon which various mitigations
depend.

~Andrew

2023-11-22 15:54:24

by Breno Leitao

Subject: Re: [PATCH v6 10/13] x86/bugs: Rename RETHUNK to MITIGATION_RETHUNK

Hello Andrew,

On Tue, Nov 21, 2023 at 09:39:47PM +0000, Andrew Cooper wrote:
> On 21/11/2023 4:07 pm, Breno Leitao wrote:
> > CPU mitigation config entries are inconsistent, and their names are
> > hard to relate to one another. There are concrete benefits for both
> > users and developers in having all the mitigation config options live
> > in the same config namespace.
> >
> > The mitigation options should be consistent and start with the
> > MITIGATION_ prefix.
> >
> > Rename the Kconfig entry from RETHUNK to MITIGATION_RETHUNK.
> >
> > Suggested-by: Josh Poimboeuf <[email protected]>
> > Signed-off-by: Breno Leitao <[email protected]>
>
> (I'm CC'd on only this single patch so I can't see what's going on, but)
>
> Really?  Rethunk[sic] isn't a mitigation.  It's just a compiler
> transformation for return instructions upon which various mitigations
> depend.

The MITIGATION namespace is not only for mitigations, but also for
"features" whose only purpose is to mitigate speculative hardware
vulnerabilities. The originally suggested namespace was "MITIGATE",
which would have made no sense for RETHUNK, since we are not mitigating
RETHUNK per se. Please check the discussion here:

https://lore.kernel.org/all/20231011044252.42bplzjsam3qsasz@treble/

That said, the way the x86 Kconfig is organized today, CONFIG_RETHUNK is
very focused on solving a mitigation problem, thus the MITIGATION_
namespace has been applied to it.

For instance, CONFIG_RETHUNK lives inside SPECULATION_MITIGATIONS, and
thus can only be enabled if SPECULATION_MITIGATIONS is set.

menuconfig SPECULATION_MITIGATIONS
        bool "Mitigations for speculative execution vulnerabilities"

config RETHUNK
        bool "Enable return-thunks"
        depends on MITIGATION_RETPOLINE && CC_HAS_RETURN_THUNK
        help
          Compile the kernel with the return-thunks compiler option to guard
          against kernel-to-user data leaks by avoiding return speculation.
          Requires a compiler with -mfunction-return=thunk-extern
          support for full protection. The kernel may run slower.

endif

Thanks for the feedback.

2023-11-29 04:42:52

by Josh Poimboeuf

Subject: Re: [PATCH v6 00/13] x86/bugs: Add a separate config for each mitigation

On Tue, Nov 21, 2023 at 08:07:27AM -0800, Breno Leitao wrote:
> Currently, CONFIG_SPECULATION_MITIGATIONS is only halfway populated:
> some mitigations have entries in Kconfig and can be modified, while
> other mitigations do not have Kconfig entries and cannot be controlled
> at build time.

All looks good to me, thanks!

Acked-by: Josh Poimboeuf <[email protected]>

--
Josh

2024-01-10 09:57:26

by Ingo Molnar

Subject: Re: [PATCH v6 00/13] x86/bugs: Add a separate config for each mitigation


* Breno Leitao <[email protected]> wrote:

> Currently, CONFIG_SPECULATION_MITIGATIONS is only halfway populated:
> some mitigations have entries in Kconfig and can be modified, while
> other mitigations do not have Kconfig entries and cannot be controlled
> at build time.
>
> Having fine-grained control can help in a few ways:
>
> 1) Users can choose and pick only mitigations that are important for
> their workloads.
>
> 2) Users and developers can choose to disable mitigations that mangle
> the assembly code generation, making it hard to read.
>
> 3) Separate configs for just source code readability,
> so that we see *which* butt-ugly piece of crap code is for what
> reason.
>
> Note that even if a mitigation is disabled at compile time, it can
> still be enabled at runtime using kernel command-line arguments.
>
> Discussion about this approach:
> https://lore.kernel.org/all/CAHk-=wjTHeQjsqtHcBGvy9TaJQ5uAm5HrCDuOD9v7qA9U1Xr4w@mail.gmail.com/
> and
> https://lore.kernel.org/lkml/20231011044252.42bplzjsam3qsasz@treble/
>
> In order to get the missing mitigations, some clean up was done.
>
> 1) Get a namespace for mitigations, prepending MITIGATION to the Kconfig
> entries.
>
> 2) Adding the missing mitigations, so, the mitigations have entries in the
> Kconfig that could be easily configure by the user.
>
> With this patchset applied, all configs have an individual entry under
> CONFIG_SPECULATION_MITIGATIONS, and all of them starts with CONFIG_MITIGATION.

Yeah, so:

- I took this older series and updated it to current upstream, and made
sure all renames were fully done: there were two new Kconfig option
uses, which I integrated into the series. (Sorry about the delay, holiday & stuff.)

- I also widened the renames to comments and messages, which were not
always covered.

- Then I took this cover letter and combined it with a more high level
description of the reasoning behind this series I wrote up, and added it
to patch #1. (see it below.)

- Then I removed the changelog repetition from the other patches and just
referred them back to patch #1.

- Then I stuck the resulting updated series into tip:x86/bugs, without the
last 3 patches that modify behavior.

- You might notice the somewhat weird extra whitespaces in the titles -
I've done that so that it all looks tidy in the shortlog:

Breno Leitao (10):
x86/bugs: Rename CONFIG_GDS_FORCE_MITIGATION => CONFIG_MITIGATION_GDS_FORCE
x86/bugs: Rename CONFIG_CPU_IBPB_ENTRY => CONFIG_MITIGATION_IBPB_ENTRY
x86/bugs: Rename CONFIG_CALL_DEPTH_TRACKING => CONFIG_MITIGATION_CALL_DEPTH_TRACKING
x86/bugs: Rename CONFIG_PAGE_TABLE_ISOLATION => CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
x86/bugs: Rename CONFIG_RETPOLINE => CONFIG_MITIGATION_RETPOLINE
x86/bugs: Rename CONFIG_SLS => CONFIG_MITIGATION_SLS
x86/bugs: Rename CONFIG_CPU_UNRET_ENTRY => CONFIG_MITIGATION_UNRET_ENTRY
x86/bugs: Rename CONFIG_CPU_IBRS_ENTRY => CONFIG_MITIGATION_IBRS_ENTRY
x86/bugs: Rename CONFIG_CPU_SRSO => CONFIG_MITIGATION_SRSO
x86/bugs: Rename CONFIG_RETHUNK => CONFIG_MITIGATION_RETHUNK

I think the resulting tree is all mostly good, but still I'd like to see
just the 10 pure low-risk renames done in this first step, to not carry too
much of this around unnecessarily - maybe even send it Linuswards in this
cycle if it's problem-free - without any real regression risk to upstream.

Thanks,

Ingo

=============================>
commit be83e809ca67bca98fde97ad6b9344237963220b
Author: Breno Leitao <[email protected]>
Date: Tue Nov 21 08:07:28 2023 -0800

x86/bugs: Rename CONFIG_GDS_FORCE_MITIGATION => CONFIG_MITIGATION_GDS_FORCE

So the CPU mitigations Kconfig entries - there are 10 of them by now - are
named in a historically idiosyncratic and hence rather inconsistent fashion,
and have become hard to relate to each other over the years:

https://lore.kernel.org/lkml/20231011044252.42bplzjsam3qsasz@treble/

When they were introduced we never expected that we'd eventually have
about a dozen of them, and that more organization would be useful,
especially for Linux distributions that want to enable them in an
informed fashion, and want to make sure all mitigations are configured
as expected.

For example, the current CONFIG_SPECULATION_MITIGATIONS namespace is only
halfway populated, where some mitigations have entries in Kconfig and
can be modified, while other mitigations do not have Kconfig entries
and cannot be controlled at build time.

Fine-grained control over these Kconfig entries can help in a number of ways:

1) Users can choose and pick only mitigations that are important for
their workloads.

2) Users and developers can choose to disable mitigations that mangle
the assembly code generation, making it hard to read.

3) Separate Kconfigs for just source code readability,
so that we see *which* butt-ugly piece of crap code is for what
reason...

In most cases, if a mitigation is disabled at compilation time, it
can still be enabled at runtime using kernel command line arguments.

This is the first patch of an initial series that renames various
mitigation related Kconfig options, unifying them under a single
CONFIG_MITIGATION_* namespace:

CONFIG_GDS_FORCE_MITIGATION => CONFIG_MITIGATION_GDS_FORCE
CONFIG_CPU_IBPB_ENTRY => CONFIG_MITIGATION_IBPB_ENTRY
CONFIG_CALL_DEPTH_TRACKING => CONFIG_MITIGATION_CALL_DEPTH_TRACKING
CONFIG_PAGE_TABLE_ISOLATION => CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
CONFIG_RETPOLINE => CONFIG_MITIGATION_RETPOLINE
CONFIG_SLS => CONFIG_MITIGATION_SLS
CONFIG_CPU_UNRET_ENTRY => CONFIG_MITIGATION_UNRET_ENTRY
CONFIG_CPU_IBRS_ENTRY => CONFIG_MITIGATION_IBRS_ENTRY
CONFIG_CPU_SRSO => CONFIG_MITIGATION_SRSO
CONFIG_RETHUNK => CONFIG_MITIGATION_RETHUNK

Implement step 1/10 of the namespace unification of CPU mitigations related
Kconfig options and rename CONFIG_GDS_FORCE_MITIGATION to
CONFIG_MITIGATION_GDS_FORCE.

[ mingo: Rewrote changelog for clarity. ]

Suggested-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Breno Leitao <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
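
To illustrate the "disabled at compilation time, still enabled at runtime"
point above, here is a minimal sketch of the usual pattern in
arch/x86/kernel/cpu/bugs.c - heavily simplified and not the verbatim
upstream code, which tracks an enum of mitigation modes rather than a bool:

/*
 * Minimal sketch, not the actual upstream implementation: the Kconfig
 * option only selects the boot-time default, while the real
 * "gather_data_sampling=" command line parameter parsed below can
 * still override that default in either direction at runtime.
 */
static bool gds_force __ro_after_init = IS_ENABLED(CONFIG_MITIGATION_GDS_FORCE);

static int __init gds_parse_cmdline(char *str)
{
	if (!str)
		return -EINVAL;

	if (!strcmp(str, "force"))
		gds_force = true;
	else if (!strcmp(str, "off"))
		gds_force = false;

	return 0;
}
early_param("gather_data_sampling", gds_parse_cmdline);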

Subject: [tip: x86/bugs] x86/bugs: Rename CONFIG_CPU_IBRS_ENTRY => CONFIG_MITIGATION_IBRS_ENTRY

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID: 1da8d2172ce5175118929363fe568e41f24ad3d6
Gitweb: https://git.kernel.org/tip/1da8d2172ce5175118929363fe568e41f24ad3d6
Author: Breno Leitao <[email protected]>
AuthorDate: Tue, 21 Nov 2023 08:07:35 -08:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Wed, 10 Jan 2024 10:52:29 +01:00

x86/bugs: Rename CONFIG_CPU_IBRS_ENTRY => CONFIG_MITIGATION_IBRS_ENTRY

Step 8/10 of the namespace unification of CPU mitigations related Kconfig options.

Suggested-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Breno Leitao <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/Kconfig | 2 +-
arch/x86/entry/calling.h | 4 ++--
arch/x86/kernel/cpu/bugs.c | 4 ++--
3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 77b8769..60d38df 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2561,7 +2561,7 @@ config MITIGATION_IBPB_ENTRY
help
Compile the kernel with support for the retbleed=ibpb mitigation.

-config CPU_IBRS_ENTRY
+config MITIGATION_IBRS_ENTRY
bool "Enable IBRS on kernel entry"
depends on CPU_SUP_INTEL && X86_64
default y
diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
index 7ac2d6f..39e069b 100644
--- a/arch/x86/entry/calling.h
+++ b/arch/x86/entry/calling.h
@@ -303,7 +303,7 @@ For 32-bit we have the following conventions - kernel is built with
* Assumes x86_spec_ctrl_{base,current} to have SPEC_CTRL_IBRS set.
*/
.macro IBRS_ENTER save_reg
-#ifdef CONFIG_CPU_IBRS_ENTRY
+#ifdef CONFIG_MITIGATION_IBRS_ENTRY
ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_KERNEL_IBRS
movl $MSR_IA32_SPEC_CTRL, %ecx

@@ -332,7 +332,7 @@ For 32-bit we have the following conventions - kernel is built with
* regs. Must be called after the last RET.
*/
.macro IBRS_EXIT save_reg
-#ifdef CONFIG_CPU_IBRS_ENTRY
+#ifdef CONFIG_MITIGATION_IBRS_ENTRY
ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_KERNEL_IBRS
movl $MSR_IA32_SPEC_CTRL, %ecx

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 2580368..e11bacb 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1439,7 +1439,7 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
return SPECTRE_V2_CMD_AUTO;
}

- if (cmd == SPECTRE_V2_CMD_IBRS && !IS_ENABLED(CONFIG_CPU_IBRS_ENTRY)) {
+ if (cmd == SPECTRE_V2_CMD_IBRS && !IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY)) {
pr_err("%s selected but not compiled in. Switching to AUTO select\n",
mitigation_options[i].option);
return SPECTRE_V2_CMD_AUTO;
@@ -1565,7 +1565,7 @@ static void __init spectre_v2_select_mitigation(void)
break;
}

- if (IS_ENABLED(CONFIG_CPU_IBRS_ENTRY) &&
+ if (IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY) &&
boot_cpu_has_bug(X86_BUG_RETBLEED) &&
retbleed_cmd != RETBLEED_CMD_OFF &&
retbleed_cmd != RETBLEED_CMD_STUFF &&

Subject: [tip: x86/bugs] x86/bugs: Rename CONFIG_SLS => CONFIG_MITIGATION_SLS

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID: 7b75782ffd8243288d0661750b2dcc2596d676cb
Gitweb: https://git.kernel.org/tip/7b75782ffd8243288d0661750b2dcc2596d676cb
Author: Breno Leitao <[email protected]>
AuthorDate: Tue, 21 Nov 2023 08:07:33 -08:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Wed, 10 Jan 2024 10:52:28 +01:00

x86/bugs: Rename CONFIG_SLS => CONFIG_MITIGATION_SLS

Step 6/10 of the namespace unification of CPU mitigations related Kconfig options.

Suggested-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Breno Leitao <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/Kconfig | 2 +-
arch/x86/Makefile | 2 +-
arch/x86/include/asm/linkage.h | 4 ++--
arch/x86/kernel/alternative.c | 4 ++--
arch/x86/kernel/ftrace.c | 3 ++-
arch/x86/net/bpf_jit_comp.c | 4 ++--
scripts/Makefile.lib | 2 +-
7 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 2a3ebd6..ba45465 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2577,7 +2577,7 @@ config CPU_SRSO
help
Enable the SRSO mitigation needed on AMD Zen1-4 machines.

-config SLS
+config MITIGATION_SLS
bool "Mitigate Straight-Line-Speculation"
depends on CC_HAS_SLS && X86_64
select OBJTOOL if HAVE_OBJTOOL
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index b8d23ed..5ce8c30 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -205,7 +205,7 @@ ifdef CONFIG_MITIGATION_RETPOLINE
endif
endif

-ifdef CONFIG_SLS
+ifdef CONFIG_MITIGATION_SLS
KBUILD_CFLAGS += -mharden-sls=all
endif

diff --git a/arch/x86/include/asm/linkage.h b/arch/x86/include/asm/linkage.h
index c516520..09e2d02 100644
--- a/arch/x86/include/asm/linkage.h
+++ b/arch/x86/include/asm/linkage.h
@@ -43,7 +43,7 @@
#if defined(CONFIG_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
#define RET jmp __x86_return_thunk
#else /* CONFIG_MITIGATION_RETPOLINE */
-#ifdef CONFIG_SLS
+#ifdef CONFIG_MITIGATION_SLS
#define RET ret; int3
#else
#define RET ret
@@ -55,7 +55,7 @@
#if defined(CONFIG_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
#define ASM_RET "jmp __x86_return_thunk\n\t"
#else /* CONFIG_MITIGATION_RETPOLINE */
-#ifdef CONFIG_SLS
+#ifdef CONFIG_MITIGATION_SLS
#define ASM_RET "ret; int3\n\t"
#else
#define ASM_RET "ret\n\t"
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 08c182f..f5442d0 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -708,8 +708,8 @@ static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes)
/*
* The compiler is supposed to EMIT an INT3 after every unconditional
* JMP instruction due to AMD BTC. However, if the compiler is too old
- * or SLS isn't enabled, we still need an INT3 after indirect JMPs
- * even on Intel.
+ * or MITIGATION_SLS isn't enabled, we still need an INT3 after
+ * indirect JMPs even on Intel.
*/
if (op == JMP32_INSN_OPCODE && i < insn->length)
bytes[i++] = INT3_INSN_OPCODE;
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 93bc52d..70139d9 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -307,7 +307,8 @@ union ftrace_op_code_union {
} __attribute__((packed));
};

-#define RET_SIZE (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) ? 5 : 1 + IS_ENABLED(CONFIG_SLS))
+#define RET_SIZE \
+ (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) ? 5 : 1 + IS_ENABLED(CONFIG_MITIGATION_SLS))

static unsigned long
create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index ad1396b..63b7aa4 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -469,7 +469,7 @@ static void emit_indirect_jump(u8 **pprog, int reg, u8 *ip)
emit_jump(&prog, &__x86_indirect_thunk_array[reg], ip);
} else {
EMIT2(0xFF, 0xE0 + reg); /* jmp *%\reg */
- if (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) || IS_ENABLED(CONFIG_SLS))
+ if (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) || IS_ENABLED(CONFIG_MITIGATION_SLS))
EMIT1(0xCC); /* int3 */
}

@@ -484,7 +484,7 @@ static void emit_return(u8 **pprog, u8 *ip)
emit_jump(&prog, x86_return_thunk, ip);
} else {
EMIT1(0xC3); /* ret */
- if (IS_ENABLED(CONFIG_SLS))
+ if (IS_ENABLED(CONFIG_MITIGATION_SLS))
EMIT1(0xCC); /* int3 */
}

diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 615f261..b272ca6 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -264,7 +264,7 @@ endif
objtool-args-$(CONFIG_UNWINDER_ORC) += --orc
objtool-args-$(CONFIG_MITIGATION_RETPOLINE) += --retpoline
objtool-args-$(CONFIG_RETHUNK) += --rethunk
-objtool-args-$(CONFIG_SLS) += --sls
+objtool-args-$(CONFIG_MITIGATION_SLS) += --sls
objtool-args-$(CONFIG_STACK_VALIDATION) += --stackval
objtool-args-$(CONFIG_HAVE_STATIC_CALL_INLINE) += --static-call
objtool-args-$(CONFIG_HAVE_UACCESS_VALIDATION) += --uaccess

Subject: [tip: x86/bugs] x86/bugs: Rename CONFIG_CPU_SRSO => CONFIG_MITIGATION_SRSO

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID: a033eec9a06ce25388e71fa1e888792a718b9c17
Gitweb: https://git.kernel.org/tip/a033eec9a06ce25388e71fa1e888792a718b9c17
Author: Breno Leitao <[email protected]>
AuthorDate: Tue, 21 Nov 2023 08:07:36 -08:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Wed, 10 Jan 2024 10:52:29 +01:00

x86/bugs: Rename CONFIG_CPU_SRSO => CONFIG_MITIGATION_SRSO

Step 9/10 of the namespace unification of CPU mitigations related Kconfig options.

Suggested-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Breno Leitao <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/Kconfig | 2 +-
arch/x86/include/asm/nospec-branch.h | 6 +++---
arch/x86/kernel/cpu/bugs.c | 8 ++++----
arch/x86/kernel/vmlinux.lds.S | 4 ++--
arch/x86/lib/retpoline.S | 10 +++++-----
include/linux/objtool.h | 2 +-
scripts/Makefile.vmlinux_o | 2 +-
7 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 60d38df..a2743b7 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2570,7 +2570,7 @@ config MITIGATION_IBRS_ENTRY
This mitigates both spectre_v2 and retbleed at great cost to
performance.

-config CPU_SRSO
+config MITIGATION_SRSO
bool "Mitigate speculative RAS overflow on AMD"
depends on CPU_SUP_AMD && X86_64 && RETHUNK
default y
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 9747869..94c7083 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -212,7 +212,7 @@
*/
.macro VALIDATE_UNRET_END
#if defined(CONFIG_NOINSTR_VALIDATION) && \
- (defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO))
+ (defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_MITIGATION_SRSO))
ANNOTATE_RETPOLINE_SAFE
nop
#endif
@@ -271,7 +271,7 @@
.Lskip_rsb_\@:
.endm

-#if defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)
+#if defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_MITIGATION_SRSO)
#define CALL_UNTRAIN_RET "call entry_untrain_ret"
#else
#define CALL_UNTRAIN_RET ""
@@ -340,7 +340,7 @@ extern void retbleed_return_thunk(void);
static inline void retbleed_return_thunk(void) {}
#endif

-#ifdef CONFIG_CPU_SRSO
+#ifdef CONFIG_MITIGATION_SRSO
extern void srso_return_thunk(void);
extern void srso_alias_return_thunk(void);
#else
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index e11bacb..f277541 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2458,7 +2458,7 @@ static void __init srso_select_mitigation(void)
break;

case SRSO_CMD_SAFE_RET:
- if (IS_ENABLED(CONFIG_CPU_SRSO)) {
+ if (IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
/*
* Enable the return thunk for generated code
* like ftrace, static_call, etc.
@@ -2478,7 +2478,7 @@ static void __init srso_select_mitigation(void)
else
srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
} else {
- pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
+ pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
}
break;

@@ -2494,13 +2494,13 @@ static void __init srso_select_mitigation(void)
break;

case SRSO_CMD_IBPB_ON_VMEXIT:
- if (IS_ENABLED(CONFIG_CPU_SRSO)) {
+ if (IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {
setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
}
} else {
- pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
+ pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
}
break;
}
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 9c5cca5..6716fcc 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -142,7 +142,7 @@ SECTIONS
*(.text..__x86.rethunk_untrain)
ENTRY_TEXT

-#ifdef CONFIG_CPU_SRSO
+#ifdef CONFIG_MITIGATION_SRSO
/*
* See the comment above srso_alias_untrain_ret()'s
* definition.
@@ -508,7 +508,7 @@ INIT_PER_CPU(irq_stack_backing_store);
. = ASSERT((retbleed_return_thunk & 0x3f) == 0, "retbleed_return_thunk not cacheline-aligned");
#endif

-#ifdef CONFIG_CPU_SRSO
+#ifdef CONFIG_MITIGATION_SRSO
. = ASSERT((srso_safe_ret & 0x3f) == 0, "srso_safe_ret not cacheline-aligned");
/*
* GNU ld cannot do XOR until 2.41.
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 0ad67cc..67b52cb 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -138,7 +138,7 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
*/
.section .text..__x86.return_thunk

-#ifdef CONFIG_CPU_SRSO
+#ifdef CONFIG_MITIGATION_SRSO

/*
* srso_alias_untrain_ret() and srso_alias_safe_ret() are placed at
@@ -225,10 +225,10 @@ SYM_CODE_END(srso_return_thunk)

#define JMP_SRSO_UNTRAIN_RET "jmp srso_untrain_ret"
#define JMP_SRSO_ALIAS_UNTRAIN_RET "jmp srso_alias_untrain_ret"
-#else /* !CONFIG_CPU_SRSO */
+#else /* !CONFIG_MITIGATION_SRSO */
#define JMP_SRSO_UNTRAIN_RET "ud2"
#define JMP_SRSO_ALIAS_UNTRAIN_RET "ud2"
-#endif /* CONFIG_CPU_SRSO */
+#endif /* CONFIG_MITIGATION_SRSO */

#ifdef CONFIG_MITIGATION_UNRET_ENTRY

@@ -316,7 +316,7 @@ SYM_FUNC_END(retbleed_untrain_ret)
#define JMP_RETBLEED_UNTRAIN_RET "ud2"
#endif /* CONFIG_MITIGATION_UNRET_ENTRY */

-#if defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)
+#if defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_MITIGATION_SRSO)

SYM_FUNC_START(entry_untrain_ret)
ALTERNATIVE_2 JMP_RETBLEED_UNTRAIN_RET, \
@@ -325,7 +325,7 @@ SYM_FUNC_START(entry_untrain_ret)
SYM_FUNC_END(entry_untrain_ret)
__EXPORT_THUNK(entry_untrain_ret)

-#endif /* CONFIG_MITIGATION_UNRET_ENTRY || CONFIG_CPU_SRSO */
+#endif /* CONFIG_MITIGATION_UNRET_ENTRY || CONFIG_MITIGATION_SRSO */

#ifdef CONFIG_MITIGATION_CALL_DEPTH_TRACKING

diff --git a/include/linux/objtool.h b/include/linux/objtool.h
index d030671..b3b8d3d 100644
--- a/include/linux/objtool.h
+++ b/include/linux/objtool.h
@@ -131,7 +131,7 @@
*/
.macro VALIDATE_UNRET_BEGIN
#if defined(CONFIG_NOINSTR_VALIDATION) && \
- (defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO))
+ (defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_MITIGATION_SRSO))
.Lhere_\@:
.pushsection .discard.validate_unret
.long .Lhere_\@ - .
diff --git a/scripts/Makefile.vmlinux_o b/scripts/Makefile.vmlinux_o
index 6277dbd..6de2979 100644
--- a/scripts/Makefile.vmlinux_o
+++ b/scripts/Makefile.vmlinux_o
@@ -38,7 +38,7 @@ objtool-enabled := $(or $(delay-objtool),$(CONFIG_NOINSTR_VALIDATION))
vmlinux-objtool-args-$(delay-objtool) += $(objtool-args-y)
vmlinux-objtool-args-$(CONFIG_GCOV_KERNEL) += --no-unreachable
vmlinux-objtool-args-$(CONFIG_NOINSTR_VALIDATION) += --noinstr \
- $(if $(or $(CONFIG_MITIGATION_UNRET_ENTRY),$(CONFIG_CPU_SRSO)), --unret)
+ $(if $(or $(CONFIG_MITIGATION_UNRET_ENTRY),$(CONFIG_MITIGATION_SRSO)), --unret)

objtool-args = $(vmlinux-objtool-args-y) --link


Subject: [tip: x86/bugs] x86/bugs: Rename CONFIG_CPU_UNRET_ENTRY => CONFIG_MITIGATION_UNRET_ENTRY

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID: ac61d43983a4fe8e3ee600eee44c40868c14340a
Gitweb: https://git.kernel.org/tip/ac61d43983a4fe8e3ee600eee44c40868c14340a
Author: Breno Leitao <[email protected]>
AuthorDate: Tue, 21 Nov 2023 08:07:34 -08:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Wed, 10 Jan 2024 10:52:28 +01:00

x86/bugs: Rename CONFIG_CPU_UNRET_ENTRY => CONFIG_MITIGATION_UNRET_ENTRY

Step 7/10 of the namespace unification of CPU mitigations related Kconfig options.

Suggested-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Breno Leitao <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/Kconfig | 2 +-
arch/x86/include/asm/disabled-features.h | 2 +-
arch/x86/include/asm/nospec-branch.h | 6 +++---
arch/x86/kernel/cpu/amd.c | 2 +-
arch/x86/kernel/cpu/bugs.c | 6 +++---
arch/x86/kernel/vmlinux.lds.S | 2 +-
arch/x86/lib/retpoline.S | 10 +++++-----
include/linux/objtool.h | 2 +-
scripts/Makefile.vmlinux_o | 2 +-
tools/arch/x86/include/asm/disabled-features.h | 2 +-
10 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index ba45465..77b8769 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2516,7 +2516,7 @@ config RETHUNK
Requires a compiler with -mfunction-return=thunk-extern
support for full protection. The kernel may run slower.

-config CPU_UNRET_ENTRY
+config MITIGATION_UNRET_ENTRY
bool "Enable UNRET on kernel entry"
depends on CPU_SUP_AMD && RETHUNK && X86_64
default y
diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
index 24e4010..151f0d5 100644
--- a/arch/x86/include/asm/disabled-features.h
+++ b/arch/x86/include/asm/disabled-features.h
@@ -63,7 +63,7 @@
# define DISABLE_RETHUNK (1 << (X86_FEATURE_RETHUNK & 31))
#endif

-#ifdef CONFIG_CPU_UNRET_ENTRY
+#ifdef CONFIG_MITIGATION_UNRET_ENTRY
# define DISABLE_UNRET 0
#else
# define DISABLE_UNRET (1 << (X86_FEATURE_UNRET & 31))
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 32680cb..9747869 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -212,7 +212,7 @@
*/
.macro VALIDATE_UNRET_END
#if defined(CONFIG_NOINSTR_VALIDATION) && \
- (defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO))
+ (defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO))
ANNOTATE_RETPOLINE_SAFE
nop
#endif
@@ -271,7 +271,7 @@
.Lskip_rsb_\@:
.endm

-#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)
+#if defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)
#define CALL_UNTRAIN_RET "call entry_untrain_ret"
#else
#define CALL_UNTRAIN_RET ""
@@ -334,7 +334,7 @@ extern void __x86_return_thunk(void);
static inline void __x86_return_thunk(void) {}
#endif

-#ifdef CONFIG_CPU_UNRET_ENTRY
+#ifdef CONFIG_MITIGATION_UNRET_ENTRY
extern void retbleed_return_thunk(void);
#else
static inline void retbleed_return_thunk(void) {}
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 9f42d1c..6c7db2e 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -928,7 +928,7 @@ static void fix_erratum_1386(struct cpuinfo_x86 *c)

void init_spectral_chicken(struct cpuinfo_x86 *c)
{
-#ifdef CONFIG_CPU_UNRET_ENTRY
+#ifdef CONFIG_MITIGATION_UNRET_ENTRY
u64 value;

/*
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index fc46fd6..2580368 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -982,10 +982,10 @@ static void __init retbleed_select_mitigation(void)
return;

case RETBLEED_CMD_UNRET:
- if (IS_ENABLED(CONFIG_CPU_UNRET_ENTRY)) {
+ if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY)) {
retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
} else {
- pr_err("WARNING: kernel not compiled with CPU_UNRET_ENTRY.\n");
+ pr_err("WARNING: kernel not compiled with MITIGATION_UNRET_ENTRY.\n");
goto do_cmd_auto;
}
break;
@@ -1021,7 +1021,7 @@ do_cmd_auto:
case RETBLEED_CMD_AUTO:
if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
- if (IS_ENABLED(CONFIG_CPU_UNRET_ENTRY))
+ if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY))
retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY) &&
boot_cpu_has(X86_FEATURE_IBPB))
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index bb2ec03..9c5cca5 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -504,7 +504,7 @@ INIT_PER_CPU(irq_stack_backing_store);
"fixed_percpu_data is not at start of per-cpu area");
#endif

-#ifdef CONFIG_CPU_UNRET_ENTRY
+#ifdef CONFIG_MITIGATION_UNRET_ENTRY
. = ASSERT((retbleed_return_thunk & 0x3f) == 0, "retbleed_return_thunk not cacheline-aligned");
#endif

diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index ff46f48..0ad67cc 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -230,7 +230,7 @@ SYM_CODE_END(srso_return_thunk)
#define JMP_SRSO_ALIAS_UNTRAIN_RET "ud2"
#endif /* CONFIG_CPU_SRSO */

-#ifdef CONFIG_CPU_UNRET_ENTRY
+#ifdef CONFIG_MITIGATION_UNRET_ENTRY

/*
* Some generic notes on the untraining sequences:
@@ -312,11 +312,11 @@ SYM_CODE_END(retbleed_return_thunk)
SYM_FUNC_END(retbleed_untrain_ret)

#define JMP_RETBLEED_UNTRAIN_RET "jmp retbleed_untrain_ret"
-#else /* !CONFIG_CPU_UNRET_ENTRY */
+#else /* !CONFIG_MITIGATION_UNRET_ENTRY */
#define JMP_RETBLEED_UNTRAIN_RET "ud2"
-#endif /* CONFIG_CPU_UNRET_ENTRY */
+#endif /* CONFIG_MITIGATION_UNRET_ENTRY */

-#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)
+#if defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)

SYM_FUNC_START(entry_untrain_ret)
ALTERNATIVE_2 JMP_RETBLEED_UNTRAIN_RET, \
@@ -325,7 +325,7 @@ SYM_FUNC_START(entry_untrain_ret)
SYM_FUNC_END(entry_untrain_ret)
__EXPORT_THUNK(entry_untrain_ret)

-#endif /* CONFIG_CPU_UNRET_ENTRY || CONFIG_CPU_SRSO */
+#endif /* CONFIG_MITIGATION_UNRET_ENTRY || CONFIG_CPU_SRSO */

#ifdef CONFIG_MITIGATION_CALL_DEPTH_TRACKING

diff --git a/include/linux/objtool.h b/include/linux/objtool.h
index 33212e9..d030671 100644
--- a/include/linux/objtool.h
+++ b/include/linux/objtool.h
@@ -131,7 +131,7 @@
*/
.macro VALIDATE_UNRET_BEGIN
#if defined(CONFIG_NOINSTR_VALIDATION) && \
- (defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO))
+ (defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO))
.Lhere_\@:
.pushsection .discard.validate_unret
.long .Lhere_\@ - .
diff --git a/scripts/Makefile.vmlinux_o b/scripts/Makefile.vmlinux_o
index 25b3b58..6277dbd 100644
--- a/scripts/Makefile.vmlinux_o
+++ b/scripts/Makefile.vmlinux_o
@@ -38,7 +38,7 @@ objtool-enabled := $(or $(delay-objtool),$(CONFIG_NOINSTR_VALIDATION))
vmlinux-objtool-args-$(delay-objtool) += $(objtool-args-y)
vmlinux-objtool-args-$(CONFIG_GCOV_KERNEL) += --no-unreachable
vmlinux-objtool-args-$(CONFIG_NOINSTR_VALIDATION) += --noinstr \
- $(if $(or $(CONFIG_CPU_UNRET_ENTRY),$(CONFIG_CPU_SRSO)), --unret)
+ $(if $(or $(CONFIG_MITIGATION_UNRET_ENTRY),$(CONFIG_CPU_SRSO)), --unret)

objtool-args = $(vmlinux-objtool-args-y) --link

diff --git a/tools/arch/x86/include/asm/disabled-features.h b/tools/arch/x86/include/asm/disabled-features.h
index 24e4010..151f0d5 100644
--- a/tools/arch/x86/include/asm/disabled-features.h
+++ b/tools/arch/x86/include/asm/disabled-features.h
@@ -63,7 +63,7 @@
# define DISABLE_RETHUNK (1 << (X86_FEATURE_RETHUNK & 31))
#endif

-#ifdef CONFIG_CPU_UNRET_ENTRY
+#ifdef CONFIG_MITIGATION_UNRET_ENTRY
# define DISABLE_UNRET 0
#else
# define DISABLE_UNRET (1 << (X86_FEATURE_UNRET & 31))

Subject: [tip: x86/bugs] x86/bugs: Rename CONFIG_RETHUNK => CONFIG_MITIGATION_RETHUNK

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID: 0911b8c52c4d68c57d02f172daa55a42bce703f0
Gitweb: https://git.kernel.org/tip/0911b8c52c4d68c57d02f172daa55a42bce703f0
Author: Breno Leitao <[email protected]>
AuthorDate: Tue, 21 Nov 2023 08:07:37 -08:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Wed, 10 Jan 2024 10:52:29 +01:00

x86/bugs: Rename CONFIG_RETHUNK => CONFIG_MITIGATION_RETHUNK

Step 10/10 of the namespace unification of CPU mitigations related Kconfig options.

[ mingo: Added one more case. ]

Suggested-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Breno Leitao <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/Kconfig | 8 ++++----
arch/x86/Makefile | 2 +-
arch/x86/configs/i386_defconfig | 2 +-
arch/x86/include/asm/disabled-features.h | 2 +-
arch/x86/include/asm/linkage.h | 4 ++--
arch/x86/include/asm/nospec-branch.h | 4 ++--
arch/x86/include/asm/static_call.h | 2 +-
arch/x86/kernel/alternative.c | 4 ++--
arch/x86/kernel/static_call.c | 2 +-
arch/x86/lib/retpoline.S | 4 ++--
scripts/Makefile.lib | 2 +-
tools/arch/x86/include/asm/disabled-features.h | 2 +-
tools/objtool/check.c | 2 +-
13 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index a2743b7..0a9fea3 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2462,7 +2462,7 @@ config FINEIBT

config HAVE_CALL_THUNKS
def_bool y
- depends on CC_HAS_ENTRY_PADDING && RETHUNK && OBJTOOL
+ depends on CC_HAS_ENTRY_PADDING && MITIGATION_RETHUNK && OBJTOOL

config CALL_THUNKS
def_bool n
@@ -2505,7 +2505,7 @@ config MITIGATION_RETPOLINE
branches. Requires a compiler with -mindirect-branch=thunk-extern
support for full protection. The kernel may run slower.

-config RETHUNK
+config MITIGATION_RETHUNK
bool "Enable return-thunks"
depends on MITIGATION_RETPOLINE && CC_HAS_RETURN_THUNK
select OBJTOOL if HAVE_OBJTOOL
@@ -2518,7 +2518,7 @@ config RETHUNK

config MITIGATION_UNRET_ENTRY
bool "Enable UNRET on kernel entry"
- depends on CPU_SUP_AMD && RETHUNK && X86_64
+ depends on CPU_SUP_AMD && MITIGATION_RETHUNK && X86_64
default y
help
Compile the kernel with support for the retbleed=unret mitigation.
@@ -2572,7 +2572,7 @@ config MITIGATION_IBRS_ENTRY

config MITIGATION_SRSO
bool "Mitigate speculative RAS overflow on AMD"
- depends on CPU_SUP_AMD && X86_64 && RETHUNK
+ depends on CPU_SUP_AMD && X86_64 && MITIGATION_RETHUNK
default y
help
Enable the SRSO mitigation needed on AMD Zen1-4 machines.
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 5ce8c30..ba046af 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -22,7 +22,7 @@ RETPOLINE_VDSO_CFLAGS := -mretpoline
endif
RETPOLINE_CFLAGS += $(call cc-option,-mindirect-branch-cs-prefix)

-ifdef CONFIG_RETHUNK
+ifdef CONFIG_MITIGATION_RETHUNK
RETHUNK_CFLAGS := -mfunction-return=thunk-extern
RETPOLINE_CFLAGS += $(RETHUNK_CFLAGS)
endif
diff --git a/arch/x86/configs/i386_defconfig b/arch/x86/configs/i386_defconfig
index 73abbbd..9180113 100644
--- a/arch/x86/configs/i386_defconfig
+++ b/arch/x86/configs/i386_defconfig
@@ -42,7 +42,7 @@ CONFIG_EFI_STUB=y
CONFIG_HZ_1000=y
CONFIG_KEXEC=y
CONFIG_CRASH_DUMP=y
-# CONFIG_RETHUNK is not set
+# CONFIG_MITIGATION_RETHUNK is not set
CONFIG_HIBERNATION=y
CONFIG_PM_DEBUG=y
CONFIG_PM_TRACE_RTC=y
diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
index 151f0d5..36d0c1e 100644
--- a/arch/x86/include/asm/disabled-features.h
+++ b/arch/x86/include/asm/disabled-features.h
@@ -57,7 +57,7 @@
(1 << (X86_FEATURE_RETPOLINE_LFENCE & 31)))
#endif

-#ifdef CONFIG_RETHUNK
+#ifdef CONFIG_MITIGATION_RETHUNK
# define DISABLE_RETHUNK 0
#else
# define DISABLE_RETHUNK (1 << (X86_FEATURE_RETHUNK & 31))
diff --git a/arch/x86/include/asm/linkage.h b/arch/x86/include/asm/linkage.h
index 09e2d02..dc31b13 100644
--- a/arch/x86/include/asm/linkage.h
+++ b/arch/x86/include/asm/linkage.h
@@ -40,7 +40,7 @@

#ifdef __ASSEMBLY__

-#if defined(CONFIG_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
+#if defined(CONFIG_MITIGATION_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
#define RET jmp __x86_return_thunk
#else /* CONFIG_MITIGATION_RETPOLINE */
#ifdef CONFIG_MITIGATION_SLS
@@ -52,7 +52,7 @@

#else /* __ASSEMBLY__ */

-#if defined(CONFIG_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
+#if defined(CONFIG_MITIGATION_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
#define ASM_RET "jmp __x86_return_thunk\n\t"
#else /* CONFIG_MITIGATION_RETPOLINE */
#ifdef CONFIG_MITIGATION_SLS
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 94c7083..2c0679e 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -289,7 +289,7 @@
* where we have a stack but before any RET instruction.
*/
.macro __UNTRAIN_RET ibpb_feature, call_depth_insns
-#if defined(CONFIG_RETHUNK) || defined(CONFIG_MITIGATION_IBPB_ENTRY)
+#if defined(CONFIG_MITIGATION_RETHUNK) || defined(CONFIG_MITIGATION_IBPB_ENTRY)
VALIDATE_UNRET_END
ALTERNATIVE_3 "", \
CALL_UNTRAIN_RET, X86_FEATURE_UNRET, \
@@ -328,7 +328,7 @@ extern retpoline_thunk_t __x86_indirect_thunk_array[];
extern retpoline_thunk_t __x86_indirect_call_thunk_array[];
extern retpoline_thunk_t __x86_indirect_jump_thunk_array[];

-#ifdef CONFIG_RETHUNK
+#ifdef CONFIG_MITIGATION_RETHUNK
extern void __x86_return_thunk(void);
#else
static inline void __x86_return_thunk(void) {}
diff --git a/arch/x86/include/asm/static_call.h b/arch/x86/include/asm/static_call.h
index 343b722..125c407 100644
--- a/arch/x86/include/asm/static_call.h
+++ b/arch/x86/include/asm/static_call.h
@@ -46,7 +46,7 @@
#define ARCH_DEFINE_STATIC_CALL_TRAMP(name, func) \
__ARCH_DEFINE_STATIC_CALL_TRAMP(name, ".byte 0xe9; .long " #func " - (. + 4)")

-#ifdef CONFIG_RETHUNK
+#ifdef CONFIG_MITIGATION_RETHUNK
#define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name) \
__ARCH_DEFINE_STATIC_CALL_TRAMP(name, "jmp __x86_return_thunk")
#else
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index f5442d0..df91abe 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -769,7 +769,7 @@ void __init_or_module noinline apply_retpolines(s32 *start, s32 *end)
}
}

-#ifdef CONFIG_RETHUNK
+#ifdef CONFIG_MITIGATION_RETHUNK

/*
* Rewrite the compiler generated return thunk tail-calls.
@@ -842,7 +842,7 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end)
}
#else
void __init_or_module noinline apply_returns(s32 *start, s32 *end) { }
-#endif /* CONFIG_RETHUNK */
+#endif /* CONFIG_MITIGATION_RETHUNK */

#else /* !CONFIG_MITIGATION_RETPOLINE || !CONFIG_OBJTOOL */

diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c
index 77a9316..4eefaac 100644
--- a/arch/x86/kernel/static_call.c
+++ b/arch/x86/kernel/static_call.c
@@ -172,7 +172,7 @@ void arch_static_call_transform(void *site, void *tramp, void *func, bool tail)
}
EXPORT_SYMBOL_GPL(arch_static_call_transform);

-#ifdef CONFIG_RETHUNK
+#ifdef CONFIG_MITIGATION_RETHUNK
/*
* This is called by apply_returns() to fix up static call trampolines,
* specifically ARCH_DEFINE_STATIC_CALL_NULL_TRAMP which is recorded as
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 67b52cb..0045153 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -127,7 +127,7 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
#undef GEN
#endif

-#ifdef CONFIG_RETHUNK
+#ifdef CONFIG_MITIGATION_RETHUNK

/*
* Be careful here: that label cannot really be removed because in
@@ -386,4 +386,4 @@ SYM_CODE_START(__x86_return_thunk)
SYM_CODE_END(__x86_return_thunk)
EXPORT_SYMBOL(__x86_return_thunk)

-#endif /* CONFIG_RETHUNK */
+#endif /* CONFIG_MITIGATION_RETHUNK */
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index b272ca6..c3f4cac 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -263,7 +263,7 @@ objtool-args-$(CONFIG_HAVE_OBJTOOL_NOP_MCOUNT) += --mnop
endif
objtool-args-$(CONFIG_UNWINDER_ORC) += --orc
objtool-args-$(CONFIG_MITIGATION_RETPOLINE) += --retpoline
-objtool-args-$(CONFIG_RETHUNK) += --rethunk
+objtool-args-$(CONFIG_MITIGATION_RETHUNK) += --rethunk
objtool-args-$(CONFIG_MITIGATION_SLS) += --sls
objtool-args-$(CONFIG_STACK_VALIDATION) += --stackval
objtool-args-$(CONFIG_HAVE_STATIC_CALL_INLINE) += --static-call
diff --git a/tools/arch/x86/include/asm/disabled-features.h b/tools/arch/x86/include/asm/disabled-features.h
index 151f0d5..36d0c1e 100644
--- a/tools/arch/x86/include/asm/disabled-features.h
+++ b/tools/arch/x86/include/asm/disabled-features.h
@@ -57,7 +57,7 @@
(1 << (X86_FEATURE_RETPOLINE_LFENCE & 31)))
#endif

-#ifdef CONFIG_RETHUNK
+#ifdef CONFIG_MITIGATION_RETHUNK
# define DISABLE_RETHUNK 0
#else
# define DISABLE_RETHUNK (1 << (X86_FEATURE_RETHUNK & 31))
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 84067f0..8440b7b 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -3980,7 +3980,7 @@ static int validate_retpoline(struct objtool_file *file)

if (insn->type == INSN_RETURN) {
if (opts.rethunk) {
- WARN_INSN(insn, "'naked' return found in RETHUNK build");
+ WARN_INSN(insn, "'naked' return found in MITIGATION_RETHUNK build");
} else
continue;
} else {

2024-01-10 11:55:58

by Breno Leitao

[permalink] [raw]
Subject: Re: [PATCH v6 00/13] x86/bugs: Add a separate config for each mitigation

On Wed, Jan 10, 2024 at 10:56:46AM +0100, Ingo Molnar wrote:
>
> * Breno Leitao <[email protected]> wrote:
>
> > Currently, the CONFIG_SPECULATION_MITIGATIONS is halfway populated,
> > where some mitigations have entries in Kconfig, and they could be
> > modified, while others mitigations do not have Kconfig entries, and
> > could not be controlled at build time.
> >
> > The fact of having a fine grained control can help in a few ways:
> >
> > 1) Users can choose and pick only mitigations that are important for
> > their workloads.
> >
> > 2) Users and developers can choose to disable mitigations that mangle
> > the assembly code generation, making it hard to read.
> >
> > 3) Separate configs for just source code readability,
> > so that we see *which* butt-ugly piece of crap code is for what
> > reason.
> >
> > Important to say, if a mitigation is disabled at compilation time, it
> > could be enabled at runtime using kernel command line arguments.
> >
> > Discussion about this approach:
> > https://lore.kernel.org/all/CAHk-=wjTHeQjsqtHcBGvy9TaJQ5uAm5HrCDuOD9v7qA9U1Xr4w@mail.gmail.com/
> > and
> > https://lore.kernel.org/lkml/20231011044252.42bplzjsam3qsasz@treble/
> >
> > In order to get the missing mitigations, some clean up was done.
> >
> > 1) Get a namespace for mitigations, prepending MITIGATION to the Kconfig
> > entries.
> >
> > 2) Adding the missing mitigations, so, the mitigations have entries in the
> > Kconfig that could be easily configure by the user.
> >
> > With this patchset applied, all configs have an individual entry under
> > CONFIG_SPECULATION_MITIGATIONS, and all of them starts with CONFIG_MITIGATION.
>
> Yeah, so:
>
> - I took this older series and updated it to current upstream, and made
> sure all renames were fully done: there were two new Kconfig option
> uses, which I integrated into the series. (Sorry about the delay, holiday & stuff.)
>
> - I also widened the renames to comments and messages, which were not
> always covered.
>
> - Then I took this cover letter and combined it with a more high level
> description of the reasoning behind this series I wrote up, and added it
> to patch #1. (see it below.)
>
> - Then I removed the changelog repetition from the other patches and just
> referred them back to patch #1.
>
> - Then I stuck the resulting updated series into tip:x86/bugs, without the
> last 3 patches that modify behavior.

Thanks for your work. I am currently reviewing the tip branch and the
merge seems good so far.

Regarding the last 3 patches, what are the next steps?

Thank you!
Breno

2024-01-10 18:08:18

by Ingo Molnar

[permalink] [raw]
Subject: Re: [PATCH v6 00/13] x86/bugs: Add a separate config for each mitigation


* Breno Leitao <[email protected]> wrote:

> > Yeah, so:
> >
> > - I took this older series and updated it to current upstream, and made
> > sure all renames were fully done: there were two new Kconfig option
> > uses, which I integrated into the series. (Sorry about the delay, holiday & stuff.)
> >
> > - I also widened the renames to comments and messages, which were not
> > always covered.
> >
> > - Then I took this cover letter and combined it with a more high level
> > description of the reasoning behind this series I wrote up, and added it
> > to patch #1. (see it below.)
> >
> > - Then I removed the changelog repetition from the other patches and just
> > referred them back to patch #1.
> >
> > - Then I stuck the resulting updated series into tip:x86/bugs, without the
> > last 3 patches that modify behavior.
>
> Thanks for your work. I am currently reviewing the tip branch and the
> merge seems good so far.
>
> Regarding the last 3 patches, what are the next steps?

Please resubmit them in a few days (with Josh's Acked-by added and any
fixes/enhancements done along the way), on top of tip:x86/bugs.

Thanks,

Ingo

2024-04-30 13:12:59

by Breno Leitao

[permalink] [raw]
Subject: Re: [PATCH v6 00/13] x86/bugs: Add a separate config for each mitigation

Hello Ingo,

On Wed, Jan 10, 2024 at 07:07:48PM +0100, Ingo Molnar wrote:
>
> * Breno Leitao <[email protected]> wrote:
>
> > > Yeah, so:
> > >
> > > - I took this older series and updated it to current upstream, and made
> > > sure all renames were fully done: there were two new Kconfig option
> > > uses, which I integrated into the series. (Sorry about the delay, holiday & stuff.)
> > >
> > > - I also widened the renames to comments and messages, which were not
> > > always covered.
> > >
> > > - Then I took this cover letter and combined it with a more high level
> > > description of the reasoning behind this series I wrote up, and added it
> > > to patch #1. (see it below.)
> > >
> > > - Then I removed the changelog repetition from the other patches and just
> > > referred them back to patch #1.
> > >
> > > - Then I stuck the resulting updated series into tip:x86/bugs, without the
> > > last 3 patches that modify behavior.
> >
> > Thanks for your work. I am currently reviewing the tip branch and the
> > merge seems good so far.
> >
> > Regarding the last 3 patches, what are the next steps?
>
> Please resubmit them in a few days (with Josh's Acked-by added and any
> fixes/enhancements done along the way), on top of tip:x86/bugs.

I've sent the commits on top of the latest mitigations. Have you had a
chance to see them?

https://lore.kernel.org/all/[email protected]/

PS: I took the opportunity to break them down, one per mitigation, which
should simplify patch management.

Thanks