From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Linus Torvalds, Tim Chen, Josh Poimboeuf,
 Andrew Cooper, Pawan Gupta, Johannes Wikner, Alyssa Milburn,
 Jann Horn, "H.J. Lu", Joao Moreira, Joseph Nuzman, Steven Rostedt,
 Juergen Gross, "Peter Zijlstra (Intel)", Masami Hiramatsu,
 Alexei Starovoitov, Daniel Borkmann
Subject: Re: [patch 00/38] x86/retbleed: Call depth tracking mitigation
In-Reply-To: <20220716230344.239749011@linutronix.de>
References: <20220716230344.239749011@linutronix.de>
Date: Mon, 18 Jul 2022 21:55:48 +0200
Message-ID: <87r12iurej.ffs@tglx>

On Sun, Jul 17 2022 at 01:17, Thomas Gleixner wrote:
> For 4 RET paths randomized with randomize_kstack_offset=y and RSP bits 3, 6, 5:
>
>                        IBRS       stuff      stuff(pad)  confuse
> microbench:            +37.20%    +18.46%    +15.47%     +7.46%
> sockperf 14 bytes:     -23.76%    -19.26%    -14.31%     -16.80%
> sockperf 1472 bytes:   -22.51%    -18.40%    -12.25%     -15.95%
>
> So for the more randomized variant sockperf tanks and is already slower
> than stuffing with thunks in the compiler provided padding space.
>
> I sent out a patch in reply to this series which implements that variant,
> but there needs to be input from the security researchers on how protective
> this is.
> If we could get away with 2 RET paths (perhaps multiple instances
> with different bits), that would be amazing.

Here it goes.

---
Subject: x86/retbleed: Add confusion mitigation
From: Thomas Gleixner
Date: Fri, 15 Jul 2022 11:41:05 +0200

- NOT FOR INCLUSION -

Experimental option to confuse the return path by randomization.

The following command line options enable this:

	retbleed=confuse	4 return paths
	retbleed=confuse,4	4 return paths
	retbleed=confuse,3	3 return paths
	retbleed=confuse,2	2 return paths

This needs scrutiny by security researchers.

Not-Signed-off-by: Thomas Gleixner
---
 arch/x86/Kconfig                     |   12 ++++++
 arch/x86/include/asm/nospec-branch.h |   23 +++++++++++
 arch/x86/kernel/cpu/bugs.c           |   41 +++++++++++++++++++++
 arch/x86/lib/retpoline.S             |   68 +++++++++++++++++++++++++++++++++++
 include/linux/randomize_kstack.h     |    6 +++
 kernel/entry/common.c                |    3 +
 6 files changed, 153 insertions(+)

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2538,6 +2538,18 @@ config CALL_THUNKS_DEBUG
 	  Only enable this, when you are debugging call thunks as this
 	  creates a noticable runtime overhead. If unsure say N.
 
+config RETURN_CONFUSION
+	bool "Mitigate RSB underflow with return confusion"
+	depends on CPU_SUP_INTEL && RETHUNK && RANDOMIZE_KSTACK_OFFSET
+	default y
+	help
+	  Compile the kernel with return path confusion to mitigate the
+	  Intel SKL Return-Speculation-Buffer (RSB) underflow issue. The
+	  mitigation is off by default and needs to be enabled on the
+	  kernel command line via the retbleed=confuse option. For
+	  non-affected systems the overhead of this option is marginal as
+	  the return thunk jumps are patched to direct ret instructions.
+
 config CPU_IBPB_ENTRY
 	bool "Enable IBPB on kernel entry"
 	depends on CPU_SUP_AMD
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -312,6 +312,29 @@ static inline void x86_set_skl_return_th
 
 #endif
 
+#ifdef CONFIG_RETURN_CONFUSION
+extern void __x86_return_confused_skl2(void);
+extern void __x86_return_confused_skl3(void);
+extern void __x86_return_confused_skl4(void);
+
+static inline void x86_set_skl_confused_return_thunk(int which)
+{
+	switch (which) {
+	case 2:
+		x86_return_thunk = &__x86_return_confused_skl2;
+		break;
+	case 3:
+		x86_return_thunk = &__x86_return_confused_skl3;
+		break;
+	case 4:
+		x86_return_thunk = &__x86_return_confused_skl4;
+		break;
+	}
+}
+#else
+static inline void x86_set_skl_confused_return_thunk(int which) { }
+#endif
+
 #ifdef CONFIG_RETPOLINE
 
 #define GEN(reg) \
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -785,6 +786,7 @@ enum retbleed_mitigation {
 	RETBLEED_MITIGATION_IBRS,
 	RETBLEED_MITIGATION_EIBRS,
 	RETBLEED_MITIGATION_STUFF,
+	RETBLEED_MITIGATION_CONFUSE,
 };
 
 enum retbleed_mitigation_cmd {
@@ -793,6 +795,7 @@ enum retbleed_mitigation_cmd {
 	RETBLEED_CMD_UNRET,
 	RETBLEED_CMD_IBPB,
 	RETBLEED_CMD_STUFF,
+	RETBLEED_CMD_CONFUSE,
 };
 
 const char * const retbleed_strings[] = {
@@ -802,6 +805,7 @@ const char * const retbleed_strings[] =
 	[RETBLEED_MITIGATION_IBRS]	= "Mitigation: IBRS",
 	[RETBLEED_MITIGATION_EIBRS]	= "Mitigation: Enhanced IBRS",
 	[RETBLEED_MITIGATION_STUFF]	= "Mitigation: Stuffing",
+	[RETBLEED_MITIGATION_CONFUSE]	= "Mitigation: Return confusion",
 };
 
 static enum retbleed_mitigation retbleed_mitigation __ro_after_init =
@@ -810,6 +814,7 @@ static enum retbleed_mitigation_cmd retb
 	RETBLEED_CMD_AUTO;
 
 static int __ro_after_init retbleed_nosmt = false;
+static int __ro_after_init rethunk_confuse_skl = 4;
 
 static int __init retbleed_parse_cmdline(char *str)
 {
@@ -833,8 +838,19 @@ static int __init retbleed_parse_cmdline
 		retbleed_cmd = RETBLEED_CMD_IBPB;
 	} else if (!strcmp(str, "stuff")) {
 		retbleed_cmd = RETBLEED_CMD_STUFF;
+	} else if (!strcmp(str, "confuse")) {
+		retbleed_cmd = RETBLEED_CMD_CONFUSE;
 	} else if (!strcmp(str, "nosmt")) {
 		retbleed_nosmt = true;
+	} else if (retbleed_cmd == RETBLEED_CMD_CONFUSE &&
+		   !kstrtouint(str, 10, &rethunk_confuse_skl)) {
+
+		if (rethunk_confuse_skl < 2 ||
+		    rethunk_confuse_skl > 4) {
+			pr_err("Ignoring out-of-bound confuse count (%d).",
+			       rethunk_confuse_skl);
+			rethunk_confuse_skl = 4;
+		}
 	} else {
 		pr_err("Ignoring unknown retbleed option (%s).", str);
 	}
@@ -896,6 +912,25 @@ static void __init retbleed_select_mitig
 		}
 		break;
 
+	case RETBLEED_CMD_CONFUSE:
+		if (IS_ENABLED(CONFIG_RETURN_CONFUSION) &&
+		    spectre_v2_enabled == SPECTRE_V2_RETPOLINE &&
+		    random_kstack_offset_enabled()) {
+			retbleed_mitigation = RETBLEED_MITIGATION_CONFUSE;
+		} else {
+			if (IS_ENABLED(CONFIG_RETURN_CONFUSION) &&
+			    !random_kstack_offset_enabled())
+				pr_err("WARNING: retbleed=confuse depends on randomize_kstack_offset=y\n");
+			else if (IS_ENABLED(CONFIG_RETURN_CONFUSION) &&
+				 spectre_v2_enabled != SPECTRE_V2_RETPOLINE)
+				pr_err("WARNING: retbleed=confuse depends on spectre_v2=retpoline\n");
+			else
+				pr_err("WARNING: kernel not compiled with RETURN_CONFUSION.\n");
+
+			goto do_cmd_auto;
+		}
+		break;
+
 do_cmd_auto:
 	case RETBLEED_CMD_AUTO:
 	default:
@@ -939,6 +974,11 @@ static void __init retbleed_select_mitig
 		x86_set_skl_return_thunk();
 		break;
 
+	case RETBLEED_MITIGATION_CONFUSE:
+		setup_force_cpu_cap(X86_FEATURE_RETHUNK);
+		x86_set_skl_confused_return_thunk(rethunk_confuse_skl);
+		break;
+
 	default:
 		break;
 	}
@@ -1389,6 +1429,7 @@ static void __init spectre_v2_select_mit
 	    boot_cpu_has_bug(X86_BUG_RETBLEED) &&
 	    retbleed_cmd != RETBLEED_CMD_OFF &&
 	    retbleed_cmd != RETBLEED_CMD_STUFF &&
+	    retbleed_cmd != RETBLEED_CMD_CONFUSE &&
 	    boot_cpu_has(X86_FEATURE_IBRS) &&
 	    boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
 		mode = SPECTRE_V2_IBRS;
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -230,3 +230,71 @@ SYM_FUNC_START(__x86_return_skl)
 SYM_FUNC_END(__x86_return_skl)
 
 #endif /* CONFIG_CALL_DEPTH_TRACKING */
+
+#ifdef CONFIG_RETURN_CONFUSION
+	.align 64
+SYM_FUNC_START(__x86_return_confused_skl4)
+	ANNOTATE_NOENDBR
+	testq	$3, %rsp
+	jz	1f
+
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
+1:
+	testq	$6, %rsp
+	jz	2f
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
+
+2:
+	testq	$5, %rsp
+	jz	3f
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
+3:
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
+SYM_FUNC_END(__x86_return_confused_skl4)
+
+	.align 64
+SYM_FUNC_START(__x86_return_confused_skl3)
+	ANNOTATE_NOENDBR
+	testq	$3, %rsp
+	jz	1f
+
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
+1:
+	testq	$6, %rsp
+	jz	2f
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
+
+2:
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
+SYM_FUNC_END(__x86_return_confused_skl3)
+
+	.align 64
+SYM_FUNC_START(__x86_return_confused_skl2)
+	ANNOTATE_NOENDBR
+	testq	$3, %rsp
+	jz	1f
+
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
+1:
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
+SYM_FUNC_END(__x86_return_confused_skl2)
+
+#endif /* CONFIG_RETURN_CONFUSION */
--- a/include/linux/randomize_kstack.h
+++ b/include/linux/randomize_kstack.h
@@ -84,9 +84,15 @@ DECLARE_PER_CPU(u32, kstack_offset);
 		raw_cpu_write(kstack_offset, offset);			\
 	}								\
 } while (0)
+
+#define random_kstack_offset_enabled()					\
+	static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,	\
+			    &randomize_kstack_offset)
+
 #else /* CONFIG_RANDOMIZE_KSTACK_OFFSET */
 #define add_random_kstack_offset()	do { } while (0)
 #define choose_random_kstack_offset(rand)	do { } while (0)
+#define random_kstack_offset_enabled()	false
 #endif /* CONFIG_RANDOMIZE_KSTACK_OFFSET */
 
 #endif
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -298,6 +298,7 @@ void syscall_exit_to_user_mode_work(stru
 
 noinstr void irqentry_enter_from_user_mode(struct pt_regs *regs)
 {
+	add_random_kstack_offset();
 	__enter_from_user_mode(regs);
 }
 
@@ -444,6 +445,8 @@ irqentry_state_t noinstr irqentry_nmi_en
 {
 	irqentry_state_t irq_state;
+	if (user_mode(regs))
+		add_random_kstack_offset();
 	irq_state.lockdep = lockdep_hardirqs_enabled();
 
 	__nmi_enter();