Message-ID: <20230809072200.850338672@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 09 Aug 2023 09:12:24 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, David.Kaplan@amd.com,
    Andrew.Cooper3@citrix.com, jpoimboe@kernel.org, gregkh@linuxfoundation.org
Subject: [RFC][PATCH 06/17] x86/cpu: Add SRSO untrain to retbleed=
References: <20230809071218.000335006@infradead.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Since it is now readily apparent that the two SRSO untrain_ret+return_thunk
variants are exactly the same mechanism as the existing (retbleed) zen
untrain_ret+return_thunk, add them to the existing retbleed= options.

This avoids all confusion as to which of the three -- if any -- ought to be
active: there is a single point of control and no funny interactions.
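Concretely, after this change all three untrain variants are selected through the one existing parameter; illustrative kernel command-line values (taken from the parser in the patch, not new flags):

```
retbleed=unret        # original zen untrain thunk (retbleed)
retbleed=srso         # SRSO untrain thunk
retbleed=srso_alias   # SRSO alias untrain thunk
retbleed=auto         # pick a mitigation based on detected bugs (default)
```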
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/kernel/cpu/bugs.c |   87 +++++++++++++++++++++++++++++++++++++++------
 1 file changed, 76 insertions(+), 11 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -748,6 +748,8 @@ enum spectre_v2_mitigation spectre_v2_en
 enum retbleed_mitigation {
 	RETBLEED_MITIGATION_NONE,
 	RETBLEED_MITIGATION_UNRET,
+	RETBLEED_MITIGATION_UNRET_SRSO,
+	RETBLEED_MITIGATION_UNRET_SRSO_ALIAS,
 	RETBLEED_MITIGATION_IBPB,
 	RETBLEED_MITIGATION_IBRS,
 	RETBLEED_MITIGATION_EIBRS,
@@ -758,17 +760,21 @@ enum retbleed_mitigation_cmd {
 	RETBLEED_CMD_OFF,
 	RETBLEED_CMD_AUTO,
 	RETBLEED_CMD_UNRET,
+	RETBLEED_CMD_UNRET_SRSO,
+	RETBLEED_CMD_UNRET_SRSO_ALIAS,
 	RETBLEED_CMD_IBPB,
 	RETBLEED_CMD_STUFF,
 };
 
 static const char * const retbleed_strings[] = {
-	[RETBLEED_MITIGATION_NONE]	= "Vulnerable",
-	[RETBLEED_MITIGATION_UNRET]	= "Mitigation: untrained return thunk",
-	[RETBLEED_MITIGATION_IBPB]	= "Mitigation: IBPB",
-	[RETBLEED_MITIGATION_IBRS]	= "Mitigation: IBRS",
-	[RETBLEED_MITIGATION_EIBRS]	= "Mitigation: Enhanced IBRS",
-	[RETBLEED_MITIGATION_STUFF]	= "Mitigation: Stuffing",
+	[RETBLEED_MITIGATION_NONE]		= "Vulnerable",
+	[RETBLEED_MITIGATION_UNRET]		= "Mitigation: untrained return thunk",
+	[RETBLEED_MITIGATION_UNRET_SRSO]	= "Mitigation: srso untrained return thunk",
+	[RETBLEED_MITIGATION_UNRET_SRSO_ALIAS]	= "Mitigation: srso alias untrained return thunk",
+	[RETBLEED_MITIGATION_IBPB]		= "Mitigation: IBPB",
+	[RETBLEED_MITIGATION_IBRS]		= "Mitigation: IBRS",
+	[RETBLEED_MITIGATION_EIBRS]		= "Mitigation: Enhanced IBRS",
+	[RETBLEED_MITIGATION_STUFF]		= "Mitigation: Stuffing",
 };
 
 static enum retbleed_mitigation retbleed_mitigation __ro_after_init =
@@ -796,6 +802,10 @@ static int __init retbleed_parse_cmdline
 		retbleed_cmd = RETBLEED_CMD_AUTO;
 	} else if (!strcmp(str, "unret")) {
 		retbleed_cmd = RETBLEED_CMD_UNRET;
+	} else if (!strcmp(str, "srso")) {
+		retbleed_cmd = RETBLEED_CMD_UNRET_SRSO;
+	} else if (!strcmp(str, "srso_alias")) {
+		retbleed_cmd = RETBLEED_CMD_UNRET_SRSO_ALIAS;
 	} else if (!strcmp(str, "ibpb")) {
 		retbleed_cmd = RETBLEED_CMD_IBPB;
 	} else if (!strcmp(str, "stuff")) {
@@ -817,21 +827,54 @@ early_param("retbleed", retbleed_parse_c
 
 #define RETBLEED_UNTRAIN_MSG "WARNING: BTB untrained return thunk mitigation is only effective on AMD/Hygon!\n"
 #define RETBLEED_INTEL_MSG "WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!\n"
+#define RETBLEED_SRSO_NOTICE "WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options."
 
 static void __init retbleed_select_mitigation(void)
 {
 	bool mitigate_smt = false;
+	bool has_microcode = false;
 
-	if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off())
+	if ((!boot_cpu_has_bug(X86_BUG_RETBLEED) && !boot_cpu_has_bug(X86_BUG_SRSO)) ||
+	    cpu_mitigations_off())
 		return;
 
+	if (boot_cpu_has_bug(X86_BUG_SRSO)) {
+		has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) || cpu_has_ibpb_brtype_microcode();
+		if (!has_microcode) {
+			pr_warn("IBPB-extending microcode not applied!\n");
+			pr_warn(RETBLEED_SRSO_NOTICE);
+		} else {
+			/*
+			 * Enable the synthetic (even if in a real CPUID leaf)
+			 * flags for guests.
+			 */
+			setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
+			setup_force_cpu_cap(X86_FEATURE_SBPB);
+
+			/*
+			 * Zen1/2 with SMT off aren't vulnerable after the right
+			 * IBPB microcode has been applied.
+			 */
+			if ((boot_cpu_data.x86 < 0x19) &&
+			    (cpu_smt_control == CPU_SMT_DISABLED))
+				setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
+		}
+	}
+
 	switch (retbleed_cmd) {
 	case RETBLEED_CMD_OFF:
 		return;
 
 	case RETBLEED_CMD_UNRET:
+	case RETBLEED_CMD_UNRET_SRSO:
+	case RETBLEED_CMD_UNRET_SRSO_ALIAS:
 		if (IS_ENABLED(CONFIG_CPU_UNRET_ENTRY)) {
-			retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
+			if (retbleed_cmd == RETBLEED_CMD_UNRET)
+				retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
+			if (retbleed_cmd == RETBLEED_CMD_UNRET_SRSO)
+				retbleed_mitigation = RETBLEED_MITIGATION_UNRET_SRSO;
+			if (retbleed_cmd == RETBLEED_CMD_UNRET_SRSO_ALIAS)
+				retbleed_mitigation = RETBLEED_MITIGATION_UNRET_SRSO_ALIAS;
 		} else {
 			pr_err("WARNING: kernel not compiled with CPU_UNRET_ENTRY.\n");
 			goto do_cmd_auto;
@@ -843,6 +886,8 @@ static void __init retbleed_select_mitig
 			pr_err("WARNING: CPU does not support IBPB.\n");
 			goto do_cmd_auto;
 		} else if (IS_ENABLED(CONFIG_CPU_IBPB_ENTRY)) {
+			if (boot_cpu_has_bug(X86_BUG_SRSO) && !has_microcode)
+				pr_err("IBPB-extending microcode not applied; SRSO NOT mitigated\n");
 			retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
 		} else {
 			pr_err("WARNING: kernel not compiled with CPU_IBPB_ENTRY.\n");
@@ -870,8 +915,17 @@ static void __init retbleed_select_mitig
 	default:
 		if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
 		    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
-			if (IS_ENABLED(CONFIG_CPU_UNRET_ENTRY))
-				retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
+			if (IS_ENABLED(CONFIG_CPU_UNRET_ENTRY)) {
+				if (boot_cpu_has_bug(X86_BUG_RETBLEED))
+					retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
+
+				if (boot_cpu_has_bug(X86_BUG_SRSO) && !boot_cpu_has(X86_FEATURE_SRSO_NO)) {
+					if (boot_cpu_data.x86 == 0x19)
+						retbleed_mitigation = RETBLEED_MITIGATION_UNRET_SRSO_ALIAS;
+					else
+						retbleed_mitigation = RETBLEED_MITIGATION_UNRET_SRSO;
+				}
+			}
 			else if (IS_ENABLED(CONFIG_CPU_IBPB_ENTRY) && boot_cpu_has(X86_FEATURE_IBPB))
 				retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
 		}
@@ -886,9 +940,20 @@ static void __init retbleed_select_mitig
 	}
 
 	switch (retbleed_mitigation) {
+	case RETBLEED_MITIGATION_UNRET_SRSO_ALIAS:
+		setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS);
+		x86_return_thunk = srso_alias_return_thunk;
+		goto do_rethunk;
+
+	case RETBLEED_MITIGATION_UNRET_SRSO:
+		setup_force_cpu_cap(X86_FEATURE_SRSO);
+		x86_return_thunk = srso_return_thunk;
+		goto do_rethunk;
+
 	case RETBLEED_MITIGATION_UNRET:
-		setup_force_cpu_cap(X86_FEATURE_RETHUNK);
 		setup_force_cpu_cap(X86_FEATURE_UNRET);
+do_rethunk:
+		setup_force_cpu_cap(X86_FEATURE_RETHUNK);
 
 		if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
 		    boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)