From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Josh Poimboeuf,
    "Peter Zijlstra (Intel)", Borislav Petkov,
    Thadeu Lima de Souza Cascardo, Ben Hutchings
Subject: [PATCH 5.10 122/148] x86/speculation: Fill RSB on vmexit for IBRS
Date: Sat, 23 Jul 2022 11:55:34 +0200
Message-Id: <20220723095258.498105036@linuxfoundation.org>
In-Reply-To: <20220723095224.302504400@linuxfoundation.org>
References: <20220723095224.302504400@linuxfoundation.org>
X-Mailer: git-send-email 2.37.1
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

From: Josh Poimboeuf

commit 9756bba28470722dacb79ffce554336dd1f6a6cd upstream.

Prevent RSB underflow/poisoning attacks with RSB.

While at it, add a bunch of comments to attempt to document the current
state of tribal knowledge about RSB attacks and what exactly is being
mitigated.
Signed-off-by: Josh Poimboeuf
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Borislav Petkov
Signed-off-by: Thadeu Lima de Souza Cascardo
Signed-off-by: Ben Hutchings
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/include/asm/cpufeatures.h |    2 -
 arch/x86/kernel/cpu/bugs.c         |   63 ++++++++++++++++++++++++++++++++++---
 arch/x86/kvm/vmx/vmenter.S         |    6 +--
 3 files changed, 62 insertions(+), 9 deletions(-)

--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -204,7 +204,7 @@
 #define X86_FEATURE_SME			( 7*32+10) /* AMD Secure Memory Encryption */
 #define X86_FEATURE_PTI			( 7*32+11) /* Kernel Page Table Isolation enabled */
 #define X86_FEATURE_KERNEL_IBRS		( 7*32+12) /* "" Set/clear IBRS on kernel entry/exit */
-/* FREE!				( 7*32+13) */
+#define X86_FEATURE_RSB_VMEXIT		( 7*32+13) /* "" Fill RSB on VM-Exit */
 #define X86_FEATURE_INTEL_PPIN		( 7*32+14) /* Intel Processor Inventory Number */
 #define X86_FEATURE_CDP_L2		( 7*32+15) /* Code and Data Prioritization L2 */
 #define X86_FEATURE_MSR_SPEC_CTRL	( 7*32+16) /* "" MSR SPEC_CTRL is implemented */
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1357,17 +1357,70 @@ static void __init spectre_v2_select_mit
 	pr_info("%s\n", spectre_v2_strings[mode]);
 
 	/*
-	 * If spectre v2 protection has been enabled, unconditionally fill
-	 * RSB during a context switch; this protects against two independent
-	 * issues:
+	 * If Spectre v2 protection has been enabled, fill the RSB during a
+	 * context switch.  In general there are two types of RSB attacks
+	 * across context switches, for which the CALLs/RETs may be unbalanced.
 	 *
-	 * - RSB underflow (and switch to BTB) on Skylake+
-	 * - SpectreRSB variant of spectre v2 on X86_BUG_SPECTRE_V2 CPUs
+	 * 1) RSB underflow
+	 *
+	 *    Some Intel parts have "bottomless RSB".  When the RSB is empty,
+	 *    speculated return targets may come from the branch predictor,
+	 *    which could have a user-poisoned BTB or BHB entry.
+	 *
+	 *    AMD has it even worse: *all* returns are speculated from the BTB,
+	 *    regardless of the state of the RSB.
+	 *
+	 *    When IBRS or eIBRS is enabled, the "user -> kernel" attack
+	 *    scenario is mitigated by the IBRS branch prediction isolation
+	 *    properties, so the RSB buffer filling wouldn't be necessary to
+	 *    protect against this type of attack.
+	 *
+	 *    The "user -> user" attack scenario is mitigated by RSB filling.
+	 *
+	 * 2) Poisoned RSB entry
+	 *
+	 *    If the 'next' in-kernel return stack is shorter than 'prev',
+	 *    'next' could be tricked into speculating with a user-poisoned RSB
+	 *    entry.
+	 *
+	 *    The "user -> kernel" attack scenario is mitigated by SMEP and
+	 *    eIBRS.
+	 *
+	 *    The "user -> user" scenario, also known as SpectreBHB, requires
+	 *    RSB clearing.
+	 *
+	 * So to mitigate all cases, unconditionally fill RSB on context
+	 * switches.
+	 *
+	 * FIXME: Is this pointless for retbleed-affected AMD?
 	 */
 	setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
 	pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
 
 	/*
+	 * Similar to context switches, there are two types of RSB attacks
+	 * after vmexit:
+	 *
+	 * 1) RSB underflow
+	 *
+	 * 2) Poisoned RSB entry
+	 *
+	 * When retpoline is enabled, both are mitigated by filling/clearing
+	 * the RSB.
+	 *
+	 * When IBRS is enabled, while #1 would be mitigated by the IBRS branch
+	 * prediction isolation protections, RSB still needs to be cleared
+	 * because of #2.  Note that SMEP provides no protection here, unlike
+	 * user-space-poisoned RSB entries.
+	 *
+	 * eIBRS, on the other hand, has RSB-poisoning protections, so it
+	 * doesn't need RSB clearing after vmexit.
+	 */
+	if (boot_cpu_has(X86_FEATURE_RETPOLINE) ||
+	    boot_cpu_has(X86_FEATURE_KERNEL_IBRS))
+		setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT);
+
+	/*
 	 * Retpoline protects the kernel, but doesn't protect firmware.  IBRS
 	 * and Enhanced IBRS protect firmware too, so enable IBRS around
 	 * firmware calls only when IBRS / Enhanced IBRS aren't otherwise
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -193,15 +193,15 @@ SYM_INNER_LABEL(vmx_vmexit, SYM_L_GLOBAL
 	 * IMPORTANT: RSB filling and SPEC_CTRL handling must be done before
 	 * the first unbalanced RET after vmexit!
 	 *
-	 * For retpoline, RSB filling is needed to prevent poisoned RSB entries
-	 * and (in some cases) RSB underflow.
+	 * For retpoline or IBRS, RSB filling is needed to prevent poisoned RSB
+	 * entries and (in some cases) RSB underflow.
 	 *
 	 * eIBRS has its own protection against poisoned RSB, so it doesn't
 	 * need the RSB filling sequence.  But it does need to be enabled
 	 * before the first unbalanced RET.
 	 */
-	FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
+	FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT
 
 	pop %_ASM_ARG2	/* @flags */
 	pop %_ASM_ARG1	/* @vmx */
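
As a rough illustration of the selection logic above (not part of the patch
itself), the standalone C sketch below mirrors the decision that
spectre_v2_select_mitigation() makes after this change: RSB filling is always
forced for context switches, while the new X86_FEATURE_RSB_VMEXIT capability
is forced only when retpoline or plain kernel IBRS is in use, because eIBRS
already protects the RSB across vmexit. The struct, its fields and main() are
hypothetical stand-ins for boot_cpu_has()/setup_force_cpu_cap(); this is a
model for reading the diff, not kernel code.

/*
 * Standalone sketch (not kernel code): models the capability selection
 * added by this patch.  The struct fields stand in for
 * boot_cpu_has(X86_FEATURE_*); setting the bools stands in for
 * setup_force_cpu_cap().
 */
#include <stdbool.h>
#include <stdio.h>

struct mitigation_state {
	bool retpoline;     /* X86_FEATURE_RETPOLINE */
	bool kernel_ibrs;   /* X86_FEATURE_KERNEL_IBRS (plain IBRS, not eIBRS) */
	bool rsb_ctxsw;     /* X86_FEATURE_RSB_CTXSW */
	bool rsb_vmexit;    /* X86_FEATURE_RSB_VMEXIT */
};

static void select_rsb_mitigations(struct mitigation_state *m)
{
	/*
	 * Context switches: fill the RSB unconditionally, since both
	 * underflow and poisoned-entry attacks are possible there.
	 */
	m->rsb_ctxsw = true;

	/*
	 * vmexit: retpoline needs the fill for both attack types; plain
	 * IBRS still needs it for poisoned entries.  eIBRS protects the
	 * RSB by itself, so neither condition holds and no fill is forced.
	 */
	if (m->retpoline || m->kernel_ibrs)
		m->rsb_vmexit = true;
}

int main(void)
{
	struct mitigation_state m = { .kernel_ibrs = true };

	select_rsb_mitigations(&m);
	printf("RSB_CTXSW=%d RSB_VMEXIT=%d\n", m.rsb_ctxsw, m.rsb_vmexit);
	return 0;
}

Note how plain IBRS still lands in the "fill" branch: as the bugs.c comment
explains, IBRS isolates branch prediction but does not stop a guest-poisoned
RSB entry from being consumed after vmexit, which is why X86_FEATURE_RETPOLINE
alone is no longer a sufficient trigger for FILL_RETURN_BUFFER.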