From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, David Woodhouse,
    Thomas Gleixner, "Peter Zijlstra (Intel)", gnomes@lxorguk.ukuu.org.uk,
    Rik van Riel, Andi Kleen, Josh Poimboeuf, thomas.lendacky@amd.com,
    Linus Torvalds, Jiri Kosina, Andy Lutomirski, Dave Hansen, Kees Cook,
    Tim Chen, Paul Turner, Razvan Ghitulete, Greg Kroah-Hartman
Subject: [PATCH 4.4 19/53] x86/retpoline: Fill return stack buffer on vmexit
Date: Mon, 22 Jan 2018 09:40:11 +0100
Message-Id: <20180122083911.099134896@linuxfoundation.org>
In-Reply-To: <20180122083910.299610926@linuxfoundation.org>
References: <20180122083910.299610926@linuxfoundation.org>
X-Mailer: git-send-email 2.16.0
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: David Woodhouse

commit 117cc7a908c83697b0b737d15ae1eb5943afe35b upstream.

In accordance with the Intel and AMD documentation, we need to overwrite
all entries in the RSB on exiting a guest, to prevent malicious branch
target predictions from affecting the host kernel.  This is needed both
for retpoline and for IBRS.

[ak: numbers again for the RSB stuffing labels]

Signed-off-by: David Woodhouse
Signed-off-by: Thomas Gleixner
Tested-by: Peter Zijlstra (Intel)
Cc: gnomes@lxorguk.ukuu.org.uk
Cc: Rik van Riel
Cc: Andi Kleen
Cc: Josh Poimboeuf
Cc: thomas.lendacky@amd.com
Cc: Linus Torvalds
Cc: Jiri Kosina
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Kees Cook
Cc: Tim Chen
Cc: Greg Kroah-Hartman
Cc: Paul Turner
Link: https://lkml.kernel.org/r/1515755487-8524-1-git-send-email-dwmw@amazon.co.uk
Signed-off-by: David Woodhouse
Signed-off-by: Razvan Ghitulete
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/include/asm/nospec-branch.h |   76 ++++++++++++++++++++++++++++++++++-
 arch/x86/kvm/svm.c                   |    4 +
 arch/x86/kvm/vmx.c                   |    4 +
 3 files changed, 83 insertions(+), 1 deletion(-)
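For reference, a minimal stand-alone userspace sketch of the RSB-stuffing
sequence this patch adds follows; it is not part of the patch, and the file
name, the fill_rsb() wrapper and the build flags are only illustrative.  It
runs the same two-calls-per-iteration loop as __FILL_RETURN_BUFFER below:
16 iterations push 32 return addresses, each pointing at a 'pause; jmp'
speculation trap, so all 32 RSB entries are overwritten, and the final add
rewinds the 32 * 8 = 256 bytes of stack, the same $(BITS_PER_LONG/8) * nr
adjustment the macro performs.  Unlike vmexit_fill_RSB() it is not gated by
an ALTERNATIVE, so it runs unconditionally; x86-64 only.

/* rsb_demo.c: userspace sketch of the __FILL_RETURN_BUFFER pattern.
 * Build: gcc -O2 -mno-red-zone rsb_demo.c -o rsb_demo
 * (-mno-red-zone because the transient calls push below %rsp).
 */
#include <stdio.h>

static void fill_rsb(void)
{
	unsigned long loops;

	asm volatile("	mov	$16, %0\n"	/* 32 entries, two calls per loop */
		     "771:\n"
		     "	call	772f\n"		/* pushes trap address 773 */
		     "773:	pause\n"	/* speculation trap */
		     "	jmp	773b\n"
		     "772:\n"
		     "	call	774f\n"		/* pushes trap address 775 */
		     "775:	pause\n"	/* speculation trap */
		     "	jmp	775b\n"
		     "774:\n"
		     "	dec	%0\n"
		     "	jnz	771b\n"
		     "	add	$(8 * 32), %%rsp\n" /* drop the 32 return addresses */
		     : "=r" (loops) : : "memory", "cc");
}

int main(void)
{
	fill_rsb();
	puts("RSB overwritten with 32 speculation-trap entries");
	return 0;
}

Note that the two trap labels are never reached architecturally: execution
falls through 771 -> 772 -> 774 and loops, and only a speculative 'ret'
consuming one of the stuffed RSB entries would land in a trap.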
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -7,6 +7,48 @@
 #include <asm/alternative-asm.h>
 #include <asm/cpufeature.h>
 
+/*
+ * Fill the CPU return stack buffer.
+ *
+ * Each entry in the RSB, if used for a speculative 'ret', contains an
+ * infinite 'pause; jmp' loop to capture speculative execution.
+ *
+ * This is required in various cases for retpoline and IBRS-based
+ * mitigations for the Spectre variant 2 vulnerability. Sometimes to
+ * eliminate potentially bogus entries from the RSB, and sometimes
+ * purely to ensure that it doesn't get empty, which on some CPUs would
+ * allow predictions from other (unwanted!) sources to be used.
+ *
+ * We define a CPP macro such that it can be used from both .S files and
+ * inline assembly. It's possible to do a .macro and then include that
+ * from C via asm(".include <asm/nospec-branch.h>") but let's not go there.
+ */
+
+#define RSB_CLEAR_LOOPS		32	/* To forcibly overwrite all entries */
+#define RSB_FILL_LOOPS		16	/* To avoid underflow */
+
+/*
+ * Google experimented with loop-unrolling and this turned out to be
+ * the optimal version — two calls, each with their own speculation
+ * trap should their return address end up getting used, in a loop.
+ */
+#define __FILL_RETURN_BUFFER(reg, nr, sp)	\
+	mov	$(nr/2), reg;			\
+771:						\
+	call	772f;				\
+773:	/* speculation trap */			\
+	pause;					\
+	jmp	773b;				\
+772:						\
+	call	774f;				\
+775:	/* speculation trap */			\
+	pause;					\
+	jmp	775b;				\
+774:						\
+	dec	reg;				\
+	jnz	771b;				\
+	add	$(BITS_PER_LONG/8) * nr, sp;
+
 #ifdef __ASSEMBLY__
 
 /*
@@ -61,6 +103,19 @@
 #endif
 .endm
 
+/*
+ * A simpler FILL_RETURN_BUFFER macro. Don't make people use the CPP
+ * monstrosity above, manually.
+ */
+.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req
+#ifdef CONFIG_RETPOLINE
+	ALTERNATIVE "jmp .Lskip_rsb_\@",				\
+		__stringify(__FILL_RETURN_BUFFER(\reg,\nr,%_ASM_SP))	\
+		\ftr
+.Lskip_rsb_\@:
+#endif
+.endm
+
 #else /* __ASSEMBLY__ */
 
 #if defined(CONFIG_X86_64) && defined(RETPOLINE)
@@ -97,7 +152,7 @@
 	X86_FEATURE_RETPOLINE)
 
 # define THUNK_TARGET(addr) [thunk_target] "rm" (addr)
-#else /* No retpoline */
+#else /* No retpoline for C / inline asm */
 # define CALL_NOSPEC "call *%[thunk_target]\n"
 # define THUNK_TARGET(addr) [thunk_target] "rm" (addr)
 #endif
@@ -112,5 +167,24 @@ enum spectre_v2_mitigation {
 	SPECTRE_V2_IBRS,
 };
 
+/*
+ * On VMEXIT we must ensure that no RSB predictions learned in the guest
+ * can be followed in the host, by overwriting the RSB completely. Both
+ * retpoline and IBRS mitigations for Spectre v2 need this; only on future
+ * CPUs with IBRS_ATT *might* it be avoided.
+ */
+static inline void vmexit_fill_RSB(void)
+{
+#ifdef CONFIG_RETPOLINE
+	unsigned long loops = RSB_CLEAR_LOOPS / 2;
+
+	asm volatile (ALTERNATIVE("jmp 910f",
+				  __stringify(__FILL_RETURN_BUFFER(%0, RSB_CLEAR_LOOPS, %1)),
+				  X86_FEATURE_RETPOLINE)
+		      "910:"
+		      : "=&r" (loops), ASM_CALL_CONSTRAINT
+		      : "r" (loops) : "memory" );
+#endif
+}
 #endif /* __ASSEMBLY__ */
 #endif /* __NOSPEC_BRANCH_H__ */
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -37,6 +37,7 @@
 #include <asm/desc.h>
 #include <asm/debugreg.h>
 #include <asm/kvm_para.h>
+#include <asm/nospec-branch.h>
 
 #include <asm/virtext.h>
 #include "trace.h"
@@ -3904,6 +3905,9 @@ static void svm_vcpu_run(struct kvm_vcpu
 #endif
 	);
 
+	/* Eliminate branch target predictions from guest mode */
+	vmexit_fill_RSB();
+
 #ifdef CONFIG_X86_64
 	wrmsrl(MSR_GS_BASE, svm->host.gs_base);
 #else
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -47,6 +47,7 @@
 #include <asm/kexec.h>
 #include <asm/apic.h>
 #include <asm/irq_remapping.h>
+#include <asm/nospec-branch.h>
 
 #include "trace.h"
 #include "pmu.h"
@@ -8701,6 +8702,9 @@ static void __noclone vmx_vcpu_run(struc
 #endif
 	);
 
+	/* Eliminate branch target predictions from guest mode */
+	vmexit_fill_RSB();
+
 	/* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */
 	if (debugctlmsr)
 		update_debugctlmsr(debugctlmsr);
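
The header comment stresses that __FILL_RETURN_BUFFER is a CPP macro
precisely so it can be shared between C inline asm (vmexit_fill_RSB()
above) and .S files (via the FILL_RETURN_BUFFER assembler macro).  This
patch adds no .S users yet; purely for illustration, a hypothetical
stand-alone .S sketch, with the macro body copied verbatim from the patch
and an invented main label, could look like the following.  Assemble and
link with the gcc driver so the file is run through the C preprocessor.

/* rsb_demo.S: the same CPP macro, used from an assembly file.
 * Build: gcc rsb_demo.S -o rsb_demo_s   (x86-64 Linux)
 */
#define BITS_PER_LONG 64

#define __FILL_RETURN_BUFFER(reg, nr, sp)	\
	mov	$(nr/2), reg;			\
771:						\
	call	772f;				\
773:	/* speculation trap */			\
	pause;					\
	jmp	773b;				\
772:						\
	call	774f;				\
775:	/* speculation trap */			\
	pause;					\
	jmp	775b;				\
774:						\
	dec	reg;				\
	jnz	771b;				\
	add	$(BITS_PER_LONG/8) * nr, sp;

	.text
	.globl	main
main:
	__FILL_RETURN_BUFFER(%rax, 32, %rsp)
	xor	%eax, %eax		/* return 0 */
	ret

In-kernel, the assembler macro additionally hides the feature gating: a .S
caller would write FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS,
X86_FEATURE_RETPOLINE and let ALTERNATIVE patch the stuffing in or out at
boot, which the sketch above deliberately leaves out.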