Date: Thu, 11 Jan 2018 15:27:07 -0800
From: tip-bot for David Woodhouse
To: linux-tip-commits@vger.kernel.org
Cc: dwmw@amazon.co.uk, jikos@kernel.org, torvalds@linux-foundation.org,
    tglx@linutronix.de, keescook@google.com, ak@linux.intel.com,
    gregkh@linux-foundation.org, hpa@zytor.com, pjt@google.com,
    linux-kernel@vger.kernel.org, riel@redhat.com,
    tim.c.chen@linux.intel.com, peterz@infradead.org, mingo@kernel.org,
    jpoimboe@redhat.com, dave.hansen@intel.com, luto@amacapital.net
In-Reply-To: <1515707194-20531-13-git-send-email-dwmw@amazon.co.uk>
References: <1515707194-20531-13-git-send-email-dwmw@amazon.co.uk>
Subject: [tip:x86/pti] x86/retpoline: Fill return stack buffer on vmexit
Git-Commit-ID: 85ec967c1dc04bde16d783ea04428bef3c00a171

Commit-ID:  85ec967c1dc04bde16d783ea04428bef3c00a171
Gitweb:     https://git.kernel.org/tip/85ec967c1dc04bde16d783ea04428bef3c00a171
Author:     David Woodhouse <dwmw@amazon.co.uk>
AuthorDate: Thu, 11 Jan 2018 21:46:34 +0000
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Fri, 12 Jan 2018 00:14:32 +0100

x86/retpoline: Fill return stack buffer on vmexit

In accordance with the Intel and AMD documentation, all entries in the
RSB must be overwritten on exiting a guest, to prevent malicious branch
target predictions from affecting the host kernel. This is needed both
for retpoline and for IBRS.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: gnomes@lxorguk.ukuu.org.uk
Cc: Rik van Riel <riel@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: thomas.lendacky@amd.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Kees Cook <keescook@google.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
Cc: Paul Turner <pjt@google.com>
Link: https://lkml.kernel.org/r/1515707194-20531-13-git-send-email-dwmw@amazon.co.uk
---
 arch/x86/include/asm/nospec-branch.h | 73 +++++++++++++++++++++++++++++++++++-
 arch/x86/kvm/svm.c                   |  4 ++
 arch/x86/kvm/vmx.c                   |  4 ++
 3 files changed, 80 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index ea034fa..475ab0c 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -7,6 +7,43 @@
 #include
 #include

+/*
+ * Fill the CPU return stack buffer.
+ *
+ * Each entry in the RSB, if used for a speculative 'ret', contains an
+ * infinite 'pause; jmp' loop to capture speculative execution.
+ *
+ * This is required in various cases for retpoline and IBRS-based
+ * mitigations for the Spectre variant 2 vulnerability. Sometimes to
+ * eliminate potentially bogus entries from the RSB, and sometimes
+ * purely to ensure that it doesn't get empty, which on some CPUs would
+ * allow predictions from other (unwanted!) sources to be used.
+ *
+ * We define a CPP macro such that it can be used from both .S files and
+ * inline assembly. It's possible to do a .macro and then include that
+ * from C via asm(".include <asm/nospec-branch.h>") but let's not go there.
+ */
+
+#define RSB_CLEAR_LOOPS		32	/* To forcibly overwrite all entries */
+#define RSB_FILL_LOOPS		16	/* To avoid underflow */
+
+#define __FILL_RETURN_BUFFER(reg, nr, sp, uniq)	\
+	mov	$(nr/2), reg;			\
+.Ldo_call1_ ## uniq:				\
+	call	.Ldo_call2_ ## uniq;		\
+.Ltrap1_ ## uniq:				\
+	pause;					\
+	jmp	.Ltrap1_ ## uniq;		\
+.Ldo_call2_ ## uniq:				\
+	call	.Ldo_loop_ ## uniq;		\
+.Ltrap2_ ## uniq:				\
+	pause;					\
+	jmp	.Ltrap2_ ## uniq;		\
+.Ldo_loop_ ## uniq:				\
+	dec	reg;				\
+	jnz	.Ldo_call1_ ## uniq;		\
+	add	$(BITS_PER_LONG/8) * nr, sp;
+
 #ifdef __ASSEMBLY__

 /*
@@ -76,6 +113,20 @@
 #endif
 .endm

+/*
+ * A simpler FILL_RETURN_BUFFER macro. Don't make people use the CPP
+ * monstrosity above, manually.
+ */
+.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req
+#ifdef CONFIG_RETPOLINE
+	ANNOTATE_NOSPEC_ALTERNATIVE
+	ALTERNATIVE "jmp .Lskip_rsb_\@",				\
+		__stringify(__FILL_RETURN_BUFFER(\reg,\nr,%_ASM_SP,\@))	\
+		\ftr
+.Lskip_rsb_\@:
+#endif
+.endm
+
 #else /* __ASSEMBLY__ */

 #define ANNOTATE_NOSPEC_ALTERNATIVE				\
@@ -119,7 +170,7 @@
 	X86_FEATURE_RETPOLINE)
 # define THUNK_TARGET(addr) [thunk_target] "rm" (addr)

-#else /* No retpoline */
+#else /* No retpoline for C / inline asm */
 # define CALL_NOSPEC "call *%[thunk_target]\n"
 # define THUNK_TARGET(addr) [thunk_target] "rm" (addr)
 #endif
@@ -134,5 +185,25 @@ enum spectre_v2_mitigation {
 	SPECTRE_V2_IBRS,
 };

+/*
+ * On VMEXIT we must ensure that no RSB predictions learned in the guest
+ * can be followed in the host, by overwriting the RSB completely. Both
+ * retpoline and IBRS mitigations for Spectre v2 need this; only on future
+ * CPUs with IBRS_ATT *might* it be avoided.
+ */
+static inline void vmexit_fill_RSB(void)
+{
+#ifdef CONFIG_RETPOLINE
+	unsigned long loops = RSB_CLEAR_LOOPS / 2;
+
+	asm volatile (ANNOTATE_NOSPEC_ALTERNATIVE
+		      ALTERNATIVE("jmp 910f",
+				  __stringify(__FILL_RETURN_BUFFER(%0, RSB_CLEAR_LOOPS, %1, __LINE__)),
+				  X86_FEATURE_RETPOLINE)
+		      "910:"
+		      : "=&r" (loops), ASM_CALL_CONSTRAINT
+		      : "r" (loops) : "memory" );
+#endif
+}
 #endif /* __ASSEMBLY__ */
 #endif /* __NOSPEC_BRANCH_H__ */
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 0e68f0b..2744b973 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -45,6 +45,7 @@
 #include
 #include
 #include
+#include <asm/nospec-branch.h>
 #include
 #include "trace.h"
@@ -4985,6 +4986,9 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
 #endif
 		);

+	/* Eliminate branch target predictions from guest mode */
+	vmexit_fill_RSB();
+
 #ifdef CONFIG_X86_64
 	wrmsrl(MSR_GS_BASE, svm->host.gs_base);
 #else
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 62ee436..d1e25db 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -50,6 +50,7 @@
 #include
 #include
 #include
+#include <asm/nospec-branch.h>
 #include "trace.h"
 #include "pmu.h"
@@ -9403,6 +9404,9 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 #endif
 	      );

+	/* Eliminate branch target predictions from guest mode */
+	vmexit_fill_RSB();
+
 	/* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */
 	if (debugctlmsr)
 		update_debugctlmsr(debugctlmsr);