From: David Woodhouse <dwmw@amazon.co.uk>
To: ak@linux.intel.com
Cc: David Woodhouse, Paul Turner, LKML, Linus Torvalds,
    Greg Kroah-Hartman, Tim Chen, Dave Hansen, tglx@linutronix.de,
    Kees Cook, Rik van Riel, Peter Zijlstra, Andy Lutomirski,
    Jiri Kosina, gnomes@lxorguk.ukuu.org.uk
Subject: [PATCH v3 10/13] x86/retpoline/pvops: Convert assembler indirect jumps
Date: Thu, 4 Jan 2018 14:37:07 +0000
Message-Id: <20180104143710.8961-10-dwmw@amazon.co.uk>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180104143710.8961-1-dwmw@amazon.co.uk>
References: <1515058213.12987.89.camel@amazon.co.uk>
	<20180104143710.8961-1-dwmw@amazon.co.uk>

Convert pvops invocations to use non-speculative call sequences when
CONFIG_RETPOLINE is enabled.

There is scope for future optimisation here: once the pvops methods are
actually set, we could just turn the damn things into *direct* jumps.
But this is perfectly sufficient for now, without that added complexity.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/x86/include/asm/paravirt_types.h | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 6ec54d01972d..54b735b8ae12 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -336,11 +336,17 @@ extern struct pv_lock_ops pv_lock_ops;
 #define PARAVIRT_PATCH(x)					\
 	(offsetof(struct paravirt_patch_template, x) / sizeof(void *))
 
+#define paravirt_clobber(clobber)		\
+	[paravirt_clobber] "i" (clobber)
+#ifdef CONFIG_RETPOLINE
+#define paravirt_type(op)				\
+	[paravirt_typenum] "i" (PARAVIRT_PATCH(op)),	\
+	[paravirt_opptr] "r" ((op))
+#else
 #define paravirt_type(op)				\
 	[paravirt_typenum] "i" (PARAVIRT_PATCH(op)),	\
 	[paravirt_opptr] "i" (&(op))
-#define paravirt_clobber(clobber)		\
-	[paravirt_clobber] "i" (clobber)
+#endif
 
 /*
  * Generate some code, and mark it as patchable by the
@@ -392,7 +398,11 @@ int paravirt_disable_iospace(void);
  * offset into the paravirt_patch_template structure, and can therefore be
  * freely converted back into a structure offset.
  */
+#ifdef CONFIG_RETPOLINE
+#define PARAVIRT_CALL	"call __x86.indirect_thunk.%V[paravirt_opptr];"
+#else
 #define PARAVIRT_CALL	"call *%c[paravirt_opptr];"
+#endif
 
 /*
  * These macros are intended to wrap calls through one of the paravirt
-- 
2.14.3
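
For readers following the mechanics, below is a minimal user-space
sketch of the retpoline construct that the __x86.indirect_thunk.<reg>
thunks named above implement. It assumes x86-64 and GCC/Clang extended
asm; the names retpoline_thunk_rax and target_fn are invented for the
example, and this is an illustration of the technique, not the kernel's
implementation.

#include <stdio.h>

/*
 * Per-register thunk: the branch target arrives in %rax.  The inner
 * "call" pushes the address of the speculation trap as a return
 * address; the architectural path then overwrites that return address
 * with the real target and returns, so no indirect call/jmp that the
 * branch predictor could steer is ever executed.
 */
asm(".text\n"
    "retpoline_thunk_rax:\n"
    "	call	1f\n"
    "2:	pause\n"			/* speculation trap */
    "	jmp	2b\n"
    "1:	mov	%rax, (%rsp)\n"		/* replace return address with target */
    "	ret\n");

static int called;
static void target_fn(void) { called = 1; }

int main(void)
{
	void (*fp)(void) = target_fn;

	/*
	 * What PARAVIRT_CALL does under CONFIG_RETPOLINE: the "r"
	 * constraint puts the target in a register, and the call goes
	 * to the matching thunk instead of "call *%rax" directly.
	 */
	asm volatile("call retpoline_thunk_rax"
		     : "+a" (fp)
		     :
		     : "rcx", "rdx", "rsi", "rdi",
		       "r8", "r9", "r10", "r11", "cc", "memory");

	printf("called = %d\n", called);	/* prints "called = 1" */
	return 0;
}

This also shows why the patch changes the [paravirt_opptr] constraint
from "i" (&(op)) to "r" ((op)): the thunk needs the target loaded into
a register, and the %V output modifier turns that register's name into
part of the thunk symbol at assembly time.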