Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S1753247AbeADPCL (ORCPT + 1 other);
        Thu, 4 Jan 2018 10:02:11 -0500
Received: from mx2.suse.de ([195.135.220.15]:43013 "EHLO mx2.suse.de"
        rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
        id S1753092AbeADPCJ (ORCPT ); Thu, 4 Jan 2018 10:02:09 -0500
Subject: Re: [PATCH v3 10/13] x86/retpoline/pvops: Convert assembler
 indirect jumps
To: David Woodhouse , ak@linux.intel.com
Cc: Paul Turner , LKML , Linus Torvalds , Greg Kroah-Hartman ,
 Tim Chen , Dave Hansen , tglx@linutronix.de, Kees Cook ,
 Rik van Riel , Peter Zijlstra , Andy Lutomirski , Jiri Kosina ,
 gnomes@lxorguk.ukuu.org.uk
References: <20180104143710.8961-1-dwmw@amazon.co.uk>
 <20180104143710.8961-10-dwmw@amazon.co.uk>
From: Juergen Gross
Message-ID: <380dc816-1732-90dc-268e-4a8c3e7ccc7d@suse.com>
Date: Thu, 4 Jan 2018 16:02:06 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101
 Thunderbird/52.5.2
MIME-Version: 1.0
In-Reply-To: <20180104143710.8961-10-dwmw@amazon.co.uk>
Content-Type: text/plain; charset=utf-8
Content-Language: de-DE
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Return-Path:

On 04/01/18 15:37, David Woodhouse wrote:
> Convert pvops invocations to use non-speculative call sequences, when
> CONFIG_RETPOLINE is enabled.
> 
> There is scope for future optimisation here — once the pvops methods are
> actually set, we could just turn the damn things into *direct* jumps.
> But this is perfectly sufficient for now, without that added complexity.

I don't see the need to modify the pvops calls. All indirect calls are
replaced by either direct calls or other code long before any user code
is active. For modules the replacements are in place before the module
is being used.


Juergen

> 
> Signed-off-by: David Woodhouse 
> ---
>  arch/x86/include/asm/paravirt_types.h | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
> index 6ec54d01972d..54b735b8ae12 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -336,11 +336,17 @@ extern struct pv_lock_ops pv_lock_ops;
>  #define PARAVIRT_PATCH(x)					\
>  	(offsetof(struct paravirt_patch_template, x) / sizeof(void *))
>  
> +#define paravirt_clobber(clobber)		\
> +	[paravirt_clobber] "i" (clobber)
> +#ifdef CONFIG_RETPOLINE
> +#define paravirt_type(op)				\
> +	[paravirt_typenum] "i" (PARAVIRT_PATCH(op)),	\
> +	[paravirt_opptr] "r" ((op))
> +#else
>  #define paravirt_type(op)				\
>  	[paravirt_typenum] "i" (PARAVIRT_PATCH(op)),	\
>  	[paravirt_opptr] "i" (&(op))
> -#define paravirt_clobber(clobber)		\
> -	[paravirt_clobber] "i" (clobber)
> +#endif
>  
>  /*
>   * Generate some code, and mark it as patchable by the
> @@ -392,7 +398,11 @@ int paravirt_disable_iospace(void);
>   * offset into the paravirt_patch_template structure, and can therefore be
>   * freely converted back into a structure offset.
>   */
> +#ifdef CONFIG_RETPOLINE
> +#define PARAVIRT_CALL	"call __x86.indirect_thunk.%V[paravirt_opptr];"
> +#else
>  #define PARAVIRT_CALL	"call *%c[paravirt_opptr];"
> +#endif
>  
>  /*
>   * These macros are intended to wrap calls through one of the paravirt
> 
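
[Editor's note, for readers following the thread: with CONFIG_RETPOLINE the
quoted PARAVIRT_CALL variant lets the compiler pick a register for the ops
pointer via the "r" constraint, and %V prints its bare name, so the emitted
instruction looks like "call __x86.indirect_thunk.rax". The thunk bodies are
added elsewhere in this series; the following is only a rough sketch of the
standard retpoline sequence for the rax case, with illustrative label names
that are not taken from the patch:

	__x86.indirect_thunk.rax:
		call	2f		/* jump ahead; the pushed return address is 1: */
	1:	pause			/* speculative execution of the ret lands here and spins */
		lfence
		jmp	1b
	2:	mov	%rax, (%rsp)	/* overwrite the saved return address with the real target */
		ret			/* architectural path jumps to *%rax without an indirect branch */

The point of the construct is that the indirect branch is replaced by a ret
whose misprediction is captured by the pause/lfence loop.]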