Date: Mon, 19 Dec 2016 03:05:41 -0800
From: tip-bot for Andy Lutomirski
To: linux-tip-commits@vger.kernel.org
Cc: tglx@linutronix.de, jgross@suse.com, Xen-devel@lists.xen.org,
    tedheadster@gmail.com, peterz@infradead.org, brgerst@gmail.com,
    gnomes@lxorguk.ukuu.org.uk, bp@alien8.de, boris.ostrovsky@oracle.com,
    luto@kernel.org, andrew.cooper3@citrix.com, linux-kernel@vger.kernel.org,
    hmh@hmh.eng.br, hpa@zytor.com, mingo@kernel.org
In-Reply-To: <5c79f0225f68bc8c40335612bf624511abb78941.1481307769.git.luto@kernel.org>
References: <5c79f0225f68bc8c40335612bf624511abb78941.1481307769.git.luto@kernel.org>
Subject: [tip:x86/urgent] x86/asm: Rewrite sync_core() to use IRET-to-self
Git-Commit-ID: c198b121b1a1d7a7171770c634cd49191bac4477
X-Mailer: tip-git-log-daemon
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

Commit-ID:  c198b121b1a1d7a7171770c634cd49191bac4477
Gitweb:     http://git.kernel.org/tip/c198b121b1a1d7a7171770c634cd49191bac4477
Author:     Andy Lutomirski
AuthorDate: Fri, 9 Dec 2016 10:24:08 -0800
Committer:  Thomas Gleixner
CommitDate: Mon, 19 Dec 2016 11:54:21 +0100

x86/asm: Rewrite sync_core() to use IRET-to-self

Aside from being excessively slow, CPUID is problematic: Linux runs
on a handful of CPUs that don't have CPUID.  Use IRET-to-self
instead.  IRET-to-self works everywhere, so it makes testing easy.

For reference, on my laptop, IRET-to-self is ~110ns,
CPUID(eax=1, ecx=0) is ~83ns on native and very very slow under KVM,
and MOV-to-CR2 is ~42ns.

While we're at it: sync_core() serves a very specific purpose.
Document it.

Signed-off-by: Andy Lutomirski
Cc: Juergen Gross
Cc: One Thousand Gnomes
Cc: Peter Zijlstra
Cc: Brian Gerst
Cc: Matthew Whitehead
Cc: Borislav Petkov
Cc: Henrique de Moraes Holschuh
Cc: Andrew Cooper
Cc: Boris Ostrovsky
Cc: xen-devel
Link: http://lkml.kernel.org/r/5c79f0225f68bc8c40335612bf624511abb78941.1481307769.git.luto@kernel.org
Signed-off-by: Thomas Gleixner
---
 arch/x86/include/asm/processor.h | 80 +++++++++++++++++++++++++++++-----------
 1 file changed, 58 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index b934871..eaf1005 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -602,33 +602,69 @@ static __always_inline void cpu_relax(void)
 	rep_nop();
 }
 
-/* Stop speculative execution and prefetching of modified code. */
+/*
+ * This function forces the icache and prefetched instruction stream to
+ * catch up with reality in two very specific cases:
+ *
+ * a) Text was modified using one virtual address and is about to be executed
+ *    from the same physical page at a different virtual address.
+ *
+ * b) Text was modified on a different CPU, may subsequently be
+ *    executed on this CPU, and you want to make sure the new version
+ *    gets executed.  This generally means you're calling this in an IPI.
+ *
+ * If you're calling this for a different reason, you're probably doing
+ * it wrong.
+ */
 static inline void sync_core(void)
 {
-	int tmp;
-
-#ifdef CONFIG_X86_32
 	/*
-	 * Do a CPUID if available, otherwise do a jump.  The jump
-	 * can conveniently enough be the jump around CPUID.
+	 * There are quite a few ways to do this.  IRET-to-self is nice
+	 * because it works on every CPU, at any CPL (so it's compatible
+	 * with paravirtualization), and it never exits to a hypervisor.
+	 * The only down sides are that it's a bit slow (it seems to be
+	 * a bit more than 2x slower than the fastest options) and that
+	 * it unmasks NMIs.  The "push %cs" is needed because, in
+	 * paravirtual environments, __KERNEL_CS may not be a valid CS
+	 * value when we do IRET directly.
+	 *
+	 * In case NMI unmasking or performance ever becomes a problem,
+	 * the next best option appears to be MOV-to-CR2 and an
+	 * unconditional jump.  That sequence also works on all CPUs,
+	 * but it will fault at CPL3 (i.e. Xen PV and lguest).
+	 *
+	 * CPUID is the conventional way, but it's nasty: it doesn't
+	 * exist on some 486-like CPUs, and it usually exits to a
+	 * hypervisor.
+	 *
+	 * Like all of Linux's memory ordering operations, this is a
+	 * compiler barrier as well.
 	 */
-	asm volatile("cmpl %2,%1\n\t"
-		     "jl 1f\n\t"
-		     "cpuid\n"
-		     "1:"
-		     : "=a" (tmp)
-		     : "rm" (boot_cpu_data.cpuid_level), "ri" (0), "0" (1)
-		     : "ebx", "ecx", "edx", "memory");
+	register void *__sp asm(_ASM_SP);
+
+#ifdef CONFIG_X86_32
+	asm volatile (
+		"pushfl\n\t"
+		"pushl %%cs\n\t"
+		"pushl $1f\n\t"
+		"iret\n\t"
+		"1:"
+		: "+r" (__sp) : : "memory");
 #else
-	/*
-	 * CPUID is a barrier to speculative execution.
-	 * Prefetched instructions are automatically
-	 * invalidated when modified.
-	 */
-	asm volatile("cpuid"
-		     : "=a" (tmp)
-		     : "0" (1)
-		     : "ebx", "ecx", "edx", "memory");
+	unsigned int tmp;
+
+	asm volatile (
+		"mov %%ss, %0\n\t"
+		"pushq %q0\n\t"
+		"pushq %%rsp\n\t"
+		"addq $8, (%%rsp)\n\t"
+		"pushfq\n\t"
+		"mov %%cs, %0\n\t"
+		"pushq %q0\n\t"
+		"pushq $1f\n\t"
+		"iretq\n\t"
+		"1:"
+		: "=&r" (tmp), "+r" (__sp) : : "cc", "memory");
 #endif
 }