From: Andy Lutomirski <luto@kernel.org>
To: x86@kernel.org, LKML
Cc: Linus Torvalds, Greg Kroah-Hartman, Alan Cox, Jann Horn,
    Samuel Neves, Dan Williams, Kernel Hardening, Borislav Petkov,
    Andy Lutomirski
Subject: [PATCH] x86/retpoline/entry: Disable the entire SYSCALL64 fast path with retpolines on
Date: Mon, 22 Jan 2018 10:04:29 -0800
Message-Id: <503224b776b9513885453756e44bab235221124e.1516644136.git.luto@kernel.org>
X-Mailer: git-send-email 2.13.6
X-Mailing-List: linux-kernel@vger.kernel.org

The existing retpoline code carefully and awkwardly retpolinifies the
SYSCALL64 slow path.  This stops the fast path from being particularly
fast, and it's IMO rather messy.

Instead, just bypass the fast path entirely if retpolines are on.  This
seems to be a speedup on a "minimal" retpoline kernel, mainly because
do_syscall_64() ends up calling the syscall handler without using a
slow retpoline thunk.

As an added benefit, we won't need to apply further Spectre mitigations
to the fast path.  The current fast path Spectre mitigations may have a
hole: if the syscall nr is out of bounds, it is plausible that the CPU
would mispredict the bounds check, load a bogus function pointer, and
speculatively execute it right through the retpoline.  If this is
indeed a problem, we need to fix it in the slow paths anyway, but with
this patch applied, we can at least leave the fast path alone.

Cleans-up: 2641f08bb7fc ("x86/retpoline/entry: Convert entry assembler indirect jumps")
Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/entry/entry_64.S | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 4f8e1d35a97c..b915bad58754 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -245,6 +245,9 @@ GLOBAL(entry_SYSCALL_64_after_hwframe)
 	 * If we need to do entry work or if we guess we'll need to do
 	 * exit work, go straight to the slow path.
 	 */
+#ifdef CONFIG_RETPOLINE
+	ALTERNATIVE "", "jmp entry_SYSCALL64_slow_path", X86_FEATURE_RETPOLINE
+#endif
 	movq	PER_CPU_VAR(current_task), %r11
 	testl	$_TIF_WORK_SYSCALL_ENTRY|_TIF_ALLWORK_MASK, TASK_TI_flags(%r11)
 	jnz	entry_SYSCALL64_slow_path
@@ -270,13 +273,11 @@ entry_SYSCALL_64_fastpath:
 	 * This call instruction is handled specially in stub_ptregs_64.
 	 * It might end up jumping to the slow path.  If it jumps, RAX
 	 * and all argument registers are clobbered.
+	 *
+	 * NB: no retpoline needed -- we don't execute this code with
+	 * retpolines enabled.
 	 */
-#ifdef CONFIG_RETPOLINE
-	movq	sys_call_table(, %rax, 8), %rax
-	call	__x86_indirect_thunk_rax
-#else
 	call	*sys_call_table(, %rax, 8)
-#endif
 .Lentry_SYSCALL_64_after_fastpath_call:
 
 	movq	%rax, RAX(%rsp)
@@ -431,6 +432,9 @@ ENTRY(stub_ptregs_64)
 	 * which we achieve by trying again on the slow path.  If we are on
 	 * the slow path, the extra regs are already saved.
 	 *
+	 * This code is unreachable (even via mispredicted conditional branches)
+	 * if we're using retpolines.
+	 *
 	 * RAX stores a pointer to the C function implementing the syscall.
 	 * IRQs are on.
 	 */
@@ -448,12 +452,19 @@ ENTRY(stub_ptregs_64)
 	jmp	entry_SYSCALL64_slow_path
 
 1:
-	JMP_NOSPEC %rax			/* Called from C */
+	jmp	*%rax			/* Called from C */
 END(stub_ptregs_64)
 
 .macro ptregs_stub func
 ENTRY(ptregs_\func)
 	UNWIND_HINT_FUNC
+#ifdef CONFIG_RETPOLINE
+	/*
+	 * If retpolines are enabled, we don't use the syscall fast path,
+	 * so just jump straight to the syscall body.
+	 */
+	ALTERNATIVE "", __stringify(jmp \func), X86_FEATURE_RETPOLINE
+#endif
 	leaq	\func(%rip), %rax
 	jmp	stub_ptregs_64
 END(ptregs_\func)
-- 
2.13.6
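
P.S. Not part of the patch: the fast-path hole described above is the
classic Spectre-v1 shape.  A minimal user-space C sketch of that
pattern, where every name is a hypothetical stand-in for the real
entry-code structures (this is not the kernel's dispatch code):

/*
 * Sketch of a bounds-checked indirect dispatch.  Architecturally this
 * is safe; the worry is purely speculative execution.
 */
typedef long (*call_ptr_t)(long, long, long);

#define NR_CALLS 3

static long op_add(long a, long b, long c) { return a + b + c; }
static long op_sub(long a, long b, long c) { return a - b - c; }
static long op_mul(long a, long b, long c) { return a * b * c; }

static const call_ptr_t call_table[NR_CALLS] = { op_add, op_sub, op_mul };

long dispatch(unsigned long nr, long a, long b, long c)
{
	/*
	 * If nr is attacker-controlled and out of bounds, the CPU may
	 * predict this branch as taken anyway, speculatively load a
	 * bogus function pointer from past the end of call_table, and
	 * begin executing through it before the misprediction
	 * resolves -- which is why a retpoline on the call alone may
	 * not be the whole story.
	 */
	if (nr < NR_CALLS)
		return call_table[nr](a, b, c);

	return -1;	/* stand-in for -ENOSYS */
}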