From: Andi Kleen
To: tglx@linutronix.de
Cc: x86@kernel.org, dwmw@amazon.co.uk, linux-kernel@vger.kernel.org,
    pjt@google.com, torvalds@linux-foundation.org,
    gregkh@linux-foundation.org, peterz@infradead.org,
    luto@amacapital.net, thomas.lendacky@amd.com,
    arjan.van.de.ven@intel.com, Andi Kleen
Subject: [PATCH 2/4] x86/retpoline: Avoid return buffer underflows on context switch
Date: Fri, 12 Jan 2018 10:45:48 -0800
Message-Id: <20180112184550.6573-3-andi@firstfloor.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180112184550.6573-1-andi@firstfloor.org>
References: <20180112184550.6573-1-andi@firstfloor.org>

From: Andi Kleen

CPUs have return buffers which store the return address for RET
to predict function returns. Some CPUs (Skylake, some Broadwells)
can fall back to indirect branch prediction on return buffer
underflow.

With retpoline we want to avoid uncontrolled indirect branches,
which could be poisoned by ring 3, so we need to avoid
uncontrolled return buffer underflows in the kernel.

This can happen when we context switch from a shallower to a
deeper kernel stack. The deeper kernel stack would eventually
underflow the return buffer, which would then fall back to the
indirect branch predictor. The other thread could be running a
system call triggered by an attacker too, so the context switch
would help the attacked thread fall back to an uncontrolled
indirect branch, which would then use the values passed in by
the attacker.

To guard against this, fill the return buffer with controlled
content during context switch. This prevents any underflows.

This is only enabled on Skylake.

Signed-off-by: Andi Kleen
---
 arch/x86/entry/entry_32.S | 14 ++++++++++++++
 arch/x86/entry/entry_64.S | 14 ++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index a1f28a54f23a..bbecb7c2f6cb 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -250,6 +250,20 @@ ENTRY(__switch_to_asm)
 	popl	%ebx
 	popl	%ebp
 
+	/*
+	 * When we switch from a shallower to a deeper call stack,
+	 * the return buffer can underflow in the kernel in the next task.
+	 * This could cause the CPU to fall back to indirect branch
+	 * prediction, which may be poisoned.
+	 *
+	 * To guard against that, always fill the return buffer with
+	 * known values.
+	 *
+	 * We do this in assembler because it needs to happen before
+	 * any calls on the new stack, and this can be difficult to
+	 * ensure in a complex C function like __switch_to.
+	 */
+	FILL_RETURN_BUFFER %ecx, RSB_FILL_LOOPS, X86_FEATURE_RETURN_UNDERFLOW
 
 	jmp	__switch_to
 END(__switch_to_asm)
 
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 59874bc1aed2..3caac129cd07 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -495,6 +495,20 @@ ENTRY(__switch_to_asm)
 	popq	%rbx
 	popq	%rbp
 
+	/*
+	 * When we switch from a shallower to a deeper call stack,
+	 * the return buffer can underflow in the kernel in the next task.
+	 * This could cause the CPU to fall back to indirect branch
+	 * prediction, which may be poisoned.
+	 *
+	 * To guard against that, always fill the return buffer with
+	 * known values.
+	 *
+	 * We do this in assembler because it needs to happen before
+	 * any calls on the new stack, and this can be difficult to
+	 * ensure in a complex C function like __switch_to.
+	 */
+	FILL_RETURN_BUFFER %r8, RSB_FILL_LOOPS, X86_FEATURE_RETURN_UNDERFLOW
 
 	jmp	__switch_to
 END(__switch_to_asm)
-- 
2.14.3
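
[Aside for readers following along: FILL_RETURN_BUFFER is provided by
the retpoline infrastructure in asm/nospec-branch.h. As a rough sketch
of the underlying technique -- not the actual macro, and the STUFF_RSB
name below is made up for illustration -- the return stack buffer is
stuffed by a loop of CALLs whose return sites are harmless speculation
traps:

	/*
	 * Sketch: stuff \nr entries into the return stack buffer.
	 * Each CALL pushes an RSB entry; we never RET through them,
	 * so any speculative return lands in a pause/lfence trap loop
	 * instead of an attacker-controlled target.
	 */
	.macro STUFF_RSB reg:req nr:req
		mov	$(\nr / 2), \reg	/* two calls per iteration */
	771:
		call	772f			/* push 1st RSB entry */
	773:	pause				/* speculation trap */
		lfence
		jmp	773b
	772:
		call	774f			/* push 2nd RSB entry */
	775:	pause				/* speculation trap */
		lfence
		jmp	775b
	774:
		dec	\reg
		jnz	771b
	.endm

Since the CALLs also push \nr real return addresses onto the software
stack, the caller of such a macro has to pop them again before the next
RET, e.g. with add $(BITS_PER_LONG/8) * \nr, %rsp on 64-bit.]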