From: Joerg Roedel
To: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin"
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Linus Torvalds, Andy Lutomirski, Dave Hansen, Josh Poimboeuf,
    Juergen Gross, Peter Zijlstra, Borislav Petkov, Jiri Kosina,
    Boris Ostrovsky, Brian Gerst, David Laight, Denys Vlasenko,
    Eduardo Valentin, Greg KH, Will Deacon, aliguori@amazon.com,
    daniel.gruss@iaik.tugraz.at, hughd@google.com, keescook@google.com,
    Andrea Arcangeli, Waiman Long, Pavel Machek, jroedel@suse.de,
    joro@8bytes.org
Subject: [PATCH 14/34] x86/entry/32: Add PTI cr3 switch to non-NMI entry/exit points
Date: Mon, 5 Mar 2018 11:25:43 +0100
Message-Id: <1520245563-8444-15-git-send-email-joro@8bytes.org>
In-Reply-To: <1520245563-8444-1-git-send-email-joro@8bytes.org>
References: <1520245563-8444-1-git-send-email-joro@8bytes.org>

From: Joerg Roedel

Add unconditional cr3 switches between user and kernel cr3 to all
non-NMI entry and exit points.

Signed-off-by: Joerg Roedel
---
 arch/x86/entry/entry_32.S | 91 +++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 88 insertions(+), 3 deletions(-)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 35379e5..8f78abc 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -328,6 +328,30 @@
 #endif /* CONFIG_X86_ESPFIX32 */
 .endm
 
+/* Unconditionally switch to user cr3 */
+.macro SWITCH_TO_USER_CR3 scratch_reg:req
+	ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_PTI
+
+	movl	%cr3, \scratch_reg
+	orl	$PTI_SWITCH_MASK, \scratch_reg
+	movl	\scratch_reg, %cr3
+.Lend_\@:
+.endm
+
+/* Unconditionally switch to kernel cr3 */
+.macro SWITCH_TO_KERNEL_CR3 scratch_reg:req
+	ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_PTI
+	movl	%cr3, \scratch_reg
+	/* Test if we are already on kernel CR3 */
+	testl	$PTI_SWITCH_MASK, \scratch_reg
+	jz	.Lend_\@
+	andl	$(~PTI_SWITCH_MASK), \scratch_reg
+	movl	\scratch_reg, %cr3
+	/* Return original CR3 in \scratch_reg */
+	orl	$PTI_SWITCH_MASK, \scratch_reg
+.Lend_\@:
+.endm
+
 
 /*
  * Called with pt_regs fully populated and kernel segments loaded,
@@ -341,11 +365,19 @@
  */
 #define CS_FROM_ENTRY_STACK	(1 << 31)
+#define CS_FROM_USER_CR3	(1 << 30)
 
 .macro SWITCH_TO_KERNEL_STACK
 
 	ALTERNATIVE     "", "jmp .Lend_\@", X86_FEATURE_XENPV
 
+	SWITCH_TO_KERNEL_CR3 scratch_reg=%eax
+
+	/*
+	 * %eax now contains the entry cr3 and we carry it forward in
+	 * that register for the time this macro runs
+	 */
+
 	/* Are we on the entry stack? Bail out if not! */
 	movl	PER_CPU_VAR(cpu_entry_area), %edi
 	addl	$CPU_ENTRY_AREA_entry_stack, %edi
@@ -408,7 +440,8 @@
 	 * but switch back to the entry-stack again when we approach
 	 * iret and return to the interrupted code-path. This usually
 	 * happens when we hit an exception while restoring user-space
-	 * segment registers on the way back to user-space.
+	 * segment registers on the way back to user-space or when the
+	 * sysenter handler runs with eflags.tf set.
 	 *
 	 * When we switch to the task-stack here, we can't trust the
 	 * contents of the entry-stack anymore, as the exception handler
@@ -425,6 +458,7 @@
 	 *
 	 * %esi: Entry-Stack pointer (same as %esp)
 	 * %edi: Top of the task stack
+	 * %eax: CR3 on kernel entry
 	 */
 
 	/* Calculate number of bytes on the entry stack in %ecx */
@@ -441,6 +475,14 @@
 	orl	$CS_FROM_ENTRY_STACK, PT_CS(%esp)
 
 	/*
+	 * Test the cr3 used to enter the kernel and add a marker
+	 * so that we can switch back to it before iret.
+	 */
+	testl	$PTI_SWITCH_MASK, %eax
+	jz	.Lcopy_pt_regs_\@
+	orl	$CS_FROM_USER_CR3, PT_CS(%esp)
+
+	/*
 	 * %esi and %edi are unchanged, %ecx contains the number of
 	 * bytes to copy. The code at .Lcopy_pt_regs_\@ will allocate
 	 * the stack-frame on task-stack and copy everything over
@@ -506,7 +548,7 @@
 
 /*
  * This macro handles the case when we return to kernel-mode on the iret
- * path and have to switch back to the entry stack.
+ * path and have to switch back to the entry stack and/or user-cr3.
  *
  * See the comments below the .Lentry_from_kernel_\@ label in the
  * SWITCH_TO_KERNEL_STACK macro for more details.
@@ -552,6 +594,18 @@
 	/* Safe to switch to entry-stack now */
 	movl	%ebx, %esp
 
+	/*
+	 * We came from entry-stack and need to check if we also need to
+	 * switch back to user cr3.
+	 */
+	testl	$CS_FROM_USER_CR3, PT_CS(%esp)
+	jz	.Lend_\@
+
+	/* Clear marker from stack-frame */
+	andl	$(~CS_FROM_USER_CR3), PT_CS(%esp)
+
+	SWITCH_TO_USER_CR3 scratch_reg=%eax
+
 .Lend_\@:
 .endm
 
 /*
@@ -746,6 +800,18 @@ ENTRY(xen_sysenter_target)
  * 0(%ebp) arg6
  */
 ENTRY(entry_SYSENTER_32)
+	/*
+	 * On entry-stack with all userspace-regs live - save and
+	 * restore eflags and %eax to use it as scratch-reg for the cr3
+	 * switch.
+	 */
+	pushfl
+	pushl	%eax
+	SWITCH_TO_KERNEL_CR3 scratch_reg=%eax
+	popl	%eax
+	popfl
+
+	/* Stack empty again, switch to task stack */
 	movl	TSS_entry_stack(%esp), %esp
 
 .Lsysenter_past_esp:
@@ -826,6 +892,9 @@ ENTRY(entry_SYSENTER_32)
 	/* Switch to entry stack */
 	movl	%eax, %esp
 
+	/* Now ready to switch the cr3 */
+	SWITCH_TO_USER_CR3 scratch_reg=%eax
+
 	/*
 	 * Restore all flags except IF. (We restore IF separately because
 	 * STI gives a one-instruction window in which we won't be interrupted,
@@ -906,7 +975,23 @@ restore_all:
 .Lrestore_all_notrace:
 	CHECK_AND_APPLY_ESPFIX
 .Lrestore_nocheck:
-	RESTORE_REGS 4				# skip orig_eax/error_code
+	/*
+	 * First restore user segments. This can cause exceptions, so we
+	 * run it with kernel cr3.
+	 */
+	RESTORE_SEGMENTS
+
+	/*
+	 * Segments are restored - no more exceptions from here on except
+	 * on iret, but that is handled safely.
+	 */
+	SWITCH_TO_USER_CR3 scratch_reg=%eax
+
+	/* Restore rest */
+	RESTORE_INT_REGS
+
+	/* Unwind stack to the iret frame */
+	RESTORE_SKIP_SEGMENTS 4		# skip orig_eax/error_code
 
 .Lirq_return:
 	INTERRUPT_RETURN
-- 
2.7.4
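
A note for readers following along: stripped of the ALTERNATIVE plumbing, the
two new macros only flip one bit in CR3. Below is a minimal, standalone C
sketch of that idea - it is not kernel code, the function names are made up
for illustration, and the PTI_SWITCH_MASK value mirrors the x86-64 convention
of kernel and user page-table roots sitting in adjacent pages (bit 12); the
real 32-bit value comes from the kernel's PTI definitions.

/*
 * Standalone sketch of the cr3 switch logic above - not kernel code.
 * PTI_SWITCH_MASK is assumed to be the single CR3 bit that selects the
 * user copy of the page tables; 0x1000 is only an illustrative value.
 */
#include <stdio.h>

#define PTI_SWITCH_MASK 0x1000u		/* illustrative value */

/* What SWITCH_TO_USER_CR3 does: unconditionally set the user bit. */
static unsigned int switch_to_user_cr3(unsigned int cr3)
{
	return cr3 | PTI_SWITCH_MASK;
}

/*
 * What SWITCH_TO_KERNEL_CR3 does: clear the user bit, but remember the
 * cr3 we entered with (the asm macro hands it back in the scratch
 * register so callers can tell where the entry came from).
 */
static unsigned int switch_to_kernel_cr3(unsigned int cr3,
					 unsigned int *entry_cr3)
{
	*entry_cr3 = cr3;
	return cr3 & ~PTI_SWITCH_MASK;
}

int main(void)
{
	unsigned int entry_cr3;
	unsigned int cr3 = 0x12345000u | PTI_SWITCH_MASK; /* fake user cr3 */

	cr3 = switch_to_kernel_cr3(cr3, &entry_cr3);
	printf("kernel cr3 %#x (entered with %#x)\n", cr3, entry_cr3);

	cr3 = switch_to_user_cr3(cr3);
	printf("user cr3   %#x\n", cr3);
	return 0;
}

The ALTERNATIVE line in the asm macros turns the whole body into a jump-over
when X86_FEATURE_PTI is not set, so kernels running without PTI skip the
switch entirely.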
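The exit path can only know whether to switch back if the entry path leaves a
breadcrumb, which is what the new CS_FROM_USER_CR3 bit is for: the saved CS
slot in pt_regs only needs the low 16 bits for the selector, so the entry code
parks a flag in the unused high bits and the iret path tests and clears it.
Here is a trimmed-down C sketch of that round trip, with a made-up struct
standing in for PT_CS(%esp); the real code only looks at this bit after it has
already handled the CS_FROM_ENTRY_STACK marker.

/*
 * Sketch of the CS_FROM_USER_CR3 marker round trip - not kernel code.
 * The saved CS selector only occupies the low 16 bits of its pt_regs
 * slot, so the entry code can borrow high bits for bookkeeping.
 */
#include <stdio.h>

#define CS_FROM_ENTRY_STACK	(1u << 31)	/* as in the patch */
#define CS_FROM_USER_CR3	(1u << 30)	/* as in the patch */

struct fake_regs {
	unsigned int cs;	/* stands in for PT_CS(%esp) */
};

/* Entry: remember that iret has to switch back to the user cr3. */
static void mark_from_user_cr3(struct fake_regs *regs, int entry_cr3_was_user)
{
	if (entry_cr3_was_user)
		regs->cs |= CS_FROM_USER_CR3;
}

/* Exit: test the marker, clear it, and report whether to switch cr3. */
static int need_user_cr3_switch(struct fake_regs *regs)
{
	int need = (regs->cs & CS_FROM_USER_CR3) != 0;

	regs->cs &= ~CS_FROM_USER_CR3;	/* clear marker from stack frame */
	return need;
}

int main(void)
{
	struct fake_regs regs = { .cs = 0x73 };	/* placeholder selector */

	mark_from_user_cr3(&regs, 1);
	printf("marked cs = %#x\n", regs.cs);
	printf("switch to user cr3 on exit: %d (cs back to %#x)\n",
	       need_user_cr3_switch(&regs), regs.cs);
	return 0;
}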