From: Joerg Roedel
To: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin"
Peter Anvin" Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Linus Torvalds , Andy Lutomirski , Dave Hansen , Josh Poimboeuf , Juergen Gross , Peter Zijlstra , Borislav Petkov , Jiri Kosina , Boris Ostrovsky , Brian Gerst , David Laight , Denys Vlasenko , Eduardo Valentin , Greg KH , Will Deacon , aliguori@amazon.com, daniel.gruss@iaik.tugraz.at, hughd@google.com, keescook@google.com, Andrea Arcangeli , Waiman Long , Pavel Machek , "David H . Gutteridge" , jroedel@suse.de, joro@8bytes.org Subject: [PATCH 04/35] x86/entry/32: Put ESPFIX code into a macro Date: Mon, 16 Apr 2018 17:24:52 +0200 Message-Id: <1523892323-14741-5-git-send-email-joro@8bytes.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1523892323-14741-1-git-send-email-joro@8bytes.org> References: <1523892323-14741-1-git-send-email-joro@8bytes.org> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org From: Joerg Roedel This makes it easier to split up the shared iret code path. Signed-off-by: Joerg Roedel --- arch/x86/entry/entry_32.S | 97 ++++++++++++++++++++++++----------------------- 1 file changed, 49 insertions(+), 48 deletions(-) diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S index ec288be..118420b 100644 --- a/arch/x86/entry/entry_32.S +++ b/arch/x86/entry/entry_32.S @@ -221,6 +221,54 @@ POP_GS_EX .endm +.macro CHECK_AND_APPLY_ESPFIX +#ifdef CONFIG_X86_ESPFIX32 +#define GDT_ESPFIX_SS PER_CPU_VAR(gdt_page) + (GDT_ENTRY_ESPFIX_SS * 8) + + ALTERNATIVE "jmp .Lend_\@", "", X86_BUG_ESPFIX + + movl PT_EFLAGS(%esp), %eax # mix EFLAGS, SS and CS + /* + * Warning: PT_OLDSS(%esp) contains the wrong/random values if we + * are returning to the kernel. + * See comments in process.c:copy_thread() for details. + */ + movb PT_OLDSS(%esp), %ah + movb PT_CS(%esp), %al + andl $(X86_EFLAGS_VM | (SEGMENT_TI_MASK << 8) | SEGMENT_RPL_MASK), %eax + cmpl $((SEGMENT_LDT << 8) | USER_RPL), %eax + jne .Lend_\@ # returning to user-space with LDT SS + + /* + * Setup and switch to ESPFIX stack + * + * We're returning to userspace with a 16 bit stack. The CPU will not + * restore the high word of ESP for us on executing iret... This is an + * "official" bug of all the x86-compatible CPUs, which we can work + * around to make dosemu and wine happy. We do this by preloading the + * high word of ESP with the high word of the userspace ESP while + * compensating for the offset by changing to the ESPFIX segment with + * a base address that matches for the difference. 
+	 */
+	mov	%esp, %edx			/* load kernel esp */
+	mov	PT_OLDESP(%esp), %eax		/* load userspace esp */
+	mov	%dx, %ax			/* eax: new kernel esp */
+	sub	%eax, %edx			/* offset (low word is 0) */
+	shr	$16, %edx
+	mov	%dl, GDT_ESPFIX_SS + 4		/* bits 16..23 */
+	mov	%dh, GDT_ESPFIX_SS + 7		/* bits 24..31 */
+	pushl	$__ESPFIX_SS
+	pushl	%eax				/* new kernel esp */
+	/*
+	 * Disable interrupts, but do not irqtrace this section: we
+	 * will soon execute iret and the tracer was already set to
+	 * the irqstate after the IRET:
+	 */
+	DISABLE_INTERRUPTS(CLBR_ANY)
+	lss	(%esp), %esp			/* switch to espfix segment */
+.Lend_\@:
+#endif /* CONFIG_X86_ESPFIX32 */
+.endm
 /*
  * %eax: prev task
  * %edx: next task
  */
@@ -547,21 +595,7 @@ ENTRY(entry_INT80_32)
 restore_all:
 	TRACE_IRQS_IRET
 .Lrestore_all_notrace:
-#ifdef CONFIG_X86_ESPFIX32
-	ALTERNATIVE	"jmp .Lrestore_nocheck", "", X86_BUG_ESPFIX
-
-	movl	PT_EFLAGS(%esp), %eax		# mix EFLAGS, SS and CS
-	/*
-	 * Warning: PT_OLDSS(%esp) contains the wrong/random values if we
-	 * are returning to the kernel.
-	 * See comments in process.c:copy_thread() for details.
-	 */
-	movb	PT_OLDSS(%esp), %ah
-	movb	PT_CS(%esp), %al
-	andl	$(X86_EFLAGS_VM | (SEGMENT_TI_MASK << 8) | SEGMENT_RPL_MASK), %eax
-	cmpl	$((SEGMENT_LDT << 8) | USER_RPL), %eax
-	je	.Lldt_ss			# returning to user-space with LDT SS
-#endif
+	CHECK_AND_APPLY_ESPFIX
 .Lrestore_nocheck:
 	RESTORE_REGS 4				# skip orig_eax/error_code
 .Lirq_return:
@@ -579,39 +613,6 @@ ENTRY(iret_exc )
 	jmp	common_exception
 .previous
 	_ASM_EXTABLE(.Lirq_return, iret_exc)
-
-#ifdef CONFIG_X86_ESPFIX32
-.Lldt_ss:
-/*
- * Setup and switch to ESPFIX stack
- *
- * We're returning to userspace with a 16 bit stack. The CPU will not
- * restore the high word of ESP for us on executing iret... This is an
- * "official" bug of all the x86-compatible CPUs, which we can work
- * around to make dosemu and wine happy. We do this by preloading the
- * high word of ESP with the high word of the userspace ESP while
- * compensating for the offset by changing to the ESPFIX segment with
- * a base address that matches for the difference.
- */
-#define GDT_ESPFIX_SS PER_CPU_VAR(gdt_page) + (GDT_ENTRY_ESPFIX_SS * 8)
-	mov	%esp, %edx			/* load kernel esp */
-	mov	PT_OLDESP(%esp), %eax		/* load userspace esp */
-	mov	%dx, %ax			/* eax: new kernel esp */
-	sub	%eax, %edx			/* offset (low word is 0) */
-	shr	$16, %edx
-	mov	%dl, GDT_ESPFIX_SS + 4		/* bits 16..23 */
-	mov	%dh, GDT_ESPFIX_SS + 7		/* bits 24..31 */
-	pushl	$__ESPFIX_SS
-	pushl	%eax				/* new kernel esp */
-	/*
-	 * Disable interrupts, but do not irqtrace this section: we
-	 * will soon execute iret and the tracer was already set to
-	 * the irqstate after the IRET:
-	 */
-	DISABLE_INTERRUPTS(CLBR_ANY)
-	lss	(%esp), %esp			/* switch to espfix segment */
-	jmp	.Lrestore_nocheck
-#endif
 ENDPROC(entry_INT80_32)
 
 .macro FIXUP_ESPFIX_STACK
-- 
2.7.4
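
As background for the check that CHECK_AND_APPLY_ESPFIX performs before it
switches stacks, here is a minimal C sketch of the same decision: the espfix
path is only taken when the CPU is not in vm86 mode, the saved user SS
selector points into the LDT, and the return is to ring 3. The struct,
constants and function below are invented for illustration and are not the
kernel's pt_regs or segment definitions.

#include <stdbool.h>
#include <stdint.h>

#define EFLAGS_VM    (1u << 17)  /* virtual-8086 mode flag in EFLAGS      */
#define SEG_TI_LDT   (1u << 2)   /* selector Table Indicator bit: 1 = LDT */
#define SEG_RPL_MASK 0x3u        /* requested privilege level bits        */
#define USER_RPL     0x3u        /* ring 3                                */

/* Stand-in for the saved iret frame fields that the check looks at. */
struct fake_iret_frame {
	uint32_t cs;      /* saved code segment selector  */
	uint32_t eflags;  /* saved EFLAGS                 */
	uint32_t ss;      /* saved stack segment selector */
};

/* Mirrors the movl/movb/andl/cmpl sequence at the top of the macro. */
static bool needs_espfix(const struct fake_iret_frame *f)
{
	if (f->eflags & EFLAGS_VM)		/* vm86 return, handled elsewhere */
		return false;
	if (!(f->ss & SEG_TI_LDT))		/* SS comes from the GDT          */
		return false;
	return (f->cs & SEG_RPL_MASK) == USER_RPL;	/* returning to ring 3 */
}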
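
The arithmetic in the stack-switch half of the macro can be sketched in C as
well: the new ESP keeps the kernel stack's low word but takes the user
stack's high word, and the espfix segment base is set to the difference so
that base plus the new ESP still points at the kernel stack. Again, gdt[],
GDT_ENTRY_ESPFIX_SS and espfix_prepare() are made-up names for illustration,
not the kernel's implementation.

#include <stdint.h>

#define GDT_ENTRY_ESPFIX_SS 26		/* placeholder descriptor slot */

static uint8_t gdt[32][8];		/* toy GDT: 8 bytes per descriptor */

/* Mirrors the mov/sub/shr sequence that prepares the espfix stack switch. */
static uint32_t espfix_prepare(uint32_t kernel_esp, uint32_t user_esp)
{
	/* New ESP: user's high word, kernel's low word (the "mov %dx, %ax"). */
	uint32_t new_esp = (user_esp & 0xffff0000u) | (kernel_esp & 0x0000ffffu);

	/* Segment base compensates so that base + new_esp == kernel_esp. */
	uint32_t base = kernel_esp - new_esp;	/* low word is always 0 */

	/* Descriptor layout: base bits 16..23 in byte 4, bits 24..31 in byte 7. */
	gdt[GDT_ENTRY_ESPFIX_SS][4] = (uint8_t)(base >> 16);
	gdt[GDT_ENTRY_ESPFIX_SS][7] = (uint8_t)(base >> 24);

	/*
	 * The macro then pushes __ESPFIX_SS and new_esp and runs
	 * "lss (%esp), %esp": the linear stack address is unchanged, while
	 * the architectural ESP already carries the user's high word, which
	 * is what survives the 16-bit iret.
	 */
	return new_esp;
}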