From: Joerg Roedel
To: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin"
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Linus Torvalds, Andy Lutomirski, Dave Hansen, Josh Poimboeuf,
    Juergen Gross, Peter Zijlstra, Borislav Petkov, Jiri Kosina,
    Boris Ostrovsky, Brian Gerst, David Laight, Denys Vlasenko,
    Eduardo Valentin, Greg KH, Will Deacon, aliguori@amazon.com,
    daniel.gruss@iaik.tugraz.at, hughd@google.com, keescook@google.com,
    Andrea Arcangeli, Waiman Long, Pavel Machek, "David H. Gutteridge",
    jroedel@suse.de, joro@8bytes.org
Subject: [PATCH 04/39] x86/entry/32: Put ESPFIX code into a macro
Date: Wed, 18 Jul 2018 11:40:41 +0200
Message-Id: <1531906876-13451-5-git-send-email-joro@8bytes.org>
In-Reply-To: <1531906876-13451-1-git-send-email-joro@8bytes.org>
References: <1531906876-13451-1-git-send-email-joro@8bytes.org>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Joerg Roedel

This makes it easier to split up the shared iret code path.

Signed-off-by: Joerg Roedel
---
 arch/x86/entry/entry_32.S | 97 ++++++++++++++++++++++++-----------------------
 1 file changed, 49 insertions(+), 48 deletions(-)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 39f711a..ef7d653 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -221,6 +221,54 @@
 	POP_GS_EX
 .endm
 
+.macro CHECK_AND_APPLY_ESPFIX
+#ifdef CONFIG_X86_ESPFIX32
+#define GDT_ESPFIX_SS PER_CPU_VAR(gdt_page) + (GDT_ENTRY_ESPFIX_SS * 8)
+
+	ALTERNATIVE	"jmp .Lend_\@", "", X86_BUG_ESPFIX
+
+	movl	PT_EFLAGS(%esp), %eax		# mix EFLAGS, SS and CS
+	/*
+	 * Warning: PT_OLDSS(%esp) contains the wrong/random values if we
+	 * are returning to the kernel.
+	 * See comments in process.c:copy_thread() for details.
+	 */
+	movb	PT_OLDSS(%esp), %ah
+	movb	PT_CS(%esp), %al
+	andl	$(X86_EFLAGS_VM | (SEGMENT_TI_MASK << 8) | SEGMENT_RPL_MASK), %eax
+	cmpl	$((SEGMENT_LDT << 8) | USER_RPL), %eax
+	jne	.Lend_\@	# returning to user-space with LDT SS
+
+	/*
+	 * Setup and switch to ESPFIX stack
+	 *
+	 * We're returning to userspace with a 16 bit stack. The CPU will not
+	 * restore the high word of ESP for us on executing iret... This is an
+	 * "official" bug of all the x86-compatible CPUs, which we can work
+	 * around to make dosemu and wine happy. We do this by preloading the
+	 * high word of ESP with the high word of the userspace ESP while
+	 * compensating for the offset by changing to the ESPFIX segment with
+	 * a base address that matches for the difference.
+	 */
+	mov	%esp, %edx			/* load kernel esp */
+	mov	PT_OLDESP(%esp), %eax		/* load userspace esp */
+	mov	%dx, %ax			/* eax: new kernel esp */
+	sub	%eax, %edx			/* offset (low word is 0) */
+	shr	$16, %edx
+	mov	%dl, GDT_ESPFIX_SS + 4		/* bits 16..23 */
+	mov	%dh, GDT_ESPFIX_SS + 7		/* bits 24..31 */
+	pushl	$__ESPFIX_SS
+	pushl	%eax				/* new kernel esp */
+	/*
+	 * Disable interrupts, but do not irqtrace this section: we
+	 * will soon execute iret and the tracer was already set to
+	 * the irqstate after the IRET:
+	 */
+	DISABLE_INTERRUPTS(CLBR_ANY)
+	lss	(%esp), %esp			/* switch to espfix segment */
+.Lend_\@:
+#endif /* CONFIG_X86_ESPFIX32 */
+.endm
 /*
  * %eax: prev task
  * %edx: next task
@@ -547,21 +595,7 @@ ENTRY(entry_INT80_32)
 restore_all:
 	TRACE_IRQS_IRET
 .Lrestore_all_notrace:
-#ifdef CONFIG_X86_ESPFIX32
-	ALTERNATIVE	"jmp .Lrestore_nocheck", "", X86_BUG_ESPFIX
-
-	movl	PT_EFLAGS(%esp), %eax		# mix EFLAGS, SS and CS
-	/*
-	 * Warning: PT_OLDSS(%esp) contains the wrong/random values if we
-	 * are returning to the kernel.
-	 * See comments in process.c:copy_thread() for details.
-	 */
-	movb	PT_OLDSS(%esp), %ah
-	movb	PT_CS(%esp), %al
-	andl	$(X86_EFLAGS_VM | (SEGMENT_TI_MASK << 8) | SEGMENT_RPL_MASK), %eax
-	cmpl	$((SEGMENT_LDT << 8) | USER_RPL), %eax
-	je	.Lldt_ss			# returning to user-space with LDT SS
-#endif
+	CHECK_AND_APPLY_ESPFIX
 .Lrestore_nocheck:
 	RESTORE_REGS 4				# skip orig_eax/error_code
 .Lirq_return:
@@ -579,39 +613,6 @@ ENTRY(iret_exc )
 	jmp	common_exception
 .previous
 	_ASM_EXTABLE(.Lirq_return, iret_exc)
-
-#ifdef CONFIG_X86_ESPFIX32
-.Lldt_ss:
-/*
- * Setup and switch to ESPFIX stack
- *
- * We're returning to userspace with a 16 bit stack. The CPU will not
- * restore the high word of ESP for us on executing iret... This is an
- * "official" bug of all the x86-compatible CPUs, which we can work
- * around to make dosemu and wine happy. We do this by preloading the
- * high word of ESP with the high word of the userspace ESP while
- * compensating for the offset by changing to the ESPFIX segment with
- * a base address that matches for the difference.
- */
-#define GDT_ESPFIX_SS PER_CPU_VAR(gdt_page) + (GDT_ENTRY_ESPFIX_SS * 8)
-	mov	%esp, %edx			/* load kernel esp */
-	mov	PT_OLDESP(%esp), %eax		/* load userspace esp */
-	mov	%dx, %ax			/* eax: new kernel esp */
-	sub	%eax, %edx			/* offset (low word is 0) */
-	shr	$16, %edx
-	mov	%dl, GDT_ESPFIX_SS + 4		/* bits 16..23 */
-	mov	%dh, GDT_ESPFIX_SS + 7		/* bits 24..31 */
-	pushl	$__ESPFIX_SS
-	pushl	%eax				/* new kernel esp */
-	/*
-	 * Disable interrupts, but do not irqtrace this section: we
-	 * will soon execute iret and the tracer was already set to
-	 * the irqstate after the IRET:
-	 */
-	DISABLE_INTERRUPTS(CLBR_ANY)
-	lss	(%esp), %esp			/* switch to espfix segment */
-	jmp	.Lrestore_nocheck
-#endif
 ENDPROC(entry_INT80_32)

 .macro FIXUP_ESPFIX_STACK
--
2.7.4
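
[Editor's note, not part of the patch: the register arithmetic in the ESPFIX block above can be sketched as plain integer math. This is a hedged illustration only; `espfix_switch` and its example values are invented for the sketch, and 32-bit registers are emulated with masks.]

```python
# Sketch of the arithmetic performed before "lss (%esp), %esp".
# Illustrative only; names and values are not from the kernel.

def espfix_switch(kernel_esp: int, user_esp: int):
    """Return (new_esp, segment_base) the way the assembly computes them."""
    edx = kernel_esp & 0xFFFFFFFF           # mov %esp, %edx
    eax = user_esp & 0xFFFFFFFF             # mov PT_OLDESP(%esp), %eax
    eax = (eax & 0xFFFF0000) | (edx & 0xFFFF)  # mov %dx, %ax
    edx = (edx - eax) & 0xFFFFFFFF          # sub %eax, %edx (low word is 0)
    # shr $16, %edx; %dl/%dh then go into the ESPFIX GDT descriptor as
    # base bits 16..23 and 24..31, i.e. the segment base becomes edx.
    return eax, edx

# After lss, base + ESP points at the real kernel stack, while the high
# word of ESP matches the userspace ESP (the part iret fails to restore).
new_esp, base = espfix_switch(kernel_esp=0xC1234568, user_esp=0x0009FFF0)
assert (base + new_esp) & 0xFFFFFFFF == 0xC1234568
assert new_esp >> 16 == 0x0009FFF0 >> 16
```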