From: Joerg Roedel
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Linus Torvalds, Andy Lutomirski, Dave Hansen, Josh Poimboeuf,
    Juergen Gross, Peter Zijlstra, Borislav Petkov, Jiri Kosina,
    Boris Ostrovsky, Brian Gerst, David Laight, Denys Vlasenko,
    Eduardo Valentin, Greg KH, Will Deacon, aliguori@amazon.com,
    daniel.gruss@iaik.tugraz.at, hughd@google.com, keescook@google.com,
    Andrea Arcangeli, Waiman Long, Pavel Machek, "David H. Gutteridge",
    jroedel@suse.de, joro@8bytes.org
Subject: [PATCH 04/39] x86/entry/32: Put ESPFIX code into a macro
Date: Wed, 11 Jul 2018 13:29:11 +0200
Message-Id: <1531308586-29340-5-git-send-email-joro@8bytes.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1531308586-29340-1-git-send-email-joro@8bytes.org>
References: <1531308586-29340-1-git-send-email-joro@8bytes.org>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Joerg Roedel

This makes it easier to split up the shared iret code path.

Signed-off-by: Joerg Roedel
---
 arch/x86/entry/entry_32.S | 97 ++++++++++++++++++++++++-----------------------
 1 file changed, 49 insertions(+), 48 deletions(-)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 39fdda3..d35a69a 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -221,6 +221,54 @@ POP_GS_EX
 .endm
 
+.macro CHECK_AND_APPLY_ESPFIX
+#ifdef CONFIG_X86_ESPFIX32
+#define GDT_ESPFIX_SS PER_CPU_VAR(gdt_page) + (GDT_ENTRY_ESPFIX_SS * 8)
+
+	ALTERNATIVE	"jmp .Lend_\@", "", X86_BUG_ESPFIX
+
+	movl	PT_EFLAGS(%esp), %eax		# mix EFLAGS, SS and CS
+	/*
+	 * Warning: PT_OLDSS(%esp) contains the wrong/random values if we
+	 * are returning to the kernel.
+	 * See comments in process.c:copy_thread() for details.
+	 */
+	movb	PT_OLDSS(%esp), %ah
+	movb	PT_CS(%esp), %al
+	andl	$(X86_EFLAGS_VM | (SEGMENT_TI_MASK << 8) | SEGMENT_RPL_MASK), %eax
+	cmpl	$((SEGMENT_LDT << 8) | USER_RPL), %eax
+	jne	.Lend_\@			# returning to user-space with LDT SS
+
+	/*
+	 * Setup and switch to ESPFIX stack
+	 *
+	 * We're returning to userspace with a 16 bit stack. The CPU will not
+	 * restore the high word of ESP for us on executing iret... This is an
+	 * "official" bug of all the x86-compatible CPUs, which we can work
+	 * around to make dosemu and wine happy. We do this by preloading the
+	 * high word of ESP with the high word of the userspace ESP while
+	 * compensating for the offset by changing to the ESPFIX segment with
+	 * a base address that matches for the difference.
+	 */
+	mov	%esp, %edx			/* load kernel esp */
+	mov	PT_OLDESP(%esp), %eax		/* load userspace esp */
+	mov	%dx, %ax			/* eax: new kernel esp */
+	sub	%eax, %edx			/* offset (low word is 0) */
+	shr	$16, %edx
+	mov	%dl, GDT_ESPFIX_SS + 4		/* bits 16..23 */
+	mov	%dh, GDT_ESPFIX_SS + 7		/* bits 24..31 */
+	pushl	$__ESPFIX_SS
+	pushl	%eax				/* new kernel esp */
+	/*
+	 * Disable interrupts, but do not irqtrace this section: we
+	 * will soon execute iret and the tracer was already set to
+	 * the irqstate after the IRET:
+	 */
+	DISABLE_INTERRUPTS(CLBR_ANY)
+	lss	(%esp), %esp			/* switch to espfix segment */
+.Lend_\@:
+#endif /* CONFIG_X86_ESPFIX32 */
+.endm
 
 /*
  * %eax: prev task
  * %edx: next task
 */
@@ -547,21 +595,7 @@ ENTRY(entry_INT80_32)
 restore_all:
 	TRACE_IRQS_IRET
 .Lrestore_all_notrace:
-#ifdef CONFIG_X86_ESPFIX32
-	ALTERNATIVE	"jmp .Lrestore_nocheck", "", X86_BUG_ESPFIX
-
-	movl	PT_EFLAGS(%esp), %eax		# mix EFLAGS, SS and CS
-	/*
-	 * Warning: PT_OLDSS(%esp) contains the wrong/random values if we
-	 * are returning to the kernel.
-	 * See comments in process.c:copy_thread() for details.
-	 */
-	movb	PT_OLDSS(%esp), %ah
-	movb	PT_CS(%esp), %al
-	andl	$(X86_EFLAGS_VM | (SEGMENT_TI_MASK << 8) | SEGMENT_RPL_MASK), %eax
-	cmpl	$((SEGMENT_LDT << 8) | USER_RPL), %eax
-	je	.Lldt_ss			# returning to user-space with LDT SS
-#endif
+	CHECK_AND_APPLY_ESPFIX
 .Lrestore_nocheck:
 	RESTORE_REGS 4				# skip orig_eax/error_code
 .Lirq_return:
@@ -579,39 +613,6 @@ ENTRY(iret_exc )
 	jmp	common_exception
 .previous
 	_ASM_EXTABLE(.Lirq_return, iret_exc)
-
-#ifdef CONFIG_X86_ESPFIX32
-.Lldt_ss:
-/*
- * Setup and switch to ESPFIX stack
- *
- * We're returning to userspace with a 16 bit stack. The CPU will not
- * restore the high word of ESP for us on executing iret... This is an
- * "official" bug of all the x86-compatible CPUs, which we can work
- * around to make dosemu and wine happy. We do this by preloading the
- * high word of ESP with the high word of the userspace ESP while
- * compensating for the offset by changing to the ESPFIX segment with
- * a base address that matches for the difference.
- */
-#define GDT_ESPFIX_SS PER_CPU_VAR(gdt_page) + (GDT_ENTRY_ESPFIX_SS * 8)
-	mov	%esp, %edx			/* load kernel esp */
-	mov	PT_OLDESP(%esp), %eax		/* load userspace esp */
-	mov	%dx, %ax			/* eax: new kernel esp */
-	sub	%eax, %edx			/* offset (low word is 0) */
-	shr	$16, %edx
-	mov	%dl, GDT_ESPFIX_SS + 4		/* bits 16..23 */
-	mov	%dh, GDT_ESPFIX_SS + 7		/* bits 24..31 */
-	pushl	$__ESPFIX_SS
-	pushl	%eax				/* new kernel esp */
-	/*
-	 * Disable interrupts, but do not irqtrace this section: we
-	 * will soon execute iret and the tracer was already set to
-	 * the irqstate after the IRET:
-	 */
-	DISABLE_INTERRUPTS(CLBR_ANY)
-	lss	(%esp), %esp			/* switch to espfix segment */
-	jmp	.Lrestore_nocheck
-#endif
 ENDPROC(entry_INT80_32)
 
 .macro FIXUP_ESPFIX_STACK
-- 
2.7.4