From: Andy Lutomirski
Date: Wed, 17 Jan 2018 10:10:23 -0800
Subject: Re: [PATCH 02/16] x86/entry/32: Enter the kernel via trampoline stack
To: Joerg Roedel
Cc: Andy Lutomirski, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin",
    X86 ML, LKML, linux-mm@kvack.org, Linus Torvalds, Dave Hansen,
    Josh Poimboeuf, Juergen Gross, Peter Zijlstra, Borislav Petkov,
    Jiri Kosina, Boris Ostrovsky, Brian Gerst, David Laight,
    Denys Vlasenko, Eduardo Valentin, Greg KH, Will Deacon,
    "Liguori, Anthony", Daniel Gruss, Hugh Dickins, Kees Cook,
    Andrea Arcangeli, Waiman Long, Joerg Roedel
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Jan 17, 2018 at 1:18 AM, Joerg Roedel wrote:
> On Tue, Jan 16, 2018 at 02:45:27PM -0800, Andy Lutomirski wrote:
>> On Tue, Jan 16, 2018 at 8:36 AM, Joerg Roedel wrote:
>> > +.macro SWITCH_TO_KERNEL_STACK nr_regs=0 check_user=0
>>
>> How about marking nr_regs with :req to force everyone to be explicit?
>
> Yeah, that's more readable, I'll change it.
>
>> > +	/*
>> > +	 * TSS_sysenter_stack is the offset from the bottom of the
>> > +	 * entry-stack
>> > +	 */
>> > +	movl	TSS_sysenter_stack + ((\nr_regs + 1) * 4)(%esp), %esp
>>
>> This is incomprehensible.  You're adding what appears to be the offset
>> of sysenter_stack within the TSS to something based on esp and
>> dereferencing that to get the new esp.  That's not actually what
>> you're doing, but please change asm-offsets.c (as in my previous
>> email) to avoid putting serious arithmetic in it and then do the
>> arithmetic right here so that it's possible to follow what's going on.
>
> Probably this needs better comments. So TSS_sysenter_stack is the offset
> of tss.sp0 (tss.sp1 later) from the _bottom_ of the stack.
> But in this macro the stack might not be empty, it has a configurable
> (by \nr_regs) number of dwords on it. Before this instruction we also
> do a push %edi, so we need (\nr_regs + 1).
>
> This can't be put into asm-offsets.c, as the actual offset depends on
> how much is on the stack.
>
>> >  ENTRY(entry_INT80_32)
>> >  	ASM_CLAC
>> >  	pushl	%eax			/* pt_regs->orig_ax */
>> > +
>> > +	/* Stack layout: ss, esp, eflags, cs, eip, orig_eax */
>> > +	SWITCH_TO_KERNEL_STACK nr_regs=6 check_user=1
>> > +
>>
>> Why check_user?
>
> You are right, check_user shouldn't be needed as INT80 is never called
> from kernel mode.
>
>> >  ENTRY(nmi)
>> >  	ASM_CLAC
>> > +
>> > +	/* Stack layout: ss, esp, eflags, cs, eip */
>> > +	SWITCH_TO_KERNEL_STACK nr_regs=5 check_user=1
>>
>> This is wrong, I think.  If you get an nmi in kernel mode but while
>> still on the sysenter stack, you blow up.  IIRC we have some crazy
>> code already to handle this (for nmi and #DB), and maybe that's
>> already adequate or can be made adequate, but at the very least this
>> needs a big comment explaining why it's okay.
>
> If we get an nmi while still on the sysenter stack, then we are not
> entering the handler from user-space and the above code will do
> nothing and behave as before.
>
> But you are right, it might blow up. There is a problem with the cr3
> switch, because the nmi can happen in kernel mode before the cr3 is
> switched, then this handler will not do the cr3 switch itself and
> crash the kernel. But the stack switching should be fine, I think.
>
>> > +	/*
>> > +	 * TODO: Find a way to let cpu_current_top_of_stack point to
>> > +	 * cpu_tss_rw.x86_tss.sp1. Doing so now results in stack
>> > +	 * corruption with iret exceptions.
>> > +	 */
>> > +	this_cpu_write(cpu_tss_rw.x86_tss.sp1, next_p->thread.sp0);
>>
>> Do you know what the issue is?
>
> No, not yet, I will look into that again. But first I want to get
> this series stable enough as it is.
>
>> As a general comment, the interaction between this patch and vm86 is a
>> bit scary.  In vm86 mode, the kernel gets entered with extra stuff on
>> the stack, which may screw up all your offsets.
>
> Just read up on vm86 mode control transfers and the stack layout then.
> Looks like I need to check for eflags.vm=1 and copy four more registers
> from/to the entry stack. Thanks for pointing that out.

You could just copy those slots unconditionally.  After all, you're
slowing down entries by an epic amount due to writing CR3 with PCID
off, so four words copied should be entirely lost in the noise.  OTOH,
checking for VM86 mode is just a single bt against EFLAGS.

With the modern (rewritten a year or two ago by Brian Gerst) vm86
code, all the slots (those actually in pt_regs) are in the same
location regardless of whether we're in VM86 mode or not, but we're
still fiddling with the bottom of the stack.  Since you're controlling
the switch to the kernel thread stack, you can easily just write the
frame to the correct location, so you should not need to context
switch sp1 -- you can do it sanely and leave sp1 as the actual bottom
of the kernel stack no matter what.  In fact, you could probably avoid
context switching sp0, too, which would be a nice cleanup.

So I recommend the following.  Keep sp0 as the bottom of the sysenter
stack no matter what.  Then do:

	bt	$X86_EFLAGS_VM_BIT
	jc	.Lfrom_vm_\@
	push 5 regs to real stack, starting at four-word offset
	(so they're in the right place)
.Lupdate_esp_\@:
	update %esp
	...

.Lfrom_vm_\@:
	push 9 regs to real stack, starting at the bottom
	jmp	.Lupdate_esp_\@

Does that seem reasonable?  It's arguably much nicer than what we have
now.