From: Andy Lutomirski <luto@kernel.org>
Date: Tue, 12 Dec 2017 14:10:00 -0800
Subject: Re: Linux 4.15-rc2: Regression in resume from ACPI S3
To: Linus Torvalds
Cc: Andy Lutomirski, Pavel Machek, Zhang Rui, Thomas Gleixner, Jarkko Nikula, "Rafael J. Wysocki", Linux Kernel Mailing List, the arch/x86 maintainers

On Tue, Dec 12, 2017 at 10:36 AM, Linus Torvalds wrote:
> On Tue, Dec 12, 2017 at 10:05 AM, Andy Lutomirski wrote:
>>>
>>> - do NOT use "load_gs_index()", which does that swapgs dance (twice!)
>>> and plays with interrupt state. Just load the segment register, and
>>> then do the wrmsrl() of the {FS,GS,KERNEL_GS}_BASE values. There is no
>>> need for the swapgs dance.
>>
>> Using what helper? On x86_64, it can fault, and IIRC we explicitly
>> don't allow loadsegment(gs, ...).
>
> Just do the loadsegment() thing. The fact that we don't have a gs
> version of it is legacy - to catch bad users. It shouldn't stop us
> from having good users.
>
> That said - can it really fault? Because if it can, then why can't %fs
> fault? And on x86-64, we just do
>
>         asm volatile ("movw %0, %%fs" :: "r" (ctxt->fs));
>
> and don't actually use 'loadsegment()' for _any_ of the segments. We
> only do the fault protection on 32-bit.
>
> In fact, we really should try to avoid taking faults here anyway,
> shouldn't we? We haven't loaded enough of the context yet.
>
> Hmm.
>
> Maybe we should load only the fixed kernel segments at this point, and
> then do all the loadsegment() of gs/fs in the later phase when we're
> all set up.
>
> THERE we can do the swapgs dance with interrupt tracing etc, because
> *there* we actually are fully set up. I guess that means reloading the
> FS/GS base MSR's,

Like this?

https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/commit/?h=x86/fixes&id=cb855aa9679a15adbe43732f5854270de2b35856

I've barely tested it. It suspended and resumed once in a 64-bit VM.
It compiles on 32-bit.

(That link might not work for a little bit. I'm not sure what's up.)
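
For reference, the overall shape being discussed (plain selector loads
first, then explicit base MSR writes, no swapgs dance) is roughly the
following. This is only a sketch, not the actual patch: the field names
are placeholders for whatever struct saved_context actually carries, and
it ignores the ordering/fault questions above that the real fix has to
deal with.

        /*
         * Sketch only.  Assumes interrupts are off and nothing touches
         * percpu data between the selector loads and the MSR writes,
         * since the mov to %gs may clobber the hidden GS base until
         * the base MSRs are rewritten below.
         *
         * ctxt->fs, ctxt->gs and the *_base fields are placeholder
         * names for the saved-context structure's members.
         */
        asm volatile ("movw %0, %%fs" :: "r" (ctxt->fs));
        asm volatile ("movw %0, %%gs" :: "r" (ctxt->gs));

        wrmsrl(MSR_FS_BASE, ctxt->fs_base);
        wrmsrl(MSR_GS_BASE, ctxt->gs_base);
        wrmsrl(MSR_KERNEL_GS_BASE, ctxt->gs_kernel_base);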