Date: Tue, 1 Oct 2019 10:28:24 +0100
From: Will Deacon
To: Nick Desaulniers
Cc: Masahiro Yamada, Linus Torvalds, Nicolas Saenz Julienne,
	Andrew Morton, Ingo Molnar, Borislav Petkov, Miguel Ojeda,
	linux-arch, LKML, Catalin Marinas, Russell King, Stefan Wahren,
	Kees Cook
Subject: Re: [PATCH] compiler: enable CONFIG_OPTIMIZE_INLINING forcibly
Message-ID: <20191001092823.z4zhlbwvtwnlotwc@willie-the-truck>
References: <20190830034304.24259-1-yamada.masahiro@socionext.com>
	<20190930112636.vx2qxo4hdysvxibl@willie-the-truck>
	<20190930121803.n34i63scet2ec7ll@willie-the-truck>

Hi Nick,

On Mon, Sep 30, 2019 at 02:50:10PM -0700, Nick Desaulniers wrote:
> On Mon, Sep 30, 2019 at 5:18 AM Will Deacon wrote:
> > On Mon, Sep 30, 2019 at 09:05:11PM +0900, Masahiro Yamada wrote:
> > > On Mon, Sep 30, 2019 at 8:26 PM Will Deacon wrote:
> > > > FWIW, we've run into issues with CONFIG_OPTIMIZE_INLINING and local
> > > > variables marked as 'register' where GCC would do crazy things and end
> > > > up corrupting data, so I suspect the use of fixed registers in the arm
> > > > uaccess functions is hitting something similar:
> > > >
> > > > https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91111
> > >
> > > No. Not similar at all.
> >
> > They're similar in that enabling CONFIG_OPTIMIZE_INLINING causes register
> > variables to go wrong. I agree that the ARM code looks dodgy with
> > that call to uaccess_save_and_enable(), but there are __asmeq macros
> > in there to try to catch that, so it's still very fishy.
> >
> > > I fixed it already. See
> > > https://lore.kernel.org/patchwork/patch/1132459/
> >
> > You fixed the specific case above for 32-bit ARM, but the arm64 case
> > is due to a compiler bug. As it happens, we've reworked our atomics
> > in 5.4 so that particular issue no longer triggers, but the fact remains
> > that GCC has been shown to screw up explicit register allocation for
> > perfectly legitimate code when given the flexibility to move code out
> > of line.
>
> So __attribute__((always_inline)) doesn't guarantee that code will be
> inlined. For instance, LLVM's inliner asks and answers "should I
> inline?" and "can I inline?". "Should" has to do with a cost model, and
> is very heuristic-y. "Can" has more to do with the transforms, and
> whether they're all implemented and safe. If you say
> __attribute__((always_inline)), the answer to "can I inline this?" can
> still be *no*. The only way to guarantee inlining is via the C
> preprocessor. The only way to prevent inlining is via
> __attribute__((noinline)). inline and __attribute__((always_inline))
> are a heuristic-laden mess and should not be relied upon. I would
> also look closely at code that *requires* inlining or the lack thereof
> to be correct. That the kernel no longer compiles at -O0 is not a
> good thing IMO, and hurts developers who want a short
> compile/execute/debug cycle.
>
> In this case, if there's a known codegen bug in a particular compiler
> or certain versions of it, I recommend the use of either the C
> preprocessor or __attribute__((noinline)) to get the desired behavior
> localized to the function in question, and for us to proceed with
> Masahiro's cleanup.

Hmm, I don't see how that would help. The problem occurs when things
are moved out of line by the compiler (see below).
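
To make the two attributes concrete, here is a made-up, stand-alone
example (nothing below is from the kernel tree; the function names are
invented). It simply shows how the attributes are spelled and used:
always_inline is a strong request the compiler may still refuse if it
decides it cannot inline, whereas noinline reliably keeps a function
out of line:

/* Illustration only -- not kernel code. */
#include <stdio.h>

/* Strong request to inline; the compiler still decides for itself
 * whether it *can* inline this before honouring the request. */
static inline int __attribute__((always_inline)) add_one(int x)
{
	return x + 1;
}

/* Reliably kept out of line, whichever way the inline heuristics go. */
static int __attribute__((noinline)) add_two(int x)
{
	return x + 2;
}

int main(void)
{
	printf("%d %d\n", add_one(1), add_two(2));
	return 0;
}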

> The comment above the use of CONFIG_OPTIMIZE_INLINING in
> include/linux/compiler_types.h says:
> * Force always-inline if the user requests it so via the .config.
> Which makes me grimace (__attribute__((always_inline)) doesn't *force*
> anything as per above), and the idea that forcing things marked inline
> to also be __attribute__((always_inline)) is an "optimization" (re:
> the name of the config; CONFIG_OPTIMIZE_INLINING) is also highly
> suspect. Aggressive inlining leads to image size bloat, instruction
> cache and register pressure; it is not exclusively an optimization.

Agreed on all of this, but the fact remains that GCC has been shown to
*miscompile* the arm64 kernel with CONFIG_OPTIMIZE_INLINING=y. Please,
look at this thread:

https://www.spinics.net/lists/arm-kernel/msg730329.html
https://www.spinics.net/lists/arm-kernel/msg730512.html

GCC decides to pull an atomic operation out-of-line and, in doing so,
gets the register allocations subtly wrong when passing a 'register'
variable into an inline asm. I would like to avoid this sort of thing
happening, since it can result in really nasty bugs that manifest at
runtime and are extremely difficult to debug, which is why I would
much prefer not to have this option on by default for arm64.

I sent a patch already:

https://lkml.kernel.org/r/20190930114540.27498-1-will@kernel.org

and I'm happy to spin a v2 which depends on !CC_IS_CLANG as well.
Reducing the instruction cache footprint is great, but not if the
resulting code is broken!

Will
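
For readers who have not seen the pattern Will is describing, a heavily
simplified sketch of a fixed-register variable feeding an inline asm
follows. It assumes an arm64 GCC, the names are invented, and it is
purely illustrative -- it is not the kernel's actual atomics or uaccess
code:

/* Heavily simplified sketch (arm64 GCC assumed); not kernel code. */
static inline unsigned long __attribute__((always_inline))
do_fixed_reg_op(unsigned long v)
{
	/*
	 * Bind the C variable to a fixed machine register, in the
	 * spirit of the code discussed above.
	 */
	register unsigned long x0 __asm__("x0") = v;

	/*
	 * The asm only sees the "+r" constraint; the real code relies
	 * on the value actually living in x0. If the compiler moves
	 * the surrounding code out of line and mishandles the fixed
	 * register, the asm silently operates on the wrong register.
	 */
	__asm__ volatile("" : "+r"(x0));

	return x0;
}

int main(void)
{
	return (int)do_fixed_reg_op(0);
}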