Date: Tue, 1 Oct 2019 09:32:25 -0700
From: Nick Desaulniers
To: Will Deacon
Cc: Masahiro Yamada, Linus Torvalds, Nicolas Saenz Julienne, Andrew Morton,
	Ingo Molnar, Borislav Petkov, Miguel Ojeda, linux-arch, LKML,
	Catalin Marinas, Russell King, Stefan Wahren, Kees Cook
Subject: Re: [PATCH] compiler: enable CONFIG_OPTIMIZE_INLINING forcibly
In-Reply-To: <20191001092823.z4zhlbwvtwnlotwc@willie-the-truck>
References: <20190830034304.24259-1-yamada.masahiro@socionext.com>
	<20190930112636.vx2qxo4hdysvxibl@willie-the-truck>
	<20190930121803.n34i63scet2ec7ll@willie-the-truck>
	<20191001092823.z4zhlbwvtwnlotwc@willie-the-truck>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Oct 1, 2019 at 2:28 AM Will Deacon wrote:
>
> Hi Nick,
>
> On Mon, Sep 30, 2019 at 02:50:10PM -0700, Nick Desaulniers wrote:
> > So __attribute__((always_inline)) doesn't guarantee that code will be
> > inlined. For instance, LLVM's inliner asks/answers "should I inline"
> > and "can I inline". "Should" has to do with a cost model, and is very
> > heuristic-y. "Can" has more to do with the transforms, and whether
> > they're all implemented and safe. If you say
> > __attribute__((always_inline)), the answer to "can I inline this" can
> > still be *no*. The only way to guarantee inlining is via the C
> > preprocessor. The only way to prevent inlining is via
> > __attribute__((noinline)). inline and __attribute__((always_inline))
> > are a heuristic-laden mess and should not be relied upon. I would
> > also look closely at code that *requires* inlining, or the lack
> > thereof, to be correct. That the kernel no longer compiles at -O0 is
> > not a good thing IMO, and it hurts developers who want a short
> > compile/execute/debug cycle.
> >
> > In this case, if there's a known codegen bug in a particular compiler
> > or in certain versions of it, I recommend using either the C
> > preprocessor or __attribute__((noinline)) to get the desired behavior
> > localized to the function in question, and then proceeding with
> > Masahiro's cleanup.
>
> Hmm, I don't see how that would help. The problem occurs when things
> are moved out of line by the compiler (see below).

It's being moved out of line because __attribute__((always_inline)) and
plain inline provide no guarantee that outlining does not occur. It would
help to make the functions that need to be inlined into macros, because
the C preprocessor doesn't have that issue.

> > The comment above the use of CONFIG_OPTIMIZE_INLINING in
> > include/linux/compiler_types.h says:
> >
> >   * Force always-inline if the user requests it so via the .config.
> >
> > That makes me grimace (__attribute__((always_inline)) doesn't *force*
> > anything, as per above), and the idea that forcing things marked
> > inline to also be __attribute__((always_inline)) is an "optimization"
> > (re: the name of the config, CONFIG_OPTIMIZE_INLINING) is also highly
> > suspect. Aggressive inlining leads to image-size bloat and to
> > instruction-cache and register pressure; it is not exclusively an
> > optimization.
>
> Agreed on all of this, but the fact remains that GCC has been shown to
> *miscompile* the arm64 kernel with CONFIG_OPTIMIZE_INLINING=y.
> Please, look at this thread:
>
> https://www.spinics.net/lists/arm-kernel/msg730329.html
> https://www.spinics.net/lists/arm-kernel/msg730512.html
>
> GCC decides to pull an atomic operation out-of-line and, in doing so,

If the function is incorrect unless inlined, use a macro.

> gets the register allocations subtly wrong when passing a 'register'
> variable into an inline asm. I would like to avoid this sort of thing
> happening, since it can result in really nasty bugs that manifest at
> runtime and are extremely difficult to debug, which is why I would much
> prefer not to have this option on by default for arm64. I sent a patch
> already:
>
> https://lkml.kernel.org/r/20190930114540.27498-1-will@kernel.org
>
> and I'm happy to spin a v2 which depends on !CC_IS_CLANG as well.

For small things like whether we mark a function always_inline or not, I
think it's simpler to just keep the code consistent between compilers,
even if it's to work around a bug in one compiler. A comment in the code
would be sufficient.

> Reducing the instruction cache footprint is great, but not if the
> resulting code is broken!

You don't have to convince compiler folks about correctness. ;)
Correctness trumps all, especially performance.
--
Thanks,
~Nick Desaulniers