From: Nick Desaulniers
Date: Tue, 1 Oct 2019 10:03:50 -0700
Subject: Re: [PATCH v2] ARM: add __always_inline to functions called from __get_user_check()
To: Masahiro Yamada
Cc: Linux ARM, Russell King, Linus Torvalds, Olof Johansson, Arnd Bergmann,
    Nicolas Saenz Julienne, Allison Randal, Enrico Weigelt,
    Greg Kroah-Hartman, Julien Thierry, Kate Stewart, Russell King,
    Stefan Agner, Thomas Gleixner, Vincent Whitchurch, LKML
In-Reply-To: <20191001083701.27207-1-yamada.masahiro@socionext.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Oct 1, 2019 at 1:37 AM Masahiro Yamada wrote:
>
> KernelCI reports that bcm2835_defconfig is no longer booting since
> commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING
> forcibly") (https://lkml.org/lkml/2019/9/26/825).
>
> I also received a regression report from Nicolas Saenz Julienne
> (https://lkml.org/lkml/2019/9/27/263).
>
> This problem has cropped up on bcm2835_defconfig because it enables
> CONFIG_CC_OPTIMIZE_FOR_SIZE. The compiler tends to prefer not inlining
> functions with -Os. I was able to reproduce it with other boards and
> defconfig files by manually enabling CONFIG_CC_OPTIMIZE_FOR_SIZE.
>
> The __get_user_check() specifically uses r0, r1, r2 registers.
> So, uaccess_save_and_enable() and uaccess_restore() must be inlined.
> Otherwise, those register assignments would be entirely dropped,
> according to my analysis of the disassembly.
>
> Prior to commit 9012d011660e ("compiler: allow all arches to enable
> CONFIG_OPTIMIZE_INLINING"), the 'inline' marker was always enough for
> inlining functions, except on x86.
>
> Since that commit, all architectures can enable CONFIG_OPTIMIZE_INLINING.
> So, __always_inline is now the only guaranteed way of forcible inlining.

No, the C preprocessor is the only guaranteed way of inlining. I
preferred v1; if you're going to play with fire (write assembly), don't
get burned.

>
> I also added __always_inline to 4 functions in the call-graph from the
> __get_user_check() macro.
>
> Fixes: 9012d011660e ("compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING")
> Reported-by: "kernelci.org bot"
> Reported-by: Nicolas Saenz Julienne
> Signed-off-by: Masahiro Yamada
> ---
>
> Changes in v2:
>   - Use __always_inline instead of changing the function call places
>     (per Russell King)
>   - The previous submission is: https://lore.kernel.org/patchwork/patch/1132459/
>
>  arch/arm/include/asm/domain.h  | 8 ++++----
>  arch/arm/include/asm/uaccess.h | 4 ++--
>  2 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm/include/asm/domain.h b/arch/arm/include/asm/domain.h
> index 567dbede4785..f1d0a7807cd0 100644
> --- a/arch/arm/include/asm/domain.h
> +++ b/arch/arm/include/asm/domain.h
> @@ -82,7 +82,7 @@
>  #ifndef __ASSEMBLY__
>
>  #ifdef CONFIG_CPU_CP15_MMU
> -static inline unsigned int get_domain(void)
> +static __always_inline unsigned int get_domain(void)
>  {
>         unsigned int domain;
>
> @@ -94,7 +94,7 @@ static inline unsigned int get_domain(void)
>         return domain;
>  }
>
> -static inline void set_domain(unsigned val)
> +static __always_inline void set_domain(unsigned int val)
>  {
>         asm volatile(
>         "mcr    p15, 0, %0, c3, c0      @ set domain"
> @@ -102,12 +102,12 @@ static inline void set_domain(unsigned val)
>         isb();
>  }
>  #else
> -static inline unsigned int get_domain(void)
> +static __always_inline unsigned int get_domain(void)
>  {
>         return 0;
>  }
>
> -static inline void set_domain(unsigned val)
> +static __always_inline void set_domain(unsigned int val)
>  {
>  }
>  #endif
> diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
> index 303248e5b990..98c6b91be4a8 100644
> --- a/arch/arm/include/asm/uaccess.h
> +++ b/arch/arm/include/asm/uaccess.h
> @@ -22,7 +22,7 @@
>   * perform such accesses (eg, via list poison values) which could then
>   * be exploited for priviledge escalation.
>   */
> -static inline unsigned int uaccess_save_and_enable(void)
> +static __always_inline unsigned int uaccess_save_and_enable(void)
>  {
>  #ifdef CONFIG_CPU_SW_DOMAIN_PAN
>         unsigned int old_domain = get_domain();
> @@ -37,7 +37,7 @@ static inline unsigned int uaccess_save_and_enable(void)
>  #endif
>  }
>
> -static inline void uaccess_restore(unsigned int flags)
> +static __always_inline void uaccess_restore(unsigned int flags)
>  {
>  #ifdef CONFIG_CPU_SW_DOMAIN_PAN
>         /* Restore the user access mask */
> --
> 2.17.1
>

--
Thanks,
~Nick Desaulniers