Date: Wed, 19 Dec 2018 12:20:28 +0100
From: Ingo Molnar
To: Masahiro Yamada
Cc: x86@kernel.org, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	"H. Peter Anvin", Richard Biener, Segher Boessenkool,
	Peter Zijlstra, Juergen Gross, Josh Poimboeuf, Kees Cook,
	Linus Torvalds, Arnd Bergmann, Andrey Ryabinin,
	virtualization@lists.linux-foundation.org, Luc Van Oostenryck,
	Alok Kataria, Ard Biesheuvel, Jann Horn, linux-arch@vger.kernel.org,
	Alexey Dobriyan, linux-sparse@vger.kernel.org, Andrew Morton,
	linux-kbuild@vger.kernel.org, Yonghong Song, Michal Marek,
	Arnaldo Carvalho de Melo, Jan Beulich, Nadav Amit,
	David Woodhouse, Alexei Starovoitov, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code
Message-ID: <20181219112028.GA38175@gmail.com>
References: <1545062607-8599-1-git-send-email-yamada.masahiro@socionext.com>
In-Reply-To: <1545062607-8599-1-git-send-email-yamada.masahiro@socionext.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.9.4 (2018-02-28)
X-Mailing-List: linux-kernel@vger.kernel.org

* Masahiro Yamada wrote:

> This series reverts the in-kernel workarounds for inlining issues.
>
> The commit description of 77b0bf55bc67 mentioned
> "We also hope that GCC will eventually get fixed, ..."
>
> Now, GCC provides a solution.
>
> https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html
> explains the new "asm inline" syntax.
>
> The performance issue will eventually be solved.
>
> [About code cleanups]
>
> I know Nadav Amit is opposed to the full revert.
> He also claims his motivation for macrofying was not only
> performance, but also cleanups.
>
> IIUC, the criticism addresses the code duplication between C and asm.
>
> If so, I'd like to suggest a different approach for the cleanups.
> Please see the last 3 patches.
> IMHO, the preprocessor approach is more straightforward and readable.
> Basically, this idea should work because it is what we already do for
> __ASM_FORM() etc.
>
> [Quick test of "asm inline" with GCC 9]
>
> If you want to try the "asm inline" feature, the patch is available:
> https://lore.kernel.org/patchwork/patch/1024590/
>
> The number of symbols for arch/x86/configs/x86_64_defconfig:
>
>                         nr_symbols
>   [1] v4.20-rc7        : 96502
>   [2] [1]+full revert  : 96705 (+203)
>   [3] [2]+"asm inline" : 96568 (-137)
>
>   [3]: apply my patch, then replace "asm" -> "asm_inline" for
>        _BUG_FLAGS(), refcount_add(), refcount_inc(), refcount_dec(),
>        annotate_reachable(), annotate_unreachable()
>
> Changes in v3:
>   - Split into per-commit reverts (per Nadav Amit)
>   - Add some cleanups with the preprocessor approach
>
> Changes in v2:
>   - Revive clean-ups made by 5bdcd510c2ac (per Peter Zijlstra)
>   - Fix commit quoting style (per Peter Zijlstra)
>
> Masahiro Yamada (12):
>   Revert "x86/jump-labels: Macrofy inline assembly code to work around
>     GCC inlining bugs"
>   Revert "x86/cpufeature: Macrofy inline assembly code to work around
>     GCC inlining bugs"
>   Revert "x86/extable: Macrofy inline assembly code to work around GCC
>     inlining bugs"
>   Revert "x86/paravirt: Work around GCC inlining bugs when compiling
>     paravirt ops"
>   Revert "x86/bug: Macrofy the BUG table section handling, to work
>     around GCC inlining bugs"
>   Revert "x86/alternatives: Macrofy lock prefixes to work around GCC
>     inlining bugs"
>   Revert "x86/refcount: Work around GCC inlining bug"
>   Revert "x86/objtool: Use asm macros to work around GCC inlining bugs"
>   Revert "kbuild/Makefile: Prepare for using macros in inline assembly
>     code to work around asm() related GCC inlining bugs"
>   linux/linkage: add ASM() macro to reduce duplication between C/ASM
>     code
>   x86/alternatives: consolidate LOCK_PREFIX macro
>   x86/asm: consolidate ASM_EXTABLE_* macros
>
>  Makefile                                  |  9 +--
>  arch/x86/Makefile                         |  7 ---
>  arch/x86/include/asm/alternative-asm.h    | 22 +------
>  arch/x86/include/asm/alternative-common.h | 47 +++++++++++++++
>  arch/x86/include/asm/alternative.h        | 30 +---------
>  arch/x86/include/asm/asm.h                | 46 +++++----------
>  arch/x86/include/asm/bug.h                | 98 +++++++++++++------------------
>  arch/x86/include/asm/cpufeature.h         | 82 +++++++++++---------------
>  arch/x86/include/asm/jump_label.h         | 22 +++++--
>  arch/x86/include/asm/paravirt_types.h     | 56 +++++++++---------
>  arch/x86/include/asm/refcount.h           | 81 +++++++++++--------------
>  arch/x86/kernel/macros.S                  | 16 -----
>  include/asm-generic/bug.h                 |  8 +--
>  include/linux/compiler.h                  | 56 ++++-------------
>  include/linux/linkage.h                   |  8 +++
>  scripts/Kbuild.include                    |  4 +-
>  scripts/mod/Makefile                      |  2 -
>  17 files changed, 249 insertions(+), 345 deletions(-)
>  create mode 100644 arch/x86/include/asm/alternative-common.h
>  delete mode 100644 arch/x86/kernel/macros.S

I absolutely agree that this needs to be resolved in v4.20.

So I did the 1-9 reverts manually myself as well, because I think the
first commit should be reverted fully to get as close to the starting
point as possible (we are late in the cycle) - and came to the attached
interdiff between your series and mine.
Does this approach look OK to you, or did I miss something?

Thanks,

	Ingo

=============>

 entry/calling.h          |    2 -
 include/asm/jump_label.h |   50 ++++++++++++++++++++++++++++++++++-------------
 2 files changed, 38 insertions(+), 14 deletions(-)

diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
index 25e5a6bda8c3..20d0885b00fb 100644
--- a/arch/x86/entry/calling.h
+++ b/arch/x86/entry/calling.h
@@ -352,7 +352,7 @@ For 32-bit we have the following conventions - kernel is built with
 .macro CALL_enter_from_user_mode
 #ifdef CONFIG_CONTEXT_TRACKING
 #ifdef HAVE_JUMP_LABEL
-	STATIC_BRANCH_JMP l_yes=.Lafter_call_\@, key=context_tracking_enabled, branch=1
+	STATIC_JUMP_IF_FALSE .Lafter_call_\@, context_tracking_enabled, def=0
 #endif
 	call enter_from_user_mode
 .Lafter_call_\@:
diff --git a/arch/x86/include/asm/jump_label.h b/arch/x86/include/asm/jump_label.h
index cf88ebf9a4ca..21efc9d07ed9 100644
--- a/arch/x86/include/asm/jump_label.h
+++ b/arch/x86/include/asm/jump_label.h
@@ -2,6 +2,19 @@
 #ifndef _ASM_X86_JUMP_LABEL_H
 #define _ASM_X86_JUMP_LABEL_H
 
+#ifndef HAVE_JUMP_LABEL
+/*
+ * For better or for worse, if jump labels (the gcc extension) are missing,
+ * then the entire static branch patching infrastructure is compiled out.
+ * If that happens, the code in here will malfunction. Raise a compiler
+ * error instead.
+ *
+ * In theory, jump labels and the static branch patching infrastructure
+ * could be decoupled to fix this.
+ */
+#error asm/jump_label.h included on a non-jump-label kernel
+#endif
+
 #define JUMP_LABEL_NOP_SIZE 5
 
 #ifdef CONFIG_X86_64
@@ -53,26 +66,37 @@ static __always_inline bool arch_static_branch_jump(struct static_key *key, bool
 
 #else /* __ASSEMBLY__ */
 
-.macro STATIC_BRANCH_NOP l_yes:req key:req branch:req
-.Lstatic_branch_nop_\@:
-	.byte STATIC_KEY_INIT_NOP
-.Lstatic_branch_no_after_\@:
+.macro STATIC_JUMP_IF_TRUE target, key, def
+.Lstatic_jump_\@:
+	.if \def
+	/* Equivalent to "jmp.d32 \target" */
+	.byte 0xe9
+	.long \target - .Lstatic_jump_after_\@
+.Lstatic_jump_after_\@:
+	.else
+	.byte STATIC_KEY_INIT_NOP
+	.endif
 	.pushsection __jump_table, "aw"
 	_ASM_ALIGN
-	.long .Lstatic_branch_nop_\@ - ., \l_yes - .
-	_ASM_PTR \key + \branch - .
+	.long .Lstatic_jump_\@ - ., \target - .
+	_ASM_PTR \key - .
 	.popsection
 .endm
 
-.macro STATIC_BRANCH_JMP l_yes:req key:req branch:req
-.Lstatic_branch_jmp_\@:
-	.byte 0xe9
-	.long \l_yes - .Lstatic_branch_jmp_after_\@
-.Lstatic_branch_jmp_after_\@:
+.macro STATIC_JUMP_IF_FALSE target, key, def
+.Lstatic_jump_\@:
+	.if \def
+	.byte STATIC_KEY_INIT_NOP
+	.else
+	/* Equivalent to "jmp.d32 \target" */
+	.byte 0xe9
+	.long \target - .Lstatic_jump_after_\@
+.Lstatic_jump_after_\@:
+	.endif
 	.pushsection __jump_table, "aw"
 	_ASM_ALIGN
-	.long .Lstatic_branch_jmp_\@ - ., \l_yes - .
-	_ASM_PTR \key + \branch - .
+	.long .Lstatic_jump_\@ - ., \target - .
+	_ASM_PTR \key + 1 - .
 	.popsection
 .endm