From: Nadav Amit <namit@vmware.com>
Cc: Nadav Amit, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Peter Zijlstra, Josh Poimboeuf
Subject: [PATCH 2/6] x86: bug: prevent gcc distortions
Date: Thu, 17 May 2018 09:13:58 -0700
Message-ID: <20180517161402.78089-3-namit@vmware.com>
In-Reply-To: <20180517161402.78089-1-namit@vmware.com>
References: <20180517161402.78089-1-namit@vmware.com>
MIME-Version: 1.0
Content-Type: text/plain
X-Mailing-List: linux-kernel@vger.kernel.org

GCC estimates the cost of an inline assembly block, in time and space,
from the number of statements it contains, as indicated by newlines and
semicolons. The kernel code distorts this estimate by placing
information in alternative sections, so the compiler may make incorrect
inlining and branch optimization decisions.

The solution is to define an assembly macro and call it from the inline
assembly block. GCC then considers the inline assembly block to be a
single instruction.

This patch increases the kernel size:

   text    data     bss      dec     hex filename
18126824 10067268 2936832 31130924 1db052c ./vmlinux before
18127205 10068388 2936832 31132425 1db0b09 ./vmlinux after (+1501)

but enables more aggressive inlining (and probably better branch
decisions). The number of static text symbols in vmlinux is lower:

Before: 40015
After:  39860   (-155)

Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: x86@kernel.org
Cc: Peter Zijlstra
Cc: Josh Poimboeuf
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/include/asm/bug.h | 56 +++++++++++++++++++++++++-------------
 1 file changed, 37 insertions(+), 19 deletions(-)

diff --git a/arch/x86/include/asm/bug.h b/arch/x86/include/asm/bug.h
index 6804d6642767..1167e4822a34 100644
--- a/arch/x86/include/asm/bug.h
+++ b/arch/x86/include/asm/bug.h
@@ -30,33 +30,51 @@
 
 #ifdef CONFIG_DEBUG_BUGVERBOSE
 
-#define _BUG_FLAGS(ins, flags)						\
+/*
+ * Saving the bug data is encapsulated within an assembly macro, which is then
+ * called on each use. This hack is necessary to prevent GCC from considering
+ * the inline assembly blocks as costly in time and space, which can prevent
+ * function inlining and lead to other bad compilation decisions. GCC computes
+ * the cost of inline assembly according to the perceived number of assembly
+ * instructions, based on the number of newlines and semicolons in the assembly
+ * block. The macro will eventually be compiled into a single instruction (and
+ * some data). This scheme allows GCC to better understand the inline asm cost.
+ */
+asm(".macro __BUG_FLAGS ins:req file:req line:req flags:req size:req\n"
+    "1:\t \\ins\n\t"
+    ".pushsection __bug_table,\"aw\"\n"
+    "2:\t " __BUG_REL(1b)	"\t# bug_entry::bug_addr\n\t"
+    __BUG_REL(\\file)		"\t# bug_entry::file\n\t"
+    ".word \\line"		"\t# bug_entry::line\n\t"
+    ".word \\flags"		"\t# bug_entry::flags\n\t"
+    ".org 2b+\\size\n\t"
+    ".popsection\n\t"
+    ".endm");
+
+#define _BUG_FLAGS(ins, flags)						\
 do {									\
-	asm volatile("1:\t" ins "\n"					\
-		     ".pushsection __bug_table,\"aw\"\n"		\
-		     "2:\t" __BUG_REL(1b) "\t# bug_entry::bug_addr\n"	\
-		     "\t" __BUG_REL(%c0) "\t# bug_entry::file\n"	\
-		     "\t.word %c1" "\t# bug_entry::line\n"		\
-		     "\t.word %c2" "\t# bug_entry::flags\n"		\
-		     "\t.org 2b+%c3\n"					\
-		     ".popsection"					\
-		     : : "i" (__FILE__), "i" (__LINE__),		\
-			 "i" (flags),					\
+	asm volatile("__BUG_FLAGS \"" ins "\" %c0 %c1 %c2 %c3"		\
+		     : : "i" (__FILE__), "i" (__LINE__),		\
+			 "i" (flags),					\
 			 "i" (sizeof(struct bug_entry)));		\
 } while (0)
 
 #else /* !CONFIG_DEBUG_BUGVERBOSE */
 
+asm(".macro __BUG_FLAGS ins:req flags:req size:req\n"
+    "1:\t\\ins\n\t"
+    ".pushsection __bug_table,\"aw\"\n"
+    "2:\t" __BUG_REL(1b)	"\t# bug_entry::bug_addr\n\t"
+    ".word \\flags"		"\t# bug_entry::flags\n\t"
+    ".org 2b+\\size\n\t"
+    ".popsection\n\t"
+    ".endm");
+
 #define _BUG_FLAGS(ins, flags)						\
 do {									\
-	asm volatile("1:\t" ins "\n"					\
-		     ".pushsection __bug_table,\"aw\"\n"		\
-		     "2:\t" __BUG_REL(1b) "\t# bug_entry::bug_addr\n"	\
-		     "\t.word %c0" "\t# bug_entry::flags\n"		\
-		     "\t.org 2b+%c1\n"					\
-		     ".popsection"					\
-		     : : "i" (flags),					\
-			 "i" (sizeof(struct bug_entry)));		\
+	asm volatile("__BUG_FLAGS \"" ins "\" %c0 %c1"			\
+		     : : "i" (flags),					\
+			 "i" (sizeof(struct bug_entry)));		\
 } while (0)
 
 #endif /* CONFIG_DEBUG_BUGVERBOSE */
-- 
2.17.0