From: Nadav Amit <namit@vmware.com>
To: <linux-kernel@vger.kernel.org>
Cc: Nadav Amit, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin",
	Kees Cook, Jan Beulich, Josh Poimboeuf
Subject: [RFC 5/8] x86: refcount: prevent gcc distortions
Date: Tue, 15 May 2018 07:11:12 -0700
Message-ID: <20180515141124.84254-6-namit@vmware.com>
In-Reply-To: <20180515141124.84254-1-namit@vmware.com>
References: <20180515141124.84254-1-namit@vmware.com>
X-Mailer: git-send-email 2.17.0

GCC estimates the time and space cost of an inline assembly block from the
number of statements it contains, counted by new-lines and semicolons. The
kernel distorts this count by emitting information into alternative sections
from within the block, so the compiler may make incorrect inlining and
branch-optimization decisions.

The solution is to define an assembly macro and call it from the inline
assembly block. GCC then treats the whole inline assembly block as a single
instruction.

This patch allows functions such as __get_seccomp_filter() to be inlined.
Its effect on the kernel size is as follows:

   text	   data	    bss	    dec	    hex	filename
18146418 10064100 2936832 31147350 1db4556 ./vmlinux before
18148228 10063968 2936832 31149028 1db4be4 ./vmlinux after (+1678)

Static text symbols:
Before:	39673
After:	39649	(-24)

Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: x86@kernel.org
Cc: Kees Cook
Cc: Jan Beulich
Cc: Josh Poimboeuf
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/include/asm/refcount.h | 55 ++++++++++++++++++++-------------
 1 file changed, 33 insertions(+), 22 deletions(-)
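
[ Not part of the patch: a minimal stand-alone sketch of the technique, for
  illustration only; the names check_neg and do_inc are made up. A file-scope
  asm() emits the macro body once, and every user's inline asm then contains
  just a one-line macro invocation, so GCC's statement count, and with it the
  inlining-cost estimate, stays small regardless of how large the macro body
  is. The parameter is declared :vararg so memory operands containing commas
  are passed through whole, as in the patch. Unlike the real __REFCOUNT_*
  macros the sketch simply traps with ud2 and has no exception-table
  recovery, and it assumes the defining asm() is emitted before any use in
  the same translation unit, which is the same assumption the patch relies
  on. It should build on x86-64 with a plain "gcc -O2 sketch.c". ]

asm (".macro check_neg counter:vararg\n\t"
	"js 998f\n\t"				/* taken only if the result is negative */
	".pushsection .text.unlikely, \"ax\", @progbits\n"
	"998:\tlea \\counter, %rcx\n\t"		/* stash the counter's address, like the patch */
	"ud2\n\t"				/* trap; no exception-table recovery in the sketch */
	".popsection\n"
	".endm");

static inline void do_inc(int *v)
{
	/* GCC sees a two-statement asm block here, not the macro body. */
	asm volatile ("incl %[counter]\n\t"
		      "check_neg %[counter]"
		      : [counter] "+m" (*v)
		      : : "cc", "cx");
}

int main(void)
{
	int x = 0;

	do_inc(&x);	/* x becomes 1, so the branch in check_neg is not taken */
	return x == 1 ? 0 : 1;
}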

diff --git a/arch/x86/include/asm/refcount.h b/arch/x86/include/asm/refcount.h
index 4cf11d88d3b3..a668c534206d 100644
--- a/arch/x86/include/asm/refcount.h
+++ b/arch/x86/include/asm/refcount.h
@@ -14,34 +14,43 @@
  * central refcount exception. The fixup address for the exception points
  * back to the regular execution flow in .text.
  */
-#define _REFCOUNT_EXCEPTION				\
-	".pushsection .text..refcount\n"		\
-	"111:\tlea %[counter], %%" _ASM_CX "\n"		\
-	"112:\t" ASM_UD2 "\n"				\
-	ASM_UNREACHABLE					\
-	".popsection\n"					\
-	"113:\n"					\
+
+asm ("\n"
+	".macro __REFCOUNT_EXCEPTION counter:vararg\n\t"
+	".pushsection .text..refcount\n"
+	"111:\tlea \\counter, %" _ASM_CX "\n"
+	"112:\t" ASM_UD2 "\n\t"
+	ASM_UNREACHABLE
+	".popsection\n\t"
+	"113:\n"
 	_ASM_EXTABLE_REFCOUNT(112b, 113b)
+	".endm");
 
 /* Trigger refcount exception if refcount result is negative. */
-#define REFCOUNT_CHECK_LT_ZERO				\
-	"js 111f\n\t"					\
-	_REFCOUNT_EXCEPTION
+asm ("\n"
+	".macro __REFCOUNT_CHECK_LT_ZERO counter:vararg\n"
+	"js 111f\n\t"
+	"__REFCOUNT_EXCEPTION \\counter\n"
+	".endm");
 
 /* Trigger refcount exception if refcount result is zero or negative. */
-#define REFCOUNT_CHECK_LE_ZERO				\
-	"jz 111f\n\t"					\
-	REFCOUNT_CHECK_LT_ZERO
+asm ("\n"
+	".macro __REFCOUNT_CHECK_LE_ZERO counter:vararg\n"
+	"jz 111f\n\t"
+	"__REFCOUNT_CHECK_LT_ZERO counter=\\counter\n"
+	".endm");
 
 /* Trigger refcount exception unconditionally. */
-#define REFCOUNT_ERROR					\
-	"jmp 111f\n\t"					\
-	_REFCOUNT_EXCEPTION
+asm ("\n"
+	".macro __REFCOUNT_ERROR counter:vararg\n\t"
+	"jmp 111f\n\t"
+	"__REFCOUNT_EXCEPTION counter=\\counter\n"
+	".endm");
 
 static __always_inline void refcount_add(unsigned int i, refcount_t *r)
 {
 	asm volatile(LOCK_PREFIX "addl %1,%0\n\t"
-		REFCOUNT_CHECK_LT_ZERO
+		"__REFCOUNT_CHECK_LT_ZERO %[counter]"
 		: [counter] "+m" (r->refs.counter)
 		: "ir" (i)
 		: "cc", "cx");
@@ -50,7 +59,7 @@ static __always_inline void refcount_add(unsigned int i, refcount_t *r)
 static __always_inline void refcount_inc(refcount_t *r)
 {
 	asm volatile(LOCK_PREFIX "incl %0\n\t"
-		REFCOUNT_CHECK_LT_ZERO
+		"__REFCOUNT_CHECK_LT_ZERO %[counter]"
 		: [counter] "+m" (r->refs.counter)
 		: : "cc", "cx");
 }
@@ -58,7 +67,7 @@ static __always_inline void refcount_inc(refcount_t *r)
 static __always_inline void refcount_dec(refcount_t *r)
 {
 	asm volatile(LOCK_PREFIX "decl %0\n\t"
-		REFCOUNT_CHECK_LE_ZERO
+		"__REFCOUNT_CHECK_LE_ZERO %[counter]"
 		: [counter] "+m" (r->refs.counter)
 		: : "cc", "cx");
 }
@@ -66,13 +75,15 @@ static __always_inline void refcount_dec(refcount_t *r)
 static __always_inline __must_check
 bool refcount_sub_and_test(unsigned int i, refcount_t *r)
 {
-	GEN_BINARY_SUFFIXED_RMWcc(LOCK_PREFIX "subl", REFCOUNT_CHECK_LT_ZERO,
+	GEN_BINARY_SUFFIXED_RMWcc(LOCK_PREFIX "subl",
+				  "__REFCOUNT_CHECK_LT_ZERO %[counter]",
 				  r->refs.counter, "er", i, "%0", e, "cx");
 }
 
 static __always_inline __must_check bool refcount_dec_and_test(refcount_t *r)
 {
-	GEN_UNARY_SUFFIXED_RMWcc(LOCK_PREFIX "decl", REFCOUNT_CHECK_LT_ZERO,
+	GEN_UNARY_SUFFIXED_RMWcc(LOCK_PREFIX "decl",
+				 "__REFCOUNT_CHECK_LT_ZERO %[counter]",
 				 r->refs.counter, "%0", e, "cx");
 }
@@ -90,7 +101,7 @@ bool refcount_add_not_zero(unsigned int i, refcount_t *r)
 		/* Did we try to increment from/to an undesirable state? */
 		if (unlikely(c < 0 || c == INT_MAX || result < c)) {
-			asm volatile(REFCOUNT_ERROR
+			asm volatile("__REFCOUNT_ERROR %[counter]"
 				: : [counter] "m" (r->refs.counter)
 				: "cc", "cx");
 			break;
-- 
2.17.0