From: Nadav Amit
To: ,
CC: Nadav Amit, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin",
    Greg Kroah-Hartman, Kate Stewart, Philippe Ombredanne
Subject: [PATCH v2 9/9] x86: jump-labels: use macros instead of inline assembly
Date: Mon, 4 Jun 2018 04:21:31 -0700
Message-ID: <20180604112131.59100-10-namit@vmware.com>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180604112131.59100-1-namit@vmware.com>
References: <20180604112131.59100-1-namit@vmware.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Use assembly macros for jump-labels and call them from inline assembly.
This not only makes the code more readable, but also improves the
compiler's inlining decisions, which GCC bases on the number of new
lines in the inline assembly. As a result, code size increases slightly:

   text    data     bss      dec      hex     filename
18163528 10226300 2957312 31347140 1de51c4 ./vmlinux before
18163608 10227348 2957312 31348268 1de562c ./vmlinux after (+1128)

and functions such as intel_pstate_adjust_policy_max(),
kvm_cpu_accept_dm_intr() and kvm_register_read() are now inlined.

Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: x86@kernel.org
Cc: Greg Kroah-Hartman
Cc: Kate Stewart
Cc: Philippe Ombredanne
Signed-off-by: Nadav Amit
---
 arch/x86/include/asm/jump_label.h | 65 ++++++++++++++++++-------------
 arch/x86/kernel/macros.S          |  1 +
 2 files changed, 39 insertions(+), 27 deletions(-)

diff --git a/arch/x86/include/asm/jump_label.h b/arch/x86/include/asm/jump_label.h
index 8c0de4282659..ea0633a41122 100644
--- a/arch/x86/include/asm/jump_label.h
+++ b/arch/x86/include/asm/jump_label.h
@@ -2,19 +2,6 @@
 #ifndef _ASM_X86_JUMP_LABEL_H
 #define _ASM_X86_JUMP_LABEL_H
 
-#ifndef HAVE_JUMP_LABEL
-/*
- * For better or for worse, if jump labels (the gcc extension) are missing,
- * then the entire static branch patching infrastructure is compiled out.
- * If that happens, the code in here will malfunction. Raise a compiler
- * error instead.
- *
- * In theory, jump labels and the static branch patching infrastructure
- * could be decoupled to fix this.
- */
-#error asm/jump_label.h included on a non-jump-label kernel
-#endif
-
 #define JUMP_LABEL_NOP_SIZE 5
 
 #ifdef CONFIG_X86_64
@@ -28,18 +15,27 @@
 
 #ifndef __ASSEMBLY__
 
+#ifndef HAVE_JUMP_LABEL
+/*
+ * For better or for worse, if jump labels (the gcc extension) are missing,
+ * then the entire static branch patching infrastructure is compiled out.
+ * If that happens, the code in here will malfunction. Raise a compiler
+ * error instead.
+ *
+ * In theory, jump labels and the static branch patching infrastructure
+ * could be decoupled to fix this.
+ */
+#error asm/jump_label.h included on a non-jump-label kernel
+#endif
+
 #include
 #include
 
 static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
 {
-	asm_volatile_goto("1:"
-		".byte " __stringify(STATIC_KEY_INIT_NOP) "\n\t"
-		".pushsection __jump_table, \"aw\" \n\t"
-		_ASM_ALIGN "\n\t"
-		_ASM_PTR "1b, %l[l_yes], %c0 + %c1 \n\t"
-		".popsection \n\t"
-		: : "i" (key), "i" (branch) : : l_yes);
+	asm_volatile_goto("STATIC_BRANCH_GOTO l_yes=\"%l[l_yes]\" key=\"%c0\" "
+			  "branch=\"%c1\""
+		: : "i" (key), "i" (branch) : : l_yes);
 
 	return false;
 l_yes:
@@ -48,13 +44,8 @@ static __always_inline bool arch_static_branch(struct static_key *key, bool bran
 
 static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch)
 {
-	asm_volatile_goto("1:"
-		".byte 0xe9\n\t .long %l[l_yes] - 2f\n\t"
-		"2:\n\t"
-		".pushsection __jump_table, \"aw\" \n\t"
-		_ASM_ALIGN "\n\t"
-		_ASM_PTR "1b, %l[l_yes], %c0 + %c1 \n\t"
-		".popsection \n\t"
+	asm_volatile_goto("STATIC_BRANCH_JUMP_GOTO l_yes=\"%l[l_yes]\" key=\"%c0\" "
+			  "branch=\"%c1\""
 		: : "i" (key), "i" (branch) : : l_yes);
 
 	return false;
@@ -108,6 +99,26 @@ struct jump_entry {
 	.popsection
 .endm
 
+.macro STATIC_BRANCH_GOTO l_yes:req key:req branch:req
+1:
+	.byte STATIC_KEY_INIT_NOP
+	.pushsection __jump_table, "aw"
+	_ASM_ALIGN
+	_ASM_PTR 1b, \l_yes, \key + \branch
+	.popsection
+.endm
+
+.macro STATIC_BRANCH_JUMP_GOTO l_yes:req key:req branch:req
+1:
+	.byte 0xe9
+	.long \l_yes - 2f
+2:
+	.pushsection __jump_table, "aw"
+	_ASM_ALIGN
+	_ASM_PTR 1b, \l_yes, \key + \branch
+	.popsection
+.endm
+
 #endif /* __ASSEMBLY__ */
 
 #endif
diff --git a/arch/x86/kernel/macros.S b/arch/x86/kernel/macros.S
index bf8b9c93e255..161c95059044 100644
--- a/arch/x86/kernel/macros.S
+++ b/arch/x86/kernel/macros.S
@@ -13,3 +13,4 @@
 #include
 #include
 #include
+#include
-- 
2.17.0