From: Nadav Amit <namit@vmware.com>
To: linux-kernel@vger.kernel.org
Cc: Nadav Amit, Juergen Gross, Alok Kataria, Thomas Gleixner, Ingo Molnar, "H.
Peter Anvin"
Subject: [PATCH v3 6/9] x86: prevent inline distortion by paravirt ops
Date: Sun, 10 Jun 2018 07:19:08 -0700
Message-ID: <20180610141911.52948-7-namit@vmware.com>
In-Reply-To: <20180610141911.52948-1-namit@vmware.com>
References: <20180610141911.52948-1-namit@vmware.com>

GCC considers the number of statements in inlined assembly blocks,
counted by new-lines and semicolons, as an indication of the cost of
the block in time and space. This data is distorted by the kernel code,
which puts information in alternative sections. As a result, the
compiler may perform incorrect inlining and branch optimizations.

The solution is to set an assembly macro and call it from the inlined
assembly block. As a result GCC considers the inline assembly block as
a single instruction.

The effect of the patch is more aggressive inlining, which also causes
a size increase of the kernel.

      text     data     bss       dec     hex  filename
  18147336 10226688 2957312  31331336 1de1408  ./vmlinux before
  18162555 10226288 2957312  31346155 1de4deb  ./vmlinux after (+14819)

Static text symbols:
Before: 40053
After:  39942   (-111)

Cc: Juergen Gross
Cc: Alok Kataria
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H.
Peter Anvin"
Cc: x86@kernel.org
Cc: virtualization@lists.linux-foundation.org
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/include/asm/paravirt_types.h | 54 +++++++++++++++------------
 arch/x86/kernel/macros.S              |  1 +
 2 files changed, 31 insertions(+), 24 deletions(-)

diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 180bc0bff0fb..2a9c53f64f1a 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -347,19 +347,15 @@ extern struct pv_lock_ops pv_lock_ops;
  * Generate some code, and mark it as patchable by the
  * apply_paravirt() alternate instruction patcher.
  */
-#define _paravirt_alt(insn_string, type, clobber)	\
-	"771:\n\t" insn_string "\n" "772:\n"		\
-	".pushsection .parainstructions,\"a\"\n"	\
-	_ASM_ALIGN "\n"					\
-	_ASM_PTR " 771b\n"				\
-	" .byte " type "\n"				\
-	" .byte 772b-771b\n"				\
-	" .short " clobber "\n"				\
-	".popsection\n"
+#define _paravirt_alt(type, clobber, pv_opptr)		\
+	"PARAVIRT_ALT type=" __stringify(type)		\
+	" clobber=" __stringify(clobber)		\
+	" pv_opptr=" __stringify(pv_opptr) "\n\t"

 /* Generate patchable code, with the default asm parameters. */
-#define paravirt_alt(insn_string)					\
-	_paravirt_alt(insn_string, "%c[paravirt_typenum]", "%c[paravirt_clobber]")
+#define paravirt_alt						\
+	_paravirt_alt("%c[paravirt_typenum]", "%c[paravirt_clobber]",	\
+		      "%c[paravirt_opptr]")

 /* Simple instruction patching code. */
 #define NATIVE_LABEL(a,x,b) "\n\t.globl " a #x "_" #b "\n" a #x "_" #b ":\n\t"
@@ -387,16 +383,6 @@ unsigned native_patch(u8 type, u16 clobbers, void *ibuf,

 int paravirt_disable_iospace(void);

-/*
- * This generates an indirect call based on the operation type number.
- * The type number, computed in PARAVIRT_PATCH, is derived from the
- * offset into the paravirt_patch_template structure, and can therefore be
- */
-#define PARAVIRT_CALL					\
-	ANNOTATE_RETPOLINE_SAFE				\
-	"call *%c[paravirt_opptr];"
-
 /*
  * These macros are intended to wrap calls through one of the paravirt
  * ops structs, so that they can be later identified and patched at
@@ -534,7 +520,7 @@ int paravirt_disable_iospace(void);
 		/* since this condition will never hold */		\
 		if (sizeof(rettype) > sizeof(unsigned long)) {		\
 			asm volatile(pre				\
-				     paravirt_alt(PARAVIRT_CALL)	\
+				     paravirt_alt			\
 				     post				\
 				     : call_clbr, ASM_CALL_CONSTRAINT	\
 				     : paravirt_type(op),		\
@@ -544,7 +530,7 @@ int paravirt_disable_iospace(void);
 			__ret = (rettype)((((u64)__edx) << 32) | __eax); \
 		} else {						\
 			asm volatile(pre				\
-				     paravirt_alt(PARAVIRT_CALL)	\
+				     paravirt_alt			\
 				     post				\
 				     : call_clbr, ASM_CALL_CONSTRAINT	\
 				     : paravirt_type(op),		\
@@ -571,7 +557,7 @@ int paravirt_disable_iospace(void);
 		PVOP_VCALL_ARGS;					\
 		PVOP_TEST_NULL(op);					\
 		asm volatile(pre					\
-			     paravirt_alt(PARAVIRT_CALL)		\
+			     paravirt_alt				\
 			     post					\
 			     : call_clbr, ASM_CALL_CONSTRAINT		\
 			     : paravirt_type(op),			\
@@ -691,6 +677,26 @@ struct paravirt_patch_site {
 extern struct paravirt_patch_site __parainstructions[],
 	__parainstructions_end[];

+#else	/* __ASSEMBLY__ */
+
+/*
+ * This generates an indirect call based on the operation type number.
+ * The type number, computed in PARAVIRT_PATCH, is derived from the
+ * offset into the paravirt_patch_template structure, and can therefore be
+ * freely converted back into a structure offset.
+ */
+.macro PARAVIRT_ALT type:req clobber:req pv_opptr:req
+771:	ANNOTATE_RETPOLINE_SAFE
+	call *\pv_opptr
+772:	.pushsection .parainstructions,"a"
+	_ASM_ALIGN
+	_ASM_PTR 771b
+	.byte \type
+	.byte 772b-771b
+	.short \clobber
+	.popsection
+.endm
+
 #endif	/* __ASSEMBLY__ */

 #endif	/* _ASM_X86_PARAVIRT_TYPES_H */
diff --git a/arch/x86/kernel/macros.S b/arch/x86/kernel/macros.S
index 66ccb8e823b1..71d8b716b111 100644
--- a/arch/x86/kernel/macros.S
+++ b/arch/x86/kernel/macros.S
@@ -10,3 +10,4 @@
 #include
 #include
 #include
+#include
--
2.17.0