From: Julian Stecklina
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Julian Stecklina, js@alien8.de, linux-kernel@vger.kernel.org
Subject: [PATCH 4/4] kvm, vmx: remove manually coded vmx instructions
Date: Wed, 24 Oct 2018 10:28:59 +0200
X-Mailer: git-send-email 2.7.4
In-Reply-To: <09986c98c9655f1542768ecfda644ac821e67a57.1540369608.git.jsteckli@amazon.de>
References: <09986c98c9655f1542768ecfda644ac821e67a57.1540369608.git.jsteckli@amazon.de>

So far the VMX code relied on manually assembled VMX instructions. This
was apparently done to ensure compatibility with old binutils. VMX
instructions were introduced with binutils 2.19 and the kernel currently
requires binutils 2.20.

Remove the manually assembled versions and replace them with the proper
inline assembly. This improves code generation (and source code
readability). According to the bloat-o-meter this change removes ~1300
bytes from the text segment.

Signed-off-by: Julian Stecklina
Reviewed-by: Jan H. Schönherr
Reviewed-by: Konrad Jan Miller
Reviewed-by: Razvan-Alin Ghitulete
---
 arch/x86/include/asm/virtext.h |  2 +-
 arch/x86/include/asm/vmx.h     | 13 -------------
 arch/x86/kvm/vmx.c             | 39 ++++++++++++++++++---------------------
 3 files changed, 19 insertions(+), 35 deletions(-)

diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
index 0116b2e..c5395b3 100644
--- a/arch/x86/include/asm/virtext.h
+++ b/arch/x86/include/asm/virtext.h
@@ -40,7 +40,7 @@ static inline int cpu_has_vmx(void)
  */
 static inline void cpu_vmxoff(void)
 {
-	asm volatile (ASM_VMX_VMXOFF : : : "cc");
+	asm volatile ("vmxoff" : : : "cc");
 
 	cr4_clear_bits(X86_CR4_VMXE);
 }
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 9527ba5..ade0f15 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -503,19 +503,6 @@ enum vmcs_field {
 
 #define VMX_EPT_IDENTITY_PAGETABLE_ADDR	0xfffbc000ul
 
-
-#define ASM_VMX_VMCLEAR_RAX	".byte 0x66, 0x0f, 0xc7, 0x30"
-#define ASM_VMX_VMLAUNCH	".byte 0x0f, 0x01, 0xc2"
-#define ASM_VMX_VMRESUME	".byte 0x0f, 0x01, 0xc3"
-#define ASM_VMX_VMPTRLD_RAX	".byte 0x0f, 0xc7, 0x30"
-#define ASM_VMX_VMREAD_RDX_RAX	".byte 0x0f, 0x78, 0xd0"
-#define ASM_VMX_VMWRITE_RAX_RDX	".byte 0x0f, 0x79, 0xd0"
-#define ASM_VMX_VMWRITE_RSP_RDX	".byte 0x0f, 0x79, 0xd4"
-#define ASM_VMX_VMXOFF		".byte 0x0f, 0x01, 0xc4"
-#define ASM_VMX_VMXON_RAX	".byte 0xf3, 0x0f, 0xc7, 0x30"
-#define ASM_VMX_INVEPT		".byte 0x66, 0x0f, 0x38, 0x80, 0x08"
-#define ASM_VMX_INVVPID		".byte 0x66, 0x0f, 0x38, 0x81, 0x08"
-
 struct vmx_msr_entry {
 	u32 index;
 	u32 reserved;
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 82cfb909..bbbdccb 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -2077,7 +2077,7 @@ static int __find_msr_index(struct vcpu_vmx *vmx, u32 msr)
 	return -1;
 }
 
-static inline void __invvpid(int ext, u16 vpid, gva_t gva)
+static inline void __invvpid(long ext, u16 vpid, gva_t gva)
 {
 	struct {
 		u64 vpid : 16;
@@ -2086,21 +2086,21 @@ static inline void __invvpid(int ext, u16 vpid, gva_t gva)
 	} operand = { vpid, 0, gva };
 	bool error;
 
-	asm volatile (__ex(ASM_VMX_INVVPID) CC_SET(na)
-		      : CC_OUT(na) (error) : "a"(&operand), "c"(ext)
+	asm volatile ("invvpid %1, %2" CC_SET(na)
+		      : CC_OUT(na) (error) : "m"(operand), "r"(ext)
 		      : "memory");
 	BUG_ON(error);
 }
 
-static inline void __invept(int ext, u64 eptp, gpa_t gpa)
+static inline void __invept(long ext, u64 eptp, gpa_t gpa)
 {
 	struct {
 		u64 eptp, gpa;
 	} operand = {eptp, gpa};
 	bool error;
 
-	asm volatile (__ex(ASM_VMX_INVEPT) CC_SET(na)
-		      : CC_OUT(na) (error) : "a" (&operand), "c" (ext)
+	asm volatile ("invept %1, %2" CC_SET(na)
+		      : CC_OUT(na) (error) : "m" (operand), "r" (ext)
 		      : "memory");
 	BUG_ON(error);
 }
@@ -2120,8 +2120,8 @@ static void vmcs_clear(struct vmcs *vmcs)
 	u64 phys_addr = __pa(vmcs);
 	bool error;
 
-	asm volatile (__ex(ASM_VMX_VMCLEAR_RAX) CC_SET(na)
-		      : CC_OUT(na) (error) : "a"(&phys_addr), "m"(phys_addr)
+	asm volatile ("vmclear %1" CC_SET(na)
+		      : CC_OUT(na) (error) : "m"(phys_addr)
 		      : "memory");
 	if (unlikely(error))
 		printk(KERN_ERR "kvm: vmclear fail: %p/%llx\n",
@@ -2145,8 +2145,8 @@ static void vmcs_load(struct vmcs *vmcs)
 	if (static_branch_unlikely(&enable_evmcs))
 		return evmcs_load(phys_addr);
 
-	asm volatile (__ex(ASM_VMX_VMPTRLD_RAX) CC_SET(na)
-		      : CC_OUT(na) (error) : "a"(&phys_addr), "m"(phys_addr)
+	asm volatile ("vmptrld %1" CC_SET(na)
+		      : CC_OUT(na) (error) : "m"(phys_addr)
 		      : "memory");
 	if (unlikely(error))
 		printk(KERN_ERR "kvm: vmptrld %p/%llx failed\n",
@@ -2323,8 +2323,7 @@ static __always_inline unsigned long __vmcs_readl(unsigned long field)
 {
 	unsigned long value;
 
-	asm volatile (__ex_clear(ASM_VMX_VMREAD_RDX_RAX, "%0")
-		      : "=a"(value) : "d"(field) : "cc");
+	asm volatile ("vmread %1, %0" : "=rm"(value) : "r"(field) : "cc");
 	return value;
 }
 
@@ -2375,8 +2374,8 @@ static __always_inline void __vmcs_writel(unsigned long field, unsigned long val
 {
 	bool error;
 
-	asm volatile (__ex(ASM_VMX_VMWRITE_RAX_RDX) CC_SET(na)
-		      : CC_OUT(na) (error) : "a"(value), "d"(field));
+	asm volatile ("vmwrite %1, %2" CC_SET(na)
+		      : CC_OUT(na) (error) : "rm"(value), "r"(field));
 	if (unlikely(error))
 		vmwrite_error(field, value);
 }
@@ -4397,9 +4396,7 @@ static void kvm_cpu_vmxon(u64 addr)
 	cr4_set_bits(X86_CR4_VMXE);
 	intel_pt_handle_vmx(1);
 
-	asm volatile (ASM_VMX_VMXON_RAX
-			: : "a"(&addr), "m"(addr)
-			: "memory", "cc");
+	asm volatile ("vmxon %0" : : "m"(addr) : "memory", "cc");
 }
 
 static int hardware_enable(void)
@@ -4468,7 +4465,7 @@ static void vmclear_local_loaded_vmcss(void)
  */
 static void kvm_cpu_vmxoff(void)
 {
-	asm volatile (__ex(ASM_VMX_VMXOFF) : : : "cc");
+	asm volatile ("vmxoff" : : : "cc");
 
 	intel_pt_handle_vmx(0);
 	cr4_clear_bits(X86_CR4_VMXE);
@@ -10748,7 +10745,7 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		"mov %%" _ASM_SP ", (%%" _ASM_SI ") \n\t"
 		"jmp 1f \n\t"
 		"2: \n\t"
-		__ex(ASM_VMX_VMWRITE_RSP_RDX) "\n\t"
+		"vmwrite %%" _ASM_SP ", %%" _ASM_DX "\n\t"
 		"1: \n\t"
 		/* Check if vmlaunch of vmresume is needed */
 		"cmpl $0, %c[launched](%0) \n\t"
@@ -10773,9 +10770,9 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 
 		/* Enter guest mode */
 		"jne 1f \n\t"
-		__ex(ASM_VMX_VMLAUNCH) "\n\t"
+		"vmlaunch \n\t"
 		"jmp 2f \n\t"
-		"1: " __ex(ASM_VMX_VMRESUME) "\n\t"
+		"1: vmresume \n\t"
 		"2: "
 		/* Save guest registers, load host registers, keep flags */
 		"mov %0, %c[wordsize](%%" _ASM_SP ") \n\t"
-- 
2.7.4