From: KarimAllah Ahmed <karahmed@amazon.de>
To: x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: hpa@zytor.com, jmattson@google.com, mingo@redhat.com, pbonzini@redhat.com,
    rkrcmar@redhat.com, tglx@linutronix.de, KarimAllah Ahmed <karahmed@amazon.de>
Subject: [PATCH 07/10] KVM/nVMX: Use kvm_vcpu_map when mapping the posted interrupt descriptor table
Date: Wed, 21 Feb 2018 18:47:18 +0100
Message-Id: <1519235241-6500-8-git-send-email-karahmed@amazon.de>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1519235241-6500-1-git-send-email-karahmed@amazon.de>
References: <1519235241-6500-1-git-send-email-karahmed@amazon.de>

... since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest
memory that has a "struct page".

The life-cycle of the mapping also changes to avoid doing a map and unmap on
every single exit (which becomes very expensive once we use memremap). Now
the memory is mapped once and only unmapped when a new VMCS12 is loaded into
the vCPU (or when the vCPU is freed!).
Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
---
 arch/x86/kvm/vmx.c | 45 +++++++++++++--------------------------------
 1 file changed, 13 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index a700338..7b29419 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -461,7 +461,7 @@ struct nested_vmx {
 	 */
 	struct page *apic_access_page;
 	struct kvm_host_map virtual_apic_map;
-	struct page *pi_desc_page;
+	struct kvm_host_map pi_desc_map;
 	struct kvm_host_map msr_bitmap_map;
 	struct pi_desc *pi_desc;

@@ -7666,6 +7666,7 @@ static inline void nested_release_vmcs12(struct vcpu_vmx *vmx)
 		       vmx->nested.cached_vmcs12, 0, VMCS12_SIZE);

 		kvm_vcpu_unmap(&vmx->nested.virtual_apic_map);
+		kvm_vcpu_unmap(&vmx->nested.pi_desc_map);
 		kvm_vcpu_unmap(&vmx->nested.msr_bitmap_map);

 		vmx->nested.current_vmptr = -1ull;
@@ -7698,14 +7699,9 @@ static void free_nested(struct vcpu_vmx *vmx)
 		vmx->nested.apic_access_page = NULL;
 	}
 	kvm_vcpu_unmap(&vmx->nested.virtual_apic_map);
-	if (vmx->nested.pi_desc_page) {
-		kunmap(vmx->nested.pi_desc_page);
-		kvm_release_page_dirty(vmx->nested.pi_desc_page);
-		vmx->nested.pi_desc_page = NULL;
-		vmx->nested.pi_desc = NULL;
-	}
-
+	kvm_vcpu_unmap(&vmx->nested.pi_desc_map);
 	kvm_vcpu_unmap(&vmx->nested.msr_bitmap_map);
+	vmx->nested.pi_desc = NULL;

 	free_loaded_vmcs(&vmx->nested.vmcs02);
 }
@@ -10278,24 +10274,16 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
 	}

 	if (nested_cpu_has_posted_intr(vmcs12)) {
-		if (vmx->nested.pi_desc_page) { /* shouldn't happen */
-			kunmap(vmx->nested.pi_desc_page);
-			kvm_release_page_dirty(vmx->nested.pi_desc_page);
-			vmx->nested.pi_desc_page = NULL;
+		map = &vmx->nested.pi_desc_map;
+
+		if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->posted_intr_desc_addr), map)) {
+			vmx->nested.pi_desc =
+				(struct pi_desc *)(((void *)map->kaddr) +
+				offset_in_page(vmcs12->posted_intr_desc_addr));
+			vmcs_write64(POSTED_INTR_DESC_ADDR, pfn_to_hpa(map->pfn) +
+				offset_in_page(vmcs12->posted_intr_desc_addr));
 		}
-		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->posted_intr_desc_addr);
-		if (is_error_page(page))
-			return;
-		vmx->nested.pi_desc_page = page;
-		vmx->nested.pi_desc = kmap(vmx->nested.pi_desc_page);
-		vmx->nested.pi_desc =
-			(struct pi_desc *)((void *)vmx->nested.pi_desc +
-			(unsigned long)(vmcs12->posted_intr_desc_addr &
-			(PAGE_SIZE - 1)));
-		vmcs_write64(POSTED_INTR_DESC_ADDR,
-			page_to_phys(vmx->nested.pi_desc_page) +
-			(unsigned long)(vmcs12->posted_intr_desc_addr &
-			(PAGE_SIZE - 1)));
+	}

 	if (nested_vmx_prepare_msr_bitmap(vcpu, vmcs12))
 		vmcs_set_bits(CPU_BASED_VM_EXEC_CONTROL,
@@ -11893,13 +11881,6 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
 		kvm_release_page_dirty(vmx->nested.apic_access_page);
 		vmx->nested.apic_access_page = NULL;
 	}
-	if (vmx->nested.pi_desc_page) {
-		kunmap(vmx->nested.pi_desc_page);
-		kvm_release_page_dirty(vmx->nested.pi_desc_page);
-		vmx->nested.pi_desc_page = NULL;
-		vmx->nested.pi_desc = NULL;
-	}
-
 	/*
 	 * We are now running in L2, mmu_notifier will force to reload the
 	 * page's hpa for L2 vmcs. Need to reload it for L1 before entering L1.
-- 
2.7.4