Subject: Re: [PATCH 07/10] KVM/nVMX: Use kvm_vcpu_map when mapping the posted interrupt descriptor table
To: KarimAllah Ahmed, x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: hpa@zytor.com, jmattson@google.com, mingo@redhat.com, rkrcmar@redhat.com, tglx@linutronix.de
References: <1519235241-6500-1-git-send-email-karahmed@amazon.de> <1519235241-6500-8-git-send-email-karahmed@amazon.de>
From: Paolo Bonzini
Message-ID: <5b3e7e52-6353-3fe9-f5e4-ecb1d8e2ac6f@redhat.com>
Date: Thu, 12 Apr 2018 16:39:16 +0200
In-Reply-To: <1519235241-6500-8-git-send-email-karahmed@amazon.de>
X-Mailing-List: linux-kernel@vger.kernel.org

On 21/02/2018 18:47, KarimAllah Ahmed wrote:
> ... since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest
> memory that has a "struct page".
>
> The life-cycle of the mapping also changes to avoid doing map and unmap on
> every single exit (which becomes very expensive once we use memremap). Now
> the memory is mapped and only unmapped when a new VMCS12 is loaded into the
> vCPU (or when the vCPU is freed!).
>
> Signed-off-by: KarimAllah Ahmed

Same here, let's change the lifecycle separately.
Paolo

> ---
>  arch/x86/kvm/vmx.c | 45 +++++++++++++--------------------------------
>  1 file changed, 13 insertions(+), 32 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index a700338..7b29419 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -461,7 +461,7 @@ struct nested_vmx {
>  	 */
>  	struct page *apic_access_page;
>  	struct kvm_host_map virtual_apic_map;
> -	struct page *pi_desc_page;
> +	struct kvm_host_map pi_desc_map;
>  	struct kvm_host_map msr_bitmap_map;
>
>  	struct pi_desc *pi_desc;
> @@ -7666,6 +7666,7 @@ static inline void nested_release_vmcs12(struct vcpu_vmx *vmx)
>  		       vmx->nested.cached_vmcs12, 0, VMCS12_SIZE);
>
>  	kvm_vcpu_unmap(&vmx->nested.virtual_apic_map);
> +	kvm_vcpu_unmap(&vmx->nested.pi_desc_map);
>  	kvm_vcpu_unmap(&vmx->nested.msr_bitmap_map);
>
>  	vmx->nested.current_vmptr = -1ull;
> @@ -7698,14 +7699,9 @@ static void free_nested(struct vcpu_vmx *vmx)
>  		vmx->nested.apic_access_page = NULL;
>  	}
>  	kvm_vcpu_unmap(&vmx->nested.virtual_apic_map);
> -	if (vmx->nested.pi_desc_page) {
> -		kunmap(vmx->nested.pi_desc_page);
> -		kvm_release_page_dirty(vmx->nested.pi_desc_page);
> -		vmx->nested.pi_desc_page = NULL;
> -		vmx->nested.pi_desc = NULL;
> -	}
> -
> +	kvm_vcpu_unmap(&vmx->nested.pi_desc_map);
>  	kvm_vcpu_unmap(&vmx->nested.msr_bitmap_map);
> +	vmx->nested.pi_desc = NULL;
>
>  	free_loaded_vmcs(&vmx->nested.vmcs02);
>  }
> @@ -10278,24 +10274,16 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
>  	}
>
>  	if (nested_cpu_has_posted_intr(vmcs12)) {
> -		if (vmx->nested.pi_desc_page) { /* shouldn't happen */
> -			kunmap(vmx->nested.pi_desc_page);
> -			kvm_release_page_dirty(vmx->nested.pi_desc_page);
> -			vmx->nested.pi_desc_page = NULL;
> +		map = &vmx->nested.pi_desc_map;
> +
> +		if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->posted_intr_desc_addr), map)) {
> +			vmx->nested.pi_desc =
> +				(struct pi_desc *)(((void *)map->kaddr) +
> +				offset_in_page(vmcs12->posted_intr_desc_addr));
> +			vmcs_write64(POSTED_INTR_DESC_ADDR,
> +				     pfn_to_hpa(map->pfn) +
> +				     offset_in_page(vmcs12->posted_intr_desc_addr));
>  		}
> -		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->posted_intr_desc_addr);
> -		if (is_error_page(page))
> -			return;
> -		vmx->nested.pi_desc_page = page;
> -		vmx->nested.pi_desc = kmap(vmx->nested.pi_desc_page);
> -		vmx->nested.pi_desc =
> -			(struct pi_desc *)((void *)vmx->nested.pi_desc +
> -			(unsigned long)(vmcs12->posted_intr_desc_addr &
> -			(PAGE_SIZE - 1)));
> -		vmcs_write64(POSTED_INTR_DESC_ADDR,
> -			page_to_phys(vmx->nested.pi_desc_page) +
> -			(unsigned long)(vmcs12->posted_intr_desc_addr &
> -			(PAGE_SIZE - 1)));
> +
>  	}
>  	if (nested_vmx_prepare_msr_bitmap(vcpu, vmcs12))
>  		vmcs_set_bits(CPU_BASED_VM_EXEC_CONTROL,
> @@ -11893,13 +11881,6 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
>  		kvm_release_page_dirty(vmx->nested.apic_access_page);
>  		vmx->nested.apic_access_page = NULL;
>  	}
> -	if (vmx->nested.pi_desc_page) {
> -		kunmap(vmx->nested.pi_desc_page);
> -		kvm_release_page_dirty(vmx->nested.pi_desc_page);
> -		vmx->nested.pi_desc_page = NULL;
> -		vmx->nested.pi_desc = NULL;
> -	}
> -
>  	/*
>  	 * We are now running in L2, mmu_notifier will force to reload the
>  	 * page's hpa for L2 vmcs. Need to reload it for L1 before entering L1.
>