From: Jim Mattson
Date: Thu, 6 May 2021 16:06:19 -0700
Subject: Re: [PATCH v6 08/14] KVM/nVMX: Use kvm_vcpu_map when mapping the posted interrupt descriptor table
To: KarimAllah Ahmed
Cc: the arch/x86 maintainers, kvm list, LKML, Paolo Bonzini
References: <1548966284-28642-1-git-send-email-karahmed@amazon.de> <1548966284-28642-9-git-send-email-karahmed@amazon.de>
In-Reply-To: <1548966284-28642-9-git-send-email-karahmed@amazon.de>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Jan 31, 2019 at 12:28 PM KarimAllah Ahmed wrote:
>
> Use kvm_vcpu_map when mapping the posted interrupt descriptor table since
> using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory
> that has a "struct page".
>
> One additional semantic change is that the virtual host mapping lifecycle
> has changed a bit. It now has the same lifetime of the pinning of the
> interrupt descriptor table page on the host side.
>
> Signed-off-by: KarimAllah Ahmed
> Reviewed-by: Konrad Rzeszutek Wilk
> ---
> v4 -> v5:
> - unmap with dirty flag
>
> v1 -> v2:
> - Do not change the lifecycle of the mapping (pbonzini)
> ---
>  arch/x86/kvm/vmx/nested.c | 43 ++++++++++++-------------------------------
>  arch/x86/kvm/vmx/vmx.h    |  2 +-
>  2 files changed, 13 insertions(+), 32 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 31b352c..53b1063 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -230,12 +230,8 @@ static void free_nested(struct kvm_vcpu *vcpu)
>  		vmx->nested.apic_access_page = NULL;
>  	}
>  	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
> -	if (vmx->nested.pi_desc_page) {
> -		kunmap(vmx->nested.pi_desc_page);
> -		kvm_release_page_dirty(vmx->nested.pi_desc_page);
> -		vmx->nested.pi_desc_page = NULL;
> -		vmx->nested.pi_desc = NULL;
> -	}
> +	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
> +	vmx->nested.pi_desc = NULL;
>
>  	kvm_mmu_free_roots(vcpu, &vcpu->arch.guest_mmu, KVM_MMU_ROOTS_ALL);
>
> @@ -2868,26 +2864,15 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
>  	}
>
>  	if (nested_cpu_has_posted_intr(vmcs12)) {
> -		if (vmx->nested.pi_desc_page) { /* shouldn't happen */
> -			kunmap(vmx->nested.pi_desc_page);
> -			kvm_release_page_dirty(vmx->nested.pi_desc_page);
> -			vmx->nested.pi_desc_page = NULL;
> -			vmx->nested.pi_desc = NULL;
> -			vmcs_write64(POSTED_INTR_DESC_ADDR, -1ull);
> +		map = &vmx->nested.pi_desc_map;
> +
> +		if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->posted_intr_desc_addr), map)) {
> +			vmx->nested.pi_desc =
> +				(struct pi_desc *)(((void *)map->hva) +
> +				offset_in_page(vmcs12->posted_intr_desc_addr));
> +			vmcs_write64(POSTED_INTR_DESC_ADDR,
> +				     pfn_to_hpa(map->pfn) + offset_in_page(vmcs12->posted_intr_desc_addr));
>  		}

Previously, if there was no backing page for the vmcs12->posted_intr_desc_addr, we wrote an illegal value (-1ull) into the vmcs02 POSTED_INTR_DESC_ADDR field to force VM-entry failure. Now, AFAICT, we leave that field unmodified. For a newly constructed vmcs02, doesn't that mean we're going to treat physical address 0 as the address of the vmcs02 posted interrupt descriptor?

> -		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->posted_intr_desc_addr);
> -		if (is_error_page(page))
> -			return;
> -		vmx->nested.pi_desc_page = page;
> -		vmx->nested.pi_desc = kmap(vmx->nested.pi_desc_page);
> -		vmx->nested.pi_desc =
> -			(struct pi_desc *)((void *)vmx->nested.pi_desc +
> -			(unsigned long)(vmcs12->posted_intr_desc_addr &
> -			(PAGE_SIZE - 1)));
> -		vmcs_write64(POSTED_INTR_DESC_ADDR,
> -			     page_to_phys(vmx->nested.pi_desc_page) +
> -			     (unsigned long)(vmcs12->posted_intr_desc_addr &
> -			     (PAGE_SIZE - 1)));
>  	}
>  	if (nested_vmx_prepare_msr_bitmap(vcpu, vmcs12))
>  		vmcs_set_bits(CPU_BASED_VM_EXEC_CONTROL,
> @@ -3911,12 +3896,8 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
>  		vmx->nested.apic_access_page = NULL;
>  	}
>  	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
> -	if (vmx->nested.pi_desc_page) {
> -		kunmap(vmx->nested.pi_desc_page);
> -		kvm_release_page_dirty(vmx->nested.pi_desc_page);
> -		vmx->nested.pi_desc_page = NULL;
> -		vmx->nested.pi_desc = NULL;
> -	}
> +	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
> +	vmx->nested.pi_desc = NULL;
>
>  	/*
>  	 * We are now running in L2, mmu_notifier will force to reload the
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index f618f52..bd04725 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -143,7 +143,7 @@ struct nested_vmx {
>  	 */
>  	struct page *apic_access_page;
>  	struct kvm_host_map virtual_apic_map;
> -	struct page *pi_desc_page;
> +	struct kvm_host_map pi_desc_map;
>
>  	struct kvm_host_map msr_bitmap_map;
>
> --
> 2.7.4
>