From: Vitaly Kuznetsov
To: Vineeth Pillai, Paolo Bonzini, kvm@vger.kernel.org
Cc: "H. Peter Anvin", Thomas Gleixner, Ingo Molnar, Borislav Petkov,
        "K. Y. Srinivasan", x86@kernel.org, kvm@vger.kernel.org,
        linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org,
        Lan Tianyu, Michael Kelley, Sean Christopherson, Tom Lendacky,
        Wanpeng Li, Jim Mattson, Joerg Roedel, Wei Liu, Stephen Hemminger,
        Haiyang Zhang, Maxim Levitsky
Subject: Re: [PATCH v5 7/7] KVM: SVM: hyper-v: Direct Virtual Flush support
In-Reply-To:
References:
Date: Mon, 14 Jun 2021 13:34:48 +0200
Message-ID: <878s3c65nr.fsf@vitty.brq.redhat.com>

Vineeth Pillai writes:

> From Hyper-V TLFS:
> "The hypervisor exposes hypercalls (HvFlushVirtualAddressSpace,
> HvFlushVirtualAddressSpaceEx, HvFlushVirtualAddressList, and
> HvFlushVirtualAddressListEx) that allow operating systems to more
> efficiently manage the virtual TLB. The L1 hypervisor can choose to
> allow its guest to use those hypercalls and delegate the responsibility
> to handle them to the L0 hypervisor. This requires the use of a
> partition assist page."
>
> Add the Direct Virtual Flush support for SVM.
>
> Related VMX changes:
> commit 6f6a657c9998 ("KVM/Hyper-V/VMX: Add direct tlb flush support")
>
> Signed-off-by: Vineeth Pillai
> ---
>  arch/x86/kvm/Makefile           |  4 ++++
>  arch/x86/kvm/svm/svm.c          |  2 ++
>  arch/x86/kvm/svm/svm_onhyperv.c | 41 +++++++++++++++++++++++++++++++++
>  arch/x86/kvm/svm/svm_onhyperv.h | 36 +++++++++++++++++++++++++++++
>  4 files changed, 83 insertions(+)
>  create mode 100644 arch/x86/kvm/svm/svm_onhyperv.c
>
> diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
> index a06745c2fef1..83331376b779 100644
> --- a/arch/x86/kvm/Makefile
> +++ b/arch/x86/kvm/Makefile
> @@ -32,6 +32,10 @@ kvm-intel-$(CONFIG_X86_SGX_KVM) += vmx/sgx.o
>  
>  kvm-amd-y += svm/svm.o svm/vmenter.o svm/pmu.o svm/nested.o svm/avic.o svm/sev.o
>  
> +ifdef CONFIG_HYPERV
> +kvm-amd-y += svm/svm_onhyperv.o
> +endif
> +
>  obj-$(CONFIG_KVM) += kvm.o
>  obj-$(CONFIG_KVM_INTEL) += kvm-intel.o
>  obj-$(CONFIG_KVM_AMD) += kvm-amd.o
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index d2a625411059..5139cb6baadc 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -3779,6 +3779,8 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
>  	}
>  	svm->vmcb->save.cr2 = vcpu->arch.cr2;
>  
> +	svm_hv_update_vp_id(svm->vmcb, vcpu);
> +
>  	/*
>  	 * Run with all-zero DR6 unless needed, so that we can get the exact cause
>  	 * of a #DB.
> diff --git a/arch/x86/kvm/svm/svm_onhyperv.c b/arch/x86/kvm/svm/svm_onhyperv.c
> new file mode 100644
> index 000000000000..3281856ebd94
> --- /dev/null
> +++ b/arch/x86/kvm/svm/svm_onhyperv.c
> @@ -0,0 +1,41 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * KVM L1 hypervisor optimizations on Hyper-V for SVM.
> + */
> +
> +#include <linux/kvm_host.h>
> +#include "kvm_cache_regs.h"
> +
> +#include <asm/mshyperv.h>
> +
> +#include "svm.h"
> +#include "svm_ops.h"
> +
> +#include "hyperv.h"
> +#include "kvm_onhyperv.h"
> +#include "svm_onhyperv.h"
> +
> +int hv_enable_direct_tlbflush(struct kvm_vcpu *vcpu)
> +{
> +	struct hv_enlightenments *hve;
> +	struct hv_partition_assist_pg **p_hv_pa_pg =
> +			&to_kvm_hv(vcpu->kvm)->hv_pa_pg;
> +
> +	if (!*p_hv_pa_pg)
> +		*p_hv_pa_pg = kzalloc(PAGE_SIZE, GFP_KERNEL);
> +
> +	if (!*p_hv_pa_pg)
> +		return -ENOMEM;
> +
> +	hve = (struct hv_enlightenments *)to_svm(vcpu)->vmcb->control.reserved_sw;
> +
> +	hve->partition_assist_page = __pa(*p_hv_pa_pg);
> +	hve->hv_vm_id = (unsigned long)vcpu->kvm;
> +	if (!hve->hv_enlightenments_control.nested_flush_hypercall) {
> +		hve->hv_enlightenments_control.nested_flush_hypercall = 1;
> +		vmcb_mark_dirty(to_svm(vcpu)->vmcb, VMCB_HV_NESTED_ENLIGHTENMENTS);
> +	}
> +
> +	return 0;
> +}
> +
> diff --git a/arch/x86/kvm/svm/svm_onhyperv.h b/arch/x86/kvm/svm/svm_onhyperv.h
> index 0f262460b2e6..7487052fcef8 100644
> --- a/arch/x86/kvm/svm/svm_onhyperv.h
> +++ b/arch/x86/kvm/svm/svm_onhyperv.h
> @@ -36,6 +36,8 @@ struct hv_enlightenments {
>   */
>  #define VMCB_HV_NESTED_ENLIGHTENMENTS VMCB_SW
>  
> +int hv_enable_direct_tlbflush(struct kvm_vcpu *vcpu);
> +
>  static inline void svm_hv_init_vmcb(struct vmcb *vmcb)
>  {
>  	struct hv_enlightenments *hve =
> @@ -55,6 +57,23 @@ static inline void svm_hv_hardware_setup(void)
>  		svm_x86_ops.tlb_remote_flush_with_range =
>  			hv_remote_flush_tlb_with_range;
>  	}
> +
> +	if (ms_hyperv.nested_features & HV_X64_NESTED_DIRECT_FLUSH) {
> +		int cpu;
> +
> +		pr_info("kvm: Hyper-V Direct TLB Flush enabled\n");
> +		for_each_online_cpu(cpu) {
> +			struct hv_vp_assist_page *vp_ap =
> +				hv_get_vp_assist_page(cpu);
> +
> +			if (!vp_ap)
> +				continue;
> +
> +			vp_ap->nested_control.features.directhypercall = 1;
> +		}
> +		svm_x86_ops.enable_direct_tlbflush =
> +			hv_enable_direct_tlbflush;
> +	}
>  }
>  
>  static inline void svm_hv_vmcb_dirty_nested_enlightenments(
> @@ -74,6 +93,18 @@ static inline void svm_hv_vmcb_dirty_nested_enlightenments(
>  	    hve->hv_enlightenments_control.msr_bitmap)
>  		vmcb_mark_dirty(vmcb, VMCB_HV_NESTED_ENLIGHTENMENTS);
>  }
> +
> +static inline void svm_hv_update_vp_id(struct vmcb *vmcb,
> +				       struct kvm_vcpu *vcpu)
> +{
> +	struct hv_enlightenments *hve =
> +		(struct hv_enlightenments *)vmcb->control.reserved_sw;
> +
> +	if (hve->hv_vp_id != to_hv_vcpu(vcpu)->vp_index) {
> +		hve->hv_vp_id = to_hv_vcpu(vcpu)->vp_index;
> +		vmcb_mark_dirty(vmcb, VMCB_HV_NESTED_ENLIGHTENMENTS);
> +	}

This blows up in testing when no Hyper-V context was created on a vCPU,
e.g. when running KVM selftests: to_hv_vcpu(vcpu) is NULL when no
Hyper-V emulation features were requested on the vCPU, but
svm_hv_update_vp_id() is called unconditionally from svm_vcpu_run().

I'll be sending a patch to fix the immediate issue, but I was wondering
why we need to call svm_hv_update_vp_id() from svm_vcpu_run() at all:
the VP index is unlikely to change, so we could probably just call it
from kvm_hv_set_msr() instead.

> +}
>  #else
>  
>  static inline void svm_hv_init_vmcb(struct vmcb *vmcb)
> @@ -88,6 +119,11 @@ static inline void svm_hv_vmcb_dirty_nested_enlightenments(
>  	struct kvm_vcpu *vcpu)
>  {
>  }
> +
> +static inline void svm_hv_update_vp_id(struct vmcb *vmcb,
> +				       struct kvm_vcpu *vcpu)
> +{
> +}
>  #endif /* CONFIG_HYPERV */
>  
>  #endif /* __ARCH_X86_KVM_SVM_ONHYPERV_H__ */

-- 
Vitaly
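For illustration, the guard Vitaly describes could look roughly like the
sketch below. This is not the actual fix he refers to (that patch is not
part of this message); it is a minimal sketch that simply bails out of
svm_hv_update_vp_id() when the vCPU has no Hyper-V context, using only
helpers already visible in the quoted patch (to_hv_vcpu(),
vmcb_mark_dirty(), VMCB_HV_NESTED_ENLIGHTENMENTS):

static inline void svm_hv_update_vp_id(struct vmcb *vmcb,
				       struct kvm_vcpu *vcpu)
{
	struct hv_enlightenments *hve =
		(struct hv_enlightenments *)vmcb->control.reserved_sw;
	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);

	/* No Hyper-V context on this vCPU (e.g. KVM selftests): skip. */
	if (!hv_vcpu)
		return;

	if (hve->hv_vp_id != hv_vcpu->vp_index) {
		hve->hv_vp_id = hv_vcpu->vp_index;
		vmcb_mark_dirty(vmcb, VMCB_HV_NESTED_ENLIGHTENMENTS);
	}
}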