From: Vitaly Kuznetsov <vkuznets@redhat.com>
To: Tianyu Lan
Cc: pbonzini@redhat.com, rkrcmar@redhat.com, tglx@linutronix.de,
    mingo@redhat.com, hpa@zytor.com, x86@kernel.org,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org, KY Srinivasan
Subject: Re: [RFC Patch 3/3] KVM/x86: Add tlb_remote_flush callback support for vmcs
References: <20180604090749.489-1-Tianyu.Lan@microsoft.com> <20180604090749.489-4-Tianyu.Lan@microsoft.com>
Date: Tue, 12 Jun 2018 17:12:07 +0200
In-Reply-To: <20180604090749.489-4-Tianyu.Lan@microsoft.com> (Tianyu Lan's message of "Mon, 4 Jun 2018 09:08:34 +0000")
Message-ID:
<87h8m8qbso.fsf@vitty.brq.redhat.com>

Tianyu Lan writes:

> Register tlb_remote_flush callback for vmcs when hyperv capability of
> nested guest mapping flush is detected. The interface can help to reduce
> overhead when flush ept table among vcpus for nested VM. The tradition way
> is to send IPIs to all affected vcpus and executes INVEPT on each vcpus.
> It will trigger several vmexits for IPI and INVEPT emulation. Hyperv
> provides such hypercall to do flush for all vcpus.
>
> Signed-off-by: Lan Tianyu
> ---
>  arch/x86/kvm/vmx.c | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index e50beb76d846..6cb241c05690 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -4737,6 +4737,17 @@ static inline void __vmx_flush_tlb(struct kvm_vcpu *vcpu, int vpid,
>  	}
>  }
>
> +static int vmx_remote_flush_tlb(struct kvm *kvm)
> +{
> +	struct kvm_vcpu *vcpu = kvm_get_vcpu(kvm, 0);
> +
> +	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
> +		return -1;

Why vcpu0? Can arch.mmu.root_hpa-s differ across vCPUs? What happens if
they do?

> +
> +	return hyperv_flush_guest_mapping(construct_eptp(vcpu,
> +				vcpu->arch.mmu.root_hpa));
> +}

The 'vmx_remote_flush_tlb' name looks generic enough but it is actually
Hyper-V-specific.
I'd suggest renaming it to something like hv_remote_flush_tlb().

> +
>  static void vmx_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa)
>  {
>  	__vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid, invalidate_gpa);
> @@ -7495,6 +7506,10 @@ static __init int hardware_setup(void)
>  	if (enable_ept && !cpu_has_vmx_ept_2m_page())
>  		kvm_disable_largepages();
>
> +	if (ms_hyperv.nested_features & HV_X64_NESTED_GUSET_MAPPING_FLUSH
> +	    && enable_ept)
> +		kvm_x86_ops->tlb_remote_flush = vmx_remote_flush_tlb;
> +
>  	if (!cpu_has_vmx_ple()) {
>  		ple_gap = 0;
>  		ple_window = 0;

--
Vitaly