From: Yang Weijiang <weijiang.yang@intel.com>
To: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com,
	vkuznets@redhat.com, wei.w.wang@intel.com, like.xu.linux@gmail.com,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Yang Weijiang <weijiang.yang@intel.com>
Subject: [PATCH v6 06/12] KVM: x86/pmu: Refactor code to support guest Arch LBR
Date: Fri, 16 Jul 2021 16:50:00 +0800
Message-Id: <1626425406-18582-7-git-send-email-weijiang.yang@intel.com>
In-Reply-To: <1626425406-18582-1-git-send-email-weijiang.yang@intel.com>
References: <1626425406-18582-1-git-send-email-weijiang.yang@intel.com>
Take Arch LBR into account when doing the sanity checks before programming
the vPMU for the guest, and pass the Arch LBR recording MSRs through to the
guest for better performance. Note, Arch LBR and legacy LBR support are
mutually exclusive, i.e., they're never both available on one platform.

Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Yang Weijiang
---
 arch/x86/kvm/vmx/pmu_intel.c | 37 +++++++++++++++++++++++++++++-------
 arch/x86/kvm/vmx/vmx.c       |  3 +++
 2 files changed, 33 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index b2631fea5e6c..35bcdc5357ee 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -176,12 +176,16 @@ static inline struct kvm_pmc *get_fw_gp_pmc(struct kvm_pmu *pmu, u32 msr)
 
 bool intel_pmu_lbr_is_compatible(struct kvm_vcpu *vcpu)
 {
+	if (kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR))
+		return guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR);
+
 	/*
 	 * As a first step, a guest could only enable LBR feature if its
 	 * cpu model is the same as the host because the LBR registers
 	 * would be pass-through to the guest and they're model specific.
 	 */
-	return boot_cpu_data.x86_model == guest_cpuid_model(vcpu);
+	return !boot_cpu_has(X86_FEATURE_ARCH_LBR) &&
+		boot_cpu_data.x86_model == guest_cpuid_model(vcpu);
 }
 
 bool intel_pmu_lbr_is_enabled(struct kvm_vcpu *vcpu)
@@ -199,12 +203,19 @@ static bool intel_pmu_is_valid_lbr_msr(struct kvm_vcpu *vcpu, u32 index)
 	if (!intel_pmu_lbr_is_enabled(vcpu))
 		return ret;
 
-	ret = (index == MSR_LBR_SELECT) || (index == MSR_LBR_TOS) ||
-		(index >= records->from && index < records->from + records->nr) ||
-		(index >= records->to && index < records->to + records->nr);
+	if (!guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR))
+		ret = (index == MSR_LBR_SELECT) || (index == MSR_LBR_TOS);
+
+	if (!ret) {
+		ret = (index >= records->from &&
+		       index < records->from + records->nr) ||
+		      (index >= records->to &&
+		       index < records->to + records->nr);
+	}
 
 	if (!ret && records->info)
-		ret = (index >= records->info && index < records->info + records->nr);
+		ret = (index >= records->info &&
+		       index < records->info + records->nr);
 
 	return ret;
 }
@@ -706,6 +717,9 @@ static void vmx_update_intercept_for_lbr_msrs(struct kvm_vcpu *vcpu, bool set)
 			vmx_set_intercept_for_msr(vcpu, lbr->info + i, MSR_TYPE_RW, set);
 	}
 
+	if (guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR))
+		return;
+
 	vmx_set_intercept_for_msr(vcpu, MSR_LBR_SELECT, MSR_TYPE_RW, set);
 	vmx_set_intercept_for_msr(vcpu, MSR_LBR_TOS, MSR_TYPE_RW, set);
 }
@@ -746,10 +760,13 @@ void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct lbr_desc *lbr_desc = vcpu_to_lbr_desc(vcpu);
+	bool lbr_enable = guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR) ?
+		(vmcs_read64(GUEST_IA32_LBR_CTL) & ARCH_LBR_CTL_LBREN) :
+		(vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR);
 
 	if (!lbr_desc->event) {
 		vmx_disable_lbr_msrs_passthrough(vcpu);
-		if (vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR)
+		if (lbr_enable)
 			goto warn;
 		if (test_bit(INTEL_PMC_IDX_FIXED_VLBR, pmu->pmc_in_use))
 			goto warn;
@@ -766,13 +783,19 @@ void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu)
 	return;
 
 warn:
+	if (kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR))
+		wrmsrl(MSR_ARCH_LBR_DEPTH, lbr_desc->records.nr);
 	pr_warn_ratelimited("kvm: vcpu-%d: fail to passthrough LBR.\n",
 		vcpu->vcpu_id);
 }
 
 static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
 {
-	if (!(vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR))
+	bool lbr_enable = guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR) ?
+		(vmcs_read64(GUEST_IA32_LBR_CTL) & ARCH_LBR_CTL_LBREN) :
+		(vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR);
+
+	if (!lbr_enable)
 		intel_pmu_release_guest_lbr_event(vcpu);
 }
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 1a79ac1757af..11d15f11ff17 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -671,6 +671,9 @@ static bool is_valid_passthrough_msr(u32 msr)
 	case MSR_LBR_NHM_TO ... MSR_LBR_NHM_TO + 31:
 	case MSR_LBR_CORE_FROM ... MSR_LBR_CORE_FROM + 8:
 	case MSR_LBR_CORE_TO ... MSR_LBR_CORE_TO + 8:
+	case MSR_ARCH_LBR_FROM_0 ... MSR_ARCH_LBR_FROM_0 + 31:
+	case MSR_ARCH_LBR_TO_0 ... MSR_ARCH_LBR_TO_0 + 31:
+	case MSR_ARCH_LBR_INFO_0 ... MSR_ARCH_LBR_INFO_0 + 31:
 		/* LBR MSRs. These are handled in vmx_update_intercept_for_lbr_msrs() */
 		return true;
 	}
-- 
2.21.1
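For readers following along: the common pattern in vmx_passthrough_lbr_msrs() and intel_pmu_cleanup() is that the "LBR enabled" bit lives in a different guest MSR depending on the LBR flavor — IA32_LBR_CTL.LBREN for Arch LBR versus IA32_DEBUGCTL.LBR for legacy LBR (both happen to be bit 0). A standalone userspace sketch of just that selection logic, with illustrative macro values rather than the kernel's headers:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative bit positions; the authoritative values are in the kernel's
 * arch/x86 headers and the Intel SDM. Both enable bits happen to be bit 0
 * of their respective MSRs. */
#define DEBUGCTLMSR_LBR    (1ULL << 0)	/* IA32_DEBUGCTL.LBR     */
#define ARCH_LBR_CTL_LBREN (1ULL << 0)	/* IA32_LBR_CTL.LBREN    */

/*
 * Mirror of the patch's ternary: on an Arch LBR guest, consult the
 * IA32_LBR_CTL shadow; on a legacy LBR guest, consult IA32_DEBUGCTL.
 * In the real code the two values come from vmcs_read64().
 */
bool guest_lbr_enabled(bool has_arch_lbr, uint64_t lbr_ctl, uint64_t debugctl)
{
	return has_arch_lbr ? (lbr_ctl & ARCH_LBR_CTL_LBREN)
			    : (debugctl & DEBUGCTLMSR_LBR);
}
```

This is why the patch hoists the check into a `lbr_enable` local instead of reading GUEST_IA32_DEBUGCTL unconditionally: a legacy-style check would always see Arch LBR guests as "LBR disabled" and release their LBR event prematurely.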