From: Zhu Lingshan
To: peterz@infradead.org, pbonzini@redhat.com
Cc: bp@alien8.de, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com,
    jmattson@google.com, joro@8bytes.org, kan.liang@linux.intel.com, ak@linux.intel.com,
    wei.w.wang@intel.com, eranian@google.com, liuxiangdong5@huawei.com,
    linux-kernel@vger.kernel.org, x86@kernel.org, kvm@vger.kernel.org,
    like.xu.linux@gmail.com, boris.ostrvsky@oracle.com, Like Xu, Luwei Kang, Zhu Lingshan
Subject: [PATCH V8 12/18] KVM: x86/pmu: Add PEBS_DATA_CFG MSR emulation to support adaptive PEBS
Date: Fri, 16 Jul 2021 16:53:19 +0800
Message-Id: <20210716085325.10300-13-lingshan.zhu@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210716085325.10300-1-lingshan.zhu@intel.com>
References: <20210716085325.10300-1-lingshan.zhu@intel.com>

From: Like Xu

If IA32_PERF_CAPABILITIES.PEBS_BASELINE [bit 14] is set, adaptive PEBS
is supported. The PEBS_DATA_CFG MSR and the adaptive record enable bits
(IA32_PERFEVTSELx.Adaptive_Record and
IA32_FIXED_CTR_CTRL.FCx_Adaptive_Record) are also supported.

Adaptive PEBS provides software the capability to configure the PEBS
records to capture only the data of interest, keeping the record size
compact. An overflow of PMCx results in generation of an adaptive PEBS
record with state information based on the selections specified in
MSR_PEBS_DATA_CFG. By default, the record contains only the Basic group.

When guest adaptive PEBS is enabled, the guest PEBS_DATA_CFG MSR will be
added to the perf_guest_switch_msr() list and switched during the VMX
transitions just like the CORE_PERF_GLOBAL_CTRL MSR.
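For illustration only (a user-space sketch, not part of the patch; the
PEBS_CFG_* constant names are local to this example): the ~0xff00000full
mask programmed in intel_pmu_refresh() below leaves exactly the adaptive
record-group enable bits (Memory Info, GPRs, XMMs, LBRs in bits 3:0) and
the LBR entry count field in bits 31:24 writable, and the guard in
intel_pmu_set_msr() accepts a guest write only if no other bit is set.

/* Illustrative sketch only: constant names are local to this example. */
#include <stdio.h>
#include <stdint.h>

#define PEBS_CFG_MEMINFO   (1ULL << 0)   /* Memory Info group */
#define PEBS_CFG_GP        (1ULL << 1)   /* GPRs group */
#define PEBS_CFG_XMMS      (1ULL << 2)   /* XMMs group */
#define PEBS_CFG_LBRS      (1ULL << 3)   /* LBRs group */
#define PEBS_CFG_LBR_SHIFT 24            /* bits 31:24: number of LBR entries */

/*
 * All bits the guest may set, i.e. the complement of the
 * pebs_data_cfg_mask (~0xff00000full) set in intel_pmu_refresh().
 */
#define PEBS_CFG_VALID_BITS 0xff00000fULL

/* Mirrors the check in intel_pmu_set_msr(): reject unknown bits. */
static int pebs_data_cfg_is_valid(uint64_t data)
{
        return (data & ~PEBS_CFG_VALID_BITS) == 0;
}

int main(void)
{
        /* GPRs + LBRs with 16 LBR entries: accepted. */
        uint64_t ok  = PEBS_CFG_GP | PEBS_CFG_LBRS | (16ULL << PEBS_CFG_LBR_SHIFT);
        /* A reserved bit set: the write would be refused (KVM returns 1). */
        uint64_t bad = ok | (1ULL << 12);

        printf("0x%llx -> %s\n", (unsigned long long)ok,
               pebs_data_cfg_is_valid(ok) ? "accepted" : "rejected");
        printf("0x%llx -> %s\n", (unsigned long long)bad,
               pebs_data_cfg_is_valid(bad) ? "accepted" : "rejected");
        return 0;
}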
Co-developed-by: Luwei Kang
Signed-off-by: Luwei Kang
Signed-off-by: Like Xu
Signed-off-by: Zhu Lingshan
---
 arch/x86/events/intel/core.c    |  8 ++++++++
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/vmx/pmu_intel.c    | 16 ++++++++++++++++
 3 files changed, 26 insertions(+)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index b9825d7caaba..71622bf4c4dd 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3958,6 +3958,14 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr, void *data)
                 .guest = kvm_pmu->ds_area,
         };
 
+        if (x86_pmu.intel_cap.pebs_baseline) {
+                arr[(*nr)++] = (struct perf_guest_switch_msr){
+                        .msr = MSR_PEBS_DATA_CFG,
+                        .host = cpuc->pebs_data_cfg,
+                        .guest = kvm_pmu->pebs_data_cfg,
+                };
+        }
+
         arr[*nr] = (struct perf_guest_switch_msr){
                 .msr = MSR_IA32_PEBS_ENABLE,
                 .host = cpuc->pebs_enabled & ~cpuc->intel_ctrl_guest_mask,
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 35f106f9f124..0fc1fef1af70 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -508,6 +508,8 @@ struct kvm_pmu {
         u64 ds_area;
         u64 pebs_enable;
         u64 pebs_enable_mask;
+        u64 pebs_data_cfg;
+        u64 pebs_data_cfg_mask;
 
         /*
          * The gate to release perf_events not marked in
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 5584b8dfadb3..58f32a55cc2e 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -226,6 +226,9 @@ static bool intel_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
         case MSR_IA32_DS_AREA:
                 ret = guest_cpuid_has(vcpu, X86_FEATURE_DS);
                 break;
+        case MSR_PEBS_DATA_CFG:
+                ret = vcpu->arch.perf_capabilities & PERF_CAP_PEBS_BASELINE;
+                break;
         default:
                 ret = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0) ||
                         get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0) ||
@@ -379,6 +382,9 @@ static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
         case MSR_IA32_DS_AREA:
                 msr_info->data = pmu->ds_area;
                 return 0;
+        case MSR_PEBS_DATA_CFG:
+                msr_info->data = pmu->pebs_data_cfg;
+                return 0;
         default:
                 if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
                     (pmc = get_gp_pmc(pmu, msr, MSR_IA32_PMC0))) {
@@ -452,6 +458,14 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
                         return 1;
                 pmu->ds_area = data;
                 return 0;
+        case MSR_PEBS_DATA_CFG:
+                if (pmu->pebs_data_cfg == data)
+                        return 0;
+                if (!(data & pmu->pebs_data_cfg_mask)) {
+                        pmu->pebs_data_cfg = data;
+                        return 0;
+                }
+                break;
         default:
                 if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
                     (pmc = get_gp_pmc(pmu, msr, MSR_IA32_PMC0))) {
@@ -505,6 +519,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
         pmu->reserved_bits = 0xffffffff00200000ull;
         pmu->fixed_ctr_ctrl_mask = ~0ull;
         pmu->pebs_enable_mask = ~0ull;
+        pmu->pebs_data_cfg_mask = ~0ull;
 
         entry = kvm_find_cpuid_entry(vcpu, 0xa, 0);
         if (!entry)
@@ -580,6 +595,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
                         pmu->fixed_ctr_ctrl_mask &=
                                 ~(1ULL << (INTEL_PMC_IDX_FIXED + i * 4));
                 }
+                pmu->pebs_data_cfg_mask = ~0xff00000full;
         } else {
                 pmu->pebs_enable_mask =
                         ~((1ull << pmu->nr_arch_gp_counters) - 1);
--
2.27.0
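To illustrate what "switched during the VMX transitions" means, here is a
minimal user-space sketch (not KVM code: wrmsrl() is stubbed, the struct is
a simplified copy of perf_guest_switch_msr, and the values are examples
only). Each entry returned by perf_guest_get_msrs() pairs a host value and
a guest value for one MSR; the guest values are loaded before VM-entry and
the host values restored after VM-exit, and with PEBS_BASELINE the list now
also carries MSR_PEBS_DATA_CFG.

/* User-space sketch only: simplified structures and a stubbed MSR write. */
#include <stdio.h>
#include <stdint.h>

#define MSR_IA32_PEBS_ENABLE 0x3f1
#define MSR_PEBS_DATA_CFG    0x3f2

struct guest_switch_msr {
        uint32_t msr;
        uint64_t host, guest;
};

/* Stand-in for the real MSR write. */
static void wrmsrl(uint32_t msr, uint64_t val)
{
        printf("wrmsr 0x%x <- 0x%llx\n", msr, (unsigned long long)val);
}

/* Load guest values before VM-entry, restore host values after VM-exit. */
static void switch_msrs(const struct guest_switch_msr *arr, int nr, int to_guest)
{
        for (int i = 0; i < nr; i++)
                wrmsrl(arr[i].msr, to_guest ? arr[i].guest : arr[i].host);
}

int main(void)
{
        /* Example host/guest pairs for the two PEBS MSRs. */
        struct guest_switch_msr arr[] = {
                { MSR_PEBS_DATA_CFG,    0x0, 0x3 },
                { MSR_IA32_PEBS_ENABLE, 0x0, 0x1 },
        };
        int nr = sizeof(arr) / sizeof(arr[0]);

        switch_msrs(arr, nr, 1);   /* VM-entry: program guest values */
        switch_msrs(arr, nr, 0);   /* VM-exit:  restore host values  */
        return 0;
}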