From: Like Xu
To: pbonzini@redhat.com, like.xu@linux.intel.com
Cc: ehankland@google.com, jmattson@google.com, joro@8bytes.org,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    sean.j.christopherson@intel.com, vkuznets@redhat.com,
    wanpengli@tencent.com
Subject: [PATCH v2] KVM: x86/pmu: Reduce counter period change overhead and delay the effective time
Date: Tue, 17 Mar 2020 16:14:58 +0800
Message-Id: <20200317081458.88714-1-like.xu@linux.intel.com>
In-Reply-To: <20200317075315.70933-1-like.xu@linux.intel.com>
References: <20200317075315.70933-1-like.xu@linux.intel.com>

The cost of perf_event_period() is unstable, and when the guest samples
multiple events the overhead increases dramatically (5378 ns on an
E5-2699).

For a non-running counter, the new period does not take effect until its
corresponding enable bit is set, so calling perf_event_period() in
advance is superfluous. For a running counter, it is safe to delay the
new period until the KVM_REQ_PMU event is handled; if multiple period
changes arrive before KVM_REQ_PMU is handled, only one reprogramming
pass is needed, which reduces the total cost.

Signed-off-by: Like Xu <like.xu@linux.intel.com>
---
 arch/x86/kvm/pmu.c           | 11 -----------
 arch/x86/kvm/pmu.h           | 11 +++++++++++
 arch/x86/kvm/vmx/pmu_intel.c | 10 ++++------
 3 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index d1f8ca57d354..527a8bb85080 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -437,17 +437,6 @@ void kvm_pmu_init(struct kvm_vcpu *vcpu)
 	kvm_pmu_refresh(vcpu);
 }
 
-static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc)
-{
-	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
-
-	if (pmc_is_fixed(pmc))
-		return fixed_ctrl_field(pmu->fixed_ctr_ctrl,
-			pmc->idx - INTEL_PMC_IDX_FIXED) & 0x3;
-
-	return pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE;
-}
-
 /*
  * Release perf_events for vPMCs that have been unused for a full time
  * slice.
  */
 void kvm_pmu_cleanup(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index d7da2b9e0755..cd112e825d2c 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -138,6 +138,17 @@ static inline u64 get_sample_period(struct kvm_pmc *pmc, u64 counter_value)
 	return sample_period;
 }
 
+static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc)
+{
+	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
+
+	if (pmc_is_fixed(pmc))
+		return fixed_ctrl_field(pmu->fixed_ctr_ctrl,
+			pmc->idx - INTEL_PMC_IDX_FIXED) & 0x3;
+
+	return pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE;
+}
+
 void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel);
 void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int fixed_idx);
 void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx);
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 7c857737b438..20f654a0c09b 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -263,15 +263,13 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			if (!msr_info->host_initiated)
 				data = (s64)(s32)data;
 			pmc->counter += data - pmc_read_counter(pmc);
-			if (pmc->perf_event)
-				perf_event_period(pmc->perf_event,
-						  get_sample_period(pmc, data));
+			if (pmc_speculative_in_use(pmc))
+				kvm_make_request(KVM_REQ_PMU, vcpu);
 			return 0;
 		} else if ((pmc = get_fixed_pmc(pmu, msr))) {
 			pmc->counter += data - pmc_read_counter(pmc);
-			if (pmc->perf_event)
-				perf_event_period(pmc->perf_event,
-						  get_sample_period(pmc, data));
+			if (pmc_speculative_in_use(pmc))
+				kvm_make_request(KVM_REQ_PMU, vcpu);
 			return 0;
 		} else if ((pmc = get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0))) {
 			if (data == pmc->eventsel)
-- 
2.21.1