From: Like Xu
To: Paolo Bonzini, Jim Mattson, Eric Hankland, Wanpeng Li
Cc: Sean Christopherson, Vitaly Kuznetsov, Joerg Roedel,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Like Xu
Subject: [PATCH] kvm/x86: Reduce counter period change overhead and delay the effective time
Date: Tue, 17 Mar 2020 15:53:15 +0800
Message-Id: <20200317075315.70933-1-like.xu@linux.intel.com>

The cost of perf_event_period() is unstable, and when the guest samples
multiple events the overhead increases dramatically (5378 ns on an
E5-2699).

For a non-running counter, the new period takes effect only once its
corresponding enable bit is set, so calling perf_event_period() in
advance is superfluous. For a running counter, it is safe to delay the
effective time until the KVM_REQ_PMU request is handled. If there are
multiple perf_event_period() calls before KVM_REQ_PMU is handled, this
helps reduce the total cost.

Signed-off-by: Like Xu <like.xu@linux.intel.com>
---
 arch/x86/kvm/pmu.c           | 11 -----------
 arch/x86/kvm/pmu.h           | 11 +++++++++++
 arch/x86/kvm/vmx/pmu_intel.c | 10 ++++------
 3 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index d1f8ca57d354..527a8bb85080 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -437,17 +437,6 @@ void kvm_pmu_init(struct kvm_vcpu *vcpu)
 	kvm_pmu_refresh(vcpu);
 }
 
-static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc)
-{
-	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
-
-	if (pmc_is_fixed(pmc))
-		return fixed_ctrl_field(pmu->fixed_ctr_ctrl,
-			pmc->idx - INTEL_PMC_IDX_FIXED) & 0x3;
-
-	return pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE;
-}
-
 /* Release perf_events for vPMCs that have been unused for a full time slice. */
 void kvm_pmu_cleanup(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index d7da2b9e0755..cd112e825d2c 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -138,6 +138,17 @@ static inline u64 get_sample_period(struct kvm_pmc *pmc, u64 counter_value)
 	return sample_period;
 }
 
+static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc)
+{
+	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
+
+	if (pmc_is_fixed(pmc))
+		return fixed_ctrl_field(pmu->fixed_ctr_ctrl,
+			pmc->idx - INTEL_PMC_IDX_FIXED) & 0x3;
+
+	return pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE;
+}
+
 void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel);
 void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int fixed_idx);
 void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx);
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 7c857737b438..4e689273eb05 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -263,15 +263,13 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (!msr_info->host_initiated)
 			data = (s64)(s32)data;
 		pmc->counter += data - pmc_read_counter(pmc);
-		if (pmc->perf_event)
-			perf_event_period(pmc->perf_event,
-					  get_sample_period(pmc, data));
+		if (pmc_speculative_in_use(pmc))
+			kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
 		return 0;
 	} else if ((pmc = get_fixed_pmc(pmu, msr))) {
 		pmc->counter += data - pmc_read_counter(pmc);
-		if (pmc->perf_event)
-			perf_event_period(pmc->perf_event,
-					  get_sample_period(pmc, data));
+		if (pmc_speculative_in_use(pmc))
+			kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
 		return 0;
 	} else if ((pmc = get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0))) {
 		if (data == pmc->eventsel)
-- 
2.21.1
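
P.S. For readers less familiar with KVM's deferred-request machinery, here
is a stand-alone C sketch of the pattern this patch relies on: the MSR-write
path only records state and raises a request flag, and the expensive
reprogramming runs once when the request is handled, so back-to-back period
writes are coalesced. This is a simplified model under my own naming, not
kernel code: counter_state, write_counter_msr() and handle_pmu_request()
are hypothetical stand-ins for struct kvm_pmc, intel_pmu_set_msr() and the
KVM_REQ_PMU handler.

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for struct kvm_pmc plus its pending request. */
struct counter_state {
	uint64_t counter;	/* last value written by the guest */
	bool enabled;		/* models the counter's enable bit */
	bool request_pending;	/* models a pending KVM_REQ_PMU */
	int reprogram_calls;	/* counts expensive reprogram operations */
};

/* Models the expensive path (perf_event_period() in the real code). */
static void reprogram(struct counter_state *c)
{
	c->reprogram_calls++;
	printf("reprogram with counter=%" PRIu64 "\n", c->counter);
}

/*
 * MSR-write path after the patch: cheap. Record the value; if the
 * counter is in use (cf. pmc_speculative_in_use()), raise the request
 * (cf. kvm_make_request()). A disabled counter simply picks up the new
 * period when it is enabled, so nothing more is needed here.
 */
static void write_counter_msr(struct counter_state *c, uint64_t data)
{
	c->counter = data;
	if (c->enabled)
		c->request_pending = true;
}

/* Request handler: one reprogram, however many writes preceded it. */
static void handle_pmu_request(struct counter_state *c)
{
	if (!c->request_pending)
		return;
	c->request_pending = false;
	reprogram(c);
}

int main(void)
{
	struct counter_state c = { .enabled = true };

	/* The guest rewrites the counter period three times in a row... */
	write_counter_msr(&c, 1000);
	write_counter_msr(&c, 2000);
	write_counter_msr(&c, 3000);

	/* ...but the deferred handler reprograms exactly once. */
	handle_pmu_request(&c);
	printf("reprogram_calls = %d\n", c.reprogram_calls); /* prints 1 */
	return 0;
}

The point of the pattern is that the cost moves off the hot MSR-write path
into a single point before re-entering the guest, which is also where the
real KVM_REQ_PMU handler runs.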