Subject: Re: [PATCH v2] KVM: x86/pmu: Reduce counter period change overhead and delay the effective time
From: Like Xu
To: pbonzini@redhat.com
Cc: ehankland@google.com, jmattson@google.com, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, wanpengli@tencent.com
References: <20200317075315.70933-1-like.xu@linux.intel.com>
 <20200317081458.88714-1-like.xu@linux.intel.com>
 <1528e1b4-3dee-161b-9463-57471263b5a8@linux.intel.com>
 <6a57b701-99a2-3917-3879-bc8141dca9d4@linux.intel.com>
In-Reply-To: <6a57b701-99a2-3917-3879-bc8141dca9d4@linux.intel.com>
Organization: Intel OTC
Date: Thu, 16 Apr 2020 22:41:57 +0800
X-Mailing-List: linux-kernel@vger.kernel.org

Ping.

On 2020/4/8 22:04, Like Xu wrote:
> Hi Paolo,
>
> Could you please take a look at this patch?
> If there is anything that needs to be improved, please let me know.
>
> Thanks,
> Like Xu
>
> On 2020/3/26 20:47, Like Xu wrote:
>> Could anyone help review this change?
>>
>> Thanks,
>> Like Xu
>>
>> On 2020/3/17 16:14, Like Xu wrote:
>>> The cost of perf_event_period() is unstable, and when the guest samples
>>> multiple events, the overhead increases dramatically (5378 ns on E5-2699).
>>>
>>> For a non-running counter, the new period takes effect only when its
>>> corresponding enable bit is set, so calling perf_event_period() in
>>> advance is superfluous. For a running counter, it is safe to delay the
>>> effective time until the KVM_REQ_PMU event is handled. If there are
>>> multiple perf_event_period() calls before KVM_REQ_PMU is handled,
>>> this helps reduce the total cost.
>>>
>>> Signed-off-by: Like Xu
>>> ---
>>>  arch/x86/kvm/pmu.c           | 11 -----------
>>>  arch/x86/kvm/pmu.h           | 11 +++++++++++
>>>  arch/x86/kvm/vmx/pmu_intel.c | 10 ++++------
>>>  3 files changed, 15 insertions(+), 17 deletions(-)
>>>
>>> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
>>> index d1f8ca57d354..527a8bb85080 100644
>>> --- a/arch/x86/kvm/pmu.c
>>> +++ b/arch/x86/kvm/pmu.c
>>> @@ -437,17 +437,6 @@ void kvm_pmu_init(struct kvm_vcpu *vcpu)
>>>  	kvm_pmu_refresh(vcpu);
>>>  }
>>>
>>> -static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc)
>>> -{
>>> -	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
>>> -
>>> -	if (pmc_is_fixed(pmc))
>>> -		return fixed_ctrl_field(pmu->fixed_ctr_ctrl,
>>> -			pmc->idx - INTEL_PMC_IDX_FIXED) & 0x3;
>>> -
>>> -	return pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE;
>>> -}
>>> -
>>>  /* Release perf_events for vPMCs that have been unused for a full time slice.  */
>>>  void kvm_pmu_cleanup(struct kvm_vcpu *vcpu)
>>>  {
>>> diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
>>> index d7da2b9e0755..cd112e825d2c 100644
>>> --- a/arch/x86/kvm/pmu.h
>>> +++ b/arch/x86/kvm/pmu.h
>>> @@ -138,6 +138,17 @@ static inline u64 get_sample_period(struct kvm_pmc *pmc, u64 counter_value)
>>>  	return sample_period;
>>>  }
>>>
>>> +static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc)
>>> +{
>>> +	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
>>> +
>>> +	if (pmc_is_fixed(pmc))
>>> +		return fixed_ctrl_field(pmu->fixed_ctr_ctrl,
>>> +			pmc->idx - INTEL_PMC_IDX_FIXED) & 0x3;
>>> +
>>> +	return pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE;
>>> +}
>>> +
>>>  void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel);
>>>  void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int fixed_idx);
>>>  void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx);
>>> diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
>>> index 7c857737b438..20f654a0c09b 100644
>>> --- a/arch/x86/kvm/vmx/pmu_intel.c
>>> +++ b/arch/x86/kvm/vmx/pmu_intel.c
>>> @@ -263,15 +263,13 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>>>  			if (!msr_info->host_initiated)
>>>  				data = (s64)(s32)data;
>>>  			pmc->counter += data - pmc_read_counter(pmc);
>>> -			if (pmc->perf_event)
>>> -				perf_event_period(pmc->perf_event,
>>> -						  get_sample_period(pmc, data));
>>> +			if (pmc_speculative_in_use(pmc))
>>> +				kvm_make_request(KVM_REQ_PMU, vcpu);
>>>  			return 0;
>>>  		} else if ((pmc = get_fixed_pmc(pmu, msr))) {
>>>  			pmc->counter += data - pmc_read_counter(pmc);
>>> -			if (pmc->perf_event)
>>> -				perf_event_period(pmc->perf_event,
>>> -						  get_sample_period(pmc, data));
>>> +			if (pmc_speculative_in_use(pmc))
>>> +				kvm_make_request(KVM_REQ_PMU, vcpu);
>>>  			return 0;
>>>  		} else if ((pmc = get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0))) {
>>>  			if (data == pmc->eventsel)
>>>
>>
>