From: Zhu Lingshan
To: peterz@infradead.org, pbonzini@redhat.com
Cc: bp@alien8.de, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com,
	jmattson@google.com, joro@8bytes.org, kan.liang@linux.intel.com,
	ak@linux.intel.com, wei.w.wang@intel.com, eranian@google.com,
	liuxiangdong5@huawei.com, linux-kernel@vger.kernel.org, x86@kernel.org,
	kvm@vger.kernel.org, like.xu.linux@gmail.com, boris.ostrvsky@oracle.com,
	Like Xu, Zhu Lingshan
Subject: [PATCH V10 14/18] KVM: x86/pmu: Move pmc_speculative_in_use() to arch/x86/kvm/pmu.h
Date: Fri, 6 Aug 2021 21:37:58 +0800
Message-Id: <20210806133802.3528-15-lingshan.zhu@intel.com>
In-Reply-To: <20210806133802.3528-1-lingshan.zhu@intel.com>
References: <20210806133802.3528-1-lingshan.zhu@intel.com>

From: Like Xu

It allows this inline function to be reused by more callers in more files, such as pmu_intel.c.

Signed-off-by: Like Xu
Signed-off-by: Zhu Lingshan
Acked-by: Peter Zijlstra (Intel)
---
 arch/x86/kvm/pmu.c | 11 -----------
 arch/x86/kvm/pmu.h | 11 +++++++++++
 2 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index b907aba35ff3..d957c1e83ec9 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -481,17 +481,6 @@ void kvm_pmu_init(struct kvm_vcpu *vcpu)
 	kvm_pmu_refresh(vcpu);
 }
 
-static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc)
-{
-	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
-
-	if (pmc_is_fixed(pmc))
-		return fixed_ctrl_field(pmu->fixed_ctr_ctrl,
-					pmc->idx - INTEL_PMC_IDX_FIXED) & 0x3;
-
-	return pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE;
-}
-
 /* Release perf_events for vPMCs that have been unused for a full time slice.
  */
 void kvm_pmu_cleanup(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 1af86ae1d3f2..5795bb113e76 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -149,6 +149,17 @@ static inline u64 get_sample_period(struct kvm_pmc *pmc, u64 counter_value)
 	return sample_period;
 }
 
+static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc)
+{
+	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
+
+	if (pmc_is_fixed(pmc))
+		return fixed_ctrl_field(pmu->fixed_ctr_ctrl,
+					pmc->idx - INTEL_PMC_IDX_FIXED) & 0x3;
+
+	return pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE;
+}
+
 void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel);
 void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int fixed_idx);
 void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx);
-- 
2.27.0