From: kan.liang@intel.com
To: peterz@infradead.org, mingo@redhat.com, acme@kernel.org, linux-kernel@vger.kernel.org
Cc: alexander.shishkin@linux.intel.com, tglx@linutronix.de, namhyung@kernel.org, jolsa@kernel.org, adrian.hunter@intel.com, wangnan0@huawei.com, mark.rutland@arm.com, andi@firstfloor.org, Kan Liang
Subject: [PATCH V3 3/6] perf/x86: implement overhead stat for x86 pmu
Date: Thu, 8 Dec 2016 16:27:11 -0500
Message-Id: <1481232434-3574-4-git-send-email-kan.liang@intel.com>
In-Reply-To: <1481232434-3574-1-git-send-email-kan.liang@intel.com>
References: <1481232434-3574-1-git-send-email-kan.liang@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Kan Liang <kan.liang@intel.com>

In STAT_START, reset the overhead counters for each possible cpuctx of the event's pmu. In STAT_DONE, generate overhead information for each possible cpuctx, then reset the overhead counters.
Signed-off-by: Kan Liang <kan.liang@intel.com>
---
 arch/x86/events/core.c | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 6e395c9..09ab36a 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2198,6 +2198,40 @@ static void x86_pmu_sched_task(struct perf_event_context *ctx, bool sched_in)
 		x86_pmu.sched_task(ctx, sched_in);
 }
 
+static int x86_pmu_stat(struct perf_event *event, u32 flag)
+{
+	struct perf_cpu_context *cpuctx;
+	struct pmu *pmu = event->pmu;
+	int cpu, i;
+
+	if (!(flag & (PERF_IOC_FLAG_STAT_START | PERF_IOC_FLAG_STAT_DONE)))
+		return -EINVAL;
+
+	if (!event->attr.overhead)
+		return -EINVAL;
+
+	if (flag & PERF_IOC_FLAG_STAT_DONE) {
+		for_each_possible_cpu(cpu) {
+			cpuctx = per_cpu_ptr(pmu->pmu_cpu_context, cpu);
+
+			for (i = 0; i < PERF_OVERHEAD_MAX; i++) {
+				if (!cpuctx->overhead[i].nr)
+					continue;
+				perf_log_overhead(event, i, cpu,
+						  cpuctx->overhead[i].nr,
+						  cpuctx->overhead[i].time);
+			}
+		}
+	}
+
+	for_each_possible_cpu(cpu) {
+		cpuctx = per_cpu_ptr(pmu->pmu_cpu_context, cpu);
+		memset(cpuctx->overhead, 0, PERF_OVERHEAD_MAX * sizeof(struct perf_overhead_entry));
+	}
+
+	return 0;
+}
+
 void perf_check_microcode(void)
 {
 	if (x86_pmu.check_microcode)
@@ -2228,6 +2262,9 @@ static struct pmu pmu = {
 	.event_idx		= x86_pmu_event_idx,
 	.sched_task		= x86_pmu_sched_task,
+
+	.stat			= x86_pmu_stat,
+
 	.task_ctx_size		= sizeof(struct x86_perf_task_context),
 };
-- 
2.4.3