From: kan.liang@intel.com
To: peterz@infradead.org, mingo@redhat.com, acme@kernel.org, linux-kernel@vger.kernel.org
Cc: tglx@linutronix.de, jolsa@redhat.com, eranian@google.com, ak@linux.intel.com, Kan Liang
Subject: [RESEND PATCH V2 3/4] perf/x86/intel: drain PEBS buffer in event read
Date: Mon, 8 Jan 2018 07:15:15 -0800
Message-Id: <1515424516-143728-4-git-send-email-kan.liang@intel.com>
In-Reply-To: <1515424516-143728-1-git-send-email-kan.liang@intel.com>
References: <1515424516-143728-1-git-send-email-kan.liang@intel.com>

From: Kan Liang

When the PEBS interrupt threshold is larger than one, the exact number
of auto-reloads and the counter value needed for the event update
cannot be obtained without flushing the PEBS buffer. Drain the PEBS
buffer in event read when large PEBS is enabled.

When the threshold is one, no special handling is needed even if
auto-reload is enabled, because auto-reload only takes effect on event
overflow, and there is no overflow in event read.

Signed-off-by: Kan Liang
---
 arch/x86/events/intel/core.c |  9 +++++++++
 arch/x86/events/intel/ds.c   | 10 ++++++++++
 arch/x86/events/perf_event.h |  2 ++
 3 files changed, 21 insertions(+)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 09c26a4..bdc35f8 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2060,6 +2060,14 @@ static void intel_pmu_del_event(struct perf_event *event)
 	intel_pmu_pebs_del(event);
 }
 
+static void intel_pmu_read_event(struct perf_event *event)
+{
+	if (event->attr.precise_ip)
+		return intel_pmu_pebs_read(event);
+
+	x86_perf_event_update(event);
+}
+
 static void intel_pmu_enable_fixed(struct hw_perf_event *hwc)
 {
 	int idx = hwc->idx - INTEL_PMC_IDX_FIXED;
@@ -3495,6 +3503,7 @@ static __initconst const struct x86_pmu intel_pmu = {
 	.disable		= intel_pmu_disable_event,
 	.add			= intel_pmu_add_event,
 	.del			= intel_pmu_del_event,
+	.read			= intel_pmu_read_event,
 	.hw_config		= intel_pmu_hw_config,
 	.schedule_events	= x86_schedule_events,
 	.eventsel		= MSR_ARCH_PERFMON_EVENTSEL0,
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index cc1f373..2027560 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -926,6 +926,16 @@ void intel_pmu_pebs_del(struct perf_event *event)
 	pebs_update_state(needed_cb, cpuc, event->ctx->pmu);
 }
 
+void intel_pmu_pebs_read(struct perf_event *event)
+{
+	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+
+	if (pebs_needs_sched_cb(cpuc))
+		return intel_pmu_drain_pebs_buffer();
+
+	x86_perf_event_update(event);
+}
+
 void intel_pmu_pebs_disable(struct perf_event *event)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 67426e51..93ec3b4 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -928,6 +928,8 @@ void intel_pmu_pebs_add(struct perf_event *event);
 
 void intel_pmu_pebs_del(struct perf_event *event);
 
+void intel_pmu_pebs_read(struct perf_event *event);
+
 void intel_pmu_pebs_enable(struct perf_event *event);
 
 void intel_pmu_pebs_disable(struct perf_event *event);
-- 
2.7.4
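
For illustration only (not part of this patch), a minimal user-space sketch of the path the change affects: a read() on a running precise_ip (PEBS) event reaches the PMU's .read callback, which with this patch drains the PEBS buffer via intel_pmu_pebs_read() before the count is returned. The event choice, sample period, and busy loop below are arbitrary placeholders, and error handling is minimal.

#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct perf_event_attr attr;
	volatile long i;
	uint64_t count;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_INSTRUCTIONS;
	attr.sample_period = 100001;	/* fixed period, an auto-reload candidate */
	attr.precise_ip = 2;		/* request PEBS */
	attr.disabled = 1;
	attr.exclude_kernel = 1;

	/* count the current task on any CPU */
	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

	for (i = 0; i < 10000000; i++)
		;	/* some work to be counted */

	/* read while the event is still enabled: this is the pmu->read() path */
	if (read(fd, &count, sizeof(count)) == (ssize_t)sizeof(count))
		printf("instructions so far: %llu\n", (unsigned long long)count);

	close(fd);
	return 0;
}

Without the drain, a read taken here could miss events still sitting in the PEBS buffer whenever the interrupt threshold is larger than one.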