From: kan.liang@intel.com
To: tglx@linutronix.de, peterz@infradead.org, mingo@redhat.com,
    linux-kernel@vger.kernel.org
Cc: acme@kernel.org, eranian@google.com, ak@linux.intel.com, Kan Liang
Subject: [PATCH V4 5/8] perf/x86/intel/uncore: add infrastructure for free running counter
Date: Thu, 2 Nov 2017 13:29:50 -0700
Message-Id: <1509654593-4446-5-git-send-email-kan.liang@intel.com>
In-Reply-To: <1509654593-4446-1-git-send-email-kan.liang@intel.com>
References: <1509654593-4446-1-git-send-email-kan.liang@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Kan Liang <kan.liang@intel.com>

The free running counter is read-only and always active.
Current generic uncore code does not support this kind of counter.

The free running counter is read-only. It cannot be enabled or
disabled in event_start/stop.

The free running counter event and the free running counter are 1:1
mapped. The counter does not need to be assigned, nor tracked by
event_list.

The free running counters are always active, so there is no need to
check their availability.

Use the inline helpers to check for a fixed counter index or a free
running counter index.

Signed-off-by: Kan Liang <kan.liang@intel.com>
---

Changes since V3:
- Split the patch to use the data structures and inline helpers which
  are introduced in the previous patch.

 arch/x86/events/intel/uncore.c | 68 +++++++++++++++++++++++++++++++++++++++---
 1 file changed, 64 insertions(+), 4 deletions(-)

diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
index 3b8cd88..9849ba1 100644
--- a/arch/x86/events/intel/uncore.c
+++ b/arch/x86/events/intel/uncore.c
@@ -203,7 +203,7 @@ static void uncore_assign_hw_event(struct intel_uncore_box *box,
 	hwc->idx = idx;
 	hwc->last_tag = ++box->tags[idx];
 
-	if (hwc->idx == UNCORE_PMC_IDX_FIXED) {
+	if (uncore_pmc_fixed(hwc->idx)) {
 		hwc->event_base = uncore_fixed_ctr(box);
 		hwc->config_base = uncore_fixed_ctl(box);
 		return;
@@ -218,7 +218,9 @@ void uncore_perf_event_update(struct intel_uncore_box *box, struct perf_event *event)
 	u64 prev_count, new_count, delta;
 	int shift;
 
-	if (event->hw.idx == UNCORE_PMC_IDX_FIXED)
+	if (uncore_pmc_freerunning(event->hw.idx))
+		shift = 64 - uncore_freerunning_bits(box, event);
+	else if (uncore_pmc_fixed(event->hw.idx))
 		shift = 64 - uncore_fixed_ctr_bits(box);
 	else
 		shift = 64 - uncore_perf_ctr_bits(box);
@@ -454,10 +456,25 @@ static void uncore_pmu_event_start(struct perf_event *event, int flags)
 	struct intel_uncore_box *box = uncore_event_to_box(event);
 	int idx = event->hw.idx;
 
-	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
+	if (WARN_ON_ONCE(idx == -1 || idx >= UNCORE_PMC_IDX_MAX))
 		return;
 
-	if (WARN_ON_ONCE(idx == -1 || idx >= UNCORE_PMC_IDX_MAX))
+	/*
+	 * Free running counter is read-only and always active.
+	 * Use the current counter value as start point.
+	 * There is no overflow interrupt for free running counter.
+	 * Use hrtimer to periodically poll the counter to avoid overflow.
+	 */
+	if (uncore_pmc_freerunning(event->hw.idx)) {
+		list_add_tail(&event->active_entry, &box->active_list);
+		local64_set(&event->hw.prev_count,
+			    uncore_read_counter(box, event));
+		if (box->n_active++ == 0)
+			uncore_pmu_start_hrtimer(box);
+		return;
+	}
+
+	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
 		return;
 
 	event->hw.state = 0;
@@ -479,6 +496,15 @@ static void uncore_pmu_event_stop(struct perf_event *event, int flags)
 	struct intel_uncore_box *box = uncore_event_to_box(event);
 	struct hw_perf_event *hwc = &event->hw;
 
+	/* Cannot disable the free running counter which is read-only */
+	if (uncore_pmc_freerunning(hwc->idx)) {
+		list_del(&event->active_entry);
+		if (--box->n_active == 0)
+			uncore_pmu_cancel_hrtimer(box);
+		uncore_perf_event_update(box, event);
+		return;
+	}
+
 	if (__test_and_clear_bit(hwc->idx, box->active_mask)) {
 		uncore_disable_event(box, event);
 		box->n_active--;
@@ -512,6 +538,17 @@ static int uncore_pmu_event_add(struct perf_event *event, int flags)
 	if (!box)
 		return -ENODEV;
 
+	/*
+	 * The free running counter is assigned in event_init().
+	 * The free running counter event and free running counter
+	 * are 1:1 mapped. It doesn't need to be tracked in event_list.
+	 */
+	if (uncore_pmc_freerunning(hwc->idx)) {
+		if (flags & PERF_EF_START)
+			uncore_pmu_event_start(event, 0);
+		return 0;
+	}
+
 	ret = n = uncore_collect_events(box, event, false);
 	if (ret < 0)
 		return ret;
@@ -570,6 +607,14 @@ static void uncore_pmu_event_del(struct perf_event *event, int flags)
 
 	uncore_pmu_event_stop(event, PERF_EF_UPDATE);
 
+	/*
+	 * The event for the free running counter is not tracked by event_list.
+	 * It doesn't need to force event->hw.idx = -1 to reassign the counter,
+	 * because the event and the free running counter are 1:1 mapped.
+	 */
+	if (uncore_pmc_freerunning(event->hw.idx))
+		return;
+
 	for (i = 0; i < box->n_events; i++) {
 		if (event == box->event_list[i]) {
 			uncore_put_event_constraint(box, event);
@@ -603,6 +648,10 @@ static int uncore_validate_group(struct intel_uncore_pmu *pmu,
 	struct intel_uncore_box *fake_box;
 	int ret = -EINVAL, n;
 
+	/* The free running counter is always active. */
+	if (uncore_pmc_freerunning(event->hw.idx))
+		return 0;
+
 	fake_box = uncore_alloc_box(pmu->type, NUMA_NO_NODE);
 	if (!fake_box)
 		return -ENOMEM;
@@ -690,6 +739,17 @@ static int uncore_pmu_event_init(struct perf_event *event)
 
 		/* fixed counters have event field hardcoded to zero */
 		hwc->config = 0ULL;
+	} else if (is_freerunning_event(event)) {
+		if (!check_valid_freerunning_event(box, event))
+			return -EINVAL;
+		event->hw.idx = UNCORE_PMC_IDX_FREERUNNING;
+		/*
+		 * The free running counter event and free running counter
+		 * are always 1:1 mapped.
+		 * The free running counter is always active.
+		 * Assign the free running counter here.
+		 */
+		event->hw.event_base = uncore_freerunning_counter(box, event);
 	} else {
 		hwc->config = event->attr.config &
			      (pmu->type->event_mask |
			       ((u64)pmu->type->event_mask_ext << 32));
-- 
2.7.4