From: kan.liang@linux.intel.com
To: peterz@infradead.org, acme@kernel.org, mingo@redhat.com, linux-kernel@vger.kernel.org
Cc: tglx@linutronix.de, jolsa@kernel.org, eranian@google.com, alexander.shishkin@linux.intel.com, ak@linux.intel.com, Kan Liang <kan.liang@linux.intel.com>
Subject: [PATCH 18/22] perf/x86/intel: Support CPUID 10.ECX to disable fixed counters
Date: Mon, 18 Mar 2019 14:41:40 -0700
Message-Id: <20190318214144.4639-19-kan.liang@linux.intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190318214144.4639-1-kan.liang@linux.intel.com>
References: <20190318214144.4639-1-kan.liang@linux.intel.com>
From: Andi Kleen <ak@linux.intel.com>

Icelake supports a new CPUID 10.ECX CPU leaf that indicates which fixed
counters are supported. This extends the previous count to a bitmap,
which also allows disabling lower-numbered counters. It's a nop on
Icelake itself (all fixed counters are supported), but in theory a
hypervisor could use it today, so add the necessary checks here.

For disabled counters, disable any constraint events. I reuse the
existing intel_ctrl variable to track which fixed counters are usable;
all code that iterates over all counters is fixed to check this extra
bitmask.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
 arch/x86/events/core.c       |  8 +++++++-
 arch/x86/events/intel/core.c | 22 +++++++++++++++-------
 arch/x86/events/perf_event.h |  6 ++++++
 3 files changed, 28 insertions(+), 8 deletions(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 796e46a59148..283df78c52e0 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -225,6 +225,8 @@ static bool check_hw_exists(void)
 	if (ret)
 		goto msr_fail;
 	for (i = 0; i < x86_pmu.num_counters_fixed; i++) {
+		if (fixed_counter_disabled(i))
+			continue;
 		if (val & (0x03 << i*4)) {
 			bios_fail = 1;
 			val_fail = val;
@@ -1362,6 +1364,8 @@ void perf_event_print_debug(void)
 			cpu, idx, prev_left);
 	}
 	for (idx = 0; idx < x86_pmu.num_counters_fixed; idx++) {
+		if (fixed_counter_disabled(idx))
+			continue;
 		rdmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + idx, pmc_count);

 		pr_info("CPU#%d: fixed-PMC%d count: %016llx\n",
@@ -1877,7 +1881,9 @@ static int __init init_hw_perf_events(void)
 	pr_info("... generic registers:      %d\n",     x86_pmu.num_counters);
 	pr_info("... value mask:             %016Lx\n", x86_pmu.cntval_mask);
 	pr_info("... max period:             %016Lx\n", x86_pmu.max_period);
-	pr_info("... fixed-purpose events:   %d\n",     x86_pmu.num_counters_fixed);
+	pr_info("... fixed-purpose events:   %lu\n",
+			hweight64((((1ULL << x86_pmu.num_counters_fixed) - 1)
+					<< INTEL_PMC_IDX_FIXED) & x86_pmu.intel_ctrl));
 	pr_info("... event mask:             %016Lx\n", x86_pmu.intel_ctrl);

 	/*
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 3f86af8ce832..433dbd0152a9 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2278,8 +2278,11 @@ static void intel_pmu_reset(void)
 		wrmsrl_safe(x86_pmu_config_addr(idx), 0ull);
 		wrmsrl_safe(x86_pmu_event_addr(idx),  0ull);
 	}
-	for (idx = 0; idx < x86_pmu.num_counters_fixed; idx++)
+	for (idx = 0; idx < x86_pmu.num_counters_fixed; idx++) {
+		if (fixed_counter_disabled(idx))
+			continue;
 		wrmsrl_safe(MSR_ARCH_PERFMON_FIXED_CTR0 + idx, 0ull);
+	}

 	if (ds)
 		ds->bts_index = ds->bts_buffer_base;
@@ -4476,7 +4479,7 @@ __init int intel_pmu_init(void)
 	union cpuid10_eax eax;
 	union cpuid10_ebx ebx;
 	struct event_constraint *c;
-	unsigned int unused;
+	unsigned int fixed_mask;
 	struct extra_reg *er;
 	int version, i;
 	char *name;
@@ -4497,9 +4500,11 @@ __init int intel_pmu_init(void)
 	 * Check whether the Architectural PerfMon supports
 	 * Branch Misses Retired hw_event or not.
 	 */
-	cpuid(10, &eax.full, &ebx.full, &unused, &edx.full);
+	cpuid(10, &eax.full, &ebx.full, &fixed_mask, &edx.full);
 	if (eax.split.mask_length < ARCH_PERFMON_EVENTS_COUNT)
 		return -ENODEV;
+	if (!fixed_mask)
+		fixed_mask = -1;

 	version = eax.split.version_id;
 	if (version < 2)
@@ -5017,7 +5022,8 @@ __init int intel_pmu_init(void)
 	}

 	x86_pmu.intel_ctrl |=
-		((1LL << x86_pmu.num_counters_fixed)-1) << INTEL_PMC_IDX_FIXED;
+		(((1LL << x86_pmu.num_counters_fixed)-1) & (u64)fixed_mask)
+			<< INTEL_PMC_IDX_FIXED;

 	if (x86_pmu.event_constraints) {
 		/*
@@ -5034,9 +5040,11 @@ __init int intel_pmu_init(void)
 			c->weight = hweight64(c->idxmsk64);
 			continue;
 		}
-		if (c->cmask == FIXED_EVENT_FLAGS
-		    && c->idxmsk64 != INTEL_PMC_MSK_FIXED_REF_CYCLES) {
-			c->idxmsk64 |= (1ULL << x86_pmu.num_counters) - 1;
+		if (c->cmask == FIXED_EVENT_FLAGS) {
+			if (c->idxmsk64 != INTEL_PMC_MSK_FIXED_REF_CYCLES)
+				c->idxmsk64 |= (1ULL << x86_pmu.num_counters) - 1;
+			/* Disable fixed counters which are not in CPUID */
+			c->idxmsk64 &= x86_pmu.intel_ctrl;
 		}
 		c->idxmsk64 &=
 			~(~0ULL << (INTEL_PMC_IDX_FIXED + x86_pmu.num_counters_fixed));
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index ef8c4d846e87..8894e3bd1f23 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -926,6 +926,12 @@ ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr,
 ssize_t events_ht_sysfs_show(struct device *dev, struct device_attribute *attr,
 			  char *page);

+static inline bool fixed_counter_disabled(int i)
+{
+	return x86_pmu.intel_ctrl &&
+		!((1ULL << (i + INTEL_PMC_IDX_FIXED)) & x86_pmu.intel_ctrl);
+}
+
 #ifdef CONFIG_CPU_SUP_AMD

 int amd_pmu_init(void);
--
2.17.1