Separate the code in pmu.[ch] into the set/list of PMUs and the code
for a particular PMU. Move the set/list of PMUs code into
pmus.[ch]. Clean up hybrid code and remove the hybrid PMU list; it is
sufficient to scan PMUs looking for core ones. Add a core PMU list and
perf_pmus__scan_core that reads just core PMUs. Switch code that skips
non-core PMUs during a perf_pmus__scan to the perf_pmus__scan_core
variant. Don't scan sysfs for PMUs if all such PMUs have already been
scanned/loaded. Scanning just core PMUs, where applicable, can improve
the sysfs reading time by more than 4-fold on my laptop; as servers
generally have many more uncore PMUs, the improvement there should be
larger:
```
$ perf bench internals pmu-scan -i 1000
Computing performance of sysfs PMU event scan for 1000 times
Average core PMU scanning took: 989.231 usec (+- 1.535 usec)
Average PMU scanning took: 4309.425 usec (+- 74.322 usec)
```
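As a rough illustration of the intended usage (a sketch only, not part of
the series; it assumes the perf_pmus__scan and perf_pmus__scan_core
iterators introduced by these patches), code that previously skipped
non-core PMUs inside a full scan can instead iterate core PMUs directly:
```
/* Before: scan all PMUs and skip the non-core ones. */
struct perf_pmu *pmu = NULL;

while ((pmu = perf_pmus__scan(pmu)) != NULL) {
	if (!pmu->is_core)
		continue;
	/* ... use the core PMU ... */
}

/* After: scan only core PMUs, avoiding the sysfs cost of uncore PMUs. */
pmu = NULL;
while ((pmu = perf_pmus__scan_core(pmu)) != NULL) {
	/* ... use the core PMU ... */
}
```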
The patch "perf pmu: Separate pmu and pmus" moves and renames a lot of
functions, and is consequently large. The changes are trivial, but
kept together to keep the overall number of patches more reasonable.
v2. Address Kan's review comments wrt "cycles" -> "cycles:P" and
"uncore_pmus" -> "other_pmus".
Ian Rogers (23):
perf tools: Warn if no user requested CPUs match PMU's CPUs
perf evlist: Remove evlist__warn_hybrid_group
perf evlist: Remove __evlist__add_default
perf evlist: Reduce scope of evlist__has_hybrid
perf pmu: Remove perf_pmu__hybrid_mounted
perf pmu: Detect ARM and hybrid PMUs with sysfs
perf pmu: Add is_core to pmu
perf pmu: Rewrite perf_pmu__has_hybrid to avoid list
perf x86: Iterate hybrid PMUs as core PMUs
perf topology: Avoid hybrid list for hybrid topology
perf evsel: Compute is_hybrid from PMU being core
perf header: Avoid hybrid PMU list in write_pmu_caps
perf metrics: Remove perf_pmu__is_hybrid use
perf stat: Avoid hybrid PMU list
perf mem: Avoid hybrid PMU list
perf pmu: Remove perf_pmu__hybrid_pmus list
perf pmus: Prefer perf_pmu__scan over perf_pmus__for_each_pmu
perf x86 mem: minor refactor to is_mem_loads_aux_event
perf pmu: Separate pmu and pmus
perf pmus: Split pmus list into core and other
perf pmus: Allow just core PMU scanning
perf pmus: Avoid repeated sysfs scanning
perf pmus: Ensure all PMUs are read for find_by_type
tools/perf/arch/arm/util/auxtrace.c | 7 +-
tools/perf/arch/arm/util/cs-etm.c | 4 +-
tools/perf/arch/arm64/util/pmu.c | 6 +-
tools/perf/arch/x86/tests/hybrid.c | 7 +-
tools/perf/arch/x86/util/auxtrace.c | 5 +-
tools/perf/arch/x86/util/evlist.c | 25 +-
tools/perf/arch/x86/util/evsel.c | 27 +-
tools/perf/arch/x86/util/intel-bts.c | 4 +-
tools/perf/arch/x86/util/intel-pt.c | 4 +-
tools/perf/arch/x86/util/mem-events.c | 17 +-
tools/perf/arch/x86/util/perf_regs.c | 15 +-
tools/perf/arch/x86/util/topdown.c | 5 +-
tools/perf/bench/pmu-scan.c | 60 ++--
tools/perf/builtin-c2c.c | 9 +-
tools/perf/builtin-list.c | 4 +-
tools/perf/builtin-mem.c | 9 +-
tools/perf/builtin-record.c | 29 +-
tools/perf/builtin-stat.c | 15 +-
tools/perf/builtin-top.c | 10 +-
tools/perf/tests/attr.c | 4 +-
tools/perf/tests/event_groups.c | 7 +-
tools/perf/tests/parse-events.c | 15 +-
tools/perf/tests/parse-metric.c | 4 +-
tools/perf/tests/pmu-events.c | 6 +-
tools/perf/tests/switch-tracking.c | 4 +-
tools/perf/tests/topology.c | 4 +-
tools/perf/util/Build | 2 -
tools/perf/util/cpumap.h | 2 +-
tools/perf/util/cputopo.c | 16 +-
tools/perf/util/env.c | 5 +-
tools/perf/util/evlist-hybrid.c | 162 ---------
tools/perf/util/evlist-hybrid.h | 15 -
tools/perf/util/evlist.c | 67 +++-
tools/perf/util/evlist.h | 9 +-
tools/perf/util/evsel.c | 57 +--
tools/perf/util/evsel.h | 3 -
tools/perf/util/header.c | 27 +-
tools/perf/util/mem-events.c | 17 +-
tools/perf/util/metricgroup.c | 9 +-
tools/perf/util/parse-events.c | 24 +-
tools/perf/util/parse-events.y | 3 +-
tools/perf/util/pfm.c | 6 +-
tools/perf/util/pmu-hybrid.c | 52 ---
tools/perf/util/pmu-hybrid.h | 32 --
tools/perf/util/pmu.c | 482 ++------------------------
tools/perf/util/pmu.h | 26 +-
tools/perf/util/pmus.c | 477 ++++++++++++++++++++++++-
tools/perf/util/pmus.h | 15 +-
tools/perf/util/print-events.c | 15 +-
tools/perf/util/python-ext-sources | 1 -
tools/perf/util/stat-display.c | 21 +-
51 files changed, 819 insertions(+), 1032 deletions(-)
delete mode 100644 tools/perf/util/evlist-hybrid.c
delete mode 100644 tools/perf/util/evlist-hybrid.h
delete mode 100644 tools/perf/util/pmu-hybrid.c
delete mode 100644 tools/perf/util/pmu-hybrid.h
--
2.40.1.698.g37aff9b760-goog
Since commit 1d3351e631fc ("perf tools: Enable on a list of CPUs for hybrid")
perf on hybrid warns if a user-requested CPU doesn't match the PMU of
the given event, but only for hybrid PMUs. Make the logic generic for
all PMUs and remove the hybrid-specific logic.
For uncore events, warn if a requested CPU is offline. For a core PMU,
warn if a requested CPU isn't within the CPU map of that PMU.
For example, on a 16 CPU (0-15) system:
```
$ perf stat -e imc_free_running/data_read/,cycles -C 16 true
WARNING: Requested CPU(s) '16' not supported by PMU 'uncore_imc_free_running_1' for event 'imc_free_running/data_read/'
WARNING: Requested CPU(s) '16' not supported by PMU 'uncore_imc_free_running_0' for event 'imc_free_running/data_read/'
WARNING: Requested CPU(s) '16' not supported by PMU 'cpu' for event 'cycles'
Performance counter stats for 'CPU(s) 16':
<not supported> MiB imc_free_running/data_read/
<not supported> cycles
0.000570094 seconds time elapsed
```
Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/builtin-record.c | 6 +--
tools/perf/builtin-stat.c | 5 +--
tools/perf/util/cpumap.h | 2 +-
tools/perf/util/evlist-hybrid.c | 74 ---------------------------------
tools/perf/util/evlist-hybrid.h | 1 -
tools/perf/util/evlist.c | 44 ++++++++++++++++++++
tools/perf/util/evlist.h | 2 +
tools/perf/util/pmu.c | 33 ---------------
tools/perf/util/pmu.h | 4 --
9 files changed, 49 insertions(+), 122 deletions(-)
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index ec0f2d5f189f..9d212236c75a 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -4198,11 +4198,7 @@ int cmd_record(int argc, const char **argv)
/* Enable ignoring missing threads when -u/-p option is defined. */
rec->opts.ignore_missing_thread = rec->opts.target.uid != UINT_MAX || rec->opts.target.pid;
- if (evlist__fix_hybrid_cpus(rec->evlist, rec->opts.target.cpu_list)) {
- pr_err("failed to use cpu list %s\n",
- rec->opts.target.cpu_list);
- goto out;
- }
+ evlist__warn_user_requested_cpus(rec->evlist, rec->opts.target.cpu_list);
rec->opts.target.hybrid = perf_pmu__has_hybrid();
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index bc45cee3f77c..612467216306 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -2462,10 +2462,7 @@ int cmd_stat(int argc, const char **argv)
}
}
- if (evlist__fix_hybrid_cpus(evsel_list, target.cpu_list)) {
- pr_err("failed to use cpu list %s\n", target.cpu_list);
- goto out;
- }
+ evlist__warn_user_requested_cpus(evsel_list, target.cpu_list);
target.hybrid = perf_pmu__has_hybrid();
if (evlist__create_maps(evsel_list, &target) < 0) {
diff --git a/tools/perf/util/cpumap.h b/tools/perf/util/cpumap.h
index e3426541e0aa..c1de993c083f 100644
--- a/tools/perf/util/cpumap.h
+++ b/tools/perf/util/cpumap.h
@@ -59,7 +59,7 @@ struct perf_cpu cpu__max_present_cpu(void);
/**
* cpu_map__is_dummy - Events associated with a pid, rather than a CPU, use a single dummy map with an entry of -1.
*/
-static inline bool cpu_map__is_dummy(struct perf_cpu_map *cpus)
+static inline bool cpu_map__is_dummy(const struct perf_cpu_map *cpus)
{
return perf_cpu_map__nr(cpus) == 1 && perf_cpu_map__cpu(cpus, 0).cpu == -1;
}
diff --git a/tools/perf/util/evlist-hybrid.c b/tools/perf/util/evlist-hybrid.c
index 57f02beef023..db3f5fbdebe1 100644
--- a/tools/perf/util/evlist-hybrid.c
+++ b/tools/perf/util/evlist-hybrid.c
@@ -86,77 +86,3 @@ bool evlist__has_hybrid(struct evlist *evlist)
return false;
}
-
-int evlist__fix_hybrid_cpus(struct evlist *evlist, const char *cpu_list)
-{
- struct perf_cpu_map *cpus;
- struct evsel *evsel, *tmp;
- struct perf_pmu *pmu;
- int ret, unmatched_count = 0, events_nr = 0;
-
- if (!perf_pmu__has_hybrid() || !cpu_list)
- return 0;
-
- cpus = perf_cpu_map__new(cpu_list);
- if (!cpus)
- return -1;
-
- /*
- * The evsels are created with hybrid pmu's cpus. But now we
- * need to check and adjust the cpus of evsel by cpu_list because
- * cpu_list may cause conflicts with cpus of evsel. For example,
- * cpus of evsel is cpu0-7, but the cpu_list is cpu6-8, we need
- * to adjust the cpus of evsel to cpu6-7. And then propatate maps
- * in evlist__create_maps().
- */
- evlist__for_each_entry_safe(evlist, tmp, evsel) {
- struct perf_cpu_map *matched_cpus, *unmatched_cpus;
- char buf1[128], buf2[128];
-
- pmu = perf_pmu__find_hybrid_pmu(evsel->pmu_name);
- if (!pmu)
- continue;
-
- ret = perf_pmu__cpus_match(pmu, cpus, &matched_cpus,
- &unmatched_cpus);
- if (ret)
- goto out;
-
- events_nr++;
-
- if (perf_cpu_map__nr(matched_cpus) > 0 &&
- (perf_cpu_map__nr(unmatched_cpus) > 0 ||
- perf_cpu_map__nr(matched_cpus) < perf_cpu_map__nr(cpus) ||
- perf_cpu_map__nr(matched_cpus) < perf_cpu_map__nr(pmu->cpus))) {
- perf_cpu_map__put(evsel->core.cpus);
- perf_cpu_map__put(evsel->core.own_cpus);
- evsel->core.cpus = perf_cpu_map__get(matched_cpus);
- evsel->core.own_cpus = perf_cpu_map__get(matched_cpus);
-
- if (perf_cpu_map__nr(unmatched_cpus) > 0) {
- cpu_map__snprint(matched_cpus, buf1, sizeof(buf1));
- pr_warning("WARNING: use %s in '%s' for '%s', skip other cpus in list.\n",
- buf1, pmu->name, evsel->name);
- }
- }
-
- if (perf_cpu_map__nr(matched_cpus) == 0) {
- evlist__remove(evlist, evsel);
- evsel__delete(evsel);
-
- cpu_map__snprint(cpus, buf1, sizeof(buf1));
- cpu_map__snprint(pmu->cpus, buf2, sizeof(buf2));
- pr_warning("WARNING: %s isn't a '%s', please use a CPU list in the '%s' range (%s)\n",
- buf1, pmu->name, pmu->name, buf2);
- unmatched_count++;
- }
-
- perf_cpu_map__put(matched_cpus);
- perf_cpu_map__put(unmatched_cpus);
- }
- if (events_nr)
- ret = (unmatched_count == events_nr) ? -1 : 0;
-out:
- perf_cpu_map__put(cpus);
- return ret;
-}
diff --git a/tools/perf/util/evlist-hybrid.h b/tools/perf/util/evlist-hybrid.h
index aacdb1b0f948..19f74b4c340a 100644
--- a/tools/perf/util/evlist-hybrid.h
+++ b/tools/perf/util/evlist-hybrid.h
@@ -10,6 +10,5 @@
int evlist__add_default_hybrid(struct evlist *evlist, bool precise);
void evlist__warn_hybrid_group(struct evlist *evlist);
bool evlist__has_hybrid(struct evlist *evlist);
-int evlist__fix_hybrid_cpus(struct evlist *evlist, const char *cpu_list);
#endif /* __PERF_EVLIST_HYBRID_H */
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index a0504316b06f..5d0d99127a90 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -2465,3 +2465,47 @@ void evlist__check_mem_load_aux(struct evlist *evlist)
}
}
}
+
+/**
+ * evlist__warn_user_requested_cpus() - Check each evsel against requested CPUs
+ * and warn if the user CPU list is inapplicable for the event's PMU's
+ * CPUs. Uncore PMUs list a CPU in sysfs, but this may be overwritten by a
+ * user requested CPU and so any online CPU is applicable. Core PMUs handle
+ * events on the CPUs in their list and otherwise the event isn't supported.
+ * @evlist: The list of events being checked.
+ * @cpu_list: The user provided list of CPUs.
+ */
+void evlist__warn_user_requested_cpus(struct evlist *evlist, const char *cpu_list)
+{
+ struct perf_cpu_map *user_requested_cpus;
+ struct evsel *pos;
+
+ if (!cpu_list)
+ return;
+
+ user_requested_cpus = perf_cpu_map__new(cpu_list);
+ if (!user_requested_cpus)
+ return;
+
+ evlist__for_each_entry(evlist, pos) {
+ const struct perf_cpu_map *to_test;
+ struct perf_cpu cpu;
+ int idx;
+ bool warn = true;
+ const struct perf_pmu *pmu = evsel__find_pmu(pos);
+
+ to_test = pmu && pmu->is_uncore ? cpu_map__online() : evsel__cpus(pos);
+
+ perf_cpu_map__for_each_cpu(cpu, idx, to_test) {
+ if (perf_cpu_map__has(user_requested_cpus, cpu)) {
+ warn = false;
+ break;
+ }
+ }
+ if (warn) {
+ pr_warning("WARNING: Requested CPU(s) '%s' not supported by PMU '%s' for event '%s'\n",
+ cpu_list, pmu ? pmu->name : "cpu", evsel__name(pos));
+ }
+ }
+ perf_cpu_map__put(user_requested_cpus);
+}
diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index e7e5540cc970..5e7ff44f3043 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -447,4 +447,6 @@ struct evsel *evlist__find_evsel(struct evlist *evlist, int idx);
int evlist__scnprintf_evsels(struct evlist *evlist, size_t size, char *bf);
void evlist__check_mem_load_aux(struct evlist *evlist);
+void evlist__warn_user_requested_cpus(struct evlist *evlist, const char *cpu_list);
+
#endif /* __PERF_EVLIST_H */
diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
index f4f0afbc391c..1e0be23d4dd7 100644
--- a/tools/perf/util/pmu.c
+++ b/tools/perf/util/pmu.c
@@ -2038,39 +2038,6 @@ int perf_pmu__match(char *pattern, char *name, char *tok)
return 0;
}
-int perf_pmu__cpus_match(struct perf_pmu *pmu, struct perf_cpu_map *cpus,
- struct perf_cpu_map **mcpus_ptr,
- struct perf_cpu_map **ucpus_ptr)
-{
- struct perf_cpu_map *pmu_cpus = pmu->cpus;
- struct perf_cpu_map *matched_cpus, *unmatched_cpus;
- struct perf_cpu cpu;
- int i, matched_nr = 0, unmatched_nr = 0;
-
- matched_cpus = perf_cpu_map__default_new();
- if (!matched_cpus)
- return -1;
-
- unmatched_cpus = perf_cpu_map__default_new();
- if (!unmatched_cpus) {
- perf_cpu_map__put(matched_cpus);
- return -1;
- }
-
- perf_cpu_map__for_each_cpu(cpu, i, cpus) {
- if (!perf_cpu_map__has(pmu_cpus, cpu))
- RC_CHK_ACCESS(unmatched_cpus)->map[unmatched_nr++] = cpu;
- else
- RC_CHK_ACCESS(matched_cpus)->map[matched_nr++] = cpu;
- }
-
- perf_cpu_map__set_nr(unmatched_cpus, unmatched_nr);
- perf_cpu_map__set_nr(matched_cpus, matched_nr);
- *mcpus_ptr = matched_cpus;
- *ucpus_ptr = unmatched_cpus;
- return 0;
-}
-
double __weak perf_pmu__cpu_slots_per_cycle(void)
{
return NAN;
diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h
index 0e0cb6283594..49033bb134f3 100644
--- a/tools/perf/util/pmu.h
+++ b/tools/perf/util/pmu.h
@@ -257,10 +257,6 @@ void perf_pmu__warn_invalid_formats(struct perf_pmu *pmu);
bool perf_pmu__has_hybrid(void);
int perf_pmu__match(char *pattern, char *name, char *tok);
-int perf_pmu__cpus_match(struct perf_pmu *pmu, struct perf_cpu_map *cpus,
- struct perf_cpu_map **mcpus_ptr,
- struct perf_cpu_map **ucpus_ptr);
-
char *pmu_find_real_name(const char *name);
char *pmu_find_alias_name(const char *name);
double perf_pmu__cpu_slots_per_cycle(void);
--
2.40.1.698.g37aff9b760-goog
Find the PMU first and then look up the event on it.
Reviewed-by: Kan Liang <[email protected]>
Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/arch/x86/util/mem-events.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/tools/perf/arch/x86/util/mem-events.c b/tools/perf/arch/x86/util/mem-events.c
index f683ac702247..02d65e446f46 100644
--- a/tools/perf/arch/x86/util/mem-events.c
+++ b/tools/perf/arch/x86/util/mem-events.c
@@ -55,13 +55,13 @@ struct perf_mem_event *perf_mem_events__ptr(int i)
bool is_mem_loads_aux_event(struct evsel *leader)
{
- if (perf_pmu__find("cpu")) {
- if (!pmu_have_event("cpu", "mem-loads-aux"))
- return false;
- } else if (perf_pmu__find("cpu_core")) {
- if (!pmu_have_event("cpu_core", "mem-loads-aux"))
- return false;
- }
+ struct perf_pmu *pmu = perf_pmu__find("cpu");
+
+ if (!pmu)
+ pmu = perf_pmu__find("cpu_core");
+
+ if (pmu && !pmu_have_event(pmu->name, "mem-loads-aux"))
+ return false;
return leader->core.attr.config == MEM_LOADS_AUX;
}
--
2.40.1.698.g37aff9b760-goog
Separate and hide the pmus list in pmus.[ch]. Move pmus functionality
out of pmu.[ch] into pmus.[ch], renaming the moved functions from the
perf_pmu__ prefix to perf_pmus__.
Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/arch/arm/util/auxtrace.c | 7 +-
tools/perf/arch/arm/util/cs-etm.c | 4 +-
tools/perf/arch/arm64/util/pmu.c | 3 +-
tools/perf/arch/x86/tests/hybrid.c | 5 +-
tools/perf/arch/x86/util/auxtrace.c | 5 +-
tools/perf/arch/x86/util/evlist.c | 5 +-
tools/perf/arch/x86/util/evsel.c | 7 +-
tools/perf/arch/x86/util/intel-bts.c | 4 +-
tools/perf/arch/x86/util/intel-pt.c | 4 +-
tools/perf/arch/x86/util/mem-events.c | 9 +-
tools/perf/arch/x86/util/perf_regs.c | 5 +-
tools/perf/arch/x86/util/topdown.c | 5 +-
tools/perf/bench/pmu-scan.c | 10 +-
tools/perf/builtin-c2c.c | 4 +-
tools/perf/builtin-list.c | 4 +-
tools/perf/builtin-mem.c | 4 +-
tools/perf/builtin-record.c | 8 +-
tools/perf/builtin-stat.c | 6 +-
tools/perf/tests/attr.c | 4 +-
tools/perf/tests/event_groups.c | 2 +-
tools/perf/tests/parse-events.c | 8 +-
tools/perf/tests/parse-metric.c | 4 +-
tools/perf/tests/pmu-events.c | 3 +-
tools/perf/tests/switch-tracking.c | 4 +-
tools/perf/tests/topology.c | 4 +-
tools/perf/util/cputopo.c | 7 +-
tools/perf/util/env.c | 5 +-
tools/perf/util/evsel.c | 3 +-
tools/perf/util/header.c | 15 +-
tools/perf/util/mem-events.c | 11 +-
tools/perf/util/metricgroup.c | 5 +-
tools/perf/util/parse-events.c | 15 +-
tools/perf/util/parse-events.y | 3 +-
tools/perf/util/pfm.c | 6 +-
tools/perf/util/pmu.c | 411 +-------------------------
tools/perf/util/pmu.h | 13 +-
tools/perf/util/pmus.c | 397 ++++++++++++++++++++++++-
tools/perf/util/pmus.h | 14 +-
tools/perf/util/print-events.c | 5 +-
tools/perf/util/stat-display.c | 3 +-
40 files changed, 534 insertions(+), 507 deletions(-)
diff --git a/tools/perf/arch/arm/util/auxtrace.c b/tools/perf/arch/arm/util/auxtrace.c
index adec6c9ee11d..3b8eca0ffb17 100644
--- a/tools/perf/arch/arm/util/auxtrace.c
+++ b/tools/perf/arch/arm/util/auxtrace.c
@@ -14,6 +14,7 @@
#include "../../../util/debug.h"
#include "../../../util/evlist.h"
#include "../../../util/pmu.h"
+#include "../../../util/pmus.h"
#include "cs-etm.h"
#include "arm-spe.h"
#include "hisi-ptt.h"
@@ -40,7 +41,7 @@ static struct perf_pmu **find_all_arm_spe_pmus(int *nr_spes, int *err)
return NULL;
}
- arm_spe_pmus[*nr_spes] = perf_pmu__find(arm_spe_pmu_name);
+ arm_spe_pmus[*nr_spes] = perf_pmus__find(arm_spe_pmu_name);
if (arm_spe_pmus[*nr_spes]) {
pr_debug2("%s %d: arm_spe_pmu %d type %d name %s\n",
__func__, __LINE__, *nr_spes,
@@ -87,7 +88,7 @@ static struct perf_pmu **find_all_hisi_ptt_pmus(int *nr_ptts, int *err)
rewinddir(dir);
while ((dent = readdir(dir))) {
if (strstr(dent->d_name, HISI_PTT_PMU_NAME) && idx < *nr_ptts) {
- hisi_ptt_pmus[idx] = perf_pmu__find(dent->d_name);
+ hisi_ptt_pmus[idx] = perf_pmus__find(dent->d_name);
if (hisi_ptt_pmus[idx])
idx++;
}
@@ -131,7 +132,7 @@ struct auxtrace_record
if (!evlist)
return NULL;
- cs_etm_pmu = perf_pmu__find(CORESIGHT_ETM_PMU_NAME);
+ cs_etm_pmu = perf_pmus__find(CORESIGHT_ETM_PMU_NAME);
arm_spe_pmus = find_all_arm_spe_pmus(&nr_spes, err);
hisi_ptt_pmus = find_all_hisi_ptt_pmus(&nr_ptts, err);
diff --git a/tools/perf/arch/arm/util/cs-etm.c b/tools/perf/arch/arm/util/cs-etm.c
index 9ca040bfb1aa..7c51fa182b51 100644
--- a/tools/perf/arch/arm/util/cs-etm.c
+++ b/tools/perf/arch/arm/util/cs-etm.c
@@ -25,7 +25,7 @@
#include "../../../util/evsel.h"
#include "../../../util/perf_api_probe.h"
#include "../../../util/evsel_config.h"
-#include "../../../util/pmu.h"
+#include "../../../util/pmus.h"
#include "../../../util/cs-etm.h"
#include <internal/lib.h> // page_size
#include "../../../util/session.h"
@@ -881,7 +881,7 @@ struct auxtrace_record *cs_etm_record_init(int *err)
struct perf_pmu *cs_etm_pmu;
struct cs_etm_recording *ptr;
- cs_etm_pmu = perf_pmu__find(CORESIGHT_ETM_PMU_NAME);
+ cs_etm_pmu = perf_pmus__find(CORESIGHT_ETM_PMU_NAME);
if (!cs_etm_pmu) {
*err = -EINVAL;
diff --git a/tools/perf/arch/arm64/util/pmu.c b/tools/perf/arch/arm64/util/pmu.c
index ef1ed645097c..2504d43a39a7 100644
--- a/tools/perf/arch/arm64/util/pmu.c
+++ b/tools/perf/arch/arm64/util/pmu.c
@@ -3,6 +3,7 @@
#include <internal/cpumap.h>
#include "../../../util/cpumap.h"
#include "../../../util/pmu.h"
+#include "../../../util/pmus.h"
#include <api/fs/fs.h>
#include <math.h>
@@ -10,7 +11,7 @@ static struct perf_pmu *pmu__find_core_pmu(void)
{
struct perf_pmu *pmu = NULL;
- while ((pmu = perf_pmu__scan(pmu))) {
+ while ((pmu = perf_pmus__scan(pmu))) {
if (!is_pmu_core(pmu->name))
continue;
diff --git a/tools/perf/arch/x86/tests/hybrid.c b/tools/perf/arch/x86/tests/hybrid.c
index 944bd1b4bab6..e466735d68d5 100644
--- a/tools/perf/arch/x86/tests/hybrid.c
+++ b/tools/perf/arch/x86/tests/hybrid.c
@@ -4,6 +4,7 @@
#include "evlist.h"
#include "evsel.h"
#include "pmu.h"
+#include "pmus.h"
#include "tests/tests.h"
static bool test_config(const struct evsel *evsel, __u64 expected_config)
@@ -113,7 +114,7 @@ static int test__hybrid_raw1(struct evlist *evlist)
struct perf_evsel *evsel;
perf_evlist__for_each_evsel(&evlist->core, evsel) {
- struct perf_pmu *pmu = perf_pmu__find_by_type(evsel->attr.type);
+ struct perf_pmu *pmu = perf_pmus__find_by_type(evsel->attr.type);
TEST_ASSERT_VAL("missing pmu", pmu);
TEST_ASSERT_VAL("unexpected pmu", !strncmp(pmu->name, "cpu_", 4));
@@ -280,7 +281,7 @@ static int test_events(const struct evlist_test *events, int cnt)
int test__hybrid(struct test_suite *test __maybe_unused, int subtest __maybe_unused)
{
- if (!perf_pmu__has_hybrid())
+ if (!perf_pmus__has_hybrid())
return TEST_SKIP;
return test_events(test__hybrid_events, ARRAY_SIZE(test__hybrid_events));
diff --git a/tools/perf/arch/x86/util/auxtrace.c b/tools/perf/arch/x86/util/auxtrace.c
index 330d03216b0e..354780ff1605 100644
--- a/tools/perf/arch/x86/util/auxtrace.c
+++ b/tools/perf/arch/x86/util/auxtrace.c
@@ -10,6 +10,7 @@
#include "../../../util/header.h"
#include "../../../util/debug.h"
#include "../../../util/pmu.h"
+#include "../../../util/pmus.h"
#include "../../../util/auxtrace.h"
#include "../../../util/intel-pt.h"
#include "../../../util/intel-bts.h"
@@ -25,8 +26,8 @@ struct auxtrace_record *auxtrace_record__init_intel(struct evlist *evlist,
bool found_pt = false;
bool found_bts = false;
- intel_pt_pmu = perf_pmu__find(INTEL_PT_PMU_NAME);
- intel_bts_pmu = perf_pmu__find(INTEL_BTS_PMU_NAME);
+ intel_pt_pmu = perf_pmus__find(INTEL_PT_PMU_NAME);
+ intel_bts_pmu = perf_pmus__find(INTEL_BTS_PMU_NAME);
evlist__for_each_entry(evlist, evsel) {
if (intel_pt_pmu && evsel->core.attr.type == intel_pt_pmu->type)
diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c
index 03f7eb4cf0a4..03240c640c7f 100644
--- a/tools/perf/arch/x86/util/evlist.c
+++ b/tools/perf/arch/x86/util/evlist.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
#include <stdio.h>
#include "util/pmu.h"
+#include "util/pmus.h"
#include "util/evlist.h"
#include "util/parse-events.h"
#include "util/event.h"
@@ -17,7 +18,7 @@ static int ___evlist__add_default_attrs(struct evlist *evlist,
for (i = 0; i < nr_attrs; i++)
event_attr_init(attrs + i);
- if (!perf_pmu__has_hybrid())
+ if (!perf_pmus__has_hybrid())
return evlist__add_attrs(evlist, attrs, nr_attrs);
for (i = 0; i < nr_attrs; i++) {
@@ -32,7 +33,7 @@ static int ___evlist__add_default_attrs(struct evlist *evlist,
continue;
}
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
struct perf_cpu_map *cpus;
struct evsel *evsel;
diff --git a/tools/perf/arch/x86/util/evsel.c b/tools/perf/arch/x86/util/evsel.c
index 153cdca94cd4..25da46c8cca9 100644
--- a/tools/perf/arch/x86/util/evsel.c
+++ b/tools/perf/arch/x86/util/evsel.c
@@ -4,6 +4,7 @@
#include "util/evsel.h"
#include "util/env.h"
#include "util/pmu.h"
+#include "util/pmus.h"
#include "linux/string.h"
#include "evsel.h"
#include "util/debug.h"
@@ -30,7 +31,7 @@ bool evsel__sys_has_perf_metrics(const struct evsel *evsel)
* should be good enough to detect the perf metrics feature.
*/
if ((evsel->core.attr.type == PERF_TYPE_RAW) &&
- pmu_have_event(pmu_name, "slots"))
+ perf_pmus__have_event(pmu_name, "slots"))
return true;
return false;
@@ -98,8 +99,8 @@ void arch__post_evsel_config(struct evsel *evsel, struct perf_event_attr *attr)
if (!evsel_pmu)
return;
- ibs_fetch_pmu = perf_pmu__find("ibs_fetch");
- ibs_op_pmu = perf_pmu__find("ibs_op");
+ ibs_fetch_pmu = perf_pmus__find("ibs_fetch");
+ ibs_op_pmu = perf_pmus__find("ibs_op");
if (ibs_fetch_pmu && ibs_fetch_pmu->type == evsel_pmu->type) {
if (attr->config & IBS_FETCH_L3MISSONLY) {
diff --git a/tools/perf/arch/x86/util/intel-bts.c b/tools/perf/arch/x86/util/intel-bts.c
index 439c2956f3e7..d2c8cac11470 100644
--- a/tools/perf/arch/x86/util/intel-bts.c
+++ b/tools/perf/arch/x86/util/intel-bts.c
@@ -17,7 +17,7 @@
#include "../../../util/evlist.h"
#include "../../../util/mmap.h"
#include "../../../util/session.h"
-#include "../../../util/pmu.h"
+#include "../../../util/pmus.h"
#include "../../../util/debug.h"
#include "../../../util/record.h"
#include "../../../util/tsc.h"
@@ -416,7 +416,7 @@ static int intel_bts_find_snapshot(struct auxtrace_record *itr, int idx,
struct auxtrace_record *intel_bts_recording_init(int *err)
{
- struct perf_pmu *intel_bts_pmu = perf_pmu__find(INTEL_BTS_PMU_NAME);
+ struct perf_pmu *intel_bts_pmu = perf_pmus__find(INTEL_BTS_PMU_NAME);
struct intel_bts_recording *btsr;
if (!intel_bts_pmu)
diff --git a/tools/perf/arch/x86/util/intel-pt.c b/tools/perf/arch/x86/util/intel-pt.c
index 17336da08b58..74b70fd379df 100644
--- a/tools/perf/arch/x86/util/intel-pt.c
+++ b/tools/perf/arch/x86/util/intel-pt.c
@@ -23,7 +23,7 @@
#include "../../../util/mmap.h"
#include <subcmd/parse-options.h>
#include "../../../util/parse-events.h"
-#include "../../../util/pmu.h"
+#include "../../../util/pmus.h"
#include "../../../util/debug.h"
#include "../../../util/auxtrace.h"
#include "../../../util/perf_api_probe.h"
@@ -1185,7 +1185,7 @@ static u64 intel_pt_reference(struct auxtrace_record *itr __maybe_unused)
struct auxtrace_record *intel_pt_recording_init(int *err)
{
- struct perf_pmu *intel_pt_pmu = perf_pmu__find(INTEL_PT_PMU_NAME);
+ struct perf_pmu *intel_pt_pmu = perf_pmus__find(INTEL_PT_PMU_NAME);
struct intel_pt_recording *ptr;
if (!intel_pt_pmu)
diff --git a/tools/perf/arch/x86/util/mem-events.c b/tools/perf/arch/x86/util/mem-events.c
index 02d65e446f46..32879d12a8d5 100644
--- a/tools/perf/arch/x86/util/mem-events.c
+++ b/tools/perf/arch/x86/util/mem-events.c
@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
#include "util/pmu.h"
+#include "util/pmus.h"
#include "util/env.h"
#include "map_symbol.h"
#include "mem-events.h"
@@ -55,12 +56,12 @@ struct perf_mem_event *perf_mem_events__ptr(int i)
bool is_mem_loads_aux_event(struct evsel *leader)
{
- struct perf_pmu *pmu = perf_pmu__find("cpu");
+ struct perf_pmu *pmu = perf_pmus__find("cpu");
if (!pmu)
- pmu = perf_pmu__find("cpu_core");
+ pmu = perf_pmus__find("cpu_core");
- if (pmu && !pmu_have_event(pmu->name, "mem-loads-aux"))
+ if (pmu && !perf_pmu__have_event(pmu, "mem-loads-aux"))
return false;
return leader->core.attr.config == MEM_LOADS_AUX;
@@ -82,7 +83,7 @@ char *perf_mem_events__name(int i, char *pmu_name)
pmu_name = (char *)"cpu";
}
- if (pmu_have_event(pmu_name, "mem-loads-aux")) {
+ if (perf_pmus__have_event(pmu_name, "mem-loads-aux")) {
scnprintf(mem_loads_name, sizeof(mem_loads_name),
MEM_LOADS_AUX_NAME, pmu_name, pmu_name,
perf_mem_events__loads_ldlat);
diff --git a/tools/perf/arch/x86/util/perf_regs.c b/tools/perf/arch/x86/util/perf_regs.c
index 26abc159fc0e..befa7f3659b9 100644
--- a/tools/perf/arch/x86/util/perf_regs.c
+++ b/tools/perf/arch/x86/util/perf_regs.c
@@ -10,6 +10,7 @@
#include "../../../util/debug.h"
#include "../../../util/event.h"
#include "../../../util/pmu.h"
+#include "../../../util/pmus.h"
const struct sample_reg sample_reg_masks[] = {
SMPL_REG(AX, PERF_REG_X86_AX),
@@ -291,7 +292,7 @@ uint64_t arch__intr_reg_mask(void)
*/
attr.sample_period = 1;
- if (perf_pmu__has_hybrid()) {
+ if (perf_pmus__has_hybrid()) {
struct perf_pmu *pmu = NULL;
__u64 type = PERF_TYPE_RAW;
@@ -299,7 +300,7 @@ uint64_t arch__intr_reg_mask(void)
* The same register set is supported among different hybrid PMUs.
* Only check the first available one.
*/
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
if (pmu->is_core) {
type = pmu->type;
break;
diff --git a/tools/perf/arch/x86/util/topdown.c b/tools/perf/arch/x86/util/topdown.c
index 9ad5e5c7bd27..3f9a267d4501 100644
--- a/tools/perf/arch/x86/util/topdown.c
+++ b/tools/perf/arch/x86/util/topdown.c
@@ -2,6 +2,7 @@
#include "api/fs/fs.h"
#include "util/evsel.h"
#include "util/pmu.h"
+#include "util/pmus.h"
#include "util/topdown.h"
#include "topdown.h"
#include "evsel.h"
@@ -22,8 +23,8 @@ bool topdown_sys_has_perf_metrics(void)
* The slots event is only available when the core PMU
* supports the perf metrics feature.
*/
- pmu = perf_pmu__find_by_type(PERF_TYPE_RAW);
- if (pmu && pmu_have_event(pmu->name, "slots"))
+ pmu = perf_pmus__find_by_type(PERF_TYPE_RAW);
+ if (pmu && perf_pmu__have_event(pmu, "slots"))
has_perf_metrics = true;
cached = true;
diff --git a/tools/perf/bench/pmu-scan.c b/tools/perf/bench/pmu-scan.c
index f4a6c37cbe27..51cae2d03353 100644
--- a/tools/perf/bench/pmu-scan.c
+++ b/tools/perf/bench/pmu-scan.c
@@ -44,7 +44,7 @@ static int save_result(void)
struct list_head *list;
struct pmu_scan_result *r;
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
r = realloc(results, (nr_pmus + 1) * sizeof(*r));
if (r == NULL)
return -ENOMEM;
@@ -68,7 +68,7 @@ static int save_result(void)
nr_pmus++;
}
- perf_pmu__destroy();
+ perf_pmus__destroy();
return 0;
}
@@ -81,7 +81,7 @@ static int check_result(void)
for (int i = 0; i < nr_pmus; i++) {
r = &results[i];
- pmu = perf_pmu__find(r->name);
+ pmu = perf_pmus__find(r->name);
if (pmu == NULL) {
pr_err("Cannot find PMU %s\n", r->name);
return -1;
@@ -144,7 +144,7 @@ static int run_pmu_scan(void)
for (i = 0; i < iterations; i++) {
gettimeofday(&start, NULL);
- perf_pmu__scan(NULL);
+ perf_pmus__scan(NULL);
gettimeofday(&end, NULL);
timersub(&end, &start, &diff);
@@ -152,7 +152,7 @@ static int run_pmu_scan(void)
update_stats(&stats, runtime_us);
ret = check_result();
- perf_pmu__destroy();
+ perf_pmus__destroy();
if (ret < 0)
break;
}
diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c
index 2757ccc19c5e..05dfd98af170 100644
--- a/tools/perf/builtin-c2c.c
+++ b/tools/perf/builtin-c2c.c
@@ -41,7 +41,7 @@
#include "symbol.h"
#include "ui/ui.h"
#include "ui/progress.h"
-#include "pmu.h"
+#include "pmus.h"
#include "string2.h"
#include "util/util.h"
@@ -3259,7 +3259,7 @@ static int perf_c2c__record(int argc, const char **argv)
PARSE_OPT_KEEP_UNKNOWN);
/* Max number of arguments multiplied by number of PMUs that can support them. */
- rec_argc = argc + 11 * perf_pmu__num_mem_pmus();
+ rec_argc = argc + 11 * perf_pmus__num_mem_pmus();
rec_argv = calloc(rec_argc + 1, sizeof(char *));
if (!rec_argv)
diff --git a/tools/perf/builtin-list.c b/tools/perf/builtin-list.c
index c6bd0aa4a56e..6a2e74bdb1db 100644
--- a/tools/perf/builtin-list.c
+++ b/tools/perf/builtin-list.c
@@ -522,7 +522,7 @@ int cmd_list(int argc, const char **argv)
strcmp(argv[i], "hwcache") == 0)
print_hwcache_events(&print_cb, ps);
else if (strcmp(argv[i], "pmu") == 0)
- print_pmu_events(&print_cb, ps);
+ perf_pmus__print_pmu_events(&print_cb, ps);
else if (strcmp(argv[i], "sdt") == 0)
print_sdt_events(&print_cb, ps);
else if (strcmp(argv[i], "metric") == 0 || strcmp(argv[i], "metrics") == 0) {
@@ -562,7 +562,7 @@ int cmd_list(int argc, const char **argv)
event_symbols_sw, PERF_COUNT_SW_MAX);
print_tool_events(&print_cb, ps);
print_hwcache_events(&print_cb, ps);
- print_pmu_events(&print_cb, ps);
+ perf_pmus__print_pmu_events(&print_cb, ps);
print_tracepoint_events(&print_cb, ps);
print_sdt_events(&print_cb, ps);
default_ps.metrics = true;
diff --git a/tools/perf/builtin-mem.c b/tools/perf/builtin-mem.c
index f4f1ff76d49d..960bfd4b732a 100644
--- a/tools/perf/builtin-mem.c
+++ b/tools/perf/builtin-mem.c
@@ -17,7 +17,7 @@
#include "util/dso.h"
#include "util/map.h"
#include "util/symbol.h"
-#include "util/pmu.h"
+#include "util/pmus.h"
#include "util/sample.h"
#include "util/string2.h"
#include "util/util.h"
@@ -93,7 +93,7 @@ static int __cmd_record(int argc, const char **argv, struct perf_mem *mem)
PARSE_OPT_KEEP_UNKNOWN);
/* Max number of arguments multiplied by number of PMUs that can support them. */
- rec_argc = argc + 9 * perf_pmu__num_mem_pmus();
+ rec_argc = argc + 9 * perf_pmus__num_mem_pmus();
if (mem->cpu_list)
rec_argc += 2;
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index aebe103fb734..e1f3525205bb 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -48,6 +48,8 @@
#include "util/bpf-event.h"
#include "util/util.h"
#include "util/pfm.h"
+#include "util/pmu.h"
+#include "util/pmus.h"
#include "util/clockid.h"
#include "util/off_cpu.h"
#include "util/bpf-filter.h"
@@ -1292,7 +1294,7 @@ static int record__open(struct record *rec)
* of waiting or event synthesis.
*/
if (opts->target.initial_delay || target__has_cpu(&opts->target) ||
- perf_pmu__has_hybrid()) {
+ perf_pmus__has_hybrid()) {
pos = evlist__get_tracking_event(evlist);
if (!evsel__is_dummy_event(pos)) {
/* Set up dummy event. */
@@ -2191,7 +2193,7 @@ static void record__uniquify_name(struct record *rec)
char *new_name;
int ret;
- if (!perf_pmu__has_hybrid())
+ if (!perf_pmus__has_hybrid())
return;
evlist__for_each_entry(evlist, pos) {
@@ -4191,7 +4193,7 @@ int cmd_record(int argc, const char **argv)
evlist__warn_user_requested_cpus(rec->evlist, rec->opts.target.cpu_list);
- rec->opts.target.hybrid = perf_pmu__has_hybrid();
+ rec->opts.target.hybrid = perf_pmus__has_hybrid();
if (callchain_param.enabled && callchain_param.record_mode == CALLCHAIN_FP)
arch__add_leaf_frame_record_opts(&rec->opts);
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index bd51da5fd3a5..4b315e04e079 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -1882,11 +1882,11 @@ static int add_default_attributes(void)
if (evlist__add_default_attrs(evsel_list, default_attrs0) < 0)
return -1;
- if (pmu_have_event("cpu", "stalled-cycles-frontend")) {
+ if (perf_pmus__have_event("cpu", "stalled-cycles-frontend")) {
if (evlist__add_default_attrs(evsel_list, frontend_attrs) < 0)
return -1;
}
- if (pmu_have_event("cpu", "stalled-cycles-backend")) {
+ if (perf_pmus__have_event("cpu", "stalled-cycles-backend")) {
if (evlist__add_default_attrs(evsel_list, backend_attrs) < 0)
return -1;
}
@@ -2460,7 +2460,7 @@ int cmd_stat(int argc, const char **argv)
evlist__warn_user_requested_cpus(evsel_list, target.cpu_list);
- target.hybrid = perf_pmu__has_hybrid();
+ target.hybrid = perf_pmus__has_hybrid();
if (evlist__create_maps(evsel_list, &target) < 0) {
if (target__has_task(&target)) {
pr_err("Problems finding threads of monitor\n");
diff --git a/tools/perf/tests/attr.c b/tools/perf/tests/attr.c
index 56fba08a3037..674876e6c8e6 100644
--- a/tools/perf/tests/attr.c
+++ b/tools/perf/tests/attr.c
@@ -34,7 +34,7 @@
#include "event.h"
#include "util.h"
#include "tests.h"
-#include "pmu.h"
+#include "pmus.h"
#define ENV "PERF_TEST_ATTR"
@@ -185,7 +185,7 @@ static int test__attr(struct test_suite *test __maybe_unused, int subtest __mayb
char path_dir[PATH_MAX];
char *exec_path;
- if (perf_pmu__has_hybrid())
+ if (perf_pmus__has_hybrid())
return TEST_SKIP;
/* First try development tree tests. */
diff --git a/tools/perf/tests/event_groups.c b/tools/perf/tests/event_groups.c
index 3d9a2b524bba..ccd9d8b2903f 100644
--- a/tools/perf/tests/event_groups.c
+++ b/tools/perf/tests/event_groups.c
@@ -53,7 +53,7 @@ static int setup_uncore_event(void)
struct perf_pmu *pmu = NULL;
int i, fd;
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
for (i = 0; i < NR_UNCORE_PMUS; i++) {
if (!strcmp(uncore_pmus[i].name, pmu->name)) {
pr_debug("Using %s for uncore pmu event\n", pmu->name);
diff --git a/tools/perf/tests/parse-events.c b/tools/perf/tests/parse-events.c
index 277607ede060..9d05bc551791 100644
--- a/tools/perf/tests/parse-events.c
+++ b/tools/perf/tests/parse-events.c
@@ -112,7 +112,7 @@ static int test__checkevent_raw(struct evlist *evlist)
bool type_matched = false;
TEST_ASSERT_VAL("wrong config", test_perf_config(evsel, 0x1a));
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
if (pmu->type == evsel->attr.type) {
TEST_ASSERT_VAL("PMU type expected once", !type_matched);
type_matched = true;
@@ -1443,12 +1443,12 @@ static int test__checkevent_config_cache(struct evlist *evlist)
static bool test__pmu_cpu_valid(void)
{
- return !!perf_pmu__find("cpu");
+ return !!perf_pmus__find("cpu");
}
static bool test__intel_pt_valid(void)
{
- return !!perf_pmu__find("intel_pt");
+ return !!perf_pmus__find("intel_pt");
}
static int test__intel_pt(struct evlist *evlist)
@@ -2246,7 +2246,7 @@ static int test__pmu_events(struct test_suite *test __maybe_unused, int subtest
struct perf_pmu *pmu = NULL;
int ret = TEST_OK;
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
struct stat st;
char path[PATH_MAX];
struct dirent *ent;
diff --git a/tools/perf/tests/parse-metric.c b/tools/perf/tests/parse-metric.c
index c05148ea400c..1d6493a5a956 100644
--- a/tools/perf/tests/parse-metric.c
+++ b/tools/perf/tests/parse-metric.c
@@ -11,7 +11,7 @@
#include "debug.h"
#include "expr.h"
#include "stat.h"
-#include "pmu.h"
+#include "pmus.h"
struct value {
const char *event;
@@ -303,7 +303,7 @@ static int test__parse_metric(struct test_suite *test __maybe_unused, int subtes
TEST_ASSERT_VAL("recursion fail failed", test_recursion_fail() == 0);
TEST_ASSERT_VAL("Memory bandwidth", test_memory_bandwidth() == 0);
- if (!perf_pmu__has_hybrid()) {
+ if (!perf_pmus__has_hybrid()) {
TEST_ASSERT_VAL("cache_miss_cycles failed", test_cache_miss_cycles() == 0);
TEST_ASSERT_VAL("test metric group", test_metric_group() == 0);
}
diff --git a/tools/perf/tests/pmu-events.c b/tools/perf/tests/pmu-events.c
index 734004f1a37d..64ecb7845af4 100644
--- a/tools/perf/tests/pmu-events.c
+++ b/tools/perf/tests/pmu-events.c
@@ -2,6 +2,7 @@
#include "math.h"
#include "parse-events.h"
#include "pmu.h"
+#include "pmus.h"
#include "tests.h"
#include <errno.h>
#include <stdio.h>
@@ -708,7 +709,7 @@ static int test__aliases(struct test_suite *test __maybe_unused,
struct perf_pmu *pmu = NULL;
unsigned long i;
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
int count = 0;
if (!is_pmu_core(pmu->name))
diff --git a/tools/perf/tests/switch-tracking.c b/tools/perf/tests/switch-tracking.c
index b3bd14b025a8..cff6ab87b2f6 100644
--- a/tools/perf/tests/switch-tracking.c
+++ b/tools/perf/tests/switch-tracking.c
@@ -20,7 +20,7 @@
#include "tests.h"
#include "util/mmap.h"
#include "util/sample.h"
-#include "pmu.h"
+#include "pmus.h"
static int spin_sleep(void)
{
@@ -375,7 +375,7 @@ static int test__switch_tracking(struct test_suite *test __maybe_unused, int sub
cpu_clocks_evsel = evlist__last(evlist);
/* Second event */
- if (perf_pmu__has_hybrid()) {
+ if (perf_pmus__has_hybrid()) {
cycles = "cpu_core/cycles/u";
err = parse_event(evlist, cycles);
if (err) {
diff --git a/tools/perf/tests/topology.c b/tools/perf/tests/topology.c
index c4630cfc80ea..49e80d15420b 100644
--- a/tools/perf/tests/topology.c
+++ b/tools/perf/tests/topology.c
@@ -8,7 +8,7 @@
#include "session.h"
#include "evlist.h"
#include "debug.h"
-#include "pmu.h"
+#include "pmus.h"
#include <linux/err.h>
#define TEMPL "/tmp/perf-test-XXXXXX"
@@ -41,7 +41,7 @@ static int session_write_header(char *path)
session = perf_session__new(&data, NULL);
TEST_ASSERT_VAL("can't get session", !IS_ERR(session));
- if (!perf_pmu__has_hybrid()) {
+ if (!perf_pmus__has_hybrid()) {
session->evlist = evlist__new_default();
TEST_ASSERT_VAL("can't get evlist", session->evlist);
} else {
diff --git a/tools/perf/util/cputopo.c b/tools/perf/util/cputopo.c
index a5c259bd5cc0..4578c26747e1 100644
--- a/tools/perf/util/cputopo.c
+++ b/tools/perf/util/cputopo.c
@@ -13,6 +13,7 @@
#include "debug.h"
#include "env.h"
#include "pmu.h"
+#include "pmus.h"
#define PACKAGE_CPUS_FMT \
"%s/devices/system/cpu/cpu%d/topology/package_cpus_list"
@@ -473,10 +474,10 @@ struct hybrid_topology *hybrid_topology__new(void)
struct hybrid_topology *tp = NULL;
u32 nr = 0, i = 0;
- if (!perf_pmu__has_hybrid())
+ if (!perf_pmus__has_hybrid())
return NULL;
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
if (pmu->is_core)
nr++;
}
@@ -488,7 +489,7 @@ struct hybrid_topology *hybrid_topology__new(void)
return NULL;
tp->nr = nr;
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
if (!pmu->is_core)
continue;
diff --git a/tools/perf/util/env.c b/tools/perf/util/env.c
index 4a4fdad820d6..9eabf3ec56e9 100644
--- a/tools/perf/util/env.c
+++ b/tools/perf/util/env.c
@@ -10,6 +10,7 @@
#include <sys/utsname.h>
#include <stdlib.h>
#include <string.h>
+#include "pmus.h"
#include "strbuf.h"
struct perf_env perf_env;
@@ -323,7 +324,7 @@ int perf_env__read_pmu_mappings(struct perf_env *env)
u32 pmu_num = 0;
struct strbuf sb;
- while ((pmu = perf_pmu__scan(pmu))) {
+ while ((pmu = perf_pmus__scan(pmu))) {
if (!pmu->name)
continue;
pmu_num++;
@@ -337,7 +338,7 @@ int perf_env__read_pmu_mappings(struct perf_env *env)
if (strbuf_init(&sb, 128 * pmu_num) < 0)
return -ENOMEM;
- while ((pmu = perf_pmu__scan(pmu))) {
+ while ((pmu = perf_pmus__scan(pmu))) {
if (!pmu->name)
continue;
if (strbuf_addf(&sb, "%u:%s", pmu->type, pmu->name) < 0)
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index 2c0ed7d25466..42d6dfacf191 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -48,6 +48,7 @@
#include "util/hashmap.h"
#include "off_cpu.h"
#include "pmu.h"
+#include "pmus.h"
#include "../perf-sys.h"
#include "util/parse-branch-options.h"
#include "util/bpf-filter.h"
@@ -3135,7 +3136,7 @@ bool evsel__is_hybrid(const struct evsel *evsel)
{
struct perf_pmu *pmu;
- if (!perf_pmu__has_hybrid())
+ if (!perf_pmus__has_hybrid())
return false;
pmu = evsel__find_pmu(evsel);
diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
index e24cc8f316cd..fa3f7dbbd90e 100644
--- a/tools/perf/util/header.c
+++ b/tools/perf/util/header.c
@@ -37,6 +37,7 @@
#include "debug.h"
#include "cpumap.h"
#include "pmu.h"
+#include "pmus.h"
#include "vdso.h"
#include "strbuf.h"
#include "build-id.h"
@@ -744,7 +745,7 @@ static int write_pmu_mappings(struct feat_fd *ff,
* Do a first pass to count number of pmu to avoid lseek so this
* works in pipe mode as well.
*/
- while ((pmu = perf_pmu__scan(pmu))) {
+ while ((pmu = perf_pmus__scan(pmu))) {
if (!pmu->name)
continue;
pmu_num++;
@@ -754,7 +755,7 @@ static int write_pmu_mappings(struct feat_fd *ff,
if (ret < 0)
return ret;
- while ((pmu = perf_pmu__scan(pmu))) {
+ while ((pmu = perf_pmus__scan(pmu))) {
if (!pmu->name)
continue;
@@ -1550,7 +1551,7 @@ static int __write_pmu_caps(struct feat_fd *ff, struct perf_pmu *pmu,
static int write_cpu_pmu_caps(struct feat_fd *ff,
struct evlist *evlist __maybe_unused)
{
- struct perf_pmu *cpu_pmu = perf_pmu__find("cpu");
+ struct perf_pmu *cpu_pmu = perf_pmus__find("cpu");
int ret;
if (!cpu_pmu)
@@ -1570,7 +1571,7 @@ static int write_pmu_caps(struct feat_fd *ff,
int nr_pmu = 0;
int ret;
- while ((pmu = perf_pmu__scan(pmu))) {
+ while ((pmu = perf_pmus__scan(pmu))) {
if (!pmu->name || !strcmp(pmu->name, "cpu") ||
perf_pmu__caps_parse(pmu) <= 0)
continue;
@@ -1588,9 +1589,9 @@ static int write_pmu_caps(struct feat_fd *ff,
* Write hybrid pmu caps first to maintain compatibility with
* older perf tool.
*/
- if (perf_pmu__has_hybrid()) {
+ if (perf_pmus__has_hybrid()) {
pmu = NULL;
- while ((pmu = perf_pmu__scan(pmu))) {
+ while ((pmu = perf_pmus__scan(pmu))) {
if (!pmu->is_core)
continue;
@@ -1601,7 +1602,7 @@ static int write_pmu_caps(struct feat_fd *ff,
}
pmu = NULL;
- while ((pmu = perf_pmu__scan(pmu))) {
+ while ((pmu = perf_pmus__scan(pmu))) {
if (pmu->is_core || !pmu->nr_caps)
continue;
diff --git a/tools/perf/util/mem-events.c b/tools/perf/util/mem-events.c
index c9e422a38258..08ac3ea2e366 100644
--- a/tools/perf/util/mem-events.c
+++ b/tools/perf/util/mem-events.c
@@ -13,6 +13,7 @@
#include "debug.h"
#include "symbol.h"
#include "pmu.h"
+#include "pmus.h"
unsigned int perf_mem_events__loads_ldlat = 30;
@@ -128,14 +129,14 @@ int perf_mem_events__init(void)
if (!e->tag)
continue;
- if (!perf_pmu__has_hybrid()) {
+ if (!perf_pmus__has_hybrid()) {
scnprintf(sysfs_name, sizeof(sysfs_name),
e->sysfs_name, "cpu");
e->supported = perf_mem_event__supported(mnt, sysfs_name);
} else {
struct perf_pmu *pmu = NULL;
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
if (!pmu->is_core)
continue;
@@ -175,7 +176,7 @@ static void perf_mem_events__print_unsupport_hybrid(struct perf_mem_event *e,
char sysfs_name[100];
struct perf_pmu *pmu = NULL;
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
if (!pmu->is_core)
continue;
@@ -201,7 +202,7 @@ int perf_mem_events__record_args(const char **rec_argv, int *argv_nr,
if (!e->record)
continue;
- if (!perf_pmu__has_hybrid()) {
+ if (!perf_pmus__has_hybrid()) {
if (!e->supported) {
pr_err("failed: event '%s' not supported\n",
perf_mem_events__name(j, NULL));
@@ -216,7 +217,7 @@ int perf_mem_events__record_args(const char **rec_argv, int *argv_nr,
return -1;
}
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
if (!pmu->is_core)
continue;
rec_argv[i++] = "-e";
diff --git a/tools/perf/util/metricgroup.c b/tools/perf/util/metricgroup.c
index 72583f62eda8..27310eff49ab 100644
--- a/tools/perf/util/metricgroup.c
+++ b/tools/perf/util/metricgroup.c
@@ -11,6 +11,7 @@
#include "evsel.h"
#include "strbuf.h"
#include "pmu.h"
+#include "pmus.h"
#include "print-events.h"
#include "smt.h"
#include "expr.h"
@@ -273,7 +274,7 @@ static int setup_metric_events(const char *pmu, struct hashmap *ids,
const char *metric_id;
struct evsel *ev;
size_t ids_size, matched_events, i;
- bool all_pmus = !strcmp(pmu, "all") || !perf_pmu__has_hybrid() || !is_pmu_hybrid(pmu);
+ bool all_pmus = !strcmp(pmu, "all") || !perf_pmus__has_hybrid() || !is_pmu_hybrid(pmu);
*out_metric_events = NULL;
ids_size = hashmap__size(ids);
@@ -488,7 +489,7 @@ static int metricgroup__sys_event_iter(const struct pmu_metric *pm,
if (!pm->metric_expr || !pm->compat)
return 0;
- while ((pmu = perf_pmu__scan(pmu))) {
+ while ((pmu = perf_pmus__scan(pmu))) {
if (!pmu->id || strcmp(pmu->id, pm->compat))
continue;
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index b93264f8a37c..984b230e14d4 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -21,6 +21,7 @@
#include "parse-events-bison.h"
#include "parse-events-flex.h"
#include "pmu.h"
+#include "pmus.h"
#include "asm/bug.h"
#include "util/parse-branch-options.h"
#include "util/evsel_config.h"
@@ -451,7 +452,7 @@ int parse_events_add_cache(struct list_head *list, int *idx, const char *name,
const char *config_name = get_config_name(head_config);
const char *metric_id = get_config_metric_id(head_config);
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
LIST_HEAD(config_terms);
struct perf_event_attr attr;
int ret;
@@ -1192,7 +1193,7 @@ static int config_term_pmu(struct perf_event_attr *attr,
struct parse_events_error *err)
{
if (term->type_term == PARSE_EVENTS__TERM_TYPE_LEGACY_CACHE) {
- const struct perf_pmu *pmu = perf_pmu__find_by_type(attr->type);
+ const struct perf_pmu *pmu = perf_pmus__find_by_type(attr->type);
if (perf_pmu__supports_legacy_cache(pmu)) {
attr->type = PERF_TYPE_HW_CACHE;
@@ -1202,7 +1203,7 @@ static int config_term_pmu(struct perf_event_attr *attr,
term->type_term = PARSE_EVENTS__TERM_TYPE_USER;
}
if (term->type_term == PARSE_EVENTS__TERM_TYPE_HARDWARE) {
- const struct perf_pmu *pmu = perf_pmu__find_by_type(attr->type);
+ const struct perf_pmu *pmu = perf_pmus__find_by_type(attr->type);
if (!pmu) {
pr_debug("Failed to find PMU for type %d", attr->type);
@@ -1479,7 +1480,7 @@ int parse_events_add_numeric(struct parse_events_state *parse_state,
return __parse_events_add_numeric(parse_state, list, /*pmu=*/NULL,
type, config, head_config);
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
int ret;
if (!perf_pmu__supports_wildcard_numeric(pmu))
@@ -1528,7 +1529,7 @@ int parse_events_add_pmu(struct parse_events_state *parse_state,
struct parse_events_error *err = parse_state->error;
LIST_HEAD(config_terms);
- pmu = parse_state->fake_pmu ?: perf_pmu__find(name);
+ pmu = parse_state->fake_pmu ?: perf_pmus__find(name);
if (verbose > 1 && !(pmu && pmu->selectable)) {
fprintf(stderr, "Attempting to add event pmu '%s' with '",
@@ -1673,7 +1674,7 @@ int parse_events_multi_pmu_add(struct parse_events_state *parse_state,
INIT_LIST_HEAD(list);
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
struct perf_pmu_alias *alias;
bool auto_merge_stats;
@@ -2409,7 +2410,7 @@ static int set_filter(struct evsel *evsel, const void *arg)
return 0;
}
- while ((pmu = perf_pmu__scan(pmu)) != NULL)
+ while ((pmu = perf_pmus__scan(pmu)) != NULL)
if (pmu->type == evsel->core.attr.type) {
found = true;
break;
diff --git a/tools/perf/util/parse-events.y b/tools/perf/util/parse-events.y
index 4e1f5de35be8..abd6ab460e12 100644
--- a/tools/perf/util/parse-events.y
+++ b/tools/perf/util/parse-events.y
@@ -15,6 +15,7 @@
#include <linux/types.h>
#include <linux/zalloc.h>
#include "pmu.h"
+#include "pmus.h"
#include "evsel.h"
#include "parse-events.h"
#include "parse-events-bison.h"
@@ -316,7 +317,7 @@ PE_NAME opt_pmu_config
if (asprintf(&pattern, "%s*", $1) < 0)
CLEANUP_YYABORT;
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
char *name = pmu->name;
if (parse_events__filter_pmu(parse_state, pmu))
diff --git a/tools/perf/util/pfm.c b/tools/perf/util/pfm.c
index 6c11914c179f..076aecc22c16 100644
--- a/tools/perf/util/pfm.c
+++ b/tools/perf/util/pfm.c
@@ -10,7 +10,7 @@
#include "util/evlist.h"
#include "util/evsel.h"
#include "util/parse-events.h"
-#include "util/pmu.h"
+#include "util/pmus.h"
#include "util/pfm.h"
#include "util/strbuf.h"
@@ -49,7 +49,7 @@ int parse_libpfm_events_option(const struct option *opt, const char *str,
/*
* force loading of the PMU list
*/
- perf_pmu__scan(NULL);
+ perf_pmus__scan(NULL);
for (q = p; strsep(&p, ",{}"); q = p) {
sep = p ? str + (p - p_orig - 1) : "";
@@ -86,7 +86,7 @@ int parse_libpfm_events_option(const struct option *opt, const char *str,
goto error;
}
- pmu = perf_pmu__find_by_type((unsigned int)attr.type);
+ pmu = perf_pmus__find_by_type((unsigned int)attr.type);
evsel = parse_events__add_event(evlist->core.nr_entries,
&attr, q, /*metric_id=*/NULL,
pmu);
diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
index bcf9d78a0003..3217a859c65b 100644
--- a/tools/perf/util/pmu.c
+++ b/tools/perf/util/pmu.c
@@ -4,20 +4,15 @@
#include <linux/string.h>
#include <linux/zalloc.h>
#include <linux/ctype.h>
-#include <subcmd/pager.h>
#include <sys/types.h>
-#include <errno.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>
#include <stdbool.h>
-#include <stdarg.h>
#include <dirent.h>
#include <api/fs/fs.h>
#include <locale.h>
-#include <regex.h>
-#include <perf/cpumap.h>
#include <fnmatch.h>
#include <math.h>
#include "debug.h"
@@ -59,8 +54,6 @@ struct perf_pmu_format {
struct list_head list;
};
-static struct perf_pmu *perf_pmu__find2(int dirfd, const char *name);
-
/*
* Parse & process all the sysfs attributes located under
* the directory specified in 'dir' parameter.
@@ -554,31 +547,6 @@ static int pmu_alias_terms(struct perf_pmu_alias *alias,
return 0;
}
-/* Add all pmus in sysfs to pmu list: */
-static void pmu_read_sysfs(void)
-{
- int fd;
- DIR *dir;
- struct dirent *dent;
-
- fd = perf_pmu__event_source_devices_fd();
- if (fd < 0)
- return;
-
- dir = fdopendir(fd);
- if (!dir)
- return;
-
- while ((dent = readdir(dir))) {
- if (!strcmp(dent->d_name, ".") || !strcmp(dent->d_name, ".."))
- continue;
- /* add to static LIST_HEAD(pmus): */
- perf_pmu__find2(fd, dent->d_name);
- }
-
- closedir(dir);
-}
-
/*
* Uncore PMUs have a "cpumask" file under sysfs. CPU PMUs (e.g. on arm/arm64)
* may have a "cpus" file.
@@ -904,7 +872,7 @@ static bool perf_pmu__skip_empty_cpus(const char *name)
return !strcmp(name, "cpu_core") || !strcmp(name, "cpu_atom");
}
-static struct perf_pmu *pmu_lookup(int dirfd, const char *lookup_name)
+struct perf_pmu *perf_pmu__lookup(struct list_head *pmus, int dirfd, const char *lookup_name)
{
struct perf_pmu *pmu;
LIST_HEAD(format);
@@ -964,7 +932,7 @@ static struct perf_pmu *pmu_lookup(int dirfd, const char *lookup_name)
INIT_LIST_HEAD(&pmu->caps);
list_splice(&format, &pmu->format);
list_splice(&aliases, &pmu->aliases);
- list_add_tail(&pmu->list, &pmus);
+ list_add_tail(&pmu->list, pmus);
pmu->default_config = perf_pmu__get_default_config(pmu);
@@ -992,61 +960,6 @@ void perf_pmu__warn_invalid_formats(struct perf_pmu *pmu)
}
}
-static struct perf_pmu *pmu_find(const char *name)
-{
- struct perf_pmu *pmu;
-
- list_for_each_entry(pmu, &pmus, list) {
- if (!strcmp(pmu->name, name) ||
- (pmu->alias_name && !strcmp(pmu->alias_name, name)))
- return pmu;
- }
-
- return NULL;
-}
-
-struct perf_pmu *perf_pmu__find_by_type(unsigned int type)
-{
- struct perf_pmu *pmu;
-
- list_for_each_entry(pmu, &pmus, list)
- if (pmu->type == type)
- return pmu;
-
- return NULL;
-}
-
-struct perf_pmu *perf_pmu__scan(struct perf_pmu *pmu)
-{
- /*
- * pmu iterator: If pmu is NULL, we start at the begin,
- * otherwise return the next pmu. Returns NULL on end.
- */
- if (!pmu) {
- pmu_read_sysfs();
- pmu = list_prepare_entry(pmu, &pmus, list);
- }
- list_for_each_entry_continue(pmu, &pmus, list)
- return pmu;
- return NULL;
-}
-
-struct perf_pmu *evsel__find_pmu(const struct evsel *evsel)
-{
- struct perf_pmu *pmu = NULL;
-
- if (evsel->pmu)
- return evsel->pmu;
-
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
- if (pmu->type == evsel->core.attr.type)
- break;
- }
-
- ((struct evsel *)evsel)->pmu = pmu;
- return pmu;
-}
-
bool evsel__is_aux_event(const struct evsel *evsel)
{
struct perf_pmu *pmu = evsel__find_pmu(evsel);
@@ -1083,43 +996,6 @@ void evsel__set_config_if_unset(struct perf_pmu *pmu, struct evsel *evsel,
evsel->core.attr.config |= field_prep(bits, val);
}
-struct perf_pmu *perf_pmu__find(const char *name)
-{
- struct perf_pmu *pmu;
- int dirfd;
-
- /*
- * Once PMU is loaded it stays in the list,
- * so we keep us from multiple reading/parsing
- * the pmu format definitions.
- */
- pmu = pmu_find(name);
- if (pmu)
- return pmu;
-
- dirfd = perf_pmu__event_source_devices_fd();
- pmu = pmu_lookup(dirfd, name);
- close(dirfd);
-
- return pmu;
-}
-
-static struct perf_pmu *perf_pmu__find2(int dirfd, const char *name)
-{
- struct perf_pmu *pmu;
-
- /*
- * Once PMU is loaded it stays in the list,
- * so we keep us from multiple reading/parsing
- * the pmu format definitions.
- */
- pmu = pmu_find(name);
- if (pmu)
- return pmu;
-
- return pmu_lookup(dirfd, name);
-}
-
static struct perf_pmu_format *
pmu_find_format(struct list_head *formats, const char *name)
{
@@ -1549,99 +1425,6 @@ void perf_pmu__del_formats(struct list_head *formats)
}
}
-static int sub_non_neg(int a, int b)
-{
- if (b > a)
- return 0;
- return a - b;
-}
-
-static char *format_alias(char *buf, int len, const struct perf_pmu *pmu,
- const struct perf_pmu_alias *alias)
-{
- struct parse_events_term *term;
- int used = snprintf(buf, len, "%s/%s", pmu->name, alias->name);
-
- list_for_each_entry(term, &alias->terms, list) {
- if (term->type_val == PARSE_EVENTS__TERM_TYPE_STR)
- used += snprintf(buf + used, sub_non_neg(len, used),
- ",%s=%s", term->config,
- term->val.str);
- }
-
- if (sub_non_neg(len, used) > 0) {
- buf[used] = '/';
- used++;
- }
- if (sub_non_neg(len, used) > 0) {
- buf[used] = '\0';
- used++;
- } else
- buf[len - 1] = '\0';
-
- return buf;
-}
-
-/** Struct for ordering events as output in perf list. */
-struct sevent {
- /** PMU for event. */
- const struct perf_pmu *pmu;
- /**
- * Optional event for name, desc, etc. If not present then this is a
- * selectable PMU and the event name is shown as "//".
- */
- const struct perf_pmu_alias *event;
- /** Is the PMU for the CPU? */
- bool is_cpu;
-};
-
-static int cmp_sevent(const void *a, const void *b)
-{
- const struct sevent *as = a;
- const struct sevent *bs = b;
- const char *a_pmu_name = NULL, *b_pmu_name = NULL;
- const char *a_name = "//", *a_desc = NULL, *a_topic = "";
- const char *b_name = "//", *b_desc = NULL, *b_topic = "";
- int ret;
-
- if (as->event) {
- a_name = as->event->name;
- a_desc = as->event->desc;
- a_topic = as->event->topic ?: "";
- a_pmu_name = as->event->pmu_name;
- }
- if (bs->event) {
- b_name = bs->event->name;
- b_desc = bs->event->desc;
- b_topic = bs->event->topic ?: "";
- b_pmu_name = bs->event->pmu_name;
- }
- /* Put extra events last. */
- if (!!a_desc != !!b_desc)
- return !!a_desc - !!b_desc;
-
- /* Order by topics. */
- ret = strcmp(a_topic, b_topic);
- if (ret)
- return ret;
-
- /* Order CPU core events to be first */
- if (as->is_cpu != bs->is_cpu)
- return as->is_cpu ? -1 : 1;
-
- /* Order by PMU name. */
- if (as->pmu != bs->pmu) {
- a_pmu_name = a_pmu_name ?: (as->pmu->name ?: "");
- b_pmu_name = b_pmu_name ?: (bs->pmu->name ?: "");
- ret = strcmp(a_pmu_name, b_pmu_name);
- if (ret)
- return ret;
- }
-
- /* Order by event name. */
- return strcmp(a_name, b_name);
-}
-
bool is_pmu_core(const char *name)
{
return !strcmp(name, "cpu") || is_sysfs_pmu_core(name);
@@ -1667,167 +1450,18 @@ bool perf_pmu__auto_merge_stats(const struct perf_pmu *pmu)
return !is_pmu_hybrid(pmu->name);
}
-static bool perf_pmu__is_mem_pmu(const struct perf_pmu *pmu)
+bool perf_pmu__is_mem_pmu(const struct perf_pmu *pmu)
{
return pmu->is_core;
}
-int perf_pmu__num_mem_pmus(void)
-{
- struct perf_pmu *pmu = NULL;
- int count = 0;
-
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
- if (perf_pmu__is_mem_pmu(pmu))
- count++;
- }
- return count;
-}
-
-static bool pmu_alias_is_duplicate(struct sevent *alias_a,
- struct sevent *alias_b)
-{
- const char *a_pmu_name = NULL, *b_pmu_name = NULL;
- const char *a_name = "//", *b_name = "//";
-
-
- if (alias_a->event) {
- a_name = alias_a->event->name;
- a_pmu_name = alias_a->event->pmu_name;
- }
- if (alias_b->event) {
- b_name = alias_b->event->name;
- b_pmu_name = alias_b->event->pmu_name;
- }
-
- /* Different names -> never duplicates */
- if (strcmp(a_name, b_name))
- return false;
-
- /* Don't remove duplicates for different PMUs */
- a_pmu_name = a_pmu_name ?: (alias_a->pmu->name ?: "");
- b_pmu_name = b_pmu_name ?: (alias_b->pmu->name ?: "");
- return strcmp(a_pmu_name, b_pmu_name) == 0;
-}
-
-void print_pmu_events(const struct print_callbacks *print_cb, void *print_state)
-{
- struct perf_pmu *pmu;
- struct perf_pmu_alias *event;
- char buf[1024];
- int printed = 0;
- int len, j;
- struct sevent *aliases;
-
- pmu = NULL;
- len = 0;
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
- list_for_each_entry(event, &pmu->aliases, list)
- len++;
- if (pmu->selectable)
- len++;
- }
- aliases = zalloc(sizeof(struct sevent) * len);
- if (!aliases) {
- pr_err("FATAL: not enough memory to print PMU events\n");
- return;
- }
- pmu = NULL;
- j = 0;
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
- bool is_cpu = pmu->is_core;
-
- list_for_each_entry(event, &pmu->aliases, list) {
- aliases[j].event = event;
- aliases[j].pmu = pmu;
- aliases[j].is_cpu = is_cpu;
- j++;
- }
- if (pmu->selectable) {
- aliases[j].event = NULL;
- aliases[j].pmu = pmu;
- aliases[j].is_cpu = is_cpu;
- j++;
- }
- }
- len = j;
- qsort(aliases, len, sizeof(struct sevent), cmp_sevent);
- for (j = 0; j < len; j++) {
- const char *name, *alias = NULL, *scale_unit = NULL,
- *desc = NULL, *long_desc = NULL,
- *encoding_desc = NULL, *topic = NULL,
- *pmu_name = NULL;
- bool deprecated = false;
- size_t buf_used;
-
- /* Skip duplicates */
- if (j > 0 && pmu_alias_is_duplicate(&aliases[j], &aliases[j - 1]))
- continue;
-
- if (!aliases[j].event) {
- /* A selectable event. */
- pmu_name = aliases[j].pmu->name;
- buf_used = snprintf(buf, sizeof(buf), "%s//", pmu_name) + 1;
- name = buf;
- } else {
- if (aliases[j].event->desc) {
- name = aliases[j].event->name;
- buf_used = 0;
- } else {
- name = format_alias(buf, sizeof(buf), aliases[j].pmu,
- aliases[j].event);
- if (aliases[j].is_cpu) {
- alias = name;
- name = aliases[j].event->name;
- }
- buf_used = strlen(buf) + 1;
- }
- pmu_name = aliases[j].event->pmu_name ?: (aliases[j].pmu->name ?: "");
- if (strlen(aliases[j].event->unit) || aliases[j].event->scale != 1.0) {
- scale_unit = buf + buf_used;
- buf_used += snprintf(buf + buf_used, sizeof(buf) - buf_used,
- "%G%s", aliases[j].event->scale,
- aliases[j].event->unit) + 1;
- }
- desc = aliases[j].event->desc;
- long_desc = aliases[j].event->long_desc;
- topic = aliases[j].event->topic;
- encoding_desc = buf + buf_used;
- buf_used += snprintf(buf + buf_used, sizeof(buf) - buf_used,
- "%s/%s/", pmu_name, aliases[j].event->str) + 1;
- deprecated = aliases[j].event->deprecated;
- }
- print_cb->print_event(print_state,
- pmu_name,
- topic,
- name,
- alias,
- scale_unit,
- deprecated,
- "Kernel PMU event",
- desc,
- long_desc,
- encoding_desc);
- }
- if (printed && pager_in_use())
- printf("\n");
-
- zfree(&aliases);
- return;
-}
-
-bool pmu_have_event(const char *pname, const char *name)
+bool perf_pmu__have_event(const struct perf_pmu *pmu, const char *name)
{
- struct perf_pmu *pmu;
struct perf_pmu_alias *alias;
- pmu = NULL;
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
- if (strcmp(pname, pmu->name))
- continue;
- list_for_each_entry(alias, &pmu->aliases, list)
- if (!strcmp(alias->name, name))
- return true;
+ list_for_each_entry(alias, &pmu->aliases, list) {
+ if (!strcmp(alias->name, name))
+ return true;
}
return false;
}
@@ -2033,24 +1667,6 @@ void perf_pmu__warn_invalid_config(struct perf_pmu *pmu, __u64 config,
name ?: "N/A", buf, config);
}
-bool perf_pmu__has_hybrid(void)
-{
- static bool hybrid_scanned, has_hybrid;
-
- if (!hybrid_scanned) {
- struct perf_pmu *pmu = NULL;
-
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
- if (pmu->is_core && is_pmu_hybrid(pmu->name)) {
- has_hybrid = true;
- break;
- }
- }
- hybrid_scanned = true;
- }
- return has_hybrid;
-}
-
int perf_pmu__match(char *pattern, char *name, char *tok)
{
if (!name)
@@ -2118,7 +1734,7 @@ int perf_pmu__pathname_fd(int dirfd, const char *pmu_name, const char *filename,
return openat(dirfd, path, flags);
}
-static void perf_pmu__delete(struct perf_pmu *pmu)
+void perf_pmu__delete(struct perf_pmu *pmu)
{
perf_pmu__del_formats(&pmu->format);
perf_pmu__del_aliases(pmu);
@@ -2131,14 +1747,3 @@ static void perf_pmu__delete(struct perf_pmu *pmu)
zfree(&pmu->alias_name);
free(pmu);
}
-
-void perf_pmu__destroy(void)
-{
- struct perf_pmu *pmu, *tmp;
-
- list_for_each_entry_safe(pmu, tmp, &pmus, list) {
- list_del(&pmu->list);
-
- perf_pmu__delete(pmu);
- }
-}
diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h
index cb51ad6e40fa..f1f3e8a2e00e 100644
--- a/tools/perf/util/pmu.h
+++ b/tools/perf/util/pmu.h
@@ -198,8 +198,6 @@ struct perf_pmu_alias {
char *pmu_name;
};
-struct perf_pmu *perf_pmu__find(const char *name);
-struct perf_pmu *perf_pmu__find_by_type(unsigned int type);
void pmu_add_sys_aliases(struct list_head *head, struct perf_pmu *pmu);
int perf_pmu__config(struct perf_pmu *pmu, struct perf_event_attr *attr,
struct list_head *head_terms,
@@ -222,16 +220,13 @@ void perf_pmu__set_format(unsigned long *bits, long from, long to);
int perf_pmu__format_parse(int dirfd, struct list_head *head);
void perf_pmu__del_formats(struct list_head *formats);
-struct perf_pmu *perf_pmu__scan(struct perf_pmu *pmu);
-
bool is_pmu_core(const char *name);
bool is_pmu_hybrid(const char *name);
bool perf_pmu__supports_legacy_cache(const struct perf_pmu *pmu);
bool perf_pmu__supports_wildcard_numeric(const struct perf_pmu *pmu);
bool perf_pmu__auto_merge_stats(const struct perf_pmu *pmu);
-int perf_pmu__num_mem_pmus(void);
-void print_pmu_events(const struct print_callbacks *print_cb, void *print_state);
-bool pmu_have_event(const char *pname, const char *name);
+bool perf_pmu__is_mem_pmu(const struct perf_pmu *pmu);
+bool perf_pmu__have_event(const struct perf_pmu *pmu, const char *name);
FILE *perf_pmu__open_file(struct perf_pmu *pmu, const char *name);
FILE *perf_pmu__open_file_at(struct perf_pmu *pmu, int dirfd, const char *name);
@@ -261,7 +256,6 @@ void perf_pmu__warn_invalid_config(struct perf_pmu *pmu, __u64 config,
const char *name);
void perf_pmu__warn_invalid_formats(struct perf_pmu *pmu);
-bool perf_pmu__has_hybrid(void);
int perf_pmu__match(char *pattern, char *name, char *tok);
char *pmu_find_real_name(const char *name);
@@ -273,6 +267,7 @@ int perf_pmu__pathname_scnprintf(char *buf, size_t size,
int perf_pmu__event_source_devices_fd(void);
int perf_pmu__pathname_fd(int dirfd, const char *pmu_name, const char *filename, int flags);
-void perf_pmu__destroy(void);
+struct perf_pmu *perf_pmu__lookup(struct list_head *pmus, int dirfd, const char *lookup_name);
+void perf_pmu__delete(struct perf_pmu *pmu);
#endif /* __PMU_H */
diff --git a/tools/perf/util/pmus.c b/tools/perf/util/pmus.c
index 140e11f00b29..2fb28e583366 100644
--- a/tools/perf/util/pmus.c
+++ b/tools/perf/util/pmus.c
@@ -1,16 +1,136 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/list.h>
+#include <linux/zalloc.h>
+#include <subcmd/pager.h>
+#include <sys/types.h>
+#include <dirent.h>
#include <string.h>
+#include <unistd.h>
+#include "debug.h"
+#include "evsel.h"
#include "pmus.h"
#include "pmu.h"
+#include "print-events.h"
-LIST_HEAD(pmus);
+static LIST_HEAD(pmus);
+
+void perf_pmus__destroy(void)
+{
+ struct perf_pmu *pmu, *tmp;
+
+ list_for_each_entry_safe(pmu, tmp, &pmus, list) {
+ list_del(&pmu->list);
+
+ perf_pmu__delete(pmu);
+ }
+}
+
+static struct perf_pmu *pmu_find(const char *name)
+{
+ struct perf_pmu *pmu;
+
+ list_for_each_entry(pmu, &pmus, list) {
+ if (!strcmp(pmu->name, name) ||
+ (pmu->alias_name && !strcmp(pmu->alias_name, name)))
+ return pmu;
+ }
+
+ return NULL;
+}
+
+struct perf_pmu *perf_pmus__find(const char *name)
+{
+ struct perf_pmu *pmu;
+ int dirfd;
+
+ /*
+ * Once PMU is loaded it stays in the list,
+ * so we keep us from multiple reading/parsing
+ * the pmu format definitions.
+ */
+ pmu = pmu_find(name);
+ if (pmu)
+ return pmu;
+
+ dirfd = perf_pmu__event_source_devices_fd();
+ pmu = perf_pmu__lookup(&pmus, dirfd, name);
+ close(dirfd);
+
+ return pmu;
+}
+
+static struct perf_pmu *perf_pmu__find2(int dirfd, const char *name)
+{
+ struct perf_pmu *pmu;
+
+ /*
+ * Once PMU is loaded it stays in the list,
+ * so we keep us from multiple reading/parsing
+ * the pmu format definitions.
+ */
+ pmu = pmu_find(name);
+ if (pmu)
+ return pmu;
+
+ return perf_pmu__lookup(&pmus, dirfd, name);
+}
+
+/* Add all pmus in sysfs to pmu list: */
+static void pmu_read_sysfs(void)
+{
+ int fd;
+ DIR *dir;
+ struct dirent *dent;
+
+ fd = perf_pmu__event_source_devices_fd();
+ if (fd < 0)
+ return;
+
+ dir = fdopendir(fd);
+ if (!dir)
+ return;
+
+ while ((dent = readdir(dir))) {
+ if (!strcmp(dent->d_name, ".") || !strcmp(dent->d_name, ".."))
+ continue;
+ /* add to static LIST_HEAD(pmus): */
+ perf_pmu__find2(fd, dent->d_name);
+ }
+
+ closedir(dir);
+}
+
+struct perf_pmu *perf_pmus__find_by_type(unsigned int type)
+{
+ struct perf_pmu *pmu;
+
+ list_for_each_entry(pmu, &pmus, list)
+ if (pmu->type == type)
+ return pmu;
+
+ return NULL;
+}
+
+struct perf_pmu *perf_pmus__scan(struct perf_pmu *pmu)
+{
+ /*
+ * pmu iterator: If pmu is NULL, we start at the begin,
+ * otherwise return the next pmu. Returns NULL on end.
+ */
+ if (!pmu) {
+ pmu_read_sysfs();
+ pmu = list_prepare_entry(pmu, &pmus, list);
+ }
+ list_for_each_entry_continue(pmu, &pmus, list)
+ return pmu;
+ return NULL;
+}
const struct perf_pmu *perf_pmus__pmu_for_pmu_filter(const char *str)
{
struct perf_pmu *pmu = NULL;
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
if (!strcmp(pmu->name, str))
return pmu;
/* Ignore "uncore_" prefix. */
@@ -26,3 +146,276 @@ const struct perf_pmu *perf_pmus__pmu_for_pmu_filter(const char *str)
}
return NULL;
}
+
+int perf_pmus__num_mem_pmus(void)
+{
+ struct perf_pmu *pmu = NULL;
+ int count = 0;
+
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
+ if (perf_pmu__is_mem_pmu(pmu))
+ count++;
+ }
+ return count;
+}
+
+/** Struct for ordering events as output in perf list. */
+struct sevent {
+ /** PMU for event. */
+ const struct perf_pmu *pmu;
+ /**
+ * Optional event for name, desc, etc. If not present then this is a
+ * selectable PMU and the event name is shown as "//".
+ */
+ const struct perf_pmu_alias *event;
+ /** Is the PMU for the CPU? */
+ bool is_cpu;
+};
+
+static int cmp_sevent(const void *a, const void *b)
+{
+ const struct sevent *as = a;
+ const struct sevent *bs = b;
+ const char *a_pmu_name = NULL, *b_pmu_name = NULL;
+ const char *a_name = "//", *a_desc = NULL, *a_topic = "";
+ const char *b_name = "//", *b_desc = NULL, *b_topic = "";
+ int ret;
+
+ if (as->event) {
+ a_name = as->event->name;
+ a_desc = as->event->desc;
+ a_topic = as->event->topic ?: "";
+ a_pmu_name = as->event->pmu_name;
+ }
+ if (bs->event) {
+ b_name = bs->event->name;
+ b_desc = bs->event->desc;
+ b_topic = bs->event->topic ?: "";
+ b_pmu_name = bs->event->pmu_name;
+ }
+ /* Put extra events last. */
+ if (!!a_desc != !!b_desc)
+ return !!a_desc - !!b_desc;
+
+ /* Order by topics. */
+ ret = strcmp(a_topic, b_topic);
+ if (ret)
+ return ret;
+
+ /* Order CPU core events to be first */
+ if (as->is_cpu != bs->is_cpu)
+ return as->is_cpu ? -1 : 1;
+
+ /* Order by PMU name. */
+ if (as->pmu != bs->pmu) {
+ a_pmu_name = a_pmu_name ?: (as->pmu->name ?: "");
+ b_pmu_name = b_pmu_name ?: (bs->pmu->name ?: "");
+ ret = strcmp(a_pmu_name, b_pmu_name);
+ if (ret)
+ return ret;
+ }
+
+ /* Order by event name. */
+ return strcmp(a_name, b_name);
+}
+
+static bool pmu_alias_is_duplicate(struct sevent *alias_a,
+ struct sevent *alias_b)
+{
+ const char *a_pmu_name = NULL, *b_pmu_name = NULL;
+ const char *a_name = "//", *b_name = "//";
+
+
+ if (alias_a->event) {
+ a_name = alias_a->event->name;
+ a_pmu_name = alias_a->event->pmu_name;
+ }
+ if (alias_b->event) {
+ b_name = alias_b->event->name;
+ b_pmu_name = alias_b->event->pmu_name;
+ }
+
+ /* Different names -> never duplicates */
+ if (strcmp(a_name, b_name))
+ return false;
+
+ /* Don't remove duplicates for different PMUs */
+ a_pmu_name = a_pmu_name ?: (alias_a->pmu->name ?: "");
+ b_pmu_name = b_pmu_name ?: (alias_b->pmu->name ?: "");
+ return strcmp(a_pmu_name, b_pmu_name) == 0;
+}
+
+static int sub_non_neg(int a, int b)
+{
+ if (b > a)
+ return 0;
+ return a - b;
+}
+
+static char *format_alias(char *buf, int len, const struct perf_pmu *pmu,
+ const struct perf_pmu_alias *alias)
+{
+ struct parse_events_term *term;
+ int used = snprintf(buf, len, "%s/%s", pmu->name, alias->name);
+
+ list_for_each_entry(term, &alias->terms, list) {
+ if (term->type_val == PARSE_EVENTS__TERM_TYPE_STR)
+ used += snprintf(buf + used, sub_non_neg(len, used),
+ ",%s=%s", term->config,
+ term->val.str);
+ }
+
+ if (sub_non_neg(len, used) > 0) {
+ buf[used] = '/';
+ used++;
+ }
+ if (sub_non_neg(len, used) > 0) {
+ buf[used] = '\0';
+ used++;
+ } else
+ buf[len - 1] = '\0';
+
+ return buf;
+}
+
+void perf_pmus__print_pmu_events(const struct print_callbacks *print_cb, void *print_state)
+{
+ struct perf_pmu *pmu;
+ struct perf_pmu_alias *event;
+ char buf[1024];
+ int printed = 0;
+ int len, j;
+ struct sevent *aliases;
+
+ pmu = NULL;
+ len = 0;
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
+ list_for_each_entry(event, &pmu->aliases, list)
+ len++;
+ if (pmu->selectable)
+ len++;
+ }
+ aliases = zalloc(sizeof(struct sevent) * len);
+ if (!aliases) {
+ pr_err("FATAL: not enough memory to print PMU events\n");
+ return;
+ }
+ pmu = NULL;
+ j = 0;
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
+ bool is_cpu = pmu->is_core;
+
+ list_for_each_entry(event, &pmu->aliases, list) {
+ aliases[j].event = event;
+ aliases[j].pmu = pmu;
+ aliases[j].is_cpu = is_cpu;
+ j++;
+ }
+ if (pmu->selectable) {
+ aliases[j].event = NULL;
+ aliases[j].pmu = pmu;
+ aliases[j].is_cpu = is_cpu;
+ j++;
+ }
+ }
+ len = j;
+ qsort(aliases, len, sizeof(struct sevent), cmp_sevent);
+ for (j = 0; j < len; j++) {
+ const char *name, *alias = NULL, *scale_unit = NULL,
+ *desc = NULL, *long_desc = NULL,
+ *encoding_desc = NULL, *topic = NULL,
+ *pmu_name = NULL;
+ bool deprecated = false;
+ size_t buf_used;
+
+ /* Skip duplicates */
+ if (j > 0 && pmu_alias_is_duplicate(&aliases[j], &aliases[j - 1]))
+ continue;
+
+ if (!aliases[j].event) {
+ /* A selectable event. */
+ pmu_name = aliases[j].pmu->name;
+ buf_used = snprintf(buf, sizeof(buf), "%s//", pmu_name) + 1;
+ name = buf;
+ } else {
+ if (aliases[j].event->desc) {
+ name = aliases[j].event->name;
+ buf_used = 0;
+ } else {
+ name = format_alias(buf, sizeof(buf), aliases[j].pmu,
+ aliases[j].event);
+ if (aliases[j].is_cpu) {
+ alias = name;
+ name = aliases[j].event->name;
+ }
+ buf_used = strlen(buf) + 1;
+ }
+ pmu_name = aliases[j].event->pmu_name ?: (aliases[j].pmu->name ?: "");
+ if (strlen(aliases[j].event->unit) || aliases[j].event->scale != 1.0) {
+ scale_unit = buf + buf_used;
+ buf_used += snprintf(buf + buf_used, sizeof(buf) - buf_used,
+ "%G%s", aliases[j].event->scale,
+ aliases[j].event->unit) + 1;
+ }
+ desc = aliases[j].event->desc;
+ long_desc = aliases[j].event->long_desc;
+ topic = aliases[j].event->topic;
+ encoding_desc = buf + buf_used;
+ buf_used += snprintf(buf + buf_used, sizeof(buf) - buf_used,
+ "%s/%s/", pmu_name, aliases[j].event->str) + 1;
+ deprecated = aliases[j].event->deprecated;
+ }
+ print_cb->print_event(print_state,
+ pmu_name,
+ topic,
+ name,
+ alias,
+ scale_unit,
+ deprecated,
+ "Kernel PMU event",
+ desc,
+ long_desc,
+ encoding_desc);
+ }
+ if (printed && pager_in_use())
+ printf("\n");
+
+ zfree(&aliases);
+ return;
+}
+
+bool perf_pmus__have_event(const char *pname, const char *name)
+{
+ struct perf_pmu *pmu = perf_pmus__find(pname);
+
+ return pmu && perf_pmu__have_event(pmu, name);
+}
+
+bool perf_pmus__has_hybrid(void)
+{
+ static bool hybrid_scanned, has_hybrid;
+
+ if (!hybrid_scanned) {
+ struct perf_pmu *pmu = NULL;
+
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
+ if (pmu->is_core && is_pmu_hybrid(pmu->name)) {
+ has_hybrid = true;
+ break;
+ }
+ }
+ hybrid_scanned = true;
+ }
+ return has_hybrid;
+}
+
+struct perf_pmu *evsel__find_pmu(const struct evsel *evsel)
+{
+ struct perf_pmu *pmu = evsel->pmu;
+
+ if (!pmu) {
+ pmu = perf_pmus__find_by_type(evsel->core.attr.type);
+ ((struct evsel *)evsel)->pmu = pmu;
+ }
+ return pmu;
+}
diff --git a/tools/perf/util/pmus.h b/tools/perf/util/pmus.h
index 257de10788e8..2a771d9f8da7 100644
--- a/tools/perf/util/pmus.h
+++ b/tools/perf/util/pmus.h
@@ -2,9 +2,21 @@
#ifndef __PMUS_H
#define __PMUS_H
-extern struct list_head pmus;
struct perf_pmu;
+struct print_callbacks;
+
+void perf_pmus__destroy(void);
+
+struct perf_pmu *perf_pmus__find(const char *name);
+struct perf_pmu *perf_pmus__find_by_type(unsigned int type);
+
+struct perf_pmu *perf_pmus__scan(struct perf_pmu *pmu);
const struct perf_pmu *perf_pmus__pmu_for_pmu_filter(const char *str);
+int perf_pmus__num_mem_pmus(void);
+void perf_pmus__print_pmu_events(const struct print_callbacks *print_cb, void *print_state);
+bool perf_pmus__have_event(const char *pname, const char *name);
+bool perf_pmus__has_hybrid(void);
+
#endif /* __PMUS_H */
diff --git a/tools/perf/util/print-events.c b/tools/perf/util/print-events.c
index 8d823bc906e6..9cee7bb7a561 100644
--- a/tools/perf/util/print-events.c
+++ b/tools/perf/util/print-events.c
@@ -20,6 +20,7 @@
#include "metricgroup.h"
#include "parse-events.h"
#include "pmu.h"
+#include "pmus.h"
#include "print-events.h"
#include "probe-file.h"
#include "string2.h"
@@ -271,7 +272,7 @@ int print_hwcache_events(const struct print_callbacks *print_cb, void *print_sta
struct perf_pmu *pmu = NULL;
const char *event_type_descriptor = event_type_descriptors[PERF_TYPE_HW_CACHE];
- while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
/*
* Skip uncore PMUs for performance. PERF_TYPE_HW_CACHE type
* attributes can accept software PMUs in the extended type, so
@@ -404,7 +405,7 @@ void print_events(const struct print_callbacks *print_cb, void *print_state)
print_hwcache_events(print_cb, print_state);
- print_pmu_events(print_cb, print_state);
+ perf_pmus__print_pmu_events(print_cb, print_state);
print_cb->print_event(print_state,
/*topic=*/NULL,
diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
index 164715113b9e..51d22c19e5ab 100644
--- a/tools/perf/util/stat-display.c
+++ b/tools/perf/util/stat-display.c
@@ -20,6 +20,7 @@
#include "util.h"
#include "iostat.h"
#include "pmu.h"
+#include "pmus.h"
#define CNTR_NOT_SUPPORTED "<not supported>"
#define CNTR_NOT_COUNTED "<not counted>"
@@ -680,7 +681,7 @@ static bool evlist__has_hybrid(struct evlist *evlist)
{
struct evsel *evsel;
- if (!perf_pmu__has_hybrid())
+ if (!perf_pmus__has_hybrid())
return false;
evlist__for_each_entry(evlist, evsel) {
--
2.40.1.698.g37aff9b760-goog
perf_pmus__find_by_type may be called for something like a raw event,
in which case the PMU isn't guaranteed to have been looked up. Add a
second check to make sure all PMUs are loaded.
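For context, a minimal sketch of the call path this guards (resolve_pmu is an illustrative helper, not a function in the tree): an evsel created from a raw event may be the first thing to look its PMU up, and that lookup is by type rather than by name, so nothing has necessarily populated the PMU lists yet.
```
/* Sketch: resolving an evsel's PMU by type. Without the fallback full
 * sysfs read added by this patch, the lookup could return NULL for a
 * raw event (e.g. "-e r1a") even though the PMU exists in sysfs. */
static struct perf_pmu *resolve_pmu(struct evsel *evsel)
{
	struct perf_pmu *pmu = perf_pmus__find_by_type(evsel->core.attr.type);

	if (!pmu)
		pr_debug("no PMU for type %u\n", evsel->core.attr.type);
	return pmu;
}
```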
Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/util/pmus.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/tools/perf/util/pmus.c b/tools/perf/util/pmus.c
index 22e9e46ab765..ce75c7adca84 100644
--- a/tools/perf/util/pmus.c
+++ b/tools/perf/util/pmus.c
@@ -142,7 +142,7 @@ static void pmu_read_sysfs(bool core_only)
}
}
-struct perf_pmu *perf_pmus__find_by_type(unsigned int type)
+static struct perf_pmu *__perf_pmus__find_by_type(unsigned int type)
{
struct perf_pmu *pmu;
@@ -150,6 +150,7 @@ struct perf_pmu *perf_pmus__find_by_type(unsigned int type)
if (pmu->type == type)
return pmu;
}
+
list_for_each_entry(pmu, &other_pmus, list) {
if (pmu->type == type)
return pmu;
@@ -157,6 +158,18 @@ struct perf_pmu *perf_pmus__find_by_type(unsigned int type)
return NULL;
}
+struct perf_pmu *perf_pmus__find_by_type(unsigned int type)
+{
+ struct perf_pmu *pmu = __perf_pmus__find_by_type(type);
+
+ if (pmu || read_sysfs_all_pmus)
+ return pmu;
+
+ pmu_read_sysfs(/*core_only=*/false);
+ pmu = __perf_pmus__find_by_type(type);
+ return pmu;
+}
+
/*
* pmu iterator: If pmu is NULL, we start at the begin, otherwise return the
* next pmu. Returns NULL on end.
--
2.40.1.698.g37aff9b760-goog
perf_pmus__for_each_pmu doesn't lazily initialize pmus, making its use
error prone. Just use perf_pmu__scan, as this only impacts
non-performance-critical tests.
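The difference between the two idioms, as a sketch (loop bodies elided):
```
struct perf_pmu *pmu = NULL;

/* Error prone: walks the list directly, which stays empty until some
 * earlier call has read sysfs. */
perf_pmus__for_each_pmu(pmu) {
	/* ... */
}

/* Preferred: perf_pmu__scan(NULL) triggers the sysfs read itself. */
while ((pmu = perf_pmu__scan(pmu)) != NULL) {
	/* ... */
}
```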
Reviewed-by: Kan Liang <[email protected]>
Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/bench/pmu-scan.c | 6 ++----
tools/perf/tests/event_groups.c | 7 ++-----
tools/perf/tests/parse-events.c | 11 ++++-------
tools/perf/util/pmus.h | 2 --
4 files changed, 8 insertions(+), 18 deletions(-)
diff --git a/tools/perf/bench/pmu-scan.c b/tools/perf/bench/pmu-scan.c
index f0f007843bb8..f4a6c37cbe27 100644
--- a/tools/perf/bench/pmu-scan.c
+++ b/tools/perf/bench/pmu-scan.c
@@ -40,13 +40,11 @@ static struct pmu_scan_result *results;
static int save_result(void)
{
- struct perf_pmu *pmu;
+ struct perf_pmu *pmu = NULL;
struct list_head *list;
struct pmu_scan_result *r;
- perf_pmu__scan(NULL);
-
- perf_pmus__for_each_pmu(pmu) {
+ while ((pmu = perf_pmu__scan(pmu)) != NULL) {
r = realloc(results, (nr_pmus + 1) * sizeof(*r));
if (r == NULL)
return -ENOMEM;
diff --git a/tools/perf/tests/event_groups.c b/tools/perf/tests/event_groups.c
index 029442b4e9c6..3d9a2b524bba 100644
--- a/tools/perf/tests/event_groups.c
+++ b/tools/perf/tests/event_groups.c
@@ -50,13 +50,10 @@ static int event_open(int type, unsigned long config, int group_fd)
static int setup_uncore_event(void)
{
- struct perf_pmu *pmu;
+ struct perf_pmu *pmu = NULL;
int i, fd;
- if (list_empty(&pmus))
- perf_pmu__scan(NULL);
-
- perf_pmus__for_each_pmu(pmu) {
+ while ((pmu = perf_pmu__scan(pmu)) != NULL) {
for (i = 0; i < NR_UNCORE_PMUS; i++) {
if (!strcmp(uncore_pmus[i].name, pmu->name)) {
pr_debug("Using %s for uncore pmu event\n", pmu->name);
diff --git a/tools/perf/tests/parse-events.c b/tools/perf/tests/parse-events.c
index 72a10bed84fd..277607ede060 100644
--- a/tools/perf/tests/parse-events.c
+++ b/tools/perf/tests/parse-events.c
@@ -108,11 +108,11 @@ static int test__checkevent_raw(struct evlist *evlist)
TEST_ASSERT_VAL("wrong number of entries", 0 != evlist->core.nr_entries);
perf_evlist__for_each_evsel(&evlist->core, evsel) {
- struct perf_pmu *pmu;
+ struct perf_pmu *pmu = NULL;
bool type_matched = false;
TEST_ASSERT_VAL("wrong config", test_perf_config(evsel, 0x1a));
- perf_pmus__for_each_pmu(pmu) {
+ while ((pmu = perf_pmu__scan(pmu)) != NULL) {
if (pmu->type == evsel->attr.type) {
TEST_ASSERT_VAL("PMU type expected once", !type_matched);
type_matched = true;
@@ -2243,13 +2243,10 @@ static int test__terms2(struct test_suite *test __maybe_unused, int subtest __ma
static int test__pmu_events(struct test_suite *test __maybe_unused, int subtest __maybe_unused)
{
- struct perf_pmu *pmu;
+ struct perf_pmu *pmu = NULL;
int ret = TEST_OK;
- if (list_empty(&pmus))
- perf_pmu__scan(NULL);
-
- perf_pmus__for_each_pmu(pmu) {
+ while ((pmu = perf_pmu__scan(pmu)) != NULL) {
struct stat st;
char path[PATH_MAX];
struct dirent *ent;
diff --git a/tools/perf/util/pmus.h b/tools/perf/util/pmus.h
index d475e2960c10..257de10788e8 100644
--- a/tools/perf/util/pmus.h
+++ b/tools/perf/util/pmus.h
@@ -5,8 +5,6 @@
extern struct list_head pmus;
struct perf_pmu;
-#define perf_pmus__for_each_pmu(pmu) list_for_each_entry(pmu, &pmus, list)
-
const struct perf_pmu *perf_pmus__pmu_for_pmu_filter(const char *str);
#endif /* __PMUS_H */
--
2.40.1.698.g37aff9b760-goog
is_arm_pmu_core detects a core PMU via the presence of a "cpus" file
rather than a "cpumask" file. This pattern also holds for hybrid PMUs, so
rename the function and remove the redundant perf_pmu__is_hybrid
tests.
Add a new helper is_pmu_hybrid similar to is_pmu_core.
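A minimal sketch of the renamed check, assuming it only tests for the presence of a per-PMU "cpus" file in sysfs (the exact implementation may differ slightly):
```
/* Sketch: core PMUs (ARM cores, Intel hybrid cpu_core/cpu_atom) expose a
 * "cpus" file in their sysfs directory; uncore PMUs expose "cpumask". */
static int is_sysfs_pmu_core(const char *name)
{
	char path[PATH_MAX];

	if (!perf_pmu__pathname_scnprintf(path, sizeof(path), name, "cpus"))
		return 0;
	return !access(path, F_OK);
}
```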
Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/util/pmu.c | 29 ++++++++++++++++++-----------
tools/perf/util/pmu.h | 1 +
2 files changed, 19 insertions(+), 11 deletions(-)
diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
index 729b1f166f80..0fa451c60c77 100644
--- a/tools/perf/util/pmu.c
+++ b/tools/perf/util/pmu.c
@@ -640,12 +640,14 @@ static char *pmu_id(const char *name)
return str;
}
-/*
- * PMU CORE devices have different name other than cpu in sysfs on some
- * platforms.
- * Looking for possible sysfs files to identify the arm core device.
+/**
+ * is_sysfs_pmu_core() - PMU core devices have a name other than "cpu" in
+ * sysfs on some platforms such as ARM or Intel hybrid. Look for a
+ * possible "cpus" file in sysfs to identify whether this is a
+ * core device.
+ * @name: The PMU name such as "cpu_atom".
*/
-static int is_arm_pmu_core(const char *name)
+static int is_sysfs_pmu_core(const char *name)
{
char path[PATH_MAX];
@@ -811,7 +813,7 @@ void pmu_add_cpu_aliases_table(struct list_head *head, struct perf_pmu *pmu,
struct pmu_add_cpu_aliases_map_data data = {
.head = head,
.name = pmu->name,
- .cpu_name = is_arm_pmu_core(pmu->name) ? pmu->name : "cpu",
+ .cpu_name = is_sysfs_pmu_core(pmu->name) ? pmu->name : "cpu",
.pmu = pmu,
};
@@ -1649,22 +1651,27 @@ static int cmp_sevent(const void *a, const void *b)
bool is_pmu_core(const char *name)
{
- return !strcmp(name, "cpu") || is_arm_pmu_core(name);
+ return !strcmp(name, "cpu") || is_sysfs_pmu_core(name);
+}
+
+bool is_pmu_hybrid(const char *name)
+{
+ return !strcmp(name, "cpu_atom") || !strcmp(name, "cpu_core");
}
bool perf_pmu__supports_legacy_cache(const struct perf_pmu *pmu)
{
- return is_pmu_core(pmu->name) || perf_pmu__is_hybrid(pmu->name);
+ return is_pmu_core(pmu->name);
}
bool perf_pmu__supports_wildcard_numeric(const struct perf_pmu *pmu)
{
- return is_pmu_core(pmu->name) || perf_pmu__is_hybrid(pmu->name);
+ return is_pmu_core(pmu->name);
}
bool perf_pmu__auto_merge_stats(const struct perf_pmu *pmu)
{
- return !perf_pmu__is_hybrid(pmu->name);
+ return !is_pmu_hybrid(pmu->name);
}
static bool pmu_alias_is_duplicate(struct sevent *alias_a,
@@ -1718,7 +1725,7 @@ void print_pmu_events(const struct print_callbacks *print_cb, void *print_state)
pmu = NULL;
j = 0;
while ((pmu = perf_pmu__scan(pmu)) != NULL) {
- bool is_cpu = is_pmu_core(pmu->name) || perf_pmu__is_hybrid(pmu->name);
+ bool is_cpu = is_pmu_core(pmu->name);
list_for_each_entry(event, &pmu->aliases, list) {
aliases[j].event = event;
diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h
index 49033bb134f3..99e0273027bf 100644
--- a/tools/perf/util/pmu.h
+++ b/tools/perf/util/pmu.h
@@ -220,6 +220,7 @@ void perf_pmu__del_formats(struct list_head *formats);
struct perf_pmu *perf_pmu__scan(struct perf_pmu *pmu);
bool is_pmu_core(const char *name);
+bool is_pmu_hybrid(const char *name);
bool perf_pmu__supports_legacy_cache(const struct perf_pmu *pmu);
bool perf_pmu__supports_wildcard_numeric(const struct perf_pmu *pmu);
bool perf_pmu__auto_merge_stats(const struct perf_pmu *pmu);
--
2.40.1.698.g37aff9b760-goog
perf_pmu__is_hybrid implicitly uses the hybrid PMU list. Instead,
return false if hybrid isn't present; if it is, check whether any
evsel's PMU is core.
Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/util/stat-display.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
index ede0477d958a..164715113b9e 100644
--- a/tools/perf/util/stat-display.c
+++ b/tools/perf/util/stat-display.c
@@ -19,7 +19,7 @@
#include <api/fs/fs.h>
#include "util.h"
#include "iostat.h"
-#include "pmu-hybrid.h"
+#include "pmu.h"
#define CNTR_NOT_SUPPORTED "<not supported>"
#define CNTR_NOT_COUNTED "<not counted>"
@@ -680,11 +680,14 @@ static bool evlist__has_hybrid(struct evlist *evlist)
{
struct evsel *evsel;
+ if (!perf_pmu__has_hybrid())
+ return false;
+
evlist__for_each_entry(evlist, evsel) {
- if (evsel->pmu_name &&
- perf_pmu__is_hybrid(evsel->pmu_name)) {
+ struct perf_pmu *pmu = evsel__find_pmu(evsel);
+
+ if (pmu->is_core)
return true;
- }
}
return false;
--
2.40.1.698.g37aff9b760-goog
Avoid perf_pmu__for_each_hybrid_pmu by iterating all PMUs and dumping
the core ones. This will eventually allow removal of the hybrid PMU
list.
Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/util/header.c | 19 +++++++++++--------
1 file changed, 11 insertions(+), 8 deletions(-)
diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
index 276870221ce0..e24cc8f316cd 100644
--- a/tools/perf/util/header.c
+++ b/tools/perf/util/header.c
@@ -51,7 +51,6 @@
#include "bpf-event.h"
#include "bpf-utils.h"
#include "clockid.h"
-#include "pmu-hybrid.h"
#include <linux/ctype.h>
#include <internal/lib.h>
@@ -1589,17 +1588,21 @@ static int write_pmu_caps(struct feat_fd *ff,
* Write hybrid pmu caps first to maintain compatibility with
* older perf tool.
*/
- pmu = NULL;
- perf_pmu__for_each_hybrid_pmu(pmu) {
- ret = __write_pmu_caps(ff, pmu, true);
- if (ret < 0)
- return ret;
+ if (perf_pmu__has_hybrid()) {
+ pmu = NULL;
+ while ((pmu = perf_pmu__scan(pmu))) {
+ if (!pmu->is_core)
+ continue;
+
+ ret = __write_pmu_caps(ff, pmu, true);
+ if (ret < 0)
+ return ret;
+ }
}
pmu = NULL;
while ((pmu = perf_pmu__scan(pmu))) {
- if (!pmu->name || !strcmp(pmu->name, "cpu") ||
- !pmu->nr_caps || perf_pmu__is_hybrid(pmu->name))
+ if (pmu->is_core || !pmu->nr_caps)
continue;
ret = __write_pmu_caps(ff, pmu, true);
--
2.40.1.698.g37aff9b760-goog
Rather than iterating over a separate hybrid list, iterate all PMUs;
the hybrid ones have is_core set to true.
Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/arch/x86/tests/hybrid.c | 2 +-
tools/perf/arch/x86/util/evlist.c | 25 +++++++++++++++++--------
tools/perf/arch/x86/util/perf_regs.c | 14 ++++++++++----
3 files changed, 28 insertions(+), 13 deletions(-)
diff --git a/tools/perf/arch/x86/tests/hybrid.c b/tools/perf/arch/x86/tests/hybrid.c
index 941a9edfed4e..944bd1b4bab6 100644
--- a/tools/perf/arch/x86/tests/hybrid.c
+++ b/tools/perf/arch/x86/tests/hybrid.c
@@ -3,7 +3,7 @@
#include "debug.h"
#include "evlist.h"
#include "evsel.h"
-#include "pmu-hybrid.h"
+#include "pmu.h"
#include "tests/tests.h"
static bool test_config(const struct evsel *evsel, __u64 expected_config)
diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c
index 1b6065841fb0..03f7eb4cf0a4 100644
--- a/tools/perf/arch/x86/util/evlist.c
+++ b/tools/perf/arch/x86/util/evlist.c
@@ -4,7 +4,6 @@
#include "util/evlist.h"
#include "util/parse-events.h"
#include "util/event.h"
-#include "util/pmu-hybrid.h"
#include "topdown.h"
#include "evsel.h"
@@ -12,9 +11,6 @@ static int ___evlist__add_default_attrs(struct evlist *evlist,
struct perf_event_attr *attrs,
size_t nr_attrs)
{
- struct perf_cpu_map *cpus;
- struct evsel *evsel, *n;
- struct perf_pmu *pmu;
LIST_HEAD(head);
size_t i = 0;
@@ -25,15 +21,24 @@ static int ___evlist__add_default_attrs(struct evlist *evlist,
return evlist__add_attrs(evlist, attrs, nr_attrs);
for (i = 0; i < nr_attrs; i++) {
+ struct perf_pmu *pmu = NULL;
+
if (attrs[i].type == PERF_TYPE_SOFTWARE) {
- evsel = evsel__new(attrs + i);
+ struct evsel *evsel = evsel__new(attrs + i);
+
if (evsel == NULL)
goto out_delete_partial_list;
list_add_tail(&evsel->core.node, &head);
continue;
}
- perf_pmu__for_each_hybrid_pmu(pmu) {
+ while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+ struct perf_cpu_map *cpus;
+ struct evsel *evsel;
+
+ if (!pmu->is_core)
+ continue;
+
evsel = evsel__new(attrs + i);
if (evsel == NULL)
goto out_delete_partial_list;
@@ -51,8 +56,12 @@ static int ___evlist__add_default_attrs(struct evlist *evlist,
return 0;
out_delete_partial_list:
- __evlist__for_each_entry_safe(&head, n, evsel)
- evsel__delete(evsel);
+ {
+ struct evsel *evsel, *n;
+
+ __evlist__for_each_entry_safe(&head, n, evsel)
+ evsel__delete(evsel);
+ }
return -1;
}
diff --git a/tools/perf/arch/x86/util/perf_regs.c b/tools/perf/arch/x86/util/perf_regs.c
index 0ed177991ad0..26abc159fc0e 100644
--- a/tools/perf/arch/x86/util/perf_regs.c
+++ b/tools/perf/arch/x86/util/perf_regs.c
@@ -10,7 +10,6 @@
#include "../../../util/debug.h"
#include "../../../util/event.h"
#include "../../../util/pmu.h"
-#include "../../../util/pmu-hybrid.h"
const struct sample_reg sample_reg_masks[] = {
SMPL_REG(AX, PERF_REG_X86_AX),
@@ -286,7 +285,6 @@ uint64_t arch__intr_reg_mask(void)
.disabled = 1,
.exclude_kernel = 1,
};
- struct perf_pmu *pmu;
int fd;
/*
* In an unnamed union, init it here to build on older gcc versions
@@ -294,12 +292,20 @@ uint64_t arch__intr_reg_mask(void)
attr.sample_period = 1;
if (perf_pmu__has_hybrid()) {
+ struct perf_pmu *pmu = NULL;
+ __u64 type = PERF_TYPE_RAW;
+
/*
* The same register set is supported among different hybrid PMUs.
* Only check the first available one.
*/
- pmu = list_first_entry(&perf_pmu__hybrid_pmus, typeof(*pmu), hybrid_list);
- attr.config |= (__u64)pmu->type << PERF_PMU_TYPE_SHIFT;
+ while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+ if (pmu->is_core) {
+ type = pmu->type;
+ break;
+ }
+ }
+ attr.config |= type << PERF_PMU_TYPE_SHIFT;
}
event_attr_init(&attr);
--
2.40.1.698.g37aff9b760-goog
Rather than checking list_empty on perf_pmu__hybrid_pmus, detect
whether any core PMU matches a hybrid name. The computed values are
held in statics to avoid recomputation.
Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/util/pmu.c | 16 +++++++++++-----
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
index 930ec3786964..2da28739e0d3 100644
--- a/tools/perf/util/pmu.c
+++ b/tools/perf/util/pmu.c
@@ -60,8 +60,6 @@ struct perf_pmu_format {
struct list_head list;
};
-static bool hybrid_scanned;
-
static struct perf_pmu *perf_pmu__find2(int dirfd, const char *name);
/*
@@ -2026,12 +2024,20 @@ void perf_pmu__warn_invalid_config(struct perf_pmu *pmu, __u64 config,
bool perf_pmu__has_hybrid(void)
{
+ static bool hybrid_scanned, has_hybrid;
+
if (!hybrid_scanned) {
+ struct perf_pmu *pmu = NULL;
+
+ while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+ if (pmu->is_core && is_pmu_hybrid(pmu->name)) {
+ has_hybrid = true;
+ break;
+ }
+ }
hybrid_scanned = true;
- perf_pmu__scan(NULL);
}
-
- return !list_empty(&perf_pmu__hybrid_pmus);
+ return has_hybrid;
}
int perf_pmu__match(char *pattern, char *name, char *tok)
--
2.40.1.698.g37aff9b760-goog
perf_pmus__scan will process every directory in sysfs to see if it is
a PMU, attempting to add it if not already in the pmus list. Add two
booleans to record whether this scanning has been done for core or all
PMUs. Skip the scan when it has already been done.
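The caller-visible effect, sketched below: only the first full iteration pays the sysfs cost, later iterations are pure in-memory list walks.
```
struct perf_pmu *pmu = NULL;

while ((pmu = perf_pmus__scan(pmu)) != NULL)	/* first scan reads sysfs */
	;
pmu = NULL;
while ((pmu = perf_pmus__scan(pmu)) != NULL)	/* no sysfs access now */
	;
```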
Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/util/pmus.c | 33 +++++++++++++++++++++++++++++++--
1 file changed, 31 insertions(+), 2 deletions(-)
diff --git a/tools/perf/util/pmus.c b/tools/perf/util/pmus.c
index 5736e99facd1..22e9e46ab765 100644
--- a/tools/perf/util/pmus.c
+++ b/tools/perf/util/pmus.c
@@ -14,6 +14,8 @@
static LIST_HEAD(core_pmus);
static LIST_HEAD(other_pmus);
+static bool read_sysfs_core_pmus;
+static bool read_sysfs_all_pmus;
void perf_pmus__destroy(void)
{
@@ -29,6 +31,8 @@ void perf_pmus__destroy(void)
perf_pmu__delete(pmu);
}
+ read_sysfs_core_pmus = false;
+ read_sysfs_all_pmus = false;
}
static struct perf_pmu *pmu_find(const char *name)
@@ -53,6 +57,7 @@ struct perf_pmu *perf_pmus__find(const char *name)
{
struct perf_pmu *pmu;
int dirfd;
+ bool core_pmu;
/*
* Once PMU is loaded it stays in the list,
@@ -63,8 +68,15 @@ struct perf_pmu *perf_pmus__find(const char *name)
if (pmu)
return pmu;
+ if (read_sysfs_all_pmus)
+ return NULL;
+
+ core_pmu = is_pmu_core(name);
+ if (core_pmu && read_sysfs_core_pmus)
+ return NULL;
+
dirfd = perf_pmu__event_source_devices_fd();
- pmu = perf_pmu__lookup(is_pmu_core(name) ? &core_pmus : &other_pmus, dirfd, name);
+ pmu = perf_pmu__lookup(core_pmu ? &core_pmus : &other_pmus, dirfd, name);
close(dirfd);
return pmu;
@@ -73,6 +85,7 @@ struct perf_pmu *perf_pmus__find(const char *name)
static struct perf_pmu *perf_pmu__find2(int dirfd, const char *name)
{
struct perf_pmu *pmu;
+ bool core_pmu;
/*
* Once PMU is loaded it stays in the list,
@@ -83,7 +96,14 @@ static struct perf_pmu *perf_pmu__find2(int dirfd, const char *name)
if (pmu)
return pmu;
- return perf_pmu__lookup(is_pmu_core(name) ? &core_pmus : &other_pmus, dirfd, name);
+ if (read_sysfs_all_pmus)
+ return NULL;
+
+ core_pmu = is_pmu_core(name);
+ if (core_pmu && read_sysfs_core_pmus)
+ return NULL;
+
+ return perf_pmu__lookup(core_pmu ? &core_pmus : &other_pmus, dirfd, name);
}
/* Add all pmus in sysfs to pmu list: */
@@ -93,6 +113,9 @@ static void pmu_read_sysfs(bool core_only)
DIR *dir;
struct dirent *dent;
+ if (read_sysfs_all_pmus || (core_only && read_sysfs_core_pmus))
+ return;
+
fd = perf_pmu__event_source_devices_fd();
if (fd < 0)
return;
@@ -111,6 +134,12 @@ static void pmu_read_sysfs(bool core_only)
}
closedir(dir);
+ if (core_only) {
+ read_sysfs_core_pmus = true;
+ } else {
+ read_sysfs_core_pmus = true;
+ read_sysfs_all_pmus = true;
+ }
}
struct perf_pmu *perf_pmus__find_by_type(unsigned int type)
--
2.40.1.698.g37aff9b760-goog
Hi Ian,
On Sun, May 21, 2023 at 11:43 PM Ian Rogers <[email protected]> wrote:
>
> In commit 1d3351e631fc ("perf tools: Enable on a list of CPUs for hybrid")
> perf on hybrid will warn if a user requested CPU doesn't match the PMU
> of the given event but only for hybrid PMUs. Make the logic generic
> for all PMUs and remove the hybrid logic.
>
> Warn if a CPU is requested that is offline for uncore events. Warn if
> a CPU is requested for a core PMU, but the CPU isn't within the cpu
> map of that PMU.
>
> For example on a 16 (0-15) CPU system:
> ```
> $ perf stat -e imc_free_running/data_read/,cycles -C 16 true
> WARNING: Requested CPU(s) '16' not supported by PMU 'uncore_imc_free_running_1' for event 'imc_free_running/data_read/'
> WARNING: Requested CPU(s) '16' not supported by PMU 'uncore_imc_free_running_0' for event 'imc_free_running/data_read/'
> WARNING: Requested CPU(s) '16' not supported by PMU 'cpu' for event 'cycles'
>
> Performance counter stats for 'CPU(s) 16':
>
> <not supported> MiB imc_free_running/data_read/
> <not supported> cycles
>
> 0.000570094 seconds time elapsed
> ```
I'm ok with the warning changes, but it also removed the fixup logic
for the hybrid PMUs to change the cpu map in the events. I'm not sure
if it's your intention. If so, I think you'd better split it into a
separate commit with some explanation.
Thanks,
Namhyung
>
> Signed-off-by: Ian Rogers <[email protected]>
> ---
> tools/perf/builtin-record.c | 6 +--
> tools/perf/builtin-stat.c | 5 +--
> tools/perf/util/cpumap.h | 2 +-
> tools/perf/util/evlist-hybrid.c | 74 ---------------------------------
> tools/perf/util/evlist-hybrid.h | 1 -
> tools/perf/util/evlist.c | 44 ++++++++++++++++++++
> tools/perf/util/evlist.h | 2 +
> tools/perf/util/pmu.c | 33 ---------------
> tools/perf/util/pmu.h | 4 --
> 9 files changed, 49 insertions(+), 122 deletions(-)
>
> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
> index ec0f2d5f189f..9d212236c75a 100644
> --- a/tools/perf/builtin-record.c
> +++ b/tools/perf/builtin-record.c
> @@ -4198,11 +4198,7 @@ int cmd_record(int argc, const char **argv)
> /* Enable ignoring missing threads when -u/-p option is defined. */
> rec->opts.ignore_missing_thread = rec->opts.target.uid != UINT_MAX || rec->opts.target.pid;
>
> - if (evlist__fix_hybrid_cpus(rec->evlist, rec->opts.target.cpu_list)) {
> - pr_err("failed to use cpu list %s\n",
> - rec->opts.target.cpu_list);
> - goto out;
> - }
> + evlist__warn_user_requested_cpus(rec->evlist, rec->opts.target.cpu_list);
>
> rec->opts.target.hybrid = perf_pmu__has_hybrid();
>
> diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
> index bc45cee3f77c..612467216306 100644
> --- a/tools/perf/builtin-stat.c
> +++ b/tools/perf/builtin-stat.c
> @@ -2462,10 +2462,7 @@ int cmd_stat(int argc, const char **argv)
> }
> }
>
> - if (evlist__fix_hybrid_cpus(evsel_list, target.cpu_list)) {
> - pr_err("failed to use cpu list %s\n", target.cpu_list);
> - goto out;
> - }
> + evlist__warn_user_requested_cpus(evsel_list, target.cpu_list);
>
> target.hybrid = perf_pmu__has_hybrid();
> if (evlist__create_maps(evsel_list, &target) < 0) {
> diff --git a/tools/perf/util/cpumap.h b/tools/perf/util/cpumap.h
> index e3426541e0aa..c1de993c083f 100644
> --- a/tools/perf/util/cpumap.h
> +++ b/tools/perf/util/cpumap.h
> @@ -59,7 +59,7 @@ struct perf_cpu cpu__max_present_cpu(void);
> /**
> * cpu_map__is_dummy - Events associated with a pid, rather than a CPU, use a single dummy map with an entry of -1.
> */
> -static inline bool cpu_map__is_dummy(struct perf_cpu_map *cpus)
> +static inline bool cpu_map__is_dummy(const struct perf_cpu_map *cpus)
> {
> return perf_cpu_map__nr(cpus) == 1 && perf_cpu_map__cpu(cpus, 0).cpu == -1;
> }
> diff --git a/tools/perf/util/evlist-hybrid.c b/tools/perf/util/evlist-hybrid.c
> index 57f02beef023..db3f5fbdebe1 100644
> --- a/tools/perf/util/evlist-hybrid.c
> +++ b/tools/perf/util/evlist-hybrid.c
> @@ -86,77 +86,3 @@ bool evlist__has_hybrid(struct evlist *evlist)
>
> return false;
> }
> -
> -int evlist__fix_hybrid_cpus(struct evlist *evlist, const char *cpu_list)
> -{
> - struct perf_cpu_map *cpus;
> - struct evsel *evsel, *tmp;
> - struct perf_pmu *pmu;
> - int ret, unmatched_count = 0, events_nr = 0;
> -
> - if (!perf_pmu__has_hybrid() || !cpu_list)
> - return 0;
> -
> - cpus = perf_cpu_map__new(cpu_list);
> - if (!cpus)
> - return -1;
> -
> - /*
> - * The evsels are created with hybrid pmu's cpus. But now we
> - * need to check and adjust the cpus of evsel by cpu_list because
> - * cpu_list may cause conflicts with cpus of evsel. For example,
> - * cpus of evsel is cpu0-7, but the cpu_list is cpu6-8, we need
> - * to adjust the cpus of evsel to cpu6-7. And then propatate maps
> - * in evlist__create_maps().
> - */
> - evlist__for_each_entry_safe(evlist, tmp, evsel) {
> - struct perf_cpu_map *matched_cpus, *unmatched_cpus;
> - char buf1[128], buf2[128];
> -
> - pmu = perf_pmu__find_hybrid_pmu(evsel->pmu_name);
> - if (!pmu)
> - continue;
> -
> - ret = perf_pmu__cpus_match(pmu, cpus, &matched_cpus,
> - &unmatched_cpus);
> - if (ret)
> - goto out;
> -
> - events_nr++;
> -
> - if (perf_cpu_map__nr(matched_cpus) > 0 &&
> - (perf_cpu_map__nr(unmatched_cpus) > 0 ||
> - perf_cpu_map__nr(matched_cpus) < perf_cpu_map__nr(cpus) ||
> - perf_cpu_map__nr(matched_cpus) < perf_cpu_map__nr(pmu->cpus))) {
> - perf_cpu_map__put(evsel->core.cpus);
> - perf_cpu_map__put(evsel->core.own_cpus);
> - evsel->core.cpus = perf_cpu_map__get(matched_cpus);
> - evsel->core.own_cpus = perf_cpu_map__get(matched_cpus);
> -
> - if (perf_cpu_map__nr(unmatched_cpus) > 0) {
> - cpu_map__snprint(matched_cpus, buf1, sizeof(buf1));
> - pr_warning("WARNING: use %s in '%s' for '%s', skip other cpus in list.\n",
> - buf1, pmu->name, evsel->name);
> - }
> - }
> -
> - if (perf_cpu_map__nr(matched_cpus) == 0) {
> - evlist__remove(evlist, evsel);
> - evsel__delete(evsel);
> -
> - cpu_map__snprint(cpus, buf1, sizeof(buf1));
> - cpu_map__snprint(pmu->cpus, buf2, sizeof(buf2));
> - pr_warning("WARNING: %s isn't a '%s', please use a CPU list in the '%s' range (%s)\n",
> - buf1, pmu->name, pmu->name, buf2);
> - unmatched_count++;
> - }
> -
> - perf_cpu_map__put(matched_cpus);
> - perf_cpu_map__put(unmatched_cpus);
> - }
> - if (events_nr)
> - ret = (unmatched_count == events_nr) ? -1 : 0;
> -out:
> - perf_cpu_map__put(cpus);
> - return ret;
> -}
> diff --git a/tools/perf/util/evlist-hybrid.h b/tools/perf/util/evlist-hybrid.h
> index aacdb1b0f948..19f74b4c340a 100644
> --- a/tools/perf/util/evlist-hybrid.h
> +++ b/tools/perf/util/evlist-hybrid.h
> @@ -10,6 +10,5 @@
> int evlist__add_default_hybrid(struct evlist *evlist, bool precise);
> void evlist__warn_hybrid_group(struct evlist *evlist);
> bool evlist__has_hybrid(struct evlist *evlist);
> -int evlist__fix_hybrid_cpus(struct evlist *evlist, const char *cpu_list);
>
> #endif /* __PERF_EVLIST_HYBRID_H */
> diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
> index a0504316b06f..5d0d99127a90 100644
> --- a/tools/perf/util/evlist.c
> +++ b/tools/perf/util/evlist.c
> @@ -2465,3 +2465,47 @@ void evlist__check_mem_load_aux(struct evlist *evlist)
> }
> }
> }
> +
> +/**
> + * evlist__warn_user_requested_cpus() - Check each evsel against requested CPUs
> + * and warn if the user CPU list is inapplicable for the event's PMUs
> + * CPUs. Uncore PMUs list a CPU in sysfs, but this may be overwritten by a
> + * user requested CPU and so any online CPU is applicable. Core PMUs handle
> + * events on the CPUs in their list and otherwise the event isn't supported.
> + * @evlist: The list of events being checked.
> + * @cpu_list: The user provided list of CPUs.
> + */
> +void evlist__warn_user_requested_cpus(struct evlist *evlist, const char *cpu_list)
> +{
> + struct perf_cpu_map *user_requested_cpus;
> + struct evsel *pos;
> +
> + if (!cpu_list)
> + return;
> +
> + user_requested_cpus = perf_cpu_map__new(cpu_list);
> + if (!user_requested_cpus)
> + return;
> +
> + evlist__for_each_entry(evlist, pos) {
> + const struct perf_cpu_map *to_test;
> + struct perf_cpu cpu;
> + int idx;
> + bool warn = true;
> + const struct perf_pmu *pmu = evsel__find_pmu(pos);
> +
> + to_test = pmu && pmu->is_uncore ? cpu_map__online() : evsel__cpus(pos);
> +
> + perf_cpu_map__for_each_cpu(cpu, idx, to_test) {
> + if (perf_cpu_map__has(user_requested_cpus, cpu)) {
> + warn = false;
> + break;
> + }
> + }
> + if (warn) {
> + pr_warning("WARNING: Requested CPU(s) '%s' not supported by PMU '%s' for event '%s'\n",
> + cpu_list, pmu ? pmu->name : "cpu", evsel__name(pos));
> + }
> + }
> + perf_cpu_map__put(user_requested_cpus);
> +}
> diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
> index e7e5540cc970..5e7ff44f3043 100644
> --- a/tools/perf/util/evlist.h
> +++ b/tools/perf/util/evlist.h
> @@ -447,4 +447,6 @@ struct evsel *evlist__find_evsel(struct evlist *evlist, int idx);
>
> int evlist__scnprintf_evsels(struct evlist *evlist, size_t size, char *bf);
> void evlist__check_mem_load_aux(struct evlist *evlist);
> +void evlist__warn_user_requested_cpus(struct evlist *evlist, const char *cpu_list);
> +
> #endif /* __PERF_EVLIST_H */
> diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
> index f4f0afbc391c..1e0be23d4dd7 100644
> --- a/tools/perf/util/pmu.c
> +++ b/tools/perf/util/pmu.c
> @@ -2038,39 +2038,6 @@ int perf_pmu__match(char *pattern, char *name, char *tok)
> return 0;
> }
>
> -int perf_pmu__cpus_match(struct perf_pmu *pmu, struct perf_cpu_map *cpus,
> - struct perf_cpu_map **mcpus_ptr,
> - struct perf_cpu_map **ucpus_ptr)
> -{
> - struct perf_cpu_map *pmu_cpus = pmu->cpus;
> - struct perf_cpu_map *matched_cpus, *unmatched_cpus;
> - struct perf_cpu cpu;
> - int i, matched_nr = 0, unmatched_nr = 0;
> -
> - matched_cpus = perf_cpu_map__default_new();
> - if (!matched_cpus)
> - return -1;
> -
> - unmatched_cpus = perf_cpu_map__default_new();
> - if (!unmatched_cpus) {
> - perf_cpu_map__put(matched_cpus);
> - return -1;
> - }
> -
> - perf_cpu_map__for_each_cpu(cpu, i, cpus) {
> - if (!perf_cpu_map__has(pmu_cpus, cpu))
> - RC_CHK_ACCESS(unmatched_cpus)->map[unmatched_nr++] = cpu;
> - else
> - RC_CHK_ACCESS(matched_cpus)->map[matched_nr++] = cpu;
> - }
> -
> - perf_cpu_map__set_nr(unmatched_cpus, unmatched_nr);
> - perf_cpu_map__set_nr(matched_cpus, matched_nr);
> - *mcpus_ptr = matched_cpus;
> - *ucpus_ptr = unmatched_cpus;
> - return 0;
> -}
> -
> double __weak perf_pmu__cpu_slots_per_cycle(void)
> {
> return NAN;
> diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h
> index 0e0cb6283594..49033bb134f3 100644
> --- a/tools/perf/util/pmu.h
> +++ b/tools/perf/util/pmu.h
> @@ -257,10 +257,6 @@ void perf_pmu__warn_invalid_formats(struct perf_pmu *pmu);
> bool perf_pmu__has_hybrid(void);
> int perf_pmu__match(char *pattern, char *name, char *tok);
>
> -int perf_pmu__cpus_match(struct perf_pmu *pmu, struct perf_cpu_map *cpus,
> - struct perf_cpu_map **mcpus_ptr,
> - struct perf_cpu_map **ucpus_ptr);
> -
> char *pmu_find_real_name(const char *name);
> char *pmu_find_alias_name(const char *name);
> double perf_pmu__cpu_slots_per_cycle(void);
> --
> 2.40.1.698.g37aff9b760-goog
>
On 2023-05-22 2:43 a.m., Ian Rogers wrote:
> Separate the code in pmu.[ch] into the set/list of PMUs and the code
> for a particular PMU. Move the set/list of PMUs code into
> pmus.[ch]. Clean up hybrid code and remove hybrid PMU list, it is
> sufficient to scan PMUs looking for core ones. Add core PMU list and
> perf_pmus__scan_core that just reads core PMUs. Switch code that skips
> non-core PMUs during a perf_pmus__scan, to use the
> perf_pmus__scan_core variant. Don't scan sysfs for PMUs if all such
> PMUs have been previously scanned/loaded. Scanning just core PMUs, for
> the cases it is applicable, can improve the sysfs reading time by more
> than 4 fold on my laptop, as servers generally have many more uncore
> PMUs the improvement there should be larger:
>
> ```
> $ perf bench internals pmu-scan -i 1000
> Computing performance of sysfs PMU event scan for 1000 times
> Average core PMU scanning took: 989.231 usec (+- 1.535 usec)
> Average PMU scanning took: 4309.425 usec (+- 74.322 usec)
> ```
>
> The patch "perf pmu: Separate pmu and pmus" moves and renames a lot of
> functions, and is consequently large. The changes are trivial, but
> kept together to keep the overall number of patches more reasonable.
>
> v2. Address Kan's review comments wrt "cycles" -> "cycles:P" and
> "uncore_pmus" -> "other_pmus".
>
> Ian Rogers (23):
> perf tools: Warn if no user requested CPUs match PMU's CPUs
> perf evlist: Remove evlist__warn_hybrid_group
> perf evlist: Remove __evlist__add_default
> perf evlist: Reduce scope of evlist__has_hybrid
> perf pmu: Remove perf_pmu__hybrid_mounted
> perf pmu: Detect ARM and hybrid PMUs with sysfs
> perf pmu: Add is_core to pmu
> perf pmu: Rewrite perf_pmu__has_hybrid to avoid list
> perf x86: Iterate hybrid PMUs as core PMUs
> perf topology: Avoid hybrid list for hybrid topology
> perf evsel: Compute is_hybrid from PMU being core
> perf header: Avoid hybrid PMU list in write_pmu_caps
> perf metrics: Remove perf_pmu__is_hybrid use
> perf stat: Avoid hybrid PMU list
> perf mem: Avoid hybrid PMU list
> perf pmu: Remove perf_pmu__hybrid_pmus list
> perf pmus: Prefer perf_pmu__scan over perf_pmus__for_each_pmu
> perf x86 mem: minor refactor to is_mem_loads_aux_event
> perf pmu: Separate pmu and pmus
> perf pmus: Split pmus list into core and other
> perf pmus: Allow just core PMU scanning
> perf pmus: Avoid repeated sysfs scanning
> perf pmus: Ensure all PMUs are read for find_by_type
The patch set also triggers a segmentation fault with the default mode
on my hybrid machine.
# ./perf stat sleep 1
Performance counter stats for 'sleep 1':
0.53 msec task-clock # 0.001 CPUs utilized
1 context-switches # 1.875 K/sec
0 cpu-migrations # 0.000 /sec
68 page-faults # 127.476 K/sec
Segmentation fault (core dumped)
Program received signal SIGSEGV, Segmentation fault.
evsel__is_hybrid (evsel=0x55555609a1a0) at util/evsel.c:3143
3143 return pmu->is_core;
(gdb) backtrace
#0 evsel__is_hybrid (evsel=0x55555609a1a0) at util/evsel.c:3143
#1 evsel__is_hybrid (evsel=evsel@entry=0x55555609a1a0) at util/evsel.c:3135
#2 0x0000555555759468 in hybrid_uniquify (config=0x555555f931e0
<stat_config>, evsel=0x55555609a1a0)
at util/stat-display.c:813
#3 uniquify_counter (counter=0x55555609a1a0, config=0x555555f931e0
<stat_config>) at util/stat-display.c:818
#4 print_counter_aggrdata (config=config@entry=0x555555f931e0
<stat_config>,
counter=counter@entry=0x55555609a1a0, aggr_idx=aggr_idx@entry=0,
os=os@entry=0x7fffffff8fe0)
at util/stat-display.c:888
#5 0x000055555575b119 in print_counter (os=<optimized out>,
counter=<optimized out>, config=<optimized out>)
at util/stat-display.c:1019
#6 print_counter (os=0x7fffffff8fe0, counter=0x55555609a1a0,
config=0x555555f931e0 <stat_config>)
at util/stat-display.c:1009
#7 evlist__print_counters (evlist=0x555556029da0,
config=config@entry=0x555555f931e0 <stat_config>,
_target=_target@entry=0x555555f42de0 <target>, ts=ts@entry=0x0,
argc=argc@entry=2,
argv=argv@entry=0x7fffffffe1d0) at util/stat-display.c:1480
#8 0x000055555562009c in print_counters (argv=0x7fffffffe1d0, argc=2,
ts=0x0) at builtin-stat.c:979
#9 print_counters (argv=0x7fffffffe1d0, argc=2, ts=0x0) at
builtin-stat.c:971
#10 cmd_stat (argc=2, argv=0x7fffffffe1d0) at builtin-stat.c:2832
#11 0x00005555556b6670 in run_builtin (p=p@entry=0x555555f9c590
<commands+336>, argc=argc@entry=3,
argv=argv@entry=0x7fffffffe1d0) at perf.c:323
#12 0x00005555555ff2d9 in handle_internal_command (argv=0x7fffffffe1d0,
argc=3) at perf.c:377
#13 run_argv (argv=<synthetic pointer>, argcp=<synthetic pointer>) at
perf.c:421
#14 main (argc=3, argv=0x7fffffffe1d0) at perf.c:537
Thanks,
Kan
>
> tools/perf/arch/arm/util/auxtrace.c | 7 +-
> tools/perf/arch/arm/util/cs-etm.c | 4 +-
> tools/perf/arch/arm64/util/pmu.c | 6 +-
> tools/perf/arch/x86/tests/hybrid.c | 7 +-
> tools/perf/arch/x86/util/auxtrace.c | 5 +-
> tools/perf/arch/x86/util/evlist.c | 25 +-
> tools/perf/arch/x86/util/evsel.c | 27 +-
> tools/perf/arch/x86/util/intel-bts.c | 4 +-
> tools/perf/arch/x86/util/intel-pt.c | 4 +-
> tools/perf/arch/x86/util/mem-events.c | 17 +-
> tools/perf/arch/x86/util/perf_regs.c | 15 +-
> tools/perf/arch/x86/util/topdown.c | 5 +-
> tools/perf/bench/pmu-scan.c | 60 ++--
> tools/perf/builtin-c2c.c | 9 +-
> tools/perf/builtin-list.c | 4 +-
> tools/perf/builtin-mem.c | 9 +-
> tools/perf/builtin-record.c | 29 +-
> tools/perf/builtin-stat.c | 15 +-
> tools/perf/builtin-top.c | 10 +-
> tools/perf/tests/attr.c | 4 +-
> tools/perf/tests/event_groups.c | 7 +-
> tools/perf/tests/parse-events.c | 15 +-
> tools/perf/tests/parse-metric.c | 4 +-
> tools/perf/tests/pmu-events.c | 6 +-
> tools/perf/tests/switch-tracking.c | 4 +-
> tools/perf/tests/topology.c | 4 +-
> tools/perf/util/Build | 2 -
> tools/perf/util/cpumap.h | 2 +-
> tools/perf/util/cputopo.c | 16 +-
> tools/perf/util/env.c | 5 +-
> tools/perf/util/evlist-hybrid.c | 162 ---------
> tools/perf/util/evlist-hybrid.h | 15 -
> tools/perf/util/evlist.c | 67 +++-
> tools/perf/util/evlist.h | 9 +-
> tools/perf/util/evsel.c | 57 +--
> tools/perf/util/evsel.h | 3 -
> tools/perf/util/header.c | 27 +-
> tools/perf/util/mem-events.c | 17 +-
> tools/perf/util/metricgroup.c | 9 +-
> tools/perf/util/parse-events.c | 24 +-
> tools/perf/util/parse-events.y | 3 +-
> tools/perf/util/pfm.c | 6 +-
> tools/perf/util/pmu-hybrid.c | 52 ---
> tools/perf/util/pmu-hybrid.h | 32 --
> tools/perf/util/pmu.c | 482 ++------------------------
> tools/perf/util/pmu.h | 26 +-
> tools/perf/util/pmus.c | 477 ++++++++++++++++++++++++-
> tools/perf/util/pmus.h | 15 +-
> tools/perf/util/print-events.c | 15 +-
> tools/perf/util/python-ext-sources | 1 -
> tools/perf/util/stat-display.c | 21 +-
> 51 files changed, 819 insertions(+), 1032 deletions(-)
> delete mode 100644 tools/perf/util/evlist-hybrid.c
> delete mode 100644 tools/perf/util/evlist-hybrid.h
> delete mode 100644 tools/perf/util/pmu-hybrid.c
> delete mode 100644 tools/perf/util/pmu-hybrid.h
>
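Regarding the segmentation fault above: the backtrace shows evsel__is_hybrid() dereferencing a NULL PMU when the by-type lookup finds no match for the evsel. A minimal defensive sketch, assuming evsel__is_hybrid() simply checks the PMU's is_core flag; this is not necessarily the fix that was ultimately applied:
```
bool evsel__is_hybrid(const struct evsel *evsel)
{
	struct perf_pmu *pmu;

	if (!perf_pmus__has_hybrid())
		return false;

	/* evsel__find_pmu() can return NULL, e.g. when the evsel's
	 * attr.type has no matching sysfs PMU; guard before using it. */
	pmu = evsel__find_pmu(evsel);
	return pmu && pmu->is_core;
}
```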