The perf tool has previously made legacy events the priority, so with
or without a PMU specified the legacy event would be opened:
```
$ perf stat -e cpu-cycles,cpu/cpu-cycles/ true
Using CPUID GenuineIntel-6-8D-1
intel_pt default config: tsc,mtc,mtc_period=3,psb_period=3,pt,branch
Attempting to add event pmu 'cpu' with 'cpu-cycles,' that may result in non-fatal errors
After aliases, add event pmu 'cpu' with 'cpu-cycles,' that may result in non-fatal errors
Control descriptor is not initialized
------------------------------------------------------------
perf_event_attr:
type 0 (PERF_TYPE_HARDWARE)
size 136
config 0 (PERF_COUNT_HW_CPU_CYCLES)
sample_type IDENTIFIER
read_format TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
disabled 1
inherit 1
enable_on_exec 1
exclude_guest 1
------------------------------------------------------------
sys_perf_event_open: pid 833967 cpu -1 group_fd -1 flags 0x8 = 3
------------------------------------------------------------
perf_event_attr:
type 0 (PERF_TYPE_HARDWARE)
size 136
config 0 (PERF_COUNT_HW_CPU_CYCLES)
sample_type IDENTIFIER
read_format TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
disabled 1
inherit 1
enable_on_exec 1
exclude_guest 1
------------------------------------------------------------
...
```
Fixes to make hybrid/big.LITTLE PMUs behave correctly, i.e. as core
PMUs capable of opening legacy events on each, removing hard-coded
"cpu_core" and "cpu_atom" Intel PMU names, etc., caused a behavioral
difference on Apple/ARM due to latent issues in the PMU driver
reported in:
https://lore.kernel.org/lkml/[email protected]/
As part of that report Mark Rutland <[email protected]> requested
that legacy events no longer take priority when a PMU is specified,
reversing what had until this change been perf's default
behavior. With this change the above becomes:
```
$ perf stat -e cpu-cycles,cpu/cpu-cycles/ true
Using CPUID GenuineIntel-6-8D-1
Attempt to add: cpu/cpu-cycles=0/
..after resolving event: cpu/event=0x3c/
Control descriptor is not initialized
------------------------------------------------------------
perf_event_attr:
type 0 (PERF_TYPE_HARDWARE)
size 136
config 0 (PERF_COUNT_HW_CPU_CYCLES)
sample_type IDENTIFIER
read_format TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
disabled 1
inherit 1
enable_on_exec 1
exclude_guest 1
------------------------------------------------------------
sys_perf_event_open: pid 827628 cpu -1 group_fd -1 flags 0x8 = 3
------------------------------------------------------------
perf_event_attr:
type 4 (PERF_TYPE_RAW)
size 136
config 0x3c
sample_type IDENTIFIER
read_format TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
disabled 1
inherit 1
enable_on_exec 1
exclude_guest 1
------------------------------------------------------------
...
```
So the second event has become a raw event as
/sys/devices/cpu/events/cpu-cycles exists.
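Read off the -vv output above, the two events now resolve to different
attrs; a minimal sketch of the difference (not code from the patch):
```
#include <linux/perf_event.h>

/* Sketch of the two attrs implied by the -vv output above, not patch code. */
struct perf_event_attr legacy_cycles = {
        .type   = PERF_TYPE_HARDWARE,       /* "cpu-cycles": still the legacy event */
        .config = PERF_COUNT_HW_CPU_CYCLES,
};

struct perf_event_attr sysfs_cycles = {
        .type   = PERF_TYPE_RAW,            /* "cpu/cpu-cycles/": now the sysfs event */
        .config = 0x3c,                     /* event=0x3c from the sysfs cpu-cycles definition */
};
```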
A fix was necessary to config_term_pmu in parse-events.c, as
check_alias expansion needs to happen after config_term_pmu, and
config_term_pmu may need to be called a second time because of this.
config_term_pmu is updated to not use the legacy event when the PMU
has such a named event (from either json or sysfs).
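In outline, the new rule in config_term_pmu for a legacy hardware term
is the following condensed sketch (the NULL PMU check and the analogous
legacy-cache path are omitted; the real code is in the parse-events.c
hunk below):
```
/* Condensed sketch of the new check in config_term_pmu(); see the
 * tools/perf/util/parse-events.c hunk below for the real code. */
if (term->type_term == PARSE_EVENTS__TERM_TYPE_HARDWARE) {
        struct perf_pmu *pmu = perf_pmus__find_by_type(attr->type);

        if (perf_pmu__have_event(pmu, term->config)) {
                /* A sysfs/json event of the same name wins over legacy. */
                term->type_term = PARSE_EVENTS__TERM_TYPE_USER;
                term->no_value = true;
        } else {
                attr->type = PERF_TYPE_HARDWARE;
                attr->config = term->val.num;
                if (perf_pmus__supports_extended_type())
                        attr->config |= (__u64)pmu->type << PERF_PMU_TYPE_SHIFT;
        }
}
```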
The bulk of this change is updating all of the parse-events test
expectations so that if a sysfs/json event exists for a PMU the test
doesn't fail - a further sign, if it were needed, that the legacy
event priority was a known and tested behavior of the perf tool.
Signed-off-by: Ian Rogers <[email protected]>
---
This is a large behavioral change:
1) the scope of the change means it should bake on linux-next and I
don't believe it should be a 6.7-rc fix.
2) I don't think a Fixes tag and stable backport are appropriate. The
real reported issue is with the PMU driver. A backport would bring the
risk that later fixes, due to the large behavior change, wouldn't be
backported and past releases would regress in scenarios like
hybrid. Backports for the perf tool are also less necessary than for,
say, a buggy PMU driver, as distributions should be updating to the
latest perf tool regardless of what Linux kernel is being run (the
perf tool is backward compatible).
---
tools/perf/tests/parse-events.c | 256 +++++++++++++++++++++++---------
tools/perf/util/parse-events.c | 52 +++++--
tools/perf/util/pmu.c | 8 +-
tools/perf/util/pmu.h | 3 +-
4 files changed, 231 insertions(+), 88 deletions(-)
diff --git a/tools/perf/tests/parse-events.c b/tools/perf/tests/parse-events.c
index e52f45c7c3d1..fbdf710d5eea 100644
--- a/tools/perf/tests/parse-events.c
+++ b/tools/perf/tests/parse-events.c
@@ -162,6 +162,22 @@ static int test__checkevent_numeric(struct evlist *evlist)
return TEST_OK;
}
+
+static int assert_hw(struct perf_evsel *evsel, enum perf_hw_id id, const char *name)
+{
+ struct perf_pmu *pmu;
+
+ if (evsel->attr.type == PERF_TYPE_HARDWARE) {
+ TEST_ASSERT_VAL("wrong config", test_perf_config(evsel, id));
+ return 0;
+ }
+ pmu = perf_pmus__find_by_type(evsel->attr.type);
+
+ TEST_ASSERT_VAL("unexpected PMU type", pmu);
+ TEST_ASSERT_VAL("PMU missing event", perf_pmu__have_event(pmu, name));
+ return 0;
+}
+
static int test__checkevent_symbolic_name(struct evlist *evlist)
{
struct perf_evsel *evsel;
@@ -169,10 +185,12 @@ static int test__checkevent_symbolic_name(struct evlist *evlist)
TEST_ASSERT_VAL("wrong number of entries", 0 != evlist->core.nr_entries);
perf_evlist__for_each_evsel(&evlist->core, evsel) {
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->attr.type);
- TEST_ASSERT_VAL("wrong config",
- test_perf_config(evsel, PERF_COUNT_HW_INSTRUCTIONS));
+ int ret = assert_hw(evsel, PERF_COUNT_HW_INSTRUCTIONS, "instructions");
+
+ if (ret)
+ return ret;
}
+
return TEST_OK;
}
@@ -183,8 +201,10 @@ static int test__checkevent_symbolic_name_config(struct evlist *evlist)
TEST_ASSERT_VAL("wrong number of entries", 0 != evlist->core.nr_entries);
perf_evlist__for_each_evsel(&evlist->core, evsel) {
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->attr.type);
- TEST_ASSERT_VAL("wrong config", test_perf_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
+ int ret = assert_hw(evsel, PERF_COUNT_HW_CPU_CYCLES, "cycles");
+
+ if (ret)
+ return ret;
/*
* The period value gets configured within evlist__config,
* while this test executes only parse events method.
@@ -861,10 +881,14 @@ static int test__group1(struct evlist *evlist)
evlist__nr_groups(evlist) == num_core_entries());
for (int i = 0; i < num_core_entries(); i++) {
+ int ret;
+
/* instructions:k */
evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_INSTRUCTIONS));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_INSTRUCTIONS, "instructions");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
@@ -878,8 +902,10 @@ static int test__group1(struct evlist *evlist)
/* cycles:upp */
evsel = evsel__next(evsel);
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
@@ -907,6 +933,8 @@ static int test__group2(struct evlist *evlist)
TEST_ASSERT_VAL("wrong number of groups", 1 == evlist__nr_groups(evlist));
evlist__for_each_entry(evlist, evsel) {
+ int ret;
+
if (evsel->core.attr.type == PERF_TYPE_SOFTWARE) {
/* faults + :ku modifier */
leader = evsel;
@@ -939,8 +967,10 @@ static int test__group2(struct evlist *evlist)
continue;
}
/* cycles:k */
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
@@ -957,6 +987,7 @@ static int test__group2(struct evlist *evlist)
static int test__group3(struct evlist *evlist __maybe_unused)
{
struct evsel *evsel, *group1_leader = NULL, *group2_leader = NULL;
+ int ret;
TEST_ASSERT_VAL("wrong number of entries",
evlist->core.nr_entries == (3 * perf_pmus__num_core_pmus() + 2));
@@ -1045,8 +1076,10 @@ static int test__group3(struct evlist *evlist __maybe_unused)
continue;
}
/* instructions:u */
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_INSTRUCTIONS));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_INSTRUCTIONS, "instructions");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
@@ -1070,10 +1103,14 @@ static int test__group4(struct evlist *evlist __maybe_unused)
num_core_entries() == evlist__nr_groups(evlist));
for (int i = 0; i < num_core_entries(); i++) {
+ int ret;
+
/* cycles:u + p */
evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
@@ -1089,8 +1126,10 @@ static int test__group4(struct evlist *evlist __maybe_unused)
/* instructions:kp + p */
evsel = evsel__next(evsel);
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_INSTRUCTIONS));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_INSTRUCTIONS, "instructions");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
@@ -1108,6 +1147,7 @@ static int test__group4(struct evlist *evlist __maybe_unused)
static int test__group5(struct evlist *evlist __maybe_unused)
{
struct evsel *evsel = NULL, *leader;
+ int ret;
TEST_ASSERT_VAL("wrong number of entries",
evlist->core.nr_entries == (5 * num_core_entries()));
@@ -1117,8 +1157,10 @@ static int test__group5(struct evlist *evlist __maybe_unused)
for (int i = 0; i < num_core_entries(); i++) {
/* cycles + G */
evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
@@ -1133,8 +1175,10 @@ static int test__group5(struct evlist *evlist __maybe_unused)
/* instructions + G */
evsel = evsel__next(evsel);
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_INSTRUCTIONS));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_INSTRUCTIONS, "instructions");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
@@ -1148,8 +1192,10 @@ static int test__group5(struct evlist *evlist __maybe_unused)
for (int i = 0; i < num_core_entries(); i++) {
/* cycles:G */
evsel = leader = evsel__next(evsel);
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
@@ -1164,8 +1210,10 @@ static int test__group5(struct evlist *evlist __maybe_unused)
/* instructions:G */
evsel = evsel__next(evsel);
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_INSTRUCTIONS));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_INSTRUCTIONS, "instructions");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
@@ -1178,8 +1226,10 @@ static int test__group5(struct evlist *evlist __maybe_unused)
for (int i = 0; i < num_core_entries(); i++) {
/* cycles */
evsel = evsel__next(evsel);
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
@@ -1201,10 +1251,14 @@ static int test__group_gh1(struct evlist *evlist)
evlist__nr_groups(evlist) == num_core_entries());
for (int i = 0; i < num_core_entries(); i++) {
+ int ret;
+
/* cycles + :H group modifier */
evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
@@ -1218,8 +1272,10 @@ static int test__group_gh1(struct evlist *evlist)
/* cache-misses:G + :H group modifier */
evsel = evsel__next(evsel);
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CACHE_MISSES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_CACHE_MISSES, "cache-misses");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
@@ -1242,10 +1298,14 @@ static int test__group_gh2(struct evlist *evlist)
evlist__nr_groups(evlist) == num_core_entries());
for (int i = 0; i < num_core_entries(); i++) {
+ int ret;
+
/* cycles + :G group modifier */
evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
@@ -1259,8 +1319,10 @@ static int test__group_gh2(struct evlist *evlist)
/* cache-misses:H + :G group modifier */
evsel = evsel__next(evsel);
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CACHE_MISSES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_CACHE_MISSES, "cache-misses");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
@@ -1283,10 +1345,14 @@ static int test__group_gh3(struct evlist *evlist)
evlist__nr_groups(evlist) == num_core_entries());
for (int i = 0; i < num_core_entries(); i++) {
+ int ret;
+
/* cycles:G + :u group modifier */
evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
@@ -1300,8 +1366,10 @@ static int test__group_gh3(struct evlist *evlist)
/* cache-misses:H + :u group modifier */
evsel = evsel__next(evsel);
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CACHE_MISSES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_CACHE_MISSES, "cache-misses");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
@@ -1324,10 +1392,14 @@ static int test__group_gh4(struct evlist *evlist)
evlist__nr_groups(evlist) == num_core_entries());
for (int i = 0; i < num_core_entries(); i++) {
+ int ret;
+
/* cycles:G + :uG group modifier */
evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
@@ -1341,8 +1413,10 @@ static int test__group_gh4(struct evlist *evlist)
/* cache-misses:H + :uG group modifier */
evsel = evsel__next(evsel);
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CACHE_MISSES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_CACHE_MISSES, "cache-misses");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
@@ -1363,10 +1437,14 @@ static int test__leader_sample1(struct evlist *evlist)
evlist->core.nr_entries == (3 * num_core_entries()));
for (int i = 0; i < num_core_entries(); i++) {
+ int ret;
+
/* cycles - sampling group leader */
evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
@@ -1379,8 +1457,10 @@ static int test__leader_sample1(struct evlist *evlist)
/* cache-misses - not sampling */
evsel = evsel__next(evsel);
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CACHE_MISSES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_CACHE_MISSES, "cache-misses");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
@@ -1392,8 +1472,10 @@ static int test__leader_sample1(struct evlist *evlist)
/* branch-misses - not sampling */
evsel = evsel__next(evsel);
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_BRANCH_MISSES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_BRANCH_MISSES, "branch-misses");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
@@ -1415,10 +1497,14 @@ static int test__leader_sample2(struct evlist *evlist __maybe_unused)
evlist->core.nr_entries == (2 * num_core_entries()));
for (int i = 0; i < num_core_entries(); i++) {
+ int ret;
+
/* instructions - sampling group leader */
evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_INSTRUCTIONS));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_INSTRUCTIONS, "instructions");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
@@ -1431,8 +1517,10 @@ static int test__leader_sample2(struct evlist *evlist __maybe_unused)
/* branch-misses - not sampling */
evsel = evsel__next(evsel);
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_BRANCH_MISSES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_BRANCH_MISSES, "branch-misses");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
@@ -1472,10 +1560,14 @@ static int test__pinned_group(struct evlist *evlist)
evlist->core.nr_entries == (3 * num_core_entries()));
for (int i = 0; i < num_core_entries(); i++) {
+ int ret;
+
/* cycles - group leader */
evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong group name", !evsel->group_name);
TEST_ASSERT_VAL("wrong leader", evsel__has_leader(evsel, leader));
/* TODO: The group modifier is not copied to the split group leader. */
@@ -1484,13 +1576,18 @@ static int test__pinned_group(struct evlist *evlist)
/* cache-misses - can not be pinned, but will go on with the leader */
evsel = evsel__next(evsel);
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CACHE_MISSES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_CACHE_MISSES, "cache-misses");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong pinned", !evsel->core.attr.pinned);
/* branch-misses - ditto */
evsel = evsel__next(evsel);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_BRANCH_MISSES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_BRANCH_MISSES, "branch-misses");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong pinned", !evsel->core.attr.pinned);
}
return TEST_OK;
@@ -1517,10 +1614,14 @@ static int test__exclusive_group(struct evlist *evlist)
evlist->core.nr_entries == 3 * num_core_entries());
for (int i = 0; i < num_core_entries(); i++) {
+ int ret;
+
/* cycles - group leader */
evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong group name", !evsel->group_name);
TEST_ASSERT_VAL("wrong leader", evsel__has_leader(evsel, leader));
/* TODO: The group modifier is not copied to the split group leader. */
@@ -1529,13 +1630,18 @@ static int test__exclusive_group(struct evlist *evlist)
/* cache-misses - can not be pinned, but will go on with the leader */
evsel = evsel__next(evsel);
- TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CACHE_MISSES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_CACHE_MISSES, "cache-misses");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclusive", !evsel->core.attr.exclusive);
/* branch-misses - ditto */
evsel = evsel__next(evsel);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_BRANCH_MISSES));
+ ret = assert_hw(&evsel->core, PERF_COUNT_HW_BRANCH_MISSES, "branch-misses");
+ if (ret)
+ return ret;
+
TEST_ASSERT_VAL("wrong exclusive", !evsel->core.attr.exclusive);
}
return TEST_OK;
@@ -1677,9 +1783,11 @@ static int test__checkevent_raw_pmu(struct evlist *evlist)
static int test__sym_event_slash(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
+ int ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
+
+ if (ret)
+ return ret;
- TEST_ASSERT_VAL("wrong type", evsel->core.attr.type == PERF_TYPE_HARDWARE);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
return TEST_OK;
}
@@ -1687,9 +1795,11 @@ static int test__sym_event_slash(struct evlist *evlist)
static int test__sym_event_dc(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
+ int ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
+
+ if (ret)
+ return ret;
- TEST_ASSERT_VAL("wrong type", evsel->core.attr.type == PERF_TYPE_HARDWARE);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
TEST_ASSERT_VAL("wrong exclude_user", evsel->core.attr.exclude_user);
return TEST_OK;
}
@@ -1697,9 +1807,11 @@ static int test__sym_event_dc(struct evlist *evlist)
static int test__term_equal_term(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
+ int ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
+
+ if (ret)
+ return ret;
- TEST_ASSERT_VAL("wrong type", evsel->core.attr.type == PERF_TYPE_HARDWARE);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
TEST_ASSERT_VAL("wrong name setting", strcmp(evsel->name, "name") == 0);
return TEST_OK;
}
@@ -1707,9 +1819,11 @@ static int test__term_equal_term(struct evlist *evlist)
static int test__term_equal_legacy(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
+ int ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
+
+ if (ret)
+ return ret;
- TEST_ASSERT_VAL("wrong type", evsel->core.attr.type == PERF_TYPE_HARDWARE);
- TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
TEST_ASSERT_VAL("wrong name setting", strcmp(evsel->name, "l1d") == 0);
return TEST_OK;
}
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index aa2f5c6fc7fc..767aa718faa5 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -976,7 +976,7 @@ static int config_term_pmu(struct perf_event_attr *attr,
struct parse_events_error *err)
{
if (term->type_term == PARSE_EVENTS__TERM_TYPE_LEGACY_CACHE) {
- const struct perf_pmu *pmu = perf_pmus__find_by_type(attr->type);
+ struct perf_pmu *pmu = perf_pmus__find_by_type(attr->type);
if (!pmu) {
char *err_str;
@@ -986,15 +986,23 @@ static int config_term_pmu(struct perf_event_attr *attr,
err_str, /*help=*/NULL);
return -EINVAL;
}
- if (perf_pmu__supports_legacy_cache(pmu)) {
+ /*
+ * Rewrite the PMU event to a legacy cache one unless the PMU
+ * doesn't support legacy cache events or the event is present
+ * within the PMU.
+ */
+ if (perf_pmu__supports_legacy_cache(pmu) &&
+ !perf_pmu__have_event(pmu, term->config)) {
attr->type = PERF_TYPE_HW_CACHE;
return parse_events__decode_legacy_cache(term->config, pmu->type,
&attr->config);
- } else
+ } else {
term->type_term = PARSE_EVENTS__TERM_TYPE_USER;
+ term->no_value = true;
+ }
}
if (term->type_term == PARSE_EVENTS__TERM_TYPE_HARDWARE) {
- const struct perf_pmu *pmu = perf_pmus__find_by_type(attr->type);
+ struct perf_pmu *pmu = perf_pmus__find_by_type(attr->type);
if (!pmu) {
char *err_str;
@@ -1004,10 +1012,19 @@ static int config_term_pmu(struct perf_event_attr *attr,
err_str, /*help=*/NULL);
return -EINVAL;
}
- attr->type = PERF_TYPE_HARDWARE;
- attr->config = term->val.num;
- if (perf_pmus__supports_extended_type())
- attr->config |= (__u64)pmu->type << PERF_PMU_TYPE_SHIFT;
+ /*
+ * If the PMU has a sysfs or json event prefer it over
+ * legacy. ARM requires this.
+ */
+ if (perf_pmu__have_event(pmu, term->config)) {
+ term->type_term = PARSE_EVENTS__TERM_TYPE_USER;
+ term->no_value = true;
+ } else {
+ attr->type = PERF_TYPE_HARDWARE;
+ attr->config = term->val.num;
+ if (perf_pmus__supports_extended_type())
+ attr->config |= (__u64)pmu->type << PERF_PMU_TYPE_SHIFT;
+ }
return 0;
}
if (term->type_term == PARSE_EVENTS__TERM_TYPE_USER ||
@@ -1381,6 +1398,7 @@ int parse_events_add_pmu(struct parse_events_state *parse_state,
YYLTYPE *loc = loc_;
LIST_HEAD(config_terms);
struct parse_events_terms parsed_terms;
+ bool alias_rewrote_terms;
pmu = parse_state->fake_pmu ?: perf_pmus__find(name);
@@ -1433,7 +1451,15 @@ int parse_events_add_pmu(struct parse_events_state *parse_state,
return evsel ? 0 : -ENOMEM;
}
- if (!parse_state->fake_pmu && perf_pmu__check_alias(pmu, &parsed_terms, &info, err)) {
+ /* Configure attr/terms with a known PMU, this will set hardcoded terms. */
+ if (config_attr(&attr, &parsed_terms, parse_state->error, config_term_pmu)) {
+ parse_events_terms__exit(&parsed_terms);
+ return -EINVAL;
+ }
+
+ /* Look for event names in the terms and rewrite into format based terms. */
+ if (!parse_state->fake_pmu && perf_pmu__check_alias(pmu, &parsed_terms,
+ &info, &alias_rewrote_terms, err)) {
parse_events_terms__exit(&parsed_terms);
return -EINVAL;
}
@@ -1447,11 +1473,9 @@ int parse_events_add_pmu(struct parse_events_state *parse_state,
strbuf_release(&sb);
}
- /*
- * Configure hardcoded terms first, no need to check
- * return value when called with fail == 0 ;)
- */
- if (config_attr(&attr, &parsed_terms, parse_state->error, config_term_pmu)) {
+ /* Configure attr/terms again if an alias was expanded. */
+ if (alias_rewrote_terms &&
+ config_attr(&attr, &parsed_terms, parse_state->error, config_term_pmu)) {
parse_events_terms__exit(&parsed_terms);
return -EINVAL;
}
diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
index d3c9aa4326be..3c9609944a2f 100644
--- a/tools/perf/util/pmu.c
+++ b/tools/perf/util/pmu.c
@@ -1494,12 +1494,14 @@ static int check_info_data(struct perf_pmu *pmu,
* defined for the alias
*/
int perf_pmu__check_alias(struct perf_pmu *pmu, struct parse_events_terms *head_terms,
- struct perf_pmu_info *info, struct parse_events_error *err)
+ struct perf_pmu_info *info, bool *rewrote_terms,
+ struct parse_events_error *err)
{
struct parse_events_term *term, *h;
struct perf_pmu_alias *alias;
int ret;
+ *rewrote_terms = false;
info->per_pkg = false;
/*
@@ -1521,7 +1523,7 @@ int perf_pmu__check_alias(struct perf_pmu *pmu, struct parse_events_terms *head_
NULL);
return ret;
}
-
+ *rewrote_terms = true;
ret = check_info_data(pmu, alias, info, err, term->err_term);
if (ret)
return ret;
@@ -1615,6 +1617,8 @@ bool perf_pmu__auto_merge_stats(const struct perf_pmu *pmu)
bool perf_pmu__have_event(struct perf_pmu *pmu, const char *name)
{
+ if (!name)
+ return false;
if (perf_pmu__find_alias(pmu, name, /*load=*/ true) != NULL)
return true;
if (pmu->cpu_aliases_added || !pmu->events_table)
diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h
index d2895d415f08..424c3fee0949 100644
--- a/tools/perf/util/pmu.h
+++ b/tools/perf/util/pmu.h
@@ -201,7 +201,8 @@ int perf_pmu__config_terms(const struct perf_pmu *pmu,
__u64 perf_pmu__format_bits(struct perf_pmu *pmu, const char *name);
int perf_pmu__format_type(struct perf_pmu *pmu, const char *name);
int perf_pmu__check_alias(struct perf_pmu *pmu, struct parse_events_terms *head_terms,
- struct perf_pmu_info *info, struct parse_events_error *err);
+ struct perf_pmu_info *info, bool *rewrote_terms,
+ struct parse_events_error *err);
int perf_pmu__find_event(struct perf_pmu *pmu, const char *event, void *state, pmu_event_callback cb);
int perf_pmu__format_parse(struct perf_pmu *pmu, int dirfd, bool eager_load);
--
2.43.0.rc1.413.gea7ed67945-goog
On 2023/11/23 13:29, Ian Rogers wrote:
> [...]
>
Tested-by: Hector Martin <[email protected]>

$ sudo taskset -c 2 ./perf stat -e apple_icestorm_pmu/cycles/ -e apple_firestorm_pmu/cycles/ -e cycles echo

 Performance counter stats for 'echo':

     <not counted>      apple_icestorm_pmu/cycles/     (0.00%)
            34,622      apple_firestorm_pmu/cycles/
            30,751      cycles

       0.000429625 seconds time elapsed

       0.000000000 seconds user
       0.000443000 seconds sys

$ sudo taskset -c 0 ./perf stat -e apple_icestorm_pmu/cycles/ -e apple_firestorm_pmu/cycles/ -e cycles echo

 Performance counter stats for 'echo':

            13,413      apple_icestorm_pmu/cycles/
     <not counted>      apple_firestorm_pmu/cycles/    (0.00%)
     <not counted>      cycles                         (0.00%)

       0.000898458 seconds time elapsed

       0.000908000 seconds user
       0.000000000 seconds sys
(It would be nice to have "cycles" match/aggregate both PMUs, but that's
a story for another day. The behavior above is what was there in 6.4 and
earlier.)
- Hector
On Thu, Nov 23, 2023 at 05:45:19PM +0900, Hector Martin wrote:
> On 2023/11/23 13:29, Ian Rogers wrote:
> > [...]
>
> Tested-by: Hector Martin <[email protected]>
Thanks, applied locally, doing some tests and then will push for
linux-next to pick it up.
Mark, can I have your Reviewed-by or Acked-by?
- Arnaldo
Hi Ian,
Thanks for this!
On Wed, Nov 22, 2023 at 08:29:22PM -0800, Ian Rogers wrote:
> [...]
>
> Signed-off-by: Ian Rogers <[email protected]>
Regardless of my comments below, for this patch as-is:
Acked-by: Mark Rutland <[email protected]>
> ---
> This is a large behavioral change:
> 1) the scope of the change means it should bake on linux-next and I
> don't believe it should be a 6.7-rc fix.
I'm happy for this to bake, but I do think it needs to be backported for the
sake of users, especially given that it *restores* the old behaviour.
> 2) I don't think a Fixes tag and stable backport are appropriate.
For the sake of users I think a fixes tag and stable backport are necessary. In
practice distributions ship the perf tool associated with their stable kernel,
so (for better or worse) a stable backport is certainly necessary for distros
that'll use the v6.6 stable kernel.
> The real reported issue is with the PMU driver.
Having trawled through the driver and core perf code, I don't believe the PMU
driver is at fault. Please see my analysis at:
https://lore.kernel.org/lkml/[email protected]/
... where it looks like the perf tool is dropping the extended type ID in some
cases.
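The extended type ID in question is the PMU type packed into the upper
bits of attr.config for a legacy hardware event, roughly as in this
sketch (mirroring the attr setup in the patch; PERF_PMU_TYPE_SHIFT is
assumed from the perf_event UAPI header):
```
#include <linux/perf_event.h>

/* Sketch only: how a legacy hardware event names its PMU through the
 * extended type ID in the upper bits of attr.config. */
static void set_extended_hw_event(struct perf_event_attr *attr,
                                  __u32 pmu_type, __u64 hw_id)
{
        attr->type   = PERF_TYPE_HARDWARE;
        attr->config = hw_id | ((__u64)pmu_type << PERF_PMU_TYPE_SHIFT);
}
```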
If you know of a specific bug in the PMU driver or perf core code, please let
me know and I will investigate. As it stands we have no evidence of a bug in
the PMU driver, and pretty clear evidence (as linked above) that there is
*some* issue in userspace. In the absence of such evidence, please don't assert
that there must be a kernel bug.
> A backport would bring the
> risk that later fixes, due to the large behavior change, wouldn't be
> backported and past releases would regress in scenarios like
> hybrid. Backports for the perf tool are also less necessary than for, say, a
> buggy PMU driver, as distributions should be updating to the latest
> perf tool regardless of what Linux kernel is being run (the perf tool
> is backward compatible).
As above I believe that a backport is necessary.
Thanks,
Mark.
> +
> TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
> TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
> @@ -1283,10 +1345,14 @@ static int test__group_gh3(struct evlist *evlist)
> evlist__nr_groups(evlist) == num_core_entries());
>
> for (int i = 0; i < num_core_entries(); i++) {
> + int ret;
> +
> /* cycles:G + :u group modifier */
> evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
> - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> + if (ret)
> + return ret;
> +
> TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
> TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
> @@ -1300,8 +1366,10 @@ static int test__group_gh3(struct evlist *evlist)
>
> /* cache-misses:H + :u group modifier */
> evsel = evsel__next(evsel);
> - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CACHE_MISSES));
> + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CACHE_MISSES, "cache-misses");
> + if (ret)
> + return ret;
> +
> TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
> TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
> @@ -1324,10 +1392,14 @@ static int test__group_gh4(struct evlist *evlist)
> evlist__nr_groups(evlist) == num_core_entries());
>
> for (int i = 0; i < num_core_entries(); i++) {
> + int ret;
> +
> /* cycles:G + :uG group modifier */
> evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
> - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> + if (ret)
> + return ret;
> +
> TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
> TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
> @@ -1341,8 +1413,10 @@ static int test__group_gh4(struct evlist *evlist)
>
> /* cache-misses:H + :uG group modifier */
> evsel = evsel__next(evsel);
> - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CACHE_MISSES));
> + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CACHE_MISSES, "cache-misses");
> + if (ret)
> + return ret;
> +
> TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
> TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
> @@ -1363,10 +1437,14 @@ static int test__leader_sample1(struct evlist *evlist)
> evlist->core.nr_entries == (3 * num_core_entries()));
>
> for (int i = 0; i < num_core_entries(); i++) {
> + int ret;
> +
> /* cycles - sampling group leader */
> evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
> - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> + if (ret)
> + return ret;
> +
> TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
> TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
> @@ -1379,8 +1457,10 @@ static int test__leader_sample1(struct evlist *evlist)
>
> /* cache-misses - not sampling */
> evsel = evsel__next(evsel);
> - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CACHE_MISSES));
> + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CACHE_MISSES, "cache-misses");
> + if (ret)
> + return ret;
> +
> TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
> TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
> @@ -1392,8 +1472,10 @@ static int test__leader_sample1(struct evlist *evlist)
>
> /* branch-misses - not sampling */
> evsel = evsel__next(evsel);
> - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_BRANCH_MISSES));
> + ret = assert_hw(&evsel->core, PERF_COUNT_HW_BRANCH_MISSES, "branch-misses");
> + if (ret)
> + return ret;
> +
> TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
> TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
> @@ -1415,10 +1497,14 @@ static int test__leader_sample2(struct evlist *evlist __maybe_unused)
> evlist->core.nr_entries == (2 * num_core_entries()));
>
> for (int i = 0; i < num_core_entries(); i++) {
> + int ret;
> +
> /* instructions - sampling group leader */
> evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
> - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_INSTRUCTIONS));
> + ret = assert_hw(&evsel->core, PERF_COUNT_HW_INSTRUCTIONS, "instructions");
> + if (ret)
> + return ret;
> +
> TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
> TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
> @@ -1431,8 +1517,10 @@ static int test__leader_sample2(struct evlist *evlist __maybe_unused)
>
> /* branch-misses - not sampling */
> evsel = evsel__next(evsel);
> - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_BRANCH_MISSES));
> + ret = assert_hw(&evsel->core, PERF_COUNT_HW_BRANCH_MISSES, "branch-misses");
> + if (ret)
> + return ret;
> +
> TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
> TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
> @@ -1472,10 +1560,14 @@ static int test__pinned_group(struct evlist *evlist)
> evlist->core.nr_entries == (3 * num_core_entries()));
>
> for (int i = 0; i < num_core_entries(); i++) {
> + int ret;
> +
> /* cycles - group leader */
> evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
> - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> + if (ret)
> + return ret;
> +
> TEST_ASSERT_VAL("wrong group name", !evsel->group_name);
> TEST_ASSERT_VAL("wrong leader", evsel__has_leader(evsel, leader));
> /* TODO: The group modifier is not copied to the split group leader. */
> @@ -1484,13 +1576,18 @@ static int test__pinned_group(struct evlist *evlist)
>
> /* cache-misses - can not be pinned, but will go on with the leader */
> evsel = evsel__next(evsel);
> - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CACHE_MISSES));
> + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CACHE_MISSES, "cache-misses");
> + if (ret)
> + return ret;
> +
> TEST_ASSERT_VAL("wrong pinned", !evsel->core.attr.pinned);
>
> /* branch-misses - ditto */
> evsel = evsel__next(evsel);
> - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_BRANCH_MISSES));
> + ret = assert_hw(&evsel->core, PERF_COUNT_HW_BRANCH_MISSES, "branch-misses");
> + if (ret)
> + return ret;
> +
> TEST_ASSERT_VAL("wrong pinned", !evsel->core.attr.pinned);
> }
> return TEST_OK;
> @@ -1517,10 +1614,14 @@ static int test__exclusive_group(struct evlist *evlist)
> evlist->core.nr_entries == 3 * num_core_entries());
>
> for (int i = 0; i < num_core_entries(); i++) {
> + int ret;
> +
> /* cycles - group leader */
> evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
> - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> + if (ret)
> + return ret;
> +
> TEST_ASSERT_VAL("wrong group name", !evsel->group_name);
> TEST_ASSERT_VAL("wrong leader", evsel__has_leader(evsel, leader));
> /* TODO: The group modifier is not copied to the split group leader. */
> @@ -1529,13 +1630,18 @@ static int test__exclusive_group(struct evlist *evlist)
>
> /* cache-misses - can not be pinned, but will go on with the leader */
> evsel = evsel__next(evsel);
> - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CACHE_MISSES));
> + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CACHE_MISSES, "cache-misses");
> + if (ret)
> + return ret;
> +
> TEST_ASSERT_VAL("wrong exclusive", !evsel->core.attr.exclusive);
>
> /* branch-misses - ditto */
> evsel = evsel__next(evsel);
> - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_BRANCH_MISSES));
> + ret = assert_hw(&evsel->core, PERF_COUNT_HW_BRANCH_MISSES, "branch-misses");
> + if (ret)
> + return ret;
> +
> TEST_ASSERT_VAL("wrong exclusive", !evsel->core.attr.exclusive);
> }
> return TEST_OK;
> @@ -1677,9 +1783,11 @@ static int test__checkevent_raw_pmu(struct evlist *evlist)
> static int test__sym_event_slash(struct evlist *evlist)
> {
> struct evsel *evsel = evlist__first(evlist);
> + int ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> +
> + if (ret)
> + return ret;
>
> - TEST_ASSERT_VAL("wrong type", evsel->core.attr.type == PERF_TYPE_HARDWARE);
> - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
> return TEST_OK;
> }
> @@ -1687,9 +1795,11 @@ static int test__sym_event_slash(struct evlist *evlist)
> static int test__sym_event_dc(struct evlist *evlist)
> {
> struct evsel *evsel = evlist__first(evlist);
> + int ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> +
> + if (ret)
> + return ret;
>
> - TEST_ASSERT_VAL("wrong type", evsel->core.attr.type == PERF_TYPE_HARDWARE);
> - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> TEST_ASSERT_VAL("wrong exclude_user", evsel->core.attr.exclude_user);
> return TEST_OK;
> }
> @@ -1697,9 +1807,11 @@ static int test__sym_event_dc(struct evlist *evlist)
> static int test__term_equal_term(struct evlist *evlist)
> {
> struct evsel *evsel = evlist__first(evlist);
> + int ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> +
> + if (ret)
> + return ret;
>
> - TEST_ASSERT_VAL("wrong type", evsel->core.attr.type == PERF_TYPE_HARDWARE);
> - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> TEST_ASSERT_VAL("wrong name setting", strcmp(evsel->name, "name") == 0);
> return TEST_OK;
> }
> @@ -1707,9 +1819,11 @@ static int test__term_equal_term(struct evlist *evlist)
> static int test__term_equal_legacy(struct evlist *evlist)
> {
> struct evsel *evsel = evlist__first(evlist);
> + int ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> +
> + if (ret)
> + return ret;
>
> - TEST_ASSERT_VAL("wrong type", evsel->core.attr.type == PERF_TYPE_HARDWARE);
> - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> TEST_ASSERT_VAL("wrong name setting", strcmp(evsel->name, "l1d") == 0);
> return TEST_OK;
> }
> diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
> index aa2f5c6fc7fc..767aa718faa5 100644
> --- a/tools/perf/util/parse-events.c
> +++ b/tools/perf/util/parse-events.c
> @@ -976,7 +976,7 @@ static int config_term_pmu(struct perf_event_attr *attr,
> struct parse_events_error *err)
> {
> if (term->type_term == PARSE_EVENTS__TERM_TYPE_LEGACY_CACHE) {
> - const struct perf_pmu *pmu = perf_pmus__find_by_type(attr->type);
> + struct perf_pmu *pmu = perf_pmus__find_by_type(attr->type);
>
> if (!pmu) {
> char *err_str;
> @@ -986,15 +986,23 @@ static int config_term_pmu(struct perf_event_attr *attr,
> err_str, /*help=*/NULL);
> return -EINVAL;
> }
> - if (perf_pmu__supports_legacy_cache(pmu)) {
> + /*
> + * Rewrite the PMU event to a legacy cache one unless the PMU
> + * doesn't support legacy cache events or the event is present
> + * within the PMU.
> + */
> + if (perf_pmu__supports_legacy_cache(pmu) &&
> + !perf_pmu__have_event(pmu, term->config)) {
> attr->type = PERF_TYPE_HW_CACHE;
> return parse_events__decode_legacy_cache(term->config, pmu->type,
> &attr->config);
> - } else
> + } else {
> term->type_term = PARSE_EVENTS__TERM_TYPE_USER;
> + term->no_value = true;
> + }
> }
> if (term->type_term == PARSE_EVENTS__TERM_TYPE_HARDWARE) {
> - const struct perf_pmu *pmu = perf_pmus__find_by_type(attr->type);
> + struct perf_pmu *pmu = perf_pmus__find_by_type(attr->type);
>
> if (!pmu) {
> char *err_str;
> @@ -1004,10 +1012,19 @@ static int config_term_pmu(struct perf_event_attr *attr,
> err_str, /*help=*/NULL);
> return -EINVAL;
> }
> - attr->type = PERF_TYPE_HARDWARE;
> - attr->config = term->val.num;
> - if (perf_pmus__supports_extended_type())
> - attr->config |= (__u64)pmu->type << PERF_PMU_TYPE_SHIFT;
> + /*
> + * If the PMU has a sysfs or json event prefer it over
> + * legacy. ARM requires this.
> + */
> + if (perf_pmu__have_event(pmu, term->config)) {
> + term->type_term = PARSE_EVENTS__TERM_TYPE_USER;
> + term->no_value = true;
> + } else {
> + attr->type = PERF_TYPE_HARDWARE;
> + attr->config = term->val.num;
> + if (perf_pmus__supports_extended_type())
> + attr->config |= (__u64)pmu->type << PERF_PMU_TYPE_SHIFT;
> + }
> return 0;
> }
> if (term->type_term == PARSE_EVENTS__TERM_TYPE_USER ||
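To make the new priority easier to see at a glance, the two hunks above amount to
roughly the following decision (an illustrative sketch only, using a made-up helper
name; the real logic stays inline in config_term_pmu and assumes perf's internal
headers):

```c
/*
 * Illustrative sketch, not perf source: the priority applied when a
 * legacy event name such as "cycles" is given with an explicit PMU.
 */
static bool prefer_pmu_event(struct perf_pmu *pmu, const char *name)
{
	/*
	 * A sysfs/json event of the same name on this PMU now wins: the
	 * term stays an ordinary PMU term and is encoded through the
	 * PMU's format terms (typically ending up as PERF_TYPE_RAW).
	 */
	if (perf_pmu__have_event(pmu, name))
		return true;
	/*
	 * Otherwise the legacy encoding is still used: PERF_TYPE_HW_CACHE
	 * for cache events on PMUs that support them, or PERF_TYPE_HARDWARE
	 * with the PMU id placed in the upper config bits via
	 * PERF_PMU_TYPE_SHIFT when extended types are supported.
	 */
	return false;
}
```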
> @@ -1381,6 +1398,7 @@ int parse_events_add_pmu(struct parse_events_state *parse_state,
> YYLTYPE *loc = loc_;
> LIST_HEAD(config_terms);
> struct parse_events_terms parsed_terms;
> + bool alias_rewrote_terms;
>
> pmu = parse_state->fake_pmu ?: perf_pmus__find(name);
>
> @@ -1433,7 +1451,15 @@ int parse_events_add_pmu(struct parse_events_state *parse_state,
> return evsel ? 0 : -ENOMEM;
> }
>
> - if (!parse_state->fake_pmu && perf_pmu__check_alias(pmu, &parsed_terms, &info, err)) {
> + /* Configure attr/terms with a known PMU, this will set hardcoded terms. */
> + if (config_attr(&attr, &parsed_terms, parse_state->error, config_term_pmu)) {
> + parse_events_terms__exit(&parsed_terms);
> + return -EINVAL;
> + }
> +
> + /* Look for event names in the terms and rewrite into format based terms. */
> + if (!parse_state->fake_pmu && perf_pmu__check_alias(pmu, &parsed_terms,
> + &info, &alias_rewrote_terms, err)) {
> parse_events_terms__exit(&parsed_terms);
> return -EINVAL;
> }
> @@ -1447,11 +1473,9 @@ int parse_events_add_pmu(struct parse_events_state *parse_state,
> strbuf_release(&sb);
> }
>
> - /*
> - * Configure hardcoded terms first, no need to check
> - * return value when called with fail == 0 ;)
> - */
> - if (config_attr(&attr, &parsed_terms, parse_state->error, config_term_pmu)) {
> + /* Configure attr/terms again if an alias was expanded. */
> + if (alias_rewrote_terms &&
> + config_attr(&attr, &parsed_terms, parse_state->error, config_term_pmu)) {
> parse_events_terms__exit(&parsed_terms);
> return -EINVAL;
> }
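Taken together, the reordering above means parse_events_add_pmu now runs in this
sequence (a simplified sketch; the fake-PMU check and cleanup paths are omitted):

```c
bool alias_rewrote_terms = false;

/* 1. Resolve hardcoded/legacy terms against the known PMU. */
if (config_attr(&attr, &parsed_terms, parse_state->error, config_term_pmu))
	return -EINVAL;

/* 2. Expand event-name aliases into format-based terms. */
if (perf_pmu__check_alias(pmu, &parsed_terms, &info, &alias_rewrote_terms, err))
	return -EINVAL;

/* 3. Re-run term configuration only if an alias rewrote the terms. */
if (alias_rewrote_terms &&
    config_attr(&attr, &parsed_terms, parse_state->error, config_term_pmu))
	return -EINVAL;
```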
> diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
> index d3c9aa4326be..3c9609944a2f 100644
> --- a/tools/perf/util/pmu.c
> +++ b/tools/perf/util/pmu.c
> @@ -1494,12 +1494,14 @@ static int check_info_data(struct perf_pmu *pmu,
> * defined for the alias
> */
> int perf_pmu__check_alias(struct perf_pmu *pmu, struct parse_events_terms *head_terms,
> - struct perf_pmu_info *info, struct parse_events_error *err)
> + struct perf_pmu_info *info, bool *rewrote_terms,
> + struct parse_events_error *err)
> {
> struct parse_events_term *term, *h;
> struct perf_pmu_alias *alias;
> int ret;
>
> + *rewrote_terms = false;
> info->per_pkg = false;
>
> /*
> @@ -1521,7 +1523,7 @@ int perf_pmu__check_alias(struct perf_pmu *pmu, struct parse_events_terms *head_
> NULL);
> return ret;
> }
> -
> + *rewrote_terms = true;
> ret = check_info_data(pmu, alias, info, err, term->err_term);
> if (ret)
> return ret;
> @@ -1615,6 +1617,8 @@ bool perf_pmu__auto_merge_stats(const struct perf_pmu *pmu)
>
> bool perf_pmu__have_event(struct perf_pmu *pmu, const char *name)
> {
> + if (!name)
> + return false;
> if (perf_pmu__find_alias(pmu, name, /*load=*/ true) != NULL)
> return true;
> if (pmu->cpu_aliases_added || !pmu->events_table)
> diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h
> index d2895d415f08..424c3fee0949 100644
> --- a/tools/perf/util/pmu.h
> +++ b/tools/perf/util/pmu.h
> @@ -201,7 +201,8 @@ int perf_pmu__config_terms(const struct perf_pmu *pmu,
> __u64 perf_pmu__format_bits(struct perf_pmu *pmu, const char *name);
> int perf_pmu__format_type(struct perf_pmu *pmu, const char *name);
> int perf_pmu__check_alias(struct perf_pmu *pmu, struct parse_events_terms *head_terms,
> - struct perf_pmu_info *info, struct parse_events_error *err);
> + struct perf_pmu_info *info, bool *rewrote_terms,
> + struct parse_events_error *err);
> int perf_pmu__find_event(struct perf_pmu *pmu, const char *event, void *state, pmu_event_callback cb);
>
> int perf_pmu__format_parse(struct perf_pmu *pmu, int dirfd, bool eager_load);
> --
> 2.43.0.rc1.413.gea7ed67945-goog
>
On Thu, 23 Nov 2023 04:29:22 +0000,
Ian Rogers <[email protected]> wrote:
>
> The perf tool has previously made legacy events the priority so with
> or without a PMU the legacy event would be opened:
>
> ```
> $ perf stat -e cpu-cycles,cpu/cpu-cycles/ true
> Using CPUID GenuineIntel-6-8D-1
> intel_pt default config: tsc,mtc,mtc_period=3,psb_period=3,pt,branch
> Attempting to add event pmu 'cpu' with 'cpu-cycles,' that may result in non-fatal errors
> After aliases, add event pmu 'cpu' with 'cpu-cycles,' that may result in non-fatal errors
> Control descriptor is not initialized
> ------------------------------------------------------------
> perf_event_attr:
> type 0 (PERF_TYPE_HARDWARE)
> size 136
> config 0 (PERF_COUNT_HW_CPU_CYCLES)
> sample_type IDENTIFIER
> read_format TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
> disabled 1
> inherit 1
> enable_on_exec 1
> exclude_guest 1
> ------------------------------------------------------------
> sys_perf_event_open: pid 833967 cpu -1 group_fd -1 flags 0x8 = 3
> ------------------------------------------------------------
> perf_event_attr:
> type 0 (PERF_TYPE_HARDWARE)
> size 136
> config 0 (PERF_COUNT_HW_CPU_CYCLES)
> sample_type IDENTIFIER
> read_format TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
> disabled 1
> inherit 1
> enable_on_exec 1
> exclude_guest 1
> ------------------------------------------------------------
> ...
> ```
>
> Fixes to make hybrid/BIG.little PMUs behave correctly, ie as core PMUs
> capable of opening legacy events on each, removing hard coded
> "cpu_core" and "cpu_atom" Intel PMU names, etc. caused a behavioral
> difference on Apple/ARM due to latent issues in the PMU driver
> reported in:
> https://lore.kernel.org/lkml/[email protected]/
>
What issue? So far, I don't see anything that remotely looks like a
kernel issue.
> As part of that report Mark Rutland <[email protected]> requested
> that legacy events not be higher in priority when a PMU is specified
> reversing what has until this change been perf's default
> behavior. With this change the above becomes:
>
> ```
> $ perf stat -e cpu-cycles,cpu/cpu-cycles/ true
> Using CPUID GenuineIntel-6-8D-1
> Attempt to add: cpu/cpu-cycles=0/
> ..after resolving event: cpu/event=0x3c/
> Control descriptor is not initialized
> ------------------------------------------------------------
> perf_event_attr:
> type 0 (PERF_TYPE_HARDWARE)
> size 136
> config 0 (PERF_COUNT_HW_CPU_CYCLES)
> sample_type IDENTIFIER
> read_format TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
> disabled 1
> inherit 1
> enable_on_exec 1
> exclude_guest 1
> ------------------------------------------------------------
> sys_perf_event_open: pid 827628 cpu -1 group_fd -1 flags 0x8 = 3
> ------------------------------------------------------------
> perf_event_attr:
> type 4 (PERF_TYPE_RAW)
> size 136
> config 0x3c
> sample_type IDENTIFIER
> read_format TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
> disabled 1
> inherit 1
> enable_on_exec 1
> exclude_guest 1
> ------------------------------------------------------------
> ...
> ```
>
> So the second event has become a raw event as
> /sys/devices/cpu/events/cpu-cycles exists.
>
> A fix was necessary to config_term_pmu in parse-events.c as
> check_alias expansion needs to happen after config_term_pmu, and
> config_term_pmu may need calling a second time because of this.
>
> config_term_pmu is updated to not use the legacy event when the PMU
> has such a named event (either from json or sysfs).
>
> The bulk of this change is updating all of the parse-events test
> expectations so that if a sysfs/json event exists for a PMU the test
> doesn't fail - a further sign, if it were needed, that the legacy
> event priority was a known and tested behavior of the perf tool.
>
> Signed-off-by: Ian Rogers <[email protected]>
Reported-by:, Link: and Fixes: tags would be appreciated.
> ---
> This is a large behavioral change:
> 1) the scope of the change means it should bake on linux-next and I
> don't believe should be a 6.7-rc fix.
I entirely disagree. Distros are shipping a broken perf tool.
> 2) a fixes tag and stable backport I don't think are appropriate. The
> real reported issue is with the PMU driver. A backport would bring the
> risk that later fixes, due to the large behavior change, wouldn't be
> backported and past releases get regressed in scenarios like
> hybrid. Backports for the perf tool are also less necessary than say a
> buggy PMU driver, as distributions should be updating to the latest
> perf tool regardless of what Linux kernel is being run (the perf tool
> is backward compatible).
Again, perf gets shipped in distros, and not necessarily as the latest
version. Rather, they tend to ship the version matching the kernel. No
backport, buggy perf.
And again, I don't see a bug in the PMU driver.
Now, in the interest of moving forward: this patch seems to address
the basic problems I was seeing on both M1/M2 (running any kernel
version) and Juno (running an older kernel), so:
Tested-by: Marc Zyngier <[email protected]>
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
On Thu, Nov 23, 2023 at 6:37 AM Mark Rutland <[email protected]> wrote:
>
>
> Hi Ian,
>
> Thanks for this!
>
> On Wed, Nov 22, 2023 at 08:29:22PM -0800, Ian Rogers wrote:
> > The perf tool has previously made legacy events the priority so with
> > or without a PMU the legacy event would be opened:
> >
> > ```
> > $ perf stat -e cpu-cycles,cpu/cpu-cycles/ true
> > Using CPUID GenuineIntel-6-8D-1
> > intel_pt default config: tsc,mtc,mtc_period=3,psb_period=3,pt,branch
> > Attempting to add event pmu 'cpu' with 'cpu-cycles,' that may result in non-fatal errors
> > After aliases, add event pmu 'cpu' with 'cpu-cycles,' that may result in non-fatal errors
> > Control descriptor is not initialized
> > ------------------------------------------------------------
> > perf_event_attr:
> > type 0 (PERF_TYPE_HARDWARE)
> > size 136
> > config 0 (PERF_COUNT_HW_CPU_CYCLES)
> > sample_type IDENTIFIER
> > read_format TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
> > disabled 1
> > inherit 1
> > enable_on_exec 1
> > exclude_guest 1
> > ------------------------------------------------------------
> > sys_perf_event_open: pid 833967 cpu -1 group_fd -1 flags 0x8 = 3
> > ------------------------------------------------------------
> > perf_event_attr:
> > type 0 (PERF_TYPE_HARDWARE)
> > size 136
> > config 0 (PERF_COUNT_HW_CPU_CYCLES)
> > sample_type IDENTIFIER
> > read_format TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
> > disabled 1
> > inherit 1
> > enable_on_exec 1
> > exclude_guest 1
> > ------------------------------------------------------------
> > ...
> > ```
> >
> > Fixes to make hybrid/BIG.little PMUs behave correctly, ie as core PMUs
> > capable of opening legacy events on each, removing hard coded
> > "cpu_core" and "cpu_atom" Intel PMU names, etc. caused a behavioral
> > difference on Apple/ARM due to latent issues in the PMU driver
> > reported in:
> > https://lore.kernel.org/lkml/[email protected]/
> >
> > As part of that report Mark Rutland <[email protected]> requested
> > that legacy events not be higher in priority when a PMU is specified
> > reversing what has until this change been perf's default
> > behavior. With this change the above becomes:
> >
> > ```
> > $ perf stat -e cpu-cycles,cpu/cpu-cycles/ true
> > Using CPUID GenuineIntel-6-8D-1
> > Attempt to add: cpu/cpu-cycles=0/
> > ..after resolving event: cpu/event=0x3c/
> > Control descriptor is not initialized
> > ------------------------------------------------------------
> > perf_event_attr:
> > type 0 (PERF_TYPE_HARDWARE)
> > size 136
> > config 0 (PERF_COUNT_HW_CPU_CYCLES)
> > sample_type IDENTIFIER
> > read_format TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
> > disabled 1
> > inherit 1
> > enable_on_exec 1
> > exclude_guest 1
> > ------------------------------------------------------------
> > sys_perf_event_open: pid 827628 cpu -1 group_fd -1 flags 0x8 = 3
> > ------------------------------------------------------------
> > perf_event_attr:
> > type 4 (PERF_TYPE_RAW)
> > size 136
> > config 0x3c
> > sample_type IDENTIFIER
> > read_format TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
> > disabled 1
> > inherit 1
> > enable_on_exec 1
> > exclude_guest 1
> > ------------------------------------------------------------
> > ...
> > ```
> >
> > So the second event has become a raw event as
> > /sys/devices/cpu/events/cpu-cycles exists.
> >
> > A fix was necessary to config_term_pmu in parse-events.c as
> > check_alias expansion needs to happen after config_term_pmu, and
> > config_term_pmu may need calling a second time because of this.
> >
> > config_term_pmu is updated to not use the legacy event when the PMU
> > has such a named event (either from json or sysfs).
> >
> > The bulk of this change is updating all of the parse-events test
> > expectations so that if a sysfs/json event exists for a PMU the test
> > doesn't fail - a further sign, if it were needed, that the legacy
> > event priority was a known and tested behavior of the perf tool.
> >
> > Signed-off-by: Ian Rogers <[email protected]>
>
> Regardless of my comments below, for this patch as-is:
>
> Acked-by: Mark Rutland <[email protected]>
>
> > ---
> > This is a large behavioral change:
> > 1) the scope of the change means it should bake on linux-next and I
> > don't believe should be a 6.7-rc fix.
>
> I'm happy for this to bake, but I do think it needs to be backported for the
> sake of users, especially given that it *restores* the old behaviour.
It is going to change the behavior for a far larger set of users. I'm
also concerned that:
```
$ perf list
...
cpu-cycles OR cpu/cpu-cycles/ [Kernel PMU event]
...
```
implies that cpu/cpu-cycles/ is a synonym for cpu-cycles, which is no
longer true (or pick any other sysfs event whose name matches a legacy
event). I'm not sure what a fix in perf list for this
would look like.
Thanks,
Ian
> > 2) a fixes tag and stable backport I don't think are appropriate.
>
> For the sake of users I think a fixes tag and stable backport are necessary. In
> practice distributions ship the perf tool associated with their stable kernel,
> so (for better or worse) a stable backport is certainly necessary for distros
> that'll use the v6.6 stable kernel.
>
> > The real reported issue is with the PMU driver.
>
> Having trawled through the driver and core perf code, I don't believe the PMU
> driver is at fault. Please see my analysis at:
>
> https://lore.kernel.org/lkml/[email protected]/
>
> ... where it looks like the perf tool is dropping the extended type ID in some
> cases.
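For readers following the thread, the extended type ID in question is the PMU
identifier that the tool is expected to place in the upper bits of attr.config
when opening a legacy event on a specific PMU; roughly (pmu_type standing in for
the PMU's sysfs type value):

```c
/* Extended hardware event encoding: name the servicing PMU in the upper
 * config bits and keep the legacy event id in the lower bits. */
attr.type   = PERF_TYPE_HARDWARE;
attr.config = ((__u64)pmu_type << PERF_PMU_TYPE_SHIFT) | PERF_COUNT_HW_CPU_CYCLES;
```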
>
> If you know of a specific bug in the PMU driver or perf core code, please let
> me know and I will investigate. As it stands we have no evidence of a bug in
> the PMU driver, and pretty clear evidence (as linked above) that there is
> *some* issue in userspace. In the absence of such evidence, please don't assert
> that there must be a kernel bug.
>
> > A backport would bring the
> > risk that later fixes, due to the large behavior change, wouldn't be
> > backported and past releases get regressed in scenarios like
> > hybrid. Backports for the perf tool are also less necessary than say a
> > buggy PMU driver, as distributions should be updating to the latest
> > perf tool regardless of what Linux kernel is being run (the perf tool
> > is backward compatible).
>
> As above I believe that a backport is necessary.
>
> Thanks,
> Mark.
>
> > ---
> > tools/perf/tests/parse-events.c | 256 +++++++++++++++++++++++---------
> > tools/perf/util/parse-events.c | 52 +++++--
> > tools/perf/util/pmu.c | 8 +-
> > tools/perf/util/pmu.h | 3 +-
> > 4 files changed, 231 insertions(+), 88 deletions(-)
> >
> > diff --git a/tools/perf/tests/parse-events.c b/tools/perf/tests/parse-events.c
> > index e52f45c7c3d1..fbdf710d5eea 100644
> > --- a/tools/perf/tests/parse-events.c
> > +++ b/tools/perf/tests/parse-events.c
> > @@ -162,6 +162,22 @@ static int test__checkevent_numeric(struct evlist *evlist)
> > return TEST_OK;
> > }
> >
> > +
> > +static int assert_hw(struct perf_evsel *evsel, enum perf_hw_id id, const char *name)
> > +{
> > + struct perf_pmu *pmu;
> > +
> > + if (evsel->attr.type == PERF_TYPE_HARDWARE) {
> > + TEST_ASSERT_VAL("wrong config", test_perf_config(evsel, id));
> > + return 0;
> > + }
> > + pmu = perf_pmus__find_by_type(evsel->attr.type);
> > +
> > + TEST_ASSERT_VAL("unexpected PMU type", pmu);
> > + TEST_ASSERT_VAL("PMU missing event", perf_pmu__have_event(pmu, name));
> > + return 0;
> > +}
> > +
> > static int test__checkevent_symbolic_name(struct evlist *evlist)
> > {
> > struct perf_evsel *evsel;
> > @@ -169,10 +185,12 @@ static int test__checkevent_symbolic_name(struct evlist *evlist)
> > TEST_ASSERT_VAL("wrong number of entries", 0 != evlist->core.nr_entries);
> >
> > perf_evlist__for_each_evsel(&evlist->core, evsel) {
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->attr.type);
> > - TEST_ASSERT_VAL("wrong config",
> > - test_perf_config(evsel, PERF_COUNT_HW_INSTRUCTIONS));
> > + int ret = assert_hw(evsel, PERF_COUNT_HW_INSTRUCTIONS, "instructions");
> > +
> > + if (ret)
> > + return ret;
> > }
> > +
> > return TEST_OK;
> > }
> >
> > @@ -183,8 +201,10 @@ static int test__checkevent_symbolic_name_config(struct evlist *evlist)
> > TEST_ASSERT_VAL("wrong number of entries", 0 != evlist->core.nr_entries);
> >
> > perf_evlist__for_each_evsel(&evlist->core, evsel) {
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_perf_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> > + int ret = assert_hw(evsel, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> > +
> > + if (ret)
> > + return ret;
> > /*
> > * The period value gets configured within evlist__config,
> > * while this test executes only parse events method.
> > @@ -861,10 +881,14 @@ static int test__group1(struct evlist *evlist)
> > evlist__nr_groups(evlist) == num_core_entries());
> >
> > for (int i = 0; i < num_core_entries(); i++) {
> > + int ret;
> > +
> > /* instructions:k */
> > evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_INSTRUCTIONS));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_INSTRUCTIONS, "instructions");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
> > @@ -878,8 +902,10 @@ static int test__group1(struct evlist *evlist)
> >
> > /* cycles:upp */
> > evsel = evsel__next(evsel);
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
> > @@ -907,6 +933,8 @@ static int test__group2(struct evlist *evlist)
> > TEST_ASSERT_VAL("wrong number of groups", 1 == evlist__nr_groups(evlist));
> >
> > evlist__for_each_entry(evlist, evsel) {
> > + int ret;
> > +
> > if (evsel->core.attr.type == PERF_TYPE_SOFTWARE) {
> > /* faults + :ku modifier */
> > leader = evsel;
> > @@ -939,8 +967,10 @@ static int test__group2(struct evlist *evlist)
> > continue;
> > }
> > /* cycles:k */
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
> > @@ -957,6 +987,7 @@ static int test__group2(struct evlist *evlist)
> > static int test__group3(struct evlist *evlist __maybe_unused)
> > {
> > struct evsel *evsel, *group1_leader = NULL, *group2_leader = NULL;
> > + int ret;
> >
> > TEST_ASSERT_VAL("wrong number of entries",
> > evlist->core.nr_entries == (3 * perf_pmus__num_core_pmus() + 2));
> > @@ -1045,8 +1076,10 @@ static int test__group3(struct evlist *evlist __maybe_unused)
> > continue;
> > }
> > /* instructions:u */
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_INSTRUCTIONS));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_INSTRUCTIONS, "instructions");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
> > @@ -1070,10 +1103,14 @@ static int test__group4(struct evlist *evlist __maybe_unused)
> > num_core_entries() == evlist__nr_groups(evlist));
> >
> > for (int i = 0; i < num_core_entries(); i++) {
> > + int ret;
> > +
> > /* cycles:u + p */
> > evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
> > @@ -1089,8 +1126,10 @@ static int test__group4(struct evlist *evlist __maybe_unused)
> >
> > /* instructions:kp + p */
> > evsel = evsel__next(evsel);
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_INSTRUCTIONS));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_INSTRUCTIONS, "instructions");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
> > @@ -1108,6 +1147,7 @@ static int test__group4(struct evlist *evlist __maybe_unused)
> > static int test__group5(struct evlist *evlist __maybe_unused)
> > {
> > struct evsel *evsel = NULL, *leader;
> > + int ret;
> >
> > TEST_ASSERT_VAL("wrong number of entries",
> > evlist->core.nr_entries == (5 * num_core_entries()));
> > @@ -1117,8 +1157,10 @@ static int test__group5(struct evlist *evlist __maybe_unused)
> > for (int i = 0; i < num_core_entries(); i++) {
> > /* cycles + G */
> > evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
> > @@ -1133,8 +1175,10 @@ static int test__group5(struct evlist *evlist __maybe_unused)
> >
> > /* instructions + G */
> > evsel = evsel__next(evsel);
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_INSTRUCTIONS));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_INSTRUCTIONS, "instructions");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
> > @@ -1148,8 +1192,10 @@ static int test__group5(struct evlist *evlist __maybe_unused)
> > for (int i = 0; i < num_core_entries(); i++) {
> > /* cycles:G */
> > evsel = leader = evsel__next(evsel);
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
> > @@ -1164,8 +1210,10 @@ static int test__group5(struct evlist *evlist __maybe_unused)
> >
> > /* instructions:G */
> > evsel = evsel__next(evsel);
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_INSTRUCTIONS));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_INSTRUCTIONS, "instructions");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
> > @@ -1178,8 +1226,10 @@ static int test__group5(struct evlist *evlist __maybe_unused)
> > for (int i = 0; i < num_core_entries(); i++) {
> > /* cycles */
> > evsel = evsel__next(evsel);
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
> > @@ -1201,10 +1251,14 @@ static int test__group_gh1(struct evlist *evlist)
> > evlist__nr_groups(evlist) == num_core_entries());
> >
> > for (int i = 0; i < num_core_entries(); i++) {
> > + int ret;
> > +
> > /* cycles + :H group modifier */
> > evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
> > @@ -1218,8 +1272,10 @@ static int test__group_gh1(struct evlist *evlist)
> >
> > /* cache-misses:G + :H group modifier */
> > evsel = evsel__next(evsel);
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CACHE_MISSES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CACHE_MISSES, "cache-misses");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
> > @@ -1242,10 +1298,14 @@ static int test__group_gh2(struct evlist *evlist)
> > evlist__nr_groups(evlist) == num_core_entries());
> >
> > for (int i = 0; i < num_core_entries(); i++) {
> > + int ret;
> > +
> > /* cycles + :G group modifier */
> > evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
> > @@ -1259,8 +1319,10 @@ static int test__group_gh2(struct evlist *evlist)
> >
> > /* cache-misses:H + :G group modifier */
> > evsel = evsel__next(evsel);
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CACHE_MISSES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CACHE_MISSES, "cache-misses");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
> > @@ -1283,10 +1345,14 @@ static int test__group_gh3(struct evlist *evlist)
> > evlist__nr_groups(evlist) == num_core_entries());
> >
> > for (int i = 0; i < num_core_entries(); i++) {
> > + int ret;
> > +
> > /* cycles:G + :u group modifier */
> > evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
> > @@ -1300,8 +1366,10 @@ static int test__group_gh3(struct evlist *evlist)
> >
> > /* cache-misses:H + :u group modifier */
> > evsel = evsel__next(evsel);
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CACHE_MISSES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CACHE_MISSES, "cache-misses");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
> > @@ -1324,10 +1392,14 @@ static int test__group_gh4(struct evlist *evlist)
> > evlist__nr_groups(evlist) == num_core_entries());
> >
> > for (int i = 0; i < num_core_entries(); i++) {
> > + int ret;
> > +
> > /* cycles:G + :uG group modifier */
> > evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
> > @@ -1341,8 +1413,10 @@ static int test__group_gh4(struct evlist *evlist)
> >
> > /* cache-misses:H + :uG group modifier */
> > evsel = evsel__next(evsel);
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CACHE_MISSES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CACHE_MISSES, "cache-misses");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
> > @@ -1363,10 +1437,14 @@ static int test__leader_sample1(struct evlist *evlist)
> > evlist->core.nr_entries == (3 * num_core_entries()));
> >
> > for (int i = 0; i < num_core_entries(); i++) {
> > + int ret;
> > +
> > /* cycles - sampling group leader */
> > evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
> > @@ -1379,8 +1457,10 @@ static int test__leader_sample1(struct evlist *evlist)
> >
> > /* cache-misses - not sampling */
> > evsel = evsel__next(evsel);
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CACHE_MISSES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CACHE_MISSES, "cache-misses");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
> > @@ -1392,8 +1472,10 @@ static int test__leader_sample1(struct evlist *evlist)
> >
> > /* branch-misses - not sampling */
> > evsel = evsel__next(evsel);
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_BRANCH_MISSES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_BRANCH_MISSES, "branch-misses");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", !evsel->core.attr.exclude_hv);
> > @@ -1415,10 +1497,14 @@ static int test__leader_sample2(struct evlist *evlist __maybe_unused)
> > evlist->core.nr_entries == (2 * num_core_entries()));
> >
> > for (int i = 0; i < num_core_entries(); i++) {
> > + int ret;
> > +
> > /* instructions - sampling group leader */
> > evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_INSTRUCTIONS));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_INSTRUCTIONS, "instructions");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
> > @@ -1431,8 +1517,10 @@ static int test__leader_sample2(struct evlist *evlist __maybe_unused)
> >
> > /* branch-misses - not sampling */
> > evsel = evsel__next(evsel);
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_BRANCH_MISSES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_BRANCH_MISSES, "branch-misses");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclude_user", !evsel->core.attr.exclude_user);
> > TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
> > TEST_ASSERT_VAL("wrong exclude_hv", evsel->core.attr.exclude_hv);
> > @@ -1472,10 +1560,14 @@ static int test__pinned_group(struct evlist *evlist)
> > evlist->core.nr_entries == (3 * num_core_entries()));
> >
> > for (int i = 0; i < num_core_entries(); i++) {
> > + int ret;
> > +
> > /* cycles - group leader */
> > evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong group name", !evsel->group_name);
> > TEST_ASSERT_VAL("wrong leader", evsel__has_leader(evsel, leader));
> > /* TODO: The group modifier is not copied to the split group leader. */
> > @@ -1484,13 +1576,18 @@ static int test__pinned_group(struct evlist *evlist)
> >
> > /* cache-misses - can not be pinned, but will go on with the leader */
> > evsel = evsel__next(evsel);
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CACHE_MISSES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CACHE_MISSES, "cache-misses");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong pinned", !evsel->core.attr.pinned);
> >
> > /* branch-misses - ditto */
> > evsel = evsel__next(evsel);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_BRANCH_MISSES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_BRANCH_MISSES, "branch-misses");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong pinned", !evsel->core.attr.pinned);
> > }
> > return TEST_OK;
> > @@ -1517,10 +1614,14 @@ static int test__exclusive_group(struct evlist *evlist)
> > evlist->core.nr_entries == 3 * num_core_entries());
> >
> > for (int i = 0; i < num_core_entries(); i++) {
> > + int ret;
> > +
> > /* cycles - group leader */
> > evsel = leader = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong group name", !evsel->group_name);
> > TEST_ASSERT_VAL("wrong leader", evsel__has_leader(evsel, leader));
> > /* TODO: The group modifier is not copied to the split group leader. */
> > @@ -1529,13 +1630,18 @@ static int test__exclusive_group(struct evlist *evlist)
> >
> > /* cache-misses - can not be pinned, but will go on with the leader */
> > evsel = evsel__next(evsel);
> > - TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CACHE_MISSES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_CACHE_MISSES, "cache-misses");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclusive", !evsel->core.attr.exclusive);
> >
> > /* branch-misses - ditto */
> > evsel = evsel__next(evsel);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_BRANCH_MISSES));
> > + ret = assert_hw(&evsel->core, PERF_COUNT_HW_BRANCH_MISSES, "branch-misses");
> > + if (ret)
> > + return ret;
> > +
> > TEST_ASSERT_VAL("wrong exclusive", !evsel->core.attr.exclusive);
> > }
> > return TEST_OK;
> > @@ -1677,9 +1783,11 @@ static int test__checkevent_raw_pmu(struct evlist *evlist)
> > static int test__sym_event_slash(struct evlist *evlist)
> > {
> > struct evsel *evsel = evlist__first(evlist);
> > + int ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> > +
> > + if (ret)
> > + return ret;
> >
> > - TEST_ASSERT_VAL("wrong type", evsel->core.attr.type == PERF_TYPE_HARDWARE);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> > TEST_ASSERT_VAL("wrong exclude_kernel", evsel->core.attr.exclude_kernel);
> > return TEST_OK;
> > }
> > @@ -1687,9 +1795,11 @@ static int test__sym_event_slash(struct evlist *evlist)
> > static int test__sym_event_dc(struct evlist *evlist)
> > {
> > struct evsel *evsel = evlist__first(evlist);
> > + int ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> > +
> > + if (ret)
> > + return ret;
> >
> > - TEST_ASSERT_VAL("wrong type", evsel->core.attr.type == PERF_TYPE_HARDWARE);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> > TEST_ASSERT_VAL("wrong exclude_user", evsel->core.attr.exclude_user);
> > return TEST_OK;
> > }
> > @@ -1697,9 +1807,11 @@ static int test__sym_event_dc(struct evlist *evlist)
> > static int test__term_equal_term(struct evlist *evlist)
> > {
> > struct evsel *evsel = evlist__first(evlist);
> > + int ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> > +
> > + if (ret)
> > + return ret;
> >
> > - TEST_ASSERT_VAL("wrong type", evsel->core.attr.type == PERF_TYPE_HARDWARE);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> > TEST_ASSERT_VAL("wrong name setting", strcmp(evsel->name, "name") == 0);
> > return TEST_OK;
> > }
> > @@ -1707,9 +1819,11 @@ static int test__term_equal_term(struct evlist *evlist)
> > static int test__term_equal_legacy(struct evlist *evlist)
> > {
> > struct evsel *evsel = evlist__first(evlist);
> > + int ret = assert_hw(&evsel->core, PERF_COUNT_HW_CPU_CYCLES, "cycles");
> > +
> > + if (ret)
> > + return ret;
> >
> > - TEST_ASSERT_VAL("wrong type", evsel->core.attr.type == PERF_TYPE_HARDWARE);
> > - TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
> > TEST_ASSERT_VAL("wrong name setting", strcmp(evsel->name, "l1d") == 0);
> > return TEST_OK;
> > }
> > diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
> > index aa2f5c6fc7fc..767aa718faa5 100644
> > --- a/tools/perf/util/parse-events.c
> > +++ b/tools/perf/util/parse-events.c
> > @@ -976,7 +976,7 @@ static int config_term_pmu(struct perf_event_attr *attr,
> > struct parse_events_error *err)
> > {
> > if (term->type_term == PARSE_EVENTS__TERM_TYPE_LEGACY_CACHE) {
> > - const struct perf_pmu *pmu = perf_pmus__find_by_type(attr->type);
> > + struct perf_pmu *pmu = perf_pmus__find_by_type(attr->type);
> >
> > if (!pmu) {
> > char *err_str;
> > @@ -986,15 +986,23 @@ static int config_term_pmu(struct perf_event_attr *attr,
> > err_str, /*help=*/NULL);
> > return -EINVAL;
> > }
> > - if (perf_pmu__supports_legacy_cache(pmu)) {
> > + /*
> > + * Rewrite the PMU event to a legacy cache one unless the PMU
> > + * doesn't support legacy cache events or the event is present
> > + * within the PMU.
> > + */
> > + if (perf_pmu__supports_legacy_cache(pmu) &&
> > + !perf_pmu__have_event(pmu, term->config)) {
> > attr->type = PERF_TYPE_HW_CACHE;
> > return parse_events__decode_legacy_cache(term->config, pmu->type,
> > &attr->config);
> > - } else
> > + } else {
> > term->type_term = PARSE_EVENTS__TERM_TYPE_USER;
> > + term->no_value = true;
> > + }
> > }
> > if (term->type_term == PARSE_EVENTS__TERM_TYPE_HARDWARE) {
> > - const struct perf_pmu *pmu = perf_pmus__find_by_type(attr->type);
> > + struct perf_pmu *pmu = perf_pmus__find_by_type(attr->type);
> >
> > if (!pmu) {
> > char *err_str;
> > @@ -1004,10 +1012,19 @@ static int config_term_pmu(struct perf_event_attr *attr,
> > err_str, /*help=*/NULL);
> > return -EINVAL;
> > }
> > - attr->type = PERF_TYPE_HARDWARE;
> > - attr->config = term->val.num;
> > - if (perf_pmus__supports_extended_type())
> > - attr->config |= (__u64)pmu->type << PERF_PMU_TYPE_SHIFT;
> > + /*
> > + * If the PMU has a sysfs or json event prefer it over
> > + * legacy. ARM requires this.
> > + */
> > + if (perf_pmu__have_event(pmu, term->config)) {
> > + term->type_term = PARSE_EVENTS__TERM_TYPE_USER;
> > + term->no_value = true;
> > + } else {
> > + attr->type = PERF_TYPE_HARDWARE;
> > + attr->config = term->val.num;
> > + if (perf_pmus__supports_extended_type())
> > + attr->config |= (__u64)pmu->type << PERF_PMU_TYPE_SHIFT;
> > + }
> > return 0;
> > }
> > if (term->type_term == PARSE_EVENTS__TERM_TYPE_USER ||
> > @@ -1381,6 +1398,7 @@ int parse_events_add_pmu(struct parse_events_state *parse_state,
> > YYLTYPE *loc = loc_;
> > LIST_HEAD(config_terms);
> > struct parse_events_terms parsed_terms;
> > + bool alias_rewrote_terms;
> >
> > pmu = parse_state->fake_pmu ?: perf_pmus__find(name);
> >
> > @@ -1433,7 +1451,15 @@ int parse_events_add_pmu(struct parse_events_state *parse_state,
> > return evsel ? 0 : -ENOMEM;
> > }
> >
> > - if (!parse_state->fake_pmu && perf_pmu__check_alias(pmu, &parsed_terms, &info, err)) {
> > + /* Configure attr/terms with a known PMU, this will set hardcoded terms. */
> > + if (config_attr(&attr, &parsed_terms, parse_state->error, config_term_pmu)) {
> > + parse_events_terms__exit(&parsed_terms);
> > + return -EINVAL;
> > + }
> > +
> > + /* Look for event names in the terms and rewrite into format based terms. */
> > + if (!parse_state->fake_pmu && perf_pmu__check_alias(pmu, &parsed_terms,
> > + &info, &alias_rewrote_terms, err)) {
> > parse_events_terms__exit(&parsed_terms);
> > return -EINVAL;
> > }
> > @@ -1447,11 +1473,9 @@ int parse_events_add_pmu(struct parse_events_state *parse_state,
> > strbuf_release(&sb);
> > }
> >
> > - /*
> > - * Configure hardcoded terms first, no need to check
> > - * return value when called with fail == 0 ;)
> > - */
> > - if (config_attr(&attr, &parsed_terms, parse_state->error, config_term_pmu)) {
> > + /* Configure attr/terms again if an alias was expanded. */
> > + if (alias_rewrote_terms &&
> > + config_attr(&attr, &parsed_terms, parse_state->error, config_term_pmu)) {
> > parse_events_terms__exit(&parsed_terms);
> > return -EINVAL;
> > }
> > diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
> > index d3c9aa4326be..3c9609944a2f 100644
> > --- a/tools/perf/util/pmu.c
> > +++ b/tools/perf/util/pmu.c
> > @@ -1494,12 +1494,14 @@ static int check_info_data(struct perf_pmu *pmu,
> > * defined for the alias
> > */
> > int perf_pmu__check_alias(struct perf_pmu *pmu, struct parse_events_terms *head_terms,
> > - struct perf_pmu_info *info, struct parse_events_error *err)
> > + struct perf_pmu_info *info, bool *rewrote_terms,
> > + struct parse_events_error *err)
> > {
> > struct parse_events_term *term, *h;
> > struct perf_pmu_alias *alias;
> > int ret;
> >
> > + *rewrote_terms = false;
> > info->per_pkg = false;
> >
> > /*
> > @@ -1521,7 +1523,7 @@ int perf_pmu__check_alias(struct perf_pmu *pmu, struct parse_events_terms *head_
> > NULL);
> > return ret;
> > }
> > -
> > + *rewrote_terms = true;
> > ret = check_info_data(pmu, alias, info, err, term->err_term);
> > if (ret)
> > return ret;
> > @@ -1615,6 +1617,8 @@ bool perf_pmu__auto_merge_stats(const struct perf_pmu *pmu)
> >
> > bool perf_pmu__have_event(struct perf_pmu *pmu, const char *name)
> > {
> > + if (!name)
> > + return false;
> > if (perf_pmu__find_alias(pmu, name, /*load=*/ true) != NULL)
> > return true;
> > if (pmu->cpu_aliases_added || !pmu->events_table)
> > diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h
> > index d2895d415f08..424c3fee0949 100644
> > --- a/tools/perf/util/pmu.h
> > +++ b/tools/perf/util/pmu.h
> > @@ -201,7 +201,8 @@ int perf_pmu__config_terms(const struct perf_pmu *pmu,
> > __u64 perf_pmu__format_bits(struct perf_pmu *pmu, const char *name);
> > int perf_pmu__format_type(struct perf_pmu *pmu, const char *name);
> > int perf_pmu__check_alias(struct perf_pmu *pmu, struct parse_events_terms *head_terms,
> > - struct perf_pmu_info *info, struct parse_events_error *err);
> > + struct perf_pmu_info *info, bool *rewrote_terms,
> > + struct parse_events_error *err);
> > int perf_pmu__find_event(struct perf_pmu *pmu, const char *event, void *state, pmu_event_callback cb);
> >
> > int perf_pmu__format_parse(struct perf_pmu *pmu, int dirfd, bool eager_load);
> > --
> > 2.43.0.rc1.413.gea7ed67945-goog
> >
On Thu, Nov 23, 2023 at 7:16 AM Marc Zyngier <[email protected]> wrote:
>
> On Thu, 23 Nov 2023 04:29:22 +0000,
> Ian Rogers <[email protected]> wrote:
> >
> > The perf tool has previously made legacy events the priority so with
> > or without a PMU the legacy event would be opened:
> >
> > ```
> > $ perf stat -e cpu-cycles,cpu/cpu-cycles/ true
> > Using CPUID GenuineIntel-6-8D-1
> > intel_pt default config: tsc,mtc,mtc_period=3,psb_period=3,pt,branch
> > Attempting to add event pmu 'cpu' with 'cpu-cycles,' that may result in non-fatal errors
> > After aliases, add event pmu 'cpu' with 'cpu-cycles,' that may result in non-fatal errors
> > Control descriptor is not initialized
> > ------------------------------------------------------------
> > perf_event_attr:
> > type 0 (PERF_TYPE_HARDWARE)
> > size 136
> > config 0 (PERF_COUNT_HW_CPU_CYCLES)
> > sample_type IDENTIFIER
> > read_format TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
> > disabled 1
> > inherit 1
> > enable_on_exec 1
> > exclude_guest 1
> > ------------------------------------------------------------
> > sys_perf_event_open: pid 833967 cpu -1 group_fd -1 flags 0x8 = 3
> > ------------------------------------------------------------
> > perf_event_attr:
> > type 0 (PERF_TYPE_HARDWARE)
> > size 136
> > config 0 (PERF_COUNT_HW_CPU_CYCLES)
> > sample_type IDENTIFIER
> > read_format TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
> > disabled 1
> > inherit 1
> > enable_on_exec 1
> > exclude_guest 1
> > ------------------------------------------------------------
> > ...
> > ```
> >
> > Fixes to make hybrid/BIG.little PMUs behave correctly, ie as core PMUs
> > capable of opening legacy events on each, removing hard coded
> > "cpu_core" and "cpu_atom" Intel PMU names, etc. caused a behavioral
> > difference on Apple/ARM due to latent issues in the PMU driver
> > reported in:
> > https://lore.kernel.org/lkml/[email protected]/
> >
>
> What issue? So far, I don't see anything that remotely looks like a
> kernel issue.
>
> > As part of that report Mark Rutland <[email protected]> requested
> > that legacy events not be higher in priority when a PMU is specified
> > reversing what has until this change been perf's default
> > behavior. With this change the above becomes:
> >
> > ```
> > $ perf stat -e cpu-cycles,cpu/cpu-cycles/ true
> > Using CPUID GenuineIntel-6-8D-1
> > Attempt to add: cpu/cpu-cycles=0/
> > ..after resolving event: cpu/event=0x3c/
> > Control descriptor is not initialized
> > ------------------------------------------------------------
> > perf_event_attr:
> > type 0 (PERF_TYPE_HARDWARE)
> > size 136
> > config 0 (PERF_COUNT_HW_CPU_CYCLES)
> > sample_type IDENTIFIER
> > read_format TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
> > disabled 1
> > inherit 1
> > enable_on_exec 1
> > exclude_guest 1
> > ------------------------------------------------------------
> > sys_perf_event_open: pid 827628 cpu -1 group_fd -1 flags 0x8 = 3
> > ------------------------------------------------------------
> > perf_event_attr:
> > type 4 (PERF_TYPE_RAW)
> > size 136
> > config 0x3c
> > sample_type IDENTIFIER
> > read_format TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
> > disabled 1
> > inherit 1
> > enable_on_exec 1
> > exclude_guest 1
> > ------------------------------------------------------------
> > ...
> > ```
> >
> > So the second event has become a raw event as
> > /sys/devices/cpu/events/cpu-cycles exists.
> >
> > A fix was necessary to config_term_pmu in parse-events.c as
> > check_alias expansion needs to happen after config_term_pmu, and
> > config_term_pmu may need calling a second time because of this.
> >
> > config_term_pmu is updated to not use the legacy event when the PMU
> > has such a named event (either from json or sysfs).
> >
> > The bulk of this change is updating all of the parse-events test
> > expectations so that if a sysfs/json event exists for a PMU the test
> > doesn't fail - a further sign, if it were needed, that the legacy
> > event priority was a known and tested behavior of the perf tool.
> >
> > Signed-off-by: Ian Rogers <[email protected]>
>
> Reported-by:, Link: and Fixes: tags would be appreciated.
>
> > ---
> > This is a large behavioral change:
> > 1) the scope of the change means it should bake on linux-next and I
> > don't believe should be a 6.7-rc fix.
>
> I entirely disagree. Distros are shipping a broken perf tool.
>
> > 2) a fixes tag and stable backport I don't think are appropriate. The
> > real reported issue is with the PMU driver. A backport would bring the
> > risk that later fixes, due to the large behavior change, wouldn't be
> > backported and past releases get regressed in scenarios like
> > hybrid. Backports for the perf tool are also less necessary than say a
> > buggy PMU driver, as distributions should be updating to the latest
> > perf tool regardless of what Linux kernel is being run (the perf tool
> > is backward compatible).
>
> Again, perf gets shipped in distros, and not necessarily as the latest
> version. Rather, they tend to ship the version matching the kernel. No
> backport, buggy perf.
Please complain to the distros. I complained to Debian, we got rid of
the horrible wrapper script thing they did. I complained to two
separate Ubuntu people over the last two weeks as they still have
broken packaging even though they derive from Debian. Fedora is of
course perfect as Arnaldo oversees it :-)
> And again, I don't see a bug in the PMU driver.
Whether the PMU driver is asked for the legacy cycles event or for the
cycles event as a raw event code, the driver should support it.
Supporting legacy events is just something core PMU drivers do. This
workaround wouldn't be necessary were it not for this PMU bug.
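To make the two request shapes concrete, here is a minimal sketch of the
two perf_event_attr setups seen in the debug dumps quoted above for the
same counter (the 0x3c event code is specific to that Intel CPU; on an
ARM PMU the equivalent code would come from its sysfs/json events), both
of which a core PMU driver is expected to count as cycles:
```
#include <linux/perf_event.h>
#include <string.h>

/* Legacy request: generic hardware cycles, the kernel maps it to the PMU. */
static void cycles_legacy(struct perf_event_attr *attr)
{
	memset(attr, 0, sizeof(*attr));
	attr->size = sizeof(*attr);
	attr->type = PERF_TYPE_HARDWARE;
	attr->config = PERF_COUNT_HW_CPU_CYCLES;
}

/* Event-code request: the PMU-specific number, 0x3c in the x86 dump above. */
static void cycles_event_code(struct perf_event_attr *attr, __u64 code)
{
	memset(attr, 0, sizeof(*attr));
	attr->size = sizeof(*attr);
	attr->type = PERF_TYPE_RAW;
	attr->config = code;
}
```
Either attr would then be handed to perf_event_open() as in the traces
above.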
This change impacts every user of perf; it is not just a partial fix to
work around ARM PMU driver issues. See the updated parse-events test
for a list of what a simple test sees as a behavior change.
Thanks,
Ian
> Now, in the interest of moving forward: this patch seems to address
> the basic problems I was seeing on both M1/M2 (running any kernel
> version) and Juno (running an older kernel), so:
>
> Tested-by: Marc Zyngier <[email protected]>
>
> Thanks,
>
> M.
>
> --
> Without deviation from the norm, progress is not possible.
On Thu, 23 Nov 2023 15:27:54 +0000,
Ian Rogers <[email protected]> wrote:
>
> On Thu, Nov 23, 2023 at 7:16 AM Marc Zyngier <[email protected]> wrote:
> >
> > Again, perf gets shipped in distros, and not necessarily as the latest
> > version. Rather, they tend to ship the version matching the kernel. No
> > backport, buggy perf.
>
> Please complain to the distros. I complained to Debian, we got rid of
> the horrible wrapper script thing they did. I complained to two
> separate Ubuntu people over the last two weeks as they still have
> broken packaging even though they derive from Debian. Fedora is of
> course perfect as Arnaldo oversees it :-)
In this instance, I don't need to complain to anyone but you. And
guess what: it is on Fedora that this issue was first discovered.
I also don't see what distro packaging policy has to do with the issue
at hand, but that's beside the point.
>
> > And again, I don't see a bug in the PMU driver.
>
> Whether the PMU driver is asked for the legacy cycles event or for the
> cycles event as a raw event code, the driver should support it.
> Supporting legacy events is just something core PMU drivers do. This
> workaround wouldn't be necessary were it not for this PMU bug.
Again, *which* PMU bug? What is a legacy event, and when has this
terminology made it into the kernel? Who has decided that a change was
necessary? Why haven't you submitted patches upgrading all the PMU
drivers to support whatever you are referring to?
> This change impacts every user of perf; it is not just a partial fix to
> work around ARM PMU driver issues. See the updated parse-events test
> for a list of what a simple test sees as a behavior change.
When making far-reaching changes to a subsystem, I apply two rules:
- I address everything that is affected, not just my pet architecture
- I don't break other people's toys, which means compatibility is a
*must*, not a 'nice to have'
By this standard, your complaining that "ARM is broken" doesn't hold.
It was working just fine until your changes rendered perf unusable.
Nonetheless, thank you for addressing it quickly. This is sincerely
appreciated.
M.
--
Without deviation from the norm, progress is not possible.
On Thu, Nov 23, 2023 at 8:09 AM Marc Zyngier <[email protected]> wrote:
>
> On Thu, 23 Nov 2023 15:27:54 +0000,
> Ian Rogers <[email protected]> wrote:
> >
> > On Thu, Nov 23, 2023 at 7:16 AM Marc Zyngier <[email protected]> wrote:
> > >
> > > Again, perf gets shipped in distros, and not necessarily as the latest
> > > version. Rather, they tend to ship the version matching the kernel. No
> > > backport, buggy perf.
> >
> > Please complain to the distros. I complained to Debian, we got rid of
> > the horrible wrapper script thing they did. I complained to two
> > separate Ubuntu people over the last two weeks as they still have
> > broken packaging even though they derive from Debian. Fedora is of
> > course perfect as Arnaldo oversees it :-)
>
> In this instance, I don't need to complain to anyone but you. And
> guess what: it is on Fedora that this issue was first discovered.
>
> I also don't see what distro packaging policy has to do with the issue
> at hand, but that's beside the point.
Because the latest perf tool is always improved and carries fixes, just
like, say, gcc or clang. We don't ask those tools to backport fixes and
then deliberately run out-of-date versions of them.
> >
> > > And again, I don't see a bug in the PMU driver.
> >
> > Whether the PMU driver is asked for the legacy cycles event or for the
> > cycles event as a raw event code, the driver should support it.
> > Supporting legacy events is just something core PMU drivers do. This
> > workaround wouldn't be necessary were it not for this PMU bug.
>
> Again, *which* PMU bug? What is a legacy event, and when has this
> terminology made it into the kernel? Who has decided that a change was
> necessary? Why haven't you submitted patches upgrading all the PMU
> drivers to support whatever you are referring to?
I did fix ARM's PMU driver for extended types; James Clark took over
the patch. The term legacy has been in use in kernel source code for
over 11 years:
http://lkml.kernel.org/r/[email protected]
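For context, the extended-type encoding referred to above is the one the
parse-events.c hunk earlier in this thread applies: the target core
PMU's type is packed into the upper bits of a PERF_TYPE_HARDWARE
config. A minimal sketch, assuming a UAPI header new enough to define
PERF_PMU_TYPE_SHIFT and PERF_HW_EVENT_MASK:
```
#include <linux/perf_event.h>

/*
 * Pack the target core PMU's type (read from
 * /sys/bus/event_source/devices/<pmu>/type) into the upper bits of a
 * PERF_TYPE_HARDWARE config, as the parse-events.c hunk above does.
 */
static __u64 hw_config_for_pmu(__u32 pmu_type, __u64 hw_event)
{
	return (hw_event & PERF_HW_EVENT_MASK) |
	       ((__u64)pmu_type << PERF_PMU_TYPE_SHIFT);
}
```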
An issue I face in fixing somebody else's PMU driver is that being able
to test is extremely useful. The work done with James was done blind by
me, except for checking for regressions on a Raspberry Pi 4, which isn't
heterogeneous (nor is the 5 *sigh*). The fact that there were bugs in
ARM's PMU driver for so long shows a lack of testing by ARM, and we've
been going out of our way to increase testing. Something positive ARM
could do in this area is to update the parse-events test, yes, the one
that is supposed to catch issues like this, so that the hardcoded "cpu"
PMU assumption, which works on most platforms that name their core PMU
"cpu", also works on ARM. For bonus points, setting up testing so that
we know when things break would be useful. As mentioned in previous
emails, I hope to move away from needing an actual machine to test the
perf tool's correctness, but we're a long way from that. There are very
many BIG.little Android devices in the field where the PMUs are not set
up as heterogeneous; ARM could contribute a CTS test to Android to make
sure this doesn't happen.
Thanks,
Ian
> > This change impacts every user of perf; it is not just a partial fix to
> > work around ARM PMU driver issues. See the updated parse-events test
> > for a list of what a simple test sees as a behavior change.
>
> When making far-reaching changes to a subsystem, I apply two rules:
>
> - I address everything that is affected, not just my pet architecture
>
> - I don't break other people's toys, which means compatibility is a
> *must*, not a 'nice to have'
>
> By this standard, your complaining that "ARM is broken" doesn't hold.
> It was working just fine until your changes rendered perf unusable.
>
> Nonetheless, thank you for addressing it quickly. This is sincerely
> appreciated.
>
> M.
>
> --
> Without deviation from the norm, progress is not possible.
Em Thu, Nov 23, 2023 at 11:18:22AM -0300, Arnaldo Carvalho de Melo escreveu:
> Em Thu, Nov 23, 2023 at 05:45:19PM +0900, Hector Martin escreveu:
> > On 2023/11/23 13:29, Ian Rogers wrote:
> > > The bulk of this change is updating all of the parse-events test
> > > expectations so that if a sysfs/json event exists for a PMU the test
> > > doesn't fail - a further sign, if it were needed, that the legacy
> > > event priority was a known and tested behavior of the perf tool.
>
> > > Signed-off-by: Ian Rogers <[email protected]>
> > > ---
> > > This is a large behavioral change:
> > > 1) the scope of the change means it should bake on linux-next and I
> > > don't believe should be a 6.7-rc fix.
> > > 2) a fixes tag and stable backport I don't think are appropriate. The
> > > real reported issue is with the PMU driver. A backport would bring the
> > > risk that later fixes, due to the large behavior change, wouldn't be
> > > backported and past releases get regressed in scenarios like
> > > hybrid. Backports for the perf tool are also less necessary than say a
> > > buggy PMU driver, as distributions should be updating to the latest
> > > perf tool regardless of what Linux kernel is being run (the perf tool
> > > is backward compatible).
> >
> > Tested-by: Hector Martin <[email protected]>
>
> Thanks, applied locally, doing some tests and then will push for
> linux-next to pick it up.
>
> Mark, can I have your Reviewed-by or Acked-by?
I'll collect those, but only after addressing these:
[perfbuilder@five ~]$ export BUILD_TARBALL=http://192.168.86.5/perf/perf-6.6.0-rc1.tar.xz
[perfbuilder@five ~]$ time dm
1 100.09 almalinux:8 : FAIL clang version 15.0.7 (Red Hat 15.0.7-1.module_el8.8.0+3466+dfcbc058)
util/parse-events.c:1461:6: error: variable 'alias_rewrote_terms' is used uninitialized whenever '&&' condition is false [-Werror,-Wsometimes-uninitialized]
if (!parse_state->fake_pmu && perf_pmu__check_alias(pmu, &parsed_terms,
^~~~~~~~~~~~~~~~~~~~~~
util/parse-events.c:1477:6: note: uninitialized use occurs here
if (alias_rewrote_terms &&
^~~~~~~~~~~~~~~~~~~
util/parse-events.c:1461:6: note: remove the '&&' if its condition is always true
if (!parse_state->fake_pmu && perf_pmu__check_alias(pmu, &parsed_terms,
^~~~~~~~~~~~~~~~~~~~~~~~~
util/parse-events.c:1401:26: note: initialize the variable 'alias_rewrote_terms' to silence this warning
bool alias_rewrote_terms;
^
= false
1 error generated.
make[3]: *** [/git/perf-6.6.0-rc1/tools/build/Makefile.build:158: util] Error 2
2 114.29 almalinux:9 : FAIL clang version 15.0.7 (Red Hat 15.0.7-2.el9)
util/parse-events.c:1461:6: error: variable 'alias_rewrote_terms' is used uninitialized whenever '&&' condition is false [-Werror,-Wsometimes-uninitialized]
if (!parse_state->fake_pmu && perf_pmu__check_alias(pmu, &parsed_terms,
^~~~~~~~~~~~~~~~~~~~~~
util/parse-events.c:1477:6: note: uninitialized use occurs here
if (alias_rewrote_terms &&
^~~~~~~~~~~~~~~~~~~
util/parse-events.c:1461:6: note: remove the '&&' if its condition is always true
if (!parse_state->fake_pmu && perf_pmu__check_alias(pmu, &parsed_terms,
^~~~~~~~~~~~~~~~~~~~~~~~~
util/parse-events.c:1401:26: note: initialize the variable 'alias_rewrote_terms' to silence this warning
bool alias_rewrote_terms;
^
= false
1 error generated.
make[3]: *** [/git/perf-6.6.0-rc1/tools/build/Makefile.build:158: util] Error 2
Em Thu, Nov 23, 2023 at 02:37:31PM +0000, Mark Rutland escreveu:
> Hi Ian,
>
> Thanks for this!
Yeah, it seems we're making progress, thanks for the continuous effort
in getting this fixed!
> On Wed, Nov 22, 2023 at 08:29:22PM -0800, Ian Rogers wrote:
> > The perf tool has previously made legacy events the priority so with
> > or without a PMU the legacy event would be opened:
<SNIP>
> > The bulk of this change is updating all of the parse-events test
> > expectations so that if a sysfs/json event exists for a PMU the test
> > doesn't fail - a further sign, if it were needed, that the legacy
> > event priority was a known and tested behavior of the perf tool.
> > Signed-off-by: Ian Rogers <[email protected]>
> Regardless of my comments below, for this patch as-is:
> Acked-by: Mark Rutland <[email protected]>
I'm collecting this even with the problems in some setups so far, thanks
for providing it.
> > ---
> > This is a large behavioral change:
> > 1) the scope of the change means it should bake on linux-next and I
> > don't believe should be a 6.7-rc fix.
>
> I'm happy for this to bake, but I do think it needs to be backported for the
> sake of users, especially given that it *restores* the old behaviour.
>
> > 2) a fixes tag and stable backport I don't think are appropriate.
> For the sake of users I think a fixes tag and stable backport are necessary. In
> practice distributions ship the perf tool associated with their stable kernel,
> so (for better or worse) a stable backport is certainly necessary for distros
> that'll use the v6.6 stable kernel.
Which, as Ian mentioned, is a common misconception: the lack of lockstep
between perf and kernel versions was never properly stated in the
documentation, only in the source code. Look at the
evsel__disable_missing_features() function, which tries to do whatever
it can with what was being asked for (new features on old kernels) and
the laconic responses perf_event_open() gives back to those requests.
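The general idea that function embodies is roughly the following; this
is an illustrative sketch only, not the actual evsel code, and the
exclude_guest bit is just one example of an attr feature an older kernel
may reject:
```
#include <errno.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#include <linux/perf_event.h>

/*
 * Illustrative only, not the actual evsel__disable_missing_features()
 * code: open with the full attr and, when an older kernel rejects a
 * newer attr bit with EINVAL, clear that bit and retry.
 */
static int open_with_fallback(struct perf_event_attr *attr, pid_t pid, int cpu)
{
	int fd = syscall(SYS_perf_event_open, attr, pid, cpu, -1, 0UL);

	if (fd < 0 && errno == EINVAL && attr->exclude_guest) {
		attr->exclude_guest = 0;	/* e.g. pre-exclude_guest kernels */
		fd = syscall(SYS_perf_event_open, attr, pid, cpu, -1, 0UL);
	}
	return fd;
}
```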
But the fact is that most if not all distros think perf is in lockstep
with the kernel, which is not the intent.
That said, for distros that do backports, this is one to be done, and
for [email protected], yeah, I also think this is one to be flagged as
such. But since this hybrid thing has such a miscoordinated
user/kernel/arches history, with such a great number of nuances and
interpretations, I think we had better continue to test it for a while,
in perf-tools-next/perf-tools-next and linux-next, and then flag it for
backports.
> > The real reported issue is with the PMU driver.
>
> Having trawled through the driver and core perf code, I don't believe the PMU
> driver is at fault. Please see my analysis at:
>
> https://lore.kernel.org/lkml/[email protected]/
>
> ... where it looks like the perf tool is dropping the extended type ID in some
> cases.
> If you know of a specific bug in the PMU driver or perf core code, please let
> me know and I will investigate. As it stands we have no evidence of a bug in
> the PMU driver, and pretty clear evidence (as linked above) there there is
> *some* issue in userspace. In the absence of such evidence, please don't assert
> that there must be a kernel bug.
> > A backport would bring the
> > risk that later fixes, due to the large behavior change, wouldn't be
> > backported and past releases get regressed in scenarios like
> > hybrid. Backports for the perf tool are also less necessary than say a
> > buggy PMU driver, as distributions should be updating to the latest
> > perf tool regardless of what Linux kernel is being run (the perf tool
> > is backward compatible).
> As above I believe that a backport is necessary.
Agreed, once we get this tested a bit.
- Arnaldo
Em Thu, Nov 23, 2023 at 07:18:57AM -0800, Ian Rogers escreveu:
> On Thu, Nov 23, 2023 at 6:37 AM Mark Rutland <[email protected]> wrote:
> > On Wed, Nov 22, 2023 at 08:29:22PM -0800, Ian Rogers wrote:
> > > This is a large behavioral change:
> > > 1) the scope of the change means it should bake on linux-next and I
> > > don't believe should be a 6.7-rc fix.
> > I'm happy for this to bake, but I do think it needs to be backported for the
> > sake of users, especially given that it *restores* the old behaviour.
> It is going to change the behavior for a far larger set of users. I'm
> also concerned that:
> ```
> $ perf list
> ...
> cpu-cycles OR cpu/cpu-cycles/ [Kernel PMU event]
> ...
> ```
> implies that cpu/cpu-cycles/ is a synonym for cpu-cycles, which is no
> longer true (or pick another event from sysfs whose name is the same
> as a legacy event). I'm not sure what a fix in perf list for this
> would look like.
It is a mess, indeed; cpu-cycles should be equivalent to
cpu/cpu-cycles/ and map to the same HW PMU counter.
But by now I think we need to just reword the output of perf list to not
equate those events, and instead provide a more informative output, if
that is at all possible.
Something like:
Legacy events:
cpu-cycles: some nice explanation about the intent for this one (Ingo's
probably).
And then:
PMU events: (and this is even less clear :-()
cpu/cpu-cycles/: other nice explanation about the intent for this one,
if different, on this arch, for the "legacy" cpu-cycles one.
The original intent, the one I called "Ingo's", was to try to somehow
generalize some concepts (CPU cycles, for instance) so that we could get
a rough idea that would allow us to compare results using the tools
across different architectures (and micro-arches, etc).
I think that with these generic names we're now in damage control mode:
what matters is that what people got used to continues to hold. And
that is what I think is the main argument here.
And I really think we should just heed people like Hector (perf power
users) and provide a way to aggregate "cycles" over all cores, probably
not in the default output, but in a really simple way, as he seems to
want that and I wouldn't be surprised if that turns out to be something
useful after all 8-).
So back to checking why this patch isn't working on some of the
container arches where I test this whole shebang, sigh.
- Arnaldo
On Thu, Nov 23, 2023 at 06:32:51PM -0300, Arnaldo Carvalho de Melo wrote:
> Em Thu, Nov 23, 2023 at 02:37:31PM +0000, Mark Rutland escreveu:
> > Hi Ian,
> >
> > Thanks for this!
>
> Yeah, it seems we're making progress, thanks for the continuous effort
> in getting this fixed!
>
> > On Wed, Nov 22, 2023 at 08:29:22PM -0800, Ian Rogers wrote:
> > > The perf tool has previously made legacy events the priority so with
> > > or without a PMU the legacy event would be opened:
>
> <SNIP>
>
> > > The bulk of this change is updating all of the parse-events test
> > > expectations so that if a sysfs/json event exists for a PMU the test
> > > doesn't fail - a further sign, if it were needed, that the legacy
> > > event priority was a known and tested behavior of the perf tool.
>
> > > Signed-off-by: Ian Rogers <[email protected]>
>
> > Regardless of my comments below, for this patch as-is:
>
> > Acked-by: Mark Rutland <[email protected]>
>
> I'm collecting this even with the problems in some setups so far, thanks
> for providing it.
That's fine by me, thanks!
> > > ---
> > > This is a large behavioral change:
> > > 1) the scope of the change means it should bake on linux-next and I
> > > don't believe should be a 6.7-rc fix.
> >
> > I'm happy for this to bake, but I do think it needs to be backported for the
> > sake of users, especially given that it *restores* the old behaviour.
> >
> > > 2) a fixes tag and stable backport I don't think are appropriate.
>
> > For the sake of users I think a fixes tag and stable backport are necessary. In
> > practice distributions ship the perf tool associated with their stable kernel,
> > so (for better or worse) a stable backport is certainly necessary for distros
> > that'll use the v6.6 stable kernel.
>
> Which, as Ian mentioned, is a common misconception: the lack of lockstep
> between perf and kernel versions was never properly stated in the
> documentation, only in the source code. Look at the
> evsel__disable_missing_features() function, which tries to do whatever
> it can with what was being asked for (new features on old kernels) and
> the laconic responses perf_event_open() gives back to those requests.
>
> But the fact is that most if not all distros think perf is in lockstep
> with the kernel, which is not the intent.
Sorry, I didn't mean to imply that these were in lockstep.
All I meant was:
(a) Distributions typically ship a stable version of the perf tool, and don't
update to the latest-and-greatest version, but will take stable bug fixes.
Basically what they do for every other userspace application they ship.
Distros with a rolling release policy will update, but they're not the
common case today.
(b) The tool version that distributions happen to ship is typically the same as
the kernel version, since packagers will grab them at the same time, and
they're in the same source tree anyway.
... and hence, if a distro uses kernel v6.6, they are *very* likely to also
pick perf tool version v6.6.
> That said, for distros that do backports, this is one to be done, and
> for [email protected], yeah, I also think this is one to be flagged as
> such. But since this hybrid thing has such a miscoordinated
> user/kernel/arches history, with such a great number of nuances and
> interpretations, I think we had better continue to test it for a while,
> in perf-tools-next/perf-tools-next and linux-next, and then flag it for
> backports.
100% agreed!
Thanks,
Mark.
Em Thu, Nov 23, 2023 at 06:15:09PM -0300, Arnaldo Carvalho de Melo escreveu:
>
> I'll collect those, but only after addressing these:
>
> [perfbuilder@five ~]$ export BUILD_TARBALL=http://192.168.86.5/perf/perf-6.6.0-rc1.tar.xz
> [perfbuilder@five ~]$ time dm
> 1 100.09 almalinux:8 : FAIL clang version 15.0.7 (Red Hat 15.0.7-1.module_el8.8.0+3466+dfcbc058)
> util/parse-events.c:1461:6: error: variable 'alias_rewrote_terms' is used uninitialized whenever '&&' condition is false [-Werror,-Wsometimes-uninitialized]
> if (!parse_state->fake_pmu && perf_pmu__check_alias(pmu, &parsed_terms,
> ^~~~~~~~~~~~~~~~~~~~~~
> util/parse-events.c:1477:6: note: uninitialized use occurs here
> if (alias_rewrote_terms &&
> ^~~~~~~~~~~~~~~~~~~
> util/parse-events.c:1461:6: note: remove the '&&' if its condition is always true
> if (!parse_state->fake_pmu && perf_pmu__check_alias(pmu, &parsed_terms,
> ^~~~~~~~~~~~~~~~~~~~~~~~~
> util/parse-events.c:1401:26: note: initialize the variable 'alias_rewrote_terms' to silence this warning
> bool alias_rewrote_terms;
> ^
> = false
> 1 error generated.
> make[3]: *** [/git/perf-6.6.0-rc1/tools/build/Makefile.build:158: util] Error 2
It built well with gcc but clang didn't notice that
perf_pmu__check_alias() unconditionally initializes that variable to
false as one of its first steps.
So I just did what clang suggested.
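Presumably that amounts to nothing more than the initializer from
clang's note, something like (a sketch, not necessarily the exact
committed hunk):
```
-	bool alias_rewrote_terms;
+	bool alias_rewrote_terms = false;
```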
- Arnaldo
Em Thu, Nov 23, 2023 at 03:16:09PM +0000, Marc Zyngier escreveu:
> Now, in the interest of moving forward: this patch seems to address
> the basic problems I was seeing on both M1/M2 (running any kernel
> version) and Juno (running an older kernel), so:
> Tested-by: Marc Zyngier <[email protected]>
Thanks, added to the cset.
- Arnaldo