2024-03-02 01:00:07

by Ian Rogers

Subject: [PATCH v2 00/12] Foundations for metric generation with Python

Metrics in the perf tool come in via json. Json doesn't allow
comments, line breaks, etc., making it an inconvenient way to write
metrics. Further, when writing a metric it is useful to detect that
the specified event is supported by the event json for a model.

These patches introduce infrastructure and fixes for the addition of
metrics written in python for Arm64, AMD Zen and Intel CPUs. Later
patches will introduce the metrics split apart by the vendor.

v2 fixes two type issues in the python code; there are no functional
or output changes.
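
To illustrate the motivation, here is a hedged sketch (using made-up helper and event names, not the real metric.py API) of what authoring a metric in python rather than json buys: comments, line breaks, and validating events against the model's event json before serializing.

```python
import json

# Hypothetical sketch: a metric built in python, where comments and
# validation are possible, then serialized to the perf json format.
def ipc_metric(model: str) -> dict:
    # In the real series the known events are loaded from the model's
    # event json; this set is an illustrative stand-in.
    known_events = {"instructions", "cycles"}
    expr = "instructions / cycles"
    for event in ("instructions", "cycles"):
        assert event in known_events, f"{event} not in event json for {model}"
    return {
        "MetricName": "ipc",
        "MetricExpr": expr,
        "BriefDescription": "Instructions retired per cycle",
    }

print(json.dumps([ipc_metric("amdzen4")], indent=2))
```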

Ian Rogers (12):
perf jevents: Allow multiple metricgroups.json files
perf jevents: Update metric constraint support
perf jevents: Add descriptions to metricgroup abstraction
perf jevents: Allow metric groups not to be named
perf jevents: Support parsing negative exponents
perf jevents: Term list fix in event parsing
perf jevents: Add threshold expressions to Metric
perf jevents: Move json encoding to its own functions
perf jevents: Drop duplicate pending metrics
perf jevents: Skip optional metrics in metric group list
perf jevents: Build support for generating metrics from python
perf jevents: Add load event json to verify and allow fallbacks

tools/perf/.gitignore | 2 +
tools/perf/Makefile.perf | 17 ++-
tools/perf/pmu-events/Build | 60 ++++++++-
tools/perf/pmu-events/amd_metrics.py | 22 ++++
tools/perf/pmu-events/arm64_metrics.py | 23 ++++
tools/perf/pmu-events/intel_metrics.py | 22 ++++
tools/perf/pmu-events/jevents.py | 6 +-
tools/perf/pmu-events/metric.py | 162 +++++++++++++++++++++----
tools/perf/pmu-events/metric_test.py | 4 +
9 files changed, 282 insertions(+), 36 deletions(-)
create mode 100755 tools/perf/pmu-events/amd_metrics.py
create mode 100755 tools/perf/pmu-events/arm64_metrics.py
create mode 100755 tools/perf/pmu-events/intel_metrics.py

--
2.44.0.278.ge034bb2e1d-goog



2024-03-02 01:00:51

by Ian Rogers

Subject: [PATCH v2 03/12] perf jevents: Add descriptions to metricgroup abstraction

Add a function to recursively generate metric group descriptions.
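
A minimal sketch of the recursion this patch adds, using simplified stand-in classes (the real Metric and MetricGroup carry more state): leaf metrics contribute nothing, while each group contributes its own description, if any, and then recurses into its children.

```python
from typing import Dict, List, Optional, Union

class Metric:
    def ToMetricGroupDescriptions(self, root: bool = True) -> Dict[str, str]:
        return {}  # leaf metrics contribute no group descriptions

class MetricGroup:
    def __init__(self, name: str,
                 metric_list: List[Union[Metric, 'MetricGroup']],
                 description: Optional[str] = None):
        self.name = name
        self.metric_list = metric_list
        self.description = description

    def ToMetricGroupDescriptions(self, root: bool = True) -> Dict[str, str]:
        # A group contributes its own description (if any), then recurses.
        result = {self.name: self.description} if self.description else {}
        for x in self.metric_list:
            result.update(x.ToMetricGroupDescriptions(False))
        return result

# Hypothetical tree: the unnamed-description top level contributes nothing.
tree = MetricGroup("TopLevel", [
    MetricGroup("Cache", [Metric()], "Cache related metrics"),
    MetricGroup("Branch", [Metric()], "Branch related metrics"),
])
print(tree.ToMetricGroupDescriptions())
```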

Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/pmu-events/metric.py | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/tools/perf/pmu-events/metric.py b/tools/perf/pmu-events/metric.py
index 8a718dd4b1fe..1de4fb72c75e 100644
--- a/tools/perf/pmu-events/metric.py
+++ b/tools/perf/pmu-events/metric.py
@@ -475,6 +475,8 @@ class Metric:

return result

+ def ToMetricGroupDescriptions(self, root: bool = True) -> Dict[str, str]:
+ return {}

class _MetricJsonEncoder(json.JSONEncoder):
"""Special handling for Metric objects."""
@@ -493,10 +495,12 @@ class MetricGroup:
which can facilitate arrangements similar to trees.
"""

- def __init__(self, name: str, metric_list: List[Union[Metric,
- 'MetricGroup']]):
+ def __init__(self, name: str,
+ metric_list: List[Union[Metric, 'MetricGroup']],
+ description: Optional[str] = None):
self.name = name
self.metric_list = metric_list
+ self.description = description
for metric in metric_list:
metric.AddToMetricGroup(self)

@@ -516,6 +520,12 @@ class MetricGroup:
def ToPerfJson(self) -> str:
return json.dumps(sorted(self.Flatten()), indent=2, cls=_MetricJsonEncoder)

+ def ToMetricGroupDescriptions(self, root: bool = True) -> Dict[str, str]:
+ result = {self.name: self.description} if self.description else {}
+ for x in self.metric_list:
+ result.update(x.ToMetricGroupDescriptions(False))
+ return result
+
def __str__(self) -> str:
return self.ToPerfJson()

--
2.44.0.278.ge034bb2e1d-goog


2024-03-02 01:01:08

by Ian Rogers

Subject: [PATCH v2 04/12] perf jevents: Allow metric groups not to be named

It can be convenient to have unnamed metric groups for the sake of
organizing other metrics and metric groups. An unspecified name
shouldn't contribute to the MetricGroup json value, so don't record
it.
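
The guard this patch adds can be sketched with simplified stand-ins (names here are illustrative): an empty group name is falsy, so purely organizational groups never show up in a metric's group set.

```python
class Metric:
    def __init__(self):
        self.groups = set()

    def AddToMetricGroup(self, group):
        """Callback used when being added to a MetricGroup."""
        if group.name:  # unnamed (empty string) groups are not recorded
            self.groups.add(group.name)

class Group:
    def __init__(self, name: str):
        self.name = name

m = Metric()
m.AddToMetricGroup(Group(""))      # organizational group, no name
m.AddToMetricGroup(Group("Cache"))
print(m.groups)  # only "Cache" is recorded
```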

Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/pmu-events/metric.py | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/perf/pmu-events/metric.py b/tools/perf/pmu-events/metric.py
index 1de4fb72c75e..847b614d40d5 100644
--- a/tools/perf/pmu-events/metric.py
+++ b/tools/perf/pmu-events/metric.py
@@ -455,7 +455,8 @@ class Metric:

def AddToMetricGroup(self, group):
"""Callback used when being added to a MetricGroup."""
- self.groups.add(group.name)
+ if group.name:
+ self.groups.add(group.name)

def Flatten(self) -> Set['Metric']:
"""Return a leaf metric."""
--
2.44.0.278.ge034bb2e1d-goog


2024-03-02 01:01:21

by Ian Rogers

Subject: [PATCH v2 05/12] perf jevents: Support parsing negative exponents

Support negative exponents when parsing from a json metric string by
making the numbers after the 'e' optional in the 'Event' insertion
fix-up.
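
A self-contained sketch of why the `*` is needed: the json-to-python rewrite wraps identifiers in `Event(r"...")` and accidentally captures the 'e' of scientific-notation constants. For `3e12` that yields `3Event(r"e12")`, but for `3e-12` the '-' terminates the identifier match, yielding `3Event(r"e")-12`, so the digits after the 'e' must be optional when converting back.

```python
import re

def unmangle(py: str) -> str:
    # The fixed pattern: digits after 'e' are now optional ([0-9]*).
    return re.sub(r'([0-9]+)Event\(r"(e[0-9]*)"\)', r'\1\2', py)

assert unmangle('3Event(r"e12")') == '3e12'    # handled by the old pattern too
assert unmangle('3Event(r"e")-12') == '3e-12'  # only handled with [0-9]*
print("ok")
```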

Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/pmu-events/metric.py | 2 +-
tools/perf/pmu-events/metric_test.py | 4 ++++
2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/tools/perf/pmu-events/metric.py b/tools/perf/pmu-events/metric.py
index 847b614d40d5..31eea2f45152 100644
--- a/tools/perf/pmu-events/metric.py
+++ b/tools/perf/pmu-events/metric.py
@@ -573,7 +573,7 @@ def ParsePerfJson(orig: str) -> Expression:
# a double by the Bison parser
py = re.sub(r'0Event\(r"[xX]([0-9a-fA-F]*)"\)', r'Event("0x\1")', py)
# Convert accidentally converted scientific notation constants back
- py = re.sub(r'([0-9]+)Event\(r"(e[0-9]+)"\)', r'\1\2', py)
+ py = re.sub(r'([0-9]+)Event\(r"(e[0-9]*)"\)', r'\1\2', py)
# Convert all the known keywords back from events to just the keyword
keywords = ['if', 'else', 'min', 'max', 'd_ratio', 'source_count', 'has_event', 'strcmp_cpuid_str']
for kw in keywords:
diff --git a/tools/perf/pmu-events/metric_test.py b/tools/perf/pmu-events/metric_test.py
index ee22ff43ddd7..8acfe4652b55 100755
--- a/tools/perf/pmu-events/metric_test.py
+++ b/tools/perf/pmu-events/metric_test.py
@@ -61,6 +61,10 @@ class TestMetricExpressions(unittest.TestCase):
after = before
self.assertEqual(ParsePerfJson(before).ToPerfJson(), after)

+ before = r'a + 3e-12 + b'
+ after = before
+ self.assertEqual(ParsePerfJson(before).ToPerfJson(), after)
+
def test_IfElseTests(self):
# if-else needs rewriting to Select and back.
before = r'Event1 if #smt_on else Event2'
--
2.44.0.278.ge034bb2e1d-goog


2024-03-02 01:01:37

by Ian Rogers

Subject: [PATCH v2 06/12] perf jevents: Term list fix in event parsing

Fix events that get wrongly broken apart at a comma during parsing.
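
The rejoin loop can be sketched standalone (the event string below is illustrative): a term list such as `cpu/event=0x0,umask=0x1/` gets tokenized into two adjacent `Event()` wrappers, and repeatedly merging `Event(r"a"),Event(r"b")` back into `Event(r"a,b")` restores the single event.

```python
import re

def rejoin(py: str) -> str:
    # Iterate until a fixed point, so events split at several commas
    # are merged back together too.
    while True:
        prev = py
        py = re.sub(r'Event\(r"([^"]*)"\),Event\(r"([^"]*)"\)',
                    r'Event(r"\1,\2")', py)
        if py == prev:
            return py

broken = 'Event(r"cpu/event=0x0"),Event(r"umask=0x1/")'
print(rejoin(broken))  # Event(r"cpu/event=0x0,umask=0x1/")
```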

Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/pmu-events/metric.py | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/tools/perf/pmu-events/metric.py b/tools/perf/pmu-events/metric.py
index 31eea2f45152..0f4e67e5cfea 100644
--- a/tools/perf/pmu-events/metric.py
+++ b/tools/perf/pmu-events/metric.py
@@ -568,6 +568,12 @@ def ParsePerfJson(orig: str) -> Expression:
r'Event(r"\1")', py)
# If it started with a # it should have been a literal, rather than an event name
py = re.sub(r'#Event\(r"([^"]*)"\)', r'Literal("#\1")', py)
+ # Fix events wrongly broken at a ','
+ while True:
+ prev_py = py
+ py = re.sub(r'Event\(r"([^"]*)"\),Event\(r"([^"]*)"\)', r'Event(r"\1,\2")', py)
+ if py == prev_py:
+ break
# Convert accidentally converted hex constants ("0Event(r"xDEADBEEF)"") back to a constant,
# but keep it wrapped in Event(), otherwise Python drops the 0x prefix and it gets interpreted as
# a double by the Bison parser
@@ -586,7 +592,6 @@ def ParsePerfJson(orig: str) -> Expression:
parsed = ast.fix_missing_locations(parsed)
return _Constify(eval(compile(parsed, orig, 'eval')))

-
def RewriteMetricsInTermsOfOthers(metrics: List[Tuple[str, str, Expression]]
)-> Dict[Tuple[str, str], Expression]:
"""Shorten metrics by rewriting in terms of others.
--
2.44.0.278.ge034bb2e1d-goog


2024-03-02 01:01:46

by Ian Rogers

Subject: [PATCH v2 07/12] perf jevents: Add threshold expressions to Metric

Allow threshold expressions for metrics to be generated.
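
A reduced sketch of the emission logic (names simplified from the real `ToPerfJson`): the `MetricThreshold` key only appears in the generated json when a threshold expression was supplied.

```python
from typing import Optional

def metric_to_json(name: str, expr: str,
                   threshold: Optional[str] = None) -> dict:
    result = {"MetricName": name, "MetricExpr": expr}
    if threshold:
        # Optional threshold expression, e.g. "metric > 0.2".
        result["MetricThreshold"] = threshold
    return result

assert "MetricThreshold" not in metric_to_json("cmr", "misses / accesses")
print(metric_to_json("cmr", "misses / accesses", "cmr > 0.2"))
```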

Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/pmu-events/metric.py | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/tools/perf/pmu-events/metric.py b/tools/perf/pmu-events/metric.py
index 0f4e67e5cfea..e81fed2e29b5 100644
--- a/tools/perf/pmu-events/metric.py
+++ b/tools/perf/pmu-events/metric.py
@@ -430,13 +430,15 @@ class Metric:
expr: Expression
scale_unit: str
constraint: MetricConstraint
+ threshold: Optional[Expression]

def __init__(self,
name: str,
description: str,
expr: Expression,
scale_unit: str,
- constraint: MetricConstraint = MetricConstraint.GROUPED_EVENTS):
+ constraint: MetricConstraint = MetricConstraint.GROUPED_EVENTS,
+ threshold: Optional[Expression] = None):
self.name = name
self.description = description
self.expr = expr.Simplify()
@@ -447,6 +449,7 @@ class Metric:
else:
self.scale_unit = f'1{scale_unit}'
self.constraint = constraint
+ self.threshold = threshold
self.groups = set()

def __lt__(self, other):
@@ -473,6 +476,8 @@ class Metric:
}
if self.constraint != MetricConstraint.GROUPED_EVENTS:
result['MetricConstraint'] = self.constraint.name
+ if self.threshold:
+ result['MetricThreshold'] = self.threshold.ToPerfJson()

return result

--
2.44.0.278.ge034bb2e1d-goog


2024-03-02 01:02:04

by Ian Rogers

Subject: [PATCH v2 08/12] perf jevents: Move json encoding to its own functions

Have dedicated encode functions rather than having encoding embedded
in MetricGroup. This provides some uniformity in the Metric ToXXX
routines.

Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/pmu-events/metric.py | 34 +++++++++++++++++++++------------
1 file changed, 22 insertions(+), 12 deletions(-)

diff --git a/tools/perf/pmu-events/metric.py b/tools/perf/pmu-events/metric.py
index e81fed2e29b5..b39189182608 100644
--- a/tools/perf/pmu-events/metric.py
+++ b/tools/perf/pmu-events/metric.py
@@ -484,15 +484,6 @@ class Metric:
def ToMetricGroupDescriptions(self, root: bool = True) -> Dict[str, str]:
return {}

-class _MetricJsonEncoder(json.JSONEncoder):
- """Special handling for Metric objects."""
-
- def default(self, o):
- if isinstance(o, Metric):
- return o.ToPerfJson()
- return json.JSONEncoder.default(self, o)
-
-
class MetricGroup:
"""A group of metrics.

@@ -523,8 +514,11 @@ class MetricGroup:

return result

- def ToPerfJson(self) -> str:
- return json.dumps(sorted(self.Flatten()), indent=2, cls=_MetricJsonEncoder)
+ def ToPerfJson(self) -> List[Dict[str, str]]:
+ result = []
+ for x in sorted(self.Flatten()):
+ result.append(x.ToPerfJson())
+ return result

def ToMetricGroupDescriptions(self, root: bool = True) -> Dict[str, str]:
result = {self.name: self.description} if self.description else {}
@@ -533,7 +527,23 @@ class MetricGroup:
return result

def __str__(self) -> str:
- return self.ToPerfJson()
+ return str(self.ToPerfJson())
+
+
+def JsonEncodeMetric(x: MetricGroup):
+ class MetricJsonEncoder(json.JSONEncoder):
+ """Special handling for Metric objects."""
+
+ def default(self, o):
+ if isinstance(o, Metric) or isinstance(o, MetricGroup):
+ return o.ToPerfJson()
+ return json.JSONEncoder.default(self, o)
+
+ return json.dumps(x, indent=2, cls=MetricJsonEncoder)
+
+
+def JsonEncodeMetricGroupDescriptions(x: MetricGroup):
+ return json.dumps(x.ToMetricGroupDescriptions(), indent=2)


class _RewriteIfExpToSelect(ast.NodeTransformer):
--
2.44.0.278.ge034bb2e1d-goog


2024-03-02 01:02:31

by Ian Rogers

Subject: [PATCH v2 10/12] perf jevents: Skip optional metrics in metric group list

For metric groups, skip metrics in the list that are None. This allows
functions to optionally return a metric, yielding None when one isn't
applicable.
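
A minimal sketch of the pattern this enables (the model and metric names are hypothetical): a generator function returns None for models where a metric doesn't apply, and the group constructor simply drops those entries.

```python
from typing import Optional

class Group:
    def __init__(self, name: str, metric_list):
        # Skip None entries so callers can pass optional metrics directly.
        self.name = name
        self.metric_list = [m for m in metric_list if m]

def maybe_metric(model: str) -> Optional[str]:
    # Hypothetical: only some models support the underlying event.
    return "l2_miss_rate" if model == "amdzen4" else None

g = Group("Cache", [maybe_metric("amdzen4"), maybe_metric("amdzen1")])
print(g.metric_list)  # only the applicable metric survives
```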

Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/pmu-events/metric.py | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/tools/perf/pmu-events/metric.py b/tools/perf/pmu-events/metric.py
index b39189182608..dd8fd06940e6 100644
--- a/tools/perf/pmu-events/metric.py
+++ b/tools/perf/pmu-events/metric.py
@@ -493,13 +493,15 @@ class MetricGroup:
"""

def __init__(self, name: str,
- metric_list: List[Union[Metric, 'MetricGroup']],
+ metric_list: List[Union[Optional[Metric], Optional['MetricGroup']]],
description: Optional[str] = None):
self.name = name
- self.metric_list = metric_list
+ self.metric_list = []
self.description = description
for metric in metric_list:
- metric.AddToMetricGroup(self)
+ if metric:
+ self.metric_list.append(metric)
+ metric.AddToMetricGroup(self)

def AddToMetricGroup(self, group):
"""Callback used when a MetricGroup is added into another."""
--
2.44.0.278.ge034bb2e1d-goog


2024-03-02 01:02:45

by Ian Rogers

Subject: [PATCH v2 11/12] perf jevents: Build support for generating metrics from python

Generate extra-metrics.json and extra-metricgroups.json from python
architecture specific scripts. The metrics themselves will be added in
later patches.

If a build takes place in tools/perf/ then extra-metrics.json and
extra-metricgroups.json are generated in that directory and so added
to .gitignore. If there is an OUTPUT directory then the
tools/perf/pmu-events/arch files are copied to it so the generated
extra-metrics.json and extra-metricgroups.json can be added/generated
there.

Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/.gitignore | 2 +
tools/perf/Makefile.perf | 17 ++++++--
tools/perf/pmu-events/Build | 60 ++++++++++++++++++++++++--
tools/perf/pmu-events/amd_metrics.py | 17 ++++++++
tools/perf/pmu-events/arm64_metrics.py | 18 ++++++++
tools/perf/pmu-events/intel_metrics.py | 17 ++++++++
6 files changed, 124 insertions(+), 7 deletions(-)
create mode 100755 tools/perf/pmu-events/amd_metrics.py
create mode 100755 tools/perf/pmu-events/arm64_metrics.py
create mode 100755 tools/perf/pmu-events/intel_metrics.py

diff --git a/tools/perf/.gitignore b/tools/perf/.gitignore
index f5b81d439387..c9a8da5bfc56 100644
--- a/tools/perf/.gitignore
+++ b/tools/perf/.gitignore
@@ -39,6 +39,8 @@ trace/beauty/generated/
pmu-events/pmu-events.c
pmu-events/jevents
pmu-events/metric_test.log
+pmu-events/arch/**/extra-metrics.json
+pmu-events/arch/**/extra-metricgroups.json
tests/shell/*.shellcheck_log
tests/shell/coresight/*.shellcheck_log
tests/shell/lib/*.shellcheck_log
diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
index 04d89d2ed209..4fbb0a173476 100644
--- a/tools/perf/Makefile.perf
+++ b/tools/perf/Makefile.perf
@@ -1177,7 +1177,20 @@ endif # CONFIG_PERF_BPF_SKEL
bpf-skel-clean:
$(call QUIET_CLEAN, bpf-skel) $(RM) -r $(SKEL_TMP_OUT) $(SKELETONS) $(SKEL_OUT)/vmlinux.h

-clean:: $(LIBAPI)-clean $(LIBBPF)-clean $(LIBSUBCMD)-clean $(LIBSYMBOL)-clean $(LIBPERF)-clean arm64-sysreg-defs-clean fixdep-clean python-clean bpf-skel-clean tests-coresight-targets-clean
+pmu-events-clean:
+ifeq ($(OUTPUT),)
+ $(call QUIET_CLEAN, pmu-events) $(RM) \
+ pmu-events/pmu-events.c \
+ pmu-events/metric_test.log
+ $(Q)find pmu-events/arch -name 'extra-metrics.json' -delete -o \
+ -name 'extra-metricgroups.json' -delete
+else
+ $(call QUIET_CLEAN, pmu-events) $(RM) -r $(OUTPUT)pmu-events/arch \
+ $(OUTPUT)pmu-events/pmu-events.c \
+ $(OUTPUT)pmu-events/metric_test.log
+endif
+
+clean:: $(LIBAPI)-clean $(LIBBPF)-clean $(LIBSUBCMD)-clean $(LIBSYMBOL)-clean $(LIBPERF)-clean arm64-sysreg-defs-clean fixdep-clean python-clean bpf-skel-clean tests-coresight-targets-clean pmu-events-clean
$(call QUIET_CLEAN, core-objs) $(RM) $(LIBPERF_A) $(OUTPUT)perf-archive $(OUTPUT)perf-iostat $(LANG_BINDINGS)
$(Q)find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '\.*.cmd' -delete -o -name '\.*.d' -delete -o -name '*.shellcheck_log' -delete
$(Q)$(RM) $(OUTPUT).config-detected
@@ -1185,8 +1198,6 @@ clean:: $(LIBAPI)-clean $(LIBBPF)-clean $(LIBSUBCMD)-clean $(LIBSYMBOL)-clean $(
$(call QUIET_CLEAN, core-gen) $(RM) *.spec *.pyc *.pyo */*.pyc */*.pyo $(OUTPUT)common-cmds.h TAGS tags cscope* $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)FEATURE-DUMP $(OUTPUT)util/*-bison* $(OUTPUT)util/*-flex* \
$(OUTPUT)util/intel-pt-decoder/inat-tables.c \
$(OUTPUT)tests/llvm-src-{base,kbuild,prologue,relocation}.c \
- $(OUTPUT)pmu-events/pmu-events.c \
- $(OUTPUT)pmu-events/metric_test.log \
$(OUTPUT)$(fadvise_advice_array) \
$(OUTPUT)$(fsconfig_arrays) \
$(OUTPUT)$(fsmount_arrays) \
diff --git a/tools/perf/pmu-events/Build b/tools/perf/pmu-events/Build
index 1d18bb89402e..9af15e3498f1 100644
--- a/tools/perf/pmu-events/Build
+++ b/tools/perf/pmu-events/Build
@@ -1,7 +1,6 @@
pmu-events-y += pmu-events.o
JDIR = pmu-events/arch/$(SRCARCH)
-JSON = $(shell [ -d $(JDIR) ] && \
- find $(JDIR) -name '*.json' -o -name 'mapfile.csv')
+JSON = $(shell find pmu-events/arch -name *.json -o -name *.csv)
JDIR_TEST = pmu-events/arch/test
JSON_TEST = $(shell [ -d $(JDIR_TEST) ] && \
find $(JDIR_TEST) -name '*.json')
@@ -27,13 +26,66 @@ $(PMU_EVENTS_C): $(EMPTY_PMU_EVENTS_C)
$(call rule_mkdir)
$(Q)$(call echo-cmd,gen)cp $< $@
else
+# Extract the model from a extra-metrics.json or extra-metricgroups.json path
+model_name = $(shell echo $(1)|sed -e 's@.\+/\(.*\)/extra-metric.*\.json@\1@')
+vendor_name = $(shell echo $(1)|sed -e 's@.\+/\(.*\)/[^/]*/extra-metric.*\.json@\1@')
+
+# Copy checked-in json for generation.
+$(OUTPUT)pmu-events/arch/%: pmu-events/arch/%
+ $(call rule_mkdir)
+ $(Q)$(call echo-cmd,gen)cp $< $@
+
+# Generate AMD Json
+ZENS = $(shell ls -d pmu-events/arch/x86/amdzen*)
+ZEN_METRICS = $(foreach x,$(ZENS),$(OUTPUT)$(x)/extra-metrics.json)
+ZEN_METRICGROUPS = $(foreach x,$(ZENS),$(OUTPUT)$(x)/extra-metricgroups.json)
+
+$(ZEN_METRICS): pmu-events/amd_metrics.py
+ $(call rule_mkdir)
+ $(Q)$(call echo-cmd,gen)$(PYTHON) $< $(call model_name,$@) > $@
+
+$(ZEN_METRICGROUPS): pmu-events/amd_metrics.py
+ $(call rule_mkdir)
+ $(Q)$(call echo-cmd,gen)$(PYTHON) $< -metricgroups $(call model_name,$@) > $@
+
+# Generate ARM Json
+ARMS = $(shell ls -d pmu-events/arch/arm64/arm/*)
+ARM_METRICS = $(foreach x,$(ARMS),$(OUTPUT)$(x)/extra-metrics.json)
+ARM_METRICGROUPS = $(foreach x,$(ARMS),$(OUTPUT)$(x)/extra-metricgroups.json)
+
+$(ARM_METRICS): pmu-events/arm64_metrics.py
+ $(call rule_mkdir)
+ $(Q)$(call echo-cmd,gen)$(PYTHON) $< $(call vendor_name,$@) $(call model_name,$@) > $@
+
+$(ARM_METRICGROUPS): pmu-events/arm64_metrics.py
+ $(call rule_mkdir)
+ $(Q)$(call echo-cmd,gen)$(PYTHON) $< -metricgroups $(call vendor_name,$@) $(call model_name,$@) > $@
+
+# Generate Intel Json
+INTELS = $(shell ls -d pmu-events/arch/x86/*|grep -v amdzen|grep -v mapfile.csv)
+INTEL_METRICS = $(foreach x,$(INTELS),$(OUTPUT)$(x)/extra-metrics.json)
+INTEL_METRICGROUPS = $(foreach x,$(INTELS),$(OUTPUT)$(x)/extra-metricgroups.json)
+
+$(INTEL_METRICS): pmu-events/intel_metrics.py
+ $(call rule_mkdir)
+ $(Q)$(call echo-cmd,gen)$(PYTHON) $< $(call model_name,$@) > $@
+
+$(INTEL_METRICGROUPS): pmu-events/intel_metrics.py
+ $(call rule_mkdir)
+ $(Q)$(call echo-cmd,gen)$(PYTHON) $< -metricgroups $(call model_name,$@) > $@
+
+GEN_JSON = $(patsubst %,$(OUTPUT)%,$(JSON)) \
+ $(ZEN_METRICS) $(ZEN_METRICGROUPS) \
+ $(ARM_METRICS) $(ARM_METRICGROUPS) \
+ $(INTEL_METRICS) $(INTEL_METRICGROUPS)
+
$(METRIC_TEST_LOG): $(METRIC_TEST_PY) $(METRIC_PY)
$(call rule_mkdir)
$(Q)$(call echo-cmd,test)$(PYTHON) $< 2> $@ || (cat $@ && false)

-$(PMU_EVENTS_C): $(JSON) $(JSON_TEST) $(JEVENTS_PY) $(METRIC_PY) $(METRIC_TEST_LOG)
+$(PMU_EVENTS_C): $(GEN_JSON) $(JSON_TEST) $(JEVENTS_PY) $(METRIC_PY) $(METRIC_TEST_LOG)
$(call rule_mkdir)
- $(Q)$(call echo-cmd,gen)$(PYTHON) $(JEVENTS_PY) $(JEVENTS_ARCH) $(JEVENTS_MODEL) pmu-events/arch $@
+ $(Q)$(call echo-cmd,gen)$(PYTHON) $(JEVENTS_PY) $(JEVENTS_ARCH) $(JEVENTS_MODEL) $(OUTPUT)pmu-events/arch $@
endif

# pmu-events.c file is generated in the OUTPUT directory so it needs a
diff --git a/tools/perf/pmu-events/amd_metrics.py b/tools/perf/pmu-events/amd_metrics.py
new file mode 100755
index 000000000000..cb850ab1ed13
--- /dev/null
+++ b/tools/perf/pmu-events/amd_metrics.py
@@ -0,0 +1,17 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
+from metric import (JsonEncodeMetric, JsonEncodeMetricGroupDescriptions, MetricGroup)
+import argparse
+import json
+
+parser = argparse.ArgumentParser(description="AMD perf json generator")
+parser.add_argument("-metricgroups", help="Generate metricgroups data", action='store_true')
+parser.add_argument("model", help="e.g. amdzen[123]")
+args = parser.parse_args()
+
+all_metrics = MetricGroup("",[])
+
+if args.metricgroups:
+ print(JsonEncodeMetricGroupDescriptions(all_metrics))
+else:
+ print(JsonEncodeMetric(all_metrics))
diff --git a/tools/perf/pmu-events/arm64_metrics.py b/tools/perf/pmu-events/arm64_metrics.py
new file mode 100755
index 000000000000..a54fa8aae2fa
--- /dev/null
+++ b/tools/perf/pmu-events/arm64_metrics.py
@@ -0,0 +1,18 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
+from metric import (JsonEncodeMetric, JsonEncodeMetricGroupDescriptions, MetricGroup)
+import argparse
+import json
+
+parser = argparse.ArgumentParser(description="ARM perf json generator")
+parser.add_argument("-metricgroups", help="Generate metricgroups data", action='store_true')
+parser.add_argument("vendor", help="e.g. arm")
+parser.add_argument("model", help="e.g. neoverse-n1")
+args = parser.parse_args()
+
+all_metrics = MetricGroup("",[])
+
+if args.metricgroups:
+ print(JsonEncodeMetricGroupDescriptions(all_metrics))
+else:
+ print(JsonEncodeMetric(all_metrics))
diff --git a/tools/perf/pmu-events/intel_metrics.py b/tools/perf/pmu-events/intel_metrics.py
new file mode 100755
index 000000000000..8b67b9613ab5
--- /dev/null
+++ b/tools/perf/pmu-events/intel_metrics.py
@@ -0,0 +1,17 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
+from metric import (JsonEncodeMetric, JsonEncodeMetricGroupDescriptions, MetricGroup)
+import argparse
+import json
+
+parser = argparse.ArgumentParser(description="Intel perf json generator")
+parser.add_argument("-metricgroups", help="Generate metricgroups data", action='store_true')
+parser.add_argument("model", help="e.g. skylakex")
+args = parser.parse_args()
+
+all_metrics = MetricGroup("",[])
+
+if args.metricgroups:
+ print(JsonEncodeMetricGroupDescriptions(all_metrics))
+else:
+ print(JsonEncodeMetric(all_metrics))
--
2.44.0.278.ge034bb2e1d-goog


2024-03-02 01:02:54

by Ian Rogers

Subject: [PATCH v2 12/12] perf jevents: Add load event json to verify and allow fallbacks

Add a LoadEvents function that loads all event json files in a
directory. In the Event constructor ensure all events are defined in
the event json except for legacy events like "cycles". If the initial
event isn't found then legacy_event1 is used, and if that isn't found
legacy_event2 is used. This allows a single Event to have multiple
event names as models will often rename the same event over time. If
the event doesn't exist an exception is raised.

So that references to metrics can be added, add the MetricRef
class. This doesn't validate as an event name and so provides an
escape hatch for metrics to refer to each other.
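
The fallback behavior of the varargs Event constructor can be sketched in isolation (the event names and loaded set below are hypothetical): each candidate name is tried in order and the first one present in the loaded event set wins.

```python
# Hypothetical stand-in for the set populated by LoadEvents().
all_events = {"ls_dmnd_fills_from_sys.mem_io_local"}

class Event:
    def __init__(self, *args: str):
        # Keep the first candidate name that exists in the event json.
        for name in args:
            if name in all_events:
                self.name = name
                return
        raise Exception(f"No event {' or '.join(args)}")

# A model renamed the event; the fallback picks whichever name exists.
e = Event("ls_dmnd_fills_from_sys.dram_io_near",
          "ls_dmnd_fills_from_sys.mem_io_local")
print(e.name)  # ls_dmnd_fills_from_sys.mem_io_local
```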

Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/pmu-events/amd_metrics.py | 7 ++-
tools/perf/pmu-events/arm64_metrics.py | 7 ++-
tools/perf/pmu-events/intel_metrics.py | 7 ++-
tools/perf/pmu-events/metric.py | 77 +++++++++++++++++++++++++-
4 files changed, 92 insertions(+), 6 deletions(-)

diff --git a/tools/perf/pmu-events/amd_metrics.py b/tools/perf/pmu-events/amd_metrics.py
index cb850ab1ed13..227f9b98c016 100755
--- a/tools/perf/pmu-events/amd_metrics.py
+++ b/tools/perf/pmu-events/amd_metrics.py
@@ -1,14 +1,19 @@
#!/usr/bin/env python3
# SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
-from metric import (JsonEncodeMetric, JsonEncodeMetricGroupDescriptions, MetricGroup)
+from metric import (JsonEncodeMetric, JsonEncodeMetricGroupDescriptions, LoadEvents,
+ MetricGroup)
import argparse
import json
+import os

parser = argparse.ArgumentParser(description="AMD perf json generator")
parser.add_argument("-metricgroups", help="Generate metricgroups data", action='store_true')
parser.add_argument("model", help="e.g. amdzen[123]")
args = parser.parse_args()

+directory = f"{os.path.dirname(os.path.realpath(__file__))}/arch/x86/{args.model}/"
+LoadEvents(directory)
+
all_metrics = MetricGroup("",[])

if args.metricgroups:
diff --git a/tools/perf/pmu-events/arm64_metrics.py b/tools/perf/pmu-events/arm64_metrics.py
index a54fa8aae2fa..7cd0ebc0bd80 100755
--- a/tools/perf/pmu-events/arm64_metrics.py
+++ b/tools/perf/pmu-events/arm64_metrics.py
@@ -1,8 +1,10 @@
#!/usr/bin/env python3
# SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
-from metric import (JsonEncodeMetric, JsonEncodeMetricGroupDescriptions, MetricGroup)
+from metric import (JsonEncodeMetric, JsonEncodeMetricGroupDescriptions, LoadEvents,
+ MetricGroup)
import argparse
import json
+import os

parser = argparse.ArgumentParser(description="ARM perf json generator")
parser.add_argument("-metricgroups", help="Generate metricgroups data", action='store_true')
@@ -10,6 +12,9 @@ parser.add_argument("vendor", help="e.g. arm")
parser.add_argument("model", help="e.g. neoverse-n1")
args = parser.parse_args()

+directory = f"{os.path.dirname(os.path.realpath(__file__))}/arch/arm64/{args.vendor}/{args.model}/"
+LoadEvents(directory)
+
all_metrics = MetricGroup("",[])

if args.metricgroups:
diff --git a/tools/perf/pmu-events/intel_metrics.py b/tools/perf/pmu-events/intel_metrics.py
index 8b67b9613ab5..4fbb31c9eccd 100755
--- a/tools/perf/pmu-events/intel_metrics.py
+++ b/tools/perf/pmu-events/intel_metrics.py
@@ -1,14 +1,19 @@
#!/usr/bin/env python3
# SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
-from metric import (JsonEncodeMetric, JsonEncodeMetricGroupDescriptions, MetricGroup)
+from metric import (JsonEncodeMetric, JsonEncodeMetricGroupDescriptions, LoadEvents,
+ MetricGroup)
import argparse
import json
+import os

parser = argparse.ArgumentParser(description="Intel perf json generator")
parser.add_argument("-metricgroups", help="Generate metricgroups data", action='store_true')
parser.add_argument("model", help="e.g. skylakex")
args = parser.parse_args()

+directory = f"{os.path.dirname(os.path.realpath(__file__))}/arch/x86/{args.model}/"
+LoadEvents(directory)
+
all_metrics = MetricGroup("",[])

if args.metricgroups:
diff --git a/tools/perf/pmu-events/metric.py b/tools/perf/pmu-events/metric.py
index dd8fd06940e6..03312cd6d491 100644
--- a/tools/perf/pmu-events/metric.py
+++ b/tools/perf/pmu-events/metric.py
@@ -3,10 +3,50 @@
import ast
import decimal
import json
+import os
import re
from enum import Enum
from typing import Dict, List, Optional, Set, Tuple, Union

+all_events = set()
+
+def LoadEvents(directory: str) -> None:
+ """Populate a global set of all known events for the purpose of validating Event names"""
+ global all_events
+ all_events = {
+ "context\-switches",
+ "cycles",
+ "duration_time",
+ "instructions",
+ "l2_itlb_misses",
+ }
+ for file in os.listdir(os.fsencode(directory)):
+ filename = os.fsdecode(file)
+ if filename.endswith(".json"):
+ for x in json.load(open(f"{directory}/{filename}")):
+ if "EventName" in x:
+ all_events.add(x["EventName"])
+ elif "ArchStdEvent" in x:
+ all_events.add(x["ArchStdEvent"])
+
+
+def CheckEvent(name: str) -> bool:
+ """Check the event name exists in the set of all loaded events"""
+ global all_events
+ if len(all_events) == 0:
+ # No events loaded so assume any event is good.
+ return True
+
+ if ':' in name:
+ # Remove trailing modifier.
+ name = name[:name.find(':')]
+ elif '/' in name:
+ # Name could begin with a PMU or an event, for now assume it is good.
+ return True
+
+ return name in all_events
+
+
class MetricConstraint(Enum):
GROUPED_EVENTS = 0
NO_GROUP_EVENTS = 1
@@ -317,9 +357,18 @@ def _FixEscapes(s: str) -> str:
class Event(Expression):
"""An event in an expression."""

- def __init__(self, name: str, legacy_name: str = ''):
- self.name = _FixEscapes(name)
- self.legacy_name = _FixEscapes(legacy_name)
+ def __init__(self, *args: str):
+ error = ""
+ for name in args:
+ if CheckEvent(name):
+ self.name = _FixEscapes(name)
+ return
+ if error:
+ error += " or " + name
+ else:
+ error = name
+ global all_events
+ raise Exception(f"No event {error} in:\n{all_events}")

def ToPerfJson(self):
result = re.sub('/', '@', self.name)
@@ -338,6 +387,28 @@ class Event(Expression):
return self


+class MetricRef(Expression):
+ """A metric reference in an expression."""
+
+ def __init__(self, name: str):
+ self.name = _FixEscapes(name)
+
+ def ToPerfJson(self):
+ return self.name
+
+ def ToPython(self):
+ return f'MetricRef(r"{self.name}")'
+
+ def Simplify(self) -> Expression:
+ return self
+
+ def Equals(self, other: Expression) -> bool:
+ return isinstance(other, MetricRef) and self.name == other.name
+
+ def Substitute(self, name: str, expression: Expression) -> Expression:
+ return self
+
+
class Constant(Expression):
"""A constant within the expression tree."""

--
2.44.0.278.ge034bb2e1d-goog


2024-03-02 01:03:17

by Ian Rogers

Subject: [PATCH v2 01/12] perf jevents: Allow multiple metricgroups.json files

Allow multiple metricgroups.json files by handling any file ending
with metricgroups.json as a metricgroups file.

Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/pmu-events/jevents.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/perf/pmu-events/jevents.py b/tools/perf/pmu-events/jevents.py
index 2c7e5d61ce92..65ab03ce5064 100755
--- a/tools/perf/pmu-events/jevents.py
+++ b/tools/perf/pmu-events/jevents.py
@@ -603,7 +603,7 @@ def preprocess_one_file(parents: Sequence[str], item: os.DirEntry) -> None:
if not item.is_file() or not item.name.endswith('.json'):
return

- if item.name == 'metricgroups.json':
+ if item.name.endswith('metricgroups.json'):
metricgroup_descriptions = json.load(open(item.path))
for mgroup in metricgroup_descriptions:
assert len(mgroup) > 1, parents
@@ -653,7 +653,7 @@ def process_one_file(parents: Sequence[str], item: os.DirEntry) -> None:

# Ignore other directories. If the file name does not have a .json
# extension, ignore it. It could be a readme.txt for instance.
- if not item.is_file() or not item.name.endswith('.json') or item.name == 'metricgroups.json':
+ if not item.is_file() or not item.name.endswith('.json') or item.name.endswith('metricgroups.json'):
return

add_events_table_entries(item, get_topic(item.name))
--
2.44.0.278.ge034bb2e1d-goog


2024-03-02 01:03:25

by Ian Rogers

Subject: [PATCH v2 02/12] perf jevents: Update metric constraint support

Previously metric constraints were binary: either no constraint, or
don't group when the NMI watchdog is present. Update to match the
definitions in 'enum metric_event_groups' in pmu-events.h.

Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/pmu-events/metric.py | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/tools/perf/pmu-events/metric.py b/tools/perf/pmu-events/metric.py
index 92acd89ed97a..8a718dd4b1fe 100644
--- a/tools/perf/pmu-events/metric.py
+++ b/tools/perf/pmu-events/metric.py
@@ -4,8 +4,14 @@ import ast
import decimal
import json
import re
+from enum import Enum
from typing import Dict, List, Optional, Set, Tuple, Union

+class MetricConstraint(Enum):
+ GROUPED_EVENTS = 0
+ NO_GROUP_EVENTS = 1
+ NO_GROUP_EVENTS_NMI = 2
+ NO_GROUP_EVENTS_SMT = 3

class Expression:
"""Abstract base class of elements in a metric expression."""
@@ -423,14 +429,14 @@ class Metric:
groups: Set[str]
expr: Expression
scale_unit: str
- constraint: bool
+ constraint: MetricConstraint

def __init__(self,
name: str,
description: str,
expr: Expression,
scale_unit: str,
- constraint: bool = False):
+ constraint: MetricConstraint = MetricConstraint.GROUPED_EVENTS):
self.name = name
self.description = description
self.expr = expr.Simplify()
@@ -464,8 +470,8 @@ class Metric:
'MetricExpr': self.expr.ToPerfJson(),
'ScaleUnit': self.scale_unit
}
- if self.constraint:
- result['MetricConstraint'] = 'NO_NMI_WATCHDOG'
+ if self.constraint != MetricConstraint.GROUPED_EVENTS:
+ result['MetricConstraint'] = self.constraint.name

return result
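A minimal standalone sketch of the serialization behavior in the hunk above: the default GROUPED_EVENTS constraint is left implicit in the emitted json, while any other value is written out under its enum name (the MetricName value and the other omitted fields here are placeholders):

```python
from enum import Enum

class MetricConstraint(Enum):
    GROUPED_EVENTS = 0
    NO_GROUP_EVENTS = 1
    NO_GROUP_EVENTS_NMI = 2
    NO_GROUP_EVENTS_SMT = 3

def constraint_json(constraint: MetricConstraint) -> dict:
    # Mirror of the ToPerfJson() change: the default constraint is
    # implicit, anything else is emitted by its enum member name.
    result = {'MetricName': 'example_metric'}  # other fields elided
    if constraint != MetricConstraint.GROUPED_EVENTS:
        result['MetricConstraint'] = constraint.name
    return result

print(constraint_json(MetricConstraint.GROUPED_EVENTS))
# {'MetricName': 'example_metric'}
print(constraint_json(MetricConstraint.NO_GROUP_EVENTS_NMI))
# {'MetricName': 'example_metric', 'MetricConstraint': 'NO_GROUP_EVENTS_NMI'}
```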

--
2.44.0.278.ge034bb2e1d-goog


2024-03-02 01:04:35

by Ian Rogers

Subject: [PATCH v2 09/12] perf jevents: Drop duplicate pending metrics

Don't add a pending metric if one with the same metric name has
already been added.

Signed-off-by: Ian Rogers <[email protected]>
---
tools/perf/pmu-events/jevents.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/perf/pmu-events/jevents.py b/tools/perf/pmu-events/jevents.py
index 65ab03ce5064..55205a260a16 100755
--- a/tools/perf/pmu-events/jevents.py
+++ b/tools/perf/pmu-events/jevents.py
@@ -468,7 +468,7 @@ def add_events_table_entries(item: os.DirEntry, topic: str) -> None:
for e in read_json_events(item.path, topic):
if e.name:
_pending_events.append(e)
- if e.metric_name:
+ if e.metric_name and not any(e.metric_name == x.metric_name for x in _pending_metrics):
_pending_metrics.append(e)


--
2.44.0.278.ge034bb2e1d-goog


2024-03-06 07:21:33

by Namhyung Kim

Subject: Re: [PATCH v2 05/12] perf jevents: Support parsing negative exponents

On Fri, Mar 1, 2024 at 5:00 PM Ian Rogers <[email protected]> wrote:
>
> Support negative exponents when parsing from a json metric string by
> making the numbers after the 'e' optional in the 'Event' insertion fix
> up.
>
> Signed-off-by: Ian Rogers <[email protected]>
> ---
> tools/perf/pmu-events/metric.py | 2 +-
> tools/perf/pmu-events/metric_test.py | 4 ++++
> 2 files changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/tools/perf/pmu-events/metric.py b/tools/perf/pmu-events/metric.py
> index 847b614d40d5..31eea2f45152 100644
> --- a/tools/perf/pmu-events/metric.py
> +++ b/tools/perf/pmu-events/metric.py
> @@ -573,7 +573,7 @@ def ParsePerfJson(orig: str) -> Expression:
> # a double by the Bison parser
> py = re.sub(r'0Event\(r"[xX]([0-9a-fA-F]*)"\)', r'Event("0x\1")', py)
> # Convert accidentally converted scientific notation constants back
> - py = re.sub(r'([0-9]+)Event\(r"(e[0-9]+)"\)', r'\1\2', py)
> + py = re.sub(r'([0-9]+)Event\(r"(e[0-9]*)"\)', r'\1\2', py)

I don't understand how it can handle negative numbers.
Why isn't it like Event\(r"(e-?[0-9]+)"\) ?

Thanks,
Namhyung


> # Convert all the known keywords back from events to just the keyword
> keywords = ['if', 'else', 'min', 'max', 'd_ratio', 'source_count', 'has_event', 'strcmp_cpuid_str']
> for kw in keywords:
> diff --git a/tools/perf/pmu-events/metric_test.py b/tools/perf/pmu-events/metric_test.py
> index ee22ff43ddd7..8acfe4652b55 100755
> --- a/tools/perf/pmu-events/metric_test.py
> +++ b/tools/perf/pmu-events/metric_test.py
> @@ -61,6 +61,10 @@ class TestMetricExpressions(unittest.TestCase):
> after = before
> self.assertEqual(ParsePerfJson(before).ToPerfJson(), after)
>
> + before = r'a + 3e-12 + b'
> + after = before
> + self.assertEqual(ParsePerfJson(before).ToPerfJson(), after)
> +
> def test_IfElseTests(self):
> # if-else needs rewriting to Select and back.
> before = r'Event1 if #smt_on else Event2'
> --
> 2.44.0.278.ge034bb2e1d-goog
>

2024-03-14 05:35:55

by Ian Rogers

Subject: Re: [PATCH v2 05/12] perf jevents: Support parsing negative exponents

On Tue, Mar 5, 2024 at 11:20 PM Namhyung Kim <[email protected]> wrote:
>
> On Fri, Mar 1, 2024 at 5:00 PM Ian Rogers <[email protected]> wrote:
> >
> > Support negative exponents when parsing from a json metric string by
> > making the numbers after the 'e' optional in the 'Event' insertion fix
> > up.
> >
> > Signed-off-by: Ian Rogers <[email protected]>
> > ---
> > tools/perf/pmu-events/metric.py | 2 +-
> > tools/perf/pmu-events/metric_test.py | 4 ++++
> > 2 files changed, 5 insertions(+), 1 deletion(-)
> >
> > diff --git a/tools/perf/pmu-events/metric.py b/tools/perf/pmu-events/metric.py
> > index 847b614d40d5..31eea2f45152 100644
> > --- a/tools/perf/pmu-events/metric.py
> > +++ b/tools/perf/pmu-events/metric.py
> > @@ -573,7 +573,7 @@ def ParsePerfJson(orig: str) -> Expression:
> > # a double by the Bison parser
> > py = re.sub(r'0Event\(r"[xX]([0-9a-fA-F]*)"\)', r'Event("0x\1")', py)
> > # Convert accidentally converted scientific notation constants back
> > - py = re.sub(r'([0-9]+)Event\(r"(e[0-9]+)"\)', r'\1\2', py)
> > + py = re.sub(r'([0-9]+)Event\(r"(e[0-9]*)"\)', r'\1\2', py)
>
> I don't understand how it can handle negative numbers.
> Why isn't it like Event\(r"(e-?[0-9]+)"\) ?

When something like 3e12 is converted at this point it becomes:
3Event("e12")
and this substitution removes the Event and puts it back to:
3e12
but the pattern expects a number after the "e". For a negative
exponent like 3e-12 we instead get:
3Event("e")-12
where there's no number after the "e" and so no substitution happens.
Changing the + to a * makes that number optional, so we match and
remove the Event again, giving back 3e-12.

I'm wondering about making this a bit more of a real parser, but that
would rather defeat the point of using Python's eval as the parser.

Thanks,
Ian

> Thanks,
> Namhyung
>
>
> > # Convert all the known keywords back from events to just the keyword
> > keywords = ['if', 'else', 'min', 'max', 'd_ratio', 'source_count', 'has_event', 'strcmp_cpuid_str']
> > for kw in keywords:
> > diff --git a/tools/perf/pmu-events/metric_test.py b/tools/perf/pmu-events/metric_test.py
> > index ee22ff43ddd7..8acfe4652b55 100755
> > --- a/tools/perf/pmu-events/metric_test.py
> > +++ b/tools/perf/pmu-events/metric_test.py
> > @@ -61,6 +61,10 @@ class TestMetricExpressions(unittest.TestCase):
> > after = before
> > self.assertEqual(ParsePerfJson(before).ToPerfJson(), after)
> >
> > + before = r'a + 3e-12 + b'
> > + after = before
> > + self.assertEqual(ParsePerfJson(before).ToPerfJson(), after)
> > +
> > def test_IfElseTests(self):
> > # if-else needs rewriting to Select and back.
> > before = r'Event1 if #smt_on else Event2'
> > --
> > 2.44.0.278.ge034bb2e1d-goog
> >