The current method for allocating trace source ID values to sources is
to use a fixed algorithm for CPU based sources of (cpu_num * 2 + 0x10).
The STM is allocated ID 0x1.
This fixed algorithm is used in both the CoreSight driver code, and by
perf when writing the trace metadata in the AUXTRACE_INFO record.
The method needs replacing because:
1. It is inefficient in its use of available IDs.
2. It does not scale to larger systems with many cores; the algorithm
has no limits, so it will generate invalid trace IDs for CPU numbers > 44.
Additionally, requirements to allocate extra system IDs have been seen
on some systems.
This patch set introduces an API that allows the allocation of trace IDs
in a dynamic manner.
Architecturally reserved IDs are never allocated, and the system is
limited to allocating only valid IDs.
Each of the current trace sources ETM3.x, ETM4.x and STM is updated to use
the new API.
For the ETMx.x devices, IDs are allocated on certain events:
a) When using sysfs, an ID is allocated on hardware enable, or on a read of the
sysfs TRCTRACEID register, and freed when the sysfs reset is written.
b) When using perf, an ID is allocated during the setup AUX event, and freed on
event free. IDs are communicated using the AUX_OUTPUT_HW_ID packet.
The ID allocator is notified when perf sessions start and stop,
so CPU based IDs are kept constant throughout any perf session. A minimal
sketch of this driver-side flow is shown below.
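Illustrative sketch only (the example_* wrappers are hypothetical, not part of
the patches), showing the intended driver-side use of the new API:

  /* Illustrative sketch - example_* functions are not part of this series */
  #include <linux/errno.h>
  #include <linux/types.h>
  #include "coresight-trace-id.h"

  /* perf setup_aux: reserve (or re-use) the trace ID for this CPU */
  static int example_setup_aux(int cpu)
  {
          return coresight_trace_id_get_cpu_id(cpu);
  }

  /* device enable: passive read of the ID already allocated for the CPU */
  static int example_enable_hw(int cpu, u8 *trcid)
  {
          int id = coresight_trace_id_read_cpu_id(cpu);

          if (id <= 0)
                  return -EINVAL;
          *trcid = (u8)id;
          return 0;
  }

  /* perf free_event: release - the release is pended while a session runs */
  static void example_free_event(int cpu)
  {
          coresight_trace_id_put_cpu_id(cpu);
  }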
Note: This patchset breaks some backward compatibility for perf record and
perf report.
The version of the AUXTRACE_INFO record has been updated to reflect the fact
that the trace source IDs are generated differently. This means that older
versions of perf report cannot decode the newer files.
Applies to coresight/next [4d45bc82df66]
Tested on DB410c
Changes since v3:
1) Fixed aarch32 build error in ETM3.x driver.
Reported-by: kernel test robot <[email protected]>
Changes since v2:
1) Improved backward compatibility: (requested by James)
Using the new version of perf on an old kernel will generate a usable file:
legacy metadata values are set by the new perf and will be used if new
ID packets are not present in the file.
Using an older version of perf / simpleperf on an updated kernel may still
work. The trace ID allocator has been updated to use the legacy ID values
where possible, so the generated file and the trace IDs used will match up to
the point where the legacy algorithm breaks down anyway.
2) Various changes to the ID allocator and ID packet format.
(suggested by Suzuki)
3) Per-CPU ID info in the allocator is now stored as an atomic type to allow a
passive read without taking the allocator spinlock. The perf flow now allocates
and releases ID values in setup_aux / free_event. Device enable and event enable
use the passive read to set the allocated values. This simplifies the locking on
the perf path and fixes issues that arose with locking dependencies.
Changes since v1:
(after feedback & discussion with Mathieu & Suzuki).
1) API has changed. The global trace ID map is managed internally, so it
is no longer passed in to the API functions.
2) perf record does not use sysfs to find the trace IDs. These are now
output as AUX_OUTPUT_HW_ID events. The drivers, perf record, and perf report
have been updated accordingly to generate and handle these events.
Mike Leach (13):
coresight: trace-id: Add API to dynamically assign Trace ID values
  coresight: Remove obsolete Trace ID uniqueness checks
coresight: stm: Update STM driver to use Trace ID API
coresight: etm4x: Update ETM4 driver to use Trace ID API
coresight: etm3x: Update ETM3 driver to use Trace ID API
coresight: etmX.X: stm: Remove trace_id() callback
coresight: perf: traceid: Add perf notifiers for Trace ID
perf: cs-etm: Move mapping of Trace ID and cpu into helper function
perf: cs-etm: Update record event to use new Trace ID protocol
kernel: events: Export perf_report_aux_output_id()
perf: cs-etm: Handle PERF_RECORD_AUX_OUTPUT_HW_ID packet
coresight: events: PERF_RECORD_AUX_OUTPUT_HW_ID used for Trace ID
coresight: trace-id: Add debug & test macros to Trace ID allocation
drivers/hwtracing/coresight/Makefile | 2 +-
drivers/hwtracing/coresight/coresight-core.c | 49 +--
.../hwtracing/coresight/coresight-etm-perf.c | 23 ++
drivers/hwtracing/coresight/coresight-etm.h | 3 +-
.../coresight/coresight-etm3x-core.c | 92 +++--
.../coresight/coresight-etm3x-sysfs.c | 27 +-
.../coresight/coresight-etm4x-core.c | 79 ++++-
.../coresight/coresight-etm4x-sysfs.c | 27 +-
drivers/hwtracing/coresight/coresight-etm4x.h | 3 +
drivers/hwtracing/coresight/coresight-stm.c | 49 +--
.../hwtracing/coresight/coresight-trace-id.c | 266 ++++++++++++++
.../hwtracing/coresight/coresight-trace-id.h | 78 +++++
include/linux/coresight-pmu.h | 35 +-
include/linux/coresight.h | 3 -
kernel/events/core.c | 1 +
tools/include/linux/coresight-pmu.h | 48 ++-
tools/perf/arch/arm/util/cs-etm.c | 21 +-
.../perf/util/cs-etm-decoder/cs-etm-decoder.c | 7 +
tools/perf/util/cs-etm.c | 331 +++++++++++++++---
tools/perf/util/cs-etm.h | 14 +-
20 files changed, 933 insertions(+), 225 deletions(-)
create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.c
create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.h
--
2.17.1
Adds a number of pr_debug macros to allow debugging and testing of
the trace ID allocation system.
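Usage note (an assumption based on the hunk below, not a new interface): the
dump macros are compiled out by default and are enabled by uncommenting the
define in coresight-trace-id.c; output is emitted through pr_debug() and is
therefore controllable through the usual dynamic debug mechanism:

  /* in coresight-trace-id.c - uncomment to compile in the dump macros */
  #define TRACE_ID_DEBUG 1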
Signed-off-by: Mike Leach <[email protected]>
---
.../hwtracing/coresight/coresight-trace-id.c | 36 +++++++++++++++++++
1 file changed, 36 insertions(+)
diff --git a/drivers/hwtracing/coresight/coresight-trace-id.c b/drivers/hwtracing/coresight/coresight-trace-id.c
index ac9092896dec..24c19ff493a9 100644
--- a/drivers/hwtracing/coresight/coresight-trace-id.c
+++ b/drivers/hwtracing/coresight/coresight-trace-id.c
@@ -69,6 +69,30 @@ static void coresight_trace_id_set_pend_rel(int id, struct coresight_trace_id_ma
set_bit(id, id_map->pend_rel_ids);
}
+/* #define TRACE_ID_DEBUG 1 */
+#ifdef TRACE_ID_DEBUG
+static char page_buf[PAGE_SIZE];
+
+static void coresight_trace_id_dump_table(struct coresight_trace_id_map *id_map,
+ const char *func_name)
+{
+ pr_debug("%s id_map::\n", func_name);
+ bitmap_print_to_pagebuf(0, page_buf, id_map->used_ids, CORESIGHT_TRACE_IDS_MAX);
+ pr_debug("Avial= %s\n", page_buf);
+ bitmap_print_to_pagebuf(0, page_buf, id_map->pend_rel_ids, CORESIGHT_TRACE_IDS_MAX);
+ pr_debug("Pend = %s\n", page_buf);
+}
+#define DUMP_ID_MAP(map) coresight_trace_id_dump_table(map, __func__)
+#define DUMP_ID_CPU(cpu, id) pr_debug("%s called; cpu=%d, id=%d\n", __func__, cpu, id)
+#define DUMP_ID(id) pr_debug("%s called; id=%d\n", __func__, id)
+#define PERF_SESSION(n) pr_debug("%s perf count %d\n", __func__, n)
+#else
+#define DUMP_ID_MAP(map)
+#define DUMP_ID(id)
+#define DUMP_ID_CPU(cpu, id)
+#define PERF_SESSION(n)
+#endif
+
/* release all pending IDs for all current maps & clear CPU associations */
static void coresight_trace_id_release_all_pending(void)
{
@@ -88,6 +112,7 @@ static void coresight_trace_id_release_all_pending(void)
}
}
spin_unlock_irqrestore(&id_map_lock, flags);
+ DUMP_ID_MAP(id_map);
}
static int coresight_trace_id_map_get_cpu_id(int cpu, struct coresight_trace_id_map *id_map)
@@ -123,6 +148,8 @@ static int coresight_trace_id_map_get_cpu_id(int cpu, struct coresight_trace_id_
get_cpu_id_out:
spin_unlock_irqrestore(&id_map_lock, flags);
+ DUMP_ID_CPU(cpu, id);
+ DUMP_ID_MAP(id_map);
return id;
}
@@ -150,6 +177,8 @@ static void coresight_trace_id_map_put_cpu_id(int cpu, struct coresight_trace_id
spin_unlock_irqrestore(&id_map_lock, flags);
put_cpu_id_out:
+ DUMP_ID_CPU(cpu, id);
+ DUMP_ID_MAP(id_map);
}
static int coresight_trace_id_map_get_system_id(struct coresight_trace_id_map *id_map)
@@ -161,6 +190,8 @@ static int coresight_trace_id_map_get_system_id(struct coresight_trace_id_map *i
id = coresight_trace_id_alloc_new_id(id_map, 0);
spin_unlock_irqrestore(&id_map_lock, flags);
+ DUMP_ID(id);
+ DUMP_ID_MAP(id_map);
return id;
}
@@ -171,6 +202,9 @@ static void coresight_trace_id_map_put_system_id(struct coresight_trace_id_map *
spin_lock_irqsave(&id_map_lock, flags);
coresight_trace_id_free(id, id_map);
spin_unlock_irqrestore(&id_map_lock, flags);
+
+ DUMP_ID(id);
+ DUMP_ID_MAP(id_map);
}
/* API functions */
@@ -207,6 +241,7 @@ EXPORT_SYMBOL_GPL(coresight_trace_id_put_system_id);
void coresight_trace_id_perf_start(void)
{
atomic_inc(&perf_cs_etm_session_active);
+ PERF_SESSION(atomic_read(&perf_cs_etm_session_active));
}
EXPORT_SYMBOL_GPL(coresight_trace_id_perf_start);
@@ -214,6 +249,7 @@ void coresight_trace_id_perf_stop(void)
{
if (!atomic_dec_return(&perf_cs_etm_session_active))
coresight_trace_id_release_all_pending();
+ PERF_SESSION(atomic_read(&perf_cs_etm_session_active));
}
EXPORT_SYMBOL_GPL(coresight_trace_id_perf_stop);
--
2.17.1
CoreSight sources provide a callback (.trace_id) in the standard source
ops which returns the ID to the core code. This was used to check that
sources all had a unique Trace ID.
Uniqueness is now guaranteed by the Trace ID allocation system, and the
check code has been removed from the core.
This patch removes the now unneeded and unused .trace_id source op
from the ops structure, and its implementations in etm3x, etm4x and stm.
Signed-off-by: Mike Leach <[email protected]>
Reviewed-by: Suzuki K Poulose <[email protected]>
---
drivers/hwtracing/coresight/coresight-etm.h | 1 -
.../coresight/coresight-etm3x-core.c | 37 -------------------
.../coresight/coresight-etm4x-core.c | 8 ----
drivers/hwtracing/coresight/coresight-stm.c | 8 ----
include/linux/coresight.h | 3 --
5 files changed, 57 deletions(-)
diff --git a/drivers/hwtracing/coresight/coresight-etm.h b/drivers/hwtracing/coresight/coresight-etm.h
index 3667428d38b6..9a0d08b092ae 100644
--- a/drivers/hwtracing/coresight/coresight-etm.h
+++ b/drivers/hwtracing/coresight/coresight-etm.h
@@ -283,7 +283,6 @@ static inline unsigned int etm_readl(struct etm_drvdata *drvdata, u32 off)
}
extern const struct attribute_group *coresight_etm_groups[];
-int etm_get_trace_id(struct etm_drvdata *drvdata);
void etm_set_default(struct etm_config *config);
void etm_config_trace_mode(struct etm_config *config);
struct etm_config *get_etm_config(struct etm_drvdata *drvdata);
diff --git a/drivers/hwtracing/coresight/coresight-etm3x-core.c b/drivers/hwtracing/coresight/coresight-etm3x-core.c
index 245804cb29d5..c91008f060e6 100644
--- a/drivers/hwtracing/coresight/coresight-etm3x-core.c
+++ b/drivers/hwtracing/coresight/coresight-etm3x-core.c
@@ -455,42 +455,6 @@ static int etm_cpu_id(struct coresight_device *csdev)
return drvdata->cpu;
}
-int etm_get_trace_id(struct etm_drvdata *drvdata)
-{
- unsigned long flags;
- int trace_id = -1;
- struct device *etm_dev;
-
- if (!drvdata)
- goto out;
-
- etm_dev = drvdata->csdev->dev.parent;
- if (!local_read(&drvdata->mode))
- return drvdata->traceid;
-
- pm_runtime_get_sync(etm_dev);
-
- spin_lock_irqsave(&drvdata->spinlock, flags);
-
- CS_UNLOCK(drvdata->base);
- trace_id = (etm_readl(drvdata, ETMTRACEIDR) & ETM_TRACEID_MASK);
- CS_LOCK(drvdata->base);
-
- spin_unlock_irqrestore(&drvdata->spinlock, flags);
- pm_runtime_put(etm_dev);
-
-out:
- return trace_id;
-
-}
-
-static int etm_trace_id(struct coresight_device *csdev)
-{
- struct etm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
-
- return etm_get_trace_id(drvdata);
-}
-
int etm_read_alloc_trace_id(struct etm_drvdata *drvdata)
{
int trace_id;
@@ -740,7 +704,6 @@ static void etm_disable(struct coresight_device *csdev,
static const struct coresight_ops_source etm_source_ops = {
.cpu_id = etm_cpu_id,
- .trace_id = etm_trace_id,
.enable = etm_enable,
.disable = etm_disable,
};
diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
index b4fb28ce89fd..0648dea4053f 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
+++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
@@ -228,13 +228,6 @@ static int etm4_cpu_id(struct coresight_device *csdev)
return drvdata->cpu;
}
-static int etm4_trace_id(struct coresight_device *csdev)
-{
- struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
-
- return drvdata->trcid;
-}
-
int etm4_read_alloc_trace_id(struct etmv4_drvdata *drvdata)
{
int trace_id;
@@ -1026,7 +1019,6 @@ static void etm4_disable(struct coresight_device *csdev,
static const struct coresight_ops_source etm4_source_ops = {
.cpu_id = etm4_cpu_id,
- .trace_id = etm4_trace_id,
.enable = etm4_enable,
.disable = etm4_disable,
};
diff --git a/drivers/hwtracing/coresight/coresight-stm.c b/drivers/hwtracing/coresight/coresight-stm.c
index 9ef3e923a930..f4b4232614b0 100644
--- a/drivers/hwtracing/coresight/coresight-stm.c
+++ b/drivers/hwtracing/coresight/coresight-stm.c
@@ -281,15 +281,7 @@ static void stm_disable(struct coresight_device *csdev,
}
}
-static int stm_trace_id(struct coresight_device *csdev)
-{
- struct stm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
-
- return drvdata->traceid;
-}
-
static const struct coresight_ops_source stm_source_ops = {
- .trace_id = stm_trace_id,
.enable = stm_enable,
.disable = stm_disable,
};
diff --git a/include/linux/coresight.h b/include/linux/coresight.h
index 9f445f09fcfe..247147c11231 100644
--- a/include/linux/coresight.h
+++ b/include/linux/coresight.h
@@ -314,14 +314,11 @@ struct coresight_ops_link {
* Operations available for sources.
* @cpu_id: returns the value of the CPU number this component
* is associated to.
- * @trace_id: returns the value of the component's trace ID as known
- * to the HW.
* @enable: enables tracing for a source.
* @disable: disables tracing for a source.
*/
struct coresight_ops_source {
int (*cpu_id)(struct coresight_device *csdev);
- int (*trace_id)(struct coresight_device *csdev);
int (*enable)(struct coresight_device *csdev,
struct perf_event *event, u32 mode);
void (*disable)(struct coresight_device *csdev,
--
2.17.1
The checks for sources to have unique IDs have been removed - uniqueness is
now guaranteed by the ID allocation mechanism, and the check is inappropriate
where multiple ID maps are in use in larger systems.
Signed-off-by: Mike Leach <[email protected]>
---
drivers/hwtracing/coresight/coresight-core.c | 45 --------------------
1 file changed, 45 deletions(-)
diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
index c7b7c518a0a3..cde1b4704727 100644
--- a/drivers/hwtracing/coresight/coresight-core.c
+++ b/drivers/hwtracing/coresight/coresight-core.c
@@ -85,45 +85,6 @@ struct coresight_device *coresight_get_percpu_sink(int cpu)
}
EXPORT_SYMBOL_GPL(coresight_get_percpu_sink);
-static int coresight_id_match(struct device *dev, void *data)
-{
- int trace_id, i_trace_id;
- struct coresight_device *csdev, *i_csdev;
-
- csdev = data;
- i_csdev = to_coresight_device(dev);
-
- /*
- * No need to care about oneself and components that are not
- * sources or not enabled
- */
- if (i_csdev == csdev || !i_csdev->enable ||
- i_csdev->type != CORESIGHT_DEV_TYPE_SOURCE)
- return 0;
-
- /* Get the source ID for both components */
- trace_id = source_ops(csdev)->trace_id(csdev);
- i_trace_id = source_ops(i_csdev)->trace_id(i_csdev);
-
- /* All you need is one */
- if (trace_id == i_trace_id)
- return 1;
-
- return 0;
-}
-
-static int coresight_source_is_unique(struct coresight_device *csdev)
-{
- int trace_id = source_ops(csdev)->trace_id(csdev);
-
- /* this shouldn't happen */
- if (trace_id < 0)
- return 0;
-
- return !bus_for_each_dev(&coresight_bustype, NULL,
- csdev, coresight_id_match);
-}
-
static int coresight_find_link_inport(struct coresight_device *csdev,
struct coresight_device *parent)
{
@@ -432,12 +393,6 @@ static int coresight_enable_source(struct coresight_device *csdev, u32 mode)
{
int ret;
- if (!coresight_source_is_unique(csdev)) {
- dev_warn(&csdev->dev, "traceID %d not unique\n",
- source_ops(csdev)->trace_id(csdev));
- return -EINVAL;
- }
-
if (!csdev->enable) {
if (source_ops(csdev)->enable) {
ret = coresight_control_assoc_ectdev(csdev, true);
--
2.17.1
The trace ID API is now used to allocate trace IDs for ETM4.x / ETE
devices.
For perf sessions, these will be allocated on enable, and released on
disable.
For sysfs sessions, these will be allocated on enable, but only released
on reset. This allows the sysfs session to interrogate the Trace ID used
after the session is over - maintaining functional consistency with the
previous allocation scheme.
The trace ID will also be allocated on read of the mgmt/trctraceid file.
This ensures that if perf or sysfs reads this before enabling trace, the
value will be the one used for the trace session.
Trace ID initialisation is removed from the _probe() function.
Signed-off-by: Mike Leach <[email protected]>
---
.../coresight/coresight-etm4x-core.c | 79 +++++++++++++++++--
.../coresight/coresight-etm4x-sysfs.c | 27 ++++++-
drivers/hwtracing/coresight/coresight-etm4x.h | 3 +
3 files changed, 100 insertions(+), 9 deletions(-)
diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
index cf249ecad5a5..b4fb28ce89fd 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
+++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
@@ -42,6 +42,7 @@
#include "coresight-etm4x-cfg.h"
#include "coresight-self-hosted-trace.h"
#include "coresight-syscfg.h"
+#include "coresight-trace-id.h"
static int boot_enable;
module_param(boot_enable, int, 0444);
@@ -234,6 +235,50 @@ static int etm4_trace_id(struct coresight_device *csdev)
return drvdata->trcid;
}
+int etm4_read_alloc_trace_id(struct etmv4_drvdata *drvdata)
+{
+ int trace_id;
+
+ /*
+ * This will allocate a trace ID to the cpu,
+ * or return the one currently allocated.
+ */
+ /* trace id function has its own lock */
+ trace_id = coresight_trace_id_get_cpu_id(drvdata->cpu);
+ if (IS_VALID_ID(trace_id))
+ drvdata->trcid = (u8)trace_id;
+ else
+ dev_err(&drvdata->csdev->dev,
+ "Failed to allocate trace ID for %s on CPU%d\n",
+ dev_name(&drvdata->csdev->dev), drvdata->cpu);
+ return trace_id;
+}
+
+static int etm4_set_current_trace_id(struct etmv4_drvdata *drvdata)
+{
+ int trace_id;
+
+ /*
+ * Set the currently allocated trace ID - perf allocates IDs
+ * as part of setup_aux for all CPUs it may use.
+ */
+ trace_id = coresight_trace_id_read_cpu_id(drvdata->cpu);
+ if (IS_VALID_ID(trace_id)) {
+ drvdata->trcid = (u8)trace_id;
+ return 0;
+ }
+
+ dev_err(&drvdata->csdev->dev, "Failed to set trace ID for %s on CPU%d\n",
+ dev_name(&drvdata->csdev->dev), drvdata->cpu);
+
+ return -EINVAL;
+}
+
+void etm4_release_trace_id(struct etmv4_drvdata *drvdata)
+{
+ coresight_trace_id_put_cpu_id(drvdata->cpu);
+}
+
struct etm4_enable_arg {
struct etmv4_drvdata *drvdata;
int rc;
@@ -729,6 +774,15 @@ static int etm4_enable_perf(struct coresight_device *csdev,
ret = etm4_parse_event_config(csdev, event);
if (ret)
goto out;
+
+ /*
+ * perf allocates cpu ids as part of setup - device needs to use
+ * the allocated ID.
+ */
+ ret = etm4_set_current_trace_id(drvdata);
+ if (ret < 0)
+ goto out;
+
/* And enable it */
ret = etm4_enable_hw(drvdata);
@@ -753,6 +807,11 @@ static int etm4_enable_sysfs(struct coresight_device *csdev)
spin_lock(&drvdata->spinlock);
+ /* sysfs needs to read and allocate a trace ID */
+ ret = etm4_read_alloc_trace_id(drvdata);
+ if (ret < 0)
+ goto unlock_sysfs_enable;
+
/*
* Executing etm4_enable_hw on the cpu whose ETM is being enabled
* ensures that register writes occur when cpu is powered.
@@ -764,6 +823,11 @@ static int etm4_enable_sysfs(struct coresight_device *csdev)
ret = arg.rc;
if (!ret)
drvdata->sticky_enable = true;
+
+ if (ret)
+ etm4_release_trace_id(drvdata);
+
+unlock_sysfs_enable:
spin_unlock(&drvdata->spinlock);
if (!ret)
@@ -895,6 +959,8 @@ static int etm4_disable_perf(struct coresight_device *csdev,
/* TRCVICTLR::SSSTATUS, bit[9] */
filters->ssstatus = (control & BIT(9));
+ /* The perf event will release trace ids when it is destroyed */
+
return 0;
}
@@ -920,6 +986,13 @@ static void etm4_disable_sysfs(struct coresight_device *csdev)
spin_unlock(&drvdata->spinlock);
cpus_read_unlock();
+ /*
+ * we only release trace IDs when resetting sysfs.
+ * This permits sysfs users to read the trace ID after the trace
+ * session has completed. This maintains operational behaviour with
+ * the prior trace ID allocation method.
+ */
+
dev_dbg(&csdev->dev, "ETM tracing disabled\n");
}
@@ -1562,11 +1635,6 @@ static int etm4_dying_cpu(unsigned int cpu)
return 0;
}
-static void etm4_init_trace_id(struct etmv4_drvdata *drvdata)
-{
- drvdata->trcid = coresight_get_trace_id(drvdata->cpu);
-}
-
static int __etm4_cpu_save(struct etmv4_drvdata *drvdata)
{
int i, ret = 0;
@@ -1971,7 +2039,6 @@ static int etm4_probe(struct device *dev, void __iomem *base, u32 etm_pid)
if (!desc.name)
return -ENOMEM;
- etm4_init_trace_id(drvdata);
etm4_set_default(&drvdata->config);
pdata = coresight_get_platform_data(dev);
diff --git a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
index 6ea8181816fc..d3c27c521d43 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
+++ b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
@@ -266,10 +266,11 @@ static ssize_t reset_store(struct device *dev,
config->vmid_mask0 = 0x0;
config->vmid_mask1 = 0x0;
- drvdata->trcid = drvdata->cpu + 1;
-
spin_unlock(&drvdata->spinlock);
+ /* for sysfs - only release trace id when resetting */
+ etm4_release_trace_id(drvdata);
+
cscfg_csdev_reset_feats(to_coresight_device(dev));
return size;
@@ -2363,6 +2364,26 @@ static struct attribute *coresight_etmv4_attrs[] = {
NULL,
};
+/*
+ * Trace ID allocated dynamically on enable - but also allocate on read
+ * in case sysfs or perf read before enable to ensure consistent metadata
+ * information for trace decode
+ */
+static ssize_t trctraceid_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ int trace_id;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+ trace_id = etm4_read_alloc_trace_id(drvdata);
+ if (trace_id < 0)
+ return trace_id;
+
+ return sysfs_emit(buf, "0x%x\n", trace_id);
+}
+static DEVICE_ATTR_RO(trctraceid);
+
struct etmv4_reg {
struct coresight_device *csdev;
u32 offset;
@@ -2499,7 +2520,7 @@ static struct attribute *coresight_etmv4_mgmt_attrs[] = {
coresight_etm4x_reg(trcpidr3, TRCPIDR3),
coresight_etm4x_reg(trcoslsr, TRCOSLSR),
coresight_etm4x_reg(trcconfig, TRCCONFIGR),
- coresight_etm4x_reg(trctraceid, TRCTRACEIDR),
+ &dev_attr_trctraceid.attr,
coresight_etm4x_reg(trcdevarch, TRCDEVARCH),
NULL,
};
diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h
index a7bfea31f7d8..793c361841d4 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x.h
+++ b/drivers/hwtracing/coresight/coresight-etm4x.h
@@ -1095,4 +1095,7 @@ static inline bool etm4x_is_ete(struct etmv4_drvdata *drvdata)
{
return drvdata->arch >= ETM_ARCH_ETE;
}
+
+int etm4_read_alloc_trace_id(struct etmv4_drvdata *drvdata);
+void etm4_release_trace_id(struct etmv4_drvdata *drvdata);
#endif
--
2.17.1
When using dynamically assigned CoreSight trace IDs, the drivers can output
the ID / CPU association as a PERF_RECORD_AUX_OUTPUT_HW_ID packet.
Update cs-etm decoder to handle this packet by setting the CPU/Trace ID
mapping.
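For reference, a minimal sketch of unpacking such a payload, assuming the
FIELD_GET helper and the CS_AUX_HW_ID_* masks added by this patch (the
example_unpack_hw_id() function itself is hypothetical):

  /* Illustrative sketch - not a function added by this patch */
  static int example_unpack_hw_id(u64 hw_id, u8 *trace_chan_id)
  {
          int version = FIELD_GET(CS_AUX_HW_ID_VERSION_MASK, hw_id);

          /* reject payloads produced by a newer, unknown format version */
          if (version > CS_AUX_HW_ID_CURR_VERSION)
                  return -EINVAL;

          /* bits [07:00] carry the trace ID for the CPU in the sample */
          *trace_chan_id = FIELD_GET(CS_AUX_HW_ID_TRACE_ID_MASK, hw_id);
          return 0;
  }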
Signed-off-by: Mike Leach <[email protected]>
---
tools/include/linux/coresight-pmu.h | 15 ++
.../perf/util/cs-etm-decoder/cs-etm-decoder.c | 7 +
tools/perf/util/cs-etm.c | 251 ++++++++++++++++--
3 files changed, 257 insertions(+), 16 deletions(-)
diff --git a/tools/include/linux/coresight-pmu.h b/tools/include/linux/coresight-pmu.h
index 307f357defe9..57ba1abf1224 100644
--- a/tools/include/linux/coresight-pmu.h
+++ b/tools/include/linux/coresight-pmu.h
@@ -32,6 +32,9 @@
*/
#define CORESIGHT_TRACE_ID_UNUSED_FLAG BIT(31)
+/* Value to set for unused trace ID values */
+#define CORESIGHT_TRACE_ID_UNUSED_VAL 0x7F
+
/*
* Below are the definition of bit offsets for perf option, and works as
* arbitrary values for all ETM versions.
@@ -56,4 +59,16 @@
#define ETM4_CFG_BIT_RETSTK 12
#define ETM4_CFG_BIT_VMID_OPT 15
+/*
+ * Interpretation of the PERF_RECORD_AUX_OUTPUT_HW_ID payload.
+ * Used to associate a CPU with the CoreSight Trace ID.
+ * [07:00] - Trace ID - uses 8 bits to make value easy to read in file.
+ * [59:08] - Unused (SBZ)
+ * [63:60] - Version
+ */
+#define CS_AUX_HW_ID_TRACE_ID_MASK GENMASK_ULL(7, 0)
+#define CS_AUX_HW_ID_VERSION_MASK GENMASK_ULL(63, 60)
+
+#define CS_AUX_HW_ID_CURR_VERSION 0
+
#endif
diff --git a/tools/perf/util/cs-etm-decoder/cs-etm-decoder.c b/tools/perf/util/cs-etm-decoder/cs-etm-decoder.c
index 31fa3b45134a..fa3aa9c0fb2e 100644
--- a/tools/perf/util/cs-etm-decoder/cs-etm-decoder.c
+++ b/tools/perf/util/cs-etm-decoder/cs-etm-decoder.c
@@ -625,6 +625,7 @@ cs_etm_decoder__create_etm_decoder(struct cs_etm_decoder_params *d_params,
switch (t_params->protocol) {
case CS_ETM_PROTO_ETMV3:
case CS_ETM_PROTO_PTM:
+ csid = (t_params->etmv3.reg_idr & CORESIGHT_TRACE_ID_VAL_MASK);
cs_etm_decoder__gen_etmv3_config(t_params, &config_etmv3);
decoder->decoder_name = (t_params->protocol == CS_ETM_PROTO_ETMV3) ?
OCSD_BUILTIN_DCD_ETMV3 :
@@ -632,11 +633,13 @@ cs_etm_decoder__create_etm_decoder(struct cs_etm_decoder_params *d_params,
trace_config = &config_etmv3;
break;
case CS_ETM_PROTO_ETMV4i:
+ csid = (t_params->etmv4.reg_traceidr & CORESIGHT_TRACE_ID_VAL_MASK);
cs_etm_decoder__gen_etmv4_config(t_params, &trace_config_etmv4);
decoder->decoder_name = OCSD_BUILTIN_DCD_ETMV4I;
trace_config = &trace_config_etmv4;
break;
case CS_ETM_PROTO_ETE:
+ csid = (t_params->ete.reg_traceidr & CORESIGHT_TRACE_ID_VAL_MASK);
cs_etm_decoder__gen_ete_config(t_params, &trace_config_ete);
decoder->decoder_name = OCSD_BUILTIN_DCD_ETE;
trace_config = &trace_config_ete;
@@ -645,6 +648,10 @@ cs_etm_decoder__create_etm_decoder(struct cs_etm_decoder_params *d_params,
return -1;
}
+ /* if the CPU has no trace ID associated, no decoder needed */
+ if (csid == CORESIGHT_TRACE_ID_UNUSED_VAL)
+ return 0;
+
if (d_params->operation == CS_ETM_OPERATION_DECODE) {
if (ocsd_dt_create_decoder(decoder->dcd_tree,
decoder->decoder_name,
diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c
index 48aaa2843ee2..06adcee254aa 100644
--- a/tools/perf/util/cs-etm.c
+++ b/tools/perf/util/cs-etm.c
@@ -217,6 +217,143 @@ static int cs_etm__map_trace_id(u8 trace_chan_id, u64 *cpu_metadata)
return 0;
}
+static int cs_etm__metadata_get_trace_id(u8 *trace_chan_id, u64 *cpu_metadata)
+{
+ u64 cs_etm_magic = cpu_metadata[CS_ETM_MAGIC];
+
+ switch (cs_etm_magic) {
+ case __perf_cs_etmv3_magic:
+ *trace_chan_id = (u8)(cpu_metadata[CS_ETM_ETMTRACEIDR] &
+ CORESIGHT_TRACE_ID_VAL_MASK);
+ break;
+ case __perf_cs_etmv4_magic:
+ case __perf_cs_ete_magic:
+ *trace_chan_id = (u8)(cpu_metadata[CS_ETMV4_TRCTRACEIDR] &
+ CORESIGHT_TRACE_ID_VAL_MASK);
+ break;
+ default:
+ return -EINVAL;
+ }
+ return 0;
+}
+
+/*
+ * update metadata trace ID from the value found in the AUX_OUTPUT_HW_ID packet.
+ * This will also clear the CORESIGHT_TRACE_ID_UNUSED_FLAG flag if present.
+ */
+static int cs_etm__metadata_set_trace_id(u8 trace_chan_id, u64 *cpu_metadata)
+{
+ u64 cs_etm_magic = cpu_metadata[CS_ETM_MAGIC];
+
+ switch (cs_etm_magic) {
+ case __perf_cs_etmv3_magic:
+ cpu_metadata[CS_ETM_ETMTRACEIDR] = trace_chan_id;
+ break;
+ case __perf_cs_etmv4_magic:
+ case __perf_cs_ete_magic:
+ cpu_metadata[CS_ETMV4_TRCTRACEIDR] = trace_chan_id;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+ return 0;
+}
+
+/*
+ * FIELD_GET (linux/bitfield.h) not available outside kernel code,
+ * and the header contains too many dependencies to just copy over,
+ * so roll our own based on the original
+ */
+#define __bf_shf(x) (__builtin_ffsll(x) - 1)
+#define FIELD_GET(_mask, _reg) \
+ ({ \
+ (typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask)); \
+ })
+
+/*
+ * Handle the PERF_RECORD_AUX_OUTPUT_HW_ID event.
+ *
+ * The payload associates the Trace ID and the CPU.
+ * The routine is tolerant of seeing multiple packets with the same association,
+ * but a CPU / Trace ID association changing during a session is an error.
+ */
+static int cs_etm__process_aux_output_hw_id(struct perf_session *session,
+ union perf_event *event)
+{
+ struct cs_etm_auxtrace *etm;
+ struct perf_sample sample;
+ struct int_node *inode;
+ struct evsel *evsel;
+ u64 *cpu_data;
+ u64 hw_id;
+ int cpu, version, err;
+ u8 trace_chan_id, curr_chan_id;
+
+ /* extract and parse the HW ID */
+ hw_id = event->aux_output_hw_id.hw_id;
+ version = FIELD_GET(CS_AUX_HW_ID_VERSION_MASK, hw_id);
+ trace_chan_id = FIELD_GET(CS_AUX_HW_ID_TRACE_ID_MASK, hw_id);
+
+ /* check that we can handle this version */
+ if (version > CS_AUX_HW_ID_CURR_VERSION)
+ return -EINVAL;
+
+ /* get access to the etm metadata */
+ etm = container_of(session->auxtrace, struct cs_etm_auxtrace, auxtrace);
+ if (!etm || !etm->metadata)
+ return -EINVAL;
+
+ /* parse the sample to get the CPU */
+ evsel = evlist__event2evsel(session->evlist, event);
+ if (!evsel)
+ return -EINVAL;
+ err = evsel__parse_sample(evsel, event, &sample);
+ if (err)
+ return err;
+ cpu = sample.cpu;
+ if (cpu == -1) {
+ /* no CPU in the sample - possibly recorded with an old version of perf */
+ pr_err("CS_ETM: no CPU AUX_OUTPUT_HW_ID sample. Use compatible perf to record.");
+ return -EINVAL;
+ }
+
+ /* See if the ID is mapped to a CPU, and it matches the current CPU */
+ inode = intlist__find(traceid_list, trace_chan_id);
+ if (inode) {
+ cpu_data = inode->priv;
+ if ((int)cpu_data[CS_ETM_CPU] != cpu) {
+ pr_err("CS_ETM: map mismatch between HW_ID packet CPU and Trace ID\n");
+ return -EINVAL;
+ }
+
+ /* check that the mapped ID matches */
+ err = cs_etm__metadata_get_trace_id(&curr_chan_id, cpu_data);
+ if (err)
+ return err;
+ if (curr_chan_id != trace_chan_id) {
+ pr_err("CS_ETM: mismatch between CPU trace ID and HW_ID packet ID\n");
+ return -EINVAL;
+ }
+
+ /* mapped and matched - return OK */
+ return 0;
+ }
+
+ /* not one we've seen before - let's map it */
+ cpu_data = etm->metadata[cpu];
+ err = cs_etm__map_trace_id(trace_chan_id, cpu_data);
+ if (err)
+ return err;
+
+ /*
+ * if we are picking up the association from the packet, need to plug
+ * the correct trace ID into the metadata for setting up decoders later.
+ */
+ err = cs_etm__metadata_set_trace_id(trace_chan_id, cpu_data);
+ return err;
+}
+
void cs_etm__etmq_set_traceid_queue_timestamp(struct cs_etm_queue *etmq,
u8 trace_chan_id)
{
@@ -2662,7 +2799,7 @@ static void cs_etm__print_auxtrace_info(__u64 *val, int num)
for (i = CS_HEADER_VERSION_MAX; cpu < num; cpu++) {
if (version == 0)
err = cs_etm__print_cpu_metadata_v0(val, &i);
- else if (version == 1)
+ else if (version == 1 || version == 2)
err = cs_etm__print_cpu_metadata_v1(val, &i);
if (err)
return;
@@ -2774,11 +2911,16 @@ static int cs_etm__queue_aux_fragment(struct perf_session *session, off_t file_o
}
/*
- * In per-thread mode, CPU is set to -1, but TID will be set instead. See
- * auxtrace_mmap_params__set_idx(). Return 'not found' if neither CPU nor TID match.
+ * In per-thread mode, auxtrace CPU is set to -1, but TID will be set instead. See
+ * auxtrace_mmap_params__set_idx(). However, the sample AUX event will contain a
+ * CPU as we always set this for the AUX_OUTPUT_HW_ID event.
+ * So now compare only TIDs if auxtrace CPU is -1, and CPUs if auxtrace CPU is not -1.
+ * Return 'not found' on a mismatch.
*/
- if ((auxtrace_event->cpu == (__u32) -1 && auxtrace_event->tid != sample->tid) ||
- auxtrace_event->cpu != sample->cpu)
+ if (auxtrace_event->cpu == (__u32) -1) {
+ if (auxtrace_event->tid != sample->tid)
+ return 1;
+ } else if (auxtrace_event->cpu != sample->cpu)
return 1;
if (aux_event->flags & PERF_AUX_FLAG_OVERWRITE) {
@@ -2827,6 +2969,17 @@ static int cs_etm__queue_aux_fragment(struct perf_session *session, off_t file_o
return 1;
}
+static int cs_etm__process_aux_hw_id_cb(struct perf_session *session, union perf_event *event,
+ u64 offset __maybe_unused, void *data __maybe_unused)
+{
+ /* look to handle PERF_RECORD_AUX_OUTPUT_HW_ID early to ensure decoders can be set up */
+ if (event->header.type == PERF_RECORD_AUX_OUTPUT_HW_ID) {
+ (*(int *)data)++; /* increment found count */
+ return cs_etm__process_aux_output_hw_id(session, event);
+ }
+ return 0;
+}
+
static int cs_etm__queue_aux_records_cb(struct perf_session *session, union perf_event *event,
u64 offset __maybe_unused, void *data __maybe_unused)
{
@@ -2916,13 +3069,13 @@ static int cs_etm__map_trace_ids_metadata(int num_cpu, u64 **metadata)
cs_etm_magic = metadata[i][CS_ETM_MAGIC];
switch (cs_etm_magic) {
case __perf_cs_etmv3_magic:
- trace_chan_id = (u8)((metadata[i][CS_ETM_ETMTRACEIDR]) &
- CORESIGHT_TRACE_ID_VAL_MASK);
+ metadata[i][CS_ETM_ETMTRACEIDR] &= CORESIGHT_TRACE_ID_VAL_MASK;
+ trace_chan_id = (u8)(metadata[i][CS_ETM_ETMTRACEIDR]);
break;
case __perf_cs_etmv4_magic:
case __perf_cs_ete_magic:
- trace_chan_id = (u8)((metadata[i][CS_ETMV4_TRCTRACEIDR]) &
- CORESIGHT_TRACE_ID_VAL_MASK);
+ metadata[i][CS_ETMV4_TRCTRACEIDR] &= CORESIGHT_TRACE_ID_VAL_MASK;
+ trace_chan_id = (u8)(metadata[i][CS_ETMV4_TRCTRACEIDR]);
break;
default:
/* unknown magic number */
@@ -2935,6 +3088,35 @@ static int cs_etm__map_trace_ids_metadata(int num_cpu, u64 **metadata)
return 0;
}
+/*
+ * If we found AUX_HW_ID packets, then set any metadata marked as unused to the
+ * unused value to reduce the number of unneeded decoders created.
+ */
+static int cs_etm__clear_unused_trace_ids_metadata(int num_cpu, u64 **metadata)
+{
+ u64 cs_etm_magic;
+ int i;
+
+ for (i = 0; i < num_cpu; i++) {
+ cs_etm_magic = metadata[i][CS_ETM_MAGIC];
+ switch (cs_etm_magic) {
+ case __perf_cs_etmv3_magic:
+ if (metadata[i][CS_ETM_ETMTRACEIDR] & CORESIGHT_TRACE_ID_UNUSED_FLAG)
+ metadata[i][CS_ETM_ETMTRACEIDR] = CORESIGHT_TRACE_ID_UNUSED_VAL;
+ break;
+ case __perf_cs_etmv4_magic:
+ case __perf_cs_ete_magic:
+ if (metadata[i][CS_ETMV4_TRCTRACEIDR] & CORESIGHT_TRACE_ID_UNUSED_FLAG)
+ metadata[i][CS_ETMV4_TRCTRACEIDR] = CORESIGHT_TRACE_ID_UNUSED_VAL;
+ break;
+ default:
+ /* unknown magic number */
+ return -EINVAL;
+ }
+ }
+ return 0;
+}
+
int cs_etm__process_auxtrace_info(union perf_event *event,
struct perf_session *session)
{
@@ -2947,6 +3129,7 @@ int cs_etm__process_auxtrace_info(union perf_event *event,
int priv_size = 0;
int num_cpu;
int err = 0;
+ int aux_hw_id_found;
int i, j;
u64 *ptr, *hdr = NULL;
u64 **metadata = NULL;
@@ -3113,8 +3296,43 @@ int cs_etm__process_auxtrace_info(union perf_event *event,
if (err)
goto err_delete_thread;
- /* before aux records are queued, need to map metadata to trace IDs */
- err = cs_etm__map_trace_ids_metadata(num_cpu, metadata);
+ /*
+ * Map Trace ID values to CPU metadata.
+ *
+ * Trace metadata will always contain Trace ID values from the legacy algorithm. If the
+ * file has been recorded by a "new" perf updated to handle AUX_HW_ID then the metadata
+ * ID value will also have the CORESIGHT_TRACE_ID_UNUSED_FLAG set.
+ *
+ * The updated kernel drivers that use AUX_HW_ID to send Trace IDs will attempt to use
+ * the same IDs as the old algorithm as far as is possible, unless there are clashes
+ * in which case a different value will be used. This means an older perf may still
+ * be able to record and read files generated on a newer system.
+ *
+ * For a perf able to interpret AUX_HW_ID packets we first check for the presence of
+ * those packets. If they are there then the values will be mapped and plugged into
+ * the metadata. We then set any remaining metadata values with the unused flag to the
+ * value CORESIGHT_TRACE_ID_UNUSED_VAL - which indicates no decoder is required.
+ *
+ * If no AUX_HW_ID packets are present - which means a file recorded on an old kernel -
+ * then we map Trace ID values to CPU directly from the metadata, clearing any unused
+ * flags if present.
+ */
+
+ /* first scan for AUX_OUTPUT_HW_ID records to map trace ID values to CPU metadata */
+ aux_hw_id_found = 0;
+ err = perf_session__peek_events(session, session->header.data_offset,
+ session->header.data_size,
+ cs_etm__process_aux_hw_id_cb, &aux_hw_id_found);
+ if (err)
+ goto err_delete_thread;
+
+ /* if HW ID found then clear any unused metadata ID values */
+ if (aux_hw_id_found)
+ err = cs_etm__clear_unused_trace_ids_metadata(num_cpu, metadata);
+ /* otherwise, this is a file with metadata values only, map from metadata */
+ else
+ err = cs_etm__map_trace_ids_metadata(num_cpu, metadata);
+
if (err)
goto err_delete_thread;
@@ -3124,13 +3342,14 @@ int cs_etm__process_auxtrace_info(union perf_event *event,
etm->data_queued = etm->queues.populated;
/*
- * Print warning in pipe mode, see cs_etm__process_auxtrace_event() and
+ * Print error in pipe mode, see cs_etm__process_auxtrace_event() and
* cs_etm__queue_aux_fragment() for details relating to limitations.
*/
- if (!etm->data_queued)
- pr_warning("CS ETM warning: Coresight decode and TRBE support requires random file access.\n"
- "Continuing with best effort decoding in piped mode.\n\n");
-
+ if (!etm->data_queued) {
+ pr_err("CS ETM: Coresight decode and TRBE support need random file access.\n");
+ err = -EINVAL;
+ goto err_delete_thread;
+ }
return 0;
err_delete_thread:
--
2.17.1
Trace IDs are now dynamically allocated.
Previously, the static 'cpu * 2 + seed' association algorithm was used. This
did not scale and generated invalid trace IDs on systems with high core
counts (> 46 cores).
The Trace ID will now be sent in a PERF_RECORD_AUX_OUTPUT_HW_ID record.
Legacy ID algorithm renamed and retained for limited backward
compatibility use.
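For illustration, a sketch of the value now written by perf record and how a
reader can recognise it as a placeholder (the example_* helpers are
hypothetical; they assume the macros from the tools copy of coresight-pmu.h):

  /* Illustrative sketch - example_* helpers are not part of this patch */
  #include <stdbool.h>
  #include <linux/coresight-pmu.h>  /* tools copy - provides the macros used */

  /* value written into the trace metadata by the updated perf record */
  static u64 example_record_metadata_id(int cpu)
  {
          return CORESIGHT_LEGACY_CPU_TRACE_ID(cpu) | CORESIGHT_TRACE_ID_UNUSED_FLAG;
  }

  /*
   * A flagged value is only a legacy placeholder; it is overridden by
   * AUX_OUTPUT_HW_ID packets when they are present in the recorded file.
   */
  static bool example_id_is_legacy_placeholder(u64 metadata_id)
  {
          return metadata_id & CORESIGHT_TRACE_ID_UNUSED_FLAG;
  }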
Signed-off-by: Mike Leach <[email protected]>
---
tools/include/linux/coresight-pmu.h | 30 +++++++++++++++++------------
tools/perf/arch/arm/util/cs-etm.c | 21 ++++++++++++--------
2 files changed, 31 insertions(+), 20 deletions(-)
diff --git a/tools/include/linux/coresight-pmu.h b/tools/include/linux/coresight-pmu.h
index db9c7c0abb6a..307f357defe9 100644
--- a/tools/include/linux/coresight-pmu.h
+++ b/tools/include/linux/coresight-pmu.h
@@ -10,11 +10,28 @@
#include <linux/bits.h>
#define CORESIGHT_ETM_PMU_NAME "cs_etm"
-#define CORESIGHT_ETM_PMU_SEED 0x10
+
+/*
+ * The legacy Trace ID system is based on a fixed calculation from the cpu
+ * number. This has been replaced by drivers using a dynamic allocation
+ * system, but the legacy algorithm must be retained for backward compatibility
+ * in certain situations:
+ * a) new perf running on older systems that generate the legacy mapping
+ * b) older tools e.g. simpleperf in Android, that may not update at the same
+ * time as the kernel.
+ */
+#define CORESIGHT_LEGACY_CPU_TRACE_ID(cpu) (0x10 + (cpu * 2))
/* CoreSight trace ID is currently the bottom 7 bits of the value */
#define CORESIGHT_TRACE_ID_VAL_MASK GENMASK(6, 0)
+/*
+ * perf record will set the legacy metadata values as unused initially.
+ * This allows perf report to manage the decoders created when dynamic
+ * allocation is in operation.
+ */
+#define CORESIGHT_TRACE_ID_UNUSED_FLAG BIT(31)
+
/*
* Below are the definition of bit offsets for perf option, and works as
* arbitrary values for all ETM versions.
@@ -39,15 +56,4 @@
#define ETM4_CFG_BIT_RETSTK 12
#define ETM4_CFG_BIT_VMID_OPT 15
-static inline int coresight_get_trace_id(int cpu)
-{
- /*
- * A trace ID of value 0 is invalid, so let's start at some
- * random value that fits in 7 bits and go from there. Since
- * the common convention is to have data trace IDs be I(N) + 1,
- * set instruction trace IDs as a function of the CPU number.
- */
- return (CORESIGHT_ETM_PMU_SEED + (cpu * 2));
-}
-
#endif
diff --git a/tools/perf/arch/arm/util/cs-etm.c b/tools/perf/arch/arm/util/cs-etm.c
index 1b54638d53b0..196fe1a77de9 100644
--- a/tools/perf/arch/arm/util/cs-etm.c
+++ b/tools/perf/arch/arm/util/cs-etm.c
@@ -421,13 +421,16 @@ static int cs_etm_recording_options(struct auxtrace_record *itr,
evlist__to_front(evlist, cs_etm_evsel);
/*
- * In the case of per-cpu mmaps, we need the CPU on the
- * AUX event. We also need the contextID in order to be notified
+ * get the CPU on the sample - need it to associate trace ID in the
+ * AUX_OUTPUT_HW_ID event, and the AUX event for per-cpu mmaps.
+ */
+ evsel__set_sample_bit(cs_etm_evsel, CPU);
+
+ /*
+ * Also, in the case of per-cpu mmaps, we need the contextID in order to be notified
* when a context switch happened.
*/
if (!perf_cpu_map__empty(cpus)) {
- evsel__set_sample_bit(cs_etm_evsel, CPU);
-
err = cs_etm_set_option(itr, cs_etm_evsel,
BIT(ETM_OPT_CTXTID) | BIT(ETM_OPT_TS));
if (err)
@@ -633,8 +636,10 @@ static void cs_etm_save_etmv4_header(__u64 data[], struct auxtrace_record *itr,
/* Get trace configuration register */
data[CS_ETMV4_TRCCONFIGR] = cs_etmv4_get_config(itr);
- /* Get traceID from the framework */
- data[CS_ETMV4_TRCTRACEIDR] = coresight_get_trace_id(cpu);
+ /* traceID set to legacy version, in case new perf running on older system */
+ data[CS_ETMV4_TRCTRACEIDR] =
+ CORESIGHT_LEGACY_CPU_TRACE_ID(cpu) | CORESIGHT_TRACE_ID_UNUSED_FLAG;
+
/* Get read-only information from sysFS */
data[CS_ETMV4_TRCIDR0] = cs_etm_get_ro(cs_etm_pmu, cpu,
metadata_etmv4_ro[CS_ETMV4_TRCIDR0]);
@@ -681,9 +686,9 @@ static void cs_etm_get_metadata(int cpu, u32 *offset,
magic = __perf_cs_etmv3_magic;
/* Get configuration register */
info->priv[*offset + CS_ETM_ETMCR] = cs_etm_get_config(itr);
- /* Get traceID from the framework */
+ /* traceID set to legacy value in case new perf running on old system */
info->priv[*offset + CS_ETM_ETMTRACEIDR] =
- coresight_get_trace_id(cpu);
+ CORESIGHT_LEGACY_CPU_TRACE_ID(cpu) | CORESIGHT_TRACE_ID_UNUSED_FLAG;
/* Get read-only information from sysFS */
info->priv[*offset + CS_ETM_ETMCCER] =
cs_etm_get_ro(cs_etm_pmu, cpu,
--
2.17.1
The existing mechanism to assign Trace ID values to sources is limited
and does not scale for larger multicore / multi trace source systems.
The API introduces functions that reserve IDs based on availability,
represented by a coresight_trace_id_map structure. This records the
used and free IDs in a bitmap.
CPU bound sources such as ETMs use the coresight_trace_id_get_cpu_id /
coresight_trace_id_put_cpu_id pair of functions. The API will record
the ID associated with the CPU. This ensures that the same ID will be
re-used while perf events are active on the CPU. The put_cpu_id function
will pend release of the ID until all perf cs_etm sessions are complete.
Non-cpu sources, such as the STM can use coresight_trace_id_get_system_id /
coresight_trace_id_put_system_id.
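For illustration, a minimal sketch of the system-ID pair as a non-CPU source
might use it (the example_* functions are hypothetical; the STM patch later in
this series contains the real conversion):

  /* Illustrative sketch - example_* functions are not part of this patch */
  #include <linux/types.h>
  #include "coresight-trace-id.h"

  static int example_sys_source_enable(u8 *traceid)
  {
          int id = coresight_trace_id_get_system_id();

          if (id < 0)
                  return id;      /* no ID free in the valid range */

          *traceid = (u8)id;
          return 0;
  }

  static void example_sys_source_disable(u8 traceid)
  {
          coresight_trace_id_put_system_id(traceid);
  }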
Signed-off-by: Mike Leach <[email protected]>
---
drivers/hwtracing/coresight/Makefile | 2 +-
drivers/hwtracing/coresight/coresight-core.c | 4 +
.../hwtracing/coresight/coresight-trace-id.c | 230 ++++++++++++++++++
.../hwtracing/coresight/coresight-trace-id.h | 78 ++++++
include/linux/coresight-pmu.h | 23 +-
5 files changed, 324 insertions(+), 13 deletions(-)
create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.c
create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.h
diff --git a/drivers/hwtracing/coresight/Makefile b/drivers/hwtracing/coresight/Makefile
index b6c4a48140ec..329a0c704b87 100644
--- a/drivers/hwtracing/coresight/Makefile
+++ b/drivers/hwtracing/coresight/Makefile
@@ -6,7 +6,7 @@ obj-$(CONFIG_CORESIGHT) += coresight.o
coresight-y := coresight-core.o coresight-etm-perf.o coresight-platform.o \
coresight-sysfs.o coresight-syscfg.o coresight-config.o \
coresight-cfg-preload.o coresight-cfg-afdo.o \
- coresight-syscfg-configfs.o
+ coresight-syscfg-configfs.o coresight-trace-id.o
obj-$(CONFIG_CORESIGHT_LINK_AND_SINK_TMC) += coresight-tmc.o
coresight-tmc-y := coresight-tmc-core.o coresight-tmc-etf.o \
coresight-tmc-etr.o
diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
index 1edfec1e9d18..c7b7c518a0a3 100644
--- a/drivers/hwtracing/coresight/coresight-core.c
+++ b/drivers/hwtracing/coresight/coresight-core.c
@@ -22,6 +22,7 @@
#include "coresight-etm-perf.h"
#include "coresight-priv.h"
#include "coresight-syscfg.h"
+#include "coresight-trace-id.h"
static DEFINE_MUTEX(coresight_mutex);
static DEFINE_PER_CPU(struct coresight_device *, csdev_sink);
@@ -1775,6 +1776,9 @@ static int __init coresight_init(void)
if (ret)
goto exit_bus_unregister;
+ /* initialise the trace ID allocator */
+ coresight_trace_id_init();
+
/* initialise the coresight syscfg API */
ret = cscfg_init();
if (!ret)
diff --git a/drivers/hwtracing/coresight/coresight-trace-id.c b/drivers/hwtracing/coresight/coresight-trace-id.c
new file mode 100644
index 000000000000..ac9092896dec
--- /dev/null
+++ b/drivers/hwtracing/coresight/coresight-trace-id.c
@@ -0,0 +1,230 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2022, Linaro Limited, All rights reserved.
+ * Author: Mike Leach <[email protected]>
+ */
+#include <linux/coresight-pmu.h>
+#include <linux/kernel.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+
+#include "coresight-trace-id.h"
+
+/* need to keep data on ids & association with cpus. */
+struct cpu_id_info {
+ atomic_t id;
+ bool pend_rel;
+};
+
+/* default trace ID map. Used for systems that do not require per sink mappings */
+static struct coresight_trace_id_map id_map_default;
+
+/* maintain a record of the current mapping of cpu IDs */
+static DEFINE_PER_CPU(struct cpu_id_info, cpu_ids);
+
+/* perf session active counter */
+static atomic_t perf_cs_etm_session_active = ATOMIC_INIT(0);
+
+/* lock to protect id_map and cpu data */
+static DEFINE_SPINLOCK(id_map_lock);
+
+/*
+ * allocate new ID and set in use
+ * if @preferred_id is a valid id then try to use that value if available.
+ */
+static int coresight_trace_id_alloc_new_id(struct coresight_trace_id_map *id_map,
+ int preferred_id)
+{
+ int id;
+
+ /* for backwards compatibility reasons, cpu Ids may have a preferred value */
+ if (IS_VALID_ID(preferred_id) && !test_bit(preferred_id, id_map->used_ids))
+ id = preferred_id;
+ else {
+ /* skip reserved bit 0, look from bit 1 to CORESIGHT_TRACE_ID_RES_TOP */
+ id = find_next_zero_bit(id_map->used_ids, CORESIGHT_TRACE_ID_RES_TOP, 1);
+ if (id >= CORESIGHT_TRACE_ID_RES_TOP)
+ return -EINVAL;
+ }
+
+ /* mark as used */
+ set_bit(id, id_map->used_ids);
+ return id;
+}
+
+static void coresight_trace_id_free(int id, struct coresight_trace_id_map *id_map)
+{
+ if (WARN(!IS_VALID_ID(id), "%s: Invalid Trace ID %d\n", __func__, id))
+ return;
+ if (WARN(!test_bit(id, id_map->used_ids),
+ "%s: Freeing unused ID %d\n", __func__, id))
+ return;
+ clear_bit(id, id_map->used_ids);
+}
+
+static void coresight_trace_id_set_pend_rel(int id, struct coresight_trace_id_map *id_map)
+{
+ if (WARN(!IS_VALID_ID(id), "%s: Invalid Trace ID %d\n", __func__, id))
+ return;
+ set_bit(id, id_map->pend_rel_ids);
+}
+
+/* release all pending IDs for all current maps & clear CPU associations */
+static void coresight_trace_id_release_all_pending(void)
+{
+ struct coresight_trace_id_map *id_map = &id_map_default;
+ unsigned long flags;
+ int cpu, bit;
+
+ spin_lock_irqsave(&id_map_lock, flags);
+ for_each_set_bit(bit, id_map->pend_rel_ids, CORESIGHT_TRACE_ID_RES_TOP) {
+ clear_bit(bit, id_map->used_ids);
+ clear_bit(bit, id_map->pend_rel_ids);
+ }
+ for_each_possible_cpu(cpu) {
+ if (per_cpu(cpu_ids, cpu).pend_rel) {
+ per_cpu(cpu_ids, cpu).pend_rel = false;
+ atomic_set(&per_cpu(cpu_ids, cpu).id, 0);
+ }
+ }
+ spin_unlock_irqrestore(&id_map_lock, flags);
+}
+
+static int coresight_trace_id_map_get_cpu_id(int cpu, struct coresight_trace_id_map *id_map)
+{
+ unsigned long flags;
+ int id;
+
+ spin_lock_irqsave(&id_map_lock, flags);
+
+ /* check for existing allocation for this CPU */
+ id = atomic_read(&per_cpu(cpu_ids, cpu).id);
+ if (id)
+ goto get_cpu_id_out;
+
+ /*
+ * Find a new ID.
+ *
+ * Use legacy values where possible in the dynamic trace ID allocator to
+ * allow tools like Android simpleperf to continue working if they are not
+ * upgraded at the same time as the kernel drivers.
+ *
+ * If the generated legacy ID is invalid, or not available then the next
+ * available dynamic ID will be used.
+ */
+ id = coresight_trace_id_alloc_new_id(id_map, CORESIGHT_LEGACY_CPU_TRACE_ID(cpu));
+ if (IS_VALID_ID(id)) {
+ /* got a valid new ID - save details */
+ atomic_set(&per_cpu(cpu_ids, cpu).id, id);
+ per_cpu(cpu_ids, cpu).pend_rel = false;
+ clear_bit(id, id_map->pend_rel_ids);
+ }
+
+get_cpu_id_out:
+ spin_unlock_irqrestore(&id_map_lock, flags);
+
+ return id;
+}
+
+static void coresight_trace_id_map_put_cpu_id(int cpu, struct coresight_trace_id_map *id_map)
+{
+ unsigned long flags;
+ int id;
+
+ /* check for existing allocation for this CPU */
+ id = atomic_read(&per_cpu(cpu_ids, cpu).id);
+ if (!id)
+ goto put_cpu_id_out;
+
+ spin_lock_irqsave(&id_map_lock, flags);
+
+ if (atomic_read(&perf_cs_etm_session_active)) {
+ /* set release at pending if perf still active */
+ coresight_trace_id_set_pend_rel(id, id_map);
+ per_cpu(cpu_ids, cpu).pend_rel = true;
+ } else {
+ /* otherwise clear id */
+ coresight_trace_id_free(id, id_map);
+ atomic_set(&per_cpu(cpu_ids, cpu).id, 0);
+ }
+
+ spin_unlock_irqrestore(&id_map_lock, flags);
+put_cpu_id_out:
+}
+
+static int coresight_trace_id_map_get_system_id(struct coresight_trace_id_map *id_map)
+{
+ unsigned long flags;
+ int id;
+
+ spin_lock_irqsave(&id_map_lock, flags);
+ id = coresight_trace_id_alloc_new_id(id_map, 0);
+ spin_unlock_irqrestore(&id_map_lock, flags);
+
+ return id;
+}
+
+static void coresight_trace_id_map_put_system_id(struct coresight_trace_id_map *id_map, int id)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&id_map_lock, flags);
+ coresight_trace_id_free(id, id_map);
+ spin_unlock_irqrestore(&id_map_lock, flags);
+}
+
+/* API functions */
+int coresight_trace_id_get_cpu_id(int cpu)
+{
+ return coresight_trace_id_map_get_cpu_id(cpu, &id_map_default);
+}
+EXPORT_SYMBOL_GPL(coresight_trace_id_get_cpu_id);
+
+void coresight_trace_id_put_cpu_id(int cpu)
+{
+ coresight_trace_id_map_put_cpu_id(cpu, &id_map_default);
+}
+EXPORT_SYMBOL_GPL(coresight_trace_id_put_cpu_id);
+
+int coresight_trace_id_read_cpu_id(int cpu)
+{
+ return atomic_read(&per_cpu(cpu_ids, cpu).id);
+}
+EXPORT_SYMBOL_GPL(coresight_trace_id_read_cpu_id);
+
+int coresight_trace_id_get_system_id(void)
+{
+ return coresight_trace_id_map_get_system_id(&id_map_default);
+}
+EXPORT_SYMBOL_GPL(coresight_trace_id_get_system_id);
+
+void coresight_trace_id_put_system_id(int id)
+{
+ coresight_trace_id_map_put_system_id(&id_map_default, id);
+}
+EXPORT_SYMBOL_GPL(coresight_trace_id_put_system_id);
+
+void coresight_trace_id_perf_start(void)
+{
+ atomic_inc(&perf_cs_etm_session_active);
+}
+EXPORT_SYMBOL_GPL(coresight_trace_id_perf_start);
+
+void coresight_trace_id_perf_stop(void)
+{
+ if (!atomic_dec_return(&perf_cs_etm_session_active))
+ coresight_trace_id_release_all_pending();
+}
+EXPORT_SYMBOL_GPL(coresight_trace_id_perf_stop);
+
+void coresight_trace_id_init(void)
+{
+ int cpu;
+
+ /* initialise the atomic trace ID values */
+ for_each_possible_cpu(cpu) {
+ per_cpu(cpu_ids, cpu).pend_rel = false;
+ atomic_set(&per_cpu(cpu_ids, cpu).id, 0);
+ }
+}
+EXPORT_SYMBOL_GPL(coresight_trace_id_init);
diff --git a/drivers/hwtracing/coresight/coresight-trace-id.h b/drivers/hwtracing/coresight/coresight-trace-id.h
new file mode 100644
index 000000000000..0172f83a80bb
--- /dev/null
+++ b/drivers/hwtracing/coresight/coresight-trace-id.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright(C) 2022 Linaro Limited. All rights reserved.
+ * Author: Mike Leach <[email protected]>
+ */
+
+#ifndef _CORESIGHT_TRACE_ID_H
+#define _CORESIGHT_TRACE_ID_H
+
+/*
+ * Coresight trace ID allocation API
+ *
+ * With multi-cpu systems, and additional trace sources, a scalable
+ * trace ID reservation system is required.
+ *
+ * The system will allocate Ids on a demand basis, and allow them to be
+ * released when done.
+ *
+ * In order to ensure that consistent cpu / ID matching is maintained
+ * throughout a perf cs_etm event session, a session-in-progress flag will be
+ * maintained, and released IDs are not cleared until the perf session is
+ * complete. This allows the same CPU to be re-allocated its prior ID.
+ *
+ *
+ * Trace ID maps will be created and initialised to prevent architecturally
+ * reserved IDs from being allocated.
+ *
+ * API permits multiple maps to be maintained - for large systems where
+ * different sets of cpus trace into different independent sinks.
+ */
+
+#include <linux/bitops.h>
+#include <linux/types.h>
+
+
+/* architecturally we have 128 IDs some of which are reserved */
+#define CORESIGHT_TRACE_IDS_MAX 128
+
+/* ID 0 is reserved */
+#define CORESIGHT_TRACE_ID_RES_0 0
+
+/* ID 0x70 onwards are reserved */
+#define CORESIGHT_TRACE_ID_RES_TOP 0x70
+
+/* check an ID is in the valid range */
+#define IS_VALID_ID(id) \
+ ((id > CORESIGHT_TRACE_ID_RES_0) && (id < CORESIGHT_TRACE_ID_RES_TOP))
+
+/**
+ * Trace ID map.
+ *
+ * @used_ids: Bitmap to register available (bit = 0) and in use (bit = 1) IDs.
+ * Initialised so that the reserved IDs are permanently marked as in use.
+ * @pend_rel_ids: CPU IDs that have been released by the trace source but not yet marked
+ * as available, to allow re-allocation to the same CPU during a perf session.
+ */
+struct coresight_trace_id_map {
+ DECLARE_BITMAP(used_ids, CORESIGHT_TRACE_IDS_MAX);
+ DECLARE_BITMAP(pend_rel_ids, CORESIGHT_TRACE_IDS_MAX);
+};
+
+/* Allocate and release IDs for a single default trace ID map */
+int coresight_trace_id_get_cpu_id(int cpu);
+int coresight_trace_id_get_system_id(void);
+void coresight_trace_id_put_cpu_id(int cpu);
+void coresight_trace_id_put_system_id(int id);
+
+/* read without allocate */
+int coresight_trace_id_read_cpu_id(int cpu);
+
+/* notifiers for perf session start and stop */
+void coresight_trace_id_perf_start(void);
+void coresight_trace_id_perf_stop(void);
+
+/* initialisation */
+void coresight_trace_id_init(void);
+
+#endif /* _CORESIGHT_TRACE_ID_H */
diff --git a/include/linux/coresight-pmu.h b/include/linux/coresight-pmu.h
index 6c2fd6cc5a98..99bc3cc6bf2d 100644
--- a/include/linux/coresight-pmu.h
+++ b/include/linux/coresight-pmu.h
@@ -8,7 +8,17 @@
#define _LINUX_CORESIGHT_PMU_H
#define CORESIGHT_ETM_PMU_NAME "cs_etm"
-#define CORESIGHT_ETM_PMU_SEED 0x10
+
+/*
+ * The legacy Trace ID system is based on a fixed calculation from the cpu
+ * number. This has been replaced by drivers using a dynamic allocation
+ * system, but the legacy algorithm must be retained for backward compatibility
+ * in certain situations:
+ * a) new perf running on older systems that generate the legacy mapping
+ * b) older tools e.g. simpleperf in Android, that may not update at the same
+ * time as the kernel.
+ */
+#define CORESIGHT_LEGACY_CPU_TRACE_ID(cpu) (0x10 + (cpu * 2))
/*
* Below are the definition of bit offsets for perf option, and works as
@@ -34,15 +44,4 @@
#define ETM4_CFG_BIT_RETSTK 12
#define ETM4_CFG_BIT_VMID_OPT 15
-static inline int coresight_get_trace_id(int cpu)
-{
- /*
- * A trace ID of value 0 is invalid, so let's start at some
- * random value that fits in 7 bits and go from there. Since
- * the common convention is to have data trace IDs be I(N) + 1,
- * set instruction trace IDs as a function of the CPU number.
- */
- return (CORESIGHT_ETM_PMU_SEED + (cpu * 2));
-}
-
#endif
--
2.17.1
On 23/08/2022 10:09, Mike Leach wrote:
> The current method for allocating trace source ID values to sources is
> to use a fixed algorithm for CPU based sources of (cpu_num * 2 + 0x10).
> The STM is allocated ID 0x1.
>
> This fixed algorithm is used in both the CoreSight driver code, and by
> perf when writing the trace metadata in the AUXTRACE_INFO record.
>
> The method needs replacing as currently:-
> 1. It is inefficient in using available IDs.
> 2. Does not scale to larger systems with many cores and the algorithm
> has no limits so will generate invalid trace IDs for cpu number > 44.
>
> Additionally requirements to allocate additional system IDs on some
> systems have been seen.
>
> This patch set introduces an API that allows the allocation of trace IDs
> in a dynamic manner.
>
> Architecturally reserved IDs are never allocated, and the system is
> limited to allocating only valid IDs.
>
> Each of the current trace sources ETM3.x, ETM4.x and STM is updated to use
> the new API.
>
> For the ETMx.x devices IDs are allocated on certain events
> a) When using sysfs, an ID will be allocated on hardware enable, or a read of
> sysfs TRCTRACEID register and freed when the sysfs reset is written.
>
> b) When using perf, ID is allocated on during setup AUX event, and freed on
> event free. IDs are communicated using the AUX_OUTPUT_HW_ID packet.
> The ID allocator is notified when perf sessions start and stop
> so CPU based IDs are kept constant throughout any perf session.
>
>
> Note: This patchset breaks some backward compatibility for perf record and
> perf report.
>
> The version of the AUXTRACE_INFO has been updated to reflect the fact that
> the trace source IDs are generated differently. This will
> mean older versions of perf report cannot decode the newer file.
>
> Applies to coresight/next [4d45bc82df66]
> Tested on DB410c
>
Tested-by: James Clark <[email protected]>
Tested on N1SDP. Checked new perf on new and old kernels. Confirmed that
HW_ID packets are output, and decoding still looks good in system-wide and
per-thread modes.
> Changes since v3:
> 1) Fixed aarch32 build error in ETM3.x driver.
> Reported-by: kernel test robot <[email protected]>
>
> Changes since v2:
> 1) Improved backward compatibility: (requested by James)
>
> Using the new version of perf on an old kernel will generate a usable file
> legacy metadata values are set by the new perf and will be used if mew
> ID packets are not present in the file.
>
> Using an older version of perf / simpleperf on an updated kernel may still
> work. The trace ID allocator has been updated to use the legacy ID values
> where possible, so generated file and used trace IDs will match up to the
> point where the legacy algorithm is broken anyway.
>
> 2) Various changes to the ID allocator and ID packet format.
> (suggested by Suzuki)
>
> 3) per CPU ID info in allocator now stored as atomic type to allow a passive read
> without taking the allocator spinlock. perf flow now allocates and releases ID
> values in setup_aux / free_event. Device enable and event enable use the passive
> read to set the allocated values. This simplifies the locking mechanisms on the
> perf run and fixes issues that arose with locking dependencies.
>
> Changes since v1:
> (after feedback & discussion with Mathieu & Suzuki).
>
> 1) API has changed. The global trace ID map is managed internally, so it
> is no longer passed in to the API functions.
>
> 2) perf record does not use sysfs to find the trace IDs. These are now
> output as AUX_OUTPUT_HW_ID events. The drivers, perf record, and perf report
> have been updated accordingly to generate and handle these events.
>
> Mike Leach (13):
> coresight: trace-id: Add API to dynamically assign Trace ID values
> coresight: Remove obsolete Trace ID unniqueness checks
> coresight: stm: Update STM driver to use Trace ID API
> coresight: etm4x: Update ETM4 driver to use Trace ID API
> coresight: etm3x: Update ETM3 driver to use Trace ID API
> coresight: etmX.X: stm: Remove trace_id() callback
> coresight: perf: traceid: Add perf notifiers for Trace ID
> perf: cs-etm: Move mapping of Trace ID and cpu into helper function
> perf: cs-etm: Update record event to use new Trace ID protocol
> kernel: events: Export perf_report_aux_output_id()
> perf: cs-etm: Handle PERF_RECORD_AUX_OUTPUT_HW_ID packet
> coresight: events: PERF_RECORD_AUX_OUTPUT_HW_ID used for Trace ID
> coresight: trace-id: Add debug & test macros to Trace ID allocation
>
> drivers/hwtracing/coresight/Makefile | 2 +-
> drivers/hwtracing/coresight/coresight-core.c | 49 +--
> .../hwtracing/coresight/coresight-etm-perf.c | 23 ++
> drivers/hwtracing/coresight/coresight-etm.h | 3 +-
> .../coresight/coresight-etm3x-core.c | 92 +++--
> .../coresight/coresight-etm3x-sysfs.c | 27 +-
> .../coresight/coresight-etm4x-core.c | 79 ++++-
> .../coresight/coresight-etm4x-sysfs.c | 27 +-
> drivers/hwtracing/coresight/coresight-etm4x.h | 3 +
> drivers/hwtracing/coresight/coresight-stm.c | 49 +--
> .../hwtracing/coresight/coresight-trace-id.c | 266 ++++++++++++++
> .../hwtracing/coresight/coresight-trace-id.h | 78 +++++
> include/linux/coresight-pmu.h | 35 +-
> include/linux/coresight.h | 3 -
> kernel/events/core.c | 1 +
> tools/include/linux/coresight-pmu.h | 48 ++-
> tools/perf/arch/arm/util/cs-etm.c | 21 +-
> .../perf/util/cs-etm-decoder/cs-etm-decoder.c | 7 +
> tools/perf/util/cs-etm.c | 331 +++++++++++++++---
> tools/perf/util/cs-etm.h | 14 +-
> 20 files changed, 933 insertions(+), 225 deletions(-)
> create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.c
> create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.h
>
On 23/08/2022 10:10, Mike Leach wrote:
> When using dynamically assigned CoreSight trace IDs the drivers can output
> the ID / CPU association as a PERF_RECORD_AUX_OUTPUT_HW_ID packet.
>
> Update cs-etm decoder to handle this packet by setting the CPU/Trace ID
> mapping.
>
> Signed-off-by: Mike Leach <[email protected]>
> ---
[...]
> - /* before aux records are queued, need to map metadata to trace IDs */
> - err = cs_etm__map_trace_ids_metadata(num_cpu, metadata);
> + /*
> + * Map Trace ID values to CPU metadata.
> + *
> + * Trace metadata will always contain Trace ID values from the legacy algorithm. If the
> + * file has been recorded by a "new" perf updated to handle AUX_HW_ID then the metadata
> + * ID value will also have the CORESIGHT_TRACE_ID_UNUSED_FLAG set.
> + *
> + * The updated kernel drivers that use AUX_HW_ID to send Trace IDs will attempt to use
> + * the same IDs as the old algorithm as far as possible, unless there are clashes,
> + * in which case a different value will be used. This means an older perf may still
> + * be able to record and read files generated on a newer system.
> + *
> + * For a perf able to interpret AUX_HW_ID packets we first check for the presence of
> + * those packets. If they are there then the values will be mapped and plugged into
> + * the metadata. We then set any remaining metadata values that still have the unused
> + * flag to the value CORESIGHT_TRACE_ID_UNUSED_VAL - which indicates no decoder is required.
> + *
> + * If no AUX_HW_ID packets are present - which means the file was recorded on an old
> + * kernel - then we map Trace ID values to CPUs directly from the metadata, clearing any
> + * unused flags if present.
> + */
> +
> + /* first scan for AUX_OUTPUT_HW_ID records to map trace ID values to CPU metadata */
> + aux_hw_id_found = 0;
> + err = perf_session__peek_events(session, session->header.data_offset,
> + session->header.data_size,
> + cs_etm__process_aux_hw_id_cb, &aux_hw_id_found);
> + if (err)
> + goto err_delete_thread;
> +
> + /* if HW ID found then clear any unused metadata ID values */
> + if (aux_hw_id_found)
> + err = cs_etm__clear_unused_trace_ids_metadata(num_cpu, metadata);
> + /* otherwise, this is a file with metadata values only, map from metadata */
> + else
> + err = cs_etm__map_trace_ids_metadata(num_cpu, metadata);
> +
> if (err)
> goto err_delete_thread;
>
> @@ -3124,13 +3342,14 @@ int cs_etm__process_auxtrace_info(union perf_event *event,
>
> etm->data_queued = etm->queues.populated;
> /*
> - * Print warning in pipe mode, see cs_etm__process_auxtrace_event() and
> + * Print error in pipe mode, see cs_etm__process_auxtrace_event() and
> * cs_etm__queue_aux_fragment() for details relating to limitations.
> */
> - if (!etm->data_queued)
> - pr_warning("CS ETM warning: Coresight decode and TRBE support requires random file access.\n"
> - "Continuing with best effort decoding in piped mode.\n\n");
> -
> + if (!etm->data_queued) {
> + pr_err("CS ETM: Coresight decode and TRBE support need random file access.\n");
> + err = -EINVAL;
> + goto err_delete_thread;
> + }
This error message is never hit because the peek that was added is
followed by:
if (err)
goto err_delete_thread;
Peek will return -1 in pipe mode, so you get this output instead:
./perf record -e cs_etm//u -o - -- ls > stdio.data
cat stdio.data | ./perf report -i -
0x1464 [0x168]: failed to process type: 70
Error:
failed to process sample
It would be simpler to add this new check to the very beginning of
cs_etm__process_auxtrace_info() and print the message/quit there instead:
if (perf_data__is_pipe(session->data))
return -1;
Then etm->data_queued can also be removed because it's always true.
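A minimal sketch of that suggestion (the function signature follows the hunk
header quoted above; the exact message text and the -EINVAL return are
illustrative, not taken from the patch):

    int cs_etm__process_auxtrace_info(union perf_event *event,
                                      struct perf_session *session)
    {
            /*
             * Bail out up front in pipe mode: CoreSight decode needs random
             * file access, so there is no point queueing any data.
             */
            if (perf_data__is_pipe(session->data)) {
                    pr_err("CS ETM: Coresight decode and TRBE support need random file access.\n");
                    return -EINVAL;
            }

            /* ... existing setup and queueing continues here ... */
    }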
Apart from that issue:
Reviewed-by: James Clark <[email protected]>
Thanks
James
> return 0;
>
> err_delete_thread:
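As a rough illustration of the clearing step described in the comment block
quoted above, a simplified, hypothetical stand-in for
cs_etm__clear_unused_trace_ids_metadata() might look like this (the index and
macro names are those used elsewhere in the series; the real helper may
differ):

    static int example_clear_unused_trace_ids(int num_cpu, u64 **metadata)
    {
            u64 *cpu_md;
            int i, idx;

            for (i = 0; i < num_cpu; i++) {
                    cpu_md = metadata[i];
                    /* pick the trace ID slot for this CPU's ETM version */
                    idx = (cpu_md[CS_ETM_MAGIC] == __perf_cs_etmv3_magic) ?
                            CS_ETM_ETMTRACEIDR : CS_ETMV4_TRCTRACEIDR;
                    /* still flagged unused: no AUX_HW_ID packet mapped this CPU */
                    if (cpu_md[idx] & CORESIGHT_TRACE_ID_UNUSED_FLAG)
                            cpu_md[idx] = CORESIGHT_TRACE_ID_UNUSED_VAL;
            }
            return 0;
    }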
On 23/08/2022 10:10, Mike Leach wrote:
> Trace IDs are now dynamically allocated.
>
> The previous static association algorithm ('cpu * 2 + seed') is no
> longer used. It did not scale, and was broken for systems with high
> core counts (>46).
>
> Trace ID will now be sent in PERF_RECORD_AUX_OUTPUT_HW_ID record.
>
> Legacy ID algorithm renamed and retained for limited backward
> compatibility use.
>
> Signed-off-by: Mike Leach <[email protected]>
Reviewed-by: James Clark <[email protected]>
> ---
> tools/include/linux/coresight-pmu.h | 30 +++++++++++++++++------------
> tools/perf/arch/arm/util/cs-etm.c | 21 ++++++++++++--------
> 2 files changed, 31 insertions(+), 20 deletions(-)
>
> diff --git a/tools/include/linux/coresight-pmu.h b/tools/include/linux/coresight-pmu.h
> index db9c7c0abb6a..307f357defe9 100644
> --- a/tools/include/linux/coresight-pmu.h
> +++ b/tools/include/linux/coresight-pmu.h
> @@ -10,11 +10,28 @@
> #include <linux/bits.h>
>
> #define CORESIGHT_ETM_PMU_NAME "cs_etm"
> -#define CORESIGHT_ETM_PMU_SEED 0x10
> +
> +/*
> + * The legacy Trace ID system is based on a fixed calculation from the cpu
> + * number. This has been replaced by drivers using a dynamic allocation
> + * system, but the legacy algorithm is retained for backward compatibility
> + * in certain situations:
> + * a) new perf running on older systems that generate the legacy mapping
> + * b) older tools, e.g. simpleperf in Android, that may not update at the
> + *    same time as the kernel.
> + */
> +#define CORESIGHT_LEGACY_CPU_TRACE_ID(cpu) (0x10 + (cpu * 2))
>
> /* CoreSight trace ID is currently the bottom 7 bits of the value */
> #define CORESIGHT_TRACE_ID_VAL_MASK GENMASK(6, 0)
>
> +/*
> + * perf record will set the legacy metadata values as unused initially.
> + * This allows perf report to manage the decoders created when dynamic
> + * allocation is in operation.
> + */
> +#define CORESIGHT_TRACE_ID_UNUSED_FLAG BIT(31)
> +
> /*
> * Below are the definition of bit offsets for perf option, and works as
> * arbitrary values for all ETM versions.
> @@ -39,15 +56,4 @@
> #define ETM4_CFG_BIT_RETSTK 12
> #define ETM4_CFG_BIT_VMID_OPT 15
>
> -static inline int coresight_get_trace_id(int cpu)
> -{
> - /*
> - * A trace ID of value 0 is invalid, so let's start at some
> - * random value that fits in 7 bits and go from there. Since
> - * the common convention is to have data trace IDs be I(N) + 1,
> - * set instruction trace IDs as a function of the CPU number.
> - */
> - return (CORESIGHT_ETM_PMU_SEED + (cpu * 2));
> -}
> -
> #endif
> diff --git a/tools/perf/arch/arm/util/cs-etm.c b/tools/perf/arch/arm/util/cs-etm.c
> index 1b54638d53b0..196fe1a77de9 100644
> --- a/tools/perf/arch/arm/util/cs-etm.c
> +++ b/tools/perf/arch/arm/util/cs-etm.c
> @@ -421,13 +421,16 @@ static int cs_etm_recording_options(struct auxtrace_record *itr,
> evlist__to_front(evlist, cs_etm_evsel);
>
> /*
> - * In the case of per-cpu mmaps, we need the CPU on the
> - * AUX event. We also need the contextID in order to be notified
> + * Get the CPU on the sample - needed to associate the trace ID in the
> + * AUX_OUTPUT_HW_ID event, and the AUX event for per-cpu mmaps.
> + */
> + evsel__set_sample_bit(cs_etm_evsel, CPU);
> +
> + /*
> + * Also, in the case of per-cpu mmaps, we need the contextID in order to be notified
> * when a context switch happened.
> */
> if (!perf_cpu_map__empty(cpus)) {
> - evsel__set_sample_bit(cs_etm_evsel, CPU);
> -
> err = cs_etm_set_option(itr, cs_etm_evsel,
> BIT(ETM_OPT_CTXTID) | BIT(ETM_OPT_TS));
> if (err)
> @@ -633,8 +636,10 @@ static void cs_etm_save_etmv4_header(__u64 data[], struct auxtrace_record *itr,
>
> /* Get trace configuration register */
> data[CS_ETMV4_TRCCONFIGR] = cs_etmv4_get_config(itr);
> - /* Get traceID from the framework */
> - data[CS_ETMV4_TRCTRACEIDR] = coresight_get_trace_id(cpu);
> + /* traceID set to legacy version, in case new perf running on older system */
> + data[CS_ETMV4_TRCTRACEIDR] =
> + CORESIGHT_LEGACY_CPU_TRACE_ID(cpu) | CORESIGHT_TRACE_ID_UNUSED_FLAG;
> +
> /* Get read-only information from sysFS */
> data[CS_ETMV4_TRCIDR0] = cs_etm_get_ro(cs_etm_pmu, cpu,
> metadata_etmv4_ro[CS_ETMV4_TRCIDR0]);
> @@ -681,9 +686,9 @@ static void cs_etm_get_metadata(int cpu, u32 *offset,
> magic = __perf_cs_etmv3_magic;
> /* Get configuration register */
> info->priv[*offset + CS_ETM_ETMCR] = cs_etm_get_config(itr);
> - /* Get traceID from the framework */
> + /* traceID set to legacy value in case new perf running on old system */
> info->priv[*offset + CS_ETM_ETMTRACEIDR] =
> - coresight_get_trace_id(cpu);
> + CORESIGHT_LEGACY_CPU_TRACE_ID(cpu) | CORESIGHT_TRACE_ID_UNUSED_FLAG;
> /* Get read-only information from sysFS */
> info->priv[*offset + CS_ETM_ETMCCER] =
> cs_etm_get_ro(cs_etm_pmu, cpu,