This series enables the future IP trace features Embedded Trace Extension (ETE)
and Trace Buffer Extension (TRBE). This series depends on the ETM system
register instruction support series [0], which is available here [1]. This
series, which applies on top of [1], is available here [2] for quick access.
ETE is the PE (CPU) trace unit for CPUs implementing future architecture
extensions. ETE overlaps with the ETMv4 architecture, with additions to
support the newer architecture features and some restrictions on the
supported features w.r.t. ETMv4. ETE support is added by extending the
ETMv4 driver to recognise the ETE and handle the features as exposed by the
TRCIDRx registers. ETE only supports system instruction access from the
host CPU. The ETE could be integrated with a TRBE (see below), or with the
legacy CoreSight trace bus (e.g., ETRs). Thus the ETE follows the same
firmware description as the ETMs and requires a node per instance.
The Trace Buffer Extension (TRBE) implements a per-CPU trace buffer, which is
accessible via the system registers and can be combined with the ETE to
provide a 1x1 configuration of source & sink. TRBE is represented here as a
CoreSight sink, primarily because the ETE source could also work with other
traditional CoreSight sink devices. As TRBE only captures the trace data
produced by ETE, it cannot work alone.
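The 1x1 pairing is realised by letting the source on a CPU resolve its
dedicated sink through a per-CPU pointer instead of a firmware described
CoreSight path. A minimal sketch of the idea (the actual plumbing lives in
the "dedicated percpu sinks" patch of this series; the lookup helper below
is only illustrative, not part of the series):

  /* Populated by the TRBE driver for each supported CPU */
  DECLARE_PER_CPU(struct coresight_device *, csdev_sink);

  /* Illustrative source-side lookup for a 1x1 perf session */
  static struct coresight_device *percpu_sink_for(int cpu)
  {
  	return per_cpu(csdev_sink, cpu);	/* NULL if no TRBE on this CPU */
  }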
The TRBE representation here deviates in some distinct ways from a traditional
CoreSight sink device. The CoreSight path between ETE and TRBE is not built
during boot from the respective DT or ACPI entries. Unlike traditional sinks,
TRBE can generate interrupts to signal, among other things, that the buffer
has been filled. The interrupt is a PPI and should be communicated from the
platform; the DT or ACPI entry representing the TRBE should carry the PPI
number for a given platform. During a perf session, the TRBE IRQ handler
captures the trace into the perf auxiliary buffer before re-enabling the TRBE.
The system registers used here to configure ETE and TRBE are described in the
link below.
https://developer.arm.com/docs/ddi0601/g/aarch64-system-registers.
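For a flavour of the programming model, the trace buffer window is described
entirely by the three TRBE system registers; a condensed and purely
illustrative sequence (the driver wraps these accesses in small helpers, and
the example function below is not part of the series):

  static void trbe_program_example(u64 base, u64 write, u64 limit)
  {
  	write_sysreg_s(base, SYS_TRBBASER_EL1);		/* page aligned base */
  	write_sysreg_s(write, SYS_TRBPTR_EL1);		/* next write position */
  	write_sysreg_s(limit | TRBLIMITR_ENABLE,	/* page aligned limit */
  		       SYS_TRBLIMITR_EL1);		/* plus the enable bit */
  	isb();
  }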
Things todo:
- Improve TRBE IRQ handling for all possible corner cases
- Implement sysfs based trace sessions
[0] https://lore.kernel.org/linux-arm-kernel/[email protected]/
[1] https://gitlab.arm.com/linux-arm/linux-skp/-/tree/coresight/etm/sysreg-v5
[2] https://gitlab.arm.com/linux-arm/linux-anshuman/-/tree/coresight/ete_trbe_v1
Changes in V1:
- There are not many ETE changes from Suzuki, apart from splitting out the ETE DTS patch
- TRBE changes have been captured in the respective patches
Changes in RFC:
https://lore.kernel.org/linux-arm-kernel/[email protected]/
Cc: Mathieu Poirier <[email protected]>
Cc: Suzuki K Poulose <[email protected]>
Cc: Mike Leach <[email protected]>
Cc: Linu Cherian <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Anshuman Khandual (5):
arm64: Add TRBE definitions
coresight: core: Add support for dedicated percpu sinks
coresight: etm-perf: Truncate the perf record if handle has no space
coresight: sink: Add TRBE driver
dts: bindings: Document device tree binding for Arm TRBE
Suzuki K Poulose (6):
coresight: etm-perf: Allow an event to use different sinks
coresight: Do not scan for graph if none is present
coresight: etm4x: Add support for PE OS lock
coresight: ete: Add support for ETE sysreg access
coresight: ete: Add support for ETE tracing
dts: bindings: Document device tree bindings for ETE
Documentation/devicetree/bindings/arm/ete.txt | 41 +
Documentation/devicetree/bindings/arm/trbe.txt | 20 +
Documentation/trace/coresight/coresight-trbe.rst | 39 +
arch/arm64/include/asm/sysreg.h | 51 ++
drivers/hwtracing/coresight/Kconfig | 11 +
drivers/hwtracing/coresight/Makefile | 1 +
drivers/hwtracing/coresight/coresight-core.c | 14 +
drivers/hwtracing/coresight/coresight-etm-perf.c | 51 +-
drivers/hwtracing/coresight/coresight-etm4x-core.c | 138 ++-
drivers/hwtracing/coresight/coresight-etm4x.h | 64 +-
drivers/hwtracing/coresight/coresight-platform.c | 6 +
drivers/hwtracing/coresight/coresight-trbe.c | 925 +++++++++++++++++++++
drivers/hwtracing/coresight/coresight-trbe.h | 248 ++++++
include/linux/coresight.h | 12 +
14 files changed, 1580 insertions(+), 41 deletions(-)
create mode 100644 Documentation/devicetree/bindings/arm/ete.txt
create mode 100644 Documentation/devicetree/bindings/arm/trbe.txt
create mode 100644 Documentation/trace/coresight/coresight-trbe.rst
create mode 100644 drivers/hwtracing/coresight/coresight-trbe.c
create mode 100644 drivers/hwtracing/coresight/coresight-trbe.h
--
2.7.4
From: Suzuki K Poulose <[email protected]>
If a graph node is not found for a given node, of_graph_get_next_endpoint()
will emit the following error message:
OF: graph: no port node found in /<node_name>
If the given component doesn't have any explicit connections (e.g., ETE),
we can simply skip the graph parsing.
Cc: Mathieu Poirier <[email protected]>
Cc: Mike Leach <[email protected]>
Signed-off-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Anshuman Khandual <[email protected]>
---
drivers/hwtracing/coresight/coresight-platform.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/hwtracing/coresight/coresight-platform.c b/drivers/hwtracing/coresight/coresight-platform.c
index 3629b78..c594f45 100644
--- a/drivers/hwtracing/coresight/coresight-platform.c
+++ b/drivers/hwtracing/coresight/coresight-platform.c
@@ -90,6 +90,12 @@ static void of_coresight_get_ports_legacy(const struct device_node *node,
struct of_endpoint endpoint;
int in = 0, out = 0;
+ /*
+ * Avoid warnings in of_graph_get_next_endpoint()
+ * if the device doesn't have any graph connections
+ */
+ if (!of_graph_is_present(node))
+ return;
do {
ep = of_graph_get_next_endpoint(node, ep);
if (!ep)
--
2.7.4
From: Suzuki K Poulose <[email protected]>
When there are multiple sinks on the system and no sink is
specified, it is quite possible that the default sink for
one ETM could be different from that of another ETM. However,
we do not support having multiple sinks for an event yet. This
patch allows the event to use the default sinks on the ETMs
where it is scheduled, as long as the sinks are of the same
type.
E.g., if we have a 1x1 topology with per-CPU ETRs, the event can
use the per-CPU ETR for the session. However, if the sinks are
of different types, e.g. a TMC-ETR on one CPU and a custom sink
on another, the event will only trace on the first detected
sink.
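Concretely, two default sinks are treated as compatible only when they are
driven by the same sink operations, and a CPU whose default sink fails this
check is dropped from the event's CPU mask. The check added below is
essentially:

  static bool sinks_match(struct coresight_device *a, struct coresight_device *b)
  {
  	if (!a || !b)
  		return false;
  	return sink_ops(a) == sink_ops(b);
  }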
Cc: Mathieu Poirier <[email protected]>
Cc: Mike Leach <[email protected]>
Tested-by: Linu Cherian <[email protected]>
Signed-off-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Anshuman Khandual <[email protected]>
---
drivers/hwtracing/coresight/coresight-etm-perf.c | 48 +++++++++++++++++++-----
1 file changed, 38 insertions(+), 10 deletions(-)
diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
index bdc34ca..eb9e7e9 100644
--- a/drivers/hwtracing/coresight/coresight-etm-perf.c
+++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
@@ -204,6 +204,13 @@ static void etm_free_aux(void *data)
schedule_work(&event_data->work);
}
+static bool sinks_match(struct coresight_device *a, struct coresight_device *b)
+{
+ if (!a || !b)
+ return false;
+ return (sink_ops(a) == sink_ops(b));
+}
+
static void *etm_setup_aux(struct perf_event *event, void **pages,
int nr_pages, bool overwrite)
{
@@ -212,6 +219,7 @@ static void *etm_setup_aux(struct perf_event *event, void **pages,
cpumask_t *mask;
struct coresight_device *sink = NULL;
struct etm_event_data *event_data = NULL;
+ bool sink_forced = false;
event_data = alloc_event_data(cpu);
if (!event_data)
@@ -222,6 +230,7 @@ static void *etm_setup_aux(struct perf_event *event, void **pages,
if (event->attr.config2) {
id = (u32)event->attr.config2;
sink = coresight_get_sink_by_id(id);
+ sink_forced = true;
}
mask = &event_data->mask;
@@ -235,7 +244,7 @@ static void *etm_setup_aux(struct perf_event *event, void **pages,
*/
for_each_cpu(cpu, mask) {
struct list_head *path;
- struct coresight_device *csdev;
+ struct coresight_device *csdev, *new_sink;
csdev = per_cpu(csdev_src, cpu);
/*
@@ -249,21 +258,35 @@ static void *etm_setup_aux(struct perf_event *event, void **pages,
}
/*
- * No sink provided - look for a default sink for one of the
- * devices. At present we only support topology where all CPUs
- * use the same sink [N:1], so only need to find one sink. The
- * coresight_build_path later will remove any CPU that does not
- * attach to the sink, or if we have not found a sink.
+	 * No sink provided - look for a default sink for each of the
+	 * devices. We only support multiple sinks if all the default
+	 * sinks are of the same type, so that the sink buffer can be
+	 * shared as the event moves around. A CPU is dropped from the
+	 * event's mask if its default sink does not match the others.
*/
- if (!sink)
- sink = coresight_find_default_sink(csdev);
+ if (!sink_forced) {
+ new_sink = coresight_find_default_sink(csdev);
+ if (!new_sink) {
+ cpumask_clear_cpu(cpu, mask);
+ continue;
+ }
+ /* Skip checks for the first sink */
+ if (!sink) {
+ sink = new_sink;
+ } else if (!sinks_match(new_sink, sink)) {
+ cpumask_clear_cpu(cpu, mask);
+ continue;
+ }
+ } else {
+ new_sink = sink;
+ }
/*
* Building a path doesn't enable it, it simply builds a
* list of devices from source to sink that can be
* referenced later when the path is actually needed.
*/
- path = coresight_build_path(csdev, sink);
+ path = coresight_build_path(csdev, new_sink);
if (IS_ERR(path)) {
cpumask_clear_cpu(cpu, mask);
continue;
@@ -284,7 +307,12 @@ static void *etm_setup_aux(struct perf_event *event, void **pages,
if (!sink_ops(sink)->alloc_buffer || !sink_ops(sink)->free_buffer)
goto err;
- /* Allocate the sink buffer for this session */
+ /*
+ * Allocate the sink buffer for this session. All the sinks
+ * where this event can be scheduled are ensured to be of the
+	 * same type, so the same sink configuration can be used across
+	 * all of them.
+ */
event_data->snk_config =
sink_ops(sink)->alloc_buffer(sink, event, pages,
nr_pages, overwrite);
--
2.7.4
From: Suzuki K Poulose <[email protected]>
ETE may not implement the OS lock and could instead rely on
the PE OS Lock for trace unit access. This is indicated
by TRCOSLSR.OSLM == 0b100. Add support for handling the
PE OS Lock.
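For reference, OSLM[2:0] is assembled from TRCOSLSR[4:3] and TRCOSLSR[0];
the decode added below as ETM_OSLSR_OSLM() boils down to the following
(the helper name here is only illustrative):

  static u8 trcoslsr_to_oslm(u32 oslsr)
  {
  	/* OSLM == 0b100 means the trace unit uses the PE OS Lock */
  	return ((oslsr & GENMASK(4, 3)) >> 2) | (oslsr & 0x1);
  }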
Cc: Mathieu Poirier <[email protected]>
Cc: Mike Leach <[email protected]>
Signed-off-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Anshuman Khandual <[email protected]>
---
drivers/hwtracing/coresight/coresight-etm4x-core.c | 50 ++++++++++++++++++----
drivers/hwtracing/coresight/coresight-etm4x.h | 15 +++++++
2 files changed, 56 insertions(+), 9 deletions(-)
diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
index 3d62acb..31d65f3 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
+++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
@@ -110,30 +110,59 @@ void etm4x_sysreg_write(u64 val, u32 offset, bool _relaxed, bool _64bit)
}
}
-static void etm4_os_unlock_csa(struct etmv4_drvdata *drvdata, struct csdev_access *csa)
+static void etm_detect_os_lock(struct etmv4_drvdata *drvdata,
+ struct csdev_access *csa)
{
- /* Writing 0 to TRCOSLAR unlocks the trace registers */
- etm4x_relaxed_write32(csa, 0x0, TRCOSLAR);
- drvdata->os_unlock = true;
+ u32 oslsr = etm4x_relaxed_read32(csa, TRCOSLSR);
+
+ drvdata->os_lock_model = ETM_OSLSR_OSLM(oslsr);
+}
+
+static void etm_write_os_lock(struct etmv4_drvdata *drvdata,
+ struct csdev_access *csa, u32 val)
+{
+ val = !!val;
+
+ switch (drvdata->os_lock_model) {
+ case ETM_OSLOCK_PRESENT:
+ etm4x_relaxed_write32(csa, val, TRCOSLAR);
+ break;
+ case ETM_OSLOCK_PE:
+ write_sysreg_s(val, SYS_OSLAR_EL1);
+ break;
+ default:
+ pr_warn_once("CPU%d: Unsupported Trace OSLock model: %x\n",
+ smp_processor_id(), drvdata->os_lock_model);
+ fallthrough;
+ case ETM_OSLOCK_NI:
+ return;
+ }
isb();
}
+static inline void etm4_os_unlock_csa(struct etmv4_drvdata *drvdata,
+ struct csdev_access *csa)
+{
+ WARN_ON(drvdata->cpu != smp_processor_id());
+
+ /* Writing 0 to OS Lock unlocks the trace unit registers */
+ etm_write_os_lock(drvdata, csa, 0x0);
+ drvdata->os_unlock = true;
+}
+
static void etm4_os_unlock(struct etmv4_drvdata *drvdata)
{
if (!WARN_ON(!drvdata->csdev))
etm4_os_unlock_csa(drvdata, &drvdata->csdev->access);
-
}
static void etm4_os_lock(struct etmv4_drvdata *drvdata)
{
if (WARN_ON(!drvdata->csdev))
return;
-
- /* Writing 0x1 to TRCOSLAR locks the trace registers */
- etm4x_relaxed_write32(&drvdata->csdev->access, 0x1, TRCOSLAR);
+ /* Writing 0x1 to OS Lock locks the trace registers */
+ etm_write_os_lock(drvdata, &drvdata->csdev->access, 0x1);
drvdata->os_unlock = false;
- isb();
}
static void etm4_cs_lock(struct etmv4_drvdata *drvdata,
@@ -807,6 +836,9 @@ static void etm4_init_arch_data(void *info)
if (!etm4_init_csdev_access(drvdata, csa))
return;
+	/* Detect the support for OS Lock before we actually use it */
+ etm_detect_os_lock(drvdata, csa);
+
/* Make sure all registers are accessible */
etm4_os_unlock_csa(drvdata, csa);
etm4_cs_unlock(drvdata, csa);
diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h
index 7a6e3cd..69af577 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x.h
+++ b/drivers/hwtracing/coresight/coresight-etm4x.h
@@ -498,6 +498,20 @@
ETM_MODE_EXCL_USER)
/*
+ * TRCOSLSR.OSLM advertises the OS Lock model.
+ * OSLM[2:0] = TRCOSLSR[4:3,0]
+ *
+ * 0b000 - Trace OS Lock is not implemented.
+ * 0b010 - Trace OS Lock is implemented.
+ * 0b100 - Trace OS Lock is not implemented, unit is controlled by PE OS Lock.
+ */
+#define ETM_OSLOCK_NI 0b000
+#define ETM_OSLOCK_PRESENT 0b010
+#define ETM_OSLOCK_PE 0b100
+
+#define ETM_OSLSR_OSLM(oslsr) ((((oslsr) & GENMASK(4, 3)) >> 2) | (oslsr & 0x1))
+
+/*
* TRCDEVARCH Bit field definitions
* Bits[31:21] - ARCHITECT = Always Arm Ltd.
* * Bits[31:28] = 0x4
@@ -883,6 +897,7 @@ struct etmv4_drvdata {
u8 s_ex_level;
u8 ns_ex_level;
u8 q_support;
+ u8 os_lock_model;
bool sticky_enable;
bool boot_enable;
bool os_unlock;
--
2.7.4
From: Suzuki K Poulose <[email protected]>
Add support for handling the system registers for the Embedded Trace
Extension (ETE). ETE shares most of its registers with ETMv4, drops a
few, and also adds some new registers. Re-arrange the ETMv4x list to
share the common definitions and add the ETE sysreg support.
Cc: Mike Leach <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Signed-off-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Anshuman Khandual <[email protected]>
---
drivers/hwtracing/coresight/coresight-etm4x-core.c | 32 +++++++++++++++++
drivers/hwtracing/coresight/coresight-etm4x.h | 42 +++++++++++++++++-----
2 files changed, 65 insertions(+), 9 deletions(-)
diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
index 31d65f3..dff502f 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
+++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
@@ -110,6 +110,38 @@ void etm4x_sysreg_write(u64 val, u32 offset, bool _relaxed, bool _64bit)
}
}
+u64 ete_sysreg_read(u32 offset, bool _relaxed, bool _64bit)
+{
+ u64 res = 0;
+
+ switch (offset) {
+ ETE_READ_CASES(res)
+ default :
+ WARN_ONCE(1, "ete: trying to read unsupported register @%x\n",
+ offset);
+ }
+
+ if (!_relaxed)
+ __iormb(res); /* Imitate the !relaxed I/O helpers */
+
+ return res;
+}
+
+void ete_sysreg_write(u64 val, u32 offset, bool _relaxed, bool _64bit)
+{
+ if (!_relaxed)
+ __iowmb(); /* Imitate the !relaxed I/O helpers */
+ if (!_64bit)
+ val &= GENMASK(31, 0);
+
+ switch (offset) {
+ ETE_WRITE_CASES(val)
+ default :
+ WARN_ONCE(1, "ete: trying to write to unsupported register @%x\n",
+ offset);
+ }
+}
+
static void etm_detect_os_lock(struct etmv4_drvdata *drvdata,
struct csdev_access *csa)
{
diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h
index 69af577..6f64f08 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x.h
+++ b/drivers/hwtracing/coresight/coresight-etm4x.h
@@ -28,6 +28,7 @@
#define TRCAUXCTLR 0x018
#define TRCEVENTCTL0R 0x020
#define TRCEVENTCTL1R 0x024
+#define TRCRSR 0x028
#define TRCSTALLCTLR 0x02C
#define TRCTSCTLR 0x030
#define TRCSYNCPR 0x034
@@ -48,6 +49,7 @@
#define TRCSEQRSTEVR 0x118
#define TRCSEQSTR 0x11C
#define TRCEXTINSELR 0x120
+#define TRCEXTINSELRn(n) (0x120 + (n * 4)) /* n = 0-3 */
#define TRCCNTRLDVRn(n) (0x140 + (n * 4)) /* n = 0-3 */
#define TRCCNTCTLRn(n) (0x150 + (n * 4)) /* n = 0-3 */
#define TRCCNTVRn(n) (0x160 + (n * 4)) /* n = 0-3 */
@@ -156,9 +158,22 @@
#define CASE_WRITE(val, x) \
case (x): { write_etm4x_sysreg_const_offset((val), (x)); break; }
-#define CASE_LIST(op, val) \
- CASE_##op((val), TRCPRGCTLR) \
+#define ETE_ONLY_LIST(op, val) \
+ CASE_##op((val), TRCRSR) \
+ CASE_##op((val), TRCEXTINSELRn(1)) \
+ CASE_##op((val), TRCEXTINSELRn(2)) \
+ CASE_##op((val), TRCEXTINSELRn(3))
+
+#define ETM_ONLY_LIST(op, val) \
CASE_##op((val), TRCPROCSELR) \
+ CASE_##op((val), TRCVDCTLR) \
+ CASE_##op((val), TRCVDSACCTLR) \
+ CASE_##op((val), TRCVDARCCTLR) \
+ CASE_##op((val), TRCITCTRL) \
+ CASE_##op((val), TRCOSLAR)
+
+#define COMMON_LIST(op, val) \
+ CASE_##op((val), TRCPRGCTLR) \
CASE_##op((val), TRCSTATR) \
CASE_##op((val), TRCCONFIGR) \
CASE_##op((val), TRCAUXCTLR) \
@@ -175,9 +190,6 @@
CASE_##op((val), TRCVIIECTLR) \
CASE_##op((val), TRCVISSCTLR) \
CASE_##op((val), TRCVIPCSSCTLR) \
- CASE_##op((val), TRCVDCTLR) \
- CASE_##op((val), TRCVDSACCTLR) \
- CASE_##op((val), TRCVDARCCTLR) \
CASE_##op((val), TRCSEQEVRn(0)) \
CASE_##op((val), TRCSEQEVRn(1)) \
CASE_##op((val), TRCSEQEVRn(2)) \
@@ -272,7 +284,6 @@
CASE_##op((val), TRCSSPCICRn(5)) \
CASE_##op((val), TRCSSPCICRn(6)) \
CASE_##op((val), TRCSSPCICRn(7)) \
- CASE_##op((val), TRCOSLAR) \
CASE_##op((val), TRCOSLSR) \
CASE_##op((val), TRCPDCR) \
CASE_##op((val), TRCPDSR) \
@@ -344,7 +355,6 @@
CASE_##op((val), TRCCIDCCTLR1) \
CASE_##op((val), TRCVMIDCCTLR0) \
CASE_##op((val), TRCVMIDCCTLR1) \
- CASE_##op((val), TRCITCTRL) \
CASE_##op((val), TRCCLAIMSET) \
CASE_##op((val), TRCCLAIMCLR) \
CASE_##op((val), TRCDEVAFF0) \
@@ -364,8 +374,22 @@
CASE_##op((val), TRCPIDR2) \
CASE_##op((val), TRCPIDR3)
-#define ETM4x_READ_CASES(res) CASE_LIST(READ, (res))
-#define ETM4x_WRITE_CASES(val) CASE_LIST(WRITE, (val))
+#define ETM4x_READ_CASES(res) \
+ COMMON_LIST(READ, (res)) \
+ ETM_ONLY_LIST(READ, (res))
+
+#define ETM4x_WRITE_CASES(res) \
+ COMMON_LIST(WRITE, (res)) \
+ ETM_ONLY_LIST(WRITE, (res))
+
+#define ETE_READ_CASES(res) \
+ COMMON_LIST(READ, (res)) \
+ ETE_ONLY_LIST(READ, (res))
+
+#define ETE_WRITE_CASES(res) \
+ COMMON_LIST(WRITE, (res)) \
+ ETE_ONLY_LIST(WRITE, (res))
+
#define read_etm4x_sysreg_offset(offset, _64bit) \
({ \
--
2.7.4
This patch documents the device tree binding in use for Arm TRBE.
Cc: [email protected]
Cc: Mathieu Poirier <[email protected]>
Cc: Mike Leach <[email protected]>
Cc: Suzuki K Poulose <[email protected]>
Signed-off-by: Anshuman Khandual <[email protected]>
---
Changes in V1:
- TRBE DT compatible has been renamed to 'arm,trace-buffer-extension'
Documentation/devicetree/bindings/arm/trbe.txt | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
create mode 100644 Documentation/devicetree/bindings/arm/trbe.txt
diff --git a/Documentation/devicetree/bindings/arm/trbe.txt b/Documentation/devicetree/bindings/arm/trbe.txt
new file mode 100644
index 0000000..001945d
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/trbe.txt
@@ -0,0 +1,20 @@
+* Trace Buffer Extension (TRBE)
+
+Trace Buffer Extension (TRBE) is used for collecting trace data generated
+from a corresponding trace unit (ETE) using an in memory trace buffer.
+
+** TRBE Required properties:
+
+- compatible : should be one of:
+ "arm,trace-buffer-extension"
+
+- interrupts : Exactly 1 PPI must be listed. For heterogeneous systems where
+ TRBE is only supported on a subset of the CPUs, please consult
+ the arm,gic-v3 binding for details on describing a PPI partition.
+
+** Example:
+
+trbe {
+ compatible = "arm,trace-buffer-extension";
+ interrupts = <GIC_PPI 15 IRQ_TYPE_LEVEL_HIGH>;
+};
--
2.7.4
Trace Buffer Extension (TRBE) implements a per-CPU trace buffer which is
accessible via the system registers. The TRBE supports different addressing
modes, including CPU virtual addresses, and different buffer modes, including
a circular buffer mode. The TRBE buffer is addressed by a base pointer
(TRBBASER_EL1), a write pointer (TRBPTR_EL1) and a limit pointer
(TRBLIMITR_EL1). However, access to the trace buffer could be prohibited by a
higher exception level (EL3 or EL2), as indicated by TRBIDR_EL1.P. The TRBE
can also generate a CPU private interrupt (PPI) on address translation errors
and when the buffer is full. The overall implementation here is inspired by
the Arm SPE driver.
Cc: Mathieu Poirier <[email protected]>
Cc: Mike Leach <[email protected]>
Cc: Suzuki K Poulose <[email protected]>
Signed-off-by: Anshuman Khandual <[email protected]>
---
Changes in V1:
- Replaced direct alignment tests with IS_ALIGNED()
- Dropped trbe_perf->pid as it's not really required
- Dropped trbe_drvdata->atclk as it's not really required
- Changed trbe_cpudata->trbe_align sysfs entry as hex
- Dropped trbe_cpudata->irq sysfs entry completely and updated the documentation
- Dropped ACPI based TRBE detection support
- Dropped trbe_address_mode and string enum
- Added CORESIGHT_DEV_SUBTYPE_SINK_PERCPU_SYSMEM enumeration
- Make TRBE driver modular
- Premature exit when mode != CS_MODE_PERF
- Dropped CONFIG_PM suspend and resume support
- Dropped unused helpers and enumerations
- Truncate the perf record when perf_aux_output_begin() fails
- Dropped assert_trbe_address_[mode|align]()
- Introduce clear_trbe_state() which clears TRBSR in a single pass
- Introduce set_trbe_state() which configures TRBE for perf session
- Changed percpu trbe_drvdata->handle to hold a pointer instead
- Dropped TSB_CSYNC, dsb(), isb() from trbe_get_fault_act()
- Dropped two redundant isb() from trbe_reset_local()
- Dropped flush_tlb_all() from trbe_enable_hw()
- Moved dsb() after TSB_CSYNC in trbe_enable_hw() and trbe_disable_and_drain_local()
- Changed dsb() argument to ish for all instances
- Introduce set_trbe_flush()
- Reorganized arm_trbe_[probe|remove]_coresight_cpu()
- Dropped smp_call_function_single() from cpu online path i.e arm_trbe_cpu_startup()
- Replaced smp_call_function_single() with smp_call_function_many()
- Added some documentation on TRBE buffer management for perf
- Added comment for padding packet ETE_IGNORE_PACKET
- Dropped trbe_name[], instead used DRVNAME
- Changed TRBE DT node from arm-trbe to trace-buffer-extension
- Renamed trbe_perf structure as trbe_buf
- Changed all TRBE enum classification into just simple int defines
- Reworked TRBE module init/exit and CPU start-up/tear-down sequences
Documentation/trace/coresight/coresight-trbe.rst | 39 +
arch/arm64/include/asm/sysreg.h | 2 +
drivers/hwtracing/coresight/Kconfig | 11 +
drivers/hwtracing/coresight/Makefile | 1 +
drivers/hwtracing/coresight/coresight-trbe.c | 925 +++++++++++++++++++++++
drivers/hwtracing/coresight/coresight-trbe.h | 248 ++++++
6 files changed, 1226 insertions(+)
create mode 100644 Documentation/trace/coresight/coresight-trbe.rst
create mode 100644 drivers/hwtracing/coresight/coresight-trbe.c
create mode 100644 drivers/hwtracing/coresight/coresight-trbe.h
diff --git a/Documentation/trace/coresight/coresight-trbe.rst b/Documentation/trace/coresight/coresight-trbe.rst
new file mode 100644
index 0000000..8b79850
--- /dev/null
+++ b/Documentation/trace/coresight/coresight-trbe.rst
@@ -0,0 +1,39 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==============================
+Trace Buffer Extension (TRBE).
+==============================
+
+ :Author: Anshuman Khandual <[email protected]>
+ :Date: November 2020
+
+Hardware Description
+--------------------
+
+Trace Buffer Extension (TRBE) is a per-CPU piece of hardware which captures
+CPU trace generated from a corresponding per-CPU tracing unit into system
+memory. It is plugged in as a coresight sink device because the corresponding
+trace generators (ETE) are plugged in as source devices.
+
+The TRBE is not compliant with the CoreSight architecture specification, but is
+driven via the CoreSight driver framework to support the ETE (which is
+CoreSight compliant) integration.
+
+Sysfs files and directories
+---------------------------
+
+The TRBE devices appear on the existing coresight bus alongside the other
+coresight devices::
+
+ >$ ls /sys/bus/coresight/devices
+ trbe0 trbe1 trbe2 trbe3
+
+Each ``trbe<N>`` device is associated with a CPU::
+
+ >$ ls /sys/bus/coresight/devices/trbe0/
+  align dbm
+
+*Key file items are:-*
+ * ``align``: TRBE write pointer alignment
+ * ``dbm``: TRBE updates memory with access and dirty flags
+
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index e6962b1..2a9bfb7 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -97,6 +97,7 @@
#define SET_PSTATE_UAO(x) __emit_inst(0xd500401f | PSTATE_UAO | ((!!x) << PSTATE_Imm_shift))
#define SET_PSTATE_SSBS(x) __emit_inst(0xd500401f | PSTATE_SSBS | ((!!x) << PSTATE_Imm_shift))
#define SET_PSTATE_TCO(x) __emit_inst(0xd500401f | PSTATE_TCO | ((!!x) << PSTATE_Imm_shift))
+#define TSB_CSYNC __emit_inst(0xd503225f)
#define __SYS_BARRIER_INSN(CRm, op2, Rt) \
__emit_inst(0xd5000000 | sys_insn(0, 3, 3, (CRm), (op2)) | ((Rt) & 0x1f))
@@ -869,6 +870,7 @@
#define ID_AA64MMFR2_CNP_SHIFT 0
/* id_aa64dfr0 */
+#define ID_AA64DFR0_TRBE_SHIFT 44
#define ID_AA64DFR0_TRACE_FILT_SHIFT 40
#define ID_AA64DFR0_DOUBLELOCK_SHIFT 36
#define ID_AA64DFR0_PMSVER_SHIFT 32
diff --git a/drivers/hwtracing/coresight/Kconfig b/drivers/hwtracing/coresight/Kconfig
index c119824..0f5e101 100644
--- a/drivers/hwtracing/coresight/Kconfig
+++ b/drivers/hwtracing/coresight/Kconfig
@@ -156,6 +156,17 @@ config CORESIGHT_CTI
To compile this driver as a module, choose M here: the
module will be called coresight-cti.
+config CORESIGHT_TRBE
+ bool "Trace Buffer Extension (TRBE) driver"
+ depends on ARM64
+ help
+ This driver provides support for percpu Trace Buffer Extension (TRBE).
+	  TRBE always needs to be used along with its corresponding percpu ETE
+	  component. ETE generates trace data which is then captured with TRBE.
+	  Unlike traditional sink devices, TRBE is a CPU feature accessible via
+	  system registers. But its explicit dependency on the trace unit (ETE)
+	  requires it to be plugged in as a coresight sink device.
+
config CORESIGHT_CTI_INTEGRATION_REGS
bool "Access CTI CoreSight Integration Registers"
depends on CORESIGHT_CTI
diff --git a/drivers/hwtracing/coresight/Makefile b/drivers/hwtracing/coresight/Makefile
index f20e357..d608165 100644
--- a/drivers/hwtracing/coresight/Makefile
+++ b/drivers/hwtracing/coresight/Makefile
@@ -21,5 +21,6 @@ obj-$(CONFIG_CORESIGHT_STM) += coresight-stm.o
obj-$(CONFIG_CORESIGHT_CPU_DEBUG) += coresight-cpu-debug.o
obj-$(CONFIG_CORESIGHT_CATU) += coresight-catu.o
obj-$(CONFIG_CORESIGHT_CTI) += coresight-cti.o
+obj-$(CONFIG_CORESIGHT_TRBE) += coresight-trbe.o
coresight-cti-y := coresight-cti-core.o coresight-cti-platform.o \
coresight-cti-sysfs.o
diff --git a/drivers/hwtracing/coresight/coresight-trbe.c b/drivers/hwtracing/coresight/coresight-trbe.c
new file mode 100644
index 0000000..ba280e6
--- /dev/null
+++ b/drivers/hwtracing/coresight/coresight-trbe.c
@@ -0,0 +1,925 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * This driver enables the Trace Buffer Extension (TRBE) as a per-cpu
+ * coresight sink device, which can then pair with an appropriate per-cpu
+ * coresight source device (ETE) to capture the generated trace data. Trace
+ * can be enabled via the perf framework.
+ *
+ * Copyright (C) 2020 ARM Ltd.
+ *
+ * Author: Anshuman Khandual <[email protected]>
+ */
+#define DRVNAME "arm_trbe"
+
+#define pr_fmt(fmt) DRVNAME ": " fmt
+
+#include "coresight-trbe.h"
+
+#define PERF_IDX2OFF(idx, buf) ((idx) % ((buf)->nr_pages << PAGE_SHIFT))
+
+/*
+ * A padding packet that will help the user space tools
+ * in skipping relevant sections in the captured trace
+ * data which could not be decoded.
+ */
+#define ETE_IGNORE_PACKET 0x70
+
+enum trbe_fault_action {
+ TRBE_FAULT_ACT_WRAP,
+ TRBE_FAULT_ACT_SPURIOUS,
+ TRBE_FAULT_ACT_FATAL,
+};
+
+struct trbe_buf {
+ unsigned long trbe_base;
+ unsigned long trbe_limit;
+ unsigned long trbe_write;
+ int nr_pages;
+ void **pages;
+ bool snapshot;
+ struct trbe_cpudata *cpudata;
+};
+
+struct trbe_cpudata {
+ bool trbe_dbm;
+ u64 trbe_align;
+ int cpu;
+ enum cs_mode mode;
+ struct trbe_buf *buf;
+ struct trbe_drvdata *drvdata;
+};
+
+struct trbe_drvdata {
+ struct trbe_cpudata __percpu *cpudata;
+ struct perf_output_handle __percpu **handle;
+ struct hlist_node hotplug_node;
+ int irq;
+ cpumask_t supported_cpus;
+ enum cpuhp_state trbe_online;
+ struct platform_device *pdev;
+};
+
+static int trbe_alloc_node(struct perf_event *event)
+{
+ if (event->cpu == -1)
+ return NUMA_NO_NODE;
+ return cpu_to_node(event->cpu);
+}
+
+static void set_trbe_flush(void)
+{
+ asm(TSB_CSYNC);
+ dsb(ish);
+}
+
+static void trbe_disable_and_drain_local(void)
+{
+ write_sysreg_s(0, SYS_TRBLIMITR_EL1);
+ isb();
+ set_trbe_flush();
+}
+
+static void trbe_reset_local(void)
+{
+ trbe_disable_and_drain_local();
+ write_sysreg_s(0, SYS_TRBPTR_EL1);
+ write_sysreg_s(0, SYS_TRBBASER_EL1);
+ write_sysreg_s(0, SYS_TRBSR_EL1);
+ isb();
+}
+
+/*
+ * TRBE Buffer Management
+ *
+ * The TRBE buffer spans from the base pointer till the limit pointer. When enabled,
+ * it starts writing trace data from the write pointer onward till the limit pointer.
+ * When the write pointer reaches the address just before the limit pointer, it gets
+ * wrapped around again to the base pointer. This is called a TRBE wrap event which
+ * is accompanied by an IRQ. The write pointer again starts writing trace data from
+ * the base pointer until just before the limit pointer before getting wrapped again
+ * with an IRQ and this process just goes on as long as the TRBE is enabled.
+ *
+ * Wrap around with an IRQ
+ * ------ < ------ < ------- < ----- < -----
+ * | |
+ * ------ > ------ > ------- > ----- > -----
+ *
+ * +---------------+-----------------------+
+ * | | |
+ * +---------------+-----------------------+
+ * Base Pointer Write Pointer Limit Pointer
+ *
+ * The base and limit pointers always need to be PAGE_SIZE aligned. But the write
+ * pointer can be aligned to the implementation defined TRBE trace buffer alignment
+ * as captured in trbe_cpudata->trbe_align.
+ *
+ *
+ * head tail wakeup
+ * +---------------------------------------+----- ~ ~ ------
+ * |$$$$$$$|################|$$$$$$$$$$$$$$| |
+ * +---------------------------------------+----- ~ ~ ------
+ * Base Pointer Write Pointer Limit Pointer
+ *
+ * The perf_output_handle indices (head, tail, wakeup) are monotonically increasing
+ * values which track all the driver writes and user reads from the perf auxiliary
+ * buffer. Generally [head..tail] is the area into which the driver can write, unless
+ * the wakeup index is behind the tail. The enabled TRBE buffer span needs to be
+ * adjusted and configured depending on the perf_output_handle indices, so that the
+ * driver does not overwrite areas in the perf auxiliary buffer which are being, or
+ * are yet to be, consumed by user space. The enabled TRBE buffer area is a moving
+ * subset of the allocated perf auxiliary buffer.
+ */
+static void trbe_pad_buf(struct perf_output_handle *handle, int len)
+{
+ struct trbe_buf *buf = etm_perf_sink_config(handle);
+ u64 head = PERF_IDX2OFF(handle->head, buf);
+
+ memset((void *) buf->trbe_base + head, ETE_IGNORE_PACKET, len);
+ if (!buf->snapshot)
+ perf_aux_output_skip(handle, len);
+}
+
+static unsigned long trbe_snapshot_offset(struct perf_output_handle *handle)
+{
+ struct trbe_buf *buf = etm_perf_sink_config(handle);
+ u64 head = PERF_IDX2OFF(handle->head, buf);
+ u64 limit = buf->nr_pages * PAGE_SIZE;
+
+ /*
+ * The trace format isn't parseable in reverse, so clamp the limit
+ * to half of the buffer size in snapshot mode so that the worst
+ * case is half a buffer of records, as opposed to a single record.
+ */
+ if (head < limit >> 1)
+ limit >>= 1;
+
+ return limit;
+}
+
+/*
+ * TRBE Limit Calculation
+ *
+ * The following markers are used to illustrate various TRBE buffer situations.
+ *
+ * $$$$ - Data area, unconsumed captured trace data, not to be overwritten
+ * #### - Free area, enabled, trace will be written
+ * %%%% - Free area, disabled, trace will not be written
+ * ==== - Free area, padded with ETE_IGNORE_PACKET, trace will be skipped
+ */
+static unsigned long trbe_normal_offset(struct perf_output_handle *handle)
+{
+ struct trbe_buf *buf = etm_perf_sink_config(handle);
+ struct trbe_cpudata *cpudata = buf->cpudata;
+ const u64 bufsize = buf->nr_pages * PAGE_SIZE;
+ u64 limit = bufsize;
+ u64 head, tail, wakeup;
+
+ head = PERF_IDX2OFF(handle->head, buf);
+
+ /*
+ * head
+ * ------->|
+ * |
+ * head TRBE align tail
+ * +----|-------|---------------|-------+
+ * |$$$$|=======|###############|$$$$$$$|
+ * +----|-------|---------------|-------+
+ * trbe_base trbe_base + nr_pages
+ *
+	 * The perf aux buffer output head position can be misaligned depending
+	 * on various factors, including user space reads. If misaligned, the
+	 * head needs to be aligned before TRBE can be configured. Pad the
+	 * alignment gap with ETE_IGNORE_PACKET bytes, which user space tools
+	 * will ignore, and skip over that section, thus advancing the head.
+ */
+ if (!IS_ALIGNED(head, cpudata->trbe_align)) {
+ unsigned long delta = roundup(head, cpudata->trbe_align) - head;
+
+ delta = min(delta, handle->size);
+ trbe_pad_buf(handle, delta);
+ head = PERF_IDX2OFF(handle->head, buf);
+ }
+
+ /*
+ * head = tail (size = 0)
+ * +----|-------------------------------+
+ * |$$$$|$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ |
+ * +----|-------------------------------+
+ * trbe_base trbe_base + nr_pages
+ *
+ * Perf aux buffer does not have any space for the driver to write into.
+ * Just communicate trace truncation event to the user space by marking
+ * it with PERF_AUX_FLAG_TRUNCATED.
+ */
+ if (!handle->size) {
+ perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
+ return 0;
+ }
+
+ /* Compute the tail and wakeup indices now that we've aligned head */
+ tail = PERF_IDX2OFF(handle->head + handle->size, buf);
+ wakeup = PERF_IDX2OFF(handle->wakeup, buf);
+
+ /*
+ * Lets calculate the buffer area which TRBE could write into. There
+ * are three possible scenarios here. Limit needs to be aligned with
+ * PAGE_SIZE per the TRBE requirement. Always avoid clobbering the
+ * unconsumed data.
+ *
+ * 1) head < tail
+ *
+ * head tail
+ * +----|-----------------------|-------+
+ * |$$$$|#######################|$$$$$$$|
+ * +----|-----------------------|-------+
+ * trbe_base limit trbe_base + nr_pages
+ *
+ * TRBE could write into [head..tail] area. Unless the tail is right at
+	 * the end of the buffer, neither a wrap around nor an IRQ is expected
+ * while being enabled.
+ *
+ * 2) head == tail
+ *
+ * head = tail (size > 0)
+ * +----|-------------------------------+
+ * |%%%%|###############################|
+ * +----|-------------------------------+
+ * trbe_base limit = trbe_base + nr_pages
+ *
+ * TRBE should just write into [head..base + nr_pages] area even though
+ * the entire buffer is empty. Reason being, when the trace reaches the
+ * end of the buffer, it will just wrap around with an IRQ giving an
+ * opportunity to reconfigure the buffer.
+ *
+ * 3) tail < head
+ *
+ * tail head
+ * +----|-----------------------|-------+
+ * |%%%%|$$$$$$$$$$$$$$$$$$$$$$$|#######|
+ * +----|-----------------------|-------+
+ * trbe_base limit = trbe_base + nr_pages
+ *
+ * TRBE should just write into [head..base + nr_pages] area even though
+ * the [trbe_base..tail] is also empty. Reason being, when the trace
+ * reaches the end of the buffer, it will just wrap around with an IRQ
+ * giving an opportunity to reconfigure the buffer.
+ */
+ if (head < tail)
+ limit = round_down(tail, PAGE_SIZE);
+
+ /*
+ * Wakeup may be arbitrarily far into the future. If it's not in the
+ * current generation, either we'll wrap before hitting it, or it's
+ * in the past and has been handled already.
+ *
+ * If there's a wakeup before we wrap, arrange to be woken up by the
+ * page boundary following it. Keep the tail boundary if that's lower.
+ *
+ * head wakeup tail
+ * +----|---------------|-------|-------+
+ * |$$$$|###############|%%%%%%%|$$$$$$$|
+ * +----|---------------|-------|-------+
+ * trbe_base limit trbe_base + nr_pages
+ */
+ if (handle->wakeup < (handle->head + handle->size) && head <= wakeup)
+ limit = min(limit, round_up(wakeup, PAGE_SIZE));
+
+ /*
+	 * There are two situations where this can happen, i.e. the limit ends
+	 * up before the head and hence TRBE cannot be configured.
+ *
+ * 1) head < tail (aligned down with PAGE_SIZE) and also they are both
+ * within the same PAGE size range.
+ *
+ * PAGE_SIZE
+ * |----------------------|
+ *
+ * limit head tail
+ * +------------|------|--------|-------+
+ * |$$$$$$$$$$$$$$$$$$$|========|$$$$$$$|
+ * +------------|------|--------|-------+
+ * trbe_base trbe_base + nr_pages
+ *
+ * 2) head < wakeup (aligned up with PAGE_SIZE) < tail and also both
+ * head and wakeup are within same PAGE size range.
+ *
+ * PAGE_SIZE
+ * |----------------------|
+ *
+ * limit head wakeup tail
+ * +----|------|-------|--------|-------+
+ * |$$$$$$$$$$$|=======|========|$$$$$$$|
+ * +----|------|-------|--------|-------+
+ * trbe_base trbe_base + nr_pages
+ */
+ if (limit > head)
+ return limit;
+
+ trbe_pad_buf(handle, handle->size);
+ perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
+ return 0;
+}
+
+static unsigned long get_trbe_limit(struct perf_output_handle *handle)
+{
+ struct trbe_buf *buf = etm_perf_sink_config(handle);
+ unsigned long offset;
+
+ if (buf->snapshot)
+ offset = trbe_snapshot_offset(handle);
+ else
+ offset = trbe_normal_offset(handle);
+ return buf->trbe_base + offset;
+}
+
+static void clear_trbe_state(void)
+{
+ u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
+
+ WARN_ON(is_trbe_enabled());
+ trbsr &= ~TRBSR_IRQ;
+ trbsr &= ~TRBSR_TRG;
+ trbsr &= ~TRBSR_WRAP;
+ trbsr &= ~(TRBSR_EC_MASK << TRBSR_EC_SHIFT);
+ trbsr &= ~(TRBSR_BSC_MASK << TRBSR_BSC_SHIFT);
+ trbsr &= ~(TRBSR_FSC_MASK << TRBSR_FSC_SHIFT);
+ write_sysreg_s(trbsr, SYS_TRBSR_EL1);
+}
+
+static void set_trbe_state(void)
+{
+ u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
+
+ trblimitr &= ~TRBLIMITR_NVM;
+ trblimitr &= ~(TRBLIMITR_FILL_MODE_MASK << TRBLIMITR_FILL_MODE_SHIFT);
+ trblimitr &= ~(TRBLIMITR_TRIG_MODE_MASK << TRBLIMITR_TRIG_MODE_SHIFT);
+ trblimitr |= (TRBE_FILL_STOP & TRBLIMITR_FILL_MODE_MASK) << TRBLIMITR_FILL_MODE_SHIFT;
+ trblimitr |= (TRBE_TRIGGER_IGNORE & TRBLIMITR_TRIG_MODE_MASK) << TRBLIMITR_TRIG_MODE_SHIFT;
+ write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
+}
+
+static void trbe_enable_hw(struct trbe_buf *buf)
+{
+ WARN_ON(buf->trbe_write < buf->trbe_base);
+ WARN_ON(buf->trbe_write >= buf->trbe_limit);
+ set_trbe_disabled();
+ clear_trbe_state();
+ set_trbe_state();
+ isb();
+ set_trbe_base_pointer(buf->trbe_base);
+ set_trbe_limit_pointer(buf->trbe_limit);
+ set_trbe_write_pointer(buf->trbe_write);
+ isb();
+ set_trbe_running();
+ set_trbe_enabled();
+ set_trbe_flush();
+}
+
+static void *arm_trbe_alloc_buffer(struct coresight_device *csdev,
+ struct perf_event *event, void **pages,
+ int nr_pages, bool snapshot)
+{
+ struct trbe_buf *buf;
+ struct page **pglist;
+ int i;
+
+ if ((nr_pages < 2) || (snapshot && (nr_pages & 1)))
+ return NULL;
+
+ buf = kzalloc_node(sizeof(*buf), GFP_KERNEL, trbe_alloc_node(event));
+	if (!buf)
+ return ERR_PTR(-ENOMEM);
+
+ pglist = kcalloc(nr_pages, sizeof(*pglist), GFP_KERNEL);
+	if (!pglist) {
+ kfree(buf);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ for (i = 0; i < nr_pages; i++)
+ pglist[i] = virt_to_page(pages[i]);
+
+ buf->trbe_base = (unsigned long) vmap(pglist, nr_pages, VM_MAP, PAGE_KERNEL);
+	if (!buf->trbe_base) {
+		kfree(pglist);
+		kfree(buf);
+		return ERR_PTR(-ENOMEM);
+ }
+ buf->trbe_limit = buf->trbe_base + nr_pages * PAGE_SIZE;
+ buf->trbe_write = buf->trbe_base;
+ buf->snapshot = snapshot;
+ buf->nr_pages = nr_pages;
+ buf->pages = pages;
+ kfree(pglist);
+ return buf;
+}
+
+void arm_trbe_free_buffer(void *config)
+{
+ struct trbe_buf *buf = config;
+
+ vunmap((void *) buf->trbe_base);
+ kfree(buf);
+}
+
+static unsigned long arm_trbe_update_buffer(struct coresight_device *csdev,
+ struct perf_output_handle *handle,
+ void *config)
+{
+ struct trbe_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+ struct trbe_cpudata *cpudata = dev_get_drvdata(&csdev->dev);
+ struct trbe_buf *buf = config;
+ unsigned long size, offset;
+
+ WARN_ON(buf->cpudata != cpudata);
+ WARN_ON(cpudata->cpu != smp_processor_id());
+ WARN_ON(cpudata->drvdata != drvdata);
+ if (cpudata->mode != CS_MODE_PERF)
+ return -EINVAL;
+
+ offset = get_trbe_write_pointer() - get_trbe_base_pointer();
+ size = offset - PERF_IDX2OFF(handle->head, buf);
+ if (buf->snapshot)
+ handle->head += size;
+ trbe_reset_local();
+ return size;
+}
+
+static int arm_trbe_enable(struct coresight_device *csdev, u32 mode, void *data)
+{
+ struct trbe_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+ struct trbe_cpudata *cpudata = dev_get_drvdata(&csdev->dev);
+ struct perf_output_handle *handle = data;
+ struct trbe_buf *buf = etm_perf_sink_config(handle);
+
+ WARN_ON(cpudata->cpu != smp_processor_id());
+ WARN_ON(cpudata->drvdata != drvdata);
+ if (mode != CS_MODE_PERF)
+ return -EINVAL;
+
+ *this_cpu_ptr(drvdata->handle) = handle;
+ cpudata->buf = buf;
+ cpudata->mode = mode;
+ buf->cpudata = cpudata;
+ buf->trbe_write = buf->trbe_base + PERF_IDX2OFF(handle->head, buf);
+ buf->trbe_limit = get_trbe_limit(handle);
+ if (buf->trbe_limit == buf->trbe_base) {
+ trbe_disable_and_drain_local();
+ return 0;
+ }
+ trbe_enable_hw(buf);
+ return 0;
+}
+
+static int arm_trbe_disable(struct coresight_device *csdev)
+{
+ struct trbe_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+ struct trbe_cpudata *cpudata = dev_get_drvdata(&csdev->dev);
+ struct trbe_buf *buf = cpudata->buf;
+
+ WARN_ON(buf->cpudata != cpudata);
+ WARN_ON(cpudata->cpu != smp_processor_id());
+ WARN_ON(cpudata->drvdata != drvdata);
+ if (cpudata->mode != CS_MODE_PERF)
+ return -EINVAL;
+
+ trbe_disable_and_drain_local();
+ buf->cpudata = NULL;
+ cpudata->buf = NULL;
+ cpudata->mode = CS_MODE_DISABLED;
+ return 0;
+}
+
+static void trbe_handle_fatal(struct perf_output_handle *handle)
+{
+ perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
+ perf_aux_output_end(handle, 0);
+ trbe_disable_and_drain_local();
+}
+
+static void trbe_handle_spurious(struct perf_output_handle *handle)
+{
+ struct trbe_buf *buf = etm_perf_sink_config(handle);
+
+ buf->trbe_write = buf->trbe_base + PERF_IDX2OFF(handle->head, buf);
+ buf->trbe_limit = get_trbe_limit(handle);
+ if (buf->trbe_limit == buf->trbe_base) {
+ trbe_disable_and_drain_local();
+ return;
+ }
+ trbe_enable_hw(buf);
+}
+
+static void trbe_handle_overflow(struct perf_output_handle *handle)
+{
+ struct perf_event *event = handle->event;
+ struct trbe_buf *buf = etm_perf_sink_config(handle);
+ unsigned long offset, size;
+ struct etm_event_data *event_data;
+
+ offset = get_trbe_limit_pointer() - get_trbe_base_pointer();
+ size = offset - PERF_IDX2OFF(handle->head, buf);
+ if (buf->snapshot)
+ handle->head = offset;
+ perf_aux_output_end(handle, size);
+
+ event_data = perf_aux_output_begin(handle, event);
+ if (!event_data) {
+ event->hw.state |= PERF_HES_STOPPED;
+ trbe_disable_and_drain_local();
+ perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
+ return;
+ }
+ buf->trbe_write = buf->trbe_base;
+ buf->trbe_limit = get_trbe_limit(handle);
+ if (buf->trbe_limit == buf->trbe_base) {
+ trbe_disable_and_drain_local();
+ return;
+ }
+ *this_cpu_ptr(buf->cpudata->drvdata->handle) = handle;
+ trbe_enable_hw(buf);
+}
+
+static bool is_perf_trbe(struct perf_output_handle *handle)
+{
+ struct trbe_buf *buf = etm_perf_sink_config(handle);
+ struct trbe_cpudata *cpudata = buf->cpudata;
+ struct trbe_drvdata *drvdata = cpudata->drvdata;
+ int cpu = smp_processor_id();
+
+ WARN_ON(buf->trbe_base != get_trbe_base_pointer());
+ WARN_ON(buf->trbe_limit != get_trbe_limit_pointer());
+
+ if (cpudata->mode != CS_MODE_PERF)
+ return false;
+
+ if (cpudata->cpu != cpu)
+ return false;
+
+ if (!cpumask_test_cpu(cpu, &drvdata->supported_cpus))
+ return false;
+
+ return true;
+}
+
+static enum trbe_fault_action trbe_get_fault_act(struct perf_output_handle *handle)
+{
+ int ec = get_trbe_ec();
+ int bsc = get_trbe_bsc();
+
+ WARN_ON(is_trbe_running());
+ if (is_trbe_trg() || is_trbe_abort())
+ return TRBE_FAULT_ACT_FATAL;
+
+ if ((ec == TRBE_EC_STAGE1_ABORT) || (ec == TRBE_EC_STAGE2_ABORT))
+ return TRBE_FAULT_ACT_FATAL;
+
+ if (is_trbe_wrap() && (ec == TRBE_EC_OTHERS) && (bsc == TRBE_BSC_FILLED)) {
+ if (get_trbe_write_pointer() == get_trbe_base_pointer())
+ return TRBE_FAULT_ACT_WRAP;
+ }
+ return TRBE_FAULT_ACT_SPURIOUS;
+}
+
+static irqreturn_t arm_trbe_irq_handler(int irq, void *dev)
+{
+ struct perf_output_handle **handle_ptr = dev;
+ struct perf_output_handle *handle = *handle_ptr;
+ enum trbe_fault_action act;
+
+ WARN_ON(!is_trbe_irq());
+ clr_trbe_irq();
+
+ if (!perf_get_aux(handle))
+ return IRQ_NONE;
+
+ if (!is_perf_trbe(handle))
+ return IRQ_NONE;
+
+ irq_work_run();
+
+ act = trbe_get_fault_act(handle);
+ switch (act) {
+ case TRBE_FAULT_ACT_WRAP:
+ trbe_handle_overflow(handle);
+ break;
+ case TRBE_FAULT_ACT_SPURIOUS:
+ trbe_handle_spurious(handle);
+ break;
+ case TRBE_FAULT_ACT_FATAL:
+ trbe_handle_fatal(handle);
+ break;
+ }
+ return IRQ_HANDLED;
+}
+
+static const struct coresight_ops_sink arm_trbe_sink_ops = {
+ .enable = arm_trbe_enable,
+ .disable = arm_trbe_disable,
+ .alloc_buffer = arm_trbe_alloc_buffer,
+ .free_buffer = arm_trbe_free_buffer,
+ .update_buffer = arm_trbe_update_buffer,
+};
+
+static const struct coresight_ops arm_trbe_cs_ops = {
+ .sink_ops = &arm_trbe_sink_ops,
+};
+
+static ssize_t align_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct trbe_cpudata *cpudata = dev_get_drvdata(dev);
+
+ return sprintf(buf, "%llx\n", cpudata->trbe_align);
+}
+static DEVICE_ATTR_RO(align);
+
+static ssize_t dbm_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct trbe_cpudata *cpudata = dev_get_drvdata(dev);
+
+ return sprintf(buf, "%d\n", cpudata->trbe_dbm);
+}
+static DEVICE_ATTR_RO(dbm);
+
+static struct attribute *arm_trbe_attrs[] = {
+ &dev_attr_align.attr,
+ &dev_attr_dbm.attr,
+ NULL,
+};
+
+static const struct attribute_group arm_trbe_group = {
+ .attrs = arm_trbe_attrs,
+};
+
+static const struct attribute_group *arm_trbe_groups[] = {
+ &arm_trbe_group,
+ NULL,
+};
+
+static void arm_trbe_probe_coresight_cpu(void *info)
+{
+ struct trbe_drvdata *drvdata = info;
+ struct coresight_desc desc = { 0 };
+ int cpu = smp_processor_id();
+ struct trbe_cpudata *cpudata = per_cpu_ptr(drvdata->cpudata, cpu);
+ struct coresight_device *trbe_csdev = per_cpu(csdev_sink, cpu);
+ struct device *dev;
+
+ if (WARN_ON(!cpudata))
+ goto cpu_clear;
+
+ if (trbe_csdev)
+ return;
+
+ cpudata->cpu = smp_processor_id();
+ cpudata->drvdata = drvdata;
+ dev = &cpudata->drvdata->pdev->dev;
+
+ if (!is_trbe_available()) {
+ pr_err("TRBE is not implemented on cpu %d\n", cpudata->cpu);
+ goto cpu_clear;
+ }
+
+ if (!is_trbe_programmable()) {
+ pr_err("TRBE is owned in higher exception level on cpu %d\n", cpudata->cpu);
+ goto cpu_clear;
+ }
+ desc.name = devm_kasprintf(dev, GFP_KERNEL, "%s%d", DRVNAME, smp_processor_id());
+	if (!desc.name)
+ goto cpu_clear;
+
+ desc.type = CORESIGHT_DEV_TYPE_SINK;
+ desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_PERCPU_SYSMEM;
+ desc.ops = &arm_trbe_cs_ops;
+ desc.pdata = dev_get_platdata(dev);
+ desc.groups = arm_trbe_groups;
+ desc.dev = dev;
+ trbe_csdev = coresight_register(&desc);
+ if (IS_ERR(trbe_csdev))
+ goto cpu_clear;
+
+ dev_set_drvdata(&trbe_csdev->dev, cpudata);
+ cpudata->trbe_dbm = get_trbe_flag_update();
+ cpudata->trbe_align = 1ULL << get_trbe_address_align();
+ if (cpudata->trbe_align > SZ_2K) {
+ pr_err("Unsupported alignment on cpu %d\n", cpudata->cpu);
+ goto cpu_clear;
+ }
+ per_cpu(csdev_sink, cpu) = trbe_csdev;
+ trbe_reset_local();
+ enable_percpu_irq(drvdata->irq, IRQ_TYPE_NONE);
+ return;
+cpu_clear:
+ cpumask_clear_cpu(cpudata->cpu, &cpudata->drvdata->supported_cpus);
+}
+
+static void arm_trbe_remove_coresight_cpu(void *info)
+{
+ int cpu = smp_processor_id();
+ struct trbe_drvdata *drvdata = info;
+ struct trbe_cpudata *cpudata = per_cpu_ptr(drvdata->cpudata, cpu);
+ struct coresight_device *trbe_csdev = per_cpu(csdev_sink, cpu);
+
+ if (trbe_csdev) {
+ coresight_unregister(trbe_csdev);
+ cpudata->drvdata = NULL;
+ per_cpu(csdev_sink, cpu) = NULL;
+ }
+ disable_percpu_irq(drvdata->irq);
+ trbe_reset_local();
+}
+
+static int arm_trbe_probe_coresight(struct trbe_drvdata *drvdata)
+{
+ drvdata->cpudata = alloc_percpu(typeof(*drvdata->cpudata));
+	if (!drvdata->cpudata)
+		return -ENOMEM;
+
+ arm_trbe_probe_coresight_cpu(drvdata);
+ smp_call_function_many(&drvdata->supported_cpus, arm_trbe_probe_coresight_cpu, drvdata, 1);
+ return 0;
+}
+
+static int arm_trbe_remove_coresight(struct trbe_drvdata *drvdata)
+{
+ arm_trbe_remove_coresight_cpu(drvdata);
+ smp_call_function_many(&drvdata->supported_cpus, arm_trbe_remove_coresight_cpu, drvdata, 1);
+ free_percpu(drvdata->cpudata);
+ return 0;
+}
+
+static int arm_trbe_cpu_startup(unsigned int cpu, struct hlist_node *node)
+{
+ struct trbe_drvdata *drvdata = hlist_entry_safe(node, struct trbe_drvdata, hotplug_node);
+
+ if (cpumask_test_cpu(cpu, &drvdata->supported_cpus)) {
+ if (!per_cpu(csdev_sink, cpu) && (system_state == SYSTEM_RUNNING)) {
+ arm_trbe_probe_coresight_cpu(drvdata);
+ } else {
+ trbe_reset_local();
+ enable_percpu_irq(drvdata->irq, IRQ_TYPE_NONE);
+ }
+ }
+ return 0;
+}
+
+static int arm_trbe_cpu_teardown(unsigned int cpu, struct hlist_node *node)
+{
+ struct trbe_drvdata *drvdata = hlist_entry_safe(node, struct trbe_drvdata, hotplug_node);
+
+ if (cpumask_test_cpu(cpu, &drvdata->supported_cpus)) {
+ disable_percpu_irq(drvdata->irq);
+ trbe_reset_local();
+ }
+ return 0;
+}
+
+static int arm_trbe_probe_cpuhp(struct trbe_drvdata *drvdata)
+{
+ enum cpuhp_state trbe_online;
+
+ trbe_online = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, DRVNAME,
+ arm_trbe_cpu_startup, arm_trbe_cpu_teardown);
+ if (trbe_online < 0)
+ return -EINVAL;
+
+ if (cpuhp_state_add_instance(trbe_online, &drvdata->hotplug_node))
+ return -EINVAL;
+
+ drvdata->trbe_online = trbe_online;
+ return 0;
+}
+
+static void arm_trbe_remove_cpuhp(struct trbe_drvdata *drvdata)
+{
+ cpuhp_remove_multi_state(drvdata->trbe_online);
+}
+
+static int arm_trbe_probe_irq(struct platform_device *pdev,
+ struct trbe_drvdata *drvdata)
+{
+ drvdata->irq = platform_get_irq(pdev, 0);
+	if (drvdata->irq < 0) {
+ pr_err("IRQ not found for the platform device\n");
+ return -ENXIO;
+ }
+
+ if (!irq_is_percpu(drvdata->irq)) {
+ pr_err("IRQ is not a PPI\n");
+ return -EINVAL;
+ }
+
+ if (irq_get_percpu_devid_partition(drvdata->irq, &drvdata->supported_cpus))
+ return -EINVAL;
+
+ drvdata->handle = alloc_percpu(typeof(*drvdata->handle));
+ if (!drvdata->handle)
+ return -ENOMEM;
+
+ if (request_percpu_irq(drvdata->irq, arm_trbe_irq_handler, DRVNAME, drvdata->handle)) {
+ free_percpu(drvdata->handle);
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static void arm_trbe_remove_irq(struct trbe_drvdata *drvdata)
+{
+ free_percpu_irq(drvdata->irq, drvdata->handle);
+ free_percpu(drvdata->handle);
+}
+
+static int arm_trbe_device_probe(struct platform_device *pdev)
+{
+ struct coresight_platform_data *pdata;
+ struct trbe_drvdata *drvdata;
+ struct device *dev = &pdev->dev;
+ int ret;
+
+ drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
+	if (!drvdata)
+ return -ENOMEM;
+
+ pdata = coresight_get_platform_data(dev);
+ if (IS_ERR(pdata)) {
+ kfree(drvdata);
+ return -ENOMEM;
+ }
+
+ dev_set_drvdata(dev, drvdata);
+ dev->platform_data = pdata;
+ drvdata->pdev = pdev;
+ ret = arm_trbe_probe_irq(pdev, drvdata);
+ if (ret)
+ goto irq_failed;
+
+ ret = arm_trbe_probe_coresight(drvdata);
+ if (ret)
+ goto probe_failed;
+
+ ret = arm_trbe_probe_cpuhp(drvdata);
+ if (ret)
+ goto cpuhp_failed;
+
+ return 0;
+cpuhp_failed:
+ arm_trbe_remove_coresight(drvdata);
+probe_failed:
+ arm_trbe_remove_irq(drvdata);
+irq_failed:
+ kfree(pdata);
+ kfree(drvdata);
+ return ret;
+}
+
+static int arm_trbe_device_remove(struct platform_device *pdev)
+{
+ struct coresight_platform_data *pdata = dev_get_platdata(&pdev->dev);
+ struct trbe_drvdata *drvdata = platform_get_drvdata(pdev);
+
+ arm_trbe_remove_coresight(drvdata);
+ arm_trbe_remove_cpuhp(drvdata);
+ arm_trbe_remove_irq(drvdata);
+ kfree(pdata);
+ kfree(drvdata);
+ return 0;
+}
+
+static const struct of_device_id arm_trbe_of_match[] = {
+ { .compatible = "arm,trace-buffer-extension", .data = (void *)1 },
+ {},
+};
+MODULE_DEVICE_TABLE(of, arm_trbe_of_match);
+
+static struct platform_driver arm_trbe_driver = {
+ .driver = {
+ .name = DRVNAME,
+ .of_match_table = of_match_ptr(arm_trbe_of_match),
+ .suppress_bind_attrs = true,
+ },
+ .probe = arm_trbe_device_probe,
+ .remove = arm_trbe_device_remove,
+};
+
+static int __init arm_trbe_init(void)
+{
+ int ret;
+
+ ret = platform_driver_register(&arm_trbe_driver);
+ if (!ret)
+ return 0;
+
+ pr_err("Error registering %s platform driver\n", DRVNAME);
+ return ret;
+}
+
+static void __exit arm_trbe_exit(void)
+{
+ platform_driver_unregister(&arm_trbe_driver);
+}
+module_init(arm_trbe_init);
+module_exit(arm_trbe_exit);
+
+MODULE_AUTHOR("Anshuman Khandual <[email protected]>");
+MODULE_DESCRIPTION("Arm Trace Buffer Extension (TRBE) driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/hwtracing/coresight/coresight-trbe.h b/drivers/hwtracing/coresight/coresight-trbe.h
new file mode 100644
index 0000000..e956439
--- /dev/null
+++ b/drivers/hwtracing/coresight/coresight-trbe.h
@@ -0,0 +1,248 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * This contains all required hardware related helper functions for
+ * Trace Buffer Extension (TRBE) driver in the coresight framework.
+ *
+ * Copyright (C) 2020 ARM Ltd.
+ *
+ * Author: Anshuman Khandual <[email protected]>
+ */
+#include <linux/coresight.h>
+#include <linux/device.h>
+#include <linux/irq.h>
+#include <linux/kernel.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/smp.h>
+
+#include "coresight-etm-perf.h"
+
+DECLARE_PER_CPU(struct coresight_device *, csdev_sink);
+
+static inline bool is_trbe_available(void)
+{
+ u64 aa64dfr0 = read_sysreg_s(SYS_ID_AA64DFR0_EL1);
+ int trbe = cpuid_feature_extract_unsigned_field(aa64dfr0, ID_AA64DFR0_TRBE_SHIFT);
+
+ return trbe >= 0b0001;
+}
+
+static inline bool is_trbe_enabled(void)
+{
+ u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
+
+ return trblimitr & TRBLIMITR_ENABLE;
+}
+
+#define TRBE_EC_OTHERS 0
+#define TRBE_EC_STAGE1_ABORT 36
+#define TRBE_EC_STAGE2_ABORT 37
+
+static inline int get_trbe_ec(void)
+{
+ u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
+
+ return (trbsr >> TRBSR_EC_SHIFT) & TRBSR_EC_MASK;
+}
+
+#define TRBE_BSC_NOT_STOPPED 0
+#define TRBE_BSC_FILLED 1
+#define TRBE_BSC_TRIGGERED 2
+
+static inline int get_trbe_bsc(void)
+{
+ u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
+
+ return (trbsr >> TRBSR_BSC_SHIFT) & TRBSR_BSC_MASK;
+}
+
+static inline void clr_trbe_irq(void)
+{
+ u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
+
+ trbsr &= ~TRBSR_IRQ;
+ write_sysreg_s(trbsr, SYS_TRBSR_EL1);
+}
+
+static inline bool is_trbe_irq(void)
+{
+ u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
+
+ return trbsr & TRBSR_IRQ;
+}
+
+static inline bool is_trbe_trg(void)
+{
+ u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
+
+ return trbsr & TRBSR_TRG;
+}
+
+static inline bool is_trbe_wrap(void)
+{
+ u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
+
+ return trbsr & TRBSR_WRAP;
+}
+
+static inline bool is_trbe_abort(void)
+{
+ u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
+
+ return trbsr & TRBSR_ABORT;
+}
+
+static inline bool is_trbe_running(void)
+{
+ u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
+
+ return !(trbsr & TRBSR_STOP);
+}
+
+static inline void set_trbe_running(void)
+{
+ u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
+
+ trbsr &= ~TRBSR_STOP;
+ write_sysreg_s(trbsr, SYS_TRBSR_EL1);
+}
+
+static inline void set_trbe_virtual_mode(void)
+{
+ u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
+
+ trblimitr &= ~TRBLIMITR_NVM;
+ write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
+}
+
+#define TRBE_TRIGGER_STOP 0
+#define TRBE_TRIGGER_IRQ 1
+#define TRBE_TRIGGER_IGNORE 3
+
+static inline int get_trbe_trig_mode(void)
+{
+ u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
+
+ return (trblimitr >> TRBLIMITR_TRIG_MODE_SHIFT) & TRBLIMITR_TRIG_MODE_MASK;
+}
+
+static inline void set_trbe_trig_mode(int mode)
+{
+ u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
+
+ trblimitr &= ~(TRBLIMITR_TRIG_MODE_MASK << TRBLIMITR_TRIG_MODE_SHIFT);
+ trblimitr |= ((mode & TRBLIMITR_TRIG_MODE_MASK) << TRBLIMITR_TRIG_MODE_SHIFT);
+ write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
+}
+
+#define TRBE_FILL_STOP 0
+#define TRBE_FILL_WRAP 1
+#define TRBE_FILL_CIRCULAR 3
+
+static inline int get_trbe_fill_mode(void)
+{
+ u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
+
+ return (trblimitr >> TRBLIMITR_FILL_MODE_SHIFT) & TRBLIMITR_FILL_MODE_MASK;
+}
+
+static inline void set_trbe_fill_mode(int mode)
+{
+ u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
+
+ trblimitr &= ~(TRBLIMITR_FILL_MODE_MASK << TRBLIMITR_FILL_MODE_SHIFT);
+ trblimitr |= ((mode & TRBLIMITR_FILL_MODE_MASK) << TRBLIMITR_FILL_MODE_SHIFT);
+ write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
+}
+
+static inline void set_trbe_disabled(void)
+{
+ u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
+
+ trblimitr &= ~TRBLIMITR_ENABLE;
+ write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
+}
+
+static inline void set_trbe_enabled(void)
+{
+ u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
+
+ trblimitr |= TRBLIMITR_ENABLE;
+ write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
+}
+
+static inline bool get_trbe_flag_update(void)
+{
+ u64 trbidr = read_sysreg_s(SYS_TRBIDR_EL1);
+
+ return trbidr & TRBIDR_FLAG;
+}
+
+static inline bool is_trbe_programmable(void)
+{
+ u64 trbidr = read_sysreg_s(SYS_TRBIDR_EL1);
+
+ return !(trbidr & TRBIDR_PROG);
+}
+
+static inline int get_trbe_address_align(void)
+{
+ u64 trbidr = read_sysreg_s(SYS_TRBIDR_EL1);
+
+ return (trbidr >> TRBIDR_ALIGN_SHIFT) & TRBIDR_ALIGN_MASK;
+}
+
+static inline unsigned long get_trbe_write_pointer(void)
+{
+ u64 trbptr = read_sysreg_s(SYS_TRBPTR_EL1);
+ unsigned long addr = (trbptr >> TRBPTR_PTR_SHIFT) & TRBPTR_PTR_MASK;
+
+ return addr;
+}
+
+static inline void set_trbe_write_pointer(unsigned long addr)
+{
+ WARN_ON(is_trbe_enabled());
+ addr = (addr >> TRBPTR_PTR_SHIFT) & TRBPTR_PTR_MASK;
+ write_sysreg_s(addr, SYS_TRBPTR_EL1);
+}
+
+static inline unsigned long get_trbe_limit_pointer(void)
+{
+ u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
+ unsigned long limit = (trblimitr >> TRBLIMITR_LIMIT_SHIFT) & TRBLIMITR_LIMIT_MASK;
+ unsigned long addr = limit << TRBLIMITR_LIMIT_SHIFT;
+
+ WARN_ON(!IS_ALIGNED(addr, PAGE_SIZE));
+ return addr;
+}
+
+static inline void set_trbe_limit_pointer(unsigned long addr)
+{
+ u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
+
+ WARN_ON(is_trbe_enabled());
+ WARN_ON(!IS_ALIGNED(addr, (1UL << TRBLIMITR_LIMIT_SHIFT)));
+ WARN_ON(!IS_ALIGNED(addr, PAGE_SIZE));
+ trblimitr &= ~(TRBLIMITR_LIMIT_MASK << TRBLIMITR_LIMIT_SHIFT);
+ trblimitr |= (addr & PAGE_MASK);
+ write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
+}
+
+static inline unsigned long get_trbe_base_pointer(void)
+{
+ u64 trbbaser = read_sysreg_s(SYS_TRBBASER_EL1);
+ unsigned long addr = (trbbaser >> TRBBASER_BASE_SHIFT) & TRBBASER_BASE_MASK;
+
+ addr = addr << TRBBASER_BASE_SHIFT;
+ WARN_ON(!IS_ALIGNED(addr, PAGE_SIZE));
+ return addr;
+}
+
+static inline void set_trbe_base_pointer(unsigned long addr)
+{
+ WARN_ON(is_trbe_enabled());
+ WARN_ON(!IS_ALIGNED(addr, (1UL << TRBLIMITR_LIMIT_SHIFT)));
+ WARN_ON(!IS_ALIGNED(addr, PAGE_SIZE));
+ write_sysreg_s(addr, SYS_TRBBASER_EL1);
+}
--
2.7.4
From: Suzuki K Poulose <[email protected]>
Add ETE as one of the device types supported by the ETM4x driver.
The devices are named ete<N>, following the existing convention.
ETE mandates that the trace resource status register (TRCRSR) is
programmed before tracing is turned on. For the moment, simply
write TraceActive to it.
Cc: Mathieu Poirier <[email protected]>
Cc: Mike Leach <[email protected]>
Signed-off-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Anshuman Khandual <[email protected]>
---
drivers/hwtracing/coresight/coresight-etm4x-core.c | 56 +++++++++++++++++-----
drivers/hwtracing/coresight/coresight-etm4x.h | 7 +++
2 files changed, 50 insertions(+), 13 deletions(-)
diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
index dff502f..1af6ae0 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
+++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
@@ -333,6 +333,13 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
etm4x_relaxed_write32(csa, trcpdcr | TRCPDCR_PU, TRCPDCR);
}
+ /*
+ * ETE mandates that the TRCRSR is written to before
+ * enabling it.
+ */
+ if (drvdata->arch >= ETM_ARCH_ETE)
+ etm4x_relaxed_write32(csa, TRCRSR_TA, TRCRSR);
+
/* Enable the trace unit */
etm4x_relaxed_write32(csa, 1, TRCPRGCTLR);
@@ -765,13 +772,24 @@ static bool etm4_init_sysreg_access(struct etmv4_drvdata *drvdata,
* ETMs implementing sysreg access must implement TRCDEVARCH.
*/
devarch = read_etm4x_sysreg_const_offset(TRCDEVARCH);
- if ((devarch & ETM_DEVARCH_ID_MASK) != ETM_DEVARCH_ETMv4x_ARCH)
+ switch (devarch & ETM_DEVARCH_ID_MASK) {
+ case ETM_DEVARCH_ETMv4x_ARCH:
+ *csa = (struct csdev_access) {
+ .io_mem = false,
+ .read = etm4x_sysreg_read,
+ .write = etm4x_sysreg_write,
+ };
+ break;
+ case ETM_DEVARCH_ETE_ARCH:
+ *csa = (struct csdev_access) {
+ .io_mem = false,
+ .read = ete_sysreg_read,
+ .write = ete_sysreg_write,
+ };
+ break;
+ default:
return false;
- *csa = (struct csdev_access) {
- .io_mem = false,
- .read = etm4x_sysreg_read,
- .write = etm4x_sysreg_write,
- };
+ }
drvdata->arch = etm_devarch_to_arch(devarch);
return true;
@@ -1707,6 +1725,8 @@ static int etm4_probe(struct device *dev, void __iomem *base)
struct etmv4_drvdata *drvdata;
struct coresight_desc desc = { 0 };
struct etm4_init_arg init_arg = { 0 };
+ u8 major, minor;
+ char *type_name;
drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
if (!drvdata)
@@ -1733,10 +1753,6 @@ static int etm4_probe(struct device *dev, void __iomem *base)
if (drvdata->cpu < 0)
return drvdata->cpu;
- desc.name = devm_kasprintf(dev, GFP_KERNEL, "etm%d", drvdata->cpu);
- if (!desc.name)
- return -ENOMEM;
-
init_arg.drvdata = drvdata;
init_arg.csa = &desc.access;
@@ -1751,6 +1767,20 @@ static int etm4_probe(struct device *dev, void __iomem *base)
if (!desc.access.io_mem ||
fwnode_property_present(dev_fwnode(dev), "qcom,skip-power-up"))
drvdata->skip_power_up = true;
+ major = ETM_ARCH_MAJOR_VERSION(drvdata->arch);
+ minor = ETM_ARCH_MINOR_VERSION(drvdata->arch);
+ if (drvdata->arch >= ETM_ARCH_ETE) {
+ type_name = "ete";
+ /* ETE v1 has major version == 5. Adjust this for logging.*/
+ major -= 4;
+ } else {
+ type_name = "etm";
+ }
+
+ desc.name = devm_kasprintf(dev, GFP_KERNEL,
+ "%s%d", type_name, drvdata->cpu);
+ if (!desc.name)
+ return -ENOMEM;
etm4_init_trace_id(drvdata);
etm4_set_default(&drvdata->config);
@@ -1779,9 +1809,8 @@ static int etm4_probe(struct device *dev, void __iomem *base)
etmdrvdata[drvdata->cpu] = drvdata;
- dev_info(&drvdata->csdev->dev, "CPU%d: ETM v%d.%d initialized\n",
- drvdata->cpu, ETM_ARCH_MAJOR_VERSION(drvdata->arch),
- ETM_ARCH_MINOR_VERSION(drvdata->arch));
+ dev_info(&drvdata->csdev->dev, "CPU%d: %s v%d.%d initialized\n",
+ drvdata->cpu, type_name, major, minor);
if (boot_enable) {
coresight_enable(drvdata->csdev);
@@ -1918,6 +1947,7 @@ static struct amba_driver etm4x_amba_driver = {
static const struct of_device_id etm4_sysreg_match[] = {
{ .compatible = "arm,coresight-etm4x-sysreg" },
+ { .compatible = "arm,embedded-trace-extension" },
{}
};
diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h
index 6f64f08..96073f5 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x.h
+++ b/drivers/hwtracing/coresight/coresight-etm4x.h
@@ -127,6 +127,8 @@
#define TRCCIDR2 0xFF8
#define TRCCIDR3 0xFFC
+#define TRCRSR_TA BIT(12)
+
/*
* System instructions to access ETM registers.
* See ETMv4.4 spec ARM IHI0064F section 4.3.6 System instructions
@@ -571,11 +573,14 @@
((ETM_DEVARCH_MAKE_ARCHID_ARCH_VER(major)) | ETM_DEVARCH_ARCHID_ARCH_PART(0xA13))
#define ETM_DEVARCH_ARCHID_ETMv4x ETM_DEVARCH_MAKE_ARCHID(0x4)
+#define ETM_DEVARCH_ARCHID_ETE ETM_DEVARCH_MAKE_ARCHID(0x5)
#define ETM_DEVARCH_ID_MASK \
(ETM_DEVARCH_ARCHITECT_MASK | ETM_DEVARCH_ARCHID_MASK | ETM_DEVARCH_PRESENT)
#define ETM_DEVARCH_ETMv4x_ARCH \
(ETM_DEVARCH_ARCHITECT_ARM | ETM_DEVARCH_ARCHID_ETMv4x | ETM_DEVARCH_PRESENT)
+#define ETM_DEVARCH_ETE_ARCH \
+ (ETM_DEVARCH_ARCHITECT_ARM | ETM_DEVARCH_ARCHID_ETE | ETM_DEVARCH_PRESENT)
#define TRCSTATR_IDLE_BIT 0
#define TRCSTATR_PMSTABLE_BIT 1
@@ -665,6 +670,8 @@
#define ETM_ARCH_MINOR_VERSION(arch) ((arch) & 0xfU)
#define ETM_ARCH_V4 ETM_ARCH_VERSION(4, 0)
+#define ETM_ARCH_ETE ETM_ARCH_VERSION(5, 0)
+
/* Interpretation of resource numbers change at ETM v4.3 architecture */
#define ETM_ARCH_V4_3 ETM_ARCH_VERSION(4, 3)
--
2.7.4
This adds the TRBE related register definitions and corresponding feature macros.
Cc: Mathieu Poirier <[email protected]>
Cc: Mike Leach <[email protected]>
Cc: Suzuki K Poulose <[email protected]>
Signed-off-by: Anshuman Khandual <[email protected]>
---
Changes in V1:
- Re-arranged TRBE register definitions per existing sorted registers
- Replaced some instances with BIT() and GENMASK_ULL() when applicable
arch/arm64/include/asm/sysreg.h | 49 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 49 insertions(+)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index eeaab55..e6962b1 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -325,6 +325,55 @@
/*** End of Statistical Profiling Extension ***/
+/*
+ * TRBE Registers
+ */
+#define SYS_TRBLIMITR_EL1 sys_reg(3, 0, 9, 11, 0)
+#define SYS_TRBPTR_EL1 sys_reg(3, 0, 9, 11, 1)
+#define SYS_TRBBASER_EL1 sys_reg(3, 0, 9, 11, 2)
+#define SYS_TRBSR_EL1 sys_reg(3, 0, 9, 11, 3)
+#define SYS_TRBMAR_EL1 sys_reg(3, 0, 9, 11, 4)
+#define SYS_TRBTRG_EL1 sys_reg(3, 0, 9, 11, 6)
+#define SYS_TRBIDR_EL1 sys_reg(3, 0, 9, 11, 7)
+
+#define TRBLIMITR_LIMIT_MASK GENMASK_ULL(51, 0)
+#define TRBLIMITR_LIMIT_SHIFT 12
+#define TRBLIMITR_NVM BIT(5)
+#define TRBLIMITR_TRIG_MODE_MASK GENMASK(1, 0)
+#define TRBLIMITR_TRIG_MODE_SHIFT 2
+#define TRBLIMITR_FILL_MODE_MASK GENMASK(1, 0)
+#define TRBLIMITR_FILL_MODE_SHIFT 1
+#define TRBLIMITR_ENABLE BIT(0)
+#define TRBPTR_PTR_MASK GENMASK_ULL(63, 0)
+#define TRBPTR_PTR_SHIFT 0
+#define TRBBASER_BASE_MASK GENMASK_ULL(51, 0)
+#define TRBBASER_BASE_SHIFT 12
+#define TRBSR_EC_MASK GENMASK(5, 0)
+#define TRBSR_EC_SHIFT 26
+#define TRBSR_IRQ BIT(22)
+#define TRBSR_TRG BIT(21)
+#define TRBSR_WRAP BIT(20)
+#define TRBSR_ABORT BIT(18)
+#define TRBSR_STOP BIT(17)
+#define TRBSR_MSS_MASK GENMASK(15, 0)
+#define TRBSR_MSS_SHIFT 0
+#define TRBSR_BSC_MASK GENMASK(5, 0)
+#define TRBSR_BSC_SHIFT 0
+#define TRBSR_FSC_MASK GENMASK(5, 0)
+#define TRBSR_FSC_SHIFT 0
+#define TRBMAR_SHARE_MASK GENMASK(1, 0)
+#define TRBMAR_SHARE_SHIFT 8
+#define TRBMAR_OUTER_MASK GENMASK(3, 0)
+#define TRBMAR_OUTER_SHIFT 4
+#define TRBMAR_INNER_MASK GENMASK(3, 0)
+#define TRBMAR_INNER_SHIFT 0
+#define TRBTRG_TRG_MASK GENMASK(31, 0)
+#define TRBTRG_TRG_SHIFT 0
+#define TRBIDR_FLAG BIT(5)
+#define TRBIDR_PROG BIT(4)
+#define TRBIDR_ALIGN_MASK GENMASK(3, 0)
+#define TRBIDR_ALIGN_SHIFT 0
+
#define SYS_PMINTENSET_EL1 sys_reg(3, 0, 9, 14, 1)
#define SYS_PMINTENCLR_EL1 sys_reg(3, 0, 9, 14, 2)
--
2.7.4
While starting off the ETM event, just abort and truncate the perf record
if the perf handle has no space left. This avoids configuring both the
source and sink devices when the data cannot be consumed by perf.
Cc: Mathieu Poirier <[email protected]>
Cc: Mike Leach <[email protected]>
Cc: Suzuki K Poulose <[email protected]>
Signed-off-by: Anshuman Khandual <[email protected]>
---
drivers/hwtracing/coresight/coresight-etm-perf.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
index eb9e7e9..e776a07 100644
--- a/drivers/hwtracing/coresight/coresight-etm-perf.c
+++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
@@ -347,6 +347,9 @@ static void etm_event_start(struct perf_event *event, int flags)
if (!event_data)
goto fail;
+ if (!handle->size)
+ goto fail_end_stop;
+
/*
* Check if this ETM is allowed to trace, as decided
* at etm_setup_aux(). This could be due to an unreachable
--
2.7.4
Add support for dedicated sinks that are bound to individual CPUs (e.g,
TRBE). To allow quicker access to the sink for a given CPU-bound source,
keep a percpu array of the sink devices. Also, add support for building
a path to the CPU-local sink from the ETM.
This adds a new percpu sink type CORESIGHT_DEV_SUBTYPE_SINK_PERCPU_SYSMEM.
This new sink type is exclusive and can only work with a percpu source
type device, CORESIGHT_DEV_SUBTYPE_SOURCE_PERCPU_PROC.
This defines a percpu structure that accommodates a single coresight_device,
which can be used to store an initialized instance from a sink driver. As
these sinks are exclusively linked to and dependent on the corresponding
percpu source devices, they should also be the default sink device during
a perf session.
Outward device connections are scanned while establishing paths between a
source and a sink device. But such connections are not present for certain
percpu source and sink devices which are exclusively linked and dependent.
Build the path directly and skip connection scanning for such devices, as
sketched below.
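For illustration, a minimal sketch of how a per-CPU sink driver is expected to
hook into this is given below. It mirrors what the TRBE driver later in this
series does; the function name my_percpu_sink_register() is hypothetical.

/*
 * Hedged sketch only: publish a per-CPU sink so that
 * _coresight_build_path() and coresight_find_default_sink() can pick
 * it up directly, without scanning output connections.
 */
static int my_percpu_sink_register(struct coresight_desc *desc, int cpu)
{
        struct coresight_device *csdev;

        desc->type = CORESIGHT_DEV_TYPE_SINK;
        desc->subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_PERCPU_SYSMEM;

        csdev = coresight_register(desc);
        if (IS_ERR(csdev))
                return PTR_ERR(csdev);

        /* Looked up via per_cpu(csdev_sink, cpu) by the CPU-bound source. */
        per_cpu(csdev_sink, cpu) = csdev;
        return 0;
}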
Cc: Mathieu Poirier <[email protected]>
Cc: Mike Leach <[email protected]>
Cc: Suzuki K Poulose <[email protected]>
Signed-off-by: Anshuman Khandual <[email protected]>
---
Changes in V1:
- Replaced post init ETE-TRBE link configuration with dynamic path creation
drivers/hwtracing/coresight/coresight-core.c | 14 ++++++++++++++
include/linux/coresight.h | 12 ++++++++++++
2 files changed, 26 insertions(+)
diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
index 0062c89..b300606 100644
--- a/drivers/hwtracing/coresight/coresight-core.c
+++ b/drivers/hwtracing/coresight/coresight-core.c
@@ -23,6 +23,7 @@
#include "coresight-priv.h"
static DEFINE_MUTEX(coresight_mutex);
+DEFINE_PER_CPU(struct coresight_device *, csdev_sink);
/**
* struct coresight_node - elements of a path, from source to sink
@@ -784,6 +785,13 @@ static int _coresight_build_path(struct coresight_device *csdev,
if (csdev == sink)
goto out;
+ if (coresight_is_percpu_source(csdev) && coresight_is_percpu_sink(sink) &&
+ sink == per_cpu(csdev_sink, source_ops(csdev)->cpu_id(csdev))) {
+ _coresight_build_path(sink, sink, path);
+ found = true;
+ goto out;
+ }
+
/* Not a sink - recursively explore each port found on this element */
for (i = 0; i < csdev->pdata->nr_outport; i++) {
struct coresight_device *child_dev;
@@ -998,6 +1006,12 @@ coresight_find_default_sink(struct coresight_device *csdev)
{
int depth = 0;
+ if (coresight_is_percpu_source(csdev)) {
+ csdev->def_sink = per_cpu(csdev_sink, source_ops(csdev)->cpu_id(csdev));
+ if (csdev->def_sink)
+ return csdev->def_sink;
+ }
+
/* look for a default sink if we have not found for this device */
if (!csdev->def_sink)
csdev->def_sink = coresight_find_sink(csdev, &depth);
diff --git a/include/linux/coresight.h b/include/linux/coresight.h
index 951ba88..2aee12e 100644
--- a/include/linux/coresight.h
+++ b/include/linux/coresight.h
@@ -50,6 +50,7 @@ enum coresight_dev_subtype_sink {
CORESIGHT_DEV_SUBTYPE_SINK_PORT,
CORESIGHT_DEV_SUBTYPE_SINK_BUFFER,
CORESIGHT_DEV_SUBTYPE_SINK_SYSMEM,
+ CORESIGHT_DEV_SUBTYPE_SINK_PERCPU_SYSMEM,
};
enum coresight_dev_subtype_link {
@@ -432,6 +433,17 @@ static inline void csdev_access_write64(struct csdev_access *csa, u64 val, u32 o
csa->write(val, offset, false, true);
}
+static inline bool coresight_is_percpu_source(struct coresight_device *csdev)
+{
+ return csdev && (csdev->type == CORESIGHT_DEV_TYPE_SOURCE) &&
+ csdev->subtype.source_subtype == CORESIGHT_DEV_SUBTYPE_SOURCE_PROC;
+}
+
+static inline bool coresight_is_percpu_sink(struct coresight_device *csdev)
+{
+ return csdev && (csdev->type == CORESIGHT_DEV_TYPE_SINK) &&
+ csdev->subtype.sink_subtype == CORESIGHT_DEV_SUBTYPE_SINK_PERCPU_SYSMEM;
+}
#else /* !CONFIG_64BIT */
static inline u64 csdev_access_relaxed_read64(struct csdev_access *csa,
--
2.7.4
From: Suzuki K Poulose <[email protected]>
Document the device tree bindings for Embedded Trace Extensions.
ETE can be connected to legacy coresight components and thus
could optionally contain a connection graph as described by
the CoreSight bindings.
Cc: [email protected]
Cc: Mathieu Poirier <[email protected]>
Cc: Mike Leach <[email protected]>
Cc: Rob Herring <[email protected]>
Signed-off-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Anshuman Khandual <[email protected]>
---
Documentation/devicetree/bindings/arm/ete.txt | 41 +++++++++++++++++++++++++++
1 file changed, 41 insertions(+)
create mode 100644 Documentation/devicetree/bindings/arm/ete.txt
diff --git a/Documentation/devicetree/bindings/arm/ete.txt b/Documentation/devicetree/bindings/arm/ete.txt
new file mode 100644
index 0000000..b52b507
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/ete.txt
@@ -0,0 +1,41 @@
+Arm Embedded Trace Extensions
+
+Arm Embedded Trace Extensions (ETE) is a per CPU trace component that
+allows tracing the CPU execution. It overlaps with the CoreSight ETMv4
+architecture and has extended support for future architecture changes.
+The trace generated by the ETE could be stored via legacy CoreSight
+components (e.g, TMC-ETR) or other means (e.g, using a per CPU buffer
+Arm Trace Buffer Extension (TRBE)). Since the ETE can be connected to
+legacy CoreSight components, a node must be listed per instance, along
+with any optional connection graph as per the coresight bindings.
+See bindings/arm/coresight.txt.
+
+** ETE Required properties:
+
+- compatible : should be one of:
+ "arm,embedded-trace-extensions"
+
+- cpu : the CPU phandle this ETE belongs to.
+
+** Optional properties:
+- CoreSight connection graph, see bindings/arm/coresight.txt.
+
+** Example:
+
+ete_0 {
+ compatible = "arm,embedded-trace-extension";
+ cpu = <&cpu_0>;
+};
+
+ete_1 {
+ compatible = "arm,embedded-trace-extension";
+ cpu = <&cpu_1>;
+
+ out-ports { /* legacy CoreSight connection */
+ port {
+ ete1_out_port: endpoint@0 {
+ remote-endpoint = <&funnel_in_port0>;
+ };
+ };
+ };
+};
--
2.7.4
On Wed, Dec 23, 2020 at 03:33:38PM +0530, Anshuman Khandual wrote:
> From: Suzuki K Poulose <[email protected]>
>
> Document the device tree bindings for Embedded Trace Extensions.
> ETE can be connected to legacy coresight components and thus
> could optionally contain a connection graph as described by
> the CoreSight bindings.
>
> Cc: [email protected]
> Cc: Mathieu Poirier <[email protected]>
> Cc: Mike Leach <[email protected]>
> Cc: Rob Herring <[email protected]>
> Signed-off-by: Suzuki K Poulose <[email protected]>
> Signed-off-by: Anshuman Khandual <[email protected]>
> ---
> Documentation/devicetree/bindings/arm/ete.txt | 41 +++++++++++++++++++++++++++
> 1 file changed, 41 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/arm/ete.txt
Bindings are in schema format now, please convert this.
>
> diff --git a/Documentation/devicetree/bindings/arm/ete.txt b/Documentation/devicetree/bindings/arm/ete.txt
> new file mode 100644
> index 0000000..b52b507
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/arm/ete.txt
> @@ -0,0 +1,41 @@
> +Arm Embedded Trace Extensions
> +
> +Arm Embedded Trace Extensions (ETE) is a per CPU trace component that
> +allows tracing the CPU execution. It overlaps with the CoreSight ETMv4
> +architecture and has extended support for future architecture changes.
> +The trace generated by the ETE could be stored via legacy CoreSight
> +components (e.g, TMC-ETR) or other means (e.g, using a per CPU buffer
> +Arm Trace Buffer Extension (TRBE)). Since the ETE can be connected to
> +legacy CoreSight components, a node must be listed per instance, along
> +with any optional connection graph as per the coresight bindings.
> +See bindings/arm/coresight.txt.
> +
> +** ETE Required properties:
> +
> +- compatible : should be one of:
> + "arm,embedded-trace-extensions"
> +
> +- cpu : the CPU phandle this ETE belongs to.
If this is 1:1 with CPUs, then perhaps it should be a child node of the
CPU nodes.
> +
> +** Optional properties:
> +- CoreSight connection graph, see bindings/arm/coresight.txt.
> +
> +** Example:
> +
> +ete_0 {
> + compatible = "arm,embedded-trace-extension";
> + cpu = <&cpu_0>;
> +};
> +
> +ete_1 {
> + compatible = "arm,embedded-trace-extension";
> + cpu = <&cpu_1>;
> +
> + out-ports { /* legacy CoreSight connection */
> + port {
> + ete1_out_port: endpoint@0 {
> + remote-endpoint = <&funnel_in_port0>;
> + };
> + };
> + };
> +};
> --
> 2.7.4
>
On Wed, Dec 23, 2020 at 03:33:43PM +0530, Anshuman Khandual wrote:
> This patch documents the device tree binding in use for Arm TRBE.
>
> Cc: [email protected]
> Cc: Mathieu Poirier <[email protected]>
> Cc: Mike Leach <[email protected]>
> Cc: Suzuki K Poulose <[email protected]>
> Signed-off-by: Anshuman Khandual <[email protected]>
> ---
> Changes in V1:
>
> - TRBE DT entry has been renamed as 'arm,trace-buffer-extension'
>
> Documentation/devicetree/bindings/arm/trbe.txt | 20 ++++++++++++++++++++
> 1 file changed, 20 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/arm/trbe.txt
>
> diff --git a/Documentation/devicetree/bindings/arm/trbe.txt b/Documentation/devicetree/bindings/arm/trbe.txt
> new file mode 100644
> index 0000000..001945d
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/arm/trbe.txt
> @@ -0,0 +1,20 @@
> +* Trace Buffer Extension (TRBE)
> +
> +Trace Buffer Extension (TRBE) is used for collecting trace data generated
> +from a corresponding trace unit (ETE) using an in memory trace buffer.
> +
> +** TRBE Required properties:
> +
> +- compatible : should be one of:
> + "arm,trace-buffer-extension"
> +
> +- interrupts : Exactly 1 PPI must be listed. For heterogeneous systems where
> + TRBE is only supported on a subset of the CPUs, please consult
> + the arm,gic-v3 binding for details on describing a PPI partition.
> +
> +** Example:
> +
> +trbe {
> + compatible = "arm,trace-buffer-extension";
> + interrupts = <GIC_PPI 15 IRQ_TYPE_LEVEL_HIGH>;
If only an interrupt, then could just be part of ETE? If not, how is
this hardware block accessed? An interrupt alone is not enough unless
there's some architected way to access.
Rob
On 1/3/21 10:35 PM, Rob Herring wrote:
> On Wed, Dec 23, 2020 at 03:33:43PM +0530, Anshuman Khandual wrote:
>> This patch documents the device tree binding in use for Arm TRBE.
>>
>> Cc: [email protected]
>> Cc: Mathieu Poirier <[email protected]>
>> Cc: Mike Leach <[email protected]>
>> Cc: Suzuki K Poulose <[email protected]>
>> Signed-off-by: Anshuman Khandual <[email protected]>
>> ---
>> Changes in V1:
>>
>> - TRBE DT entry has been renamed as 'arm,trace-buffer-extension'
>>
>> Documentation/devicetree/bindings/arm/trbe.txt | 20 ++++++++++++++++++++
>> 1 file changed, 20 insertions(+)
>> create mode 100644 Documentation/devicetree/bindings/arm/trbe.txt
>>
>> diff --git a/Documentation/devicetree/bindings/arm/trbe.txt b/Documentation/devicetree/bindings/arm/trbe.txt
>> new file mode 100644
>> index 0000000..001945d
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/arm/trbe.txt
>> @@ -0,0 +1,20 @@
>> +* Trace Buffer Extension (TRBE)
>> +
>> +Trace Buffer Extension (TRBE) is used for collecting trace data generated
>> +from a corresponding trace unit (ETE) using an in memory trace buffer.
>> +
>> +** TRBE Required properties:
>> +
>> +- compatible : should be one of:
>> + "arm,trace-buffer-extension"
>> +
>> +- interrupts : Exactly 1 PPI must be listed. For heterogeneous systems where
>> + TRBE is only supported on a subset of the CPUs, please consult
>> + the arm,gic-v3 binding for details on describing a PPI partition.
>> +
>> +** Example:
>> +
>> +trbe {
>> + compatible = "arm,trace-buffer-extension";
>> + interrupts = <GIC_PPI 15 IRQ_TYPE_LEVEL_HIGH>;
>
> If only an interrupt, then could just be part of ETE? If not, how is
> this hardware block accessed? An interrupt alone is not enough unless
> there's some architected way to access.
The TRBE hardware block is accessed via the respective new system registers,
but the PPI number on which the IRQ will be triggered for the various buffer
events depends on the platform, as defined in the SBSA.
A TRBE needs an ETE to work, but the reverse is not true. An ETE might just
be present without a corresponding TRBE and can work with traditional sinks.
Hence, just wondering whether it would be prudent to add the TRBE interrupt
number as part of the ETE DT specification.
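For illustration, a minimal sketch of how the driver consumes that single
firmware resource is given below. Only platform_get_irq() appears in the
posted patch; the request_percpu_irq() call and the helper name
my_trbe_setup_irq() are assumptions added here for clarity.

/* Hedged sketch: the PPI is the only resource described in firmware,
 * everything else is programmed through the TRB*_EL1 system registers.
 */
static int my_trbe_setup_irq(struct platform_device *pdev,
                             struct trbe_drvdata *drvdata)
{
        drvdata->irq = platform_get_irq(pdev, 0);       /* PPI from the trbe node */
        if (drvdata->irq < 0)
                return drvdata->irq;

        /* Assumed: one percpu IRQ line, handled on the CPU owning the buffer. */
        return request_percpu_irq(drvdata->irq, arm_trbe_irq_handler,
                                  DRVNAME, drvdata->handle);
}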
Hi Rob,
On 1/3/21 5:02 PM, Rob Herring wrote:
> On Wed, Dec 23, 2020 at 03:33:38PM +0530, Anshuman Khandual wrote:
>> From: Suzuki K Poulose <[email protected]>
>>
>> Document the device tree bindings for Embedded Trace Extensions.
>> ETE can be connected to legacy coresight components and thus
>> could optionally contain a connection graph as described by
>> the CoreSight bindings.
>>
>> Cc: [email protected]
>> Cc: Mathieu Poirier <[email protected]>
>> Cc: Mike Leach <[email protected]>
>> Cc: Rob Herring <[email protected]>
>> Signed-off-by: Suzuki K Poulose <[email protected]>
>> Signed-off-by: Anshuman Khandual <[email protected]>
>> ---
>> Documentation/devicetree/bindings/arm/ete.txt | 41 +++++++++++++++++++++++++++
>> 1 file changed, 41 insertions(+)
>> create mode 100644 Documentation/devicetree/bindings/arm/ete.txt
>
> Bindings are in schema format now, please convert this.
>
Sure, will do that.
>>
>> diff --git a/Documentation/devicetree/bindings/arm/ete.txt b/Documentation/devicetree/bindings/arm/ete.txt
>> new file mode 100644
>> index 0000000..b52b507
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/arm/ete.txt
>> @@ -0,0 +1,41 @@
>> +Arm Embedded Trace Extensions
>> +
>> +Arm Embedded Trace Extensions (ETE) is a per CPU trace component that
>> +allows tracing the CPU execution. It overlaps with the CoreSight ETMv4
>> +architecture and has extended support for future architecture changes.
>> +The trace generated by the ETE could be stored via legacy CoreSight
>> +components (e.g, TMC-ETR) or other means (e.g, using a per CPU buffer
>> +Arm Trace Buffer Extension (TRBE)). Since the ETE can be connected to
>> +legacy CoreSight components, a node must be listed per instance, along
>> +with any optional connection graph as per the coresight bindings.
>> +See bindings/arm/coresight.txt.
>> +
>> +** ETE Required properties:
>> +
>> +- compatible : should be one of:
>> + "arm,embedded-trace-extensions"
>> +
>> +- cpu : the CPU phandle this ETE belongs to.
>
> If this is 1:1 with CPUs, then perhaps it should be a child node of the
> CPU nodes.
Yes, it is 1:1 with the CPUs. I have tried to keep this aligned with that of
"coresight-etm4x". The same driver handles both. The only reason why this
was separated from the "coresight.txt" is to describe the new configurations
possible (read, TRBE).
That said, I am happy to move this under the CPU, if Mathieu is happy with
the diversion.
Thanks for the review.
Suzuki
Hi Anshuman,
On 12/23/20 10:03 AM, Anshuman Khandual wrote:
> Trace Buffer Extension (TRBE) implements a trace buffer per CPU which is
> accessible via the system registers. The TRBE supports different addressing
> modes, including CPU virtual addressing, and buffer modes, including the
> circular buffer mode. The TRBE buffer is addressed by a base pointer
> (TRBBASER_EL1), a write pointer (TRBPTR_EL1) and a limit pointer
> (TRBLIMITR_EL1). But access to the trace buffer could be prohibited by a
> higher exception level (EL3 or EL2), as indicated by TRBIDR_EL1.P. The TRBE
> can also generate a CPU private interrupt (PPI) on address translation
> errors and when the buffer is full. The overall implementation here is
> inspired by the Arm SPE driver.
>
> Cc: Mathieu Poirier <[email protected]>
> Cc: Mike Leach <[email protected]>
> Cc: Suzuki K Poulose <[email protected]>
> Signed-off-by: Anshuman Khandual <[email protected]>
> ---
>
> Documentation/trace/coresight/coresight-trbe.rst | 39 +
> arch/arm64/include/asm/sysreg.h | 2 +
> drivers/hwtracing/coresight/Kconfig | 11 +
> drivers/hwtracing/coresight/Makefile | 1 +
> drivers/hwtracing/coresight/coresight-trbe.c | 925 +++++++++++++++++++++++
> drivers/hwtracing/coresight/coresight-trbe.h | 248 ++++++
> 6 files changed, 1226 insertions(+)
> create mode 100644 Documentation/trace/coresight/coresight-trbe.rst
> create mode 100644 drivers/hwtracing/coresight/coresight-trbe.c
> create mode 100644 drivers/hwtracing/coresight/coresight-trbe.h
>
> diff --git a/Documentation/trace/coresight/coresight-trbe.rst b/Documentation/trace/coresight/coresight-trbe.rst
> new file mode 100644
> index 0000000..8b79850
> --- /dev/null
> +++ b/Documentation/trace/coresight/coresight-trbe.rst
> @@ -0,0 +1,39 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +==============================
> +Trace Buffer Extension (TRBE).
> +==============================
> +
> + :Author: Anshuman Khandual <[email protected]>
> + :Date: November 2020
> +
> +Hardware Description
> +--------------------
> +
> +Trace Buffer Extension (TRBE) is a percpu hardware block which captures, in
> +system memory, CPU traces generated from a corresponding percpu tracing unit.
> +This gets plugged in as a coresight sink device because the corresponding
> +trace generators (ETE) are plugged in as source devices.
> +
> +The TRBE is not compliant with the CoreSight architecture specifications, but
> +is driven via the CoreSight driver framework to support the ETE (which is
> +CoreSight compliant) integration.
> +
> +Sysfs files and directories
> +---------------------------
> +
> +The TRBE devices appear on the existing coresight bus alongside the other
> +coresight devices::
> +
> + >$ ls /sys/bus/coresight/devices
> + trbe0 trbe1 trbe2 trbe3
> +
> +The ``trbe<N>`` named TRBEs are associated with a CPU.::
> +
> + >$ ls /sys/bus/coresight/devices/trbe0/
> + irq align dbm
You may want to remove irq here.
> +
> +*Key file items are:-*
> + * ``align``: TRBE write pointer alignment
> + * ``dbm``: TRBE updates memory with access and dirty flags
> +
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index e6962b1..2a9bfb7 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -97,6 +97,7 @@
> #define SET_PSTATE_UAO(x) __emit_inst(0xd500401f | PSTATE_UAO | ((!!x) << PSTATE_Imm_shift))
> #define SET_PSTATE_SSBS(x) __emit_inst(0xd500401f | PSTATE_SSBS | ((!!x) << PSTATE_Imm_shift))
> #define SET_PSTATE_TCO(x) __emit_inst(0xd500401f | PSTATE_TCO | ((!!x) << PSTATE_Imm_shift))
> +#define TSB_CSYNC __emit_inst(0xd503225f)
>
> #define __SYS_BARRIER_INSN(CRm, op2, Rt) \
> __emit_inst(0xd5000000 | sys_insn(0, 3, 3, (CRm), (op2)) | ((Rt) & 0x1f))
> @@ -869,6 +870,7 @@
> #define ID_AA64MMFR2_CNP_SHIFT 0
>
> /* id_aa64dfr0 */
> +#define ID_AA64DFR0_TRBE_SHIFT 44
> #define ID_AA64DFR0_TRACE_FILT_SHIFT 40
> #define ID_AA64DFR0_DOUBLELOCK_SHIFT 36
> #define ID_AA64DFR0_PMSVER_SHIFT 32
> diff --git a/drivers/hwtracing/coresight/Kconfig b/drivers/hwtracing/coresight/Kconfig
> index c119824..0f5e101 100644
> --- a/drivers/hwtracing/coresight/Kconfig
> +++ b/drivers/hwtracing/coresight/Kconfig
> @@ -156,6 +156,17 @@ config CORESIGHT_CTI
> To compile this driver as a module, choose M here: the
> module will be called coresight-cti.
>
> +config CORESIGHT_TRBE
> + bool "Trace Buffer Extension (TRBE) driver"
> + depends on ARM64
> + help
> + This driver provides support for percpu Trace Buffer Extension (TRBE).
> + TRBE always needs to be used along with its corresponding percpu ETE
> + component. ETE generates trace data which is then captured with TRBE.
> + Unlike traditional sink devices, TRBE is a CPU feature accessible via
> + system registers. But its explicit dependency on the trace unit (ETE)
> + requires it to be plugged in as a coresight sink device.
> +
> config CORESIGHT_CTI_INTEGRATION_REGS
> bool "Access CTI CoreSight Integration Registers"
> depends on CORESIGHT_CTI
> diff --git a/drivers/hwtracing/coresight/Makefile b/drivers/hwtracing/coresight/Makefile
> index f20e357..d608165 100644
> --- a/drivers/hwtracing/coresight/Makefile
> +++ b/drivers/hwtracing/coresight/Makefile
> @@ -21,5 +21,6 @@ obj-$(CONFIG_CORESIGHT_STM) += coresight-stm.o
> obj-$(CONFIG_CORESIGHT_CPU_DEBUG) += coresight-cpu-debug.o
> obj-$(CONFIG_CORESIGHT_CATU) += coresight-catu.o
> obj-$(CONFIG_CORESIGHT_CTI) += coresight-cti.o
> +obj-$(CONFIG_CORESIGHT_TRBE) += coresight-trbe.o
> coresight-cti-y := coresight-cti-core.o coresight-cti-platform.o \
> coresight-cti-sysfs.o
> diff --git a/drivers/hwtracing/coresight/coresight-trbe.c b/drivers/hwtracing/coresight/coresight-trbe.c
> new file mode 100644
> index 0000000..ba280e6
> --- /dev/null
> +++ b/drivers/hwtracing/coresight/coresight-trbe.c
> @@ -0,0 +1,925 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * This driver enables the Trace Buffer Extension (TRBE) as a per-cpu
> + * coresight sink device, which can then pair with an appropriate per-cpu
> + * coresight source device (ETE), thus generating the required trace data.
> + * Trace can be enabled via the perf framework.
> + *
> + * Copyright (C) 2020 ARM Ltd.
> + *
> + * Author: Anshuman Khandual <[email protected]>
> + */
> +#define DRVNAME "arm_trbe"
> +
> +#define pr_fmt(fmt) DRVNAME ": " fmt
> +
> +#include "coresight-trbe.h"
> +
> +#define PERF_IDX2OFF(idx, buf) ((idx) % ((buf)->nr_pages << PAGE_SHIFT))
> +
> +/*
> + * A padding packet that will help the user space tools
> + * in skipping relevant sections in the captured trace
> + * data which could not be decoded.
You may want to add :
TRBE doesn't support formatting the trace data, unlike the legacy CoreSight sinks
and thus we use ETE trace packets to pad the sections of the buffer.
> + */
> +#define ETE_IGNORE_PACKET 0x70
> +
> +enum trbe_fault_action {
> + TRBE_FAULT_ACT_WRAP,
> + TRBE_FAULT_ACT_SPURIOUS,
> + TRBE_FAULT_ACT_FATAL,
> +};
> +
> +struct trbe_buf {
> + unsigned long trbe_base;
> + unsigned long trbe_limit;
> + unsigned long trbe_write;
> + int nr_pages;
> + void **pages;
> + bool snapshot;
> + struct trbe_cpudata *cpudata;
> +};
> +
> +struct trbe_cpudata {
> + bool trbe_dbm;
> + u64 trbe_align;
> + int cpu;
> + enum cs_mode mode;
> + struct trbe_buf *buf;
> + struct trbe_drvdata *drvdata;
> +};
> +
> +struct trbe_drvdata {
> + struct trbe_cpudata __percpu *cpudata;
> + struct perf_output_handle __percpu **handle;
> + struct hlist_node hotplug_node;
> + int irq;
> + cpumask_t supported_cpus;
> + enum cpuhp_state trbe_online;
> + struct platform_device *pdev;
> +};
> +
> +static int trbe_alloc_node(struct perf_event *event)
> +{
> + if (event->cpu == -1)
> + return NUMA_NO_NODE;
> + return cpu_to_node(event->cpu);
> +}
> +
> +static void set_trbe_flush(void)
> +{
> + asm(TSB_CSYNC);
> + dsb(ish);
> +}
> +
> +static void trbe_disable_and_drain_local(void)
> +{
> + write_sysreg_s(0, SYS_TRBLIMITR_EL1);
> + isb();
> + set_trbe_flush();
> +}
> +
> +static void trbe_reset_local(void)
> +{
> + trbe_disable_and_drain_local();
> + write_sysreg_s(0, SYS_TRBPTR_EL1);
> + write_sysreg_s(0, SYS_TRBBASER_EL1);
> + write_sysreg_s(0, SYS_TRBSR_EL1);
> + isb();
> +}
> +
> +/*
> + * TRBE Buffer Management
> + *
> + * The TRBE buffer spans from the base pointer till the limit pointer. When enabled,
> + * it starts writing trace data from the write pointer onward till the limit pointer.
> + * When the write pointer reaches the address just before the limit pointer, it gets
> + * wrapped around again to the base pointer. This is called a TRBE wrap event which
> + * is accompanied by an IRQ.
This is true for one of the modes of operation, the WRAP mode, which could be specified
in the comment. e.g,
This is called a TRBE wrap event, which generates a maintenance interrupt when operated
in WRAP mode.
> + * The write pointer again starts writing trace data from
> + * the base pointer until just before the limit pointer before getting wrapped again
> + * with an IRQ and this process just goes on as long as the TRBE is enabled.
> + *
> + * Wrap around with an IRQ
> + * ------ < ------ < ------- < ----- < -----
> + * | |
> + * ------ > ------ > ------- > ----- > -----
> + *
> + * +---------------+-----------------------+
> + * | | |
> + * +---------------+-----------------------+
> + * Base Pointer Write Pointer Limit Pointer
> + *
> + * The base and limit pointers always needs to be PAGE_SIZE aligned. But the write
> + * pointer can be aligned to the implementation defined TRBE trace buffer alignment
> + * as captured in trbe_cpudata->trbe_align.
> + *
> + *
> + * head tail wakeup
> + * +---------------------------------------+----- ~ ~ ------
> + * |$$$$$$$|################|$$$$$$$$$$$$$$| |
> + * +---------------------------------------+----- ~ ~ ------
> + * Base Pointer Write Pointer Limit Pointer
> + *
> + * The perf_output_handle indices (head, tail, wakeup) are monotonically increasing
> + * values which tracks all the driver writes and user reads from the perf auxiliary
> + * buffer. Generally [head..tail] is the area where the driver can write into unless
> + * the wakeup is behind the tail. Enabled TRBE buffer span needs to be adjusted and
> + * configured depending on the perf_output_handle indices, so that the driver does
> + * not override into areas in the perf auxiliary buffer which is being or yet to be
> + * consumed from the user space. The enabled TRBE buffer area is a moving subset of
> + * the allocated perf auxiliary buffer.
> + */
> +static void trbe_pad_buf(struct perf_output_handle *handle, int len)
> +{
> + struct trbe_buf *buf = etm_perf_sink_config(handle);
> + u64 head = PERF_IDX2OFF(handle->head, buf);
> +
> + memset((void *) buf->trbe_base + head, ETE_IGNORE_PACKET, len);
> + if (!buf->snapshot)
> + perf_aux_output_skip(handle, len);
> +}
> +
> +static unsigned long trbe_snapshot_offset(struct perf_output_handle *handle)
> +{
> + struct trbe_buf *buf = etm_perf_sink_config(handle);
> + u64 head = PERF_IDX2OFF(handle->head, buf);
> + u64 limit = buf->nr_pages * PAGE_SIZE;
> +
> + /*
> + * The trace format isn't parseable in reverse, so clamp the limit
> + * to half of the buffer size in snapshot mode so that the worst
> + * case is half a buffer of records, as opposed to a single record.
> + */
That is not true. We can pad the buffer with Ignore packets and the decoder could
skip forward until it finds an alignment synchronization packet. So, we could use
the full size of the buffer, unlike the SPE.
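A minimal sketch of what that suggestion could translate to is given below;
this is illustrative only and not part of the posted patch.

/* Hedged sketch: snapshot mode uses the whole buffer. Padding with
 * ETE_IGNORE_PACKET lets the decoder resynchronise on the next alignment
 * sync packet, so the limit no longer needs to be halved.
 */
static unsigned long trbe_snapshot_offset(struct perf_output_handle *handle)
{
        struct trbe_buf *buf = etm_perf_sink_config(handle);

        return buf->nr_pages * PAGE_SIZE;
}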
> + if (head < limit >> 1)
> + limit >>= 1;
> +
> + return limit;
> +}
> +
> +/*
> + * TRBE Limit Calculation
> + *
> + * The following markers are used to illustrate various TRBE buffer situations.
> + *
> + * $$$$ - Data area, unconsumed captured trace data, not to be overridden
> + * #### - Free area, enabled, trace will be written
> + * %%%% - Free area, disabled, trace will not be written
> + * ==== - Free area, padded with ETE_IGNORE_PACKET, trace will be skipped
> + */
Thanks for the nice ASCII art, it really helps to understand the scenarios.
> +static unsigned long trbe_normal_offset(struct perf_output_handle *handle)
> +{
> + struct trbe_buf *buf = etm_perf_sink_config(handle);
> + struct trbe_cpudata *cpudata = buf->cpudata;
> + const u64 bufsize = buf->nr_pages * PAGE_SIZE;
> + u64 limit = bufsize;
> + u64 head, tail, wakeup;
> +
> + head = PERF_IDX2OFF(handle->head, buf);
> +
> + /*
> + * head
> + * ------->|
> + * |
> + * head TRBE align tail
> + * +----|-------|---------------|-------+
> + * |$$$$|=======|###############|$$$$$$$|
> + * +----|-------|---------------|-------+
> + * trbe_base trbe_base + nr_pages
> + *
> + * Perf aux buffer output head position can be misaligned depending on
> + * various factors including user space reads. In case misaligned, head
> + * needs to be aligned before TRBE can be configured. Pad the alignment
> + * gap with ETE_IGNORE_PACKET bytes that will be ignored by user tools
> + * and skip this section thus advancing the head.
> + */
> + if (!IS_ALIGNED(head, cpudata->trbe_align)) {
> + unsigned long delta = roundup(head, cpudata->trbe_align) - head;
> +
> + delta = min(delta, handle->size);
> + trbe_pad_buf(handle, delta);
> + head = PERF_IDX2OFF(handle->head, buf);
> + }
> +
> + /*
> + * head = tail (size = 0)
> + * +----|-------------------------------+
> + * |$$$$|$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ |
> + * +----|-------------------------------+
> + * trbe_base trbe_base + nr_pages
> + *
> + * Perf aux buffer does not have any space for the driver to write into.
> + * Just communicate trace truncation event to the user space by marking
> + * it with PERF_AUX_FLAG_TRUNCATED.
> + */
> + if (!handle->size) {
> + perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
> + return 0;
> + }
> +
> + /* Compute the tail and wakeup indices now that we've aligned head */
> + tail = PERF_IDX2OFF(handle->head + handle->size, buf);
> + wakeup = PERF_IDX2OFF(handle->wakeup, buf);
> +
> + /*
> + * Lets calculate the buffer area which TRBE could write into. There
> + * are three possible scenarios here. Limit needs to be aligned with
> + * PAGE_SIZE per the TRBE requirement. Always avoid clobbering the
> + * unconsumed data.
> + *
> + * 1) head < tail
> + *
> + * head tail
> + * +----|-----------------------|-------+
> + * |$$$$|#######################|$$$$$$$|
> + * +----|-----------------------|-------+
> + * trbe_base limit trbe_base + nr_pages
> + *
> + * TRBE could write into [head..tail] area. Unless the tail is right at
> + * the end of the buffer, neither a wrap around nor an IRQ is expected
> + * while being enabled.
> + *
> + * 2) head == tail
> + *
> + * head = tail (size > 0)
> + * +----|-------------------------------+
> + * |%%%%|###############################|
> + * +----|-------------------------------+
> + * trbe_base limit = trbe_base + nr_pages
> + *
> + * TRBE should just write into [head..base + nr_pages] area even though
> + * the entire buffer is empty. Reason being, when the trace reaches the
> + * end of the buffer, it will just wrap around with an IRQ giving an
> + * opportunity to reconfigure the buffer.
> + *
> + * 3) tail < head
> + *
> + * tail head
> + * +----|-----------------------|-------+
> + * |%%%%|$$$$$$$$$$$$$$$$$$$$$$$|#######|
> + * +----|-----------------------|-------+
> + * trbe_base limit = trbe_base + nr_pages
> + *
> + * TRBE should just write into [head..base + nr_pages] area even though
> + * the [trbe_base..tail] is also empty. Reason being, when the trace
> + * reaches the end of the buffer, it will just wrap around with an IRQ
> + * giving an opportunity to reconfigure the buffer.
> + */
> + if (head < tail)
> + limit = round_down(tail, PAGE_SIZE);
> +
> + /*
> + * Wakeup may be arbitrarily far into the future. If it's not in the
> + * current generation, either we'll wrap before hitting it, or it's
> + * in the past and has been handled already.
> + *
> + * If there's a wakeup before we wrap, arrange to be woken up by the
> + * page boundary following it. Keep the tail boundary if that's lower.
> + *
> + * head wakeup tail
> + * +----|---------------|-------|-------+
> + * |$$$$|###############|%%%%%%%|$$$$$$$|
> + * +----|---------------|-------|-------+
> + * trbe_base limit trbe_base + nr_pages
> + */
> + if (handle->wakeup < (handle->head + handle->size) && head <= wakeup)
> + limit = min(limit, round_up(wakeup, PAGE_SIZE));
> +
> + /*
> + * There are two situations when this can happen, i.e. the limit is before
> + * the head and hence TRBE cannot be configured.
> + *
> + * 1) head < tail (aligned down with PAGE_SIZE) and also they are both
> + * within the same PAGE size range.
> + *
> + * PAGE_SIZE
> + * |----------------------|
> + *
> + * limit head tail
> + * +------------|------|--------|-------+
> + * |$$$$$$$$$$$$$$$$$$$|========|$$$$$$$|
> + * +------------|------|--------|-------+
> + * trbe_base trbe_base + nr_pages
> + *
> + * 2) head < wakeup (aligned up with PAGE_SIZE) < tail and also both
> + * head and wakeup are within same PAGE size range.
> + *
> + * PAGE_SIZE
> + * |----------------------|
> + *
> + * limit head wakeup tail
> + * +----|------|-------|--------|-------+
> + * |$$$$$$$$$$$|=======|========|$$$$$$$|
> + * +----|------|-------|--------|-------+
> + * trbe_base trbe_base + nr_pages
> + */
> + if (limit > head)
> + return limit;
> +
> + trbe_pad_buf(handle, handle->size);
> + perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
> + return 0;
> +}
> +
> +static unsigned long get_trbe_limit(struct perf_output_handle *handle)
nit: The naming is a bit confusing with get_trbe_limit() and get_trbe_limit_pointer().
One computes the TRBE buffer limit and the other reads the hardware Limit pointer.
It would be good to follow a consistent naming scheme,
e.g, trbe_limit_pointer(), trbe_base_pointer(), trbe_<register>_<name> for anything
that reads a hardware register.
Or maybe rename get_trbe_limit() to compute_trbe_buffer_limit().
> +{
> + struct trbe_buf *buf = etm_perf_sink_config(handle);
> + unsigned long offset;
> +
> + if (buf->snapshot)
> + offset = trbe_snapshot_offset(handle);
> + else
> + offset = trbe_normal_offset(handle);
> + return buf->trbe_base + offset;
> +}
> +
> +static void clear_trbe_state(void)
nit: The name doesn't give much of a clue about what it is doing, especially given
the following "set_trbe_state()", which does something completely different from this
"clear" operation.
I would rather open code this with a write of 0 to TRBSR in the caller.
> +{
> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
> +
> + WARN_ON(is_trbe_enabled());
> + trbsr &= ~TRBSR_IRQ;
> + trbsr &= ~TRBSR_TRG;
> + trbsr &= ~TRBSR_WRAP;
> + trbsr &= ~(TRBSR_EC_MASK << TRBSR_EC_SHIFT);
> + trbsr &= ~(TRBSR_BSC_MASK << TRBSR_BSC_SHIFT);
> + trbsr &= ~(TRBSR_FSC_MASK << TRBSR_FSC_SHIFT);
BSC and FSC are the same fields under MSS, with their meanings determined by the EC field.
Could we simply write 0 to the register ?
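For illustration, the simplification being suggested might look like this at
the call site (a sketch, not a posted change):

        /* Hedged sketch: all the event/status fields live in TRBSR_EL1, so the
         * caller (e.g. trbe_enable_hw()) could clear them wholesale.
         */
        WARN_ON(is_trbe_enabled());
        write_sysreg_s(0, SYS_TRBSR_EL1);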
> + write_sysreg_s(trbsr, SYS_TRBSR_EL1);
> +}
> +
> +static void set_trbe_state(void)
> +{
> + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
> +
> + trblimitr &= ~TRBLIMITR_NVM;
> + trblimitr &= ~(TRBLIMITR_FILL_MODE_MASK << TRBLIMITR_FILL_MODE_SHIFT);
> + trblimitr &= ~(TRBLIMITR_TRIG_MODE_MASK << TRBLIMITR_TRIG_MODE_SHIFT);
> + trblimitr |= (TRBE_FILL_STOP & TRBLIMITR_FILL_MODE_MASK) << TRBLIMITR_FILL_MODE_SHIFT;
> + trblimitr |= (TRBE_TRIGGER_IGNORE & TRBLIMITR_TRIG_MODE_MASK) << TRBLIMITR_TRIG_MODE_SHIFT;
> + write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
Do we need to read-copy-update here ? Could we simply write 0 ?
Same as above comment, could we not simply opencode it at the caller ?
Clearly the names don't help.
> +}
> +
> +static void trbe_enable_hw(struct trbe_buf *buf)
> +{
> + WARN_ON(buf->trbe_write < buf->trbe_base);
> + WARN_ON(buf->trbe_write >= buf->trbe_limit);
> + set_trbe_disabled();
> + clear_trbe_state();
> + set_trbe_state();
> + isb();
> + set_trbe_base_pointer(buf->trbe_base);
> + set_trbe_limit_pointer(buf->trbe_limit);
> + set_trbe_write_pointer(buf->trbe_write);
Where do we set the fill mode ?
> + isb();
> + set_trbe_running();
> + set_trbe_enabled();
> + set_trbe_flush();
> +}
> +
> +static void *arm_trbe_alloc_buffer(struct coresight_device *csdev,
> + struct perf_event *event, void **pages,
> + int nr_pages, bool snapshot)
> +{
> + struct trbe_buf *buf;
> + struct page **pglist;
> + int i;
> +
> + if ((nr_pages < 2) || (snapshot && (nr_pages & 1)))
> + return NULL;
> +
> + buf = kzalloc_node(sizeof(*buf), GFP_KERNEL, trbe_alloc_node(event));
> + if (IS_ERR(buf))
> + return ERR_PTR(-ENOMEM);
> +
> + pglist = kcalloc(nr_pages, sizeof(*pglist), GFP_KERNEL);
> + if (IS_ERR(pglist)) {
> + kfree(buf);
> + return ERR_PTR(-ENOMEM);
> + }
> +
> + for (i = 0; i < nr_pages; i++)
> + pglist[i] = virt_to_page(pages[i]);
> +
> + buf->trbe_base = (unsigned long) vmap(pglist, nr_pages, VM_MAP, PAGE_KERNEL);
> + if (IS_ERR((void *) buf->trbe_base)) {
> + kfree(pglist);
> + kfree(buf);
> + return ERR_PTR(buf->trbe_base);
> + }
> + buf->trbe_limit = buf->trbe_base + nr_pages * PAGE_SIZE;
> + buf->trbe_write = buf->trbe_base;
> + buf->snapshot = snapshot;
> + buf->nr_pages = nr_pages;
> + buf->pages = pages;
> + kfree(pglist);
> + return buf;
> +}
> +
> +void arm_trbe_free_buffer(void *config)
> +{
> + struct trbe_buf *buf = config;
> +
> + vunmap((void *) buf->trbe_base);
> + kfree(buf);
> +}
> +
> +static unsigned long arm_trbe_update_buffer(struct coresight_device *csdev,
> + struct perf_output_handle *handle,
> + void *config)
> +{
> + struct trbe_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
> + struct trbe_cpudata *cpudata = dev_get_drvdata(&csdev->dev);
> + struct trbe_buf *buf = config;
> + unsigned long size, offset;
> +
> + WARN_ON(buf->cpudata != cpudata);
> + WARN_ON(cpudata->cpu != smp_processor_id());
> + WARN_ON(cpudata->drvdata != drvdata);
> + if (cpudata->mode != CS_MODE_PERF)
> + return -EINVAL;
> +
> + offset = get_trbe_write_pointer() - get_trbe_base_pointer();
> + size = offset - PERF_IDX2OFF(handle->head, buf);
> + if (buf->snapshot)
> + handle->head += size;
You may want to add in a comment why we keep the TRBE disabled here.
> + trbe_reset_local();
> + return size;
> +}
> +
> +static int arm_trbe_enable(struct coresight_device *csdev, u32 mode, void *data)
> +{
> + struct trbe_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
> + struct trbe_cpudata *cpudata = dev_get_drvdata(&csdev->dev);
> + struct perf_output_handle *handle = data;
> + struct trbe_buf *buf = etm_perf_sink_config(handle);
> +
> + WARN_ON(cpudata->cpu != smp_processor_id());
> + WARN_ON(cpudata->drvdata != drvdata);
> + if (mode != CS_MODE_PERF)
> + return -EINVAL;
> +
> + *this_cpu_ptr(drvdata->handle) = handle;
> + cpudata->buf = buf;
> + cpudata->mode = mode;
> + buf->cpudata = cpudata;
> + buf->trbe_write = buf->trbe_base + PERF_IDX2OFF(handle->head, buf);
> + buf->trbe_limit = get_trbe_limit(handle);
> + if (buf->trbe_limit == buf->trbe_base) {
> + trbe_disable_and_drain_local();
> + return 0;
> + }
> + trbe_enable_hw(buf);
> + return 0;
> +}
> +
> +static int arm_trbe_disable(struct coresight_device *csdev)
> +{
> + struct trbe_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
> + struct trbe_cpudata *cpudata = dev_get_drvdata(&csdev->dev);
> + struct trbe_buf *buf = cpudata->buf;
> +
> + WARN_ON(buf->cpudata != cpudata);
> + WARN_ON(cpudata->cpu != smp_processor_id());
> + WARN_ON(cpudata->drvdata != drvdata);
> + if (cpudata->mode != CS_MODE_PERF)
> + return -EINVAL;
> +
> + trbe_disable_and_drain_local();
> + buf->cpudata = NULL;
> + cpudata->buf = NULL;
> + cpudata->mode = CS_MODE_DISABLED;
> + return 0;
> +}
> +
> +static void trbe_handle_fatal(struct perf_output_handle *handle)
> +{
> + perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
> + perf_aux_output_end(handle, 0);
> + trbe_disable_and_drain_local();
> +}
> +
> +static void trbe_handle_spurious(struct perf_output_handle *handle)
> +{
> + struct trbe_buf *buf = etm_perf_sink_config(handle);
> +
> + buf->trbe_write = buf->trbe_base + PERF_IDX2OFF(handle->head, buf);
> + buf->trbe_limit = get_trbe_limit(handle);
> + if (buf->trbe_limit == buf->trbe_base) {
> + trbe_disable_and_drain_local();
> + return;
> + }
> + trbe_enable_hw(buf);
> +}
> +
> +static void trbe_handle_overflow(struct perf_output_handle *handle)
> +{
> + struct perf_event *event = handle->event;
> + struct trbe_buf *buf = etm_perf_sink_config(handle);
> + unsigned long offset, size;
> + struct etm_event_data *event_data;
> +
> + offset = get_trbe_limit_pointer() - get_trbe_base_pointer();
> + size = offset - PERF_IDX2OFF(handle->head, buf);
> + if (buf->snapshot)
> + handle->head = offset;
> + perf_aux_output_end(handle, size);
> +
> + event_data = perf_aux_output_begin(handle, event);
> + if (!event_data) {
> + event->hw.state |= PERF_HES_STOPPED;
> + trbe_disable_and_drain_local();
> + perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
> + return;
> + }
> + buf->trbe_write = buf->trbe_base;
> + buf->trbe_limit = get_trbe_limit(handle);
> + if (buf->trbe_limit == buf->trbe_base) {
> + trbe_disable_and_drain_local();
> + return;
> + }
> + *this_cpu_ptr(buf->cpudata->drvdata->handle) = handle;
> + trbe_enable_hw(buf);
> +}
> +
> +static bool is_perf_trbe(struct perf_output_handle *handle)
> +{
> + struct trbe_buf *buf = etm_perf_sink_config(handle);
> + struct trbe_cpudata *cpudata = buf->cpudata;
> + struct trbe_drvdata *drvdata = cpudata->drvdata;
> + int cpu = smp_processor_id();
> +
> + WARN_ON(buf->trbe_base != get_trbe_base_pointer());
> + WARN_ON(buf->trbe_limit != get_trbe_limit_pointer());
> +
> + if (cpudata->mode != CS_MODE_PERF)
> + return false;
> +
> + if (cpudata->cpu != cpu)
> + return false;
> +
> + if (!cpumask_test_cpu(cpu, &drvdata->supported_cpus))
> + return false;
> +
> + return true;
> +}
> +
> +static enum trbe_fault_action trbe_get_fault_act(struct perf_output_handle *handle)
> +{
> + int ec = get_trbe_ec();
> + int bsc = get_trbe_bsc();
> +
> + WARN_ON(is_trbe_running());
> + if (is_trbe_trg() || is_trbe_abort())
> + return TRBE_FAULT_ACT_FATAL;
> +
> + if ((ec == TRBE_EC_STAGE1_ABORT) || (ec == TRBE_EC_STAGE2_ABORT))
> + return TRBE_FAULT_ACT_FATAL;
> +
> + if (is_trbe_wrap() && (ec == TRBE_EC_OTHERS) && (bsc == TRBE_BSC_FILLED)) {
> + if (get_trbe_write_pointer() == get_trbe_base_pointer())
> + return TRBE_FAULT_ACT_WRAP;
> + }
> + return TRBE_FAULT_ACT_SPURIOUS;
> +}
> +
> +static irqreturn_t arm_trbe_irq_handler(int irq, void *dev)
> +{
> + struct perf_output_handle **handle_ptr = dev;
> + struct perf_output_handle *handle = *handle_ptr;
> + enum trbe_fault_action act;
> +
> + WARN_ON(!is_trbe_irq());
> + clr_trbe_irq();
> +
> + if (!perf_get_aux(handle))
> + return IRQ_NONE;
> +
> + if (!is_perf_trbe(handle))
> + return IRQ_NONE;
> +
> + irq_work_run();
> +
> + act = trbe_get_fault_act(handle);
> + switch (act) {
> + case TRBE_FAULT_ACT_WRAP:
> + trbe_handle_overflow(handle);
> + break;
> + case TRBE_FAULT_ACT_SPURIOUS:
> + trbe_handle_spurious(handle);
> + break;
> + case TRBE_FAULT_ACT_FATAL:
> + trbe_handle_fatal(handle);
> + break;
> + }
> + return IRQ_HANDLED;
> +}
> +
> +static const struct coresight_ops_sink arm_trbe_sink_ops = {
> + .enable = arm_trbe_enable,
> + .disable = arm_trbe_disable,
> + .alloc_buffer = arm_trbe_alloc_buffer,
> + .free_buffer = arm_trbe_free_buffer,
> + .update_buffer = arm_trbe_update_buffer,
> +};
> +
> +static const struct coresight_ops arm_trbe_cs_ops = {
> + .sink_ops = &arm_trbe_sink_ops,
> +};
> +
> +static ssize_t align_show(struct device *dev, struct device_attribute *attr, char *buf)
> +{
> + struct trbe_cpudata *cpudata = dev_get_drvdata(dev);
> +
> + return sprintf(buf, "%llx\n", cpudata->trbe_align);
> +}
> +static DEVICE_ATTR_RO(align);
> +
> +static ssize_t dbm_show(struct device *dev, struct device_attribute *attr, char *buf)
> +{
> + struct trbe_cpudata *cpudata = dev_get_drvdata(dev);
> +
> + return sprintf(buf, "%d\n", cpudata->trbe_dbm);
> +}
> +static DEVICE_ATTR_RO(dbm);
> +
> +static struct attribute *arm_trbe_attrs[] = {
> + &dev_attr_align.attr,
> + &dev_attr_dbm.attr,
> + NULL,
> +};
> +
> +static const struct attribute_group arm_trbe_group = {
> + .attrs = arm_trbe_attrs,
> +};
> +
> +static const struct attribute_group *arm_trbe_groups[] = {
> + &arm_trbe_group,
> + NULL,
> +};
> +
> +static void arm_trbe_probe_coresight_cpu(void *info)
> +{
> + struct trbe_drvdata *drvdata = info;
> + struct coresight_desc desc = { 0 };
> + int cpu = smp_processor_id();
> + struct trbe_cpudata *cpudata = per_cpu_ptr(drvdata->cpudata, cpu);
> + struct coresight_device *trbe_csdev = per_cpu(csdev_sink, cpu);
> + struct device *dev;
> +
> + if (WARN_ON(!cpudata))
> + goto cpu_clear;
> +
> + if (trbe_csdev)
> + return;
> +
> + cpudata->cpu = smp_processor_id();
> + cpudata->drvdata = drvdata;
> + dev = &cpudata->drvdata->pdev->dev;
> +
> + if (!is_trbe_available()) {
> + pr_err("TRBE is not implemented on cpu %d\n", cpudata->cpu);
> + goto cpu_clear;
> + }
> +
> + if (!is_trbe_programmable()) {
> + pr_err("TRBE is owned in higher exception level on cpu %d\n", cpudata->cpu);
> + goto cpu_clear;
> + }
> + desc.name = devm_kasprintf(dev, GFP_KERNEL, "%s%d", DRVNAME, smp_processor_id());
> + if (IS_ERR(desc.name))
> + goto cpu_clear;
> +
> + desc.type = CORESIGHT_DEV_TYPE_SINK;
> + desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_PERCPU_SYSMEM;
> + desc.ops = &arm_trbe_cs_ops;
> + desc.pdata = dev_get_platdata(dev);
> + desc.groups = arm_trbe_groups;
> + desc.dev = dev;
> + trbe_csdev = coresight_register(&desc);
> + if (IS_ERR(trbe_csdev))
> + goto cpu_clear;
> +
> + dev_set_drvdata(&trbe_csdev->dev, cpudata);
> + cpudata->trbe_dbm = get_trbe_flag_update();
> + cpudata->trbe_align = 1ULL << get_trbe_address_align();
> + if (cpudata->trbe_align > SZ_2K) {
> + pr_err("Unsupported alignment on cpu %d\n", cpudata->cpu);
> + goto cpu_clear;
> + }
> + per_cpu(csdev_sink, cpu) = trbe_csdev;
> + trbe_reset_local();
> + enable_percpu_irq(drvdata->irq, IRQ_TYPE_NONE);
> + return;
> +cpu_clear:
> + cpumask_clear_cpu(cpudata->cpu, &cpudata->drvdata->supported_cpus);
> +}
> +
> +static void arm_trbe_remove_coresight_cpu(void *info)
> +{
> + int cpu = smp_processor_id();
> + struct trbe_drvdata *drvdata = info;
> + struct trbe_cpudata *cpudata = per_cpu_ptr(drvdata->cpudata, cpu);
> + struct coresight_device *trbe_csdev = per_cpu(csdev_sink, cpu);
> +
> + if (trbe_csdev) {
> + coresight_unregister(trbe_csdev);
> + cpudata->drvdata = NULL;
> + per_cpu(csdev_sink, cpu) = NULL;
> + }
> + disable_percpu_irq(drvdata->irq);
> + trbe_reset_local();
> +}
> +
> +static int arm_trbe_probe_coresight(struct trbe_drvdata *drvdata)
> +{
> + drvdata->cpudata = alloc_percpu(typeof(*drvdata->cpudata));
> + if (IS_ERR(drvdata->cpudata))
> + return PTR_ERR(drvdata->cpudata);
> +
> + arm_trbe_probe_coresight_cpu(drvdata);
> + smp_call_function_many(&drvdata->supported_cpus, arm_trbe_probe_coresight_cpu, drvdata, 1);
> + return 0;
> +}
> +
> +static int arm_trbe_remove_coresight(struct trbe_drvdata *drvdata)
> +{
> + arm_trbe_remove_coresight_cpu(drvdata);
> + smp_call_function_many(&drvdata->supported_cpus, arm_trbe_remove_coresight_cpu, drvdata, 1);
> + free_percpu(drvdata->cpudata);
> + return 0;
> +}
> +
> +static int arm_trbe_cpu_startup(unsigned int cpu, struct hlist_node *node)
> +{
> + struct trbe_drvdata *drvdata = hlist_entry_safe(node, struct trbe_drvdata, hotplug_node);
> +
> + if (cpumask_test_cpu(cpu, &drvdata->supported_cpus)) {
> + if (!per_cpu(csdev_sink, cpu) && (system_state == SYSTEM_RUNNING)) {
Why is the system_state check relevant here ?
> + arm_trbe_probe_coresight_cpu(drvdata);
> + } else {
> + trbe_reset_local();
> + enable_percpu_irq(drvdata->irq, IRQ_TYPE_NONE);
> + }
> + }
> + return 0;
> +}
> +
> +static int arm_trbe_cpu_teardown(unsigned int cpu, struct hlist_node *node)
> +{
> + struct trbe_drvdata *drvdata = hlist_entry_safe(node, struct trbe_drvdata, hotplug_node);
> +
> + if (cpumask_test_cpu(cpu, &drvdata->supported_cpus)) {
> + disable_percpu_irq(drvdata->irq);
> + trbe_reset_local();
> + }
> + return 0;
> +}
> +
> +static int arm_trbe_probe_cpuhp(struct trbe_drvdata *drvdata)
> +{
> + enum cpuhp_state trbe_online;
> +
> + trbe_online = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, DRVNAME,
> + arm_trbe_cpu_startup, arm_trbe_cpu_teardown);
> + if (trbe_online < 0)
> + return -EINVAL;
> +
> + if (cpuhp_state_add_instance(trbe_online, &drvdata->hotplug_node))
> + return -EINVAL;
> +
> + drvdata->trbe_online = trbe_online;
> + return 0;
> +}
> +
> +static void arm_trbe_remove_cpuhp(struct trbe_drvdata *drvdata)
> +{
> + cpuhp_remove_multi_state(drvdata->trbe_online);
> +}
> +
> +static int arm_trbe_probe_irq(struct platform_device *pdev,
> + struct trbe_drvdata *drvdata)
> +{
> + drvdata->irq = platform_get_irq(pdev, 0);
> + if (!drvdata->irq) {
> + pr_err("IRQ not found for the platform device\n");
> + return -ENXIO;
> + }
> +
> + if (!irq_is_percpu(drvdata->irq)) {
> + pr_err("IRQ is not a PPI\n");
> + return -EINVAL;
> + }
> +
> + if (irq_get_percpu_devid_partition(drvdata->irq, &drvdata->supported_cpus))
> + return -EINVAL;
> +
> + drvdata->handle = alloc_percpu(typeof(*drvdata->handle));
> + if (!drvdata->handle)
> + return -ENOMEM;
> +
> + if (request_percpu_irq(drvdata->irq, arm_trbe_irq_handler, DRVNAME, drvdata->handle)) {
> + free_percpu(drvdata->handle);
> + return -EINVAL;
> + }
> + return 0;
> +}
> +
> +static void arm_trbe_remove_irq(struct trbe_drvdata *drvdata)
> +{
> + free_percpu_irq(drvdata->irq, drvdata->handle);
> + free_percpu(drvdata->handle);
> +}
> +
> +static int arm_trbe_device_probe(struct platform_device *pdev)
> +{
> + struct coresight_platform_data *pdata;
> + struct trbe_drvdata *drvdata;
> + struct device *dev = &pdev->dev;
> + int ret;
> +
> + drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
> + if (IS_ERR(drvdata))
> + return -ENOMEM;
> +
> + pdata = coresight_get_platform_data(dev);
> + if (IS_ERR(pdata)) {
> + kfree(drvdata);
> + return -ENOMEM;
> + }
> +
> + dev_set_drvdata(dev, drvdata);
> + dev->platform_data = pdata;
> + drvdata->pdev = pdev;
> + ret = arm_trbe_probe_irq(pdev, drvdata);
> + if (ret)
> + goto irq_failed;
> +
> + ret = arm_trbe_probe_coresight(drvdata);
> + if (ret)
> + goto probe_failed;
> +
> + ret = arm_trbe_probe_cpuhp(drvdata);
> + if (ret)
> + goto cpuhp_failed;
> +
> + return 0;
> +cpuhp_failed:
> + arm_trbe_remove_coresight(drvdata);
> +probe_failed:
> + arm_trbe_remove_irq(drvdata);
> +irq_failed:
> + kfree(pdata);
> + kfree(drvdata);
> + return ret;
> +}
> +
> +static int arm_trbe_device_remove(struct platform_device *pdev)
> +{
> + struct coresight_platform_data *pdata = dev_get_platdata(&pdev->dev);
> + struct trbe_drvdata *drvdata = platform_get_drvdata(pdev);
> +
> + arm_trbe_remove_coresight(drvdata);
> + arm_trbe_remove_cpuhp(drvdata);
> + arm_trbe_remove_irq(drvdata);
> + kfree(pdata);
> + kfree(drvdata);
> + return 0;
> +}
> +
> +static const struct of_device_id arm_trbe_of_match[] = {
> + { .compatible = "arm,trace-buffer-extension", .data = (void *)1 },
What is the significance of .data = 1 ?
> + {},
> +};
> +MODULE_DEVICE_TABLE(of, arm_trbe_of_match);
> +
> +static struct platform_driver arm_trbe_driver = {
> + .driver = {
> + .name = DRVNAME,
> + .of_match_table = of_match_ptr(arm_trbe_of_match),
> + .suppress_bind_attrs = true,
> + },
> + .probe = arm_trbe_device_probe,
> + .remove = arm_trbe_device_remove,
> +};
> +
> +static int __init arm_trbe_init(void)
> +{
> + int ret;
> +
> + ret = platform_driver_register(&arm_trbe_driver);
> + if (!ret)
> + return 0;
> +
> + pr_err("Error registering %s platform driver\n", DRVNAME);
> + return ret;
> +}
> +
> +static void __exit arm_trbe_exit(void)
> +{
> + platform_driver_unregister(&arm_trbe_driver);
> +}
> +module_init(arm_trbe_init);
> +module_exit(arm_trbe_exit);
> +
> +MODULE_AUTHOR("Anshuman Khandual <[email protected]>");
> +MODULE_DESCRIPTION("Arm Trace Buffer Extension (TRBE) driver");
> +MODULE_LICENSE("GPL v2");
> diff --git a/drivers/hwtracing/coresight/coresight-trbe.h b/drivers/hwtracing/coresight/coresight-trbe.h
> new file mode 100644
> index 0000000..e956439
> --- /dev/null
> +++ b/drivers/hwtracing/coresight/coresight-trbe.h
> @@ -0,0 +1,248 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * This contains all required hardware related helper functions for
> + * Trace Buffer Extension (TRBE) driver in the coresight framework.
> + *
> + * Copyright (C) 2020 ARM Ltd.
> + *
> + * Author: Anshuman Khandual <[email protected]>
> + */
> +#include <linux/coresight.h>
> +#include <linux/device.h>
> +#include <linux/irq.h>
> +#include <linux/kernel.h>
> +#include <linux/of.h>
> +#include <linux/platform_device.h>
> +#include <linux/smp.h>
> +
> +#include "coresight-etm-perf.h"
> +
> +DECLARE_PER_CPU(struct coresight_device *, csdev_sink);
> +
> +static inline bool is_trbe_available(void)
> +{
> + u64 aa64dfr0 = read_sysreg_s(SYS_ID_AA64DFR0_EL1);
> + int trbe = cpuid_feature_extract_unsigned_field(aa64dfr0, ID_AA64DFR0_TRBE_SHIFT);
> +
> + return trbe >= 0b0001;
> +}
> +
> +static inline bool is_trbe_enabled(void)
> +{
> + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
> +
> + return trblimitr & TRBLIMITR_ENABLE;
> +}
> +
> +#define TRBE_EC_OTHERS 0
> +#define TRBE_EC_STAGE1_ABORT 36
> +#define TRBE_EC_STAGE2_ABORT 37
> +
> +static inline int get_trbe_ec(void)
> +{
> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
> +
> + return (trbsr >> TRBSR_EC_SHIFT) & TRBSR_EC_MASK;
> +}
> +
> +#define TRBE_BSC_NOT_STOPPED 0
> +#define TRBE_BSC_FILLED 1
> +#define TRBE_BSC_TRIGGERED 2
> +
> +static inline int get_trbe_bsc(void)
> +{
> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
> +
> + return (trbsr >> TRBSR_BSC_SHIFT) & TRBSR_BSC_MASK;
> +}
> +
> +static inline void clr_trbe_irq(void)
> +{
> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
> +
> + trbsr &= ~TRBSR_IRQ;
> + write_sysreg_s(trbsr, SYS_TRBSR_EL1);
> +}
> +
> +static inline bool is_trbe_irq(void)
> +{
> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
> +
> + return trbsr & TRBSR_IRQ;
> +}
> +
> +static inline bool is_trbe_trg(void)
> +{
> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
> +
> + return trbsr & TRBSR_TRG;
> +}
> +
> +static inline bool is_trbe_wrap(void)
> +{
> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
> +
> + return trbsr & TRBSR_WRAP;
> +}
> +
> +static inline bool is_trbe_abort(void)
> +{
> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
> +
> + return trbsr & TRBSR_ABORT;
> +}
> +
> +static inline bool is_trbe_running(void)
> +{
> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
> +
> + return !(trbsr & TRBSR_STOP);
> +}
> +
> +static inline void set_trbe_running(void)
> +{
> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
> +
> + trbsr &= ~TRBSR_STOP;
> + write_sysreg_s(trbsr, SYS_TRBSR_EL1);
> +}
> +
> +static inline void set_trbe_virtual_mode(void)
> +{
> + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
> +
> + trblimitr &= ~TRBLIMITR_NVM;
> + write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
> +}
> +
> +#define TRBE_TRIGGER_STOP 0
> +#define TRBE_TRIGGER_IRQ 1
> +#define TRBE_TRIGGER_IGNORE 3
> +
> +static inline int get_trbe_trig_mode(void)
> +{
> + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
> +
> + return (trblimitr >> TRBLIMITR_TRIG_MODE_SHIFT) & TRBLIMITR_TRIG_MODE_MASK;
> +}
> +
> +static inline void set_trbe_trig_mode(int mode)
> +{
> + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
> +
> + trblimitr &= ~(TRBLIMITR_TRIG_MODE_MASK << TRBLIMITR_TRIG_MODE_SHIFT);
> + trblimitr |= ((mode & TRBLIMITR_TRIG_MODE_MASK) << TRBLIMITR_TRIG_MODE_SHIFT);
> + write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
> +}
> +
> +#define TRBE_FILL_STOP 0
> +#define TRBE_FILL_WRAP 1
> +#define TRBE_FILL_CIRCULAR 3
> +
---8>---
> +static inline int get_trbe_fill_mode(void)
> +{
> + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
> +
> + return (trblimitr >> TRBLIMITR_FILL_MODE_SHIFT) & TRBLIMITR_FILL_MODE_MASK;
> +}
> +
> +static inline void set_trbe_fill_mode(int mode)
> +{
> + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
> +
> + trblimitr &= ~(TRBLIMITR_FILL_MODE_MASK << TRBLIMITR_FILL_MODE_SHIFT);
> + trblimitr |= ((mode & TRBLIMITR_FILL_MODE_MASK) << TRBLIMITR_FILL_MODE_SHIFT);
> + write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
> +}
> +
Where do we use these ? I couldn't find any users.
Suzuki
On Mon, Jan 04, 2021 at 02:42:08PM +0000, Suzuki K Poulose wrote:
> Hi Rob,
>
> On 1/3/21 5:02 PM, Rob Herring wrote:
> > On Wed, Dec 23, 2020 at 03:33:38PM +0530, Anshuman Khandual wrote:
> > > From: Suzuki K Poulose <[email protected]>
> > >
> > > Document the device tree bindings for Embedded Trace Extensions.
> > > ETE can be connected to legacy coresight components and thus
> > > could optionally contain a connection graph as described by
> > > the CoreSight bindings.
> > >
> > > Cc: [email protected]
> > > Cc: Mathieu Poirier <[email protected]>
> > > Cc: Mike Leach <[email protected]>
> > > Cc: Rob Herring <[email protected]>
> > > Signed-off-by: Suzuki K Poulose <[email protected]>
> > > Signed-off-by: Anshuman Khandual <[email protected]>
> > > ---
> > > Documentation/devicetree/bindings/arm/ete.txt | 41 +++++++++++++++++++++++++++
> > > 1 file changed, 41 insertions(+)
> > > create mode 100644 Documentation/devicetree/bindings/arm/ete.txt
> >
> > Bindings are in schema format now, please convert this.
> >
>
> Sure, will do that.
>
> > >
> > > diff --git a/Documentation/devicetree/bindings/arm/ete.txt b/Documentation/devicetree/bindings/arm/ete.txt
> > > new file mode 100644
> > > index 0000000..b52b507
> > > --- /dev/null
> > > +++ b/Documentation/devicetree/bindings/arm/ete.txt
> > > @@ -0,0 +1,41 @@
> > > +Arm Embedded Trace Extensions
> > > +
> > > +Arm Embedded Trace Extensions (ETE) is a per CPU trace component that
> > > +allows tracing the CPU execution. It overlaps with the CoreSight ETMv4
> > > +architecture and has extended support for future architecture changes.
> > > +The trace generated by the ETE could be stored via legacy CoreSight
> > > +components (e.g, TMC-ETR) or other means (e.g, using a per CPU buffer
> > > +Arm Trace Buffer Extension (TRBE)). Since the ETE can be connected to
> > > +legacy CoreSight components, a node must be listed per instance, along
> > > +with any optional connection graph as per the coresight bindings.
> > > +See bindings/arm/coresight.txt.
> > > +
> > > +** ETE Required properties:
> > > +
> > > +- compatible : should be one of:
> > > + "arm,embedded-trace-extensions"
> > > +
> > > +- cpu : the CPU phandle this ETE belongs to.
> >
> > If this is 1:1 with CPUs, then perhaps it should be a child node of the
> > CPU nodes.
>
> Yes, it is 1:1 with the CPUs. I have tried to keep this aligned with that of
> "coresight-etm4x". The same driver handles both. The only reason why this
> was separated from the "coresight.txt" is to describe the new configurations
> possible (read, TRBE).
Would it be possible to keep the CPU handle rather than moving things under the
CPU nodes? ETMv3.x and ETMv4.x are using a handle and as Suzuki points out ETE
and ETMv4.x are sharing the same driver. Proceeding differently for the ETE
would be terribly confusing.
>
> That said, I am happy to move this under the CPU, if Mathieu is happy with
> the diversion.
>
> Thanks for the review.
>
> Suzuki
On Mon, Jan 4, 2021 at 11:15 AM Mathieu Poirier
<[email protected]> wrote:
>
> On Mon, Jan 04, 2021 at 02:42:08PM +0000, Suzuki K Poulose wrote:
> > Hi Rob,
> >
> > On 1/3/21 5:02 PM, Rob Herring wrote:
> > > On Wed, Dec 23, 2020 at 03:33:38PM +0530, Anshuman Khandual wrote:
> > > > From: Suzuki K Poulose <[email protected]>
> > > >
> > > > Document the device tree bindings for Embedded Trace Extensions.
> > > > ETE can be connected to legacy coresight components and thus
> > > > could optionally contain a connection graph as described by
> > > > the CoreSight bindings.
> > > >
> > > > Cc: [email protected]
> > > > Cc: Mathieu Poirier <[email protected]>
> > > > Cc: Mike Leach <[email protected]>
> > > > Cc: Rob Herring <[email protected]>
> > > > Signed-off-by: Suzuki K Poulose <[email protected]>
> > > > Signed-off-by: Anshuman Khandual <[email protected]>
> > > > ---
> > > > Documentation/devicetree/bindings/arm/ete.txt | 41 +++++++++++++++++++++++++++
> > > > 1 file changed, 41 insertions(+)
> > > > create mode 100644 Documentation/devicetree/bindings/arm/ete.txt
> > >
> > > Bindings are in schema format now, please convert this.
> > >
> >
> > Sure, will do that.
> >
> > > >
> > > > diff --git a/Documentation/devicetree/bindings/arm/ete.txt b/Documentation/devicetree/bindings/arm/ete.txt
> > > > new file mode 100644
> > > > index 0000000..b52b507
> > > > --- /dev/null
> > > > +++ b/Documentation/devicetree/bindings/arm/ete.txt
> > > > @@ -0,0 +1,41 @@
> > > > +Arm Embedded Trace Extensions
> > > > +
> > > > +Arm Embedded Trace Extensions (ETE) is a per CPU trace component that
> > > > +allows tracing the CPU execution. It overlaps with the CoreSight ETMv4
> > > > +architecture and has extended support for future architecture changes.
> > > > +The trace generated by the ETE could be stored via legacy CoreSight
> > > > +components (e.g, TMC-ETR) or other means (e.g, using a per CPU buffer
> > > > +Arm Trace Buffer Extension (TRBE)). Since the ETE can be connected to
> > > > +legacy CoreSight components, a node must be listed per instance, along
> > > > +with any optional connection graph as per the coresight bindings.
> > > > +See bindings/arm/coresight.txt.
> > > > +
> > > > +** ETE Required properties:
> > > > +
> > > > +- compatible : should be one of:
> > > > + "arm,embedded-trace-extensions"
> > > > +
> > > > +- cpu : the CPU phandle this ETE belongs to.
> > >
> > > If this is 1:1 with CPUs, then perhaps it should be a child node of the
> > > CPU nodes.
> >
> > Yes, it is 1:1 with the CPUs. I have tried to keep this aligned with that of
> > "coresight-etm4x". The same driver handles both. The only reason why this
> > was separated from the "coresight.txt" is to describe the new configurations
> > possible (read, TRBE).
>
> Would it be possible to keep the CPU handle rather than moving things under the
> CPU nodes? ETMv3.x and ETMv4.x are using a handle and as Suzuki points out ETE
> and ETMv4.x are sharing the same driver. Proceeding differently for the ETE
> would be terribly confusing.
Yeah, no problem.
Rob
On 1/4/21 9:58 PM, Suzuki K Poulose wrote:
>
> Hi Anshuman,
>
> On 12/23/20 10:03 AM, Anshuman Khandual wrote:
>> Trace Buffer Extension (TRBE) implements a trace buffer per CPU which is
>> accessible via the system registers. The TRBE supports different addressing
>> modes including CPU virtual address and buffer modes including the circular
>> buffer mode. The TRBE buffer is addressed by a base pointer (TRBBASER_EL1),
>> a write pointer (TRBPTR_EL1) and a limit pointer (TRBLIMITR_EL1). But the
>> access to the trace buffer could be prohibited by a higher exception level
>> (EL3 or EL2), indicated by TRBIDR_EL1.P. The TRBE can also generate a CPU
>> private interrupt (PPI) on address translation errors and when the buffer
>> is full. Overall implementation here is inspired from the Arm SPE driver.
>>
>> Cc: Mathieu Poirier <[email protected]>
>> Cc: Mike Leach <[email protected]>
>> Cc: Suzuki K Poulose <[email protected]>
>> Signed-off-by: Anshuman Khandual <[email protected]>
>> ---
>
>>
>> Documentation/trace/coresight/coresight-trbe.rst | 39 +
>> arch/arm64/include/asm/sysreg.h | 2 +
>> drivers/hwtracing/coresight/Kconfig | 11 +
>> drivers/hwtracing/coresight/Makefile | 1 +
>> drivers/hwtracing/coresight/coresight-trbe.c | 925 +++++++++++++++++++++++
>> drivers/hwtracing/coresight/coresight-trbe.h | 248 ++++++
>> 6 files changed, 1226 insertions(+)
>> create mode 100644 Documentation/trace/coresight/coresight-trbe.rst
>> create mode 100644 drivers/hwtracing/coresight/coresight-trbe.c
>> create mode 100644 drivers/hwtracing/coresight/coresight-trbe.h
>>
>> diff --git a/Documentation/trace/coresight/coresight-trbe.rst b/Documentation/trace/coresight/coresight-trbe.rst
>> new file mode 100644
>> index 0000000..8b79850
>> --- /dev/null
>> +++ b/Documentation/trace/coresight/coresight-trbe.rst
>> @@ -0,0 +1,39 @@
>> +.. SPDX-License-Identifier: GPL-2.0
>> +
>> +==============================
>> +Trace Buffer Extension (TRBE).
>> +==============================
>> +
>> + :Author: Anshuman Khandual <[email protected]>
>> + :Date: November 2020
>> +
>> +Hardware Description
>> +--------------------
>> +
>> +Trace Buffer Extension (TRBE) is a percpu hardware block which captures, in
>> +system memory, CPU traces generated from a corresponding percpu tracing unit.
>> +This gets plugged in as a coresight sink device because the corresponding
>> +trace generators (ETE) are plugged in as source devices.
>> +
>> +The TRBE is not compliant to CoreSight architecture specifications, but is
>> +driven via the CoreSight driver framework to support the ETE (which is
>> +CoreSight compliant) integration.
>> +
>> +Sysfs files and directories
>> +---------------------------
>> +
>> +The TRBE devices appear on the existing coresight bus alongside the other
>> +coresight devices::
>> +
>> + >$ ls /sys/bus/coresight/devices
>> + trbe0 trbe1 trbe2 trbe3
>> +
>> +The ``trbe<N>`` named TRBEs are associated with a CPU.::
>> +
>> + >$ ls /sys/bus/coresight/devices/trbe0/
>> + irq align dbm
>
> You may want to remove irq here.
Sure, will do.
>
>> +
>> +*Key file items are:-*
>> + * ``align``: TRBE write pointer alignment
>> + * ``dbm``: TRBE updates memory with access and dirty flags
>> +
>> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
>> index e6962b1..2a9bfb7 100644
>> --- a/arch/arm64/include/asm/sysreg.h
>> +++ b/arch/arm64/include/asm/sysreg.h
>> @@ -97,6 +97,7 @@
>> #define SET_PSTATE_UAO(x) __emit_inst(0xd500401f | PSTATE_UAO | ((!!x) << PSTATE_Imm_shift))
>> #define SET_PSTATE_SSBS(x) __emit_inst(0xd500401f | PSTATE_SSBS | ((!!x) << PSTATE_Imm_shift))
>> #define SET_PSTATE_TCO(x) __emit_inst(0xd500401f | PSTATE_TCO | ((!!x) << PSTATE_Imm_shift))
>> +#define TSB_CSYNC __emit_inst(0xd503225f)
>> #define __SYS_BARRIER_INSN(CRm, op2, Rt) \
>> __emit_inst(0xd5000000 | sys_insn(0, 3, 3, (CRm), (op2)) | ((Rt) & 0x1f))
>> @@ -869,6 +870,7 @@
>> #define ID_AA64MMFR2_CNP_SHIFT 0
>> /* id_aa64dfr0 */
>> +#define ID_AA64DFR0_TRBE_SHIFT 44
>> #define ID_AA64DFR0_TRACE_FILT_SHIFT 40
>> #define ID_AA64DFR0_DOUBLELOCK_SHIFT 36
>> #define ID_AA64DFR0_PMSVER_SHIFT 32
>> diff --git a/drivers/hwtracing/coresight/Kconfig b/drivers/hwtracing/coresight/Kconfig
>> index c119824..0f5e101 100644
>> --- a/drivers/hwtracing/coresight/Kconfig
>> +++ b/drivers/hwtracing/coresight/Kconfig
>> @@ -156,6 +156,17 @@ config CORESIGHT_CTI
>> To compile this driver as a module, choose M here: the
>> module will be called coresight-cti.
>> +config CORESIGHT_TRBE
>> + bool "Trace Buffer Extension (TRBE) driver"
>> + depends on ARM64
>> + help
>> + This driver provides support for percpu Trace Buffer Extension (TRBE).
>> + TRBE always needs to be used along with its corresponding percpu ETE
>> + component. ETE generates trace data which is then captured with TRBE.
>> + Unlike traditional sink devices, TRBE is a CPU feature accessible via
>> + system registers. But its explicit dependency on the trace unit (ETE)
>> + requires it to be plugged in as a coresight sink device.
>> +
>> config CORESIGHT_CTI_INTEGRATION_REGS
>> bool "Access CTI CoreSight Integration Registers"
>> depends on CORESIGHT_CTI
>> diff --git a/drivers/hwtracing/coresight/Makefile b/drivers/hwtracing/coresight/Makefile
>> index f20e357..d608165 100644
>> --- a/drivers/hwtracing/coresight/Makefile
>> +++ b/drivers/hwtracing/coresight/Makefile
>> @@ -21,5 +21,6 @@ obj-$(CONFIG_CORESIGHT_STM) += coresight-stm.o
>> obj-$(CONFIG_CORESIGHT_CPU_DEBUG) += coresight-cpu-debug.o
>> obj-$(CONFIG_CORESIGHT_CATU) += coresight-catu.o
>> obj-$(CONFIG_CORESIGHT_CTI) += coresight-cti.o
>> +obj-$(CONFIG_CORESIGHT_TRBE) += coresight-trbe.o
>> coresight-cti-y := coresight-cti-core.o coresight-cti-platform.o \
>> coresight-cti-sysfs.o
>> diff --git a/drivers/hwtracing/coresight/coresight-trbe.c b/drivers/hwtracing/coresight/coresight-trbe.c
>> new file mode 100644
>> index 0000000..ba280e6
>> --- /dev/null
>> +++ b/drivers/hwtracing/coresight/coresight-trbe.c
>> @@ -0,0 +1,925 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * This driver enables Trace Buffer Extension (TRBE) as a per-cpu coresight
>> + * sink device, which could then pair with an appropriate per-cpu coresight
>> + * source device (ETE), thus generating the required trace data. Trace can
>> + * be enabled via the perf framework.
>> + *
>> + * Copyright (C) 2020 ARM Ltd.
>> + *
>> + * Author: Anshuman Khandual <[email protected]>
>> + */
>> +#define DRVNAME "arm_trbe"
>> +
>> +#define pr_fmt(fmt) DRVNAME ": " fmt
>> +
>> +#include "coresight-trbe.h"
>> +
>> +#define PERF_IDX2OFF(idx, buf) ((idx) % ((buf)->nr_pages << PAGE_SHIFT))
>> +
>> +/*
>> + * A padding packet that will help the user space tools
>> + * in skipping relevant sections in the captured trace
>> + * data which could not be decoded.
>
> You may want to add :
>
> TRBE doesn't support formatting the trace data, unlike the legacy CoreSight sinks
> and thus we use ETE trace packets to pad the sections of buffer.
Sure, will add.
>
>
>> + */
>> +#define ETE_IGNORE_PACKET 0x70
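So the block would read something like this after the respin (exact wording
still open):

/*
 * A padding packet that will help the user space tools
 * in skipping relevant sections in the captured trace
 * data which could not be decoded. TRBE doesn't support
 * formatting the trace data, unlike the legacy CoreSight
 * sinks, and thus we use ETE trace packets to pad the
 * sections of the buffer.
 */
#define ETE_IGNORE_PACKET		0x70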
>> +
>> +enum trbe_fault_action {
>> + TRBE_FAULT_ACT_WRAP,
>> + TRBE_FAULT_ACT_SPURIOUS,
>> + TRBE_FAULT_ACT_FATAL,
>> +};
>> +
>> +struct trbe_buf {
>> + unsigned long trbe_base;
>> + unsigned long trbe_limit;
>> + unsigned long trbe_write;
>> + int nr_pages;
>> + void **pages;
>> + bool snapshot;
>> + struct trbe_cpudata *cpudata;
>> +};
>> +
>> +struct trbe_cpudata {
>> + bool trbe_dbm;
>> + u64 trbe_align;
>> + int cpu;
>> + enum cs_mode mode;
>> + struct trbe_buf *buf;
>> + struct trbe_drvdata *drvdata;
>> +};
>> +
>> +struct trbe_drvdata {
>> + struct trbe_cpudata __percpu *cpudata;
>> + struct perf_output_handle __percpu **handle;
>> + struct hlist_node hotplug_node;
>> + int irq;
>> + cpumask_t supported_cpus;
>> + enum cpuhp_state trbe_online;
>> + struct platform_device *pdev;
>> +};
>> +
>> +static int trbe_alloc_node(struct perf_event *event)
>> +{
>> + if (event->cpu == -1)
>> + return NUMA_NO_NODE;
>> + return cpu_to_node(event->cpu);
>> +}
>> +
>> +static void set_trbe_flush(void)
>> +{
>> + asm(TSB_CSYNC);
>> + dsb(ish);
>> +}
>> +
>> +static void trbe_disable_and_drain_local(void)
>> +{
>> + write_sysreg_s(0, SYS_TRBLIMITR_EL1);
>> + isb();
>> + set_trbe_flush();
>> +}
>> +
>> +static void trbe_reset_local(void)
>> +{
>> + trbe_disable_and_drain_local();
>> + write_sysreg_s(0, SYS_TRBPTR_EL1);
>> + write_sysreg_s(0, SYS_TRBBASER_EL1);
>> + write_sysreg_s(0, SYS_TRBSR_EL1);
>> + isb();
>> +}
>> +
>> +/*
>> + * TRBE Buffer Management
>> + *
>> + * The TRBE buffer spans from the base pointer till the limit pointer. When enabled,
>> + * it starts writing trace data from the write pointer onward till the limit pointer.
>> + * When the write pointer reaches the address just before the limit pointer, it gets
>> + * wrapped around again to the base pointer. This is called a TRBE wrap event which
>> + * is accompanied by an IRQ.
>
> This is true for one of the modes of operation, the WRAP mode, which could be specified
> in the comment. e.g,
>
> This is called a TRBE wrap event, which generates a maintenance interrupt when operated
> in WRAP mode.
Sure, will change.
>
>
> The write pointer again starts writing trace data from
>> + * the base pointer until just before the limit pointer before getting wrapped again
>> + * with an IRQ and this process just goes on as long as the TRBE is enabled.
>> + *
>> + * Wrap around with an IRQ
>> + * ------ < ------ < ------- < ----- < -----
>> + * | |
>> + * ------ > ------ > ------- > ----- > -----
>> + *
>> + * +---------------+-----------------------+
>> + * | | |
>> + * +---------------+-----------------------+
>> + * Base Pointer Write Pointer Limit Pointer
>> + *
>> + * The base and limit pointers always need to be PAGE_SIZE aligned. But the write
>> + * pointer can be aligned to the implementation defined TRBE trace buffer alignment
>> + * as captured in trbe_cpudata->trbe_align.
>> + *
>> + *
>> + * head tail wakeup
>> + * +---------------------------------------+----- ~ ~ ------
>> + * |$$$$$$$|################|$$$$$$$$$$$$$$| |
>> + * +---------------------------------------+----- ~ ~ ------
>> + * Base Pointer Write Pointer Limit Pointer
>> + *
>> + * The perf_output_handle indices (head, tail, wakeup) are monotonically increasing
>> + * values which track all the driver writes and user reads from the perf auxiliary
>> + * buffer. Generally [head..tail] is the area where the driver can write into unless
>> + * the wakeup is behind the tail. The enabled TRBE buffer span needs to be adjusted
>> + * and configured depending on the perf_output_handle indices, so that the driver
>> + * does not overwrite areas in the perf auxiliary buffer which are being or yet to
>> + * be consumed by the user space. The enabled TRBE buffer area is a moving subset of
>> + * the allocated perf auxiliary buffer.
>> + */
>> +static void trbe_pad_buf(struct perf_output_handle *handle, int len)
>> +{
>> + struct trbe_buf *buf = etm_perf_sink_config(handle);
>> + u64 head = PERF_IDX2OFF(handle->head, buf);
>> +
>> + memset((void *) buf->trbe_base + head, ETE_IGNORE_PACKET, len);
>> + if (!buf->snapshot)
>> + perf_aux_output_skip(handle, len);
>> +}
>> +
>> +static unsigned long trbe_snapshot_offset(struct perf_output_handle *handle)
>> +{
>> + struct trbe_buf *buf = etm_perf_sink_config(handle);
>> + u64 head = PERF_IDX2OFF(handle->head, buf);
>> + u64 limit = buf->nr_pages * PAGE_SIZE;
>> +
>> + /*
>> + * The trace format isn't parseable in reverse, so clamp the limit
>> + * to half of the buffer size in snapshot mode so that the worst
>> + * case is half a buffer of records, as opposed to a single record.
>> + */
>
> That is not true. We can pad the buffer with Ignore packets and the decoder could
> skip forward until it finds an alignment synchronization packet. So, we could use
> the full size of the buffer, unlike the SPE.
Will rework this and update.
>
>> + if (head < limit >> 1)
>> + limit >>= 1;
>> +
>> + return limit;
>> +}
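As a rough sketch of the rework (assuming the decoder can always resync on
the next alignment synchronization packet as you describe), the clamp just
goes away and snapshot mode uses the whole buffer:

static unsigned long trbe_snapshot_offset(struct perf_output_handle *handle)
{
	struct trbe_buf *buf = etm_perf_sink_config(handle);

	/*
	 * ETE trace has alignment synchronization packets and the gaps are
	 * padded with ETE_IGNORE_PACKET, so the entire buffer can be used.
	 */
	return buf->nr_pages * PAGE_SIZE;
}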
>> +
>> +/*
>> + * TRBE Limit Calculation
>> + *
>> + * The following markers are used to illustrate various TRBE buffer situations.
>> + *
>> + * $$$$ - Data area, unconsumed captured trace data, not to be overridden
>> + * #### - Free area, enabled, trace will be written
>> + * %%%% - Free area, disabled, trace will not be written
>> + * ==== - Free area, padded with ETE_IGNORE_PACKET, trace will be skipped
>> + */
>
> Thanks for the nice ASCII art, it really helps to understand the scenarios.
>
>> +static unsigned long trbe_normal_offset(struct perf_output_handle *handle)
>> +{
>> + struct trbe_buf *buf = etm_perf_sink_config(handle);
>> + struct trbe_cpudata *cpudata = buf->cpudata;
>> + const u64 bufsize = buf->nr_pages * PAGE_SIZE;
>> + u64 limit = bufsize;
>> + u64 head, tail, wakeup;
>> +
>> + head = PERF_IDX2OFF(handle->head, buf);
>> +
>> + /*
>> + * head
>> + * ------->|
>> + * |
>> + * head TRBE align tail
>> + * +----|-------|---------------|-------+
>> + * |$$$$|=======|###############|$$$$$$$|
>> + * +----|-------|---------------|-------+
>> + * trbe_base trbe_base + nr_pages
>> + *
>> + * Perf aux buffer output head position can be misaligned depending on
>> + * various factors including user space reads. In case misaligned, head
>> + * needs to be aligned before TRBE can be configured. Pad the alignment
>> + * gap with ETE_IGNORE_PACKET bytes that will be ignored by user tools
>> + * and skip this section thus advancing the head.
>> + */
>> + if (!IS_ALIGNED(head, cpudata->trbe_align)) {
>> + unsigned long delta = roundup(head, cpudata->trbe_align) - head;
>> +
>> + delta = min(delta, handle->size);
>> + trbe_pad_buf(handle, delta);
>> + head = PERF_IDX2OFF(handle->head, buf);
>> + }
>> +
>> + /*
>> + * head = tail (size = 0)
>> + * +----|-------------------------------+
>> + * |$$$$|$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ |
>> + * +----|-------------------------------+
>> + * trbe_base trbe_base + nr_pages
>> + *
>> + * Perf aux buffer does not have any space for the driver to write into.
>> + * Just communicate trace truncation event to the user space by marking
>> + * it with PERF_AUX_FLAG_TRUNCATED.
>> + */
>> + if (!handle->size) {
>> + perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
>> + return 0;
>> + }
>> +
>> + /* Compute the tail and wakeup indices now that we've aligned head */
>> + tail = PERF_IDX2OFF(handle->head + handle->size, buf);
>> + wakeup = PERF_IDX2OFF(handle->wakeup, buf);
>> +
>> + /*
>> + * Lets calculate the buffer area which TRBE could write into. There
>> + * are three possible scenarios here. Limit needs to be aligned with
>> + * PAGE_SIZE per the TRBE requirement. Always avoid clobbering the
>> + * unconsumed data.
>> + *
>> + * 1) head < tail
>> + *
>> + * head tail
>> + * +----|-----------------------|-------+
>> + * |$$$$|#######################|$$$$$$$|
>> + * +----|-----------------------|-------+
>> + * trbe_base limit trbe_base + nr_pages
>> + *
>> + * TRBE could write into [head..tail] area. Unless the tail is right at
>> + * the end of the buffer, neither a wrap around nor an IRQ is expected
>> + * while being enabled.
>> + *
>> + * 2) head == tail
>> + *
>> + * head = tail (size > 0)
>> + * +----|-------------------------------+
>> + * |%%%%|###############################|
>> + * +----|-------------------------------+
>> + * trbe_base limit = trbe_base + nr_pages
>> + *
>> + * TRBE should just write into [head..base + nr_pages] area even though
>> + * the entire buffer is empty. Reason being, when the trace reaches the
>> + * end of the buffer, it will just wrap around with an IRQ giving an
>> + * opportunity to reconfigure the buffer.
>> + *
>> + * 3) tail < head
>> + *
>> + * tail head
>> + * +----|-----------------------|-------+
>> + * |%%%%|$$$$$$$$$$$$$$$$$$$$$$$|#######|
>> + * +----|-----------------------|-------+
>> + * trbe_base limit = trbe_base + nr_pages
>> + *
>> + * TRBE should just write into [head..base + nr_pages] area even though
>> + * the [trbe_base..tail] is also empty. Reason being, when the trace
>> + * reaches the end of the buffer, it will just wrap around with an IRQ
>> + * giving an opportunity to reconfigure the buffer.
>> + */
>> + if (head < tail)
>> + limit = round_down(tail, PAGE_SIZE);
>> +
>> + /*
>> + * Wakeup may be arbitrarily far into the future. If it's not in the
>> + * current generation, either we'll wrap before hitting it, or it's
>> + * in the past and has been handled already.
>> + *
>> + * If there's a wakeup before we wrap, arrange to be woken up by the
>> + * page boundary following it. Keep the tail boundary if that's lower.
>> + *
>> + * head wakeup tail
>> + * +----|---------------|-------|-------+
>> + * |$$$$|###############|%%%%%%%|$$$$$$$|
>> + * +----|---------------|-------|-------+
>> + * trbe_base limit trbe_base + nr_pages
>> + */
>> + if (handle->wakeup < (handle->head + handle->size) && head <= wakeup)
>> + limit = min(limit, round_up(wakeup, PAGE_SIZE));
>> +
>> + /*
>> + * There are two situations when this can happen i.e. the limit is before
>> + * the head and hence TRBE cannot be configured.
>> + *
>> + * 1) head < tail (aligned down with PAGE_SIZE) and also they are both
>> + * within the same PAGE size range.
>> + *
>> + * PAGE_SIZE
>> + * |----------------------|
>> + *
>> + * limit head tail
>> + * +------------|------|--------|-------+
>> + * |$$$$$$$$$$$$$$$$$$$|========|$$$$$$$|
>> + * +------------|------|--------|-------+
>> + * trbe_base trbe_base + nr_pages
>> + *
>> + * 2) head < wakeup (aligned up with PAGE_SIZE) < tail and also both
>> + * head and wakeup are within same PAGE size range.
>> + *
>> + * PAGE_SIZE
>> + * |----------------------|
>> + *
>> + * limit head wakeup tail
>> + * +----|------|-------|--------|-------+
>> + * |$$$$$$$$$$$|=======|========|$$$$$$$|
>> + * +----|------|-------|--------|-------+
>> + * trbe_base trbe_base + nr_pages
>> + */
>> + if (limit > head)
>> + return limit;
>> +
>> + trbe_pad_buf(handle, handle->size);
>> + perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
>> + return 0;
>> +}
>> +
>> +static unsigned long get_trbe_limit(struct perf_output_handle *handle)
>
> nit: The naming is a bit confusing with get_trbe_limit() and get_trbe_limit_pointer().
> One computes the TRBE buffer limit and the other reads the hardware Limit pointer.
> It would be good if follow a scheme for the namings.
>
> e.g, trbe_limit_pointer() , trbe_base_pointer(), trbe_<register>_<name> for anything
> that reads the hardware register.
The current scheme is in the form get_trbe_XXX() where XXX
is a TRBE hardware component e.g.
get_trbe_base_pointer()
get_trbe_limit_pointer()
get_trbe_write_pointer()
get_trbe_ec()
get_trbe_bsc()
get_trbe_address_align()
get_trbe_flag_update()
>
> Or may be rename the get_trbe_limit() to compute_trbe_buffer_limit()
This makes it clear, will change.
>
>> +{
>> + struct trbe_buf *buf = etm_perf_sink_config(handle);
>> + unsigned long offset;
>> +
>> + if (buf->snapshot)
>> + offset = trbe_snapshot_offset(handle);
>> + else
>> + offset = trbe_normal_offset(handle);
>> + return buf->trbe_base + offset;
>> +}
>> +
>> +static void clear_trbe_state(void)
>
> nit: The name doesn't give much clue about what it is doing, especially, given
> the following "set_trbe_state()" which does completely different from this "clear"
> operation.
I agree that these names could have been better.
s/clear_trbe_state/trbe_reset_perf_state - Clears TRBE from current perf config
s/set_trbe_state/trbe_prepare_perf_state - Prepares TRBE for the next perf config
>
> I would rather open code this with a write of 0 to trbsr in the caller.
>
>> +{
>> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
>> +
>> + WARN_ON(is_trbe_enabled());
>> + trbsr &= ~TRBSR_IRQ;
>> + trbsr &= ~TRBSR_TRG;
>> + trbsr &= ~TRBSR_WRAP;
>> + trbsr &= ~(TRBSR_EC_MASK << TRBSR_EC_SHIFT);
>> + trbsr &= ~(TRBSR_BSC_MASK << TRBSR_BSC_SHIFT);
>> + trbsr &= ~(TRBSR_FSC_MASK << TRBSR_FSC_SHIFT);
>
> BSC and FSC are the same fields under MSS, with their meanings determined by the EC field.
Could just drop the FSC part if required.
>
> Could we simply write 0 to the register ?
I would really like to avoid that. This function clearly enumerates all the
individual bit fields being cleared while resetting, as well as preparing,
the TRBE for the next perf session. Converting this into a plain 0 write to
SYS_TRBSR_EL1 would lose that enumeration, and the only thing it would save
is the register read.
>
>> + write_sysreg_s(trbsr, SYS_TRBSR_EL1);
>> +}
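FWIW, with just the FSC line dropped (since BSC and FSC are the same MSS
bits, as you point out) and modulo the rename discussed above, the helper
would reduce to:

static void clear_trbe_state(void)
{
	u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);

	WARN_ON(is_trbe_enabled());
	trbsr &= ~TRBSR_IRQ;
	trbsr &= ~TRBSR_TRG;
	trbsr &= ~TRBSR_WRAP;
	trbsr &= ~(TRBSR_EC_MASK << TRBSR_EC_SHIFT);
	trbsr &= ~(TRBSR_BSC_MASK << TRBSR_BSC_SHIFT);
	write_sysreg_s(trbsr, SYS_TRBSR_EL1);
}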
>> +
>> +static void set_trbe_state(void)
>> +{
>> + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
>> +
>> + trblimitr &= ~TRBLIMITR_NVM;
>> + trblimitr &= ~(TRBLIMITR_FILL_MODE_MASK << TRBLIMITR_FILL_MODE_SHIFT);
>> + trblimitr &= ~(TRBLIMITR_TRIG_MODE_MASK << TRBLIMITR_TRIG_MODE_SHIFT);
>> + trblimitr |= (TRBE_FILL_STOP & TRBLIMITR_FILL_MODE_MASK) << TRBLIMITR_FILL_MODE_SHIFT;
>> + trblimitr |= (TRBE_TRIGGER_IGNORE & TRBLIMITR_TRIG_MODE_MASK) << TRBLIMITR_TRIG_MODE_SHIFT;
>> + write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
>
> Do we need to read-copy-update here ? Could we simply write 0 ?
> Same as above comment, could we not simply opencode it at the caller ?
> Clearly the names don't help.
Will change the names as proposed or something better, but let's leave
these functions as is. Besides, TRBE_TRIGGER_IGNORE has a non-zero
value (i.e. 3), so writing all 0s into SYS_TRBLIMITR_EL1 will not be ideal.
>
>> +}
>> +
>> +static void trbe_enable_hw(struct trbe_buf *buf)
>> +{
>> + WARN_ON(buf->trbe_write < buf->trbe_base);
>> + WARN_ON(buf->trbe_write >= buf->trbe_limit);
>> + set_trbe_disabled();
>> + clear_trbe_state();
>> + set_trbe_state();
>> + isb();
>> + set_trbe_base_pointer(buf->trbe_base);
>> + set_trbe_limit_pointer(buf->trbe_limit);
>> + set_trbe_write_pointer(buf->trbe_write);
>
> Where do we set the fill mode ?
TRBE_FILL_STOP has already been configured in set_trbe_state().
>
>> + isb();
>> + set_trbe_running();
>> + set_trbe_enabled();
>> + set_trbe_flush();
>> +}
>> +
>> +static void *arm_trbe_alloc_buffer(struct coresight_device *csdev,
>> + struct perf_event *event, void **pages,
>> + int nr_pages, bool snapshot)
>> +{
>> + struct trbe_buf *buf;
>> + struct page **pglist;
>> + int i;
>> +
>> + if ((nr_pages < 2) || (snapshot && (nr_pages & 1)))
>> + return NULL;
>> +
>> + buf = kzalloc_node(sizeof(*buf), GFP_KERNEL, trbe_alloc_node(event));
>> + if (IS_ERR(buf))
>> + return ERR_PTR(-ENOMEM);
>> +
>> + pglist = kcalloc(nr_pages, sizeof(*pglist), GFP_KERNEL);
>> + if (IS_ERR(pglist)) {
>> + kfree(buf);
>> + return ERR_PTR(-ENOMEM);
>> + }
>> +
>> + for (i = 0; i < nr_pages; i++)
>> + pglist[i] = virt_to_page(pages[i]);
>> +
>> + buf->trbe_base = (unsigned long) vmap(pglist, nr_pages, VM_MAP, PAGE_KERNEL);
>> + if (IS_ERR((void *) buf->trbe_base)) {
>> + kfree(pglist);
>> + kfree(buf);
>> + return ERR_PTR(buf->trbe_base);
>> + }
>> + buf->trbe_limit = buf->trbe_base + nr_pages * PAGE_SIZE;
>> + buf->trbe_write = buf->trbe_base;
>> + buf->snapshot = snapshot;
>> + buf->nr_pages = nr_pages;
>> + buf->pages = pages;
>> + kfree(pglist);
>> + return buf;
>> +}
>> +
>> +void arm_trbe_free_buffer(void *config)
>> +{
>> + struct trbe_buf *buf = config;
>> +
>> + vunmap((void *) buf->trbe_base);
>> + kfree(buf);
>> +}
>> +
>> +static unsigned long arm_trbe_update_buffer(struct coresight_device *csdev,
>> + struct perf_output_handle *handle,
>> + void *config)
>> +{
>> + struct trbe_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
>> + struct trbe_cpudata *cpudata = dev_get_drvdata(&csdev->dev);
>> + struct trbe_buf *buf = config;
>> + unsigned long size, offset;
>> +
>> + WARN_ON(buf->cpudata != cpudata);
>> + WARN_ON(cpudata->cpu != smp_processor_id());
>> + WARN_ON(cpudata->drvdata != drvdata);
>> + if (cpudata->mode != CS_MODE_PERF)
>> + return -EINVAL;
>> +
>> + offset = get_trbe_write_pointer() - get_trbe_base_pointer();
>> + size = offset - PERF_IDX2OFF(handle->head, buf);
>> + if (buf->snapshot)
>> + handle->head += size;
>
> You may want to add in a comment why we keep the TRBE disabled here.
Good point, will add.
>
>> + trbe_reset_local();
>> + return size;
>> +}
>> +
>> +static int arm_trbe_enable(struct coresight_device *csdev, u32 mode, void *data)
>> +{
>> + struct trbe_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
>> + struct trbe_cpudata *cpudata = dev_get_drvdata(&csdev->dev);
>> + struct perf_output_handle *handle = data;
>> + struct trbe_buf *buf = etm_perf_sink_config(handle);
>> +
>> + WARN_ON(cpudata->cpu != smp_processor_id());
>> + WARN_ON(cpudata->drvdata != drvdata);
>> + if (mode != CS_MODE_PERF)
>> + return -EINVAL;
>> +
>> + *this_cpu_ptr(drvdata->handle) = handle;
>> + cpudata->buf = buf;
>> + cpudata->mode = mode;
>> + buf->cpudata = cpudata;
>> + buf->trbe_write = buf->trbe_base + PERF_IDX2OFF(handle->head, buf);
>> + buf->trbe_limit = get_trbe_limit(handle);
>> + if (buf->trbe_limit == buf->trbe_base) {
>> + trbe_disable_and_drain_local();
>> + return 0;
>> + }
>> + trbe_enable_hw(buf);
>> + return 0;
>> +}
>> +
>> +static int arm_trbe_disable(struct coresight_device *csdev)
>> +{
>> + struct trbe_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
>> + struct trbe_cpudata *cpudata = dev_get_drvdata(&csdev->dev);
>> + struct trbe_buf *buf = cpudata->buf;
>> +
>> + WARN_ON(buf->cpudata != cpudata);
>> + WARN_ON(cpudata->cpu != smp_processor_id());
>> + WARN_ON(cpudata->drvdata != drvdata);
>> + if (cpudata->mode != CS_MODE_PERF)
>> + return -EINVAL;
>> +
>> + trbe_disable_and_drain_local();
>> + buf->cpudata = NULL;
>> + cpudata->buf = NULL;
>> + cpudata->mode = CS_MODE_DISABLED;
>> + return 0;
>> +}
>> +
>> +static void trbe_handle_fatal(struct perf_output_handle *handle)
>> +{
>> + perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
>> + perf_aux_output_end(handle, 0);
>> + trbe_disable_and_drain_local();
>> +}
>> +
>> +static void trbe_handle_spurious(struct perf_output_handle *handle)
>> +{
>> + struct trbe_buf *buf = etm_perf_sink_config(handle);
>> +
>> + buf->trbe_write = buf->trbe_base + PERF_IDX2OFF(handle->head, buf);
>> + buf->trbe_limit = get_trbe_limit(handle);
>> + if (buf->trbe_limit == buf->trbe_base) {
>> + trbe_disable_and_drain_local();
>> + return;
>> + }
>> + trbe_enable_hw(buf);
>> +}
>> +
>> +static void trbe_handle_overflow(struct perf_output_handle *handle)
>> +{
>> + struct perf_event *event = handle->event;
>> + struct trbe_buf *buf = etm_perf_sink_config(handle);
>> + unsigned long offset, size;
>> + struct etm_event_data *event_data;
>> +
>> + offset = get_trbe_limit_pointer() - get_trbe_base_pointer();
>> + size = offset - PERF_IDX2OFF(handle->head, buf);
>> + if (buf->snapshot)
>> + handle->head = offset;
>> + perf_aux_output_end(handle, size);
>> +
>> + event_data = perf_aux_output_begin(handle, event);
>> + if (!event_data) {
>> + event->hw.state |= PERF_HES_STOPPED;
>> + trbe_disable_and_drain_local();
>> + perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
>> + return;
>> + }
>> + buf->trbe_write = buf->trbe_base;
>> + buf->trbe_limit = get_trbe_limit(handle);
>> + if (buf->trbe_limit == buf->trbe_base) {
>> + trbe_disable_and_drain_local();
>> + return;
>> + }
>> + *this_cpu_ptr(buf->cpudata->drvdata->handle) = handle;
>> + trbe_enable_hw(buf);
>> +}
>> +
>> +static bool is_perf_trbe(struct perf_output_handle *handle)
>> +{
>> + struct trbe_buf *buf = etm_perf_sink_config(handle);
>> + struct trbe_cpudata *cpudata = buf->cpudata;
>> + struct trbe_drvdata *drvdata = cpudata->drvdata;
>> + int cpu = smp_processor_id();
>> +
>> + WARN_ON(buf->trbe_base != get_trbe_base_pointer());
>> + WARN_ON(buf->trbe_limit != get_trbe_limit_pointer());
>> +
>> + if (cpudata->mode != CS_MODE_PERF)
>> + return false;
>> +
>> + if (cpudata->cpu != cpu)
>> + return false;
>> +
>> + if (!cpumask_test_cpu(cpu, &drvdata->supported_cpus))
>> + return false;
>> +
>> + return true;
>> +}
>> +
>> +static enum trbe_fault_action trbe_get_fault_act(struct perf_output_handle *handle)
>> +{
>> + int ec = get_trbe_ec();
>> + int bsc = get_trbe_bsc();
>> +
>> + WARN_ON(is_trbe_running());
>> + if (is_trbe_trg() || is_trbe_abort())
>> + return TRBE_FAULT_ACT_FATAL;
>> +
>> + if ((ec == TRBE_EC_STAGE1_ABORT) || (ec == TRBE_EC_STAGE2_ABORT))
>> + return TRBE_FAULT_ACT_FATAL;
>> +
>> + if (is_trbe_wrap() && (ec == TRBE_EC_OTHERS) && (bsc == TRBE_BSC_FILLED)) {
>> + if (get_trbe_write_pointer() == get_trbe_base_pointer())
>> + return TRBE_FAULT_ACT_WRAP;
>> + }
>> + return TRBE_FAULT_ACT_SPURIOUS;
>> +}
>> +
>> +static irqreturn_t arm_trbe_irq_handler(int irq, void *dev)
>> +{
>> + struct perf_output_handle **handle_ptr = dev;
>> + struct perf_output_handle *handle = *handle_ptr;
>> + enum trbe_fault_action act;
>> +
>> + WARN_ON(!is_trbe_irq());
>> + clr_trbe_irq();
>> +
>> + if (!perf_get_aux(handle))
>> + return IRQ_NONE;
>> +
>> + if (!is_perf_trbe(handle))
>> + return IRQ_NONE;
>> +
>> + irq_work_run();
>> +
>> + act = trbe_get_fault_act(handle);
>> + switch (act) {
>> + case TRBE_FAULT_ACT_WRAP:
>> + trbe_handle_overflow(handle);
>> + break;
>> + case TRBE_FAULT_ACT_SPURIOUS:
>> + trbe_handle_spurious(handle);
>> + break;
>> + case TRBE_FAULT_ACT_FATAL:
>> + trbe_handle_fatal(handle);
>> + break;
>> + }
>> + return IRQ_HANDLED;
>> +}
>> +
>> +static const struct coresight_ops_sink arm_trbe_sink_ops = {
>> + .enable = arm_trbe_enable,
>> + .disable = arm_trbe_disable,
>> + .alloc_buffer = arm_trbe_alloc_buffer,
>> + .free_buffer = arm_trbe_free_buffer,
>> + .update_buffer = arm_trbe_update_buffer,
>> +};
>> +
>> +static const struct coresight_ops arm_trbe_cs_ops = {
>> + .sink_ops = &arm_trbe_sink_ops,
>> +};
>> +
>> +static ssize_t align_show(struct device *dev, struct device_attribute *attr, char *buf)
>> +{
>> + struct trbe_cpudata *cpudata = dev_get_drvdata(dev);
>> +
>> + return sprintf(buf, "%llx\n", cpudata->trbe_align);
>> +}
>> +static DEVICE_ATTR_RO(align);
>> +
>> +static ssize_t dbm_show(struct device *dev, struct device_attribute *attr, char *buf)
>> +{
>> + struct trbe_cpudata *cpudata = dev_get_drvdata(dev);
>> +
>> + return sprintf(buf, "%d\n", cpudata->trbe_dbm);
>> +}
>> +static DEVICE_ATTR_RO(dbm);
>> +
>> +static struct attribute *arm_trbe_attrs[] = {
>> + &dev_attr_align.attr,
>> + &dev_attr_dbm.attr,
>> + NULL,
>> +};
>> +
>> +static const struct attribute_group arm_trbe_group = {
>> + .attrs = arm_trbe_attrs,
>> +};
>> +
>> +static const struct attribute_group *arm_trbe_groups[] = {
>> + &arm_trbe_group,
>> + NULL,
>> +};
>> +
>> +static void arm_trbe_probe_coresight_cpu(void *info)
>> +{
>> + struct trbe_drvdata *drvdata = info;
>> + struct coresight_desc desc = { 0 };
>> + int cpu = smp_processor_id();
>> + struct trbe_cpudata *cpudata = per_cpu_ptr(drvdata->cpudata, cpu);
>> + struct coresight_device *trbe_csdev = per_cpu(csdev_sink, cpu);
>> + struct device *dev;
>> +
>> + if (WARN_ON(!cpudata))
>> + goto cpu_clear;
>> +
>> + if (trbe_csdev)
>> + return;
>> +
>> + cpudata->cpu = smp_processor_id();
>> + cpudata->drvdata = drvdata;
>> + dev = &cpudata->drvdata->pdev->dev;
>> +
>> + if (!is_trbe_available()) {
>> + pr_err("TRBE is not implemented on cpu %d\n", cpudata->cpu);
>> + goto cpu_clear;
>> + }
>> +
>> + if (!is_trbe_programmable()) {
>> + pr_err("TRBE is owned in higher exception level on cpu %d\n", cpudata->cpu);
>> + goto cpu_clear;
>> + }
>> + desc.name = devm_kasprintf(dev, GFP_KERNEL, "%s%d", DRVNAME, smp_processor_id());
>> + if (IS_ERR(desc.name))
>> + goto cpu_clear;
>> +
>> + desc.type = CORESIGHT_DEV_TYPE_SINK;
>> + desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_PERCPU_SYSMEM;
>> + desc.ops = &arm_trbe_cs_ops;
>> + desc.pdata = dev_get_platdata(dev);
>> + desc.groups = arm_trbe_groups;
>> + desc.dev = dev;
>> + trbe_csdev = coresight_register(&desc);
>> + if (IS_ERR(trbe_csdev))
>> + goto cpu_clear;
>> +
>> + dev_set_drvdata(&trbe_csdev->dev, cpudata);
>> + cpudata->trbe_dbm = get_trbe_flag_update();
>> + cpudata->trbe_align = 1ULL << get_trbe_address_align();
>> + if (cpudata->trbe_align > SZ_2K) {
>> + pr_err("Unsupported alignment on cpu %d\n", cpudata->cpu);
>> + goto cpu_clear;
>> + }
>> + per_cpu(csdev_sink, cpu) = trbe_csdev;
>> + trbe_reset_local();
>> + enable_percpu_irq(drvdata->irq, IRQ_TYPE_NONE);
>> + return;
>> +cpu_clear:
>> + cpumask_clear_cpu(cpudata->cpu, &cpudata->drvdata->supported_cpus);
>> +}
>> +
>> +static void arm_trbe_remove_coresight_cpu(void *info)
>> +{
>> + int cpu = smp_processor_id();
>> + struct trbe_drvdata *drvdata = info;
>> + struct trbe_cpudata *cpudata = per_cpu_ptr(drvdata->cpudata, cpu);
>> + struct coresight_device *trbe_csdev = per_cpu(csdev_sink, cpu);
>> +
>> + if (trbe_csdev) {
>> + coresight_unregister(trbe_csdev);
>> + cpudata->drvdata = NULL;
>> + per_cpu(csdev_sink, cpu) = NULL;
>> + }
>> + disable_percpu_irq(drvdata->irq);
>> + trbe_reset_local();
>> +}
>> +
>> +static int arm_trbe_probe_coresight(struct trbe_drvdata *drvdata)
>> +{
>> + drvdata->cpudata = alloc_percpu(typeof(*drvdata->cpudata));
>> + if (IS_ERR(drvdata->cpudata))
>> + return PTR_ERR(drvdata->cpudata);
>> +
>> + arm_trbe_probe_coresight_cpu(drvdata);
>> + smp_call_function_many(&drvdata->supported_cpus, arm_trbe_probe_coresight_cpu, drvdata, 1);
>> + return 0;
>> +}
>> +
>> +static int arm_trbe_remove_coresight(struct trbe_drvdata *drvdata)
>> +{
>> + arm_trbe_remove_coresight_cpu(drvdata);
>> + smp_call_function_many(&drvdata->supported_cpus, arm_trbe_remove_coresight_cpu, drvdata, 1);
>> + free_percpu(drvdata->cpudata);
>> + return 0;
>> +}
>> +
>> +static int arm_trbe_cpu_startup(unsigned int cpu, struct hlist_node *node)
>> +{
>> + struct trbe_drvdata *drvdata = hlist_entry_safe(node, struct trbe_drvdata, hotplug_node);
>> +
>> + if (cpumask_test_cpu(cpu, &drvdata->supported_cpus)) {
>> + if (!per_cpu(csdev_sink, cpu) && (system_state == SYSTEM_RUNNING)) {
>
> Why is the system_state check relevant here ?
I had a concern that arm_trbe_probe_coresight_cpu() invocations from
arm_trbe_cpu_startup() might race with its invocations during boot from
arm_trbe_device_probe(). Checking system_state ensures that a complete TRBE
probe on a given cpu happens only after boot is complete. But if the race
condition is really never possible, this check can just be dropped.
>
>> + arm_trbe_probe_coresight_cpu(drvdata);
>> + } else {
>> + trbe_reset_local();
>> + enable_percpu_irq(drvdata->irq, IRQ_TYPE_NONE);
>> + }
>> + }
>> + return 0;
>> +}
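If the race really cannot happen and the check gets dropped, the startup
path would reduce to something like:

static int arm_trbe_cpu_startup(unsigned int cpu, struct hlist_node *node)
{
	struct trbe_drvdata *drvdata = hlist_entry_safe(node, struct trbe_drvdata, hotplug_node);

	if (cpumask_test_cpu(cpu, &drvdata->supported_cpus)) {
		if (!per_cpu(csdev_sink, cpu)) {
			arm_trbe_probe_coresight_cpu(drvdata);
		} else {
			trbe_reset_local();
			enable_percpu_irq(drvdata->irq, IRQ_TYPE_NONE);
		}
	}
	return 0;
}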
>> +
>> +static int arm_trbe_cpu_teardown(unsigned int cpu, struct hlist_node *node)
>> +{
>> + struct trbe_drvdata *drvdata = hlist_entry_safe(node, struct trbe_drvdata, hotplug_node);
>> +
>> + if (cpumask_test_cpu(cpu, &drvdata->supported_cpus)) {
>> + disable_percpu_irq(drvdata->irq);
>> + trbe_reset_local();
>> + }
>> + return 0;
>> +}
>> +
>> +static int arm_trbe_probe_cpuhp(struct trbe_drvdata *drvdata)
>> +{
>> + enum cpuhp_state trbe_online;
>> +
>> + trbe_online = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, DRVNAME,
>> + arm_trbe_cpu_startup, arm_trbe_cpu_teardown);
>> + if (trbe_online < 0)
>> + return -EINVAL;
>> +
>> + if (cpuhp_state_add_instance(trbe_online, &drvdata->hotplug_node))
>> + return -EINVAL;
>> +
>> + drvdata->trbe_online = trbe_online;
>> + return 0;
>> +}
>> +
>> +static void arm_trbe_remove_cpuhp(struct trbe_drvdata *drvdata)
>> +{
>> + cpuhp_remove_multi_state(drvdata->trbe_online);
>> +}
>> +
>> +static int arm_trbe_probe_irq(struct platform_device *pdev,
>> + struct trbe_drvdata *drvdata)
>> +{
>> + drvdata->irq = platform_get_irq(pdev, 0);
>> + if (!drvdata->irq) {
>> + pr_err("IRQ not found for the platform device\n");
>> + return -ENXIO;
>> + }
>> +
>> + if (!irq_is_percpu(drvdata->irq)) {
>> + pr_err("IRQ is not a PPI\n");
>> + return -EINVAL;
>> + }
>> +
>> + if (irq_get_percpu_devid_partition(drvdata->irq, &drvdata->supported_cpus))
>> + return -EINVAL;
>> +
>> + drvdata->handle = alloc_percpu(typeof(*drvdata->handle));
>> + if (!drvdata->handle)
>> + return -ENOMEM;
>> +
>> + if (request_percpu_irq(drvdata->irq, arm_trbe_irq_handler, DRVNAME, drvdata->handle)) {
>> + free_percpu(drvdata->handle);
>> + return -EINVAL;
>> + }
>> + return 0;
>> +}
>> +
>> +static void arm_trbe_remove_irq(struct trbe_drvdata *drvdata)
>> +{
>> + free_percpu_irq(drvdata->irq, drvdata->handle);
>> + free_percpu(drvdata->handle);
>> +}
>> +
>> +static int arm_trbe_device_probe(struct platform_device *pdev)
>> +{
>> + struct coresight_platform_data *pdata;
>> + struct trbe_drvdata *drvdata;
>> + struct device *dev = &pdev->dev;
>> + int ret;
>> +
>> + drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
>> + if (IS_ERR(drvdata))
>> + return -ENOMEM;
>> +
>> + pdata = coresight_get_platform_data(dev);
>> + if (IS_ERR(pdata)) {
>> + kfree(drvdata);
>> + return -ENOMEM;
>> + }
>> +
>> + dev_set_drvdata(dev, drvdata);
>> + dev->platform_data = pdata;
>> + drvdata->pdev = pdev;
>> + ret = arm_trbe_probe_irq(pdev, drvdata);
>> + if (ret)
>> + goto irq_failed;
>> +
>> + ret = arm_trbe_probe_coresight(drvdata);
>> + if (ret)
>> + goto probe_failed;
>> +
>> + ret = arm_trbe_probe_cpuhp(drvdata);
>> + if (ret)
>> + goto cpuhp_failed;
>> +
>> + return 0;
>> +cpuhp_failed:
>> + arm_trbe_remove_coresight(drvdata);
>> +probe_failed:
>> + arm_trbe_remove_irq(drvdata);
>> +irq_failed:
>> + kfree(pdata);
>> + kfree(drvdata);
>> + return ret;
>> +}
>> +
>> +static int arm_trbe_device_remove(struct platform_device *pdev)
>> +{
>> + struct coresight_platform_data *pdata = dev_get_platdata(&pdev->dev);
>> + struct trbe_drvdata *drvdata = platform_get_drvdata(pdev);
>> +
>> + arm_trbe_remove_coresight(drvdata);
>> + arm_trbe_remove_cpuhp(drvdata);
>> + arm_trbe_remove_irq(drvdata);
>> + kfree(pdata);
>> + kfree(drvdata);
>> + return 0;
>> +}
>> +
>> +static const struct of_device_id arm_trbe_of_match[] = {
>> + { .compatible = "arm,trace-buffer-extension", .data = (void *)1 },
>
> What is the significance of .data = 1 ?
I guess this can also be dropped.
>
>> + {},
>> +};
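With the .data dropped, the table would simply be:

static const struct of_device_id arm_trbe_of_match[] = {
	{ .compatible = "arm,trace-buffer-extension" },
	{},
};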
>> +MODULE_DEVICE_TABLE(of, arm_trbe_of_match);
>> +
>> +static struct platform_driver arm_trbe_driver = {
>> + .driver = {
>> + .name = DRVNAME,
>> + .of_match_table = of_match_ptr(arm_trbe_of_match),
>> + .suppress_bind_attrs = true,
>> + },
>> + .probe = arm_trbe_device_probe,
>> + .remove = arm_trbe_device_remove,
>> +};
>> +
>> +static int __init arm_trbe_init(void)
>> +{
>> + int ret;
>> +
>> + ret = platform_driver_register(&arm_trbe_driver);
>> + if (!ret)
>> + return 0;
>> +
>> + pr_err("Error registering %s platform driver\n", DRVNAME);
>> + return ret;
>> +}
>> +
>> +static void __exit arm_trbe_exit(void)
>> +{
>> + platform_driver_unregister(&arm_trbe_driver);
>> +}
>> +module_init(arm_trbe_init);
>> +module_exit(arm_trbe_exit);
>> +
>> +MODULE_AUTHOR("Anshuman Khandual <[email protected]>");
>> +MODULE_DESCRIPTION("Arm Trace Buffer Extension (TRBE) driver");
>> +MODULE_LICENSE("GPL v2");
>> diff --git a/drivers/hwtracing/coresight/coresight-trbe.h b/drivers/hwtracing/coresight/coresight-trbe.h
>> new file mode 100644
>> index 0000000..e956439
>> --- /dev/null
>> +++ b/drivers/hwtracing/coresight/coresight-trbe.h
>> @@ -0,0 +1,248 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * This contains all required hardware related helper functions for
>> + * Trace Buffer Extension (TRBE) driver in the coresight framework.
>> + *
>> + * Copyright (C) 2020 ARM Ltd.
>> + *
>> + * Author: Anshuman Khandual <[email protected]>
>> + */
>> +#include <linux/coresight.h>
>> +#include <linux/device.h>
>> +#include <linux/irq.h>
>> +#include <linux/kernel.h>
>> +#include <linux/of.h>
>> +#include <linux/platform_device.h>
>> +#include <linux/smp.h>
>> +
>> +#include "coresight-etm-perf.h"
>> +
>> +DECLARE_PER_CPU(struct coresight_device *, csdev_sink);
>> +
>> +static inline bool is_trbe_available(void)
>> +{
>> + u64 aa64dfr0 = read_sysreg_s(SYS_ID_AA64DFR0_EL1);
>> + int trbe = cpuid_feature_extract_unsigned_field(aa64dfr0, ID_AA64DFR0_TRBE_SHIFT);
>> +
>> + return trbe >= 0b0001;
>> +}
>> +
>> +static inline bool is_trbe_enabled(void)
>> +{
>> + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
>> +
>> + return trblimitr & TRBLIMITR_ENABLE;
>> +}
>> +
>> +#define TRBE_EC_OTHERS 0
>> +#define TRBE_EC_STAGE1_ABORT 36
>> +#define TRBE_EC_STAGE2_ABORT 37
>> +
>> +static inline int get_trbe_ec(void)
>> +{
>> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
>> +
>> + return (trbsr >> TRBSR_EC_SHIFT) & TRBSR_EC_MASK;
>> +}
>> +
>> +#define TRBE_BSC_NOT_STOPPED 0
>> +#define TRBE_BSC_FILLED 1
>> +#define TRBE_BSC_TRIGGERED 2
>> +
>> +static inline int get_trbe_bsc(void)
>> +{
>> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
>> +
>> + return (trbsr >> TRBSR_BSC_SHIFT) & TRBSR_BSC_MASK;
>> +}
>> +
>> +static inline void clr_trbe_irq(void)
>> +{
>> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
>> +
>> + trbsr &= ~TRBSR_IRQ;
>> + write_sysreg_s(trbsr, SYS_TRBSR_EL1);
>> +}
>> +
>> +static inline bool is_trbe_irq(void)
>> +{
>> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
>> +
>> + return trbsr & TRBSR_IRQ;
>> +}
>> +
>> +static inline bool is_trbe_trg(void)
>> +{
>> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
>> +
>> + return trbsr & TRBSR_TRG;
>> +}
>> +
>> +static inline bool is_trbe_wrap(void)
>> +{
>> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
>> +
>> + return trbsr & TRBSR_WRAP;
>> +}
>> +
>> +static inline bool is_trbe_abort(void)
>> +{
>> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
>> +
>> + return trbsr & TRBSR_ABORT;
>> +}
>> +
>> +static inline bool is_trbe_running(void)
>> +{
>> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
>> +
>> + return !(trbsr & TRBSR_STOP);
>> +}
>> +
>> +static inline void set_trbe_running(void)
>> +{
>> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
>> +
>> + trbsr &= ~TRBSR_STOP;
>> + write_sysreg_s(trbsr, SYS_TRBSR_EL1);
>> +}
>> +
>> +static inline void set_trbe_virtual_mode(void)
>> +{
>> + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
>> +
>> + trblimitr &= ~TRBLIMITR_NVM;
>> + write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
>> +}
>> +
>> +#define TRBE_TRIGGER_STOP 0
>> +#define TRBE_TRIGGER_IRQ 1
>> +#define TRBE_TRIGGER_IGNORE 3
>> +
>> +static inline int get_trbe_trig_mode(void)
>> +{
>> + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
>> +
>> + return (trblimitr >> TRBLIMITR_TRIG_MODE_SHIFT) & TRBLIMITR_TRIG_MODE_MASK;
>> +}
>> +
>> +static inline void set_trbe_trig_mode(int mode)
>> +{
>> + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
>> +
>> + trblimitr &= ~(TRBLIMITR_TRIG_MODE_MASK << TRBLIMITR_TRIG_MODE_SHIFT);
>> + trblimitr |= ((mode & TRBLIMITR_TRIG_MODE_MASK) << TRBLIMITR_TRIG_MODE_SHIFT);
>> + write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
>> +}
>> +
>> +#define TRBE_FILL_STOP 0
>> +#define TRBE_FILL_WRAP 1
>> +#define TRBE_FILL_CIRCULAR 3
>> +
>
> ---8>---
>
>> +static inline int get_trbe_fill_mode(void)
>> +{
>> + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
>> +
>> + return (trblimitr >> TRBLIMITR_FILL_MODE_SHIFT) & TRBLIMITR_FILL_MODE_MASK;
>> +}
>> +
>> +static inline void set_trbe_fill_mode(int mode)
>> +{
>> + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
>> +
>> + trblimitr &= ~(TRBLIMITR_FILL_MODE_MASK << TRBLIMITR_FILL_MODE_SHIFT);
>> + trblimitr |= ((mode & TRBLIMITR_FILL_MODE_MASK) << TRBLIMITR_FILL_MODE_SHIFT);
>> + write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
>> +}
>> +
>
> Where do we use these ? I couldn't find any users.
With the TRBE state now being configured directly via the [set|clear]_trbe_state()
functions, these trig/fill mode related helpers no longer have any users.
Will just drop them.
On 1/5/21 9:29 AM, Anshuman Khandual wrote:
>
>
> On 1/4/21 9:58 PM, Suzuki K Poulose wrote:
>>
>> Hi Anshuman,
>>
>> On 12/23/20 10:03 AM, Anshuman Khandual wrote:
>>> Trace Buffer Extension (TRBE) implements a trace buffer per CPU which is
>>> accessible via the system registers. The TRBE supports different addressing
>>> modes including CPU virtual address and buffer modes including the circular
>>> buffer mode. The TRBE buffer is addressed by a base pointer (TRBBASER_EL1),
>>> a write pointer (TRBPTR_EL1) and a limit pointer (TRBLIMITR_EL1). But the
>>> access to the trace buffer could be prohibited by a higher exception level
>>> (EL3 or EL2), indicated by TRBIDR_EL1.P. The TRBE can also generate a CPU
>>> private interrupt (PPI) on address translation errors and when the buffer
>>> is full. The overall implementation here is inspired by the Arm SPE driver.
>>>
>>> Cc: Mathieu Poirier <[email protected]>
>>> Cc: Mike Leach <[email protected]>
>>> Cc: Suzuki K Poulose <[email protected]>
>>> Signed-off-by: Anshuman Khandual <[email protected]>
>>> ---
>>
>>>
>>> Documentation/trace/coresight/coresight-trbe.rst | 39 +
>>> arch/arm64/include/asm/sysreg.h | 2 +
>>> drivers/hwtracing/coresight/Kconfig | 11 +
>>> drivers/hwtracing/coresight/Makefile | 1 +
>>> drivers/hwtracing/coresight/coresight-trbe.c | 925 +++++++++++++++++++++++
>>> drivers/hwtracing/coresight/coresight-trbe.h | 248 ++++++
>>> 6 files changed, 1226 insertions(+)
>>> create mode 100644 Documentation/trace/coresight/coresight-trbe.rst
>>> create mode 100644 drivers/hwtracing/coresight/coresight-trbe.c
>>> create mode 100644 drivers/hwtracing/coresight/coresight-trbe.h
>>>
>>> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
>>> index e6962b1..2a9bfb7 100644
>>> --- a/arch/arm64/include/asm/sysreg.h
>>> +++ b/arch/arm64/include/asm/sysreg.h
>>> @@ -97,6 +97,7 @@
>>> #define SET_PSTATE_UAO(x) __emit_inst(0xd500401f | PSTATE_UAO | ((!!x) << PSTATE_Imm_shift))
>>> #define SET_PSTATE_SSBS(x) __emit_inst(0xd500401f | PSTATE_SSBS | ((!!x) << PSTATE_Imm_shift))
>>> #define SET_PSTATE_TCO(x) __emit_inst(0xd500401f | PSTATE_TCO | ((!!x) << PSTATE_Imm_shift))
>>> +#define TSB_CSYNC __emit_inst(0xd503225f)
>>> #define __SYS_BARRIER_INSN(CRm, op2, Rt) \
>>> __emit_inst(0xd5000000 | sys_insn(0, 3, 3, (CRm), (op2)) | ((Rt) & 0x1f))
>>> @@ -869,6 +870,7 @@
>>> #define ID_AA64MMFR2_CNP_SHIFT 0
>>> /* id_aa64dfr0 */
>>> +#define ID_AA64DFR0_TRBE_SHIFT 44
>>> #define ID_AA64DFR0_TRACE_FILT_SHIFT 40
>>> #define ID_AA64DFR0_DOUBLELOCK_SHIFT 36
>>> #define ID_AA64DFR0_PMSVER_SHIFT 32
>>> diff --git a/drivers/hwtracing/coresight/Makefile b/drivers/hwtracing/coresight/Makefile
>>> index f20e357..d608165 100644
>>> --- a/drivers/hwtracing/coresight/Makefile
>>> +++ b/drivers/hwtracing/coresight/Makefile
>>> @@ -21,5 +21,6 @@ obj-$(CONFIG_CORESIGHT_STM) += coresight-stm.o
>>> obj-$(CONFIG_CORESIGHT_CPU_DEBUG) += coresight-cpu-debug.o
>>> obj-$(CONFIG_CORESIGHT_CATU) += coresight-catu.o
>>> obj-$(CONFIG_CORESIGHT_CTI) += coresight-cti.o
>>> +obj-$(CONFIG_CORESIGHT_TRBE) += coresight-trbe.o
>>> coresight-cti-y := coresight-cti-core.o coresight-cti-platform.o \
>>> coresight-cti-sysfs.o
>>> diff --git a/drivers/hwtracing/coresight/coresight-trbe.c b/drivers/hwtracing/coresight/coresight-trbe.c
>>> new file mode 100644
>>> index 0000000..ba280e6
>>> --- /dev/null
>>> +++ b/drivers/hwtracing/coresight/coresight-trbe.c
>>> +static void trbe_reset_local(void)
>>> +{
>>> + trbe_disable_and_drain_local();
>>> + write_sysreg_s(0, SYS_TRBPTR_EL1);
>>> + write_sysreg_s(0, SYS_TRBBASER_EL1);
>>> + write_sysreg_s(0, SYS_TRBSR_EL1);
>>> + isb();
>>> +}
>>> +
>>> +/*
>>> + * TRBE Buffer Management
>>> + *
>>> + * The TRBE buffer spans from the base pointer till the limit pointer. When enabled,
>>> + * it starts writing trace data from the write pointer onward till the limit pointer.
>>> + * When the write pointer reaches the address just before the limit pointer, it gets
>>> + * wrapped around again to the base pointer. This is called a TRBE wrap event which
>>> + * is accompanied by an IRQ.
>>
>> This is true for one of the modes of operation, the WRAP mode, which could be specified
>> in the comment. e.g,
>>
>> This is called a TRBE wrap event, which generates a maintenance interrupt when operated
>> in WRAP mode.
>
> Sure, will change.
Sorry, correcting myself:
s/when operated in WRAP mode/when operated in WRAP or STOP mode/
...
>>> +
>>> +static unsigned long get_trbe_limit(struct perf_output_handle *handle)
>>
>> nit: The naming is a bit confusing with get_trbe_limit() and get_trbe_limit_pointer().
>> One computes the TRBE buffer limit and the other reads the hardware Limit pointer.
>> It would be good if we follow a scheme for the naming.
>>
>> e.g, trbe_limit_pointer() , trbe_base_pointer(), trbe_<register>_<name> for anything
>> that reads the hardware register.
>
> The current scheme is in the form get_trbe_XXX() where XXX
> is a TRBE hardware component e.g.
>
> get_trbe_base_pointer()
> get_trbe_limit_pointer()
> get_trbe_write_pointer()
> get_trbe_ec()
> get_trbe_bsc()
> get_trbe_address_align()
> get_trbe_flag_update()
>
>>
>> Or may be rename the get_trbe_limit() to compute_trbe_buffer_limit()
>
> This makes it clear, will change.
>
>>
>>> +{
>>> + struct trbe_buf *buf = etm_perf_sink_config(handle);
>>> + unsigned long offset;
>>> +
>>> + if (buf->snapshot)
>>> + offset = trbe_snapshot_offset(handle);
>>> + else
>>> + offset = trbe_normal_offset(handle);
>>> + return buf->trbe_base + offset;
>>> +}
>>> +
>>> +static void clear_trbe_state(void)
>>
>> nit: The name doesn't give much clue about what it is doing, especially, given
>> the following "set_trbe_state()" which does completely different from this "clear"
>> operation.
>
> I agree that these names could have been better.
>
> s/clear_trbe_state/trbe_reset_perf_state - Clears TRBE from current perf config
> s/set_trbe_state/trbe_prepare_perf_state - Prepares TRBE for the next perf config
Please don't tie them to "perf". This is pure hardware configuration, not perf.
Also, I wonder if we need a separate "set_trbe_state". Could we not initialize the LIMITR
at one go ?
i.e, do something like :
set_trbe_limit_pointer(limit, mode) ?
where it sets all the fields of limit pointer. Also, you may want to document the mode we
choose for TRBE. i.e, FILL STOP mode for us to collect the trace.
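Roughly something like this (completely untested, just to illustrate the idea;
it reuses the TRBLIMITR_* field macros from coresight-trbe.h and assumes the
limit address is page aligned, so the low bits of the register start out as
zero):

static void set_trbe_limit_pointer(unsigned long limit, int fill_mode)
{
	/* LIMIT address lives in the upper bits; mode fields are in the low bits */
	u64 trblimitr = limit;

	/* nVM = 0 (virtual addressing), trigger events ignored */
	trblimitr |= (fill_mode & TRBLIMITR_FILL_MODE_MASK) << TRBLIMITR_FILL_MODE_SHIFT;
	trblimitr |= (TRBE_TRIGGER_IGNORE & TRBLIMITR_TRIG_MODE_MASK) << TRBLIMITR_TRIG_MODE_SHIFT;

	write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
}

The enable bit could then either be folded in here as well, or left to
set_trbe_enabled().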
>
>
>>
>> I would rather open code this with a write of 0 to trbsr in the caller.
>>
>>> +{
>>> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
>>> +
>>> + WARN_ON(is_trbe_enabled());
>>> + trbsr &= ~TRBSR_IRQ;
>>> + trbsr &= ~TRBSR_TRG;
>>> + trbsr &= ~TRBSR_WRAP;
>>> + trbsr &= ~(TRBSR_EC_MASK << TRBSR_EC_SHIFT);
>>> + trbsr &= ~(TRBSR_BSC_MASK << TRBSR_BSC_SHIFT);
>>> + trbsr &= ~(TRBSR_FSC_MASK << TRBSR_FSC_SHIFT);
>>
>> BSC and FSC are the same fields under MSS, with their meanings determined by the EC field.
>
> Could just drop the FSC part if required.
>
>>
>> Could we simply write 0 to the register ?
>
> I would really like to avoid that. This function clearly enumerates all
> individual bit fields being cleared for resetting as well as preparing
> the TRBE for the next perf session. Converting this into a 0 write for
> SYS_TRBSR_EL1 sounds excessive and the only thing it would save is the
> register read.
>
>>
>>> + write_sysreg_s(trbsr, SYS_TRBSR_EL1);
>>> +}
>>> +
>>> +static void set_trbe_state(void)
>>> +{
>>> + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
>>> +
>>> + trblimitr &= ~TRBLIMITR_NVM;
>>> + trblimitr &= ~(TRBLIMITR_FILL_MODE_MASK << TRBLIMITR_FILL_MODE_SHIFT);
>>> + trblimitr &= ~(TRBLIMITR_TRIG_MODE_MASK << TRBLIMITR_TRIG_MODE_SHIFT);
>>> + trblimitr |= (TRBE_FILL_STOP & TRBLIMITR_FILL_MODE_MASK) << TRBLIMITR_FILL_MODE_SHIFT;
>>> + trblimitr |= (TRBE_TRIGGER_IGNORE & TRBLIMITR_TRIG_MODE_MASK) << TRBLIMITR_TRIG_MODE_SHIFT;
>>> + write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
>>
>> Do we need to read-copy-update here ? Could we simply write 0 ?
>> Same as above comment, could we not simply opencode it at the caller ?
>> Clearly the names don't help.
>
> Will change the names as proposed or something better. But let's leave
> these functions as is. Besides TRBE_TRIGGER_IGNORE also has a positive
> value (i.e 3), writing all 0s into SYS_TRBLIMITR_EL1 will not be ideal.
>
The point is, we don't need to preserve the values for LIMITR. Also see my comment
above about folding this into set_trbe_limit_pointer(). In any case, I don't think
we should rely on the values of the fields we change. So it is safer and cleaner to
set all the bits for LIMITR, including the LIMIT address, in one go, without
read-copy-update.
>>
>>> +}
>>> +
>>> +static void trbe_enable_hw(struct trbe_buf *buf)
>>> +{
>>> + WARN_ON(buf->trbe_write < buf->trbe_base);
>>> + WARN_ON(buf->trbe_write >= buf->trbe_limit);
>>> + set_trbe_disabled();
>>> + clear_trbe_state();
>>> + set_trbe_state();
>>> + isb();
>>> + set_trbe_base_pointer(buf->trbe_base);
>>> + set_trbe_limit_pointer(buf->trbe_limit);
>>> + set_trbe_write_pointer(buf->trbe_write);
>>
>> Where do we set the fill mode ?
>
> TRBE_FILL_STOP has already been configured in set_trbe_state().
>
As mentioned above, this needs to be documented. It is not evident
to someone looking at the code. e.g, I thought the set_trbe_state()
was simply stopping the TRBE.
Also, looking at the spec, I find the names of the fill modes confusing.
The modes are FILL, WRAP and CIRCULAR BUFFER. Stop is just the behavior
of FILL. So, please do not use STOP for the mode name.
Also, please rename the mode symbols to :
TRBE_FILL_MODE_FILL
TRBE_FILL_MODE_WRAP
TRBE_FILL_MODE_CIRCULAR_BUFFER
to align with the spec.
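i.e, something like the below (untested; same encodings as the current
TRBE_FILL_* values in coresight-trbe.h, only the names change):

#define TRBE_FILL_MODE_FILL			0
#define TRBE_FILL_MODE_WRAP			1
#define TRBE_FILL_MODE_CIRCULAR_BUFFER		3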
>>
>>> + isb();
>>> + set_trbe_running();
>>> + set_trbe_enabled();
>>> + set_trbe_flush();
>>> +}
>>> +
>>> +
>>> +static int arm_trbe_cpu_startup(unsigned int cpu, struct hlist_node *node)
>>> +{
>>> + struct trbe_drvdata *drvdata = hlist_entry_safe(node, struct trbe_drvdata, hotplug_node);
>>> +
>>> + if (cpumask_test_cpu(cpu, &drvdata->supported_cpus)) {
>>> + if (!per_cpu(csdev_sink, cpu) && (system_state == SYSTEM_RUNNING)) {
>>
>> Why is the system_state check relevant here ?
>
> I had a concern regarding whether arm_trbe_probe_coresight_cpu() invocations
> from arm_trbe_cpu_startup() might race with its invocations during boot from
> arm_trbe_device_probe(). Checking for runtime system_state would ensure that
> a complete TRBE probe on a given cpu is called only after the boot is complete.
> But if the race condition is really never possible, can just drop this check.
I don't think they should.
Suzuki
On 1/5/21 5:07 PM, Suzuki K Poulose wrote:
> On 1/5/21 9:29 AM, Anshuman Khandual wrote:
>>
>>
>> On 1/4/21 9:58 PM, Suzuki K Poulose wrote:
>>>
>>> Hi Anshuman,
>>>
>>> On 12/23/20 10:03 AM, Anshuman Khandual wrote:
>>>> Trace Buffer Extension (TRBE) implements a trace buffer per CPU which is
>>>> accessible via the system registers. The TRBE supports different addressing
>>>> modes including CPU virtual address and buffer modes including the circular
>>>> buffer mode. The TRBE buffer is addressed by a base pointer (TRBBASER_EL1),
>>>> a write pointer (TRBPTR_EL1) and a limit pointer (TRBLIMITR_EL1). But the
>>>> access to the trace buffer could be prohibited by a higher exception level
>>>> (EL3 or EL2), indicated by TRBIDR_EL1.P. The TRBE can also generate a CPU
>>>> private interrupt (PPI) on address translation errors and when the buffer
>>>> is full. The overall implementation here is inspired by the Arm SPE driver.
>>>>
>>>> Cc: Mathieu Poirier <[email protected]>
>>>> Cc: Mike Leach <[email protected]>
>>>> Cc: Suzuki K Poulose <[email protected]>
>>>> Signed-off-by: Anshuman Khandual <[email protected]>
>>>> ---
>>>
>>>>
>>>> Documentation/trace/coresight/coresight-trbe.rst | 39 +
>>>> arch/arm64/include/asm/sysreg.h | 2 +
>>>> drivers/hwtracing/coresight/Kconfig | 11 +
>>>> drivers/hwtracing/coresight/Makefile | 1 +
>>>> drivers/hwtracing/coresight/coresight-trbe.c | 925 +++++++++++++++++++++++
>>>> drivers/hwtracing/coresight/coresight-trbe.h | 248 ++++++
>>>> 6 files changed, 1226 insertions(+)
>>>> create mode 100644 Documentation/trace/coresight/coresight-trbe.rst
>>>> create mode 100644 drivers/hwtracing/coresight/coresight-trbe.c
>>>> create mode 100644 drivers/hwtracing/coresight/coresight-trbe.h
>>>>
>
>>>> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
>>>> index e6962b1..2a9bfb7 100644
>>>> --- a/arch/arm64/include/asm/sysreg.h
>>>> +++ b/arch/arm64/include/asm/sysreg.h
>>>> @@ -97,6 +97,7 @@
>>>> #define SET_PSTATE_UAO(x) __emit_inst(0xd500401f | PSTATE_UAO | ((!!x) << PSTATE_Imm_shift))
>>>> #define SET_PSTATE_SSBS(x) __emit_inst(0xd500401f | PSTATE_SSBS | ((!!x) << PSTATE_Imm_shift))
>>>> #define SET_PSTATE_TCO(x) __emit_inst(0xd500401f | PSTATE_TCO | ((!!x) << PSTATE_Imm_shift))
>>>> +#define TSB_CSYNC __emit_inst(0xd503225f)
>>>> #define __SYS_BARRIER_INSN(CRm, op2, Rt) \
>>>> __emit_inst(0xd5000000 | sys_insn(0, 3, 3, (CRm), (op2)) | ((Rt) & 0x1f))
>>>> @@ -869,6 +870,7 @@
>>>> #define ID_AA64MMFR2_CNP_SHIFT 0
>>>> /* id_aa64dfr0 */
>>>> +#define ID_AA64DFR0_TRBE_SHIFT 44
>>>> #define ID_AA64DFR0_TRACE_FILT_SHIFT 40
>>>> #define ID_AA64DFR0_DOUBLELOCK_SHIFT 36
>>>> #define ID_AA64DFR0_PMSVER_SHIFT 32
>
>
>>>> diff --git a/drivers/hwtracing/coresight/Makefile b/drivers/hwtracing/coresight/Makefile
>>>> index f20e357..d608165 100644
>>>> --- a/drivers/hwtracing/coresight/Makefile
>>>> +++ b/drivers/hwtracing/coresight/Makefile
>>>> @@ -21,5 +21,6 @@ obj-$(CONFIG_CORESIGHT_STM) += coresight-stm.o
>>>> obj-$(CONFIG_CORESIGHT_CPU_DEBUG) += coresight-cpu-debug.o
>>>> obj-$(CONFIG_CORESIGHT_CATU) += coresight-catu.o
>>>> obj-$(CONFIG_CORESIGHT_CTI) += coresight-cti.o
>>>> +obj-$(CONFIG_CORESIGHT_TRBE) += coresight-trbe.o
>>>> coresight-cti-y := coresight-cti-core.o coresight-cti-platform.o \
>>>> coresight-cti-sysfs.o
>>>> diff --git a/drivers/hwtracing/coresight/coresight-trbe.c b/drivers/hwtracing/coresight/coresight-trbe.c
>>>> new file mode 100644
>>>> index 0000000..ba280e6
>>>> --- /dev/null
>>>> +++ b/drivers/hwtracing/coresight/coresight-trbe.c
>
>>>> +static void trbe_reset_local(void)
>>>> +{
>>>> + trbe_disable_and_drain_local();
>>>> + write_sysreg_s(0, SYS_TRBPTR_EL1);
>>>> + write_sysreg_s(0, SYS_TRBBASER_EL1);
>>>> + write_sysreg_s(0, SYS_TRBSR_EL1);
>>>> + isb();
>>>> +}
>>>> +
>>>> +/*
>>>> + * TRBE Buffer Management
>>>> + *
>>>> + * The TRBE buffer spans from the base pointer till the limit pointer. When enabled,
>>>> + * it starts writing trace data from the write pointer onward till the limit pointer.
>>>> + * When the write pointer reaches the address just before the limit pointer, it gets
>>>> + * wrapped around again to the base pointer. This is called a TRBE wrap event which
>>>> + * is accompanied by an IRQ.
>>>
>>> This is true for one of the modes of operation, the WRAP mode, which could be specified
>>> in the comment. e.g,
>>>
>>> This is called a TRBE wrap event, which generates a maintenance interrupt when operated
>>> in WRAP mode.
>>
>> Sure, will change.
>
> Sorry, correcting myself:
>
> s/when operated in WRAP mode/when operated in WRAP or STOP mode/
Sure, will change.
>
> ...
>
>>>> +
>>>> +static unsigned long get_trbe_limit(struct perf_output_handle *handle)
>>>
>>> nit: The naming is a bit confusing with get_trbe_limit() and get_trbe_limit_pointer().
>>> One computes the TRBE buffer limit and the other reads the hardware Limit pointer.
>>> It would be good if we follow a scheme for the naming.
>>>
>>> e.g, trbe_limit_pointer() , trbe_base_pointer(), trbe_<register>_<name> for anything
>>> that reads the hardware register.
>>
>> The current scheme is in the form get_trbe_XXX() where XXX
>> is a TRBE hardware component e.g.
>>
>> get_trbe_base_pointer()
>> get_trbe_limit_pointer()
>> get_trbe_write_pointer()
>> get_trbe_ec()
>> get_trbe_bsc()
>> get_trbe_address_align()
>> get_trbe_flag_update()
>>
>>>
>>> Or may be rename the get_trbe_limit() to compute_trbe_buffer_limit()
>>
>> This makes it clear, will change.
>>
>>>
>>>> +{
>>>> + struct trbe_buf *buf = etm_perf_sink_config(handle);
>>>> + unsigned long offset;
>>>> +
>>>> + if (buf->snapshot)
>>>> + offset = trbe_snapshot_offset(handle);
>>>> + else
>>>> + offset = trbe_normal_offset(handle);
>>>> + return buf->trbe_base + offset;
>>>> +}
>>>> +
>>>> +static void clear_trbe_state(void)
>>>
>>> nit: The name doesn't give much clue about what it is doing, especially, given
>>> the following "set_trbe_state()" which does completely different from this "clear"
>>> operation.
>>
>> I agree that these names could have been better.
>>
>> s/clear_trbe_state/trbe_reset_perf_state - Clears TRBE from current perf config
>> s/set_trbe_state/trbe_prepare_perf_state - Prepares TRBE for the next perf config
>
> Please don't tie them to "perf". This is pure hardware configuration, not perf.
Okay.
>
> Also, I wonder if we need a separate "set_trbe_state". Could we not initialize the LIMITR
> at one go ?
There are some limitations which could prevent that.
>
> i.e, do something like :
>
> set_trbe_limit_pointer(limit, mode) ?
>
> where it sets all the fields of limit pointer. Also, you may want to document the mode we
> choose for TRBE. i.e, FILL STOP mode for us to collect the trace.
Sure, will document the TRBE mode being chosen here.
>
>>
>>
>>>
>>> I would rather open code this with a write of 0 to trbsr in the caller.
>>>
>>>> +{
>>>> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
>>>> +
>>>> + WARN_ON(is_trbe_enabled());
>>>> + trbsr &= ~TRBSR_IRQ;
>>>> + trbsr &= ~TRBSR_TRG;
>>>> + trbsr &= ~TRBSR_WRAP;
>>>> + trbsr &= ~(TRBSR_EC_MASK << TRBSR_EC_SHIFT);
>>>> + trbsr &= ~(TRBSR_BSC_MASK << TRBSR_BSC_SHIFT);
>>>> + trbsr &= ~(TRBSR_FSC_MASK << TRBSR_FSC_SHIFT);
>>>
>>> BSC and FSC are the same fields under MSS, with their meanings determined by the EC field.
>>
>> Could just drop the FSC part if required.
>>
>>>
>>> Could we simply write 0 to the register ?
>>
>> I would really like to avoid that. This function clearly enumerates all
>> individual bit fields being cleared for resetting as well as preparing
>> the TRBE for the next perf session. Converting this into a 0 write for
>> SYS_TRBSR_EL1 sounds excessive and the only thing it would save is the
>> register read.
>
>>
>>>
>>>> + write_sysreg_s(trbsr, SYS_TRBSR_EL1);
>>>> +}
>>>> +
>>>> +static void set_trbe_state(void)
>>>> +{
>>>> + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
>>>> +
>>>> + trblimitr &= ~TRBLIMITR_NVM;
>>>> + trblimitr &= ~(TRBLIMITR_FILL_MODE_MASK << TRBLIMITR_FILL_MODE_SHIFT);
>>>> + trblimitr &= ~(TRBLIMITR_TRIG_MODE_MASK << TRBLIMITR_TRIG_MODE_SHIFT);
>>>> + trblimitr |= (TRBE_FILL_STOP & TRBLIMITR_FILL_MODE_MASK) << TRBLIMITR_FILL_MODE_SHIFT;
>>>> + trblimitr |= (TRBE_TRIGGER_IGNORE & TRBLIMITR_TRIG_MODE_MASK) << TRBLIMITR_TRIG_MODE_SHIFT;
>>>> + write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
>>>
>>> Do we need to read-copy-update here ? Could we simply write 0 ?
>>> Same as above comment, could we not simply opencode it at the caller ?
>>> Clearly the names don't help.
>>
>> Will change the names as proposed or something better. But let's leave
>> these functions as is. Besides TRBE_TRIGGER_IGNORE also has a positive
>> value (i.e 3), writing all 0s into SYS_TRBLIMITR_EL1 will not be ideal.
>>
>
> The point is, we don't need to preserve the values for LIMITR. Also see my comment
> above about folding this into set_trbe_limit_pointer(). In any case, I don't think
> we should rely on the values of the fields we change. So it is safer and cleaner to
> set all the bits for LIMITR, including the LIMIT address, in one go, without
> read-copy-update.
TRBE needs to be disabled (the enable bit also lives in the LIMIT register) before we
can update any other fields in the LIMIT register. So there is already an ordering
dependency here.
Looking at the function trbe_enable_hw(), it follows something like
1. Clear and set the TRBE mode - followed by an isb()
2. Update the TRBE pointers - followed by an isb()
3. Set it rolling - followed by TSB_CSYNC
static void trbe_enable_hw(struct trbe_buf *buf)
{
[Software checks]
WARN_ON(buf->trbe_write < buf->trbe_base);
WARN_ON(buf->trbe_write >= buf->trbe_limit);
[Disable TRBE in the limit register]
set_trbe_disabled();
[Clears TRBE status register]
trbe_reset_perf_state();
[Configures TRBE mode in the limit register]
trbe_prepare_perf_state();
isb();
[Update all required pointers]
set_trbe_base_pointer(buf->trbe_base);
set_trbe_limit_pointer(buf->trbe_limit);
set_trbe_write_pointer(buf->trbe_write);
isb();
[Set it rolling]
[Update TRBE status register stop bit]
set_trbe_running();
[Update TRBE limit register enable bit]
set_trbe_enabled();
set_trbe_flush();
}
set_trbe_disabled() should be called before trbe_reset_perf_state() as TRBE
needs to be stopped completely before clearing the TRBE status register.
Hence set_trbe_disabled() cannot be moved inside trbe_prepare_perf_state().
set_trbe_enabled() also cannot be called before configuring the TRBE mode
and updating all other required pointers. So set_trbe_enabled() cannot be
moved inside trbe_prepare_perf_state().
The TRBE limit register therefore needs to be written in separate steps, even
though the unchanged fields need not be preserved.
Besides, the function names could be changed as follows, with the mode
selection documented.
s/trbe_reset_perf_state/clr_trbe_status/
s/trbe_prepare_perf_state/set_trbe_mode/
set_trbe_mode() might also take the fill mode and trigger mode as arguments.
The trigger mode needs to be set correctly, i.e. TRBE_TRIGGER_IGNORE.
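Something like the following (untested sketch, reusing the existing field macros;
the FSC clearing is dropped here as you pointed out BSC/FSC share the same MSS bits):

static void clr_trbe_status(void)
{
	u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);

	/* TRBE must already be disabled before the status register is cleared */
	WARN_ON(is_trbe_enabled());
	trbsr &= ~TRBSR_IRQ;
	trbsr &= ~TRBSR_TRG;
	trbsr &= ~TRBSR_WRAP;
	trbsr &= ~(TRBSR_EC_MASK << TRBSR_EC_SHIFT);
	trbsr &= ~(TRBSR_BSC_MASK << TRBSR_BSC_SHIFT);
	write_sysreg_s(trbsr, SYS_TRBSR_EL1);
}

static void set_trbe_mode(int fill_mode, int trig_mode)
{
	u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);

	/* Virtual addressing, with the requested fill and trigger modes */
	trblimitr &= ~TRBLIMITR_NVM;
	trblimitr &= ~(TRBLIMITR_FILL_MODE_MASK << TRBLIMITR_FILL_MODE_SHIFT);
	trblimitr &= ~(TRBLIMITR_TRIG_MODE_MASK << TRBLIMITR_TRIG_MODE_SHIFT);
	trblimitr |= (fill_mode & TRBLIMITR_FILL_MODE_MASK) << TRBLIMITR_FILL_MODE_SHIFT;
	trblimitr |= (trig_mode & TRBLIMITR_TRIG_MODE_MASK) << TRBLIMITR_TRIG_MODE_SHIFT;
	write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
}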
>
>
>>>
>>>> +}
>>>> +
>>>> +static void trbe_enable_hw(struct trbe_buf *buf)
>>>> +{
>>>> + WARN_ON(buf->trbe_write < buf->trbe_base);
>>>> + WARN_ON(buf->trbe_write >= buf->trbe_limit);
>>>> + set_trbe_disabled();
>>>> + clear_trbe_state();
>>>> + set_trbe_state();
>>>> + isb();
>>>> + set_trbe_base_pointer(buf->trbe_base);
>>>> + set_trbe_limit_pointer(buf->trbe_limit);
>>>> + set_trbe_write_pointer(buf->trbe_write);
>>>
>>> Where do we set the fill mode ?
>>
>> TRBE_FILL_STOP has already been configured in set_trbe_state().
>>
>
> As mentioned above, this needs to be documented. It is not evident
> to someone looking at the code. e.g, I thought the set_trbe_state()
> was simply stopping the TRBE.
>
> Also, looking at the spec, I find the names of the fill modes confusing.
> The modes are FILL, WRAP and CIRCULAR BUFFER. Stop is just the behavior
> of FILL. So, please do not use STOP for the mode name.
>
> Also, please rename the mode symbols to :
>
> TRBE_FILL_MODE_FILL
> TRBE_FILL_MODE_WRAP
> TRBE_FILL_MODE_CIRCULAR_BUFFER
>
> to align with the spec.
Okay, will change these and possibly the trigger modes as well.
>
>>>
>>>> + isb();
>>>> + set_trbe_running();
>>>> + set_trbe_enabled();
>>>> + set_trbe_flush();
>>>> +}
>>>> +
>
>
>>>> +
>>>> +static int arm_trbe_cpu_startup(unsigned int cpu, struct hlist_node *node)
>>>> +{
>>>> + struct trbe_drvdata *drvdata = hlist_entry_safe(node, struct trbe_drvdata, hotplug_node);
>>>> +
>>>> + if (cpumask_test_cpu(cpu, &drvdata->supported_cpus)) {
>>>> + if (!per_cpu(csdev_sink, cpu) && (system_state == SYSTEM_RUNNING)) {
>>>
>>> Why is the system_state check relevant here ?
>>
>> I had a concern regarding whether arm_trbe_probe_coresight_cpu() invocations
>> from arm_trbe_cpu_startup() might race with its invocations during boot from
>> arm_trbe_device_probe(). Checking for runtime system_state would ensure that
>> a complete TRBE probe on a given cpu is called only after the boot is complete.
>> But if the race condition is really never possible, can just drop this check.
>
> I don't think they should.
>
> Suzuki
>
On 1/6/21 11:50 AM, Anshuman Khandual wrote:
>
>
> On 1/5/21 5:07 PM, Suzuki K Poulose wrote:
>> On 1/5/21 9:29 AM, Anshuman Khandual wrote:
...
>>>>> +{
>>>>> + struct trbe_buf *buf = etm_perf_sink_config(handle);
>>>>> + unsigned long offset;
>>>>> +
>>>>> + if (buf->snapshot)
>>>>> + offset = trbe_snapshot_offset(handle);
>>>>> + else
>>>>> + offset = trbe_normal_offset(handle);
>>>>> + return buf->trbe_base + offset;
>>>>> +}
>>>>> +
>>>>> +static void clear_trbe_state(void)
>>>>
>>>> nit: The name doesn't give much clue about what it is doing, especially, given
>>>> the following "set_trbe_state()" which does completely different from this "clear"
>>>> operation.
>>>
>>> I agree that these names could have been better.
>>>
>>> s/clear_trbe_state/trbe_reset_perf_state - Clears TRBE from current perf config
>>> s/set_trbe_state/trbe_prepare_perf_state - Prepares TRBE for the next perf config
>>
>> Please don't tie them to "perf". This is pure hardware configuration, not perf.
>
> Okay.
>
>>
>> Also, I wonder if we need a separate "set_trbe_state". Could we not initialize the LIMITR
>> at one go ?
>
> There are some limitations which could prevent that.
>
>>
>> i.e, do something like :
>>
>> set_trbe_limit_pointer(limit, mode) ?
>>
>> where it sets all the fields of limit pointer. Also, you may want to document the mode we
>> choose for TRBE. i.e, FILL STOP mode for us to collect the trace.
>
> Sure, will document the TRBE mode being chosen here.
>
>>
>>>
>>>
>>>>
>>>> I would rather open code this with a write of 0 to trbsr in the caller.
>>>>
>>>>> +{
>>>>> + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
>>>>> +
>>>>> + WARN_ON(is_trbe_enabled());
>>>>> + trbsr &= ~TRBSR_IRQ;
>>>>> + trbsr &= ~TRBSR_TRG;
>>>>> + trbsr &= ~TRBSR_WRAP;
>>>>> + trbsr &= ~(TRBSR_EC_MASK << TRBSR_EC_SHIFT);
>>>>> + trbsr &= ~(TRBSR_BSC_MASK << TRBSR_BSC_SHIFT);
>>>>> + trbsr &= ~(TRBSR_FSC_MASK << TRBSR_FSC_SHIFT);
>>>>
>>>> BSC and FSC are the same fields under MSS, with their meanings determined by the EC field.
>>>
>>> Could just drop the FSC part if required.
>>>
>>>>
>>>> Could we simply write 0 to the register ?
>>>
>>> I would really like to avoid that. This function clearly enumerates all
>>> individual bit fields being cleared for resetting as well as preparing
>>> the TRBE for the next perf session. Converting this into a 0 write for
>>> SYS_TRBSR_EL1 sounds excessive and the only thing it would save is the
>>> register read.
>>
>>>
>>>>
>>>>> + write_sysreg_s(trbsr, SYS_TRBSR_EL1);
>>>>> +}
>>>>> +
>>>>> +static void set_trbe_state(void)
>>>>> +{
>>>>> + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
>>>>> +
>>>>> + trblimitr &= ~TRBLIMITR_NVM;
>>>>> + trblimitr &= ~(TRBLIMITR_FILL_MODE_MASK << TRBLIMITR_FILL_MODE_SHIFT);
>>>>> + trblimitr &= ~(TRBLIMITR_TRIG_MODE_MASK << TRBLIMITR_TRIG_MODE_SHIFT);
>>>>> + trblimitr |= (TRBE_FILL_STOP & TRBLIMITR_FILL_MODE_MASK) << TRBLIMITR_FILL_MODE_SHIFT;
>>>>> + trblimitr |= (TRBE_TRIGGER_IGNORE & TRBLIMITR_TRIG_MODE_MASK) << TRBLIMITR_TRIG_MODE_SHIFT;
>>>>> + write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
>>>>
>>>> Do we need to read-copy-update here ? Could we simply write 0 ?
>>>> Same as above comment, could we not simply opencode it at the caller ?
>>>> Clearly the names don't help.
>>>
>>> Will change the names as proposed or something better. But let's leave
>>> these functions as is. Besides TRBE_TRIGGER_IGNORE also has a positive
>>> value (i.e 3), writing all 0s into SYS_TRBLIMITR_EL1 will not be ideal.
>>>
>>
>> The point is, we don't need to preserve the values for LIMITR. Also see my comment
>> above about folding this into set_trbe_limit_pointer(). In any case, I don't think
>> we should rely on the values of the fields we change. So it is safer and cleaner to
>> set all the bits for LIMITR, including the LIMIT address, in one go, without
>> read-copy-update.
>
> TRBE needs to be disabled (the enable bit also lives in the LIMIT register) before we
> can update any other fields in the LIMIT register. So there is already an ordering
> dependency here.
> Looking at the function trbe_enable_hw(), it follows something like
>
> 1. Clear and set the TRBE mode - followed by an isb()
> 2. Update the TRBE pointers - followed by an isb()
> 3. Set it rolling - followed by TSB_CSYNC
>
> static void trbe_enable_hw(struct trbe_buf *buf)
> {
>
> [Software checks]
> WARN_ON(buf->trbe_write < buf->trbe_base);
> WARN_ON(buf->trbe_write >= buf->trbe_limit);
>
> [Disable TRBE in the limit register]
> set_trbe_disabled();
>
We need an isb() here.
> [Clears TRBE status register]
> trbe_reset_perf_state();
Please be explicit here. Make the function name reflect the fact that
we are simply clearing the status register and nothing related to perf.
>
> [Configures TRBE mode in the limit register]
> trbe_prepare_perf_state();
This is unnecessarily introducing a dependency that is not enforced by the HW.
You could program the LIMIT register with all the settings (mode, limit
and *enable TRBE*) in one shot, once the base and write pointers have been
programmed.
>
> isb();
Drop the ISB
>
> [Update all required pointers]
> set_trbe_base_pointer(buf->trbe_base);
> set_trbe_limit_pointer(buf->trbe_limit);
As mentioned above, this could be done in set_trbe_enabled()
> set_trbe_write_pointer(buf->trbe_write);
> isb();
>
> [Set it rolling]
>
> [Update TRBE status register stop bit]
> set_trbe_running();
This doesn't have any significance to the hardware. It is a status bit
reported by the HW, which is writable only for "state" save/restore when
switching between contexts. Otherwise, this write doesn't do anything.
So, please combine this with the clear_status operation above.
>
> [Update TRBE limit register enable bit]
> set_trbe_enabled();
Here we could set all the fields of the LIMIT register, followed by
an isb().
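i.e, roughly (untested sketch; assumes the limit address is page aligned and
borrows the TRBE_FILL_MODE_FILL name from the renaming suggested earlier):

static void set_trbe_enabled(unsigned long limit)
{
	/* Program mode, limit and the enable bit with a single LIMITR write */
	u64 trblimitr = limit;	/* low bits are zero, so nVM = 0 */

	trblimitr |= (TRBE_FILL_MODE_FILL & TRBLIMITR_FILL_MODE_MASK) << TRBLIMITR_FILL_MODE_SHIFT;
	trblimitr |= (TRBE_TRIGGER_IGNORE & TRBLIMITR_TRIG_MODE_MASK) << TRBLIMITR_TRIG_MODE_SHIFT;
	trblimitr |= TRBLIMITR_ENABLE;
	write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
	isb();
}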
Kind regards
Suzuki
On 1/4/21 3:44 AM, Anshuman Khandual wrote:
>
> On 1/3/21 10:35 PM, Rob Herring wrote:
>> On Wed, Dec 23, 2020 at 03:33:43PM +0530, Anshuman Khandual wrote:
>>> This patch documents the device tree binding in use for Arm TRBE.
>>>
>>> Cc: [email protected]
>>> Cc: Mathieu Poirier <[email protected]>
>>> Cc: Mike Leach <[email protected]>
>>> Cc: Suzuki K Poulose <[email protected]>
>>> Signed-off-by: Anshuman Khandual <[email protected]>
>>> ---
>>> Changes in V1:
>>>
>>> - TRBE DT entry has been renamed as 'arm, trace-buffer-extension'
>>>
>>> Documentation/devicetree/bindings/arm/trbe.txt | 20 ++++++++++++++++++++
>>> 1 file changed, 20 insertions(+)
>>> create mode 100644 Documentation/devicetree/bindings/arm/trbe.txt
>>>
>>> diff --git a/Documentation/devicetree/bindings/arm/trbe.txt b/Documentation/devicetree/bindings/arm/trbe.txt
>>> new file mode 100644
>>> index 0000000..001945d
>>> --- /dev/null
>>> +++ b/Documentation/devicetree/bindings/arm/trbe.txt
>>> @@ -0,0 +1,20 @@
>>> +* Trace Buffer Extension (TRBE)
>>> +
>>> +Trace Buffer Extension (TRBE) is used for collecting trace data generated
>>> +from a corresponding trace unit (ETE) using an in memory trace buffer.
>>> +
>>> +** TRBE Required properties:
>>> +
>>> +- compatible : should be one of:
>>> + "arm,trace-buffer-extension"
>>> +
>>> +- interrupts : Exactly 1 PPI must be listed. For heterogeneous systems where
>>> + TRBE is only supported on a subset of the CPUs, please consult
>>> + the arm,gic-v3 binding for details on describing a PPI partition.
>>> +
>>> +** Example:
>>> +
>>> +trbe {
>>> + compatible = "arm,trace-buffer-extension";
>>> + interrupts = <GIC_PPI 15 IRQ_TYPE_LEVEL_HIGH>;
>>
>> If only an interrupt, then could just be part of ETE? If not, how is
>> this hardware block accessed? An interrupt alone is not enough unless
>> there's some architected way to access.
>
> The TRBE hardware block is accessed via its own new system registers, but the
> PPI number on which the IRQ is triggered for various buffer events depends
> on the platform, as defined in the SBSA.
That is correct. TRBE is accessed via CPU system registers. The IRQ is specifically
for the TRBE unit to handle buffer overflow situations and other errors in the
buffer handling. Please include this information in the description section of
the bindings.
Also, it may be worth switching this to the YAML format.
Suzuki