Add driver support for the ARM CoreSight PMU device, with event attributes for the
NVIDIA implementation. The code is based on the ARM CoreSight PMU architecture and
the ACPI ARM Performance Monitoring Unit table (APMT) specifications below:
* ARM CoreSight PMU:
https://developer.arm.com/documentation/ihi0091/latest
* APMT: https://developer.arm.com/documentation/den0117/latest
Notes:
* There is a concern about the naming of the PMU device.
Currently the driver probes the "arm-coresight-pmu" device; however, the APMT
spec supports different kinds of CoreSight PMU based implementations, so it is
open for discussion whether the name can stay or a "generic" name is required.
Please see the following thread:
http://lists.infradead.org/pipermail/linux-arm-kernel/2022-May/740485.html
The patchset applies on top of
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
master next-20220524
Changes from v2:
* The driver now probes the "arm-system-pmu" device.
* Change default PMU naming to "arm_<APMT node type>_pmu".
* Add implementor ops to generate custom name.
Thanks to [email protected] for the review comments.
v2: https://lore.kernel.org/all/[email protected]/
Changes from v1:
* Remove CPU arch dependency.
* Remove the 32-bit read/write helper functions and just use readl/writel.
* Add .is_visible to the event attributes to filter out the cycle counter event.
* Update pmiidr matching.
* Remove read-modify-write on PMCR since the driver only writes to PMCR.E.
* Assign default cycle event outside the 32-bit PMEVTYPER range.
* Rework the active event and used counter tracking.
Thanks to [email protected] for the review comments.
v1: https://lore.kernel.org/all/[email protected]/
Besar Wicaksono (2):
perf: coresight_pmu: Add support for ARM CoreSight PMU driver
perf: coresight_pmu: Add support for NVIDIA SCF and MCF attribute
arch/arm64/configs/defconfig | 1 +
drivers/perf/Kconfig | 2 +
drivers/perf/Makefile | 1 +
drivers/perf/coresight_pmu/Kconfig | 11 +
drivers/perf/coresight_pmu/Makefile | 7 +
.../perf/coresight_pmu/arm_coresight_pmu.c | 1316 +++++++++++++++++
.../perf/coresight_pmu/arm_coresight_pmu.h | 177 +++
.../coresight_pmu/arm_coresight_pmu_nvidia.c | 312 ++++
.../coresight_pmu/arm_coresight_pmu_nvidia.h | 17 +
9 files changed, 1844 insertions(+)
create mode 100644 drivers/perf/coresight_pmu/Kconfig
create mode 100644 drivers/perf/coresight_pmu/Makefile
create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu.c
create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu.h
create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.c
create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.h
base-commit: 09ce5091ff971cdbfd67ad84dc561ea27f10d67a
--
2.17.1
Add support for NVIDIA System Cache Fabric (SCF) and Memory Control
Fabric (MCF) PMU attributes for the CoreSight PMU implementation in
NVIDIA devices.
Signed-off-by: Besar Wicaksono <[email protected]>
---
drivers/perf/coresight_pmu/Makefile | 3 +-
.../perf/coresight_pmu/arm_coresight_pmu.c | 4 +
.../coresight_pmu/arm_coresight_pmu_nvidia.c | 312 ++++++++++++++++++
.../coresight_pmu/arm_coresight_pmu_nvidia.h | 17 +
4 files changed, 335 insertions(+), 1 deletion(-)
create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.c
create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.h
diff --git a/drivers/perf/coresight_pmu/Makefile b/drivers/perf/coresight_pmu/Makefile
index a2a7a5fbbc16..181b1b0dbaa1 100644
--- a/drivers/perf/coresight_pmu/Makefile
+++ b/drivers/perf/coresight_pmu/Makefile
@@ -3,4 +3,5 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_ARM_CORESIGHT_PMU) += \
- arm_coresight_pmu.o
+ arm_coresight_pmu.o \
+ arm_coresight_pmu_nvidia.o
diff --git a/drivers/perf/coresight_pmu/arm_coresight_pmu.c b/drivers/perf/coresight_pmu/arm_coresight_pmu.c
index ba52cc592b2d..12179d029bfd 100644
--- a/drivers/perf/coresight_pmu/arm_coresight_pmu.c
+++ b/drivers/perf/coresight_pmu/arm_coresight_pmu.c
@@ -42,6 +42,7 @@
#include <acpi/processor.h>
#include "arm_coresight_pmu.h"
+#include "arm_coresight_pmu_nvidia.h"
#define PMUNAME "arm_system_pmu"
@@ -396,6 +397,9 @@ struct impl_match {
};
static const struct impl_match impl_match[] = {
+ { .pmiidr = 0x36B,
+ .mask = PMIIDR_IMPLEMENTER_MASK,
+ .impl_init_ops = nv_coresight_init_ops },
{}
};
diff --git a/drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.c b/drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.c
new file mode 100644
index 000000000000..54f4eae4c529
--- /dev/null
+++ b/drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.c
@@ -0,0 +1,312 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
+ *
+ */
+
+/* Support for NVIDIA specific attributes. */
+
+#include "arm_coresight_pmu_nvidia.h"
+
+#define NV_MCF_PCIE_PORT_COUNT 10ULL
+#define NV_MCF_PCIE_FILTER_ID_MASK ((1ULL << NV_MCF_PCIE_PORT_COUNT) - 1)
+
+#define NV_MCF_GPU_PORT_COUNT 2ULL
+#define NV_MCF_GPU_FILTER_ID_MASK ((1ULL << NV_MCF_GPU_PORT_COUNT) - 1)
+
+#define NV_MCF_NVLINK_PORT_COUNT 4ULL
+#define NV_MCF_NVLINK_FILTER_ID_MASK ((1ULL << NV_MCF_NVLINK_PORT_COUNT) - 1)
+
+#define PMIIDR_PRODUCTID_MASK 0xFFF
+#define PMIIDR_PRODUCTID_SHIFT 20
+
+#define to_nv_pmu_impl(coresight_pmu) \
+ (container_of(coresight_pmu->impl.ops, struct nv_pmu_impl, ops))
+
+#define CORESIGHT_EVENT_ATTR_4_INNER(_pref, _num, _suff, _config) \
+ CORESIGHT_EVENT_ATTR(_pref##_num##_suff, _config)
+
+#define CORESIGHT_EVENT_ATTR_4(_pref, _suff, _config) \
+ CORESIGHT_EVENT_ATTR_4_INNER(_pref, _0_, _suff, _config), \
+ CORESIGHT_EVENT_ATTR_4_INNER(_pref, _1_, _suff, _config + 1), \
+ CORESIGHT_EVENT_ATTR_4_INNER(_pref, _2_, _suff, _config + 2), \
+ CORESIGHT_EVENT_ATTR_4_INNER(_pref, _3_, _suff, _config + 3)
+
+struct nv_pmu_impl {
+ struct coresight_pmu_impl_ops ops;
+ const char *name;
+ u32 filter_mask;
+ struct attribute **event_attr;
+ struct attribute **format_attr;
+};
+
+static struct attribute *scf_pmu_event_attrs[] = {
+ CORESIGHT_EVENT_ATTR(bus_cycles, 0x1d),
+
+ CORESIGHT_EVENT_ATTR(scf_cache_allocate, 0xF0),
+ CORESIGHT_EVENT_ATTR(scf_cache_refill, 0xF1),
+ CORESIGHT_EVENT_ATTR(scf_cache, 0xF2),
+ CORESIGHT_EVENT_ATTR(scf_cache_wb, 0xF3),
+
+ CORESIGHT_EVENT_ATTR_4(socket, rd_data, 0x101),
+ CORESIGHT_EVENT_ATTR_4(socket, dl_rsp, 0x105),
+ CORESIGHT_EVENT_ATTR_4(socket, wb_data, 0x109),
+ CORESIGHT_EVENT_ATTR_4(socket, ev_rsp, 0x10d),
+ CORESIGHT_EVENT_ATTR_4(socket, prb_data, 0x111),
+
+ CORESIGHT_EVENT_ATTR_4(socket, rd_outstanding, 0x115),
+ CORESIGHT_EVENT_ATTR_4(socket, dl_outstanding, 0x119),
+ CORESIGHT_EVENT_ATTR_4(socket, wb_outstanding, 0x11d),
+ CORESIGHT_EVENT_ATTR_4(socket, wr_outstanding, 0x121),
+ CORESIGHT_EVENT_ATTR_4(socket, ev_outstanding, 0x125),
+ CORESIGHT_EVENT_ATTR_4(socket, prb_outstanding, 0x129),
+
+ CORESIGHT_EVENT_ATTR_4(socket, rd_access, 0x12d),
+ CORESIGHT_EVENT_ATTR_4(socket, dl_access, 0x131),
+ CORESIGHT_EVENT_ATTR_4(socket, wb_access, 0x135),
+ CORESIGHT_EVENT_ATTR_4(socket, wr_access, 0x139),
+ CORESIGHT_EVENT_ATTR_4(socket, ev_access, 0x13d),
+ CORESIGHT_EVENT_ATTR_4(socket, prb_access, 0x141),
+
+ CORESIGHT_EVENT_ATTR_4(ocu, gmem_rd_data, 0x145),
+ CORESIGHT_EVENT_ATTR_4(ocu, gmem_rd_access, 0x149),
+ CORESIGHT_EVENT_ATTR_4(ocu, gmem_wb_access, 0x14d),
+ CORESIGHT_EVENT_ATTR_4(ocu, gmem_rd_outstanding, 0x151),
+ CORESIGHT_EVENT_ATTR_4(ocu, gmem_wr_outstanding, 0x155),
+
+ CORESIGHT_EVENT_ATTR_4(ocu, rem_rd_data, 0x159),
+ CORESIGHT_EVENT_ATTR_4(ocu, rem_rd_access, 0x15d),
+ CORESIGHT_EVENT_ATTR_4(ocu, rem_wb_access, 0x161),
+ CORESIGHT_EVENT_ATTR_4(ocu, rem_rd_outstanding, 0x165),
+ CORESIGHT_EVENT_ATTR_4(ocu, rem_wr_outstanding, 0x169),
+
+ CORESIGHT_EVENT_ATTR(gmem_rd_data, 0x16d),
+ CORESIGHT_EVENT_ATTR(gmem_rd_access, 0x16e),
+ CORESIGHT_EVENT_ATTR(gmem_rd_outstanding, 0x16f),
+ CORESIGHT_EVENT_ATTR(gmem_dl_rsp, 0x170),
+ CORESIGHT_EVENT_ATTR(gmem_dl_access, 0x171),
+ CORESIGHT_EVENT_ATTR(gmem_dl_outstanding, 0x172),
+ CORESIGHT_EVENT_ATTR(gmem_wb_data, 0x173),
+ CORESIGHT_EVENT_ATTR(gmem_wb_access, 0x174),
+ CORESIGHT_EVENT_ATTR(gmem_wb_outstanding, 0x175),
+ CORESIGHT_EVENT_ATTR(gmem_ev_rsp, 0x176),
+ CORESIGHT_EVENT_ATTR(gmem_ev_access, 0x177),
+ CORESIGHT_EVENT_ATTR(gmem_ev_outstanding, 0x178),
+ CORESIGHT_EVENT_ATTR(gmem_wr_data, 0x179),
+ CORESIGHT_EVENT_ATTR(gmem_wr_outstanding, 0x17a),
+ CORESIGHT_EVENT_ATTR(gmem_wr_access, 0x17b),
+
+ CORESIGHT_EVENT_ATTR_4(socket, wr_data, 0x17c),
+
+ CORESIGHT_EVENT_ATTR_4(ocu, gmem_wr_data, 0x180),
+ CORESIGHT_EVENT_ATTR_4(ocu, gmem_wb_data, 0x184),
+ CORESIGHT_EVENT_ATTR_4(ocu, gmem_wr_access, 0x188),
+ CORESIGHT_EVENT_ATTR_4(ocu, gmem_wb_outstanding, 0x18c),
+
+ CORESIGHT_EVENT_ATTR_4(ocu, rem_wr_data, 0x190),
+ CORESIGHT_EVENT_ATTR_4(ocu, rem_wb_data, 0x194),
+ CORESIGHT_EVENT_ATTR_4(ocu, rem_wr_access, 0x198),
+ CORESIGHT_EVENT_ATTR_4(ocu, rem_wb_outstanding, 0x19c),
+
+ CORESIGHT_EVENT_ATTR(gmem_wr_total_bytes, 0x1a0),
+ CORESIGHT_EVENT_ATTR(remote_socket_wr_total_bytes, 0x1a1),
+ CORESIGHT_EVENT_ATTR(remote_socket_rd_data, 0x1a2),
+ CORESIGHT_EVENT_ATTR(remote_socket_rd_outstanding, 0x1a3),
+ CORESIGHT_EVENT_ATTR(remote_socket_rd_access, 0x1a4),
+
+ CORESIGHT_EVENT_ATTR(cmem_rd_data, 0x1a5),
+ CORESIGHT_EVENT_ATTR(cmem_rd_access, 0x1a6),
+ CORESIGHT_EVENT_ATTR(cmem_rd_outstanding, 0x1a7),
+ CORESIGHT_EVENT_ATTR(cmem_dl_rsp, 0x1a8),
+ CORESIGHT_EVENT_ATTR(cmem_dl_access, 0x1a9),
+ CORESIGHT_EVENT_ATTR(cmem_dl_outstanding, 0x1aa),
+ CORESIGHT_EVENT_ATTR(cmem_wb_data, 0x1ab),
+ CORESIGHT_EVENT_ATTR(cmem_wb_access, 0x1ac),
+ CORESIGHT_EVENT_ATTR(cmem_wb_outstanding, 0x1ad),
+ CORESIGHT_EVENT_ATTR(cmem_ev_rsp, 0x1ae),
+ CORESIGHT_EVENT_ATTR(cmem_ev_access, 0x1af),
+ CORESIGHT_EVENT_ATTR(cmem_ev_outstanding, 0x1b0),
+ CORESIGHT_EVENT_ATTR(cmem_wr_data, 0x1b1),
+ CORESIGHT_EVENT_ATTR(cmem_wr_outstanding, 0x1b2),
+
+ CORESIGHT_EVENT_ATTR_4(ocu, cmem_rd_data, 0x1b3),
+ CORESIGHT_EVENT_ATTR_4(ocu, cmem_rd_access, 0x1b7),
+ CORESIGHT_EVENT_ATTR_4(ocu, cmem_wb_access, 0x1bb),
+ CORESIGHT_EVENT_ATTR_4(ocu, cmem_rd_outstanding, 0x1bf),
+ CORESIGHT_EVENT_ATTR_4(ocu, cmem_wr_outstanding, 0x1c3),
+
+ CORESIGHT_EVENT_ATTR(ocu_prb_access, 0x1c7),
+ CORESIGHT_EVENT_ATTR(ocu_prb_data, 0x1c8),
+ CORESIGHT_EVENT_ATTR(ocu_prb_outstanding, 0x1c9),
+
+ CORESIGHT_EVENT_ATTR(cmem_wr_access, 0x1ca),
+
+ CORESIGHT_EVENT_ATTR_4(ocu, cmem_wr_access, 0x1cb),
+ CORESIGHT_EVENT_ATTR_4(ocu, cmem_wb_data, 0x1cf),
+ CORESIGHT_EVENT_ATTR_4(ocu, cmem_wr_data, 0x1d3),
+ CORESIGHT_EVENT_ATTR_4(ocu, cmem_wb_outstanding, 0x1d7),
+
+ CORESIGHT_EVENT_ATTR(cmem_wr_total_bytes, 0x1db),
+
+ CORESIGHT_EVENT_ATTR(cycles, CORESIGHT_PMU_EVT_CYCLES_DEFAULT),
+ NULL,
+};
+
+static struct attribute *mcf_pmu_event_attrs[] = {
+ CORESIGHT_EVENT_ATTR(rd_bytes_loc, 0x0),
+ CORESIGHT_EVENT_ATTR(rd_bytes_rem, 0x1),
+ CORESIGHT_EVENT_ATTR(wr_bytes_loc, 0x2),
+ CORESIGHT_EVENT_ATTR(wr_bytes_rem, 0x3),
+ CORESIGHT_EVENT_ATTR(total_bytes_loc, 0x4),
+ CORESIGHT_EVENT_ATTR(total_bytes_rem, 0x5),
+ CORESIGHT_EVENT_ATTR(rd_req_loc, 0x6),
+ CORESIGHT_EVENT_ATTR(rd_req_rem, 0x7),
+ CORESIGHT_EVENT_ATTR(wr_req_loc, 0x8),
+ CORESIGHT_EVENT_ATTR(wr_req_rem, 0x9),
+ CORESIGHT_EVENT_ATTR(total_req_loc, 0xa),
+ CORESIGHT_EVENT_ATTR(total_req_rem, 0xb),
+ CORESIGHT_EVENT_ATTR(rd_cum_outs_loc, 0xc),
+ CORESIGHT_EVENT_ATTR(rd_cum_outs_rem, 0xd),
+ CORESIGHT_EVENT_ATTR(cycles, CORESIGHT_PMU_EVT_CYCLES_DEFAULT),
+ NULL,
+};
+
+static struct attribute *scf_pmu_format_attrs[] = {
+ CORESIGHT_FORMAT_EVENT_ATTR,
+ NULL,
+};
+
+static struct attribute *mcf_pcie_pmu_format_attrs[] = {
+ CORESIGHT_FORMAT_EVENT_ATTR,
+ CORESIGHT_FORMAT_ATTR(root_port, "config1:0-9"),
+ NULL,
+};
+
+static struct attribute *mcf_gpu_pmu_format_attrs[] = {
+ CORESIGHT_FORMAT_EVENT_ATTR,
+ CORESIGHT_FORMAT_ATTR(gpu, "config1:0-1"),
+ NULL,
+};
+
+static struct attribute *mcf_nvlink_pmu_format_attrs[] = {
+ CORESIGHT_FORMAT_EVENT_ATTR,
+ CORESIGHT_FORMAT_ATTR(socket, "config1:0-3"),
+ NULL,
+};
+
+static struct attribute **
+nv_coresight_pmu_get_event_attrs(const struct coresight_pmu *coresight_pmu)
+{
+ const struct nv_pmu_impl *impl = to_nv_pmu_impl(coresight_pmu);
+
+ return impl->event_attr;
+}
+
+static struct attribute **
+nv_coresight_pmu_get_format_attrs(const struct coresight_pmu *coresight_pmu)
+{
+ const struct nv_pmu_impl *impl = to_nv_pmu_impl(coresight_pmu);
+
+ return impl->format_attr;
+}
+
+static const char *
+nv_coresight_pmu_get_name(const struct coresight_pmu *coresight_pmu)
+{
+ const struct nv_pmu_impl *impl = to_nv_pmu_impl(coresight_pmu);
+
+ return impl->name;
+}
+
+static u32 nv_coresight_pmu_event_filter(const struct perf_event *event)
+{
+ const struct nv_pmu_impl *impl =
+ to_nv_pmu_impl(to_coresight_pmu(event->pmu));
+ return event->attr.config1 & impl->filter_mask;
+}
+
+int nv_coresight_init_ops(struct coresight_pmu *coresight_pmu)
+{
+ u32 product_id;
+ struct device *dev;
+ struct nv_pmu_impl *impl;
+ static atomic_t pmu_idx = {0};
+
+ dev = coresight_pmu->dev;
+
+ impl = devm_kzalloc(dev, sizeof(struct nv_pmu_impl), GFP_KERNEL);
+ if (!impl)
+ return -ENOMEM;
+
+ product_id = (coresight_pmu->impl.pmiidr >> PMIIDR_PRODUCTID_SHIFT) &
+ PMIIDR_PRODUCTID_MASK;
+
+ switch (product_id) {
+ case 0x103:
+ impl->name =
+ devm_kasprintf(dev, GFP_KERNEL,
+ "nvidia_mcf_pcie_pmu_%u",
+ coresight_pmu->apmt_node->proc_affinity);
+ impl->filter_mask = NV_MCF_PCIE_FILTER_ID_MASK;
+ impl->event_attr = mcf_pmu_event_attrs;
+ impl->format_attr = mcf_pcie_pmu_format_attrs;
+ break;
+ case 0x104:
+ impl->name =
+ devm_kasprintf(dev, GFP_KERNEL,
+ "nvidia_mcf_gpuvir_pmu_%u",
+ coresight_pmu->apmt_node->proc_affinity);
+ impl->filter_mask = NV_MCF_GPU_FILTER_ID_MASK;
+ impl->event_attr = mcf_pmu_event_attrs;
+ impl->format_attr = mcf_gpu_pmu_format_attrs;
+ break;
+ case 0x105:
+ impl->name =
+ devm_kasprintf(dev, GFP_KERNEL,
+ "nvidia_mcf_gpu_pmu_%u",
+ coresight_pmu->apmt_node->proc_affinity);
+ impl->filter_mask = NV_MCF_GPU_FILTER_ID_MASK;
+ impl->event_attr = mcf_pmu_event_attrs;
+ impl->format_attr = mcf_gpu_pmu_format_attrs;
+ break;
+ case 0x106:
+ impl->name =
+ devm_kasprintf(dev, GFP_KERNEL,
+ "nvidia_mcf_nvlink_pmu_%u",
+ coresight_pmu->apmt_node->proc_affinity);
+ impl->filter_mask = NV_MCF_NVLINK_FILTER_ID_MASK;
+ impl->event_attr = mcf_pmu_event_attrs;
+ impl->format_attr = mcf_nvlink_pmu_format_attrs;
+ break;
+ case 0x2CF:
+ impl->name =
+ devm_kasprintf(dev, GFP_KERNEL, "nvidia_scf_pmu_%u",
+ coresight_pmu->apmt_node->proc_affinity);
+ impl->filter_mask = 0x0;
+ impl->event_attr = scf_pmu_event_attrs;
+ impl->format_attr = scf_pmu_format_attrs;
+ break;
+ default:
+ impl->name =
+ devm_kasprintf(dev, GFP_KERNEL, "nvidia_uncore_pmu_%u",
+ atomic_fetch_inc(&pmu_idx));
+ impl->filter_mask = CORESIGHT_FILTER_MASK;
+ impl->event_attr = coresight_pmu_get_event_attrs(coresight_pmu);
+ impl->format_attr =
+ coresight_pmu_get_format_attrs(coresight_pmu);
+ break;
+ }
+
+ impl->ops.get_event_attrs = nv_coresight_pmu_get_event_attrs;
+ impl->ops.get_format_attrs = nv_coresight_pmu_get_format_attrs;
+ impl->ops.get_identifier = coresight_pmu_get_identifier;
+ impl->ops.get_name = nv_coresight_pmu_get_name;
+ impl->ops.event_filter = nv_coresight_pmu_event_filter;
+ impl->ops.event_type = coresight_pmu_event_type;
+ impl->ops.event_attr_is_visible = coresight_pmu_event_attr_is_visible;
+ impl->ops.is_cc_event = coresight_pmu_is_cc_event;
+
+ coresight_pmu->impl.ops = &impl->ops;
+ return 0;
+}
+EXPORT_SYMBOL_GPL(nv_coresight_init_ops);
diff --git a/drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.h b/drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.h
new file mode 100644
index 000000000000..3c81c16c14f4
--- /dev/null
+++ b/drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0
+ *
+ * Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
+ *
+ */
+
+/* Support for NVIDIA specific attributes. */
+
+#ifndef __ARM_CORESIGHT_PMU_NVIDIA_H__
+#define __ARM_CORESIGHT_PMU_NVIDIA_H__
+
+#include "arm_coresight_pmu.h"
+
+/* Allocate NVIDIA descriptor. */
+int nv_coresight_init_ops(struct coresight_pmu *coresight_pmu);
+
+#endif /* __ARM_CORESIGHT_PMU_NVIDIA_H__ */
--
2.17.1
Add support for the ARM CoreSight PMU driver framework and interfaces.
The driver provides a generic implementation for operating uncore PMUs
based on the ARM CoreSight PMU architecture. It also provides an
interface to get vendor/implementation-specific information, for example
event attributes and formatting.
The specifications used in this implementation can be found below:
* ACPI Arm Performance Monitoring Unit table:
https://developer.arm.com/documentation/den0117/latest
* ARM CoreSight PMU architecture:
https://developer.arm.com/documentation/ihi0091/latest
Signed-off-by: Besar Wicaksono <[email protected]>
---
arch/arm64/configs/defconfig | 1 +
drivers/perf/Kconfig | 2 +
drivers/perf/Makefile | 1 +
drivers/perf/coresight_pmu/Kconfig | 11 +
drivers/perf/coresight_pmu/Makefile | 6 +
.../perf/coresight_pmu/arm_coresight_pmu.c | 1312 +++++++++++++++++
.../perf/coresight_pmu/arm_coresight_pmu.h | 177 +++
7 files changed, 1510 insertions(+)
create mode 100644 drivers/perf/coresight_pmu/Kconfig
create mode 100644 drivers/perf/coresight_pmu/Makefile
create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu.c
create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu.h
diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
index 7d1105343bc2..22184f8883da 100644
--- a/arch/arm64/configs/defconfig
+++ b/arch/arm64/configs/defconfig
@@ -1212,6 +1212,7 @@ CONFIG_PHY_UNIPHIER_USB3=y
CONFIG_PHY_TEGRA_XUSB=y
CONFIG_PHY_AM654_SERDES=m
CONFIG_PHY_J721E_WIZ=m
+CONFIG_ARM_CORESIGHT_PMU=y
CONFIG_ARM_SMMU_V3_PMU=m
CONFIG_FSL_IMX8_DDR_PMU=m
CONFIG_QCOM_L2_PMU=y
diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig
index 1e2d69453771..c4e7cd5b4162 100644
--- a/drivers/perf/Kconfig
+++ b/drivers/perf/Kconfig
@@ -192,4 +192,6 @@ config MARVELL_CN10K_DDR_PMU
Enable perf support for Marvell DDR Performance monitoring
event on CN10K platform.
+source "drivers/perf/coresight_pmu/Kconfig"
+
endmenu
diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile
index 57a279c61df5..4126a04b5583 100644
--- a/drivers/perf/Makefile
+++ b/drivers/perf/Makefile
@@ -20,3 +20,4 @@ obj-$(CONFIG_ARM_DMC620_PMU) += arm_dmc620_pmu.o
obj-$(CONFIG_MARVELL_CN10K_TAD_PMU) += marvell_cn10k_tad_pmu.o
obj-$(CONFIG_MARVELL_CN10K_DDR_PMU) += marvell_cn10k_ddr_pmu.o
obj-$(CONFIG_APPLE_M1_CPU_PMU) += apple_m1_cpu_pmu.o
+obj-$(CONFIG_ARM_CORESIGHT_PMU) += coresight_pmu/
diff --git a/drivers/perf/coresight_pmu/Kconfig b/drivers/perf/coresight_pmu/Kconfig
new file mode 100644
index 000000000000..89174f54c7be
--- /dev/null
+++ b/drivers/perf/coresight_pmu/Kconfig
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
+
+config ARM_CORESIGHT_PMU
+ tristate "ARM Coresight PMU"
+ depends on ACPI
+ depends on ACPI_APMT || COMPILE_TEST
+ help
+ Provides support for Performance Monitoring Unit (PMU) events based on
+ ARM CoreSight PMU architecture.
\ No newline at end of file
diff --git a/drivers/perf/coresight_pmu/Makefile b/drivers/perf/coresight_pmu/Makefile
new file mode 100644
index 000000000000..a2a7a5fbbc16
--- /dev/null
+++ b/drivers/perf/coresight_pmu/Makefile
@@ -0,0 +1,6 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
+#
+# SPDX-License-Identifier: GPL-2.0
+
+obj-$(CONFIG_ARM_CORESIGHT_PMU) += \
+ arm_coresight_pmu.o
diff --git a/drivers/perf/coresight_pmu/arm_coresight_pmu.c b/drivers/perf/coresight_pmu/arm_coresight_pmu.c
new file mode 100644
index 000000000000..ba52cc592b2d
--- /dev/null
+++ b/drivers/perf/coresight_pmu/arm_coresight_pmu.c
@@ -0,0 +1,1312 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * ARM CoreSight PMU driver.
+ *
+ * This driver adds support for uncore PMU based on ARM CoreSight Performance
+ * Monitoring Unit Architecture. The PMU is accessible via MMIO registers and
+ * like other uncore PMUs, it does not support process specific events and
+ * cannot be used in sampling mode.
+ *
+ * This code is based on other uncore PMUs like ARM DSU PMU. It provides a
+ * generic implementation to operate the PMU according to CoreSight PMU
+ * architecture and ACPI ARM PMU table (APMT) documents below:
+ * - ARM CoreSight PMU architecture document number: ARM IHI 0091 A.a-00bet0.
+ * - APMT document number: ARM DEN0117.
+ * The description of the PMU, such as the PMU device identification,
+ * available events, and configuration options, is vendor specific. The driver
+ * provides an interface for vendor-specific code to supply this information,
+ * which allows the driver to be shared across PMUs from different vendors.
+ *
+ * The CoreSight PMU devices can be named using an implementer-specific format,
+ * or with the default naming format: arm_<apmt node type>_pmu_<numeric id>.
+ * The description of a device, such as its identifier, supported events, and
+ * formats, can be found in sysfs under
+ * /sys/bus/event_source/devices/arm_<apmt node type>_pmu_<numeric id>
+ *
+ * The user should refer to the vendor technical documentation to get details
+ * about the supported events.
+ *
+ * Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
+ *
+ */
+
+#include <linux/acpi.h>
+#include <linux/cacheinfo.h>
+#include <linux/ctype.h>
+#include <linux/interrupt.h>
+#include <linux/io-64-nonatomic-lo-hi.h>
+#include <linux/module.h>
+#include <linux/mod_devicetable.h>
+#include <linux/perf_event.h>
+#include <linux/platform_device.h>
+#include <acpi/processor.h>
+
+#include "arm_coresight_pmu.h"
+
+#define PMUNAME "arm_system_pmu"
+
+#define CORESIGHT_CPUMASK_ATTR(_name, _config) \
+ CORESIGHT_EXT_ATTR(_name, coresight_pmu_cpumask_show, \
+ (unsigned long)_config)
+
+/*
+ * Register offsets based on CoreSight Performance Monitoring Unit Architecture
+ * Document number: ARM-ECM-0640169 00alp6
+ */
+#define PMEVCNTR_LO 0x0
+#define PMEVCNTR_HI 0x4
+#define PMEVTYPER 0x400
+#define PMCCFILTR 0x47C
+#define PMEVFILTR 0xA00
+#define PMCNTENSET 0xC00
+#define PMCNTENCLR 0xC20
+#define PMINTENSET 0xC40
+#define PMINTENCLR 0xC60
+#define PMOVSCLR 0xC80
+#define PMOVSSET 0xCC0
+#define PMCFGR 0xE00
+#define PMCR 0xE04
+#define PMIIDR 0xE08
+
+/* PMCFGR register field */
+#define PMCFGR_NCG_SHIFT 28
+#define PMCFGR_NCG_MASK 0xf
+#define PMCFGR_HDBG BIT(24)
+#define PMCFGR_TRO BIT(23)
+#define PMCFGR_SS BIT(22)
+#define PMCFGR_FZO BIT(21)
+#define PMCFGR_MSI BIT(20)
+#define PMCFGR_UEN BIT(19)
+#define PMCFGR_NA BIT(17)
+#define PMCFGR_EX BIT(16)
+#define PMCFGR_CCD BIT(15)
+#define PMCFGR_CC BIT(14)
+#define PMCFGR_SIZE_SHIFT 8
+#define PMCFGR_SIZE_MASK 0x3f
+#define PMCFGR_N_SHIFT 0
+#define PMCFGR_N_MASK 0xff
+
+/* PMCR register field */
+#define PMCR_TRO BIT(11)
+#define PMCR_HDBG BIT(10)
+#define PMCR_FZO BIT(9)
+#define PMCR_NA BIT(8)
+#define PMCR_DP BIT(5)
+#define PMCR_X BIT(4)
+#define PMCR_D BIT(3)
+#define PMCR_C BIT(2)
+#define PMCR_P BIT(1)
+#define PMCR_E BIT(0)
+
+/* PMIIDR register field */
+#define PMIIDR_IMPLEMENTER_MASK 0xFFF
+#define PMIIDR_PRODUCTID_MASK 0xFFF
+#define PMIIDR_PRODUCTID_SHIFT 20
+
+/* Each SET/CLR register supports up to 32 counters. */
+#define CORESIGHT_SET_CLR_REG_COUNTER_NUM 32
+#define CORESIGHT_SET_CLR_REG_COUNTER_SHIFT 5
+
+/* The number of 32-bit SET/CLR register that can be supported. */
+#define CORESIGHT_SET_CLR_REG_MAX_NUM ((PMCNTENCLR - PMCNTENSET) / sizeof(u32))
+
+static_assert((CORESIGHT_SET_CLR_REG_MAX_NUM *
+ CORESIGHT_SET_CLR_REG_COUNTER_NUM) >=
+ CORESIGHT_PMU_MAX_HW_CNTRS);
+
+/* Convert counter idx into SET/CLR register number. */
+#define CORESIGHT_IDX_TO_SET_CLR_REG_ID(idx) \
+ (idx >> CORESIGHT_SET_CLR_REG_COUNTER_SHIFT)
+
+/* Convert counter idx into SET/CLR register bit. */
+#define CORESIGHT_IDX_TO_SET_CLR_REG_BIT(idx) \
+ (idx & (CORESIGHT_SET_CLR_REG_COUNTER_NUM - 1))
+
+#define CORESIGHT_ACTIVE_CPU_MASK 0x0
+#define CORESIGHT_ASSOCIATED_CPU_MASK 0x1
+
+
+/* Check if field f in flags is set with value v */
+#define CHECK_APMT_FLAG(flags, f, v) \
+ ((flags & (ACPI_APMT_FLAGS_ ## f)) == (ACPI_APMT_FLAGS_ ## f ## _ ## v))
+
+static unsigned long coresight_pmu_cpuhp_state;
+
+/*
+ * In CoreSight PMU architecture, all of the MMIO registers are 32-bit except
+ * counter register. The counter register can be implemented as 32-bit or 64-bit
+ * register depending on the value of PMCFGR.SIZE field. For 64-bit access,
+ * single-copy 64-bit atomic support is implementation defined. APMT node flag
+ * is used to identify if the PMU supports 64-bit single copy atomic. If 64-bit
+ * single copy atomic is not supported, the driver treats the register as a pair
+ * of 32-bit register.
+ */
+
+/*
+ * Read 64-bit register as a pair of 32-bit registers using hi-lo-hi sequence.
+ */
+static u64 read_reg64_hilohi(const void __iomem *addr)
+{
+ u32 val_lo, val_hi;
+ u64 val;
+
+ /* Use high-low-high sequence to avoid tearing */
+ do {
+ val_hi = readl(addr + 4);
+ val_lo = readl(addr);
+ } while (val_hi != readl(addr + 4));
+
+ val = (((u64)val_hi << 32) | val_lo);
+
+ return val;
+}
+
+/* Check if PMU supports 64-bit single copy atomic. */
+static inline bool support_atomic(const struct coresight_pmu *coresight_pmu)
+{
+ return CHECK_APMT_FLAG(coresight_pmu->apmt_node->flags, ATOMIC, SUPP);
+}
+
+/* Check if cycle counter is supported. */
+static inline bool support_cc(const struct coresight_pmu *coresight_pmu)
+{
+ return (coresight_pmu->pmcfgr & PMCFGR_CC);
+}
+
+/* Get counter size. */
+static inline u32 pmcfgr_size(const struct coresight_pmu *coresight_pmu)
+{
+ return (coresight_pmu->pmcfgr >> PMCFGR_SIZE_SHIFT) & PMCFGR_SIZE_MASK;
+}
+
+/* Check if counter is implemented as 64-bit register. */
+static inline bool
+use_64b_counter_reg(const struct coresight_pmu *coresight_pmu)
+{
+ return (pmcfgr_size(coresight_pmu) > 31);
+}
+
+/* Get number of counters, minus one. */
+static inline u32 pmcfgr_n(const struct coresight_pmu *coresight_pmu)
+{
+ return (coresight_pmu->pmcfgr >> PMCFGR_N_SHIFT) & PMCFGR_N_MASK;
+}
+
+ssize_t coresight_pmu_sysfs_event_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct dev_ext_attribute *eattr =
+ container_of(attr, struct dev_ext_attribute, attr);
+ return sysfs_emit(buf, "event=0x%llx\n",
+ (unsigned long long)eattr->var);
+}
+EXPORT_SYMBOL_GPL(coresight_pmu_sysfs_event_show);
+
+/* Default event list. */
+static struct attribute *coresight_pmu_event_attrs[] = {
+ CORESIGHT_EVENT_ATTR(cycles, CORESIGHT_PMU_EVT_CYCLES_DEFAULT),
+ NULL,
+};
+
+struct attribute **
+coresight_pmu_get_event_attrs(const struct coresight_pmu *coresight_pmu)
+{
+ return coresight_pmu_event_attrs;
+}
+EXPORT_SYMBOL_GPL(coresight_pmu_get_event_attrs);
+
+umode_t coresight_pmu_event_attr_is_visible(struct kobject *kobj,
+ struct attribute *attr, int unused)
+{
+ struct device *dev = kobj_to_dev(kobj);
+ struct coresight_pmu *coresight_pmu =
+ to_coresight_pmu(dev_get_drvdata(dev));
+ struct perf_pmu_events_attr *eattr;
+
+ eattr = container_of(attr, typeof(*eattr), attr.attr);
+
+ /* Hide cycle event if not supported */
+ if (!support_cc(coresight_pmu) &&
+ eattr->id == CORESIGHT_PMU_EVT_CYCLES_DEFAULT) {
+ return 0;
+ }
+
+ return attr->mode;
+}
+EXPORT_SYMBOL_GPL(coresight_pmu_event_attr_is_visible);
+
+ssize_t coresight_pmu_sysfs_format_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct dev_ext_attribute *eattr =
+ container_of(attr, struct dev_ext_attribute, attr);
+ return sysfs_emit(buf, "%s\n", (char *)eattr->var);
+}
+EXPORT_SYMBOL_GPL(coresight_pmu_sysfs_format_show);
+
+static struct attribute *coresight_pmu_format_attrs[] = {
+ CORESIGHT_FORMAT_EVENT_ATTR,
+ CORESIGHT_FORMAT_FILTER_ATTR,
+ NULL,
+};
+
+struct attribute **
+coresight_pmu_get_format_attrs(const struct coresight_pmu *coresight_pmu)
+{
+ return coresight_pmu_format_attrs;
+}
+EXPORT_SYMBOL_GPL(coresight_pmu_get_format_attrs);
+
+u32 coresight_pmu_event_type(const struct perf_event *event)
+{
+ return event->attr.config & CORESIGHT_EVENT_MASK;
+}
+EXPORT_SYMBOL_GPL(coresight_pmu_event_type);
+
+u32 coresight_pmu_event_filter(const struct perf_event *event)
+{
+ return event->attr.config1 & CORESIGHT_FILTER_MASK;
+}
+EXPORT_SYMBOL_GPL(coresight_pmu_event_filter);
+
+static ssize_t coresight_pmu_identifier_show(struct device *dev,
+ struct device_attribute *attr,
+ char *page)
+{
+ struct coresight_pmu *coresight_pmu =
+ to_coresight_pmu(dev_get_drvdata(dev));
+
+ return sysfs_emit(page, "%s\n", coresight_pmu->identifier);
+}
+
+static struct device_attribute coresight_pmu_identifier_attr =
+ __ATTR(identifier, 0444, coresight_pmu_identifier_show, NULL);
+
+static struct attribute *coresight_pmu_identifier_attrs[] = {
+ &coresight_pmu_identifier_attr.attr,
+ NULL,
+};
+
+static struct attribute_group coresight_pmu_identifier_attr_group = {
+ .attrs = coresight_pmu_identifier_attrs,
+};
+
+const char *
+coresight_pmu_get_identifier(const struct coresight_pmu *coresight_pmu)
+{
+ const char *identifier =
+ devm_kasprintf(coresight_pmu->dev, GFP_KERNEL, "%x",
+ coresight_pmu->impl.pmiidr);
+ return identifier;
+}
+EXPORT_SYMBOL_GPL(coresight_pmu_get_identifier);
+
+static const char *coresight_pmu_type_str[ACPI_APMT_NODE_TYPE_COUNT] = {
+ "mc",
+ "smmu",
+ "pcie",
+ "acpi",
+ "cache",
+};
+
+const char *coresight_pmu_get_name(const struct coresight_pmu *coresight_pmu)
+{
+ struct device *dev;
+ u8 pmu_type;
+ char *name;
+ char acpi_hid_string[ACPI_ID_LEN] = { 0 };
+ static atomic_t pmu_idx[ACPI_APMT_NODE_TYPE_COUNT] = { 0 };
+
+ dev = coresight_pmu->dev;
+ pmu_type = coresight_pmu->apmt_node->type;
+
+ if (pmu_type >= ACPI_APMT_NODE_TYPE_COUNT) {
+ dev_err(dev, "unsupported PMU type-%u\n", pmu_type);
+ return NULL;
+ }
+
+ if (pmu_type == ACPI_APMT_NODE_TYPE_ACPI) {
+ memcpy(acpi_hid_string,
+ &coresight_pmu->apmt_node->inst_primary,
+ sizeof(coresight_pmu->apmt_node->inst_primary));
+ name = devm_kasprintf(dev, GFP_KERNEL, "arm_%s_pmu_%s_%u",
+ coresight_pmu_type_str[pmu_type],
+ acpi_hid_string,
+ coresight_pmu->apmt_node->inst_secondary);
+ } else {
+ name = devm_kasprintf(dev, GFP_KERNEL, "arm_%s_pmu_%d",
+ coresight_pmu_type_str[pmu_type],
+ atomic_fetch_inc(&pmu_idx[pmu_type]));
+ }
+
+ return name;
+}
+EXPORT_SYMBOL_GPL(coresight_pmu_get_name);
+
+static ssize_t coresight_pmu_cpumask_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct pmu *pmu = dev_get_drvdata(dev);
+ struct coresight_pmu *coresight_pmu = to_coresight_pmu(pmu);
+ struct dev_ext_attribute *eattr =
+ container_of(attr, struct dev_ext_attribute, attr);
+ unsigned long mask_id = (unsigned long)eattr->var;
+ const cpumask_t *cpumask;
+
+ switch (mask_id) {
+ case CORESIGHT_ACTIVE_CPU_MASK:
+ cpumask = &coresight_pmu->active_cpu;
+ break;
+ case CORESIGHT_ASSOCIATED_CPU_MASK:
+ cpumask = &coresight_pmu->associated_cpus;
+ break;
+ default:
+ return 0;
+ }
+ return cpumap_print_to_pagebuf(true, buf, cpumask);
+}
+
+static struct attribute *coresight_pmu_cpumask_attrs[] = {
+ CORESIGHT_CPUMASK_ATTR(cpumask, CORESIGHT_ACTIVE_CPU_MASK),
+ CORESIGHT_CPUMASK_ATTR(associated_cpus, CORESIGHT_ASSOCIATED_CPU_MASK),
+ NULL,
+};
+
+static struct attribute_group coresight_pmu_cpumask_attr_group = {
+ .attrs = coresight_pmu_cpumask_attrs,
+};
+
+static const struct coresight_pmu_impl_ops default_impl_ops = {
+ .get_event_attrs = coresight_pmu_get_event_attrs,
+ .get_format_attrs = coresight_pmu_get_format_attrs,
+ .get_identifier = coresight_pmu_get_identifier,
+ .get_name = coresight_pmu_get_name,
+ .is_cc_event = coresight_pmu_is_cc_event,
+ .event_type = coresight_pmu_event_type,
+ .event_filter = coresight_pmu_event_filter,
+ .event_attr_is_visible = coresight_pmu_event_attr_is_visible
+};
+
+struct impl_match {
+ u32 pmiidr;
+ u32 mask;
+ int (*impl_init_ops)(struct coresight_pmu *coresight_pmu);
+};
+
+static const struct impl_match impl_match[] = {
+ {}
+};
+
+static int coresight_pmu_init_impl_ops(struct coresight_pmu *coresight_pmu)
+{
+	struct acpi_apmt_node *apmt_node = coresight_pmu->apmt_node;
+	const struct impl_match *match = impl_match;
+
+	/*
+	 * Get the PMU implementer and product id from the APMT node.
+	 * If the APMT node doesn't provide an implementer/product id,
+	 * fall back to reading PMIIDR.
+	 */
+	coresight_pmu->impl.pmiidr =
+		(apmt_node->impl_id) ? apmt_node->impl_id :
+				       readl(coresight_pmu->base0 + PMIIDR);
+
+	/* Find implementer-specific attribute ops. */
+	for (; match->pmiidr; match++) {
+		if ((match->pmiidr & match->mask) ==
+		    (coresight_pmu->impl.pmiidr & match->mask))
+			return match->impl_init_ops(coresight_pmu);
+	}
+
+	/* No implementer-specific attribute ops found; use the defaults. */
+	coresight_pmu->impl.ops = &default_impl_ops;
+	return 0;
+}
+
+static struct attribute_group *
+coresight_pmu_alloc_event_attr_group(struct coresight_pmu *coresight_pmu)
+{
+ struct attribute_group *event_group;
+ struct device *dev = coresight_pmu->dev;
+
+ event_group =
+ devm_kzalloc(dev, sizeof(struct attribute_group), GFP_KERNEL);
+ if (!event_group)
+ return NULL;
+
+ event_group->name = "events";
+ event_group->attrs =
+ coresight_pmu->impl.ops->get_event_attrs(coresight_pmu);
+ event_group->is_visible =
+ coresight_pmu->impl.ops->event_attr_is_visible;
+
+ return event_group;
+}
+
+static struct attribute_group *
+coresight_pmu_alloc_format_attr_group(struct coresight_pmu *coresight_pmu)
+{
+ struct attribute_group *format_group;
+ struct device *dev = coresight_pmu->dev;
+
+ format_group =
+ devm_kzalloc(dev, sizeof(struct attribute_group), GFP_KERNEL);
+ if (!format_group)
+ return NULL;
+
+ format_group->name = "format";
+ format_group->attrs =
+ coresight_pmu->impl.ops->get_format_attrs(coresight_pmu);
+
+ return format_group;
+}
+
+static struct attribute_group **
+coresight_pmu_alloc_attr_group(struct coresight_pmu *coresight_pmu)
+{
+ const struct coresight_pmu_impl_ops *impl_ops;
+ struct attribute_group **attr_groups = NULL;
+ struct device *dev = coresight_pmu->dev;
+ int ret;
+
+ ret = coresight_pmu_init_impl_ops(coresight_pmu);
+ if (ret)
+ return NULL;
+
+ impl_ops = coresight_pmu->impl.ops;
+
+ coresight_pmu->identifier = impl_ops->get_identifier(coresight_pmu);
+ coresight_pmu->name = impl_ops->get_name(coresight_pmu);
+
+ if (!coresight_pmu->name)
+ return NULL;
+
+ attr_groups = devm_kzalloc(dev, 5 * sizeof(struct attribute_group *),
+ GFP_KERNEL);
+ if (!attr_groups)
+ return NULL;
+
+ attr_groups[0] = coresight_pmu_alloc_event_attr_group(coresight_pmu);
+ attr_groups[1] = coresight_pmu_alloc_format_attr_group(coresight_pmu);
+ attr_groups[2] = &coresight_pmu_identifier_attr_group;
+ attr_groups[3] = &coresight_pmu_cpumask_attr_group;
+
+ return attr_groups;
+}
+
+static inline void
+coresight_pmu_reset_counters(struct coresight_pmu *coresight_pmu)
+{
+ u32 pmcr = 0;
+
+ pmcr |= PMCR_P;
+ pmcr |= PMCR_C;
+ writel(pmcr, coresight_pmu->base0 + PMCR);
+}
+
+static inline void
+coresight_pmu_start_counters(struct coresight_pmu *coresight_pmu)
+{
+ u32 pmcr;
+
+ pmcr = PMCR_E;
+ writel(pmcr, coresight_pmu->base0 + PMCR);
+}
+
+static inline void
+coresight_pmu_stop_counters(struct coresight_pmu *coresight_pmu)
+{
+ u32 pmcr;
+
+ pmcr = 0;
+ writel(pmcr, coresight_pmu->base0 + PMCR);
+}
+
+static void coresight_pmu_enable(struct pmu *pmu)
+{
+ bool disabled;
+ struct coresight_pmu *coresight_pmu = to_coresight_pmu(pmu);
+
+ disabled = bitmap_empty(coresight_pmu->hw_events.used_ctrs,
+ coresight_pmu->num_logical_counters);
+
+ if (disabled)
+ return;
+
+ coresight_pmu_start_counters(coresight_pmu);
+}
+
+static void coresight_pmu_disable(struct pmu *pmu)
+{
+ struct coresight_pmu *coresight_pmu = to_coresight_pmu(pmu);
+
+ coresight_pmu_stop_counters(coresight_pmu);
+}
+
+bool coresight_pmu_is_cc_event(const struct perf_event *event)
+{
+ return (event->attr.config == CORESIGHT_PMU_EVT_CYCLES_DEFAULT);
+}
+EXPORT_SYMBOL_GPL(coresight_pmu_is_cc_event);
+
+static int
+coresight_pmu_get_event_idx(struct coresight_pmu_hw_events *hw_events,
+ struct perf_event *event)
+{
+ int idx;
+ struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
+
+ if (support_cc(coresight_pmu)) {
+ if (coresight_pmu->impl.ops->is_cc_event(event)) {
+			/* Claim the cycle counter if it is free. */
+ if (test_and_set_bit(coresight_pmu->cc_logical_idx,
+ hw_events->used_ctrs))
+ return -EAGAIN;
+
+ return coresight_pmu->cc_logical_idx;
+ }
+
+		/*
+		 * Search for a regular counter in the used-counter bitmap.
+		 * The cycle counter splits the bitmap into two parts; search
+		 * the first half, then the second, to skip the cycle counter
+		 * bit.
+		 */
+ idx = find_first_zero_bit(hw_events->used_ctrs,
+ coresight_pmu->cc_logical_idx);
+ if (idx >= coresight_pmu->cc_logical_idx) {
+ idx = find_next_zero_bit(
+ hw_events->used_ctrs,
+ coresight_pmu->num_logical_counters,
+ coresight_pmu->cc_logical_idx + 1);
+ }
+ } else {
+ idx = find_first_zero_bit(hw_events->used_ctrs,
+ coresight_pmu->num_logical_counters);
+ }
+
+ if (idx >= coresight_pmu->num_logical_counters)
+ return -EAGAIN;
+
+ set_bit(idx, hw_events->used_ctrs);
+
+ return idx;
+}
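The two-half bitmap search above can be modeled outside the kernel with plain C over a 32-bit bitmap. This is an illustrative sketch only: `find_zero()` and `alloc_regular_ctr()` stand in for the driver's `find_first_zero_bit()`/`find_next_zero_bit()` calls and its data structures.

```c
#include <assert.h>
#include <stdint.h>

/* Return the first clear bit in [from, to), or 'to' if none. */
static int find_zero(uint32_t bm, int from, int to)
{
	for (int i = from; i < to; i++)
		if (!(bm & (1u << i)))
			return i;
	return to;
}

/*
 * Allocate a regular (non-cycle) counter index: search below the cycle
 * counter slot first, then above it, so cc_idx is never handed out.
 */
static int alloc_regular_ctr(uint32_t used, int num_ctrs, int cc_idx)
{
	int idx = find_zero(used, 0, cc_idx);

	if (idx >= cc_idx)
		idx = find_zero(used, cc_idx + 1, num_ctrs);
	return (idx >= num_ctrs) ? -1 : idx;
}
```

With 8 logical counters and the cycle counter at index 3, filling bits 0-2 forces the search into the upper half, and a bitmap with only the cycle slot free reports exhaustion.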
+
+static bool
+coresight_pmu_validate_event(struct pmu *pmu,
+ struct coresight_pmu_hw_events *hw_events,
+ struct perf_event *event)
+{
+ if (is_software_event(event))
+ return true;
+
+ /* Reject groups spanning multiple HW PMUs. */
+ if (event->pmu != pmu)
+ return false;
+
+ return (coresight_pmu_get_event_idx(hw_events, event) >= 0);
+}
+
+/*
+ * Make sure the group of events can be scheduled at once
+ * on the PMU.
+ */
+static bool coresight_pmu_validate_group(struct perf_event *event)
+{
+ struct perf_event *sibling, *leader = event->group_leader;
+ struct coresight_pmu_hw_events fake_hw_events;
+
+ if (event->group_leader == event)
+ return true;
+
+ memset(&fake_hw_events, 0, sizeof(fake_hw_events));
+
+ if (!coresight_pmu_validate_event(event->pmu, &fake_hw_events, leader))
+ return false;
+
+ for_each_sibling_event(sibling, leader) {
+ if (!coresight_pmu_validate_event(event->pmu, &fake_hw_events,
+ sibling))
+ return false;
+ }
+
+ return coresight_pmu_validate_event(event->pmu, &fake_hw_events, event);
+}
+
+static int coresight_pmu_event_init(struct perf_event *event)
+{
+ struct coresight_pmu *coresight_pmu;
+ struct hw_perf_event *hwc = &event->hw;
+
+ coresight_pmu = to_coresight_pmu(event->pmu);
+
+ /*
+ * Following other "uncore" PMUs, we do not support sampling mode or
+ * attach to a task (per-process mode).
+ */
+ if (is_sampling_event(event)) {
+ dev_dbg(coresight_pmu->pmu.dev,
+ "Can't support sampling events\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (event->cpu < 0 || event->attach_state & PERF_ATTACH_TASK) {
+ dev_dbg(coresight_pmu->pmu.dev,
+ "Can't support per-task counters\n");
+ return -EINVAL;
+ }
+
+ /*
+ * Make sure the CPU assignment is on one of the CPUs associated with
+ * this PMU.
+ */
+ if (!cpumask_test_cpu(event->cpu, &coresight_pmu->associated_cpus)) {
+ dev_dbg(coresight_pmu->pmu.dev,
+ "Requested cpu is not associated with the PMU\n");
+ return -EINVAL;
+ }
+
+ /* Enforce the current active CPU to handle the events in this PMU. */
+ event->cpu = cpumask_first(&coresight_pmu->active_cpu);
+ if (event->cpu >= nr_cpu_ids)
+ return -EINVAL;
+
+ if (!coresight_pmu_validate_group(event))
+ return -EINVAL;
+
+ /*
+ * The logical counter id is tracked with hw_perf_event.extra_reg.idx.
+ * The physical counter id is tracked with hw_perf_event.idx.
+ * We don't assign an index until we actually place the event onto
+ * hardware. Use -1 to signify that we haven't decided where to put it
+ * yet.
+ */
+ hwc->idx = -1;
+ hwc->extra_reg.idx = -1;
+ hwc->config_base = coresight_pmu->impl.ops->event_type(event);
+
+ return 0;
+}
+
+static inline u32 counter_offset(u32 reg_sz, u32 ctr_idx)
+{
+ return (PMEVCNTR_LO + (reg_sz * ctr_idx));
+}
+
+static void coresight_pmu_write_counter(struct perf_event *event, u64 val)
+{
+ u32 offset;
+ struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
+
+ if (use_64b_counter_reg(coresight_pmu)) {
+ offset = counter_offset(sizeof(u64), event->hw.idx);
+
+ writeq(val, coresight_pmu->base1 + offset);
+ } else {
+ offset = counter_offset(sizeof(u32), event->hw.idx);
+
+ writel(lower_32_bits(val), coresight_pmu->base1 + offset);
+ }
+}
+
+static u64 coresight_pmu_read_counter(struct perf_event *event)
+{
+ u32 offset;
+ const void __iomem *counter_addr;
+ struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
+
+ if (use_64b_counter_reg(coresight_pmu)) {
+ offset = counter_offset(sizeof(u64), event->hw.idx);
+ counter_addr = coresight_pmu->base1 + offset;
+
+ return support_atomic(coresight_pmu) ?
+ readq(counter_addr) :
+ read_reg64_hilohi(counter_addr);
+ }
+
+ offset = counter_offset(sizeof(u32), event->hw.idx);
+ return readl(coresight_pmu->base1 + offset);
+}
+
+/*
+ * coresight_pmu_set_event_period: Set the period for the counter.
+ *
+ * To handle cases of extreme interrupt latency, we program
+ * the counter with half of the max count for the counters.
+ */
+static void coresight_pmu_set_event_period(struct perf_event *event)
+{
+ struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
+ u64 val = GENMASK_ULL(pmcfgr_size(coresight_pmu), 0) >> 1;
+
+ local64_set(&event->hw.prev_count, val);
+ coresight_pmu_write_counter(event, val);
+}
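The start value chosen above can be checked in isolation. This sketch assumes `pmcfgr_size()` yields the counter's most-significant-bit index, so `GENMASK_ULL(msb, 0) >> 1` is half the maximum count of an (msb + 1)-bit counter:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Half the maximum count of an (msb + 1)-bit counter: the programmed
 * period leaves ample headroom before overflow even under extreme
 * interrupt latency.
 */
static uint64_t half_max_count(unsigned int msb)
{
	uint64_t mask = (msb >= 63) ? ~0ULL : ((1ULL << (msb + 1)) - 1);

	return mask >> 1;
}
```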
+
+static void coresight_pmu_enable_counter(struct coresight_pmu *coresight_pmu,
+ int idx)
+{
+ u32 reg_id, reg_bit, inten_off, cnten_off;
+
+ reg_id = CORESIGHT_IDX_TO_SET_CLR_REG_ID(idx);
+ reg_bit = CORESIGHT_IDX_TO_SET_CLR_REG_BIT(idx);
+
+ inten_off = PMINTENSET + (4 * reg_id);
+ cnten_off = PMCNTENSET + (4 * reg_id);
+
+ writel(BIT(reg_bit), coresight_pmu->base0 + inten_off);
+ writel(BIT(reg_bit), coresight_pmu->base0 + cnten_off);
+}
+
+static void coresight_pmu_disable_counter(struct coresight_pmu *coresight_pmu,
+ int idx)
+{
+ u32 reg_id, reg_bit, inten_off, cnten_off;
+
+ reg_id = CORESIGHT_IDX_TO_SET_CLR_REG_ID(idx);
+ reg_bit = CORESIGHT_IDX_TO_SET_CLR_REG_BIT(idx);
+
+ inten_off = PMINTENCLR + (4 * reg_id);
+ cnten_off = PMCNTENCLR + (4 * reg_id);
+
+ writel(BIT(reg_bit), coresight_pmu->base0 + cnten_off);
+ writel(BIT(reg_bit), coresight_pmu->base0 + inten_off);
+}
+
+static void coresight_pmu_event_update(struct perf_event *event)
+{
+ struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
+ struct hw_perf_event *hwc = &event->hw;
+ u64 delta, prev, now;
+
+ do {
+ prev = local64_read(&hwc->prev_count);
+ now = coresight_pmu_read_counter(event);
+ } while (local64_cmpxchg(&hwc->prev_count, prev, now) != prev);
+
+ delta = (now - prev) & GENMASK_ULL(pmcfgr_size(coresight_pmu), 0);
+ local64_add(delta, &event->count);
+}
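The delta computation above relies on unsigned subtraction plus a width mask to stay correct across counter wraparound. A standalone model (the `msb` parameter plays the role of `pmcfgr_size()` here):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Elapsed count between two reads of an (msb + 1)-bit counter.
 * Unsigned subtraction followed by the width mask handles the case
 * where the hardware counter wrapped between 'prev' and 'now'.
 */
static uint64_t counter_delta(uint64_t now, uint64_t prev, unsigned int msb)
{
	uint64_t mask = (msb >= 63) ? ~0ULL : ((1ULL << (msb + 1)) - 1);

	return (now - prev) & mask;
}
```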
+
+static inline void coresight_pmu_set_event(struct coresight_pmu *coresight_pmu,
+ struct hw_perf_event *hwc)
+{
+ u32 offset = PMEVTYPER + (4 * hwc->idx);
+
+ writel(hwc->config_base, coresight_pmu->base0 + offset);
+}
+
+static inline void
+coresight_pmu_set_ev_filter(struct coresight_pmu *coresight_pmu,
+ struct hw_perf_event *hwc, u32 filter)
+{
+ u32 offset = PMEVFILTR + (4 * hwc->idx);
+
+ writel(filter, coresight_pmu->base0 + offset);
+}
+
+static inline void
+coresight_pmu_set_cc_filter(struct coresight_pmu *coresight_pmu, u32 filter)
+{
+ u32 offset = PMCCFILTR;
+
+ writel(filter, coresight_pmu->base0 + offset);
+}
+
+static void coresight_pmu_start(struct perf_event *event, int pmu_flags)
+{
+ struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
+ struct hw_perf_event *hwc = &event->hw;
+ u32 filter;
+
+ /* We always reprogram the counter */
+ if (pmu_flags & PERF_EF_RELOAD)
+ WARN_ON(!(hwc->state & PERF_HES_UPTODATE));
+
+ coresight_pmu_set_event_period(event);
+
+ filter = coresight_pmu->impl.ops->event_filter(event);
+
+ if (event->hw.extra_reg.idx == coresight_pmu->cc_logical_idx) {
+ coresight_pmu_set_cc_filter(coresight_pmu, filter);
+ } else {
+ coresight_pmu_set_event(coresight_pmu, hwc);
+ coresight_pmu_set_ev_filter(coresight_pmu, hwc, filter);
+ }
+
+ hwc->state = 0;
+
+ coresight_pmu_enable_counter(coresight_pmu, hwc->idx);
+}
+
+static void coresight_pmu_stop(struct perf_event *event, int pmu_flags)
+{
+ struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
+ struct hw_perf_event *hwc = &event->hw;
+
+ if (hwc->state & PERF_HES_STOPPED)
+ return;
+
+ coresight_pmu_disable_counter(coresight_pmu, hwc->idx);
+ coresight_pmu_event_update(event);
+
+ hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE;
+}
+
+static inline u32 to_phys_idx(struct coresight_pmu *coresight_pmu, u32 idx)
+{
+ return (idx == coresight_pmu->cc_logical_idx) ?
+ CORESIGHT_PMU_IDX_CCNTR : idx;
+}
+
+static int coresight_pmu_add(struct perf_event *event, int flags)
+{
+ struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
+ struct coresight_pmu_hw_events *hw_events = &coresight_pmu->hw_events;
+ struct hw_perf_event *hwc = &event->hw;
+ int idx;
+
+ if (WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(),
+ &coresight_pmu->associated_cpus)))
+ return -ENOENT;
+
+ idx = coresight_pmu_get_event_idx(hw_events, event);
+ if (idx < 0)
+ return idx;
+
+ hw_events->events[idx] = event;
+ hwc->idx = to_phys_idx(coresight_pmu, idx);
+ hwc->extra_reg.idx = idx;
+ hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
+
+ if (flags & PERF_EF_START)
+ coresight_pmu_start(event, PERF_EF_RELOAD);
+
+ /* Propagate changes to the userspace mapping. */
+ perf_event_update_userpage(event);
+
+ return 0;
+}
+
+static void coresight_pmu_del(struct perf_event *event, int flags)
+{
+ struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
+ struct coresight_pmu_hw_events *hw_events = &coresight_pmu->hw_events;
+ struct hw_perf_event *hwc = &event->hw;
+ int idx = hwc->extra_reg.idx;
+
+ coresight_pmu_stop(event, PERF_EF_UPDATE);
+
+ hw_events->events[idx] = NULL;
+
+ clear_bit(idx, hw_events->used_ctrs);
+
+ perf_event_update_userpage(event);
+}
+
+static void coresight_pmu_read(struct perf_event *event)
+{
+ coresight_pmu_event_update(event);
+}
+
+static int coresight_pmu_alloc(struct platform_device *pdev,
+ struct coresight_pmu **coresight_pmu)
+{
+ struct acpi_apmt_node *apmt_node;
+ struct device *dev;
+ struct coresight_pmu *pmu;
+
+ dev = &pdev->dev;
+ apmt_node = *(struct acpi_apmt_node **)dev_get_platdata(dev);
+	if (!apmt_node) {
+		dev_err(dev, "failed to get APMT node\n");
+		return -ENODEV;
+	}
+
+ pmu = devm_kzalloc(dev, sizeof(*pmu), GFP_KERNEL);
+ if (!pmu)
+ return -ENOMEM;
+
+ *coresight_pmu = pmu;
+
+ pmu->dev = dev;
+ pmu->apmt_node = apmt_node;
+
+	platform_set_drvdata(pdev, pmu);
+
+ return 0;
+}
+
+static int coresight_pmu_init_mmio(struct coresight_pmu *coresight_pmu)
+{
+ struct device *dev;
+ struct platform_device *pdev;
+ struct acpi_apmt_node *apmt_node;
+
+ dev = coresight_pmu->dev;
+ pdev = to_platform_device(dev);
+ apmt_node = coresight_pmu->apmt_node;
+
+ /* Base address for page 0. */
+ coresight_pmu->base0 = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(coresight_pmu->base0)) {
+ dev_err(dev, "ioremap failed for page-0 resource\n");
+ return PTR_ERR(coresight_pmu->base0);
+ }
+
+ /* Base address for page 1 if supported. Otherwise point it to page 0. */
+ coresight_pmu->base1 = coresight_pmu->base0;
+ if (CHECK_APMT_FLAG(apmt_node->flags, DUAL_PAGE, SUPP)) {
+ coresight_pmu->base1 = devm_platform_ioremap_resource(pdev, 1);
+ if (IS_ERR(coresight_pmu->base1)) {
+ dev_err(dev, "ioremap failed for page-1 resource\n");
+ return PTR_ERR(coresight_pmu->base1);
+ }
+ }
+
+ coresight_pmu->pmcfgr = readl(coresight_pmu->base0 + PMCFGR);
+
+ coresight_pmu->num_logical_counters = pmcfgr_n(coresight_pmu) + 1;
+
+ coresight_pmu->cc_logical_idx = CORESIGHT_PMU_MAX_HW_CNTRS;
+
+ if (support_cc(coresight_pmu)) {
+ /*
+ * The last logical counter is mapped to cycle counter if
+ * there is a gap between regular and cycle counter. Otherwise,
+ * logical and physical have 1-to-1 mapping.
+ */
+ coresight_pmu->cc_logical_idx =
+ (coresight_pmu->num_logical_counters <=
+ CORESIGHT_PMU_IDX_CCNTR) ?
+ coresight_pmu->num_logical_counters - 1 :
+ CORESIGHT_PMU_IDX_CCNTR;
+ }
+
+ coresight_pmu->num_set_clr_reg =
+ DIV_ROUND_UP(coresight_pmu->num_logical_counters,
+ CORESIGHT_SET_CLR_REG_COUNTER_NUM);
+
+ coresight_pmu->hw_events.events =
+ devm_kzalloc(dev,
+ sizeof(*coresight_pmu->hw_events.events) *
+ coresight_pmu->num_logical_counters,
+ GFP_KERNEL);
+
+ if (!coresight_pmu->hw_events.events)
+ return -ENOMEM;
+
+ return 0;
+}
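The cycle-counter mapping selected in coresight_pmu_init_mmio() can be exercised on its own. This sketch mirrors the logic with CORESIGHT_PMU_IDX_CCNTR hard-coded to 31, per the header later in this patch:

```c
#include <assert.h>

#define IDX_CCNTR 31 /* mirrors CORESIGHT_PMU_IDX_CCNTR */

/*
 * If there is a gap between the last regular counter and the fixed
 * cycle counter slot at counter[31], the cycle counter takes the last
 * logical index; otherwise logical and physical indices match 1-to-1.
 */
static int cc_logical_idx(unsigned int num_logical_counters)
{
	return (num_logical_counters <= IDX_CCNTR) ?
		       (int)num_logical_counters - 1 : IDX_CCNTR;
}
```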
+
+static inline int
+coresight_pmu_get_reset_overflow(struct coresight_pmu *coresight_pmu,
+ u32 *pmovs)
+{
+ int i;
+ u32 pmovclr_offset = PMOVSCLR;
+ u32 has_overflowed = 0;
+
+ for (i = 0; i < coresight_pmu->num_set_clr_reg; ++i) {
+ pmovs[i] = readl(coresight_pmu->base1 + pmovclr_offset);
+ has_overflowed |= pmovs[i];
+ writel(pmovs[i], coresight_pmu->base1 + pmovclr_offset);
+ pmovclr_offset += sizeof(u32);
+ }
+
+ return has_overflowed != 0;
+}
+
+static irqreturn_t coresight_pmu_handle_irq(int irq_num, void *dev)
+{
+ int idx, has_overflowed;
+ struct perf_event *event;
+ struct coresight_pmu *coresight_pmu = dev;
+ u32 pmovs[CORESIGHT_SET_CLR_REG_MAX_NUM] = { 0 };
+ bool handled = false;
+
+ coresight_pmu_stop_counters(coresight_pmu);
+
+ has_overflowed = coresight_pmu_get_reset_overflow(coresight_pmu, pmovs);
+ if (!has_overflowed)
+ goto done;
+
+ for_each_set_bit(idx, coresight_pmu->hw_events.used_ctrs,
+ coresight_pmu->num_logical_counters) {
+ event = coresight_pmu->hw_events.events[idx];
+
+ if (!event)
+ continue;
+
+ if (!test_bit(event->hw.idx, (unsigned long *)pmovs))
+ continue;
+
+ coresight_pmu_event_update(event);
+ coresight_pmu_set_event_period(event);
+
+ handled = true;
+ }
+
+done:
+ coresight_pmu_start_counters(coresight_pmu);
+ return IRQ_RETVAL(handled);
+}
+
+static int coresight_pmu_request_irq(struct coresight_pmu *coresight_pmu)
+{
+ int irq, ret;
+ struct device *dev;
+ struct platform_device *pdev;
+ struct acpi_apmt_node *apmt_node;
+
+ dev = coresight_pmu->dev;
+ pdev = to_platform_device(dev);
+ apmt_node = coresight_pmu->apmt_node;
+
+ /* Skip IRQ request if the PMU does not support overflow interrupt. */
+ if (apmt_node->ovflw_irq == 0)
+ return 0;
+
+ irq = platform_get_irq(pdev, 0);
+ if (irq < 0)
+ return irq;
+
+ ret = devm_request_irq(dev, irq, coresight_pmu_handle_irq,
+ IRQF_NOBALANCING | IRQF_NO_THREAD, dev_name(dev),
+ coresight_pmu);
+ if (ret) {
+ dev_err(dev, "Could not request IRQ %d\n", irq);
+ return ret;
+ }
+
+ coresight_pmu->irq = irq;
+
+ return 0;
+}
+
+static inline int coresight_pmu_find_cpu_container(int cpu, u32 container_uid)
+{
+ u32 acpi_uid;
+ struct device *cpu_dev = get_cpu_device(cpu);
+ struct acpi_device *acpi_dev = ACPI_COMPANION(cpu_dev);
+
+ if (!cpu_dev)
+ return -ENODEV;
+
+ while (acpi_dev) {
+ if (!strcmp(acpi_device_hid(acpi_dev),
+ ACPI_PROCESSOR_CONTAINER_HID) &&
+ !kstrtouint(acpi_device_uid(acpi_dev), 0, &acpi_uid) &&
+ acpi_uid == container_uid)
+ return 0;
+
+ acpi_dev = acpi_dev->parent;
+ }
+
+ return -ENODEV;
+}
+
+static int coresight_pmu_get_cpus(struct coresight_pmu *coresight_pmu)
+{
+ struct acpi_apmt_node *apmt_node;
+ int affinity_flag;
+ int cpu;
+
+ apmt_node = coresight_pmu->apmt_node;
+ affinity_flag = apmt_node->flags & ACPI_APMT_FLAGS_AFFINITY;
+
+ if (affinity_flag == ACPI_APMT_FLAGS_AFFINITY_PROC) {
+ for_each_possible_cpu(cpu) {
+ if (apmt_node->proc_affinity ==
+ get_acpi_id_for_cpu(cpu)) {
+ cpumask_set_cpu(
+ cpu, &coresight_pmu->associated_cpus);
+ break;
+ }
+ }
+ } else {
+ for_each_possible_cpu(cpu) {
+ if (coresight_pmu_find_cpu_container(
+ cpu, apmt_node->proc_affinity))
+ continue;
+
+ cpumask_set_cpu(cpu, &coresight_pmu->associated_cpus);
+ }
+ }
+
+ return 0;
+}
+
+static int coresight_pmu_register_pmu(struct coresight_pmu *coresight_pmu)
+{
+ int ret;
+ struct attribute_group **attr_groups;
+
+ attr_groups = coresight_pmu_alloc_attr_group(coresight_pmu);
+	if (!attr_groups)
+		return -ENOMEM;
+
+ ret = cpuhp_state_add_instance(coresight_pmu_cpuhp_state,
+ &coresight_pmu->cpuhp_node);
+ if (ret)
+ return ret;
+
+ coresight_pmu->pmu = (struct pmu){
+ .task_ctx_nr = perf_invalid_context,
+ .module = THIS_MODULE,
+ .pmu_enable = coresight_pmu_enable,
+ .pmu_disable = coresight_pmu_disable,
+ .event_init = coresight_pmu_event_init,
+ .add = coresight_pmu_add,
+ .del = coresight_pmu_del,
+ .start = coresight_pmu_start,
+ .stop = coresight_pmu_stop,
+ .read = coresight_pmu_read,
+ .attr_groups = (const struct attribute_group **)attr_groups,
+ .capabilities = PERF_PMU_CAP_NO_EXCLUDE,
+ };
+
+ /* Hardware counter init */
+ coresight_pmu_stop_counters(coresight_pmu);
+ coresight_pmu_reset_counters(coresight_pmu);
+
+ ret = perf_pmu_register(&coresight_pmu->pmu, coresight_pmu->name, -1);
+ if (ret) {
+ cpuhp_state_remove_instance(coresight_pmu_cpuhp_state,
+ &coresight_pmu->cpuhp_node);
+ }
+
+ return ret;
+}
+
+static int coresight_pmu_device_probe(struct platform_device *pdev)
+{
+ int ret;
+ struct coresight_pmu *coresight_pmu;
+
+ ret = coresight_pmu_alloc(pdev, &coresight_pmu);
+ if (ret)
+ return ret;
+
+ ret = coresight_pmu_init_mmio(coresight_pmu);
+ if (ret)
+ return ret;
+
+ ret = coresight_pmu_request_irq(coresight_pmu);
+ if (ret)
+ return ret;
+
+ ret = coresight_pmu_get_cpus(coresight_pmu);
+ if (ret)
+ return ret;
+
+ ret = coresight_pmu_register_pmu(coresight_pmu);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+static int coresight_pmu_device_remove(struct platform_device *pdev)
+{
+ struct coresight_pmu *coresight_pmu = platform_get_drvdata(pdev);
+
+ perf_pmu_unregister(&coresight_pmu->pmu);
+ cpuhp_state_remove_instance(coresight_pmu_cpuhp_state,
+ &coresight_pmu->cpuhp_node);
+
+ return 0;
+}
+
+static struct platform_driver coresight_pmu_driver = {
+ .driver = {
+ .name = "arm-system-pmu",
+ .suppress_bind_attrs = true,
+ },
+ .probe = coresight_pmu_device_probe,
+ .remove = coresight_pmu_device_remove,
+};
+
+static void coresight_pmu_set_active_cpu(int cpu,
+ struct coresight_pmu *coresight_pmu)
+{
+ cpumask_set_cpu(cpu, &coresight_pmu->active_cpu);
+ WARN_ON(irq_set_affinity(coresight_pmu->irq,
+ &coresight_pmu->active_cpu));
+}
+
+static int coresight_pmu_cpu_online(unsigned int cpu, struct hlist_node *node)
+{
+ struct coresight_pmu *coresight_pmu =
+ hlist_entry_safe(node, struct coresight_pmu, cpuhp_node);
+
+ if (!cpumask_test_cpu(cpu, &coresight_pmu->associated_cpus))
+ return 0;
+
+ /* If the PMU is already managed, there is nothing to do */
+ if (!cpumask_empty(&coresight_pmu->active_cpu))
+ return 0;
+
+ /* Use this CPU for event counting */
+ coresight_pmu_set_active_cpu(cpu, coresight_pmu);
+
+ return 0;
+}
+
+static int coresight_pmu_cpu_teardown(unsigned int cpu, struct hlist_node *node)
+{
+ int dst;
+ struct cpumask online_supported;
+
+ struct coresight_pmu *coresight_pmu =
+ hlist_entry_safe(node, struct coresight_pmu, cpuhp_node);
+
+ /* Nothing to do if this CPU doesn't own the PMU */
+ if (!cpumask_test_and_clear_cpu(cpu, &coresight_pmu->active_cpu))
+ return 0;
+
+ /* Choose a new CPU to migrate ownership of the PMU to */
+ cpumask_and(&online_supported, &coresight_pmu->associated_cpus,
+ cpu_online_mask);
+ dst = cpumask_any_but(&online_supported, cpu);
+ if (dst >= nr_cpu_ids)
+ return 0;
+
+ /* Use this CPU for event counting */
+ perf_pmu_migrate_context(&coresight_pmu->pmu, cpu, dst);
+ coresight_pmu_set_active_cpu(dst, coresight_pmu);
+
+ return 0;
+}
+
+static int __init coresight_pmu_init(void)
+{
+ int ret;
+
+ ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, PMUNAME,
+ coresight_pmu_cpu_online,
+ coresight_pmu_cpu_teardown);
+ if (ret < 0)
+ return ret;
+ coresight_pmu_cpuhp_state = ret;
+ return platform_driver_register(&coresight_pmu_driver);
+}
+
+static void __exit coresight_pmu_exit(void)
+{
+ platform_driver_unregister(&coresight_pmu_driver);
+ cpuhp_remove_multi_state(coresight_pmu_cpuhp_state);
+}
+
+module_init(coresight_pmu_init);
+module_exit(coresight_pmu_exit);
diff --git a/drivers/perf/coresight_pmu/arm_coresight_pmu.h b/drivers/perf/coresight_pmu/arm_coresight_pmu.h
new file mode 100644
index 000000000000..88fb4cd3dafa
--- /dev/null
+++ b/drivers/perf/coresight_pmu/arm_coresight_pmu.h
@@ -0,0 +1,177 @@
+/* SPDX-License-Identifier: GPL-2.0
+ *
+ * ARM CoreSight PMU driver.
+ * Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
+ *
+ */
+
+#ifndef __ARM_CORESIGHT_PMU_H__
+#define __ARM_CORESIGHT_PMU_H__
+
+#include <linux/acpi.h>
+#include <linux/bitfield.h>
+#include <linux/cpumask.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/perf_event.h>
+#include <linux/platform_device.h>
+#include <linux/types.h>
+
+#define to_coresight_pmu(p) (container_of(p, struct coresight_pmu, pmu))
+
+#define CORESIGHT_EXT_ATTR(_name, _func, _config) \
+ (&((struct dev_ext_attribute[]){ \
+ { \
+ .attr = __ATTR(_name, 0444, _func, NULL), \
+ .var = (void *)_config \
+ } \
+ })[0].attr.attr)
+
+#define CORESIGHT_FORMAT_ATTR(_name, _config) \
+ CORESIGHT_EXT_ATTR(_name, coresight_pmu_sysfs_format_show, \
+ (char *)_config)
+
+#define CORESIGHT_EVENT_ATTR(_name, _config) \
+ PMU_EVENT_ATTR_ID(_name, coresight_pmu_sysfs_event_show, _config)
+
+/* Default event id mask */
+#define CORESIGHT_EVENT_MASK 0xFFFFFFFFULL
+
+/* Default filter value mask */
+#define CORESIGHT_FILTER_MASK 0xFFFFFFFFULL
+
+/* Default event format */
+#define CORESIGHT_FORMAT_EVENT_ATTR CORESIGHT_FORMAT_ATTR(event, "config:0-32")
+
+/* Default filter format */
+#define CORESIGHT_FORMAT_FILTER_ATTR \
+ CORESIGHT_FORMAT_ATTR(filter, "config1:0-31")
+
+/*
+ * This is the default event number for cycle count, if supported, since the
+ * ARM Coresight PMU specification does not define a standard event code
+ * for cycle count.
+ */
+#define CORESIGHT_PMU_EVT_CYCLES_DEFAULT (0x1ULL << 32)
+
+/*
+ * The ARM Coresight PMU supports up to 256 event counters.
+ * If the counters are larger than 32 bits, the PMU includes at
+ * most 128 counters.
+ */
+#define CORESIGHT_PMU_MAX_HW_CNTRS 256
+
+/* The cycle counter, if implemented, is located at counter[31]. */
+#define CORESIGHT_PMU_IDX_CCNTR 31
+
+struct coresight_pmu;
+
+/* This tracks the events assigned to each counter in the PMU. */
+struct coresight_pmu_hw_events {
+ /* The events that are active on the PMU for a given logical index. */
+ struct perf_event **events;
+
+ /*
+ * Each bit indicates a logical counter is being used (or not) for an
+ * event. If cycle counter is supported and there is a gap between
+ * regular and cycle counter, the last logical counter is mapped to
+ * cycle counter. Otherwise, logical and physical have 1-to-1 mapping.
+ */
+ DECLARE_BITMAP(used_ctrs, CORESIGHT_PMU_MAX_HW_CNTRS);
+};
+
+/* Contains ops to query vendor/implementer specific attribute. */
+struct coresight_pmu_impl_ops {
+ /* Get event attributes */
+ struct attribute **(*get_event_attrs)(
+ const struct coresight_pmu *coresight_pmu);
+ /* Get format attributes */
+ struct attribute **(*get_format_attrs)(
+ const struct coresight_pmu *coresight_pmu);
+ /* Get string identifier */
+ const char *(*get_identifier)(const struct coresight_pmu *coresight_pmu);
+ /* Get PMU name to register to core perf */
+ const char *(*get_name)(const struct coresight_pmu *coresight_pmu);
+ /* Check if the event corresponds to cycle count event */
+ bool (*is_cc_event)(const struct perf_event *event);
+ /* Decode event type/id from configs */
+ u32 (*event_type)(const struct perf_event *event);
+ /* Decode filter value from configs */
+ u32 (*event_filter)(const struct perf_event *event);
+ /* Hide/show unsupported events */
+ umode_t (*event_attr_is_visible)(struct kobject *kobj,
+ struct attribute *attr, int unused);
+};
+
+/* Vendor/implementer descriptor. */
+struct coresight_pmu_impl {
+ u32 pmiidr;
+ const struct coresight_pmu_impl_ops *ops;
+};
+
+/* Coresight PMU descriptor. */
+struct coresight_pmu {
+ struct pmu pmu;
+ struct device *dev;
+ struct acpi_apmt_node *apmt_node;
+ const char *name;
+ const char *identifier;
+ void __iomem *base0;
+ void __iomem *base1;
+ int irq;
+ cpumask_t associated_cpus;
+ cpumask_t active_cpu;
+ struct hlist_node cpuhp_node;
+
+ u32 pmcfgr;
+ u32 num_logical_counters;
+ u32 num_set_clr_reg;
+ int cc_logical_idx;
+
+ struct coresight_pmu_hw_events hw_events;
+
+ struct coresight_pmu_impl impl;
+};
+
+/* Default function to show event attribute in sysfs. */
+ssize_t coresight_pmu_sysfs_event_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf);
+
+/* Default function to show format attribute in sysfs. */
+ssize_t coresight_pmu_sysfs_format_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf);
+
+/* Get the default Coresight PMU event attributes. */
+struct attribute **
+coresight_pmu_get_event_attrs(const struct coresight_pmu *coresight_pmu);
+
+/* Get the default Coresight PMU format attributes. */
+struct attribute **
+coresight_pmu_get_format_attrs(const struct coresight_pmu *coresight_pmu);
+
+/* Get the default Coresight PMU device identifier. */
+const char *
+coresight_pmu_get_identifier(const struct coresight_pmu *coresight_pmu);
+
+/* Get the default PMU name. */
+const char *
+coresight_pmu_get_name(const struct coresight_pmu *coresight_pmu);
+
+/* Default function to query if an event is a cycle counter event. */
+bool coresight_pmu_is_cc_event(const struct perf_event *event);
+
+/* Default function to query the type/id of an event. */
+u32 coresight_pmu_event_type(const struct perf_event *event);
+
+/* Default function to query the filter value of an event. */
+u32 coresight_pmu_event_filter(const struct perf_event *event);
+
+/* Default function that hides (default) cycle event id if not supported. */
+umode_t coresight_pmu_event_attr_is_visible(struct kobject *kobj,
+ struct attribute *attr, int unused);
+
+#endif /* __ARM_CORESIGHT_PMU_H__ */
--
2.17.1
On Tue, Jun 21, 2022 at 12:50:35AM -0500, Besar Wicaksono wrote:
> Add support for NVIDIA System Cache Fabric (SCF) and Memory Control
> Fabric (MCF) PMU attributes for CoreSight PMU implementation in
> NVIDIA devices.
>
> Signed-off-by: Besar Wicaksono <[email protected]>
> ---
> drivers/perf/coresight_pmu/Makefile | 3 +-
> .../perf/coresight_pmu/arm_coresight_pmu.c | 4 +
> .../coresight_pmu/arm_coresight_pmu_nvidia.c | 312 ++++++++++++++++++
> .../coresight_pmu/arm_coresight_pmu_nvidia.h | 17 +
> 4 files changed, 335 insertions(+), 1 deletion(-)
> create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.c
> create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.h
Please can you include some documentation along with this driver? See
Documentation/admin-guide/perf/ for examples of other PMUs.
Will
On Tue, Jun 21, 2022 at 12:50:34AM -0500, Besar Wicaksono wrote:
> Add support for ARM CoreSight PMU driver framework and interfaces.
> The driver provides generic implementation to operate uncore PMU based
> on ARM CoreSight PMU architecture. The driver also provides interface
> to get vendor/implementation specific information, for example event
> attributes and formating.
>
> The specification used in this implementation can be found below:
> * ACPI Arm Performance Monitoring Unit table:
> https://developer.arm.com/documentation/den0117/latest
> * ARM Coresight PMU architecture:
> https://developer.arm.com/documentation/ihi0091/latest
>
> Signed-off-by: Besar Wicaksono <[email protected]>
> ---
> arch/arm64/configs/defconfig | 1 +
> drivers/perf/Kconfig | 2 +
> drivers/perf/Makefile | 1 +
> drivers/perf/coresight_pmu/Kconfig | 11 +
> drivers/perf/coresight_pmu/Makefile | 6 +
> .../perf/coresight_pmu/arm_coresight_pmu.c | 1312 +++++++++++++++++
> .../perf/coresight_pmu/arm_coresight_pmu.h | 177 +++
> 7 files changed, 1510 insertions(+)
> create mode 100644 drivers/perf/coresight_pmu/Kconfig
> create mode 100644 drivers/perf/coresight_pmu/Makefile
> create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu.c
> create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu.h
>
> diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
> index 7d1105343bc2..22184f8883da 100644
> --- a/arch/arm64/configs/defconfig
> +++ b/arch/arm64/configs/defconfig
> @@ -1212,6 +1212,7 @@ CONFIG_PHY_UNIPHIER_USB3=y
> CONFIG_PHY_TEGRA_XUSB=y
> CONFIG_PHY_AM654_SERDES=m
> CONFIG_PHY_J721E_WIZ=m
> +CONFIG_ARM_CORESIGHT_PMU=y
> CONFIG_ARM_SMMU_V3_PMU=m
> CONFIG_FSL_IMX8_DDR_PMU=m
> CONFIG_QCOM_L2_PMU=y
> diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig
> index 1e2d69453771..c4e7cd5b4162 100644
> --- a/drivers/perf/Kconfig
> +++ b/drivers/perf/Kconfig
> @@ -192,4 +192,6 @@ config MARVELL_CN10K_DDR_PMU
> Enable perf support for Marvell DDR Performance monitoring
> event on CN10K platform.
>
> +source "drivers/perf/coresight_pmu/Kconfig"
> +
> endmenu
> diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile
> index 57a279c61df5..4126a04b5583 100644
> --- a/drivers/perf/Makefile
> +++ b/drivers/perf/Makefile
> @@ -20,3 +20,4 @@ obj-$(CONFIG_ARM_DMC620_PMU) += arm_dmc620_pmu.o
> obj-$(CONFIG_MARVELL_CN10K_TAD_PMU) += marvell_cn10k_tad_pmu.o
> obj-$(CONFIG_MARVELL_CN10K_DDR_PMU) += marvell_cn10k_ddr_pmu.o
> obj-$(CONFIG_APPLE_M1_CPU_PMU) += apple_m1_cpu_pmu.o
> +obj-$(CONFIG_ARM_CORESIGHT_PMU) += coresight_pmu/
> diff --git a/drivers/perf/coresight_pmu/Kconfig b/drivers/perf/coresight_pmu/Kconfig
> new file mode 100644
> index 000000000000..89174f54c7be
> --- /dev/null
> +++ b/drivers/perf/coresight_pmu/Kconfig
> @@ -0,0 +1,11 @@
> +# SPDX-License-Identifier: GPL-2.0
> +#
> +# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
> +
> +config ARM_CORESIGHT_PMU
> + tristate "ARM Coresight PMU"
> + depends on ACPI
> + depends on ACPI_APMT || COMPILE_TEST
ACPI_APMT doesn't exist in my tree. What's missing here?
Will
> -----Original Message-----
> From: Will Deacon <[email protected]>
> Sent: Monday, June 27, 2022 5:37 PM
> To: Besar Wicaksono <[email protected]>
> Cc: [email protected]; [email protected];
> [email protected]; [email protected]; linux-arm-
> [email protected]; [email protected]; linux-
> [email protected]; [email protected];
> [email protected]; [email protected]; Thierry Reding
> <[email protected]>; Jonathan Hunter <[email protected]>; Vikram
> Sethi <[email protected]>; [email protected];
> [email protected]; [email protected]
> Subject: Re: [RESEND PATCH v3 1/2] perf: coresight_pmu: Add support for
> ARM CoreSight PMU driver
>
> External email: Use caution opening links or attachments
>
>
> On Tue, Jun 21, 2022 at 12:50:34AM -0500, Besar Wicaksono wrote:
> > Add support for ARM CoreSight PMU driver framework and interfaces.
> > The driver provides generic implementation to operate uncore PMU based
> > on ARM CoreSight PMU architecture. The driver also provides interface
> > to get vendor/implementation specific information, for example event
> > attributes and formatting.
> >
> > The specification used in this implementation can be found below:
> > * ACPI Arm Performance Monitoring Unit table:
> > https://developer.arm.com/documentation/den0117/latest
> > * ARM Coresight PMU architecture:
> > https://developer.arm.com/documentation/ihi0091/latest
> >
> > Signed-off-by: Besar Wicaksono <[email protected]>
> > ---
> > arch/arm64/configs/defconfig | 1 +
> > drivers/perf/Kconfig | 2 +
> > drivers/perf/Makefile | 1 +
> > drivers/perf/coresight_pmu/Kconfig | 11 +
> > drivers/perf/coresight_pmu/Makefile | 6 +
> > .../perf/coresight_pmu/arm_coresight_pmu.c | 1312
> +++++++++++++++++
> > .../perf/coresight_pmu/arm_coresight_pmu.h | 177 +++
> > 7 files changed, 1510 insertions(+)
> > create mode 100644 drivers/perf/coresight_pmu/Kconfig
> > create mode 100644 drivers/perf/coresight_pmu/Makefile
> > create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu.c
> > create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu.h
> >
> > diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
> > index 7d1105343bc2..22184f8883da 100644
> > --- a/arch/arm64/configs/defconfig
> > +++ b/arch/arm64/configs/defconfig
> > @@ -1212,6 +1212,7 @@ CONFIG_PHY_UNIPHIER_USB3=y
> > CONFIG_PHY_TEGRA_XUSB=y
> > CONFIG_PHY_AM654_SERDES=m
> > CONFIG_PHY_J721E_WIZ=m
> > +CONFIG_ARM_CORESIGHT_PMU=y
> > CONFIG_ARM_SMMU_V3_PMU=m
> > CONFIG_FSL_IMX8_DDR_PMU=m
> > CONFIG_QCOM_L2_PMU=y
> > diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig
> > index 1e2d69453771..c4e7cd5b4162 100644
> > --- a/drivers/perf/Kconfig
> > +++ b/drivers/perf/Kconfig
> > @@ -192,4 +192,6 @@ config MARVELL_CN10K_DDR_PMU
> > Enable perf support for Marvell DDR Performance monitoring
> > event on CN10K platform.
> >
> > +source "drivers/perf/coresight_pmu/Kconfig"
> > +
> > endmenu
> > diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile
> > index 57a279c61df5..4126a04b5583 100644
> > --- a/drivers/perf/Makefile
> > +++ b/drivers/perf/Makefile
> > @@ -20,3 +20,4 @@ obj-$(CONFIG_ARM_DMC620_PMU) +=
> arm_dmc620_pmu.o
> > obj-$(CONFIG_MARVELL_CN10K_TAD_PMU) +=
> marvell_cn10k_tad_pmu.o
> > obj-$(CONFIG_MARVELL_CN10K_DDR_PMU) +=
> marvell_cn10k_ddr_pmu.o
> > obj-$(CONFIG_APPLE_M1_CPU_PMU) += apple_m1_cpu_pmu.o
> > +obj-$(CONFIG_ARM_CORESIGHT_PMU) += coresight_pmu/
> > diff --git a/drivers/perf/coresight_pmu/Kconfig
> b/drivers/perf/coresight_pmu/Kconfig
> > new file mode 100644
> > index 000000000000..89174f54c7be
> > --- /dev/null
> > +++ b/drivers/perf/coresight_pmu/Kconfig
> > @@ -0,0 +1,11 @@
> > +# SPDX-License-Identifier: GPL-2.0
> > +#
> > +# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
> > +
> > +config ARM_CORESIGHT_PMU
> > + tristate "ARM Coresight PMU"
> > + depends on ACPI
> > + depends on ACPI_APMT || COMPILE_TEST
>
> ACPI_APMT doesn't exist in my tree. What's missing here?
>
> Will
It is missing the APMT support patch in: https://lkml.org/lkml/2022/4/19/1395
I will add this reference to the driver patch cover letter.
Regards.
Besar
> -----Original Message-----
> From: Will Deacon <[email protected]>
> Sent: Monday, June 27, 2022 5:38 PM
> To: Besar Wicaksono <[email protected]>
> Cc: [email protected]; [email protected];
> [email protected]; [email protected]; linux-arm-
> [email protected]; [email protected]; linux-
> [email protected]; [email protected];
> [email protected]; [email protected]; Thierry Reding
> <[email protected]>; Jonathan Hunter <[email protected]>; Vikram
> Sethi <[email protected]>; [email protected];
> [email protected]; [email protected]
> Subject: Re: [RESEND PATCH v3 2/2] perf: coresight_pmu: Add support for
> NVIDIA SCF and MCF attribute
>
> External email: Use caution opening links or attachments
>
>
> On Tue, Jun 21, 2022 at 12:50:35AM -0500, Besar Wicaksono wrote:
> > Add support for NVIDIA System Cache Fabric (SCF) and Memory Control
> > Fabric (MCF) PMU attributes for CoreSight PMU implementation in
> > NVIDIA devices.
> >
> > Signed-off-by: Besar Wicaksono <[email protected]>
> > ---
> > drivers/perf/coresight_pmu/Makefile | 3 +-
> > .../perf/coresight_pmu/arm_coresight_pmu.c | 4 +
> > .../coresight_pmu/arm_coresight_pmu_nvidia.c | 312
> ++++++++++++++++++
> > .../coresight_pmu/arm_coresight_pmu_nvidia.h | 17 +
> > 4 files changed, 335 insertions(+), 1 deletion(-)
> > create mode 100644
> drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.c
> > create mode 100644
> drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.h
>
> Please can you include some documentation along with this driver? See
> Documentation/admin-guide/perf/ for examples of other PMUs.
>
> Will
Thank you for the pointer.
I will include the documentation in the next revision.
Regards,
Besar
On 21/06/2022 06:50, Besar Wicaksono wrote:
> Add support for ARM CoreSight PMU driver framework and interfaces.
> The driver provides generic implementation to operate uncore PMU based
> on ARM CoreSight PMU architecture. The driver also provides interface
> to get vendor/implementation specific information, for example event
> attributes and formatting.
>
> The specification used in this implementation can be found below:
> * ACPI Arm Performance Monitoring Unit table:
> https://developer.arm.com/documentation/den0117/latest
> * ARM Coresight PMU architecture:
> https://developer.arm.com/documentation/ihi0091/latest
>
> Signed-off-by: Besar Wicaksono <[email protected]>
> ---
> arch/arm64/configs/defconfig | 1 +
> drivers/perf/Kconfig | 2 +
> drivers/perf/Makefile | 1 +
> drivers/perf/coresight_pmu/Kconfig | 11 +
> drivers/perf/coresight_pmu/Makefile | 6 +
> .../perf/coresight_pmu/arm_coresight_pmu.c | 1312 +++++++++++++++++
> .../perf/coresight_pmu/arm_coresight_pmu.h | 177 +++
> 7 files changed, 1510 insertions(+)
> create mode 100644 drivers/perf/coresight_pmu/Kconfig
> create mode 100644 drivers/perf/coresight_pmu/Makefile
> create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu.c
> create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu.h
>
> diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
> index 7d1105343bc2..22184f8883da 100644
> --- a/arch/arm64/configs/defconfig
> +++ b/arch/arm64/configs/defconfig
> @@ -1212,6 +1212,7 @@ CONFIG_PHY_UNIPHIER_USB3=y
> CONFIG_PHY_TEGRA_XUSB=y
> CONFIG_PHY_AM654_SERDES=m
> CONFIG_PHY_J721E_WIZ=m
> +CONFIG_ARM_CORESIGHT_PMU=y
> CONFIG_ARM_SMMU_V3_PMU=m
> CONFIG_FSL_IMX8_DDR_PMU=m
> CONFIG_QCOM_L2_PMU=y
> diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig
> index 1e2d69453771..c4e7cd5b4162 100644
> --- a/drivers/perf/Kconfig
> +++ b/drivers/perf/Kconfig
> @@ -192,4 +192,6 @@ config MARVELL_CN10K_DDR_PMU
> Enable perf support for Marvell DDR Performance monitoring
> event on CN10K platform.
>
> +source "drivers/perf/coresight_pmu/Kconfig"
> +
> endmenu
> diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile
> index 57a279c61df5..4126a04b5583 100644
> --- a/drivers/perf/Makefile
> +++ b/drivers/perf/Makefile
> @@ -20,3 +20,4 @@ obj-$(CONFIG_ARM_DMC620_PMU) += arm_dmc620_pmu.o
> obj-$(CONFIG_MARVELL_CN10K_TAD_PMU) += marvell_cn10k_tad_pmu.o
> obj-$(CONFIG_MARVELL_CN10K_DDR_PMU) += marvell_cn10k_ddr_pmu.o
> obj-$(CONFIG_APPLE_M1_CPU_PMU) += apple_m1_cpu_pmu.o
> +obj-$(CONFIG_ARM_CORESIGHT_PMU) += coresight_pmu/
> diff --git a/drivers/perf/coresight_pmu/Kconfig b/drivers/perf/coresight_pmu/Kconfig
> new file mode 100644
> index 000000000000..89174f54c7be
> --- /dev/null
> +++ b/drivers/perf/coresight_pmu/Kconfig
> @@ -0,0 +1,11 @@
> +# SPDX-License-Identifier: GPL-2.0
> +#
> +# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
> +
> +config ARM_CORESIGHT_PMU
> + tristate "ARM Coresight PMU"
> + depends on ACPI
> + depends on ACPI_APMT || COMPILE_TEST
> + help
> + Provides support for Performance Monitoring Unit (PMU) events based on
> + ARM CoreSight PMU architecture.
> \ No newline at end of file
> diff --git a/drivers/perf/coresight_pmu/Makefile b/drivers/perf/coresight_pmu/Makefile
> new file mode 100644
> index 000000000000..a2a7a5fbbc16
> --- /dev/null
> +++ b/drivers/perf/coresight_pmu/Makefile
> @@ -0,0 +1,6 @@
> +# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
> +#
> +# SPDX-License-Identifier: GPL-2.0
> +
> +obj-$(CONFIG_ARM_CORESIGHT_PMU) += \
> + arm_coresight_pmu.o
> diff --git a/drivers/perf/coresight_pmu/arm_coresight_pmu.c b/drivers/perf/coresight_pmu/arm_coresight_pmu.c
> new file mode 100644
> index 000000000000..ba52cc592b2d
> --- /dev/null
> +++ b/drivers/perf/coresight_pmu/arm_coresight_pmu.c
> @@ -0,0 +1,1312 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * ARM CoreSight PMU driver.
> + *
> + * This driver adds support for uncore PMU based on ARM CoreSight Performance
> + * Monitoring Unit Architecture. The PMU is accessible via MMIO registers and
> + * like other uncore PMUs, it does not support process specific events and
> + * cannot be used in sampling mode.
> + *
> + * This code is based on other uncore PMUs like ARM DSU PMU. It provides a
> + * generic implementation to operate the PMU according to CoreSight PMU
> + * architecture and ACPI ARM PMU table (APMT) documents below:
> + * - ARM CoreSight PMU architecture document number: ARM IHI 0091 A.a-00bet0.
> + * - APMT document number: ARM DEN0117.
> + * The description of the PMU, like the PMU device identification, available
> + * events, and configuration options, is vendor specific. The driver provides
> + * interface for vendor specific code to get this information. This allows the
> + * driver to be shared with PMU from different vendors.
> + *
> + * The CoreSight PMU devices can be named using implementor specific format, or
> + * with default naming format: arm_<apmt node type>_pmu_<numeric id>.
---
> + * The description of the device, like the identifier, supported events, and
> + * formats can be found in sysfs
> + * /sys/bus/event_source/devices/arm_<apmt node type>_pmu_<numeric id>
I don't think the above is needed. This is true for all PMU devices.
> + *
> + * The user should refer to the vendor technical documentation to get details
> + * about the supported events.
> + *
> + * Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
> + *
> + */
> +
> +#include <linux/acpi.h>
> +#include <linux/cacheinfo.h>
> +#include <linux/ctype.h>
> +#include <linux/interrupt.h>
> +#include <linux/io-64-nonatomic-lo-hi.h>
> +#include <linux/module.h>
> +#include <linux/mod_devicetable.h>
> +#include <linux/perf_event.h>
> +#include <linux/platform_device.h>
> +#include <acpi/processor.h>
> +
> +#include "arm_coresight_pmu.h"
> +
> +#define PMUNAME "arm_system_pmu"
> +
If we have decided to call this arm_system_pmu (which I am perfectly
happy with), could we please stick to that name for functions that we
export ?
e.g., s/coresight_pmu_sysfs_event_show/arm_system_pmu_event_show()/
> +#define CORESIGHT_CPUMASK_ATTR(_name, _config) \
> + CORESIGHT_EXT_ATTR(_name, coresight_pmu_cpumask_show, \
> + (unsigned long)_config)
> +
> +/*
> + * Register offsets based on CoreSight Performance Monitoring Unit Architecture
> + * Document number: ARM-ECM-0640169 00alp6
> + */
> +#define PMEVCNTR_LO 0x0
> +#define PMEVCNTR_HI 0x4
> +#define PMEVTYPER 0x400
> +#define PMCCFILTR 0x47C
> +#define PMEVFILTR 0xA00
> +#define PMCNTENSET 0xC00
> +#define PMCNTENCLR 0xC20
> +#define PMINTENSET 0xC40
> +#define PMINTENCLR 0xC60
> +#define PMOVSCLR 0xC80
> +#define PMOVSSET 0xCC0
> +#define PMCFGR 0xE00
> +#define PMCR 0xE04
> +#define PMIIDR 0xE08
> +
> +/* PMCFGR register field */
> +#define PMCFGR_NCG_SHIFT 28
> +#define PMCFGR_NCG_MASK 0xf
Please could we instead do :
PMCFGR_NCG GENMASK(31, 28)
and similarly for all the other fields and use :
FIELD_PREP(), FIELD_GET() helpers ?
> +#define PMCFGR_HDBG BIT(24)
> +#define PMCFGR_TRO BIT(23)
> +#define PMCFGR_SS BIT(22)
> +#define PMCFGR_FZO BIT(21)
> +#define PMCFGR_MSI BIT(20)
> +#define PMCFGR_UEN BIT(19)
> +#define PMCFGR_NA BIT(17)
> +#define PMCFGR_EX BIT(16)
> +#define PMCFGR_CCD BIT(15)
> +#define PMCFGR_CC BIT(14)
> +#define PMCFGR_SIZE_SHIFT 8
> +#define PMCFGR_SIZE_MASK 0x3f
> +#define PMCFGR_N_SHIFT 0
> +#define PMCFGR_N_MASK 0xff
> +
> +/* PMCR register field */
> +#define PMCR_TRO BIT(11)
> +#define PMCR_HDBG BIT(10)
> +#define PMCR_FZO BIT(9)
> +#define PMCR_NA BIT(8)
> +#define PMCR_DP BIT(5)
> +#define PMCR_X BIT(4)
> +#define PMCR_D BIT(3)
> +#define PMCR_C BIT(2)
> +#define PMCR_P BIT(1)
> +#define PMCR_E BIT(0)
> +
> +/* PMIIDR register field */
> +#define PMIIDR_IMPLEMENTER_MASK 0xFFF
> +#define PMIIDR_PRODUCTID_MASK 0xFFF
> +#define PMIIDR_PRODUCTID_SHIFT 20
> +
> +/* Each SET/CLR register supports up to 32 counters. */
> +#define CORESIGHT_SET_CLR_REG_COUNTER_NUM 32
> +#define CORESIGHT_SET_CLR_REG_COUNTER_SHIFT 5
minor nits:
Could we rename such that s/CORESIGHT/PMU ? Without the PMU, it could
be confused as a CORESIGHT architecture specific thing.
Also
#define CORESIGHT_SET_CLR_REG_COUNTER_NUM \
	(1 << CORESIGHT_SET_CLR_REG_COUNTER_SHIFT)
> +
> +/* The number of 32-bit SET/CLR register that can be supported. */
> +#define CORESIGHT_SET_CLR_REG_MAX_NUM ((PMCNTENCLR - PMCNTENSET) / sizeof(u32))
> +
> +static_assert((CORESIGHT_SET_CLR_REG_MAX_NUM *
> + CORESIGHT_SET_CLR_REG_COUNTER_NUM) >=
> + CORESIGHT_PMU_MAX_HW_CNTRS);
> +
> +/* Convert counter idx into SET/CLR register number. */
> +#define CORESIGHT_IDX_TO_SET_CLR_REG_ID(idx) \
> + (idx >> CORESIGHT_SET_CLR_REG_COUNTER_SHIFT)
> +
> +/* Convert counter idx into SET/CLR register bit. */
> +#define CORESIGHT_IDX_TO_SET_CLR_REG_BIT(idx) \
> + (idx & (CORESIGHT_SET_CLR_REG_COUNTER_NUM - 1))
super minor nit:
COUNTER_TO_SET_CLR_REG_IDX
COUNTER_TO_SET_CLR_REG_BIT ?
I don't see a need for prefixing them with CORESIGHT and making them
unnecessary longer.
> +
> +#define CORESIGHT_ACTIVE_CPU_MASK 0x0
> +#define CORESIGHT_ASSOCIATED_CPU_MASK 0x1
> +
> +
> +/* Check if field f in flags is set with value v */
> +#define CHECK_APMT_FLAG(flags, f, v) \
> + ((flags & (ACPI_APMT_FLAGS_ ## f)) == (ACPI_APMT_FLAGS_ ## f ## _ ## v))
> +
> +static unsigned long coresight_pmu_cpuhp_state;
> +
> +/*
> + * In CoreSight PMU architecture, all of the MMIO registers are 32-bit except
> + * counter register. The counter register can be implemented as 32-bit or 64-bit
> + * register depending on the value of PMCFGR.SIZE field. For 64-bit access,
> + * single-copy 64-bit atomic support is implementation defined. APMT node flag
> + * is used to identify if the PMU supports 64-bit single copy atomic. If 64-bit
> + * single copy atomic is not supported, the driver treats the register as a pair
> + * of 32-bit register.
> + */
> +
> +/*
> + * Read 64-bit register as a pair of 32-bit registers using hi-lo-hi sequence.
> + */
> +static u64 read_reg64_hilohi(const void __iomem *addr)
> +{
> + u32 val_lo, val_hi;
> + u64 val;
> +
> + /* Use high-low-high sequence to avoid tearing */
> + do {
> + val_hi = readl(addr + 4);
> + val_lo = readl(addr);
> + } while (val_hi != readl(addr + 4));
> +
> + val = (((u64)val_hi << 32) | val_lo);
> +
> + return val;
> +}
> +
> +/* Check if PMU supports 64-bit single copy atomic. */
> +static inline bool support_atomic(const struct coresight_pmu *coresight_pmu)
minor nit: supports_64bit_atomics() ?
> +{
> + return CHECK_APMT_FLAG(coresight_pmu->apmt_node->flags, ATOMIC, SUPP);
> +}
> +
> +/* Check if cycle counter is supported. */
> +static inline bool support_cc(const struct coresight_pmu *coresight_pmu)
minor nit: supports_cycle_counter() ?
> +{
> + return (coresight_pmu->pmcfgr & PMCFGR_CC);
> +}
> +
> +/* Get counter size. */
> +static inline u32 pmcfgr_size(const struct coresight_pmu *coresight_pmu)
Please could we change this to
counter_size(const struct coresight_pmu *coresight_pmu)
{
/* Counter size is PMCFGR_SIZE + 1 */
return FIELD_GET(PMCFGR_SIZE, pmu->pmcfgr) + 1;
}
and also add :
u64 counter_mask(..)
{
return GENMASK_ULL(counter_size(pmu) - 1, 0);
}
See more on this below.
> +{
> + return (coresight_pmu->pmcfgr >> PMCFGR_SIZE_SHIFT) & PMCFGR_SIZE_MASK;
> +}
> +/* Check if counter is implemented as 64-bit register. */
> +static inline bool
> +use_64b_counter_reg(const struct coresight_pmu *coresight_pmu)
> +{
> + return (pmcfgr_size(coresight_pmu) > 31);
> +}
> +
> +/* Get number of counters, minus one. */
> +static inline u32 pmcfgr_n(const struct coresight_pmu *coresight_pmu)
> +{
> + return (coresight_pmu->pmcfgr >> PMCFGR_N_SHIFT) & PMCFGR_N_MASK;
> +}
> +
> +ssize_t coresight_pmu_sysfs_event_show(struct device *dev,
> + struct device_attribute *attr, char *buf)
> +{
> + struct dev_ext_attribute *eattr =
> + container_of(attr, struct dev_ext_attribute, attr);
> + return sysfs_emit(buf, "event=0x%llx\n",
> + (unsigned long long)eattr->var);
> +}
> +EXPORT_SYMBOL_GPL(coresight_pmu_sysfs_event_show);
> +
> +/* Default event list. */
> +static struct attribute *coresight_pmu_event_attrs[] = {
> + CORESIGHT_EVENT_ATTR(cycles, CORESIGHT_PMU_EVT_CYCLES_DEFAULT),
> + NULL,
> +};
> +
> +struct attribute **
> +coresight_pmu_get_event_attrs(const struct coresight_pmu *coresight_pmu)
> +{
> + return coresight_pmu_event_attrs;
> +}
> +EXPORT_SYMBOL_GPL(coresight_pmu_get_event_attrs);
> +
> +umode_t coresight_pmu_event_attr_is_visible(struct kobject *kobj,
> + struct attribute *attr, int unused)
> +{
> + struct device *dev = kobj_to_dev(kobj);
> + struct coresight_pmu *coresight_pmu =
> + to_coresight_pmu(dev_get_drvdata(dev));
> + struct perf_pmu_events_attr *eattr;
> +
> + eattr = container_of(attr, typeof(*eattr), attr.attr);
> +
> + /* Hide cycle event if not supported */
> + if (!support_cc(coresight_pmu) &&
> + eattr->id == CORESIGHT_PMU_EVT_CYCLES_DEFAULT) {
> + return 0;
> + }
> +
> + return attr->mode;
> +}
> +EXPORT_SYMBOL_GPL(coresight_pmu_event_attr_is_visible);
> +
> +ssize_t coresight_pmu_sysfs_format_show(struct device *dev,
> + struct device_attribute *attr,
> + char *buf)
> +{
> + struct dev_ext_attribute *eattr =
> + container_of(attr, struct dev_ext_attribute, attr);
> + return sysfs_emit(buf, "%s\n", (char *)eattr->var);
> +}
> +EXPORT_SYMBOL_GPL(coresight_pmu_sysfs_format_show);
> +
> +static struct attribute *coresight_pmu_format_attrs[] = {
> + CORESIGHT_FORMAT_EVENT_ATTR,
> + CORESIGHT_FORMAT_FILTER_ATTR,
> + NULL,
> +};
> +
> +struct attribute **
> +coresight_pmu_get_format_attrs(const struct coresight_pmu *coresight_pmu)
> +{
> + return coresight_pmu_format_attrs;
> +}
> +EXPORT_SYMBOL_GPL(coresight_pmu_get_format_attrs);
> +
> +u32 coresight_pmu_event_type(const struct perf_event *event)
> +{
> + return event->attr.config & CORESIGHT_EVENT_MASK;
> +}
> +EXPORT_SYMBOL_GPL(coresight_pmu_event_type);
Why do we need to export these ?
> +
> +u32 coresight_pmu_event_filter(const struct perf_event *event)
> +{
> + return event->attr.config1 & CORESIGHT_FILTER_MASK;
> +}
> +EXPORT_SYMBOL_GPL(coresight_pmu_event_filter);
> +
> +static ssize_t coresight_pmu_identifier_show(struct device *dev,
> + struct device_attribute *attr,
> + char *page)
> +{
> + struct coresight_pmu *coresight_pmu =
> + to_coresight_pmu(dev_get_drvdata(dev));
> +
> + return sysfs_emit(page, "%s\n", coresight_pmu->identifier);
> +}
> +
> +static struct device_attribute coresight_pmu_identifier_attr =
> + __ATTR(identifier, 0444, coresight_pmu_identifier_show, NULL);
> +
> +static struct attribute *coresight_pmu_identifier_attrs[] = {
> + &coresight_pmu_identifier_attr.attr,
> + NULL,
> +};
> +
> +static struct attribute_group coresight_pmu_identifier_attr_group = {
> + .attrs = coresight_pmu_identifier_attrs,
> +};
> +
> +const char *
> +coresight_pmu_get_identifier(const struct coresight_pmu *coresight_pmu)
> +{
> + const char *identifier =
> + devm_kasprintf(coresight_pmu->dev, GFP_KERNEL, "%x",
> + coresight_pmu->impl.pmiidr);
> + return identifier;
> +}
> +EXPORT_SYMBOL_GPL(coresight_pmu_get_identifier);
> +
> +static const char *coresight_pmu_type_str[ACPI_APMT_NODE_TYPE_COUNT] = {
> + "mc",
> + "smmu",
> + "pcie",
> + "acpi",
> + "cache",
> +};
> +
> +const char *coresight_pmu_get_name(const struct coresight_pmu *coresight_pmu)
> +{
> + struct device *dev;
> + u8 pmu_type;
> + char *name;
> + char acpi_hid_string[ACPI_ID_LEN] = { 0 };
> + static atomic_t pmu_idx[ACPI_APMT_NODE_TYPE_COUNT] = { 0 };
> +
> + dev = coresight_pmu->dev;
> + pmu_type = coresight_pmu->apmt_node->type;
Please could we use a variable to refer to the "coresight_pmu->apmt_node" ?
It helps with code reading below and also gives us a hint to the
type of that field, we refer multiple times below.
> +
> + if (pmu_type >= ACPI_APMT_NODE_TYPE_COUNT) {
> + dev_err(dev, "unsupported PMU type-%u\n", pmu_type);
> + return NULL;
> + }
> +
> + if (pmu_type == ACPI_APMT_NODE_TYPE_ACPI) {
> + memcpy(acpi_hid_string,
> + &coresight_pmu->apmt_node->inst_primary,
> + sizeof(coresight_pmu->apmt_node->inst_primary));
> + name = devm_kasprintf(dev, GFP_KERNEL, "arm_%s_pmu_%s_%u",
> + coresight_pmu_type_str[pmu_type],
> + acpi_hid_string,
> + coresight_pmu->apmt_node->inst_secondary);
> + } else {
> + name = devm_kasprintf(dev, GFP_KERNEL, "arm_%s_pmu_%d",
> + coresight_pmu_type_str[pmu_type],
> + atomic_fetch_inc(&pmu_idx[pmu_type]));
> + }
> +
> + return name;
> +}
> +EXPORT_SYMBOL_GPL(coresight_pmu_get_name);
> +
> +static ssize_t coresight_pmu_cpumask_show(struct device *dev,
> + struct device_attribute *attr,
> + char *buf)
> +{
> + struct pmu *pmu = dev_get_drvdata(dev);
> + struct coresight_pmu *coresight_pmu = to_coresight_pmu(pmu);
> + struct dev_ext_attribute *eattr =
> + container_of(attr, struct dev_ext_attribute, attr);
> + unsigned long mask_id = (unsigned long)eattr->var;
> + const cpumask_t *cpumask;
> +
> + switch (mask_id) {
> + case CORESIGHT_ACTIVE_CPU_MASK:
> + cpumask = &coresight_pmu->active_cpu;
> + break;
> + case CORESIGHT_ASSOCIATED_CPU_MASK:
> + cpumask = &coresight_pmu->associated_cpus;
> + break;
> + default:
> + return 0;
> + }
> + return cpumap_print_to_pagebuf(true, buf, cpumask);
> +}
> +
> +static struct attribute *coresight_pmu_cpumask_attrs[] = {
> + CORESIGHT_CPUMASK_ATTR(cpumask, CORESIGHT_ACTIVE_CPU_MASK),
> + CORESIGHT_CPUMASK_ATTR(associated_cpus, CORESIGHT_ASSOCIATED_CPU_MASK),
> + NULL,
> +};
> +
> +static struct attribute_group coresight_pmu_cpumask_attr_group = {
> + .attrs = coresight_pmu_cpumask_attrs,
> +};
> +
> +static const struct coresight_pmu_impl_ops default_impl_ops = {
> + .get_event_attrs = coresight_pmu_get_event_attrs,
> + .get_format_attrs = coresight_pmu_get_format_attrs,
> + .get_identifier = coresight_pmu_get_identifier,
> + .get_name = coresight_pmu_get_name,
> + .is_cc_event = coresight_pmu_is_cc_event,
> + .event_type = coresight_pmu_event_type,
> + .event_filter = coresight_pmu_event_filter,
> + .event_attr_is_visible = coresight_pmu_event_attr_is_visible
> +};
> +
> +struct impl_match {
> + u32 pmiidr;
> + u32 mask;
> + int (*impl_init_ops)(struct coresight_pmu *coresight_pmu);
> +};
> +
> +static const struct impl_match impl_match[] = {
> + {}
> +};
> +
> +static int coresight_pmu_init_impl_ops(struct coresight_pmu *coresight_pmu)
> +{
> + int idx, ret;
> + struct acpi_apmt_node *apmt_node = coresight_pmu->apmt_node;
> + const struct impl_match *match = impl_match;
> +
> + /*
> + * Get PMU implementer and product id from APMT node.
> + * If APMT node doesn't have implementer/product id, try get it
> + * from PMIIDR.
> + */
> + coresight_pmu->impl.pmiidr =
> + (apmt_node->impl_id) ? apmt_node->impl_id :
> + readl(coresight_pmu->base0 + PMIIDR);
> +
> + /* Find implementer specific attribute ops. */
> + for (idx = 0; match->pmiidr; match++, idx++) {
idx is unused, please remove it.
> + if ((match->pmiidr & match->mask) ==
> + (coresight_pmu->impl.pmiidr & match->mask)) {
> + ret = match->impl_init_ops(coresight_pmu);
> + if (ret)
> + return ret;
> +
> + return 0;
> + }
> + }
> +
> + /* We don't find implementer specific attribute ops, use default. */
> + coresight_pmu->impl.ops = &default_impl_ops;
> + return 0;
> +}
> +
> +static struct attribute_group *
> +coresight_pmu_alloc_event_attr_group(struct coresight_pmu *coresight_pmu)
> +{
> + struct attribute_group *event_group;
> + struct device *dev = coresight_pmu->dev;
> +
> + event_group =
> + devm_kzalloc(dev, sizeof(struct attribute_group), GFP_KERNEL);
> + if (!event_group)
> + return NULL;
> +
> + event_group->name = "events";
> + event_group->attrs =
> + coresight_pmu->impl.ops->get_event_attrs(coresight_pmu);
Is it mandatory for the PMUs to implement get_event_attrs() and all the
callbacks ? Does it make sense to check and optionally call it, if the
backend driver implements it ? Or at least we should make sure the
required operations are filled in by the backend.
> + event_group->is_visible =
> + coresight_pmu->impl.ops->event_attr_is_visible;
> +
> + return event_group;
> +}
> +
> +static struct attribute_group *
> +coresight_pmu_alloc_format_attr_group(struct coresight_pmu *coresight_pmu)
> +{
> + struct attribute_group *format_group;
> + struct device *dev = coresight_pmu->dev;
> +
> + format_group =
> + devm_kzalloc(dev, sizeof(struct attribute_group), GFP_KERNEL);
> + if (!format_group)
> + return NULL;
> +
> + format_group->name = "format";
> + format_group->attrs =
> + coresight_pmu->impl.ops->get_format_attrs(coresight_pmu);
Same as above.
> +
> + return format_group;
> +}
> +
> +static struct attribute_group **
> +coresight_pmu_alloc_attr_group(struct coresight_pmu *coresight_pmu)
> +{
> + const struct coresight_pmu_impl_ops *impl_ops;
> + struct attribute_group **attr_groups = NULL;
> + struct device *dev = coresight_pmu->dev;
> + int ret;
> +
> + ret = coresight_pmu_init_impl_ops(coresight_pmu);
> + if (ret)
> + return NULL;
> +
> + impl_ops = coresight_pmu->impl.ops;
> +
> + coresight_pmu->identifier = impl_ops->get_identifier(coresight_pmu);
> + coresight_pmu->name = impl_ops->get_name(coresight_pmu);
If the implementor doesn't have a naming scheme, could we default to the
generic scheme ? In general, we could fall back to the "generic"
callbacks here if the implementor doesn't provide a specific callback.
That way, we could skip exporting a lot of the callbacks.
> +
> + if (!coresight_pmu->name)
> + return NULL;
> +
> + attr_groups = devm_kzalloc(dev, 5 * sizeof(struct attribute_group *),
> + GFP_KERNEL);
nit: devm_kcalloc() ?
> + if (!attr_groups)
> + return NULL;
> +
> + attr_groups[0] = coresight_pmu_alloc_event_attr_group(coresight_pmu);
> + attr_groups[1] = coresight_pmu_alloc_format_attr_group(coresight_pmu);
What if one of the above returns NULL ? Then the following entries
would not get listed.
> + attr_groups[2] = &coresight_pmu_identifier_attr_group;
> + attr_groups[3] = &coresight_pmu_cpumask_attr_group;
> +
> + return attr_groups;
> +}
> +
> +static inline void
> +coresight_pmu_reset_counters(struct coresight_pmu *coresight_pmu)
> +{
> + u32 pmcr = 0;
> +
> + pmcr |= PMCR_P;
> + pmcr |= PMCR_C;
> + writel(pmcr, coresight_pmu->base0 + PMCR);
> +}
> +
> +static inline void
> +coresight_pmu_start_counters(struct coresight_pmu *coresight_pmu)
> +{
> + u32 pmcr;
> +
> + pmcr = PMCR_E;
> + writel(pmcr, coresight_pmu->base0 + PMCR);
nit: Isn't it easier to read as:
writel(PMCR_E, ...);
> +}
> +
> +static inline void
> +coresight_pmu_stop_counters(struct coresight_pmu *coresight_pmu)
> +{
> + u32 pmcr;
> +
> + pmcr = 0;
> + writel(pmcr, coresight_pmu->base0 + PMCR);
Similar to above,
writel(0, ...);
?
> +}
> +
> +static void coresight_pmu_enable(struct pmu *pmu)
> +{
> + bool disabled;
> + struct coresight_pmu *coresight_pmu = to_coresight_pmu(pmu);
> +
> + disabled = bitmap_empty(coresight_pmu->hw_events.used_ctrs,
> + coresight_pmu->num_logical_counters);
> +
> + if (disabled)
> + return;
> +
> + coresight_pmu_start_counters(coresight_pmu);
> +}
> +
> +static void coresight_pmu_disable(struct pmu *pmu)
> +{
> + struct coresight_pmu *coresight_pmu = to_coresight_pmu(pmu);
> +
> + coresight_pmu_stop_counters(coresight_pmu);
> +}
> +
> +bool coresight_pmu_is_cc_event(const struct perf_event *event)
> +{
> + return (event->attr.config == CORESIGHT_PMU_EVT_CYCLES_DEFAULT);
> +}
> +EXPORT_SYMBOL_GPL(coresight_pmu_is_cc_event);
> +
> +static int
> +coresight_pmu_get_event_idx(struct coresight_pmu_hw_events *hw_events,
> + struct perf_event *event)
> +{
> + int idx;
> + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> +
> + if (support_cc(coresight_pmu)) {
> + if (coresight_pmu->impl.ops->is_cc_event(event)) {
> + /* Search for available cycle counter. */
> + if (test_and_set_bit(coresight_pmu->cc_logical_idx,
> + hw_events->used_ctrs))
> + return -EAGAIN;
> +
> + return coresight_pmu->cc_logical_idx;
> + }
> +
> + /*
> + * Search a regular counter from the used counter bitmap.
> + * The cycle counter divides the bitmap into two parts. Search
> + * the first then second half to exclude the cycle counter bit.
> + */
> + idx = find_first_zero_bit(hw_events->used_ctrs,
> + coresight_pmu->cc_logical_idx);
> + if (idx >= coresight_pmu->cc_logical_idx) {
> + idx = find_next_zero_bit(
> + hw_events->used_ctrs,
> + coresight_pmu->num_logical_counters,
> + coresight_pmu->cc_logical_idx + 1);
> + }
> + } else {
> + idx = find_first_zero_bit(hw_events->used_ctrs,
> + coresight_pmu->num_logical_counters);
> + }
> +
> + if (idx >= coresight_pmu->num_logical_counters)
> + return -EAGAIN;
> +
> + set_bit(idx, hw_events->used_ctrs);
> +
> + return idx;
> +}
> +
> +static bool
> +coresight_pmu_validate_event(struct pmu *pmu,
> + struct coresight_pmu_hw_events *hw_events,
> + struct perf_event *event)
> +{
> + if (is_software_event(event))
> + return true;
> +
> + /* Reject groups spanning multiple HW PMUs. */
> + if (event->pmu != pmu)
> + return false;
> +
> + return (coresight_pmu_get_event_idx(hw_events, event) >= 0);
> +}
> +
> +/*
> + * Make sure the group of events can be scheduled at once
> + * on the PMU.
> + */
> +static bool coresight_pmu_validate_group(struct perf_event *event)
> +{
> + struct perf_event *sibling, *leader = event->group_leader;
> + struct coresight_pmu_hw_events fake_hw_events;
> +
> + if (event->group_leader == event)
> + return true;
> +
> + memset(&fake_hw_events, 0, sizeof(fake_hw_events));
> +
> + if (!coresight_pmu_validate_event(event->pmu, &fake_hw_events, leader))
> + return false;
> +
> + for_each_sibling_event(sibling, leader) {
> + if (!coresight_pmu_validate_event(event->pmu, &fake_hw_events,
> + sibling))
> + return false;
> + }
> +
> + return coresight_pmu_validate_event(event->pmu, &fake_hw_events, event);
> +}
> +
> +static int coresight_pmu_event_init(struct perf_event *event)
> +{
> + struct coresight_pmu *coresight_pmu;
> + struct hw_perf_event *hwc = &event->hw;
> +
> + coresight_pmu = to_coresight_pmu(event->pmu);
> +
> + /*
> + * Following other "uncore" PMUs, we do not support sampling mode or
> + * attach to a task (per-process mode).
> + */
> + if (is_sampling_event(event)) {
> + dev_dbg(coresight_pmu->pmu.dev,
> + "Can't support sampling events\n");
> + return -EOPNOTSUPP;
> + }
> +
> + if (event->cpu < 0 || event->attach_state & PERF_ATTACH_TASK) {
> + dev_dbg(coresight_pmu->pmu.dev,
> + "Can't support per-task counters\n");
> + return -EINVAL;
> + }
> +
> + /*
> + * Make sure the CPU assignment is on one of the CPUs associated with
> + * this PMU.
> + */
> + if (!cpumask_test_cpu(event->cpu, &coresight_pmu->associated_cpus)) {
> + dev_dbg(coresight_pmu->pmu.dev,
> + "Requested cpu is not associated with the PMU\n");
> + return -EINVAL;
> + }
> +
> + /* Enforce the current active CPU to handle the events in this PMU. */
> + event->cpu = cpumask_first(&coresight_pmu->active_cpu);
> + if (event->cpu >= nr_cpu_ids)
> + return -EINVAL;
> +
> + if (!coresight_pmu_validate_group(event))
> + return -EINVAL;
> +
> + /*
> + * The logical counter id is tracked with hw_perf_event.extra_reg.idx.
> + * The physical counter id is tracked with hw_perf_event.idx.
> + * We don't assign an index until we actually place the event onto
> + * hardware. Use -1 to signify that we haven't decided where to put it
> + * yet.
> + */
> + hwc->idx = -1;
> + hwc->extra_reg.idx = -1;
> + hwc->config_base = coresight_pmu->impl.ops->event_type(event);
nit: This is a bit confusing when combined with pmu->base0 at the point where
we write the event config.
Could we use hwc->config instead?
> +
> + return 0;
> +}
> +
> +static inline u32 counter_offset(u32 reg_sz, u32 ctr_idx)
> +{
> + return (PMEVCNTR_LO + (reg_sz * ctr_idx));
> +}
> +
> +static void coresight_pmu_write_counter(struct perf_event *event, u64 val)
> +{
> + u32 offset;
> + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> +
> + if (use_64b_counter_reg(coresight_pmu)) {
> + offset = counter_offset(sizeof(u64), event->hw.idx);
> +
> + writeq(val, coresight_pmu->base1 + offset);
> + } else {
> + offset = counter_offset(sizeof(u32), event->hw.idx);
> +
> + writel(lower_32_bits(val), coresight_pmu->base1 + offset);
> + }
> +}
> +
> +static u64 coresight_pmu_read_counter(struct perf_event *event)
> +{
> + u32 offset;
> + const void __iomem *counter_addr;
> + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> +
> + if (use_64b_counter_reg(coresight_pmu)) {
> + offset = counter_offset(sizeof(u64), event->hw.idx);
> + counter_addr = coresight_pmu->base1 + offset;
> +
> + return support_atomic(coresight_pmu) ?
> + readq(counter_addr) :
> + read_reg64_hilohi(counter_addr);
> + }
> +
> + offset = counter_offset(sizeof(u32), event->hw.idx);
> + return readl(coresight_pmu->base1 + offset);
> +}
> +
> +/*
> + * coresight_pmu_set_event_period: Set the period for the counter.
> + *
> + * To handle cases of extreme interrupt latency, we program
> + * the counter with half of the max count for the counters.
> + */
> +static void coresight_pmu_set_event_period(struct perf_event *event)
> +{
> + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> + u64 val = GENMASK_ULL(pmcfgr_size(coresight_pmu), 0) >> 1;
Could we please add a helper to get the "counter mask"? Otherwise this is a
bit confusing: one would expect pmcfgr_size() to return 64 for 64-bit
counters, but it actually returns 63, which made me stare at the
GENMASK_ULL() and trace it back down to the spec. Please see my suggestion
on the top to add counter_mask().
> +
> + local64_set(&event->hw.prev_count, val);
> + coresight_pmu_write_counter(event, val);
> +}
> +
> +static void coresight_pmu_enable_counter(struct coresight_pmu *coresight_pmu,
> + int idx)
> +{
> + u32 reg_id, reg_bit, inten_off, cnten_off;
> +
> + reg_id = CORESIGHT_IDX_TO_SET_CLR_REG_ID(idx);
> + reg_bit = CORESIGHT_IDX_TO_SET_CLR_REG_BIT(idx);
> +
> + inten_off = PMINTENSET + (4 * reg_id);
> + cnten_off = PMCNTENSET + (4 * reg_id);
> +
> + writel(BIT(reg_bit), coresight_pmu->base0 + inten_off);
> + writel(BIT(reg_bit), coresight_pmu->base0 + cnten_off);
> +}
> +
> +static void coresight_pmu_disable_counter(struct coresight_pmu *coresight_pmu,
> + int idx)
> +{
> + u32 reg_id, reg_bit, inten_off, cnten_off;
> +
> + reg_id = CORESIGHT_IDX_TO_SET_CLR_REG_ID(idx);
> + reg_bit = CORESIGHT_IDX_TO_SET_CLR_REG_BIT(idx);
> +
> + inten_off = PMINTENCLR + (4 * reg_id);
> + cnten_off = PMCNTENCLR + (4 * reg_id);
> +
> + writel(BIT(reg_bit), coresight_pmu->base0 + cnten_off);
> + writel(BIT(reg_bit), coresight_pmu->base0 + inten_off);
> +}
> +
> +static void coresight_pmu_event_update(struct perf_event *event)
> +{
> + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> + struct hw_perf_event *hwc = &event->hw;
> + u64 delta, prev, now;
> +
> + do {
> + prev = local64_read(&hwc->prev_count);
> + now = coresight_pmu_read_counter(event);
> + } while (local64_cmpxchg(&hwc->prev_count, prev, now) != prev);
> +
> + delta = (now - prev) & GENMASK_ULL(pmcfgr_size(coresight_pmu), 0);
> + local64_add(delta, &event->count);
> +}
> +
> +static inline void coresight_pmu_set_event(struct coresight_pmu *coresight_pmu,
> + struct hw_perf_event *hwc)
> +{
> + u32 offset = PMEVTYPER + (4 * hwc->idx);
> +
> + writel(hwc->config_base, coresight_pmu->base0 + offset);
> +}
> +
> +static inline void
> +coresight_pmu_set_ev_filter(struct coresight_pmu *coresight_pmu,
> + struct hw_perf_event *hwc, u32 filter)
> +{
> + u32 offset = PMEVFILTR + (4 * hwc->idx);
> +
> + writel(filter, coresight_pmu->base0 + offset);
> +}
> +
> +static inline void
> +coresight_pmu_set_cc_filter(struct coresight_pmu *coresight_pmu, u32 filter)
> +{
> + u32 offset = PMCCFILTR;
> +
> + writel(filter, coresight_pmu->base0 + offset);
> +}
> +
> +static void coresight_pmu_start(struct perf_event *event, int pmu_flags)
> +{
> + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> + struct hw_perf_event *hwc = &event->hw;
> + u32 filter;
> +
> + /* We always reprogram the counter */
> + if (pmu_flags & PERF_EF_RELOAD)
> + WARN_ON(!(hwc->state & PERF_HES_UPTODATE));
> +
> + coresight_pmu_set_event_period(event);
> +
> + filter = coresight_pmu->impl.ops->event_filter(event);
> +
> + if (event->hw.extra_reg.idx == coresight_pmu->cc_logical_idx) {
> + coresight_pmu_set_cc_filter(coresight_pmu, filter);
> + } else {
> + coresight_pmu_set_event(coresight_pmu, hwc);
> + coresight_pmu_set_ev_filter(coresight_pmu, hwc, filter);
> + }
> +
> + hwc->state = 0;
> +
> + coresight_pmu_enable_counter(coresight_pmu, hwc->idx);
> +}
> +
> +static void coresight_pmu_stop(struct perf_event *event, int pmu_flags)
> +{
> + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> + struct hw_perf_event *hwc = &event->hw;
> +
> + if (hwc->state & PERF_HES_STOPPED)
> + return;
> +
> + coresight_pmu_disable_counter(coresight_pmu, hwc->idx);
> + coresight_pmu_event_update(event);
> +
> + hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE;
> +}
> +
> +static inline u32 to_phys_idx(struct coresight_pmu *coresight_pmu, u32 idx)
> +{
> + return (idx == coresight_pmu->cc_logical_idx) ?
> + CORESIGHT_PMU_IDX_CCNTR : idx;
> +}
> +
> +static int coresight_pmu_add(struct perf_event *event, int flags)
> +{
> + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> + struct coresight_pmu_hw_events *hw_events = &coresight_pmu->hw_events;
> + struct hw_perf_event *hwc = &event->hw;
> + int idx;
> +
> + if (WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(),
> + &coresight_pmu->associated_cpus)))
> + return -ENOENT;
> +
> + idx = coresight_pmu_get_event_idx(hw_events, event);
> + if (idx < 0)
> + return idx;
> +
> + hw_events->events[idx] = event;
> + hwc->idx = to_phys_idx(coresight_pmu, idx);
> + hwc->extra_reg.idx = idx;
> + hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
> +
> + if (flags & PERF_EF_START)
> + coresight_pmu_start(event, PERF_EF_RELOAD);
> +
> + /* Propagate changes to the userspace mapping. */
> + perf_event_update_userpage(event);
> +
> + return 0;
> +}
> +
> +static void coresight_pmu_del(struct perf_event *event, int flags)
> +{
> + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> + struct coresight_pmu_hw_events *hw_events = &coresight_pmu->hw_events;
> + struct hw_perf_event *hwc = &event->hw;
> + int idx = hwc->extra_reg.idx;
> +
> + coresight_pmu_stop(event, PERF_EF_UPDATE);
> +
> + hw_events->events[idx] = NULL;
> +
> + clear_bit(idx, hw_events->used_ctrs);
> +
> + perf_event_update_userpage(event);
> +}
> +
> +static void coresight_pmu_read(struct perf_event *event)
> +{
> + coresight_pmu_event_update(event);
> +}
> +
> +static int coresight_pmu_alloc(struct platform_device *pdev,
> + struct coresight_pmu **coresight_pmu)
> +{
> + struct acpi_apmt_node *apmt_node;
> + struct device *dev;
> + struct coresight_pmu *pmu;
> +
> + dev = &pdev->dev;
> + apmt_node = *(struct acpi_apmt_node **)dev_get_platdata(dev);
> + if (!apmt_node) {
> + dev_err(dev, "failed to get APMT node\n");
> + return -ENOMEM;
> + }
> +
> + pmu = devm_kzalloc(dev, sizeof(*pmu), GFP_KERNEL);
> + if (!pmu)
> + return -ENOMEM;
> +
> + *coresight_pmu = pmu;
> +
> + pmu->dev = dev;
> + pmu->apmt_node = apmt_node;
> +
> + platform_set_drvdata(pdev, pmu);
> +
> + return 0;
> +}
> +
> +static int coresight_pmu_init_mmio(struct coresight_pmu *coresight_pmu)
> +{
> + struct device *dev;
> + struct platform_device *pdev;
> + struct acpi_apmt_node *apmt_node;
> +
> + dev = coresight_pmu->dev;
> + pdev = to_platform_device(dev);
> + apmt_node = coresight_pmu->apmt_node;
> +
> + /* Base address for page 0. */
> + coresight_pmu->base0 = devm_platform_ioremap_resource(pdev, 0);
> + if (IS_ERR(coresight_pmu->base0)) {
> + dev_err(dev, "ioremap failed for page-0 resource\n");
> + return PTR_ERR(coresight_pmu->base0);
> + }
> +
> + /* Base address for page 1 if supported. Otherwise point it to page 0. */
> + coresight_pmu->base1 = coresight_pmu->base0;
> + if (CHECK_APMT_FLAG(apmt_node->flags, DUAL_PAGE, SUPP)) {
> + coresight_pmu->base1 = devm_platform_ioremap_resource(pdev, 1);
> + if (IS_ERR(coresight_pmu->base1)) {
> + dev_err(dev, "ioremap failed for page-1 resource\n");
> + return PTR_ERR(coresight_pmu->base1);
> + }
> + }
> +
> + coresight_pmu->pmcfgr = readl(coresight_pmu->base0 + PMCFGR);
> +
> + coresight_pmu->num_logical_counters = pmcfgr_n(coresight_pmu) + 1;
> +
> + coresight_pmu->cc_logical_idx = CORESIGHT_PMU_MAX_HW_CNTRS;
> +
> + if (support_cc(coresight_pmu)) {
> + /*
> + * The last logical counter is mapped to cycle counter if
> + * there is a gap between regular and cycle counter. Otherwise,
> + * logical and physical have 1-to-1 mapping.
> + */
> + coresight_pmu->cc_logical_idx =
> + (coresight_pmu->num_logical_counters <=
> + CORESIGHT_PMU_IDX_CCNTR) ?
> + coresight_pmu->num_logical_counters - 1 :
> + CORESIGHT_PMU_IDX_CCNTR;
> + }
> +
> + coresight_pmu->num_set_clr_reg =
> + DIV_ROUND_UP(coresight_pmu->num_logical_counters,
> + CORESIGHT_SET_CLR_REG_COUNTER_NUM);
> +
> + coresight_pmu->hw_events.events =
> + devm_kzalloc(dev,
> + sizeof(*coresight_pmu->hw_events.events) *
> + coresight_pmu->num_logical_counters,
> + GFP_KERNEL);
> +
> + if (!coresight_pmu->hw_events.events)
> + return -ENOMEM;
> +
> + return 0;
> +}
> +
> +static inline int
> +coresight_pmu_get_reset_overflow(struct coresight_pmu *coresight_pmu,
> + u32 *pmovs)
> +{
> + int i;
> + u32 pmovclr_offset = PMOVSCLR;
> + u32 has_overflowed = 0;
> +
> + for (i = 0; i < coresight_pmu->num_set_clr_reg; ++i) {
> + pmovs[i] = readl(coresight_pmu->base1 + pmovclr_offset);
> + has_overflowed |= pmovs[i];
> + writel(pmovs[i], coresight_pmu->base1 + pmovclr_offset);
> + pmovclr_offset += sizeof(u32);
> + }
> +
> + return has_overflowed != 0;
> +}
> +
> +static irqreturn_t coresight_pmu_handle_irq(int irq_num, void *dev)
> +{
> + int idx, has_overflowed;
> + struct perf_event *event;
> + struct coresight_pmu *coresight_pmu = dev;
> + u32 pmovs[CORESIGHT_SET_CLR_REG_MAX_NUM] = { 0 };
> + bool handled = false;
> +
> + coresight_pmu_stop_counters(coresight_pmu);
> +
> + has_overflowed = coresight_pmu_get_reset_overflow(coresight_pmu, pmovs);
> + if (!has_overflowed)
> + goto done;
> +
> + for_each_set_bit(idx, coresight_pmu->hw_events.used_ctrs,
> + coresight_pmu->num_logical_counters) {
> + event = coresight_pmu->hw_events.events[idx];
> +
> + if (!event)
> + continue;
> +
> + if (!test_bit(event->hw.idx, (unsigned long *)pmovs))
> + continue;
> +
> + coresight_pmu_event_update(event);
> + coresight_pmu_set_event_period(event);
> +
> + handled = true;
> + }
> +
> +done:
> + coresight_pmu_start_counters(coresight_pmu);
> + return IRQ_RETVAL(handled);
> +}
> +
> +static int coresight_pmu_request_irq(struct coresight_pmu *coresight_pmu)
> +{
> + int irq, ret;
> + struct device *dev;
> + struct platform_device *pdev;
> + struct acpi_apmt_node *apmt_node;
> +
> + dev = coresight_pmu->dev;
> + pdev = to_platform_device(dev);
> + apmt_node = coresight_pmu->apmt_node;
> +
> + /* Skip IRQ request if the PMU does not support overflow interrupt. */
> + if (apmt_node->ovflw_irq == 0)
> + return 0;
> +
> + irq = platform_get_irq(pdev, 0);
> + if (irq < 0)
> + return irq;
> +
> + ret = devm_request_irq(dev, irq, coresight_pmu_handle_irq,
> + IRQF_NOBALANCING | IRQF_NO_THREAD, dev_name(dev),
> + coresight_pmu);
> + if (ret) {
> + dev_err(dev, "Could not request IRQ %d\n", irq);
> + return ret;
> + }
> +
> + coresight_pmu->irq = irq;
> +
> + return 0;
> +}
> +
> +static inline int coresight_pmu_find_cpu_container(int cpu, u32 container_uid)
> +{
> + u32 acpi_uid;
> + struct device *cpu_dev = get_cpu_device(cpu);
> + struct acpi_device *acpi_dev = ACPI_COMPANION(cpu_dev);
> +
> + if (!cpu_dev)
> + return -ENODEV;
> +
> + while (acpi_dev) {
> + if (!strcmp(acpi_device_hid(acpi_dev),
> + ACPI_PROCESSOR_CONTAINER_HID) &&
> + !kstrtouint(acpi_device_uid(acpi_dev), 0, &acpi_uid) &&
> + acpi_uid == container_uid)
> + return 0;
> +
> + acpi_dev = acpi_dev->parent;
> + }
> +
> + return -ENODEV;
> +}
> +
> +static int coresight_pmu_get_cpus(struct coresight_pmu *coresight_pmu)
> +{
> + struct acpi_apmt_node *apmt_node;
> + int affinity_flag;
> + int cpu;
> +
> + apmt_node = coresight_pmu->apmt_node;
> + affinity_flag = apmt_node->flags & ACPI_APMT_FLAGS_AFFINITY;
> +
> + if (affinity_flag == ACPI_APMT_FLAGS_AFFINITY_PROC) {
> + for_each_possible_cpu(cpu) {
> + if (apmt_node->proc_affinity ==
> + get_acpi_id_for_cpu(cpu)) {
minor nit:
There is another user of this reverse search for a CPU by ACPI id.
See get_cpu_for_acpi_id() in arch/arm64/kernel/acpi_numa.c.
Maybe we should make that a generic helper and move it to common code.
> + cpumask_set_cpu(
> + cpu, &coresight_pmu->associated_cpus);
> + break;
> + }
> + }
> + } else {
> + for_each_possible_cpu(cpu) {
> + if (coresight_pmu_find_cpu_container(
> + cpu, apmt_node->proc_affinity))
> + continue;
> +
> + cpumask_set_cpu(cpu, &coresight_pmu->associated_cpus);
> + }
> + }
> +
> + return 0;
> +}
> +
> +static int coresight_pmu_register_pmu(struct coresight_pmu *coresight_pmu)
> +{
> + int ret;
> + struct attribute_group **attr_groups;
> +
> + attr_groups = coresight_pmu_alloc_attr_group(coresight_pmu);
> + if (!attr_groups) {
> + ret = -ENOMEM;
> + return ret;
> + }
> +
> + ret = cpuhp_state_add_instance(coresight_pmu_cpuhp_state,
> + &coresight_pmu->cpuhp_node);
> + if (ret)
> + return ret;
> +
> + coresight_pmu->pmu = (struct pmu){
> + .task_ctx_nr = perf_invalid_context,
> + .module = THIS_MODULE,
> + .pmu_enable = coresight_pmu_enable,
> + .pmu_disable = coresight_pmu_disable,
> + .event_init = coresight_pmu_event_init,
> + .add = coresight_pmu_add,
> + .del = coresight_pmu_del,
> + .start = coresight_pmu_start,
> + .stop = coresight_pmu_stop,
> + .read = coresight_pmu_read,
> + .attr_groups = (const struct attribute_group **)attr_groups,
> + .capabilities = PERF_PMU_CAP_NO_EXCLUDE,
> + };
> +
> + /* Hardware counter init */
> + coresight_pmu_stop_counters(coresight_pmu);
> + coresight_pmu_reset_counters(coresight_pmu);
> +
> + ret = perf_pmu_register(&coresight_pmu->pmu, coresight_pmu->name, -1);
> + if (ret) {
> + cpuhp_state_remove_instance(coresight_pmu_cpuhp_state,
> + &coresight_pmu->cpuhp_node);
> + }
> +
> + return ret;
> +}
> +
> +static int coresight_pmu_device_probe(struct platform_device *pdev)
> +{
> + int ret;
> + struct coresight_pmu *coresight_pmu;
> +
> + ret = coresight_pmu_alloc(pdev, &coresight_pmu);
> + if (ret)
> + return ret;
> +
> + ret = coresight_pmu_init_mmio(coresight_pmu);
> + if (ret)
> + return ret;
> +
> + ret = coresight_pmu_request_irq(coresight_pmu);
> + if (ret)
> + return ret;
> +
> + ret = coresight_pmu_get_cpus(coresight_pmu);
> + if (ret)
> + return ret;
> +
> + ret = coresight_pmu_register_pmu(coresight_pmu);
> + if (ret)
> + return ret;
> +
> + return 0;
> +}
> +
> +static int coresight_pmu_device_remove(struct platform_device *pdev)
> +{
> + struct coresight_pmu *coresight_pmu = platform_get_drvdata(pdev);
> +
> + perf_pmu_unregister(&coresight_pmu->pmu);
> + cpuhp_state_remove_instance(coresight_pmu_cpuhp_state,
> + &coresight_pmu->cpuhp_node);
> +
> + return 0;
> +}
> +
> +static struct platform_driver coresight_pmu_driver = {
> + .driver = {
> + .name = "arm-system-pmu",
> + .suppress_bind_attrs = true,
> + },
> + .probe = coresight_pmu_device_probe,
> + .remove = coresight_pmu_device_remove,
> +};
> +
> +static void coresight_pmu_set_active_cpu(int cpu,
> + struct coresight_pmu *coresight_pmu)
> +{
> + cpumask_set_cpu(cpu, &coresight_pmu->active_cpu);
> + WARN_ON(irq_set_affinity(coresight_pmu->irq,
> + &coresight_pmu->active_cpu));
> +}
> +
> +static int coresight_pmu_cpu_online(unsigned int cpu, struct hlist_node *node)
> +{
> + struct coresight_pmu *coresight_pmu =
> + hlist_entry_safe(node, struct coresight_pmu, cpuhp_node);
> +
> + if (!cpumask_test_cpu(cpu, &coresight_pmu->associated_cpus))
> + return 0;
> +
> + /* If the PMU is already managed, there is nothing to do */
> + if (!cpumask_empty(&coresight_pmu->active_cpu))
> + return 0;
> +
> + /* Use this CPU for event counting */
> + coresight_pmu_set_active_cpu(cpu, coresight_pmu);
> +
> + return 0;
> +}
> +
> +static int coresight_pmu_cpu_teardown(unsigned int cpu, struct hlist_node *node)
> +{
> + int dst;
> + struct cpumask online_supported;
> +
> + struct coresight_pmu *coresight_pmu =
> + hlist_entry_safe(node, struct coresight_pmu, cpuhp_node);
> +
> + /* Nothing to do if this CPU doesn't own the PMU */
> + if (!cpumask_test_and_clear_cpu(cpu, &coresight_pmu->active_cpu))
> + return 0;
> +
> + /* Choose a new CPU to migrate ownership of the PMU to */
> + cpumask_and(&online_supported, &coresight_pmu->associated_cpus,
> + cpu_online_mask);
> + dst = cpumask_any_but(&online_supported, cpu);
> + if (dst >= nr_cpu_ids)
> + return 0;
> +
> + /* Use this CPU for event counting */
> + perf_pmu_migrate_context(&coresight_pmu->pmu, cpu, dst);
> + coresight_pmu_set_active_cpu(dst, coresight_pmu);
> +
> + return 0;
> +}
> +
> +static int __init coresight_pmu_init(void)
> +{
> + int ret;
> +
> + ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, PMUNAME,
> + coresight_pmu_cpu_online,
> + coresight_pmu_cpu_teardown);
> + if (ret < 0)
> + return ret;
> + coresight_pmu_cpuhp_state = ret;
> + return platform_driver_register(&coresight_pmu_driver);
> +}
> +
> +static void __exit coresight_pmu_exit(void)
> +{
> + platform_driver_unregister(&coresight_pmu_driver);
> + cpuhp_remove_multi_state(coresight_pmu_cpuhp_state);
> +}
> +
> +module_init(coresight_pmu_init);
> +module_exit(coresight_pmu_exit);
> diff --git a/drivers/perf/coresight_pmu/arm_coresight_pmu.h b/drivers/perf/coresight_pmu/arm_coresight_pmu.h
> new file mode 100644
> index 000000000000..88fb4cd3dafa
> --- /dev/null
> +++ b/drivers/perf/coresight_pmu/arm_coresight_pmu.h
> @@ -0,0 +1,177 @@
> +/* SPDX-License-Identifier: GPL-2.0
> + *
> + * ARM CoreSight PMU driver.
> + * Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
> + *
> + */
> +
> +#ifndef __ARM_CORESIGHT_PMU_H__
> +#define __ARM_CORESIGHT_PMU_H__
> +
> +#include <linux/acpi.h>
> +#include <linux/bitfield.h>
> +#include <linux/cpumask.h>
> +#include <linux/device.h>
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/perf_event.h>
> +#include <linux/platform_device.h>
> +#include <linux/types.h>
> +
> +#define to_coresight_pmu(p) (container_of(p, struct coresight_pmu, pmu))
> +
> +#define CORESIGHT_EXT_ATTR(_name, _func, _config) \
> + (&((struct dev_ext_attribute[]){ \
> + { \
> + .attr = __ATTR(_name, 0444, _func, NULL), \
> + .var = (void *)_config \
> + } \
> + })[0].attr.attr)
> +
> +#define CORESIGHT_FORMAT_ATTR(_name, _config) \
> + CORESIGHT_EXT_ATTR(_name, coresight_pmu_sysfs_format_show, \
> + (char *)_config)
> +
> +#define CORESIGHT_EVENT_ATTR(_name, _config) \
> + PMU_EVENT_ATTR_ID(_name, coresight_pmu_sysfs_event_show, _config)
> +
> +
> +/* Default event id mask */
> +#define CORESIGHT_EVENT_MASK 0xFFFFFFFFULL
Please use GENMASK_ULL(), everywhere.
Thanks
Suzuki
On 21/06/2022 06:50, Besar Wicaksono wrote:
> Add support for NVIDIA System Cache Fabric (SCF) and Memory Control
> Fabric (MCF) PMU attributes for CoreSight PMU implementation in
> NVIDIA devices.
>
> Signed-off-by: Besar Wicaksono <[email protected]>
> ---
> drivers/perf/coresight_pmu/Makefile | 3 +-
> .../perf/coresight_pmu/arm_coresight_pmu.c | 4 +
> .../coresight_pmu/arm_coresight_pmu_nvidia.c | 312 ++++++++++++++++++
> .../coresight_pmu/arm_coresight_pmu_nvidia.h | 17 +
> 4 files changed, 335 insertions(+), 1 deletion(-)
> create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.c
> create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.h
>
> diff --git a/drivers/perf/coresight_pmu/Makefile b/drivers/perf/coresight_pmu/Makefile
> index a2a7a5fbbc16..181b1b0dbaa1 100644
> --- a/drivers/perf/coresight_pmu/Makefile
> +++ b/drivers/perf/coresight_pmu/Makefile
> @@ -3,4 +3,5 @@
> # SPDX-License-Identifier: GPL-2.0
>
> obj-$(CONFIG_ARM_CORESIGHT_PMU) += \
> - arm_coresight_pmu.o
> + arm_coresight_pmu.o \
> + arm_coresight_pmu_nvidia.o
> diff --git a/drivers/perf/coresight_pmu/arm_coresight_pmu.c b/drivers/perf/coresight_pmu/arm_coresight_pmu.c
> index ba52cc592b2d..12179d029bfd 100644
> --- a/drivers/perf/coresight_pmu/arm_coresight_pmu.c
> +++ b/drivers/perf/coresight_pmu/arm_coresight_pmu.c
> @@ -42,6 +42,7 @@
> #include <acpi/processor.h>
>
> #include "arm_coresight_pmu.h"
> +#include "arm_coresight_pmu_nvidia.h"
>
> #define PMUNAME "arm_system_pmu"
>
> @@ -396,6 +397,9 @@ struct impl_match {
> };
>
> static const struct impl_match impl_match[] = {
> + { .pmiidr = 0x36B,
> + .mask = PMIIDR_IMPLEMENTER_MASK,
> + .impl_init_ops = nv_coresight_init_ops },
> {}
> };
>
> diff --git a/drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.c b/drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.c
> new file mode 100644
> index 000000000000..54f4eae4c529
> --- /dev/null
> +++ b/drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.c
> @@ -0,0 +1,312 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
> + *
> + */
> +
> +/* Support for NVIDIA specific attributes. */
> +
> +#include "arm_coresight_pmu_nvidia.h"
> +
> +#define NV_MCF_PCIE_PORT_COUNT 10ULL
> +#define NV_MCF_PCIE_FILTER_ID_MASK ((1ULL << NV_MCF_PCIE_PORT_COUNT) - 1)
> +
> +#define NV_MCF_GPU_PORT_COUNT 2ULL
> +#define NV_MCF_GPU_FILTER_ID_MASK ((1ULL << NV_MCF_GPU_PORT_COUNT) - 1)
> +
> +#define NV_MCF_NVLINK_PORT_COUNT 4ULL
> +#define NV_MCF_NVLINK_FILTER_ID_MASK ((1ULL << NV_MCF_NVLINK_PORT_COUNT) - 1)
> +
> +#define PMIIDR_PRODUCTID_MASK 0xFFF
> +#define PMIIDR_PRODUCTID_SHIFT 20
> +
> +#define to_nv_pmu_impl(coresight_pmu) \
> + (container_of(coresight_pmu->impl.ops, struct nv_pmu_impl, ops))
> +
> +#define CORESIGHT_EVENT_ATTR_4_INNER(_pref, _num, _suff, _config) \
> + CORESIGHT_EVENT_ATTR(_pref##_num##_suff, _config)
> +
> +#define CORESIGHT_EVENT_ATTR_4(_pref, _suff, _config) \
> + CORESIGHT_EVENT_ATTR_4_INNER(_pref, _0_, _suff, _config), \
> + CORESIGHT_EVENT_ATTR_4_INNER(_pref, _1_, _suff, _config + 1), \
> + CORESIGHT_EVENT_ATTR_4_INNER(_pref, _2_, _suff, _config + 2), \
> + CORESIGHT_EVENT_ATTR_4_INNER(_pref, _3_, _suff, _config + 3)
> +
> +struct nv_pmu_impl {
> + struct coresight_pmu_impl_ops ops;
> + const char *name;
> + u32 filter_mask;
> + struct attribute **event_attr;
> + struct attribute **format_attr;
> +};
> +
> +static struct attribute *scf_pmu_event_attrs[] = {
> + CORESIGHT_EVENT_ATTR(bus_cycles, 0x1d),
> +
> + CORESIGHT_EVENT_ATTR(scf_cache_allocate, 0xF0),
> + CORESIGHT_EVENT_ATTR(scf_cache_refill, 0xF1),
> + CORESIGHT_EVENT_ATTR(scf_cache, 0xF2),
> + CORESIGHT_EVENT_ATTR(scf_cache_wb, 0xF3),
> +
> + CORESIGHT_EVENT_ATTR_4(socket, rd_data, 0x101),
> + CORESIGHT_EVENT_ATTR_4(socket, dl_rsp, 0x105),
> + CORESIGHT_EVENT_ATTR_4(socket, wb_data, 0x109),
> + CORESIGHT_EVENT_ATTR_4(socket, ev_rsp, 0x10d),
> + CORESIGHT_EVENT_ATTR_4(socket, prb_data, 0x111),
> +
> + CORESIGHT_EVENT_ATTR_4(socket, rd_outstanding, 0x115),
> + CORESIGHT_EVENT_ATTR_4(socket, dl_outstanding, 0x119),
> + CORESIGHT_EVENT_ATTR_4(socket, wb_outstanding, 0x11d),
> + CORESIGHT_EVENT_ATTR_4(socket, wr_outstanding, 0x121),
> + CORESIGHT_EVENT_ATTR_4(socket, ev_outstanding, 0x125),
> + CORESIGHT_EVENT_ATTR_4(socket, prb_outstanding, 0x129),
> +
> + CORESIGHT_EVENT_ATTR_4(socket, rd_access, 0x12d),
> + CORESIGHT_EVENT_ATTR_4(socket, dl_access, 0x131),
> + CORESIGHT_EVENT_ATTR_4(socket, wb_access, 0x135),
> + CORESIGHT_EVENT_ATTR_4(socket, wr_access, 0x139),
> + CORESIGHT_EVENT_ATTR_4(socket, ev_access, 0x13d),
> + CORESIGHT_EVENT_ATTR_4(socket, prb_access, 0x141),
> +
> + CORESIGHT_EVENT_ATTR_4(ocu, gmem_rd_data, 0x145),
> + CORESIGHT_EVENT_ATTR_4(ocu, gmem_rd_access, 0x149),
> + CORESIGHT_EVENT_ATTR_4(ocu, gmem_wb_access, 0x14d),
> + CORESIGHT_EVENT_ATTR_4(ocu, gmem_rd_outstanding, 0x151),
> + CORESIGHT_EVENT_ATTR_4(ocu, gmem_wr_outstanding, 0x155),
> +
> + CORESIGHT_EVENT_ATTR_4(ocu, rem_rd_data, 0x159),
> + CORESIGHT_EVENT_ATTR_4(ocu, rem_rd_access, 0x15d),
> + CORESIGHT_EVENT_ATTR_4(ocu, rem_wb_access, 0x161),
> + CORESIGHT_EVENT_ATTR_4(ocu, rem_rd_outstanding, 0x165),
> + CORESIGHT_EVENT_ATTR_4(ocu, rem_wr_outstanding, 0x169),
> +
> + CORESIGHT_EVENT_ATTR(gmem_rd_data, 0x16d),
> + CORESIGHT_EVENT_ATTR(gmem_rd_access, 0x16e),
> + CORESIGHT_EVENT_ATTR(gmem_rd_outstanding, 0x16f),
> + CORESIGHT_EVENT_ATTR(gmem_dl_rsp, 0x170),
> + CORESIGHT_EVENT_ATTR(gmem_dl_access, 0x171),
> + CORESIGHT_EVENT_ATTR(gmem_dl_outstanding, 0x172),
> + CORESIGHT_EVENT_ATTR(gmem_wb_data, 0x173),
> + CORESIGHT_EVENT_ATTR(gmem_wb_access, 0x174),
> + CORESIGHT_EVENT_ATTR(gmem_wb_outstanding, 0x175),
> + CORESIGHT_EVENT_ATTR(gmem_ev_rsp, 0x176),
> + CORESIGHT_EVENT_ATTR(gmem_ev_access, 0x177),
> + CORESIGHT_EVENT_ATTR(gmem_ev_outstanding, 0x178),
> + CORESIGHT_EVENT_ATTR(gmem_wr_data, 0x179),
> + CORESIGHT_EVENT_ATTR(gmem_wr_outstanding, 0x17a),
> + CORESIGHT_EVENT_ATTR(gmem_wr_access, 0x17b),
> +
> + CORESIGHT_EVENT_ATTR_4(socket, wr_data, 0x17c),
> +
> + CORESIGHT_EVENT_ATTR_4(ocu, gmem_wr_data, 0x180),
> + CORESIGHT_EVENT_ATTR_4(ocu, gmem_wb_data, 0x184),
> + CORESIGHT_EVENT_ATTR_4(ocu, gmem_wr_access, 0x188),
> + CORESIGHT_EVENT_ATTR_4(ocu, gmem_wb_outstanding, 0x18c),
> +
> + CORESIGHT_EVENT_ATTR_4(ocu, rem_wr_data, 0x190),
> + CORESIGHT_EVENT_ATTR_4(ocu, rem_wb_data, 0x194),
> + CORESIGHT_EVENT_ATTR_4(ocu, rem_wr_access, 0x198),
> + CORESIGHT_EVENT_ATTR_4(ocu, rem_wb_outstanding, 0x19c),
> +
> + CORESIGHT_EVENT_ATTR(gmem_wr_total_bytes, 0x1a0),
> + CORESIGHT_EVENT_ATTR(remote_socket_wr_total_bytes, 0x1a1),
> + CORESIGHT_EVENT_ATTR(remote_socket_rd_data, 0x1a2),
> + CORESIGHT_EVENT_ATTR(remote_socket_rd_outstanding, 0x1a3),
> + CORESIGHT_EVENT_ATTR(remote_socket_rd_access, 0x1a4),
> +
> + CORESIGHT_EVENT_ATTR(cmem_rd_data, 0x1a5),
> + CORESIGHT_EVENT_ATTR(cmem_rd_access, 0x1a6),
> + CORESIGHT_EVENT_ATTR(cmem_rd_outstanding, 0x1a7),
> + CORESIGHT_EVENT_ATTR(cmem_dl_rsp, 0x1a8),
> + CORESIGHT_EVENT_ATTR(cmem_dl_access, 0x1a9),
> + CORESIGHT_EVENT_ATTR(cmem_dl_outstanding, 0x1aa),
> + CORESIGHT_EVENT_ATTR(cmem_wb_data, 0x1ab),
> + CORESIGHT_EVENT_ATTR(cmem_wb_access, 0x1ac),
> + CORESIGHT_EVENT_ATTR(cmem_wb_outstanding, 0x1ad),
> + CORESIGHT_EVENT_ATTR(cmem_ev_rsp, 0x1ae),
> + CORESIGHT_EVENT_ATTR(cmem_ev_access, 0x1af),
> + CORESIGHT_EVENT_ATTR(cmem_ev_outstanding, 0x1b0),
> + CORESIGHT_EVENT_ATTR(cmem_wr_data, 0x1b1),
> + CORESIGHT_EVENT_ATTR(cmem_wr_outstanding, 0x1b2),
> +
> + CORESIGHT_EVENT_ATTR_4(ocu, cmem_rd_data, 0x1b3),
> + CORESIGHT_EVENT_ATTR_4(ocu, cmem_rd_access, 0x1b7),
> + CORESIGHT_EVENT_ATTR_4(ocu, cmem_wb_access, 0x1bb),
> + CORESIGHT_EVENT_ATTR_4(ocu, cmem_rd_outstanding, 0x1bf),
> + CORESIGHT_EVENT_ATTR_4(ocu, cmem_wr_outstanding, 0x1c3),
> +
> + CORESIGHT_EVENT_ATTR(ocu_prb_access, 0x1c7),
> + CORESIGHT_EVENT_ATTR(ocu_prb_data, 0x1c8),
> + CORESIGHT_EVENT_ATTR(ocu_prb_outstanding, 0x1c9),
> +
> + CORESIGHT_EVENT_ATTR(cmem_wr_access, 0x1ca),
> +
> + CORESIGHT_EVENT_ATTR_4(ocu, cmem_wr_access, 0x1cb),
> + CORESIGHT_EVENT_ATTR_4(ocu, cmem_wb_data, 0x1cf),
> + CORESIGHT_EVENT_ATTR_4(ocu, cmem_wr_data, 0x1d3),
> + CORESIGHT_EVENT_ATTR_4(ocu, cmem_wb_outstanding, 0x1d7),
> +
> + CORESIGHT_EVENT_ATTR(cmem_wr_total_bytes, 0x1db),
> +
> + CORESIGHT_EVENT_ATTR(cycles, CORESIGHT_PMU_EVT_CYCLES_DEFAULT),
> + NULL,
> +};
> +
> +static struct attribute *mcf_pmu_event_attrs[] = {
> + CORESIGHT_EVENT_ATTR(rd_bytes_loc, 0x0),
> + CORESIGHT_EVENT_ATTR(rd_bytes_rem, 0x1),
> + CORESIGHT_EVENT_ATTR(wr_bytes_loc, 0x2),
> + CORESIGHT_EVENT_ATTR(wr_bytes_rem, 0x3),
> + CORESIGHT_EVENT_ATTR(total_bytes_loc, 0x4),
> + CORESIGHT_EVENT_ATTR(total_bytes_rem, 0x5),
> + CORESIGHT_EVENT_ATTR(rd_req_loc, 0x6),
> + CORESIGHT_EVENT_ATTR(rd_req_rem, 0x7),
> + CORESIGHT_EVENT_ATTR(wr_req_loc, 0x8),
> + CORESIGHT_EVENT_ATTR(wr_req_rem, 0x9),
> + CORESIGHT_EVENT_ATTR(total_req_loc, 0xa),
> + CORESIGHT_EVENT_ATTR(total_req_rem, 0xb),
> + CORESIGHT_EVENT_ATTR(rd_cum_outs_loc, 0xc),
> + CORESIGHT_EVENT_ATTR(rd_cum_outs_rem, 0xd),
> + CORESIGHT_EVENT_ATTR(cycles, CORESIGHT_PMU_EVT_CYCLES_DEFAULT),
> + NULL,
> +};
> +
> +static struct attribute *scf_pmu_format_attrs[] = {
> + CORESIGHT_FORMAT_EVENT_ATTR,
> + NULL,
> +};
> +
> +static struct attribute *mcf_pcie_pmu_format_attrs[] = {
> + CORESIGHT_FORMAT_EVENT_ATTR,
> + CORESIGHT_FORMAT_ATTR(root_port, "config1:0-9"),
> + NULL,
> +};
> +
> +static struct attribute *mcf_gpu_pmu_format_attrs[] = {
> + CORESIGHT_FORMAT_EVENT_ATTR,
> + CORESIGHT_FORMAT_ATTR(gpu, "config1:0-1"),
> + NULL,
> +};
> +
> +static struct attribute *mcf_nvlink_pmu_format_attrs[] = {
> + CORESIGHT_FORMAT_EVENT_ATTR,
> + CORESIGHT_FORMAT_ATTR(socket, "config1:0-3"),
> + NULL,
> +};
> +
> +static struct attribute **
> +nv_coresight_pmu_get_event_attrs(const struct coresight_pmu *coresight_pmu)
> +{
> + const struct nv_pmu_impl *impl = to_nv_pmu_impl(coresight_pmu);
> +
> + return impl->event_attr;
> +}
> +
> +static struct attribute **
> +nv_coresight_pmu_get_format_attrs(const struct coresight_pmu *coresight_pmu)
> +{
> + const struct nv_pmu_impl *impl = to_nv_pmu_impl(coresight_pmu);
> +
> + return impl->format_attr;
> +}
> +
> +static const char *
> +nv_coresight_pmu_get_name(const struct coresight_pmu *coresight_pmu)
> +{
> + const struct nv_pmu_impl *impl = to_nv_pmu_impl(coresight_pmu);
> +
> + return impl->name;
> +}
> +
> +static u32 nv_coresight_pmu_event_filter(const struct perf_event *event)
> +{
> + const struct nv_pmu_impl *impl =
> + to_nv_pmu_impl(to_coresight_pmu(event->pmu));
> + return event->attr.config1 & impl->filter_mask;
> +}
> +
> +int nv_coresight_init_ops(struct coresight_pmu *coresight_pmu)
> +{
> + u32 product_id;
> + struct device *dev;
> + struct nv_pmu_impl *impl;
> + static atomic_t pmu_idx = {0};
> +
> + dev = coresight_pmu->dev;
> +
> + impl = devm_kzalloc(dev, sizeof(struct nv_pmu_impl), GFP_KERNEL);
> + if (!impl)
> + return -ENOMEM;
> +
> + product_id = (coresight_pmu->impl.pmiidr >> PMIIDR_PRODUCTID_SHIFT) &
> + PMIIDR_PRODUCTID_MASK;
> +
> + switch (product_id) {
> + case 0x103:
> + impl->name =
> + devm_kasprintf(dev, GFP_KERNEL,
> + "nvidia_mcf_pcie_pmu_%u",
> + coresight_pmu->apmt_node->proc_affinity);
> + impl->filter_mask = NV_MCF_PCIE_FILTER_ID_MASK;
> + impl->event_attr = mcf_pmu_event_attrs;
> + impl->format_attr = mcf_pcie_pmu_format_attrs;
> + break;
> + case 0x104:
> + impl->name =
> + devm_kasprintf(dev, GFP_KERNEL,
> + "nvidia_mcf_gpuvir_pmu_%u",
> + coresight_pmu->apmt_node->proc_affinity);
> + impl->filter_mask = NV_MCF_GPU_FILTER_ID_MASK;
> + impl->event_attr = mcf_pmu_event_attrs;
> + impl->format_attr = mcf_gpu_pmu_format_attrs;
> + break;
> + case 0x105:
> + impl->name =
> + devm_kasprintf(dev, GFP_KERNEL,
> + "nvidia_mcf_gpu_pmu_%u",
> + coresight_pmu->apmt_node->proc_affinity);
> + impl->filter_mask = NV_MCF_GPU_FILTER_ID_MASK;
> + impl->event_attr = mcf_pmu_event_attrs;
> + impl->format_attr = mcf_gpu_pmu_format_attrs;
> + break;
> + case 0x106:
> + impl->name =
> + devm_kasprintf(dev, GFP_KERNEL,
> + "nvidia_mcf_nvlink_pmu_%u",
> + coresight_pmu->apmt_node->proc_affinity);
> + impl->filter_mask = NV_MCF_NVLINK_FILTER_ID_MASK;
> + impl->event_attr = mcf_pmu_event_attrs;
> + impl->format_attr = mcf_nvlink_pmu_format_attrs;
> + break;
> + case 0x2CF:
> + impl->name =
> + devm_kasprintf(dev, GFP_KERNEL, "nvidia_scf_pmu_%u",
> + coresight_pmu->apmt_node->proc_affinity);
> + impl->filter_mask = 0x0;
> + impl->event_attr = scf_pmu_event_attrs;
> + impl->format_attr = scf_pmu_format_attrs;
> + break;
> + default:
> + impl->name =
> + devm_kasprintf(dev, GFP_KERNEL, "nvidia_uncore_pmu_%u",
> + atomic_fetch_inc(&pmu_idx));
> + impl->filter_mask = CORESIGHT_FILTER_MASK;
> + impl->event_attr = coresight_pmu_get_event_attrs(coresight_pmu);
> + impl->format_attr =
> + coresight_pmu_get_format_attrs(coresight_pmu);
Couldn't we make the generic driver fall back to these routines if an
"IP"-specific PMU backend doesn't implement these callbacks?
Thanks
Suzuki
I will look at this patchset next week.
Thanks,
Mathieu
On Tue, Jun 21, 2022 at 12:50:33AM -0500, Besar Wicaksono wrote:
> Add driver support for ARM CoreSight PMU device and event attributes for NVIDIA
> implementation. The code is based on ARM Coresight PMU architecture and ACPI ARM
> Performance Monitoring Unit table (APMT) specification below:
> * ARM Coresight PMU:
> https://developer.arm.com/documentation/ihi0091/latest
> * APMT: https://developer.arm.com/documentation/den0117/latest
>
> Notes:
> * There is a concern on the naming of the PMU device.
> Currently the driver is probing "arm-coresight-pmu" device, however the APMT
> spec supports different kinds of CoreSight PMU based implementation. So it is
> open for discussion if the name can stay or a "generic" name is required.
> Please see the following thread:
> http://lists.infradead.org/pipermail/linux-arm-kernel/2022-May/740485.html
>
> The patchset applies on top of
> https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
> master next-20220524
>
> Changes from v2:
> * Driver is now probing "arm-system-pmu" device.
> * Change default PMU naming to "arm_<APMT node type>_pmu".
> * Add implementor ops to generate custom name.
> Thanks to [email protected] for the review comments.
> v2: https://lore.kernel.org/all/[email protected]/
>
> Changes from v1:
> * Remove CPU arch dependency.
> * Remove 32-bit read/write helper function and just use read/writel.
> * Add .is_visible into event attribute to filter out cycle counter event.
> * Update pmiidr matching.
> * Remove read-modify-write on PMCR since the driver only writes to PMCR.E.
> * Assign default cycle event outside the 32-bit PMEVTYPER range.
> * Rework the active event and used counter tracking.
> Thanks to [email protected] for the review comments.
> v1: https://lore.kernel.org/all/[email protected]/
>
> Besar Wicaksono (2):
> perf: coresight_pmu: Add support for ARM CoreSight PMU driver
> perf: coresight_pmu: Add support for NVIDIA SCF and MCF attribute
>
> arch/arm64/configs/defconfig | 1 +
> drivers/perf/Kconfig | 2 +
> drivers/perf/Makefile | 1 +
> drivers/perf/coresight_pmu/Kconfig | 11 +
> drivers/perf/coresight_pmu/Makefile | 7 +
> .../perf/coresight_pmu/arm_coresight_pmu.c | 1316 +++++++++++++++++
> .../perf/coresight_pmu/arm_coresight_pmu.h | 177 +++
> .../coresight_pmu/arm_coresight_pmu_nvidia.c | 312 ++++
> .../coresight_pmu/arm_coresight_pmu_nvidia.h | 17 +
> 9 files changed, 1844 insertions(+)
> create mode 100644 drivers/perf/coresight_pmu/Kconfig
> create mode 100644 drivers/perf/coresight_pmu/Makefile
> create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu.c
> create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu.h
> create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.c
> create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.h
>
>
> base-commit: 09ce5091ff971cdbfd67ad84dc561ea27f10d67a
> --
> 2.17.1
>
Hi Suzuki,
Please see my replies inline.
> -----Original Message-----
> From: Suzuki K Poulose <[email protected]>
> Sent: Thursday, July 7, 2022 6:08 PM
> To: Besar Wicaksono <[email protected]>; [email protected];
> [email protected]; [email protected]; [email protected]
> Cc: [email protected]; [email protected];
> [email protected]; [email protected];
> [email protected]; [email protected]; Thierry Reding
> <[email protected]>; Jonathan Hunter <[email protected]>; Vikram
> Sethi <[email protected]>; [email protected];
> [email protected]; [email protected]
> Subject: Re: [RESEND PATCH v3 1/2] perf: coresight_pmu: Add support for
> ARM CoreSight PMU driver
>
> On 21/06/2022 06:50, Besar Wicaksono wrote:
> > Add support for ARM CoreSight PMU driver framework and interfaces.
> > The driver provides generic implementation to operate uncore PMU based
> > on ARM CoreSight PMU architecture. The driver also provides interface
> > to get vendor/implementation specific information, for example event
> > attributes and formatting.
> >
> > The specification used in this implementation can be found below:
> > * ACPI Arm Performance Monitoring Unit table:
> > https://developer.arm.com/documentation/den0117/latest
> > * ARM Coresight PMU architecture:
> > https://developer.arm.com/documentation/ihi0091/latest
> >
> > Signed-off-by: Besar Wicaksono <[email protected]>
> > ---
> > arch/arm64/configs/defconfig | 1 +
> > drivers/perf/Kconfig | 2 +
> > drivers/perf/Makefile | 1 +
> > drivers/perf/coresight_pmu/Kconfig | 11 +
> > drivers/perf/coresight_pmu/Makefile | 6 +
> > .../perf/coresight_pmu/arm_coresight_pmu.c | 1312
> +++++++++++++++++
> > .../perf/coresight_pmu/arm_coresight_pmu.h | 177 +++
> > 7 files changed, 1510 insertions(+)
> > create mode 100644 drivers/perf/coresight_pmu/Kconfig
> > create mode 100644 drivers/perf/coresight_pmu/Makefile
> > create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu.c
> > create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu.h
> >
> > diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
> > index 7d1105343bc2..22184f8883da 100644
> > --- a/arch/arm64/configs/defconfig
> > +++ b/arch/arm64/configs/defconfig
> > @@ -1212,6 +1212,7 @@ CONFIG_PHY_UNIPHIER_USB3=y
> > CONFIG_PHY_TEGRA_XUSB=y
> > CONFIG_PHY_AM654_SERDES=m
> > CONFIG_PHY_J721E_WIZ=m
> > +CONFIG_ARM_CORESIGHT_PMU=y
> > CONFIG_ARM_SMMU_V3_PMU=m
> > CONFIG_FSL_IMX8_DDR_PMU=m
> > CONFIG_QCOM_L2_PMU=y
> > diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig
> > index 1e2d69453771..c4e7cd5b4162 100644
> > --- a/drivers/perf/Kconfig
> > +++ b/drivers/perf/Kconfig
> > @@ -192,4 +192,6 @@ config MARVELL_CN10K_DDR_PMU
> > Enable perf support for Marvell DDR Performance monitoring
> > event on CN10K platform.
> >
> > +source "drivers/perf/coresight_pmu/Kconfig"
> > +
> > endmenu
> > diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile
> > index 57a279c61df5..4126a04b5583 100644
> > --- a/drivers/perf/Makefile
> > +++ b/drivers/perf/Makefile
> > @@ -20,3 +20,4 @@ obj-$(CONFIG_ARM_DMC620_PMU) +=
> arm_dmc620_pmu.o
> > obj-$(CONFIG_MARVELL_CN10K_TAD_PMU) +=
> marvell_cn10k_tad_pmu.o
> > obj-$(CONFIG_MARVELL_CN10K_DDR_PMU) +=
> marvell_cn10k_ddr_pmu.o
> > obj-$(CONFIG_APPLE_M1_CPU_PMU) += apple_m1_cpu_pmu.o
> > +obj-$(CONFIG_ARM_CORESIGHT_PMU) += coresight_pmu/
> > diff --git a/drivers/perf/coresight_pmu/Kconfig
> b/drivers/perf/coresight_pmu/Kconfig
> > new file mode 100644
> > index 000000000000..89174f54c7be
> > --- /dev/null
> > +++ b/drivers/perf/coresight_pmu/Kconfig
> > @@ -0,0 +1,11 @@
> > +# SPDX-License-Identifier: GPL-2.0
> > +#
> > +# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
> > +
> > +config ARM_CORESIGHT_PMU
> > + tristate "ARM Coresight PMU"
> > + depends on ACPI
> > + depends on ACPI_APMT || COMPILE_TEST
> > + help
> > + Provides support for Performance Monitoring Unit (PMU) events
> based on
> > + ARM CoreSight PMU architecture.
> > \ No newline at end of file
> > diff --git a/drivers/perf/coresight_pmu/Makefile
> b/drivers/perf/coresight_pmu/Makefile
> > new file mode 100644
> > index 000000000000..a2a7a5fbbc16
> > --- /dev/null
> > +++ b/drivers/perf/coresight_pmu/Makefile
> > @@ -0,0 +1,6 @@
> > +# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
> > +#
> > +# SPDX-License-Identifier: GPL-2.0
> > +
> > +obj-$(CONFIG_ARM_CORESIGHT_PMU) += \
> > + arm_coresight_pmu.o
> > diff --git a/drivers/perf/coresight_pmu/arm_coresight_pmu.c
> b/drivers/perf/coresight_pmu/arm_coresight_pmu.c
> > new file mode 100644
> > index 000000000000..ba52cc592b2d
> > --- /dev/null
> > +++ b/drivers/perf/coresight_pmu/arm_coresight_pmu.c
> > @@ -0,0 +1,1312 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * ARM CoreSight PMU driver.
> > + *
> > + * This driver adds support for uncore PMU based on ARM CoreSight
> Performance
> > + * Monitoring Unit Architecture. The PMU is accessible via MMIO registers
> and
> > + * like other uncore PMUs, it does not support process specific events and
> > + * cannot be used in sampling mode.
> > + *
> > + * This code is based on other uncore PMUs like ARM DSU PMU. It
> provides a
> > + * generic implementation to operate the PMU according to CoreSight
> PMU
> > + * architecture and ACPI ARM PMU table (APMT) documents below:
> > + * - ARM CoreSight PMU architecture document number: ARM IHI 0091
> A.a-00bet0.
> > + * - APMT document number: ARM DEN0117.
> > + * The description of the PMU, like the PMU device identification,
> available
> > + * events, and configuration options, is vendor specific. The driver
> provides
> > + * interface for vendor specific code to get this information. This allows
> the
> > + * driver to be shared with PMU from different vendors.
> > + *
> > + * The CoreSight PMU devices can be named using implementor specific
> format, or
> > + * with default naming format: arm_<apmt node type>_pmu_<numeric
> id>.
>
> ---
>
> > + * The description of the device, like the identifier, supported events, and
> > + * formats can be found in sysfs
> > + * /sys/bus/event_source/devices/arm_<apmt node
> type>_pmu_<numeric id>
>
> I don't think the above is needed. This is true for all PMU devices.
>
Acked, I will remove it in the next version.
> > + *
> > + * The user should refer to the vendor technical documentation to get
> details
> > + * about the supported events.
> > + *
> > + * Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
> > + *
> > + */
> > +
> > +#include <linux/acpi.h>
> > +#include <linux/cacheinfo.h>
> > +#include <linux/ctype.h>
> > +#include <linux/interrupt.h>
> > +#include <linux/io-64-nonatomic-lo-hi.h>
> > +#include <linux/module.h>
> > +#include <linux/mod_devicetable.h>
> > +#include <linux/perf_event.h>
> > +#include <linux/platform_device.h>
> > +#include <acpi/processor.h>
> > +
> > +#include "arm_coresight_pmu.h"
> > +
> > +#define PMUNAME "arm_system_pmu"
> > +
>
> If we have decied to call this arm_system_pmu, (which I am perfectly
> happy with), could we please stick to that name for functions that we
> export ?
>
> e.g,
> s/coresight_pmu_sysfs_event_show/arm_system_pmu_event_show()/
>
Just to confirm: is it just the public functions, or do we need to replace
everything with "coresight" naming, including the static functions, structs, and filenames?
> > +#define CORESIGHT_CPUMASK_ATTR(_name, _config) \
> > + CORESIGHT_EXT_ATTR(_name, coresight_pmu_cpumask_show, \
> > + (unsigned long)_config)
> > +
> > +/*
> > + * Register offsets based on CoreSight Performance Monitoring Unit
> Architecture
> > + * Document number: ARM-ECM-0640169 00alp6
> > + */
> > +#define PMEVCNTR_LO 0x0
> > +#define PMEVCNTR_HI 0x4
> > +#define PMEVTYPER 0x400
> > +#define PMCCFILTR 0x47C
> > +#define PMEVFILTR 0xA00
> > +#define PMCNTENSET 0xC00
> > +#define PMCNTENCLR 0xC20
> > +#define PMINTENSET 0xC40
> > +#define PMINTENCLR 0xC60
> > +#define PMOVSCLR 0xC80
> > +#define PMOVSSET 0xCC0
> > +#define PMCFGR 0xE00
> > +#define PMCR 0xE04
> > +#define PMIIDR 0xE08
> > +
> > +/* PMCFGR register field */
> > +#define PMCFGR_NCG_SHIFT 28
> > +#define PMCFGR_NCG_MASK 0xf
>
> Please could we instead do :
>
> PMCFGR_NCG GENMASK(31, 28)
> and similarly for all the other fields and use :
>
> FIELD_PREP(), FIELD_GET() helpers ?
>
Sure, thanks for pointing me to these helpers.
> > +#define PMCFGR_HDBG BIT(24)
> > +#define PMCFGR_TRO BIT(23)
> > +#define PMCFGR_SS BIT(22)
> > +#define PMCFGR_FZO BIT(21)
> > +#define PMCFGR_MSI BIT(20)
> > +#define PMCFGR_UEN BIT(19)
> > +#define PMCFGR_NA BIT(17)
> > +#define PMCFGR_EX BIT(16)
> > +#define PMCFGR_CCD BIT(15)
> > +#define PMCFGR_CC BIT(14)
> > +#define PMCFGR_SIZE_SHIFT 8
> > +#define PMCFGR_SIZE_MASK 0x3f
> > +#define PMCFGR_N_SHIFT 0
> > +#define PMCFGR_N_MASK 0xff
> > +
> > +/* PMCR register field */
> > +#define PMCR_TRO BIT(11)
> > +#define PMCR_HDBG BIT(10)
> > +#define PMCR_FZO BIT(9)
> > +#define PMCR_NA BIT(8)
> > +#define PMCR_DP BIT(5)
> > +#define PMCR_X BIT(4)
> > +#define PMCR_D BIT(3)
> > +#define PMCR_C BIT(2)
> > +#define PMCR_P BIT(1)
> > +#define PMCR_E BIT(0)
> > +
> > +/* PMIIDR register field */
> > +#define PMIIDR_IMPLEMENTER_MASK 0xFFF
> > +#define PMIIDR_PRODUCTID_MASK 0xFFF
> > +#define PMIIDR_PRODUCTID_SHIFT 20
> > +
> > +/* Each SET/CLR register supports up to 32 counters. */
> > +#define CORESIGHT_SET_CLR_REG_COUNTER_NUM 32
> > +#define CORESIGHT_SET_CLR_REG_COUNTER_SHIFT 5
>
> minor nits:
>
> Could we rename sucht that s/CORESIGHT/PMU ? Without the PMU, it could
> be confused as a CORESIGHT architecture specific thing.
>
> Also
>
> #define CORESIGHT_SET_CLR_REG_COUNTER_NUM \
>	(1 << CORESIGHT_SET_CLR_REG_COUNTER_SHIFT)
>
>
Acked, I will make the change on the next version.
> > +
> > +/* The number of 32-bit SET/CLR register that can be supported. */
> > +#define CORESIGHT_SET_CLR_REG_MAX_NUM ((PMCNTENCLR -
> PMCNTENSET) / sizeof(u32))
> > +
> > +static_assert((CORESIGHT_SET_CLR_REG_MAX_NUM *
> > + CORESIGHT_SET_CLR_REG_COUNTER_NUM) >=
> > + CORESIGHT_PMU_MAX_HW_CNTRS);
> > +
> > +/* Convert counter idx into SET/CLR register number. */
> > +#define CORESIGHT_IDX_TO_SET_CLR_REG_ID(idx) \
> > + (idx >> CORESIGHT_SET_CLR_REG_COUNTER_SHIFT)
> > +
> > +/* Convert counter idx into SET/CLR register bit. */
> > +#define CORESIGHT_IDX_TO_SET_CLR_REG_BIT(idx) \
> > + (idx & (CORESIGHT_SET_CLR_REG_COUNTER_NUM - 1))
>
> super minor nit:
>
> COUNTER_TO_SET_CLR_REG_IDX
> COUNTER_TO_SET_CLR_REG_BIT ?
>
> I don't see a need for prefixing them with CORESIGHT and making them
> unnecessary longer.
>
Acked.
> > +
> > +#define CORESIGHT_ACTIVE_CPU_MASK 0x0
> > +#define CORESIGHT_ASSOCIATED_CPU_MASK 0x1
> > +
> > +
> > +/* Check if field f in flags is set with value v */
> > +#define CHECK_APMT_FLAG(flags, f, v) \
> > + ((flags & (ACPI_APMT_FLAGS_ ## f)) == (ACPI_APMT_FLAGS_ ## f ##
> _ ## v))
> > +
> > +static unsigned long coresight_pmu_cpuhp_state;
> > +
> > +/*
> > + * In CoreSight PMU architecture, all of the MMIO registers are 32-bit
> except
> > + * counter register. The counter register can be implemented as 32-bit or
> 64-bit
> > + * register depending on the value of PMCFGR.SIZE field. For 64-bit
> access,
> > + * single-copy 64-bit atomic support is implementation defined. APMT
> node flag
> > + * is used to identify if the PMU supports 64-bit single copy atomic. If 64-
> bit
> > + * single copy atomic is not supported, the driver treats the register as a
> pair
> > + * of 32-bit register.
> > + */
> > +
> > +/*
> > + * Read 64-bit register as a pair of 32-bit registers using hi-lo-hi sequence.
> > + */
> > +static u64 read_reg64_hilohi(const void __iomem *addr)
> > +{
> > + u32 val_lo, val_hi;
> > + u64 val;
> > +
> > + /* Use high-low-high sequence to avoid tearing */
> > + do {
> > + val_hi = readl(addr + 4);
> > + val_lo = readl(addr);
> > + } while (val_hi != readl(addr + 4));
> > +
> > + val = (((u64)val_hi << 32) | val_lo);
> > +
> > + return val;
> > +}
> > +
> > +/* Check if PMU supports 64-bit single copy atomic. */
> > +static inline bool support_atomic(const struct coresight_pmu
> *coresight_pmu)
>
> minor nit: supports_64bit_atomics() ?
>
Acked.
> > +{
> > + return CHECK_APMT_FLAG(coresight_pmu->apmt_node->flags,
> ATOMIC, SUPP);
> > +}
> > +
> > +/* Check if cycle counter is supported. */
> > +static inline bool support_cc(const struct coresight_pmu *coresight_pmu)
>
> minor nit: supports_cycle_counter() ?
>
Acked.
> > +{
> > + return (coresight_pmu->pmcfgr & PMCFGR_CC);
> > +}
> > +
> > +/* Get counter size. */
> > +static inline u32 pmcfgr_size(const struct coresight_pmu *coresight_pmu)
>
>
> Please could we change this to
>
> counter_size(const struct coresight_pmu *coresight_pmu)
> {
> /* Counter size is PMCFGR_SIZE + 1 */
> return FIELD_GET(PMCFGR_SIZE, pmu->pmcfgr) + 1;
> }
>
> and also add :
>
> u64 counter_mask(..)
> {
> return GENMASK_ULL(counter_size(pmu) - 1, 0);
> }
>
> See more on this below.
>
Acked.
> > +{
> > + return (coresight_pmu->pmcfgr >> PMCFGR_SIZE_SHIFT) &
> PMCFGR_SIZE_MASK;
> > +}
>
>
>
> > +/* Check if counter is implemented as 64-bit register. */
> > +static inline bool
> > +use_64b_counter_reg(const struct coresight_pmu *coresight_pmu)
> > +{
> > + return (pmcfgr_size(coresight_pmu) > 31);
> > +}
> > +
> > +/* Get number of counters, minus one. */
> > +static inline u32 pmcfgr_n(const struct coresight_pmu *coresight_pmu)
> > +{
> > + return (coresight_pmu->pmcfgr >> PMCFGR_N_SHIFT) &
> PMCFGR_N_MASK;
> > +}
> > +
> > +ssize_t coresight_pmu_sysfs_event_show(struct device *dev,
> > + struct device_attribute *attr, char *buf)
> > +{
> > + struct dev_ext_attribute *eattr =
> > + container_of(attr, struct dev_ext_attribute, attr);
> > + return sysfs_emit(buf, "event=0x%llx\n",
> > + (unsigned long long)eattr->var);
> > +}
> > +EXPORT_SYMBOL_GPL(coresight_pmu_sysfs_event_show);
>
>
> > +
> > +/* Default event list. */
> > +static struct attribute *coresight_pmu_event_attrs[] = {
> > + CORESIGHT_EVENT_ATTR(cycles,
> CORESIGHT_PMU_EVT_CYCLES_DEFAULT),
> > + NULL,
> > +};
> > +
> > +struct attribute **
> > +coresight_pmu_get_event_attrs(const struct coresight_pmu
> *coresight_pmu)
> > +{
> > + return coresight_pmu_event_attrs;
> > +}
> > +EXPORT_SYMBOL_GPL(coresight_pmu_get_event_attrs);
> > +
> > +umode_t coresight_pmu_event_attr_is_visible(struct kobject *kobj,
> > + struct attribute *attr, int unused)
> > +{
> > + struct device *dev = kobj_to_dev(kobj);
> > + struct coresight_pmu *coresight_pmu =
> > + to_coresight_pmu(dev_get_drvdata(dev));
> > + struct perf_pmu_events_attr *eattr;
> > +
> > + eattr = container_of(attr, typeof(*eattr), attr.attr);
> > +
> > + /* Hide cycle event if not supported */
> > + if (!support_cc(coresight_pmu) &&
> > + eattr->id == CORESIGHT_PMU_EVT_CYCLES_DEFAULT) {
> > + return 0;
> > + }
> > +
> > + return attr->mode;
> > +}
> > +EXPORT_SYMBOL_GPL(coresight_pmu_event_attr_is_visible);
> > +
> > +ssize_t coresight_pmu_sysfs_format_show(struct device *dev,
> > + struct device_attribute *attr,
> > + char *buf)
> > +{
> > + struct dev_ext_attribute *eattr =
> > + container_of(attr, struct dev_ext_attribute, attr);
> > + return sysfs_emit(buf, "%s\n", (char *)eattr->var);
> > +}
> > +EXPORT_SYMBOL_GPL(coresight_pmu_sysfs_format_show);
> > +
> > +static struct attribute *coresight_pmu_format_attrs[] = {
> > + CORESIGHT_FORMAT_EVENT_ATTR,
> > + CORESIGHT_FORMAT_FILTER_ATTR,
> > + NULL,
> > +};
> > +
> > +struct attribute **
> > +coresight_pmu_get_format_attrs(const struct coresight_pmu
> *coresight_pmu)
> > +{
> > + return coresight_pmu_format_attrs;
> > +}
> > +EXPORT_SYMBOL_GPL(coresight_pmu_get_format_attrs);
> > +
> > +u32 coresight_pmu_event_type(const struct perf_event *event)
> > +{
> > + return event->attr.config & CORESIGHT_EVENT_MASK;
> > +}
> > +EXPORT_SYMBOL_GPL(coresight_pmu_event_type);
>
> Why do we need to export these ?
>
As you asked in the other question, we don't need to export it if the driver can
implicitly fall back to the default routine when the vendor does not provide a callback.
I will add a comment on coresight_pmu_impl_ops describing this expectation.
> > +
> > +u32 coresight_pmu_event_filter(const struct perf_event *event)
> > +{
> > + return event->attr.config1 & CORESIGHT_FILTER_MASK;
> > +}
> > +EXPORT_SYMBOL_GPL(coresight_pmu_event_filter);
> > +
> > +static ssize_t coresight_pmu_identifier_show(struct device *dev,
> > + struct device_attribute *attr,
> > + char *page)
> > +{
> > + struct coresight_pmu *coresight_pmu =
> > + to_coresight_pmu(dev_get_drvdata(dev));
> > +
> > + return sysfs_emit(page, "%s\n", coresight_pmu->identifier);
> > +}
> > +
> > +static struct device_attribute coresight_pmu_identifier_attr =
> > + __ATTR(identifier, 0444, coresight_pmu_identifier_show, NULL);
> > +
> > +static struct attribute *coresight_pmu_identifier_attrs[] = {
> > + &coresight_pmu_identifier_attr.attr,
> > + NULL,
> > +};
> > +
> > +static struct attribute_group coresight_pmu_identifier_attr_group = {
> > + .attrs = coresight_pmu_identifier_attrs,
> > +};
> > +
> > +const char *
> > +coresight_pmu_get_identifier(const struct coresight_pmu
> *coresight_pmu)
> > +{
> > + const char *identifier =
> > + devm_kasprintf(coresight_pmu->dev, GFP_KERNEL, "%x",
> > + coresight_pmu->impl.pmiidr);
> > + return identifier;
> > +}
> > +EXPORT_SYMBOL_GPL(coresight_pmu_get_identifier);
> > +
> > +static const char
> *coresight_pmu_type_str[ACPI_APMT_NODE_TYPE_COUNT] = {
> > + "mc",
> > + "smmu",
> > + "pcie",
> > + "acpi",
> > + "cache",
> > +};
> > +
> > +const char *coresight_pmu_get_name(const struct coresight_pmu
> *coresight_pmu)
> > +{
> > + struct device *dev;
> > + u8 pmu_type;
> > + char *name;
> > + char acpi_hid_string[ACPI_ID_LEN] = { 0 };
> > + static atomic_t pmu_idx[ACPI_APMT_NODE_TYPE_COUNT] = { 0 };
> > +
> > + dev = coresight_pmu->dev;
> > + pmu_type = coresight_pmu->apmt_node->type;
>
> Please could we use a variable to refer to the "coresight_pmu->apmt_node"
> ?
> It helps with code reading below and also gives us a hint to the
> type of that field, we refer multiple times below.
>
Acked.
> > +
> > + if (pmu_type >= ACPI_APMT_NODE_TYPE_COUNT) {
> > + dev_err(dev, "unsupported PMU type-%u\n", pmu_type);
> > + return NULL;
> > + }
> > +
> > + if (pmu_type == ACPI_APMT_NODE_TYPE_ACPI) {
> > + memcpy(acpi_hid_string,
> > + &coresight_pmu->apmt_node->inst_primary,
> > + sizeof(coresight_pmu->apmt_node->inst_primary));
> > + name = devm_kasprintf(dev, GFP_KERNEL,
> "arm_%s_pmu_%s_%u",
> > + coresight_pmu_type_str[pmu_type],
> > + acpi_hid_string,
> > + coresight_pmu->apmt_node->inst_secondary);
> > + } else {
> > + name = devm_kasprintf(dev, GFP_KERNEL, "arm_%s_pmu_%d",
> > + coresight_pmu_type_str[pmu_type],
> > + atomic_fetch_inc(&pmu_idx[pmu_type]));
> > + }
> > +
> > + return name;
> > +}
> > +EXPORT_SYMBOL_GPL(coresight_pmu_get_name);
> > +
> > +static ssize_t coresight_pmu_cpumask_show(struct device *dev,
> > + struct device_attribute *attr,
> > + char *buf)
> > +{
> > + struct pmu *pmu = dev_get_drvdata(dev);
> > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(pmu);
> > + struct dev_ext_attribute *eattr =
> > + container_of(attr, struct dev_ext_attribute, attr);
> > + unsigned long mask_id = (unsigned long)eattr->var;
> > + const cpumask_t *cpumask;
> > +
> > + switch (mask_id) {
> > + case CORESIGHT_ACTIVE_CPU_MASK:
> > + cpumask = &coresight_pmu->active_cpu;
> > + break;
> > + case CORESIGHT_ASSOCIATED_CPU_MASK:
> > + cpumask = &coresight_pmu->associated_cpus;
> > + break;
> > + default:
> > + return 0;
> > + }
> > + return cpumap_print_to_pagebuf(true, buf, cpumask);
> > +}
> > +
> > +static struct attribute *coresight_pmu_cpumask_attrs[] = {
> > + CORESIGHT_CPUMASK_ATTR(cpumask,
> CORESIGHT_ACTIVE_CPU_MASK),
> > + CORESIGHT_CPUMASK_ATTR(associated_cpus,
> CORESIGHT_ASSOCIATED_CPU_MASK),
> > + NULL,
> > +};
> > +
> > +static struct attribute_group coresight_pmu_cpumask_attr_group = {
> > + .attrs = coresight_pmu_cpumask_attrs,
> > +};
> > +
> > +static const struct coresight_pmu_impl_ops default_impl_ops = {
> > + .get_event_attrs = coresight_pmu_get_event_attrs,
> > + .get_format_attrs = coresight_pmu_get_format_attrs,
> > + .get_identifier = coresight_pmu_get_identifier,
> > + .get_name = coresight_pmu_get_name,
> > + .is_cc_event = coresight_pmu_is_cc_event,
> > + .event_type = coresight_pmu_event_type,
> > + .event_filter = coresight_pmu_event_filter,
> > + .event_attr_is_visible = coresight_pmu_event_attr_is_visible
> > +};
> > +
> > +struct impl_match {
> > + u32 pmiidr;
> > + u32 mask;
> > + int (*impl_init_ops)(struct coresight_pmu *coresight_pmu);
> > +};
> > +
> > +static const struct impl_match impl_match[] = {
> > + {}
> > +};
> > +
> > +static int coresight_pmu_init_impl_ops(struct coresight_pmu
> *coresight_pmu)
> > +{
> > + int idx, ret;
> > + struct acpi_apmt_node *apmt_node = coresight_pmu->apmt_node;
> > + const struct impl_match *match = impl_match;
> > +
> > + /*
> > + * Get PMU implementer and product id from APMT node.
> > + * If APMT node doesn't have implementer/product id, try get it
> > + * from PMIIDR.
> > + */
> > + coresight_pmu->impl.pmiidr =
> > + (apmt_node->impl_id) ? apmt_node->impl_id :
> > + readl(coresight_pmu->base0 + PMIIDR);
> > +
> > + /* Find implementer specific attribute ops. */
> > + for (idx = 0; match->pmiidr; match++, idx++) {
>
> idx is unused, please remove it.
>
Acked, thanks for catching this.
> > + if ((match->pmiidr & match->mask) ==
> > + (coresight_pmu->impl.pmiidr & match->mask)) {
> > + ret = match->impl_init_ops(coresight_pmu);
> > + if (ret)
> > + return ret;
> > +
> > + return 0;
> > + }
> > + }
> > +
> > + /* We don't find implementer specific attribute ops, use default. */
> > + coresight_pmu->impl.ops = &default_impl_ops;
> > + return 0;
> > +}
> > +
> > +static struct attribute_group *
> > +coresight_pmu_alloc_event_attr_group(struct coresight_pmu
> *coresight_pmu)
> > +{
> > + struct attribute_group *event_group;
> > + struct device *dev = coresight_pmu->dev;
> > +
> > + event_group =
> > + devm_kzalloc(dev, sizeof(struct attribute_group), GFP_KERNEL);
> > + if (!event_group)
> > + return NULL;
> > +
> > + event_group->name = "events";
> > + event_group->attrs =
> > + coresight_pmu->impl.ops->get_event_attrs(coresight_pmu);
>
>
> Is it mandatory for the PMUs to implement get_event_attrs() and all the
> call backs ? Does it makes sense to check and optionally call it, if the
> backend driver implements it ? Or at least we should make sure the
> required operations are filled in by backend.
>
>
> > + event_group->is_visible =
> > + coresight_pmu->impl.ops->event_attr_is_visible;
> > +
> > + return event_group;
> > +}
> > +
> > +static struct attribute_group *
> > +coresight_pmu_alloc_format_attr_group(struct coresight_pmu
> *coresight_pmu)
> > +{
> > + struct attribute_group *format_group;
> > + struct device *dev = coresight_pmu->dev;
> > +
> > + format_group =
> > + devm_kzalloc(dev, sizeof(struct attribute_group), GFP_KERNEL);
> > + if (!format_group)
> > + return NULL;
> > +
> > + format_group->name = "format";
> > + format_group->attrs =
> > + coresight_pmu->impl.ops->get_format_attrs(coresight_pmu);
>
> Same as above.
>
> > +
> > + return format_group;
> > +}
> > +
> > +static struct attribute_group **
> > +coresight_pmu_alloc_attr_group(struct coresight_pmu *coresight_pmu)
> > +{
> > + const struct coresight_pmu_impl_ops *impl_ops;
> > + struct attribute_group **attr_groups = NULL;
> > + struct device *dev = coresight_pmu->dev;
> > + int ret;
> > +
> > + ret = coresight_pmu_init_impl_ops(coresight_pmu);
> > + if (ret)
> > + return NULL;
> > +
> > + impl_ops = coresight_pmu->impl.ops;
> > +
> > + coresight_pmu->identifier = impl_ops->get_identifier(coresight_pmu);
> > + coresight_pmu->name = impl_ops->get_name(coresight_pmu);
>
> If the implementor doesn't have a naming scheme, could we default to the
> generic scheme ? In gerneal, we could fall back to the "generic"
> callbacks here if the implementor doesn't provide a specific call back.
> That way, we could skip the exporting a lot of the callbacks.
>
Yeah, we could implicitly call the default callback if the vendor does not provide one.
This also applies to the other vendor callbacks. I will add notes to coresight_pmu_impl_ops.
> > +
> > + if (!coresight_pmu->name)
> > + return NULL;
> > +
> > + attr_groups = devm_kzalloc(dev, 5 * sizeof(struct attribute_group *),
> > + GFP_KERNEL);
>
> nit: devm_kcalloc() ?
>
Sure.
> > + if (!attr_groups)
> > + return NULL;
> > +
> > + attr_groups[0] =
> coresight_pmu_alloc_event_attr_group(coresight_pmu);
> > + attr_groups[1] =
> coresight_pmu_alloc_format_attr_group(coresight_pmu);
>
> What if one of the above returned NULL and you wouldn't get the
> following listed ?
>
Yes, I missed the check. Thanks for catching this.
> > + attr_groups[2] = &coresight_pmu_identifier_attr_group;
> > + attr_groups[3] = &coresight_pmu_cpumask_attr_group;
> > +
> > + return attr_groups;
> > +}
> > +
> > +static inline void
> > +coresight_pmu_reset_counters(struct coresight_pmu *coresight_pmu)
> > +{
> > + u32 pmcr = 0;
> > +
> > + pmcr |= PMCR_P;
> > + pmcr |= PMCR_C;
> > + writel(pmcr, coresight_pmu->base0 + PMCR);
> > +}
> > +
> > +static inline void
> > +coresight_pmu_start_counters(struct coresight_pmu *coresight_pmu)
> > +{
> > + u32 pmcr;
> > +
> > + pmcr = PMCR_E;
> > + writel(pmcr, coresight_pmu->base0 + PMCR);
>
> nit: Isn't it easier to read as:
>
> writel(PMCR_E, ...);
Acked.
> > +}
> > +
> > +static inline void
> > +coresight_pmu_stop_counters(struct coresight_pmu *coresight_pmu)
> > +{
> > + u32 pmcr;
> > +
> > + pmcr = 0;
> > + writel(pmcr, coresight_pmu->base0 + PMCR);
>
> Similar to above,
> writel(0, ...);
> ?
>
Acked.
> > +}
> > +
> > +static void coresight_pmu_enable(struct pmu *pmu)
> > +{
> > + bool disabled;
> > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(pmu);
> > +
> > + disabled = bitmap_empty(coresight_pmu->hw_events.used_ctrs,
> > + coresight_pmu->num_logical_counters);
> > +
> > + if (disabled)
> > + return;
> > +
> > + coresight_pmu_start_counters(coresight_pmu);
> > +}
> > +
> > +static void coresight_pmu_disable(struct pmu *pmu)
> > +{
> > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(pmu);
> > +
> > + coresight_pmu_stop_counters(coresight_pmu);
> > +}
> > +
> > +bool coresight_pmu_is_cc_event(const struct perf_event *event)
> > +{
> > + return (event->attr.config == CORESIGHT_PMU_EVT_CYCLES_DEFAULT);
> > +}
> > +EXPORT_SYMBOL_GPL(coresight_pmu_is_cc_event);
> > +
> > +static int
> > +coresight_pmu_get_event_idx(struct coresight_pmu_hw_events *hw_events,
> > + struct perf_event *event)
> > +{
> > + int idx;
> > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> > +
> > + if (support_cc(coresight_pmu)) {
> > + if (coresight_pmu->impl.ops->is_cc_event(event)) {
> > + /* Search for available cycle counter. */
> > + if (test_and_set_bit(coresight_pmu->cc_logical_idx,
> > + hw_events->used_ctrs))
> > + return -EAGAIN;
> > +
> > + return coresight_pmu->cc_logical_idx;
> > + }
> > +
> > + /*
> > + * Search a regular counter from the used counter bitmap.
> > + * The cycle counter divides the bitmap into two parts. Search
> > + * the first then second half to exclude the cycle counter bit.
> > + */
> > + idx = find_first_zero_bit(hw_events->used_ctrs,
> > + coresight_pmu->cc_logical_idx);
> > + if (idx >= coresight_pmu->cc_logical_idx) {
> > + idx = find_next_zero_bit(
> > + hw_events->used_ctrs,
> > + coresight_pmu->num_logical_counters,
> > + coresight_pmu->cc_logical_idx + 1);
> > + }
> > + } else {
> > + idx = find_first_zero_bit(hw_events->used_ctrs,
> > + coresight_pmu->num_logical_counters);
> > + }
> > +
> > + if (idx >= coresight_pmu->num_logical_counters)
> > + return -EAGAIN;
> > +
> > + set_bit(idx, hw_events->used_ctrs);
> > +
> > + return idx;
> > +}
> > +
> > +static bool
> > +coresight_pmu_validate_event(struct pmu *pmu,
> > + struct coresight_pmu_hw_events *hw_events,
> > + struct perf_event *event)
> > +{
> > + if (is_software_event(event))
> > + return true;
> > +
> > + /* Reject groups spanning multiple HW PMUs. */
> > + if (event->pmu != pmu)
> > + return false;
> > +
> > + return (coresight_pmu_get_event_idx(hw_events, event) >= 0);
> > +}
> > +
> > +/*
> > + * Make sure the group of events can be scheduled at once
> > + * on the PMU.
> > + */
> > +static bool coresight_pmu_validate_group(struct perf_event *event)
> > +{
> > + struct perf_event *sibling, *leader = event->group_leader;
> > + struct coresight_pmu_hw_events fake_hw_events;
> > +
> > + if (event->group_leader == event)
> > + return true;
> > +
> > + memset(&fake_hw_events, 0, sizeof(fake_hw_events));
> > +
> > + if (!coresight_pmu_validate_event(event->pmu, &fake_hw_events, leader))
> > + return false;
> > +
> > + for_each_sibling_event(sibling, leader) {
> > + if (!coresight_pmu_validate_event(event->pmu, &fake_hw_events, sibling))
> > + return false;
> > + }
> > +
> > + return coresight_pmu_validate_event(event->pmu, &fake_hw_events, event);
> > +}
> > +
> > +static int coresight_pmu_event_init(struct perf_event *event)
> > +{
> > + struct coresight_pmu *coresight_pmu;
> > + struct hw_perf_event *hwc = &event->hw;
> > +
> > + coresight_pmu = to_coresight_pmu(event->pmu);
> > +
> > + /*
> > + * Following other "uncore" PMUs, we do not support sampling mode or
> > + * attach to a task (per-process mode).
> > + */
> > + if (is_sampling_event(event)) {
> > + dev_dbg(coresight_pmu->pmu.dev,
> > + "Can't support sampling events\n");
> > + return -EOPNOTSUPP;
> > + }
> > +
> > + if (event->cpu < 0 || event->attach_state & PERF_ATTACH_TASK) {
> > + dev_dbg(coresight_pmu->pmu.dev,
> > + "Can't support per-task counters\n");
> > + return -EINVAL;
> > + }
> > +
> > + /*
> > + * Make sure the CPU assignment is on one of the CPUs associated with
> > + * this PMU.
> > + */
> > + if (!cpumask_test_cpu(event->cpu, &coresight_pmu->associated_cpus)) {
> > + dev_dbg(coresight_pmu->pmu.dev,
> > + "Requested cpu is not associated with the PMU\n");
> > + return -EINVAL;
> > + }
> > +
> > + /* Enforce the current active CPU to handle the events in this PMU. */
> > + event->cpu = cpumask_first(&coresight_pmu->active_cpu);
> > + if (event->cpu >= nr_cpu_ids)
> > + return -EINVAL;
> > +
> > + if (!coresight_pmu_validate_group(event))
> > + return -EINVAL;
> > +
> > + /*
> > + * The logical counter id is tracked with hw_perf_event.extra_reg.idx.
> > + * The physical counter id is tracked with hw_perf_event.idx.
> > + * We don't assign an index until we actually place the event onto
> > + * hardware. Use -1 to signify that we haven't decided where to put it
> > + * yet.
> > + */
> > + hwc->idx = -1;
> > + hwc->extra_reg.idx = -1;
> > + hwc->config_base = coresight_pmu->impl.ops->event_type(event);
>
> nit: This is a bit confusing combined with pmu->base0, when we
> write the event config.
>
> Could we use hwc->config instead ?
>
I was following other drivers. I can change it to hwc->config since it is not used anywhere else.
> > +
> > + return 0;
> > +}
> > +
> > +static inline u32 counter_offset(u32 reg_sz, u32 ctr_idx)
> > +{
> > + return (PMEVCNTR_LO + (reg_sz * ctr_idx));
> > +}
> > +
> > +static void coresight_pmu_write_counter(struct perf_event *event, u64 val)
> > +{
> > + u32 offset;
> > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> > +
> > + if (use_64b_counter_reg(coresight_pmu)) {
> > + offset = counter_offset(sizeof(u64), event->hw.idx);
> > +
> > + writeq(val, coresight_pmu->base1 + offset);
> > + } else {
> > + offset = counter_offset(sizeof(u32), event->hw.idx);
> > +
> > + writel(lower_32_bits(val), coresight_pmu->base1 + offset);
> > + }
> > +}
> > +
> > +static u64 coresight_pmu_read_counter(struct perf_event *event)
> > +{
> > + u32 offset;
> > + const void __iomem *counter_addr;
> > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> > +
> > + if (use_64b_counter_reg(coresight_pmu)) {
> > + offset = counter_offset(sizeof(u64), event->hw.idx);
> > + counter_addr = coresight_pmu->base1 + offset;
> > +
> > + return support_atomic(coresight_pmu) ?
> > + readq(counter_addr) :
> > + read_reg64_hilohi(counter_addr);
> > + }
> > +
> > + offset = counter_offset(sizeof(u32), event->hw.idx);
> > + return readl(coresight_pmu->base1 + offset);
> > +}
> > +
> > +/*
> > + * coresight_pmu_set_event_period: Set the period for the counter.
> > + *
> > + * To handle cases of extreme interrupt latency, we program
> > + * the counter with half of the max count for the counters.
> > + */
> > +static void coresight_pmu_set_event_period(struct perf_event *event)
> > +{
> > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> > + u64 val = GENMASK_ULL(pmcfgr_size(coresight_pmu), 0) >> 1;
>
> Please could we add a helper to get the "counter mask" ? Otherwise this
> is a bit confusing. We would expect
> pmcfgr_size() = 64 for 64bit counters, but it is really 63 instead.
> and that made me stare the GENMASK_ULL() and go back trace it down
> to the spec. Please see my suggestion on the top to add counter_mask().
>
Sure, will do. I apologize for the confusion.
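A standalone sketch of what such a counter_mask() helper might look like, under the assumption that the PMCFGR.SIZE field encodes the counter width minus one (so 63 means a 64-bit counter); names here are illustrative, not the driver's:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper: size_field is the PMCFGR.SIZE encoding,
 * i.e. counter width minus one (63 for a 64-bit counter).
 * Returns the all-ones mask for a counter of that width. */
static uint64_t counter_mask(unsigned int size_field)
{
	unsigned int width = size_field + 1;

	/* 1ULL << 64 is undefined behavior, so special-case full width. */
	return (width >= 64) ? ~0ULL : (1ULL << width) - 1;
}
```

The half-max period then reads as counter_mask(...) >> 1 at the call site, without any GENMASK_ULL() off-by-one reasoning.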
> > +
> > + local64_set(&event->hw.prev_count, val);
> > + coresight_pmu_write_counter(event, val);
> > +}
> > +
> > +static void coresight_pmu_enable_counter(struct coresight_pmu *coresight_pmu,
> > + int idx)
> > +{
> > + u32 reg_id, reg_bit, inten_off, cnten_off;
> > +
> > + reg_id = CORESIGHT_IDX_TO_SET_CLR_REG_ID(idx);
> > + reg_bit = CORESIGHT_IDX_TO_SET_CLR_REG_BIT(idx);
> > +
> > + inten_off = PMINTENSET + (4 * reg_id);
> > + cnten_off = PMCNTENSET + (4 * reg_id);
> > +
> > + writel(BIT(reg_bit), coresight_pmu->base0 + inten_off);
> > + writel(BIT(reg_bit), coresight_pmu->base0 + cnten_off);
> > +}
> > +
> > +static void coresight_pmu_disable_counter(struct coresight_pmu *coresight_pmu,
> > + int idx)
> > +{
> > + u32 reg_id, reg_bit, inten_off, cnten_off;
> > +
> > + reg_id = CORESIGHT_IDX_TO_SET_CLR_REG_ID(idx);
> > + reg_bit = CORESIGHT_IDX_TO_SET_CLR_REG_BIT(idx);
> > +
> > + inten_off = PMINTENCLR + (4 * reg_id);
> > + cnten_off = PMCNTENCLR + (4 * reg_id);
> > +
> > + writel(BIT(reg_bit), coresight_pmu->base0 + cnten_off);
> > + writel(BIT(reg_bit), coresight_pmu->base0 + inten_off);
> > +}
> > +
> > +static void coresight_pmu_event_update(struct perf_event *event)
> > +{
> > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> > + struct hw_perf_event *hwc = &event->hw;
> > + u64 delta, prev, now;
> > +
> > + do {
> > + prev = local64_read(&hwc->prev_count);
> > + now = coresight_pmu_read_counter(event);
> > + } while (local64_cmpxchg(&hwc->prev_count, prev, now) != prev);
> > +
> > + delta = (now - prev) & GENMASK_ULL(pmcfgr_size(coresight_pmu), 0);
> > + local64_add(delta, &event->count);
> > +}
> > +
> > +static inline void coresight_pmu_set_event(struct coresight_pmu *coresight_pmu,
> > + struct hw_perf_event *hwc)
> > +{
> > + u32 offset = PMEVTYPER + (4 * hwc->idx);
> > +
> > + writel(hwc->config_base, coresight_pmu->base0 + offset);
> > +}
> > +
> > +static inline void
> > +coresight_pmu_set_ev_filter(struct coresight_pmu *coresight_pmu,
> > + struct hw_perf_event *hwc, u32 filter)
> > +{
> > + u32 offset = PMEVFILTR + (4 * hwc->idx);
> > +
> > + writel(filter, coresight_pmu->base0 + offset);
> > +}
> > +
> > +static inline void
> > +coresight_pmu_set_cc_filter(struct coresight_pmu *coresight_pmu, u32 filter)
> > +{
> > + u32 offset = PMCCFILTR;
> > +
> > + writel(filter, coresight_pmu->base0 + offset);
> > +}
> > +
> > +static void coresight_pmu_start(struct perf_event *event, int pmu_flags)
> > +{
> > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> > + struct hw_perf_event *hwc = &event->hw;
> > + u32 filter;
> > +
> > + /* We always reprogram the counter */
> > + if (pmu_flags & PERF_EF_RELOAD)
> > + WARN_ON(!(hwc->state & PERF_HES_UPTODATE));
> > +
> > + coresight_pmu_set_event_period(event);
> > +
> > + filter = coresight_pmu->impl.ops->event_filter(event);
> > +
> > + if (event->hw.extra_reg.idx == coresight_pmu->cc_logical_idx) {
> > + coresight_pmu_set_cc_filter(coresight_pmu, filter);
> > + } else {
> > + coresight_pmu_set_event(coresight_pmu, hwc);
> > + coresight_pmu_set_ev_filter(coresight_pmu, hwc, filter);
> > + }
> > +
> > + hwc->state = 0;
> > +
> > + coresight_pmu_enable_counter(coresight_pmu, hwc->idx);
> > +}
> > +
> > +static void coresight_pmu_stop(struct perf_event *event, int pmu_flags)
> > +{
> > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> > + struct hw_perf_event *hwc = &event->hw;
> > +
> > + if (hwc->state & PERF_HES_STOPPED)
> > + return;
> > +
> > + coresight_pmu_disable_counter(coresight_pmu, hwc->idx);
> > + coresight_pmu_event_update(event);
> > +
> > + hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE;
> > +}
> > +
> > +static inline u32 to_phys_idx(struct coresight_pmu *coresight_pmu, u32 idx)
> > +{
> > + return (idx == coresight_pmu->cc_logical_idx) ?
> > + CORESIGHT_PMU_IDX_CCNTR : idx;
> > +}
> > +
> > +static int coresight_pmu_add(struct perf_event *event, int flags)
> > +{
> > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> > + struct coresight_pmu_hw_events *hw_events = &coresight_pmu->hw_events;
> > + struct hw_perf_event *hwc = &event->hw;
> > + int idx;
> > +
> > + if (WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(),
> > + &coresight_pmu->associated_cpus)))
> > + return -ENOENT;
> > +
> > + idx = coresight_pmu_get_event_idx(hw_events, event);
> > + if (idx < 0)
> > + return idx;
> > +
> > + hw_events->events[idx] = event;
> > + hwc->idx = to_phys_idx(coresight_pmu, idx);
> > + hwc->extra_reg.idx = idx;
> > + hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
> > +
> > + if (flags & PERF_EF_START)
> > + coresight_pmu_start(event, PERF_EF_RELOAD);
> > +
> > + /* Propagate changes to the userspace mapping. */
> > + perf_event_update_userpage(event);
> > +
> > + return 0;
> > +}
> > +
> > +static void coresight_pmu_del(struct perf_event *event, int flags)
> > +{
> > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> > + struct coresight_pmu_hw_events *hw_events = &coresight_pmu->hw_events;
> > + struct hw_perf_event *hwc = &event->hw;
> > + int idx = hwc->extra_reg.idx;
> > +
> > + coresight_pmu_stop(event, PERF_EF_UPDATE);
> > +
> > + hw_events->events[idx] = NULL;
> > +
> > + clear_bit(idx, hw_events->used_ctrs);
> > +
> > + perf_event_update_userpage(event);
> > +}
> > +
> > +static void coresight_pmu_read(struct perf_event *event)
> > +{
> > + coresight_pmu_event_update(event);
> > +}
> > +
> > +static int coresight_pmu_alloc(struct platform_device *pdev,
> > + struct coresight_pmu **coresight_pmu)
> > +{
> > + struct acpi_apmt_node *apmt_node;
> > + struct device *dev;
> > + struct coresight_pmu *pmu;
> > +
> > + dev = &pdev->dev;
> > + apmt_node = *(struct acpi_apmt_node **)dev_get_platdata(dev);
> > + if (!apmt_node) {
> > + dev_err(dev, "failed to get APMT node\n");
> > + return -ENOMEM;
> > + }
> > +
> > + pmu = devm_kzalloc(dev, sizeof(*pmu), GFP_KERNEL);
> > + if (!pmu)
> > + return -ENOMEM;
> > +
> > + *coresight_pmu = pmu;
> > +
> > + pmu->dev = dev;
> > + pmu->apmt_node = apmt_node;
> > +
> > + platform_set_drvdata(pdev, coresight_pmu);
> > +
> > + return 0;
> > +}
> > +
> > +static int coresight_pmu_init_mmio(struct coresight_pmu *coresight_pmu)
> > +{
> > + struct device *dev;
> > + struct platform_device *pdev;
> > + struct acpi_apmt_node *apmt_node;
> > +
> > + dev = coresight_pmu->dev;
> > + pdev = to_platform_device(dev);
> > + apmt_node = coresight_pmu->apmt_node;
> > +
> > + /* Base address for page 0. */
> > + coresight_pmu->base0 = devm_platform_ioremap_resource(pdev, 0);
> > + if (IS_ERR(coresight_pmu->base0)) {
> > + dev_err(dev, "ioremap failed for page-0 resource\n");
> > + return PTR_ERR(coresight_pmu->base0);
> > + }
> > +
> > + /* Base address for page 1 if supported. Otherwise point it to page 0. */
> > + coresight_pmu->base1 = coresight_pmu->base0;
> > + if (CHECK_APMT_FLAG(apmt_node->flags, DUAL_PAGE, SUPP)) {
> > + coresight_pmu->base1 = devm_platform_ioremap_resource(pdev, 1);
> > + if (IS_ERR(coresight_pmu->base1)) {
> > + dev_err(dev, "ioremap failed for page-1 resource\n");
> > + return PTR_ERR(coresight_pmu->base1);
> > + }
> > + }
> > +
> > + coresight_pmu->pmcfgr = readl(coresight_pmu->base0 + PMCFGR);
> > +
> > + coresight_pmu->num_logical_counters = pmcfgr_n(coresight_pmu) + 1;
> > +
> > + coresight_pmu->cc_logical_idx = CORESIGHT_PMU_MAX_HW_CNTRS;
> > +
> > + if (support_cc(coresight_pmu)) {
> > + /*
> > + * The last logical counter is mapped to cycle counter if
> > + * there is a gap between regular and cycle counter. Otherwise,
> > + * logical and physical have 1-to-1 mapping.
> > + */
> > + coresight_pmu->cc_logical_idx =
> > + (coresight_pmu->num_logical_counters <=
> > + CORESIGHT_PMU_IDX_CCNTR) ?
> > + coresight_pmu->num_logical_counters - 1 :
> > + CORESIGHT_PMU_IDX_CCNTR;
> > + }
> > +
> > + coresight_pmu->num_set_clr_reg =
> > + DIV_ROUND_UP(coresight_pmu->num_logical_counters,
> > + CORESIGHT_SET_CLR_REG_COUNTER_NUM);
> > +
> > + coresight_pmu->hw_events.events =
> > + devm_kzalloc(dev,
> > + sizeof(*coresight_pmu->hw_events.events) *
> > + coresight_pmu->num_logical_counters,
> > + GFP_KERNEL);
> > +
> > + if (!coresight_pmu->hw_events.events)
> > + return -ENOMEM;
> > +
> > + return 0;
> > +}
> > +
> > +static inline int
> > +coresight_pmu_get_reset_overflow(struct coresight_pmu *coresight_pmu,
> > + u32 *pmovs)
> > +{
> > + int i;
> > + u32 pmovclr_offset = PMOVSCLR;
> > + u32 has_overflowed = 0;
> > +
> > + for (i = 0; i < coresight_pmu->num_set_clr_reg; ++i) {
> > + pmovs[i] = readl(coresight_pmu->base1 + pmovclr_offset);
> > + has_overflowed |= pmovs[i];
> > + writel(pmovs[i], coresight_pmu->base1 + pmovclr_offset);
> > + pmovclr_offset += sizeof(u32);
> > + }
> > +
> > + return has_overflowed != 0;
> > +}
> > +
> > +static irqreturn_t coresight_pmu_handle_irq(int irq_num, void *dev)
> > +{
> > + int idx, has_overflowed;
> > + struct perf_event *event;
> > + struct coresight_pmu *coresight_pmu = dev;
> > + u32 pmovs[CORESIGHT_SET_CLR_REG_MAX_NUM] = { 0 };
> > + bool handled = false;
> > +
> > + coresight_pmu_stop_counters(coresight_pmu);
> > +
> > + has_overflowed = coresight_pmu_get_reset_overflow(coresight_pmu, pmovs);
> > + if (!has_overflowed)
> > + goto done;
> > +
> > + for_each_set_bit(idx, coresight_pmu->hw_events.used_ctrs,
> > + coresight_pmu->num_logical_counters) {
> > + event = coresight_pmu->hw_events.events[idx];
> > +
> > + if (!event)
> > + continue;
> > +
> > + if (!test_bit(event->hw.idx, (unsigned long *)pmovs))
> > + continue;
> > +
> > + coresight_pmu_event_update(event);
> > + coresight_pmu_set_event_period(event);
> > +
> > + handled = true;
> > + }
> > +
> > +done:
> > + coresight_pmu_start_counters(coresight_pmu);
> > + return IRQ_RETVAL(handled);
> > +}
> > +
> > +static int coresight_pmu_request_irq(struct coresight_pmu *coresight_pmu)
> > +{
> > + int irq, ret;
> > + struct device *dev;
> > + struct platform_device *pdev;
> > + struct acpi_apmt_node *apmt_node;
> > +
> > + dev = coresight_pmu->dev;
> > + pdev = to_platform_device(dev);
> > + apmt_node = coresight_pmu->apmt_node;
> > +
> > + /* Skip IRQ request if the PMU does not support overflow interrupt. */
> > + if (apmt_node->ovflw_irq == 0)
> > + return 0;
> > +
> > + irq = platform_get_irq(pdev, 0);
> > + if (irq < 0)
> > + return irq;
> > +
> > + ret = devm_request_irq(dev, irq, coresight_pmu_handle_irq,
> > + IRQF_NOBALANCING | IRQF_NO_THREAD, dev_name(dev),
> > + coresight_pmu);
> > + if (ret) {
> > + dev_err(dev, "Could not request IRQ %d\n", irq);
> > + return ret;
> > + }
> > +
> > + coresight_pmu->irq = irq;
> > +
> > + return 0;
> > +}
> > +
> > +static inline int coresight_pmu_find_cpu_container(int cpu, u32 container_uid)
> > +{
> > + u32 acpi_uid;
> > + struct device *cpu_dev = get_cpu_device(cpu);
> > + struct acpi_device *acpi_dev = ACPI_COMPANION(cpu_dev);
> > +
> > + if (!cpu_dev)
> > + return -ENODEV;
> > +
> > + while (acpi_dev) {
> > + if (!strcmp(acpi_device_hid(acpi_dev),
> > + ACPI_PROCESSOR_CONTAINER_HID) &&
> > + !kstrtouint(acpi_device_uid(acpi_dev), 0, &acpi_uid) &&
> > + acpi_uid == container_uid)
> > + return 0;
> > +
> > + acpi_dev = acpi_dev->parent;
> > + }
> > +
> > + return -ENODEV;
> > +}
> > +
> > +static int coresight_pmu_get_cpus(struct coresight_pmu *coresight_pmu)
> > +{
> > + struct acpi_apmt_node *apmt_node;
> > + int affinity_flag;
> > + int cpu;
> > +
> > + apmt_node = coresight_pmu->apmt_node;
> > + affinity_flag = apmt_node->flags & ACPI_APMT_FLAGS_AFFINITY;
> > +
> > + if (affinity_flag == ACPI_APMT_FLAGS_AFFINITY_PROC) {
> > + for_each_possible_cpu(cpu) {
> > + if (apmt_node->proc_affinity ==
> > + get_acpi_id_for_cpu(cpu)) {
>
> minor nit:
>
> There is one another user of this reverse search for CPU by acpi_id.
> See get_cpu_for_acpi_id() in arch/arm64/kernel/acpi_numa.c.
>
> May be we should make that a generic helper and move it to common code.
>
Thanks for the reference. Is it okay to defer this to a follow-up patch?
> > + cpumask_set_cpu(
> > + cpu, &coresight_pmu->associated_cpus);
> > + break;
> > + }
> > + }
> > + } else {
> > + for_each_possible_cpu(cpu) {
> > + if (coresight_pmu_find_cpu_container(
> > + cpu, apmt_node->proc_affinity))
> > + continue;
> > +
> > + cpumask_set_cpu(cpu, &coresight_pmu->associated_cpus);
> > + }
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +static int coresight_pmu_register_pmu(struct coresight_pmu *coresight_pmu)
> > +{
> > + int ret;
> > + struct attribute_group **attr_groups;
> > +
> > + attr_groups = coresight_pmu_alloc_attr_group(coresight_pmu);
> > + if (!attr_groups) {
> > + ret = -ENOMEM;
> > + return ret;
> > + }
> > +
> > + ret = cpuhp_state_add_instance(coresight_pmu_cpuhp_state,
> > + &coresight_pmu->cpuhp_node);
> > + if (ret)
> > + return ret;
> > +
> > + coresight_pmu->pmu = (struct pmu){
> > + .task_ctx_nr = perf_invalid_context,
> > + .module = THIS_MODULE,
> > + .pmu_enable = coresight_pmu_enable,
> > + .pmu_disable = coresight_pmu_disable,
> > + .event_init = coresight_pmu_event_init,
> > + .add = coresight_pmu_add,
> > + .del = coresight_pmu_del,
> > + .start = coresight_pmu_start,
> > + .stop = coresight_pmu_stop,
> > + .read = coresight_pmu_read,
> > + .attr_groups = (const struct attribute_group **)attr_groups,
> > + .capabilities = PERF_PMU_CAP_NO_EXCLUDE,
> > + };
> > +
> > + /* Hardware counter init */
> > + coresight_pmu_stop_counters(coresight_pmu);
> > + coresight_pmu_reset_counters(coresight_pmu);
> > +
> > + ret = perf_pmu_register(&coresight_pmu->pmu, coresight_pmu->name, -1);
> > + if (ret) {
> > + cpuhp_state_remove_instance(coresight_pmu_cpuhp_state,
> > + &coresight_pmu->cpuhp_node);
> > + }
> > +
> > + return ret;
> > +}
> > +
> > +static int coresight_pmu_device_probe(struct platform_device *pdev)
> > +{
> > + int ret;
> > + struct coresight_pmu *coresight_pmu;
> > +
> > + ret = coresight_pmu_alloc(pdev, &coresight_pmu);
> > + if (ret)
> > + return ret;
> > +
> > + ret = coresight_pmu_init_mmio(coresight_pmu);
> > + if (ret)
> > + return ret;
> > +
> > + ret = coresight_pmu_request_irq(coresight_pmu);
> > + if (ret)
> > + return ret;
> > +
> > + ret = coresight_pmu_get_cpus(coresight_pmu);
> > + if (ret)
> > + return ret;
> > +
> > + ret = coresight_pmu_register_pmu(coresight_pmu);
> > + if (ret)
> > + return ret;
> > +
> > + return 0;
> > +}
> > +
> > +static int coresight_pmu_device_remove(struct platform_device *pdev)
> > +{
> > + struct coresight_pmu *coresight_pmu = platform_get_drvdata(pdev);
> > +
> > + perf_pmu_unregister(&coresight_pmu->pmu);
> > + cpuhp_state_remove_instance(coresight_pmu_cpuhp_state,
> > + &coresight_pmu->cpuhp_node);
> > +
> > + return 0;
> > +}
> > +
> > +static struct platform_driver coresight_pmu_driver = {
> > + .driver = {
> > + .name = "arm-system-pmu",
> > + .suppress_bind_attrs = true,
> > + },
> > + .probe = coresight_pmu_device_probe,
> > + .remove = coresight_pmu_device_remove,
> > +};
> > +
> > +static void coresight_pmu_set_active_cpu(int cpu,
> > + struct coresight_pmu *coresight_pmu)
> > +{
> > + cpumask_set_cpu(cpu, &coresight_pmu->active_cpu);
> > + WARN_ON(irq_set_affinity(coresight_pmu->irq,
> > + &coresight_pmu->active_cpu));
> > +}
> > +
> > +static int coresight_pmu_cpu_online(unsigned int cpu, struct hlist_node *node)
> > +{
> > + struct coresight_pmu *coresight_pmu =
> > + hlist_entry_safe(node, struct coresight_pmu, cpuhp_node);
> > +
> > + if (!cpumask_test_cpu(cpu, &coresight_pmu->associated_cpus))
> > + return 0;
> > +
> > + /* If the PMU is already managed, there is nothing to do */
> > + if (!cpumask_empty(&coresight_pmu->active_cpu))
> > + return 0;
> > +
> > + /* Use this CPU for event counting */
> > + coresight_pmu_set_active_cpu(cpu, coresight_pmu);
> > +
> > + return 0;
> > +}
> > +
> > +static int coresight_pmu_cpu_teardown(unsigned int cpu, struct hlist_node *node)
> > +{
> > + int dst;
> > + struct cpumask online_supported;
> > +
> > + struct coresight_pmu *coresight_pmu =
> > + hlist_entry_safe(node, struct coresight_pmu, cpuhp_node);
> > +
> > + /* Nothing to do if this CPU doesn't own the PMU */
> > + if (!cpumask_test_and_clear_cpu(cpu, &coresight_pmu->active_cpu))
> > + return 0;
> > +
> > + /* Choose a new CPU to migrate ownership of the PMU to */
> > + cpumask_and(&online_supported, &coresight_pmu->associated_cpus,
> > + cpu_online_mask);
> > + dst = cpumask_any_but(&online_supported, cpu);
> > + if (dst >= nr_cpu_ids)
> > + return 0;
> > +
> > + /* Use this CPU for event counting */
> > + perf_pmu_migrate_context(&coresight_pmu->pmu, cpu, dst);
> > + coresight_pmu_set_active_cpu(dst, coresight_pmu);
> > +
> > + return 0;
> > +}
> > +
> > +static int __init coresight_pmu_init(void)
> > +{
> > + int ret;
> > +
> > + ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, PMUNAME,
> > + coresight_pmu_cpu_online,
> > + coresight_pmu_cpu_teardown);
> > + if (ret < 0)
> > + return ret;
> > + coresight_pmu_cpuhp_state = ret;
> > + return platform_driver_register(&coresight_pmu_driver);
> > +}
> > +
> > +static void __exit coresight_pmu_exit(void)
> > +{
> > + platform_driver_unregister(&coresight_pmu_driver);
> > + cpuhp_remove_multi_state(coresight_pmu_cpuhp_state);
> > +}
> > +
> > +module_init(coresight_pmu_init);
> > +module_exit(coresight_pmu_exit);
> > diff --git a/drivers/perf/coresight_pmu/arm_coresight_pmu.h b/drivers/perf/coresight_pmu/arm_coresight_pmu.h
> > new file mode 100644
> > index 000000000000..88fb4cd3dafa
> > --- /dev/null
> > +++ b/drivers/perf/coresight_pmu/arm_coresight_pmu.h
> > @@ -0,0 +1,177 @@
> > +/* SPDX-License-Identifier: GPL-2.0
> > + *
> > + * ARM CoreSight PMU driver.
> > + * Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
> > + *
> > + */
> > +
> > +#ifndef __ARM_CORESIGHT_PMU_H__
> > +#define __ARM_CORESIGHT_PMU_H__
> > +
> > +#include <linux/acpi.h>
> > +#include <linux/bitfield.h>
> > +#include <linux/cpumask.h>
> > +#include <linux/device.h>
> > +#include <linux/kernel.h>
> > +#include <linux/module.h>
> > +#include <linux/perf_event.h>
> > +#include <linux/platform_device.h>
> > +#include <linux/types.h>
> > +
> > +#define to_coresight_pmu(p) (container_of(p, struct coresight_pmu, pmu))
> > +
> > +#define CORESIGHT_EXT_ATTR(_name, _func, _config) \
> > + (&((struct dev_ext_attribute[]){ \
> > + { \
> > + .attr = __ATTR(_name, 0444, _func, NULL), \
> > + .var = (void *)_config \
> > + } \
> > + })[0].attr.attr)
> > +
> > +#define CORESIGHT_FORMAT_ATTR(_name, _config) \
> > + CORESIGHT_EXT_ATTR(_name, coresight_pmu_sysfs_format_show, \
> > + (char *)_config)
> > +
> > +#define CORESIGHT_EVENT_ATTR(_name, _config) \
> > + PMU_EVENT_ATTR_ID(_name, coresight_pmu_sysfs_event_show, _config)
> > +
> > +
> > +/* Default event id mask */
> > +#define CORESIGHT_EVENT_MASK 0xFFFFFFFFULL
>
> Please use GENMASK_ULL(), everywhere.
>
Acked.
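For reference, GENMASK_ULL(h, l) produces the mask with bits h down to l set; a simplified standalone model (the real macro in include/linux/bits.h adds build-time argument checks on top of this):

```c
#include <assert.h>

/* Simplified model of the kernel's GENMASK_ULL(h, l): set bits h..l
 * inclusive in a 64-bit value. Assumes 0 <= l <= h <= 63. */
#define GENMASK_ULL(h, l) \
	((~0ULL >> (63 - (h))) & (~0ULL << (l)))
```

With that, the default event id mask becomes GENMASK_ULL(31, 0), which documents the field width instead of the literal 0xFFFFFFFFULL.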
Thanks for all your comments, I will address them in the next version.
Regards,
Besar
> Thanks
> Suzuki
Hi Suzuki,
Please find my reply inline.
> -----Original Message-----
> From: Suzuki K Poulose <[email protected]>
> Sent: Thursday, July 7, 2022 6:08 PM
> To: Besar Wicaksono <[email protected]>; [email protected];
> [email protected]; [email protected]; [email protected]
> Cc: [email protected]; [email protected];
> [email protected]; [email protected];
> [email protected]; [email protected]; Thierry Reding
> <[email protected]>; Jonathan Hunter <[email protected]>; Vikram
> Sethi <[email protected]>; [email protected];
> [email protected]; [email protected]
> Subject: Re: [RESEND PATCH v3 2/2] perf: coresight_pmu: Add support for NVIDIA SCF and MCF attribute
>
> On 21/06/2022 06:50, Besar Wicaksono wrote:
> > Add support for NVIDIA System Cache Fabric (SCF) and Memory Control
> > Fabric (MCF) PMU attributes for CoreSight PMU implementation in
> > NVIDIA devices.
> >
> > Signed-off-by: Besar Wicaksono <[email protected]>
> > ---
> > drivers/perf/coresight_pmu/Makefile | 3 +-
> > .../perf/coresight_pmu/arm_coresight_pmu.c | 4 +
> > .../coresight_pmu/arm_coresight_pmu_nvidia.c | 312
> ++++++++++++++++++
> > .../coresight_pmu/arm_coresight_pmu_nvidia.h | 17 +
> > 4 files changed, 335 insertions(+), 1 deletion(-)
> > create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.c
> > create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.h
> >
> > diff --git a/drivers/perf/coresight_pmu/Makefile b/drivers/perf/coresight_pmu/Makefile
> > index a2a7a5fbbc16..181b1b0dbaa1 100644
> > --- a/drivers/perf/coresight_pmu/Makefile
> > +++ b/drivers/perf/coresight_pmu/Makefile
> > @@ -3,4 +3,5 @@
> > # SPDX-License-Identifier: GPL-2.0
> >
> > obj-$(CONFIG_ARM_CORESIGHT_PMU) += \
> > - arm_coresight_pmu.o
> > + arm_coresight_pmu.o \
> > + arm_coresight_pmu_nvidia.o
> > diff --git a/drivers/perf/coresight_pmu/arm_coresight_pmu.c b/drivers/perf/coresight_pmu/arm_coresight_pmu.c
> > index ba52cc592b2d..12179d029bfd 100644
> > --- a/drivers/perf/coresight_pmu/arm_coresight_pmu.c
> > +++ b/drivers/perf/coresight_pmu/arm_coresight_pmu.c
> > @@ -42,6 +42,7 @@
> > #include <acpi/processor.h>
> >
> > #include "arm_coresight_pmu.h"
> > +#include "arm_coresight_pmu_nvidia.h"
> >
> > #define PMUNAME "arm_system_pmu"
> >
> > @@ -396,6 +397,9 @@ struct impl_match {
> > };
> >
> > static const struct impl_match impl_match[] = {
> > + { .pmiidr = 0x36B,
> > + .mask = PMIIDR_IMPLEMENTER_MASK,
> > + .impl_init_ops = nv_coresight_init_ops },
> > {}
> > };
> >
> > diff --git a/drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.c b/drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.c
> > new file mode 100644
> > index 000000000000..54f4eae4c529
> > --- /dev/null
> > +++ b/drivers/perf/coresight_pmu/arm_coresight_pmu_nvidia.c
> > @@ -0,0 +1,312 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
> > + *
> > + */
> > +
> > +/* Support for NVIDIA specific attributes. */
> > +
> > +#include "arm_coresight_pmu_nvidia.h"
> > +
> > +#define NV_MCF_PCIE_PORT_COUNT 10ULL
> > +#define NV_MCF_PCIE_FILTER_ID_MASK ((1ULL << NV_MCF_PCIE_PORT_COUNT) - 1)
> > +
> > +#define NV_MCF_GPU_PORT_COUNT 2ULL
> > +#define NV_MCF_GPU_FILTER_ID_MASK ((1ULL << NV_MCF_GPU_PORT_COUNT) - 1)
> > +
> > +#define NV_MCF_NVLINK_PORT_COUNT 4ULL
> > +#define NV_MCF_NVLINK_FILTER_ID_MASK ((1ULL << NV_MCF_NVLINK_PORT_COUNT) - 1)
> > +
> > +#define PMIIDR_PRODUCTID_MASK 0xFFF
> > +#define PMIIDR_PRODUCTID_SHIFT 20
> > +
> > +#define to_nv_pmu_impl(coresight_pmu) \
> > + (container_of(coresight_pmu->impl.ops, struct nv_pmu_impl, ops))
> > +
> > +#define CORESIGHT_EVENT_ATTR_4_INNER(_pref, _num, _suff, _config) \
> > + CORESIGHT_EVENT_ATTR(_pref##_num##_suff, _config)
> > +
> > +#define CORESIGHT_EVENT_ATTR_4(_pref, _suff, _config) \
> > + CORESIGHT_EVENT_ATTR_4_INNER(_pref, _0_, _suff, _config), \
> > + CORESIGHT_EVENT_ATTR_4_INNER(_pref, _1_, _suff, _config + 1), \
> > + CORESIGHT_EVENT_ATTR_4_INNER(_pref, _2_, _suff, _config + 2), \
> > + CORESIGHT_EVENT_ATTR_4_INNER(_pref, _3_, _suff, _config + 3)
> > +
> > +struct nv_pmu_impl {
> > + struct coresight_pmu_impl_ops ops;
> > + const char *name;
> > + u32 filter_mask;
> > + struct attribute **event_attr;
> > + struct attribute **format_attr;
> > +};
> > +
> > +static struct attribute *scf_pmu_event_attrs[] = {
> > + CORESIGHT_EVENT_ATTR(bus_cycles, 0x1d),
> > +
> > + CORESIGHT_EVENT_ATTR(scf_cache_allocate, 0xF0),
> > + CORESIGHT_EVENT_ATTR(scf_cache_refill, 0xF1),
> > + CORESIGHT_EVENT_ATTR(scf_cache, 0xF2),
> > + CORESIGHT_EVENT_ATTR(scf_cache_wb, 0xF3),
> > +
> > + CORESIGHT_EVENT_ATTR_4(socket, rd_data, 0x101),
> > + CORESIGHT_EVENT_ATTR_4(socket, dl_rsp, 0x105),
> > + CORESIGHT_EVENT_ATTR_4(socket, wb_data, 0x109),
> > + CORESIGHT_EVENT_ATTR_4(socket, ev_rsp, 0x10d),
> > + CORESIGHT_EVENT_ATTR_4(socket, prb_data, 0x111),
> > +
> > + CORESIGHT_EVENT_ATTR_4(socket, rd_outstanding, 0x115),
> > + CORESIGHT_EVENT_ATTR_4(socket, dl_outstanding, 0x119),
> > + CORESIGHT_EVENT_ATTR_4(socket, wb_outstanding, 0x11d),
> > + CORESIGHT_EVENT_ATTR_4(socket, wr_outstanding, 0x121),
> > + CORESIGHT_EVENT_ATTR_4(socket, ev_outstanding, 0x125),
> > + CORESIGHT_EVENT_ATTR_4(socket, prb_outstanding, 0x129),
> > +
> > + CORESIGHT_EVENT_ATTR_4(socket, rd_access, 0x12d),
> > + CORESIGHT_EVENT_ATTR_4(socket, dl_access, 0x131),
> > + CORESIGHT_EVENT_ATTR_4(socket, wb_access, 0x135),
> > + CORESIGHT_EVENT_ATTR_4(socket, wr_access, 0x139),
> > + CORESIGHT_EVENT_ATTR_4(socket, ev_access, 0x13d),
> > + CORESIGHT_EVENT_ATTR_4(socket, prb_access, 0x141),
> > +
> > + CORESIGHT_EVENT_ATTR_4(ocu, gmem_rd_data, 0x145),
> > + CORESIGHT_EVENT_ATTR_4(ocu, gmem_rd_access, 0x149),
> > + CORESIGHT_EVENT_ATTR_4(ocu, gmem_wb_access, 0x14d),
> > + CORESIGHT_EVENT_ATTR_4(ocu, gmem_rd_outstanding, 0x151),
> > + CORESIGHT_EVENT_ATTR_4(ocu, gmem_wr_outstanding, 0x155),
> > +
> > + CORESIGHT_EVENT_ATTR_4(ocu, rem_rd_data, 0x159),
> > + CORESIGHT_EVENT_ATTR_4(ocu, rem_rd_access, 0x15d),
> > + CORESIGHT_EVENT_ATTR_4(ocu, rem_wb_access, 0x161),
> > + CORESIGHT_EVENT_ATTR_4(ocu, rem_rd_outstanding, 0x165),
> > + CORESIGHT_EVENT_ATTR_4(ocu, rem_wr_outstanding, 0x169),
> > +
> > + CORESIGHT_EVENT_ATTR(gmem_rd_data, 0x16d),
> > + CORESIGHT_EVENT_ATTR(gmem_rd_access, 0x16e),
> > + CORESIGHT_EVENT_ATTR(gmem_rd_outstanding, 0x16f),
> > + CORESIGHT_EVENT_ATTR(gmem_dl_rsp, 0x170),
> > + CORESIGHT_EVENT_ATTR(gmem_dl_access, 0x171),
> > + CORESIGHT_EVENT_ATTR(gmem_dl_outstanding, 0x172),
> > + CORESIGHT_EVENT_ATTR(gmem_wb_data, 0x173),
> > + CORESIGHT_EVENT_ATTR(gmem_wb_access, 0x174),
> > + CORESIGHT_EVENT_ATTR(gmem_wb_outstanding, 0x175),
> > + CORESIGHT_EVENT_ATTR(gmem_ev_rsp, 0x176),
> > + CORESIGHT_EVENT_ATTR(gmem_ev_access, 0x177),
> > + CORESIGHT_EVENT_ATTR(gmem_ev_outstanding, 0x178),
> > + CORESIGHT_EVENT_ATTR(gmem_wr_data, 0x179),
> > + CORESIGHT_EVENT_ATTR(gmem_wr_outstanding, 0x17a),
> > + CORESIGHT_EVENT_ATTR(gmem_wr_access, 0x17b),
> > +
> > + CORESIGHT_EVENT_ATTR_4(socket, wr_data, 0x17c),
> > +
> > + CORESIGHT_EVENT_ATTR_4(ocu, gmem_wr_data, 0x180),
> > + CORESIGHT_EVENT_ATTR_4(ocu, gmem_wb_data, 0x184),
> > + CORESIGHT_EVENT_ATTR_4(ocu, gmem_wr_access, 0x188),
> > + CORESIGHT_EVENT_ATTR_4(ocu, gmem_wb_outstanding, 0x18c),
> > +
> > + CORESIGHT_EVENT_ATTR_4(ocu, rem_wr_data, 0x190),
> > + CORESIGHT_EVENT_ATTR_4(ocu, rem_wb_data, 0x194),
> > + CORESIGHT_EVENT_ATTR_4(ocu, rem_wr_access, 0x198),
> > + CORESIGHT_EVENT_ATTR_4(ocu, rem_wb_outstanding, 0x19c),
> > +
> > + CORESIGHT_EVENT_ATTR(gmem_wr_total_bytes, 0x1a0),
> > + CORESIGHT_EVENT_ATTR(remote_socket_wr_total_bytes, 0x1a1),
> > + CORESIGHT_EVENT_ATTR(remote_socket_rd_data, 0x1a2),
> > + CORESIGHT_EVENT_ATTR(remote_socket_rd_outstanding, 0x1a3),
> > + CORESIGHT_EVENT_ATTR(remote_socket_rd_access, 0x1a4),
> > +
> > + CORESIGHT_EVENT_ATTR(cmem_rd_data, 0x1a5),
> > + CORESIGHT_EVENT_ATTR(cmem_rd_access, 0x1a6),
> > + CORESIGHT_EVENT_ATTR(cmem_rd_outstanding, 0x1a7),
> > + CORESIGHT_EVENT_ATTR(cmem_dl_rsp, 0x1a8),
> > + CORESIGHT_EVENT_ATTR(cmem_dl_access, 0x1a9),
> > + CORESIGHT_EVENT_ATTR(cmem_dl_outstanding, 0x1aa),
> > + CORESIGHT_EVENT_ATTR(cmem_wb_data, 0x1ab),
> > + CORESIGHT_EVENT_ATTR(cmem_wb_access, 0x1ac),
> > + CORESIGHT_EVENT_ATTR(cmem_wb_outstanding, 0x1ad),
> > + CORESIGHT_EVENT_ATTR(cmem_ev_rsp, 0x1ae),
> > + CORESIGHT_EVENT_ATTR(cmem_ev_access, 0x1af),
> > + CORESIGHT_EVENT_ATTR(cmem_ev_outstanding, 0x1b0),
> > + CORESIGHT_EVENT_ATTR(cmem_wr_data, 0x1b1),
> > + CORESIGHT_EVENT_ATTR(cmem_wr_outstanding, 0x1b2),
> > +
> > + CORESIGHT_EVENT_ATTR_4(ocu, cmem_rd_data, 0x1b3),
> > + CORESIGHT_EVENT_ATTR_4(ocu, cmem_rd_access, 0x1b7),
> > + CORESIGHT_EVENT_ATTR_4(ocu, cmem_wb_access, 0x1bb),
> > + CORESIGHT_EVENT_ATTR_4(ocu, cmem_rd_outstanding, 0x1bf),
> > + CORESIGHT_EVENT_ATTR_4(ocu, cmem_wr_outstanding, 0x1c3),
> > +
> > + CORESIGHT_EVENT_ATTR(ocu_prb_access, 0x1c7),
> > + CORESIGHT_EVENT_ATTR(ocu_prb_data, 0x1c8),
> > + CORESIGHT_EVENT_ATTR(ocu_prb_outstanding, 0x1c9),
> > +
> > + CORESIGHT_EVENT_ATTR(cmem_wr_access, 0x1ca),
> > +
> > + CORESIGHT_EVENT_ATTR_4(ocu, cmem_wr_access, 0x1cb),
> > + CORESIGHT_EVENT_ATTR_4(ocu, cmem_wb_data, 0x1cf),
> > + CORESIGHT_EVENT_ATTR_4(ocu, cmem_wr_data, 0x1d3),
> > + CORESIGHT_EVENT_ATTR_4(ocu, cmem_wb_outstanding, 0x1d7),
> > +
> > + CORESIGHT_EVENT_ATTR(cmem_wr_total_bytes, 0x1db),
> > +
> > +	CORESIGHT_EVENT_ATTR(cycles, CORESIGHT_PMU_EVT_CYCLES_DEFAULT),
> > + NULL,
> > +};
> > +
> > +static struct attribute *mcf_pmu_event_attrs[] = {
> > + CORESIGHT_EVENT_ATTR(rd_bytes_loc, 0x0),
> > + CORESIGHT_EVENT_ATTR(rd_bytes_rem, 0x1),
> > + CORESIGHT_EVENT_ATTR(wr_bytes_loc, 0x2),
> > + CORESIGHT_EVENT_ATTR(wr_bytes_rem, 0x3),
> > + CORESIGHT_EVENT_ATTR(total_bytes_loc, 0x4),
> > + CORESIGHT_EVENT_ATTR(total_bytes_rem, 0x5),
> > + CORESIGHT_EVENT_ATTR(rd_req_loc, 0x6),
> > + CORESIGHT_EVENT_ATTR(rd_req_rem, 0x7),
> > + CORESIGHT_EVENT_ATTR(wr_req_loc, 0x8),
> > + CORESIGHT_EVENT_ATTR(wr_req_rem, 0x9),
> > + CORESIGHT_EVENT_ATTR(total_req_loc, 0xa),
> > + CORESIGHT_EVENT_ATTR(total_req_rem, 0xb),
> > + CORESIGHT_EVENT_ATTR(rd_cum_outs_loc, 0xc),
> > + CORESIGHT_EVENT_ATTR(rd_cum_outs_rem, 0xd),
> > +	CORESIGHT_EVENT_ATTR(cycles, CORESIGHT_PMU_EVT_CYCLES_DEFAULT),
> > + NULL,
> > +};
> > +
> > +static struct attribute *scf_pmu_format_attrs[] = {
> > + CORESIGHT_FORMAT_EVENT_ATTR,
> > + NULL,
> > +};
> > +
> > +static struct attribute *mcf_pcie_pmu_format_attrs[] = {
> > + CORESIGHT_FORMAT_EVENT_ATTR,
> > + CORESIGHT_FORMAT_ATTR(root_port, "config1:0-9"),
> > + NULL,
> > +};
> > +
> > +static struct attribute *mcf_gpu_pmu_format_attrs[] = {
> > + CORESIGHT_FORMAT_EVENT_ATTR,
> > + CORESIGHT_FORMAT_ATTR(gpu, "config1:0-1"),
> > + NULL,
> > +};
> > +
> > +static struct attribute *mcf_nvlink_pmu_format_attrs[] = {
> > + CORESIGHT_FORMAT_EVENT_ATTR,
> > + CORESIGHT_FORMAT_ATTR(socket, "config1:0-3"),
> > + NULL,
> > +};
> > +
> > +static struct attribute **
> > +nv_coresight_pmu_get_event_attrs(const struct coresight_pmu *coresight_pmu)
> > +{
> > + const struct nv_pmu_impl *impl = to_nv_pmu_impl(coresight_pmu);
> > +
> > + return impl->event_attr;
> > +}
> > +
> > +static struct attribute **
> > +nv_coresight_pmu_get_format_attrs(const struct coresight_pmu *coresight_pmu)
> > +{
> > + const struct nv_pmu_impl *impl = to_nv_pmu_impl(coresight_pmu);
> > +
> > + return impl->format_attr;
> > +}
> > +
> > +static const char *
> > +nv_coresight_pmu_get_name(const struct coresight_pmu *coresight_pmu)
> > +{
> > + const struct nv_pmu_impl *impl = to_nv_pmu_impl(coresight_pmu);
> > +
> > + return impl->name;
> > +}
> > +
> > +static u32 nv_coresight_pmu_event_filter(const struct perf_event *event)
> > +{
> > + const struct nv_pmu_impl *impl =
> > + to_nv_pmu_impl(to_coresight_pmu(event->pmu));
> > + return event->attr.config1 & impl->filter_mask;
> > +}
> > +
> > +int nv_coresight_init_ops(struct coresight_pmu *coresight_pmu)
> > +{
> > + u32 product_id;
> > + struct device *dev;
> > + struct nv_pmu_impl *impl;
> > + static atomic_t pmu_idx = {0};
> > +
> > + dev = coresight_pmu->dev;
> > +
> > + impl = devm_kzalloc(dev, sizeof(struct nv_pmu_impl), GFP_KERNEL);
> > + if (!impl)
> > + return -ENOMEM;
> > +
> > +	product_id = (coresight_pmu->impl.pmiidr >> PMIIDR_PRODUCTID_SHIFT) &
> > + PMIIDR_PRODUCTID_MASK;
> > +
> > + switch (product_id) {
> > + case 0x103:
> > + impl->name =
> > + devm_kasprintf(dev, GFP_KERNEL,
> > + "nvidia_mcf_pcie_pmu_%u",
> > + coresight_pmu->apmt_node->proc_affinity);
> > + impl->filter_mask = NV_MCF_PCIE_FILTER_ID_MASK;
> > + impl->event_attr = mcf_pmu_event_attrs;
> > + impl->format_attr = mcf_pcie_pmu_format_attrs;
> > + break;
> > + case 0x104:
> > + impl->name =
> > + devm_kasprintf(dev, GFP_KERNEL,
> > + "nvidia_mcf_gpuvir_pmu_%u",
> > + coresight_pmu->apmt_node->proc_affinity);
> > + impl->filter_mask = NV_MCF_GPU_FILTER_ID_MASK;
> > + impl->event_attr = mcf_pmu_event_attrs;
> > + impl->format_attr = mcf_gpu_pmu_format_attrs;
> > + break;
> > + case 0x105:
> > + impl->name =
> > + devm_kasprintf(dev, GFP_KERNEL,
> > + "nvidia_mcf_gpu_pmu_%u",
> > + coresight_pmu->apmt_node->proc_affinity);
> > + impl->filter_mask = NV_MCF_GPU_FILTER_ID_MASK;
> > + impl->event_attr = mcf_pmu_event_attrs;
> > + impl->format_attr = mcf_gpu_pmu_format_attrs;
> > + break;
> > + case 0x106:
> > + impl->name =
> > + devm_kasprintf(dev, GFP_KERNEL,
> > + "nvidia_mcf_nvlink_pmu_%u",
> > + coresight_pmu->apmt_node->proc_affinity);
> > + impl->filter_mask = NV_MCF_NVLINK_FILTER_ID_MASK;
> > + impl->event_attr = mcf_pmu_event_attrs;
> > + impl->format_attr = mcf_nvlink_pmu_format_attrs;
> > + break;
> > + case 0x2CF:
> > + impl->name =
> > + devm_kasprintf(dev, GFP_KERNEL, "nvidia_scf_pmu_%u",
> > + coresight_pmu->apmt_node->proc_affinity);
> > + impl->filter_mask = 0x0;
> > + impl->event_attr = scf_pmu_event_attrs;
> > + impl->format_attr = scf_pmu_format_attrs;
> > + break;
> > + default:
> > + impl->name =
> > + devm_kasprintf(dev, GFP_KERNEL, "nvidia_uncore_pmu_%u",
> > + atomic_fetch_inc(&pmu_idx));
> > + impl->filter_mask = CORESIGHT_FILTER_MASK;
> > +		impl->event_attr = coresight_pmu_get_event_attrs(coresight_pmu);
> > +		impl->format_attr = coresight_pmu_get_format_attrs(coresight_pmu);
>
> Couldn't we make the generic driver fall back to these routines if an
> "IP"-PMU-specific backend doesn't implement these callbacks?
>
Yes, we can do that. I will add it in the next version.
Regards,
Besar
> Thanks
> Suzuki
Good day,
On Fri, Jul 08, 2022 at 10:46:46AM +0000, Besar Wicaksono wrote:
> Hi Suzuki,
>
> Please see my replies inline.
>
> > -----Original Message-----
> > From: Suzuki K Poulose <[email protected]>
> > Sent: Thursday, July 7, 2022 6:08 PM
> > To: Besar Wicaksono <[email protected]>; [email protected];
> > [email protected]; [email protected]; [email protected]
> > Cc: [email protected]; [email protected];
> > [email protected]; [email protected];
> > [email protected]; [email protected]; Thierry Reding
> > <[email protected]>; Jonathan Hunter <[email protected]>; Vikram
> > Sethi <[email protected]>; [email protected];
> > [email protected]; [email protected]
> > Subject: Re: [RESEND PATCH v3 1/2] perf: coresight_pmu: Add support for
> > ARM CoreSight PMU driver
> >
> > External email: Use caution opening links or attachments
> >
> >
> > On 21/06/2022 06:50, Besar Wicaksono wrote:
> > > Add support for ARM CoreSight PMU driver framework and interfaces.
> > > The driver provides a generic implementation to operate uncore PMUs based
> > > on the ARM CoreSight PMU architecture. The driver also provides an interface
> > > to get vendor/implementation specific information, for example event
> > > attributes and formatting.
> > >
> > > The specification used in this implementation can be found below:
> > > * ACPI Arm Performance Monitoring Unit table:
> > > https://developer.arm.com/documentation/den0117/latest
> > > * ARM Coresight PMU architecture:
> > > https://developer.arm.com/documentation/ihi0091/latest
> > >
> > > Signed-off-by: Besar Wicaksono <[email protected]>
> > > ---
> > > arch/arm64/configs/defconfig | 1 +
> > > drivers/perf/Kconfig | 2 +
> > > drivers/perf/Makefile | 1 +
> > > drivers/perf/coresight_pmu/Kconfig | 11 +
> > > drivers/perf/coresight_pmu/Makefile | 6 +
> > >   .../perf/coresight_pmu/arm_coresight_pmu.c | 1312 +++++++++++++++++
> > > .../perf/coresight_pmu/arm_coresight_pmu.h | 177 +++
> > > 7 files changed, 1510 insertions(+)
> > > create mode 100644 drivers/perf/coresight_pmu/Kconfig
> > > create mode 100644 drivers/perf/coresight_pmu/Makefile
> > > create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu.c
> > > create mode 100644 drivers/perf/coresight_pmu/arm_coresight_pmu.h
> > >
> > > diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
> > > index 7d1105343bc2..22184f8883da 100644
> > > --- a/arch/arm64/configs/defconfig
> > > +++ b/arch/arm64/configs/defconfig
> > > @@ -1212,6 +1212,7 @@ CONFIG_PHY_UNIPHIER_USB3=y
> > > CONFIG_PHY_TEGRA_XUSB=y
> > > CONFIG_PHY_AM654_SERDES=m
> > > CONFIG_PHY_J721E_WIZ=m
> > > +CONFIG_ARM_CORESIGHT_PMU=y
> > > CONFIG_ARM_SMMU_V3_PMU=m
> > > CONFIG_FSL_IMX8_DDR_PMU=m
> > > CONFIG_QCOM_L2_PMU=y
> > > diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig
> > > index 1e2d69453771..c4e7cd5b4162 100644
> > > --- a/drivers/perf/Kconfig
> > > +++ b/drivers/perf/Kconfig
> > > @@ -192,4 +192,6 @@ config MARVELL_CN10K_DDR_PMU
> > > Enable perf support for Marvell DDR Performance monitoring
> > > event on CN10K platform.
> > >
> > > +source "drivers/perf/coresight_pmu/Kconfig"
> > > +
> > > endmenu
> > > diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile
> > > index 57a279c61df5..4126a04b5583 100644
> > > --- a/drivers/perf/Makefile
> > > +++ b/drivers/perf/Makefile
> > > @@ -20,3 +20,4 @@ obj-$(CONFIG_ARM_DMC620_PMU)	+= arm_dmc620_pmu.o
> > >  obj-$(CONFIG_MARVELL_CN10K_TAD_PMU)	+= marvell_cn10k_tad_pmu.o
> > >  obj-$(CONFIG_MARVELL_CN10K_DDR_PMU)	+= marvell_cn10k_ddr_pmu.o
> > > obj-$(CONFIG_APPLE_M1_CPU_PMU) += apple_m1_cpu_pmu.o
> > > +obj-$(CONFIG_ARM_CORESIGHT_PMU) += coresight_pmu/
> > > diff --git a/drivers/perf/coresight_pmu/Kconfig b/drivers/perf/coresight_pmu/Kconfig
> > > new file mode 100644
> > > index 000000000000..89174f54c7be
> > > --- /dev/null
> > > +++ b/drivers/perf/coresight_pmu/Kconfig
> > > @@ -0,0 +1,11 @@
> > > +# SPDX-License-Identifier: GPL-2.0
> > > +#
> > > +# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
> > > +
> > > +config ARM_CORESIGHT_PMU
> > > + tristate "ARM Coresight PMU"
> > > + depends on ACPI
> > > + depends on ACPI_APMT || COMPILE_TEST
> > > + help
> > > +	  Provides support for Performance Monitoring Unit (PMU) events based on
> > > + ARM CoreSight PMU architecture.
> > > \ No newline at end of file
> > > diff --git a/drivers/perf/coresight_pmu/Makefile b/drivers/perf/coresight_pmu/Makefile
> > > new file mode 100644
> > > index 000000000000..a2a7a5fbbc16
> > > --- /dev/null
> > > +++ b/drivers/perf/coresight_pmu/Makefile
> > > @@ -0,0 +1,6 @@
> > > +# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
> > > +#
> > > +# SPDX-License-Identifier: GPL-2.0
> > > +
> > > +obj-$(CONFIG_ARM_CORESIGHT_PMU) += \
> > > + arm_coresight_pmu.o
> > > diff --git a/drivers/perf/coresight_pmu/arm_coresight_pmu.c b/drivers/perf/coresight_pmu/arm_coresight_pmu.c
> > > new file mode 100644
> > > index 000000000000..ba52cc592b2d
> > > --- /dev/null
> > > +++ b/drivers/perf/coresight_pmu/arm_coresight_pmu.c
> > > @@ -0,0 +1,1312 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +/*
> > > + * ARM CoreSight PMU driver.
> > > + *
> > > + * This driver adds support for uncore PMU based on ARM CoreSight Performance
> > > + * Monitoring Unit Architecture. The PMU is accessible via MMIO registers and
> > > + * like other uncore PMUs, it does not support process specific events and
> > > + * cannot be used in sampling mode.
> > > + *
> > > + * This code is based on other uncore PMUs like ARM DSU PMU. It provides a
> > > + * generic implementation to operate the PMU according to CoreSight PMU
> > > + * architecture and ACPI ARM PMU table (APMT) documents below:
> > > + * - ARM CoreSight PMU architecture document number: ARM IHI 0091 A.a-00bet0.
> > > + * - APMT document number: ARM DEN0117.
> > > + * The description of the PMU, like the PMU device identification, available
> > > + * events, and configuration options, is vendor specific. The driver provides
> > > + * interface for vendor specific code to get this information. This allows the
> > > + * driver to be shared with PMU from different vendors.
> > > + *
> > > + * The CoreSight PMU devices can be named using implementor specific format, or
> > > + * with default naming format: arm_<apmt node type>_pmu_<numeric id>.
> >
> > ---
> >
> > > + * The description of the device, like the identifier, supported events, and
> > > + * formats can be found in sysfs
> > > + * /sys/bus/event_source/devices/arm_<apmt node type>_pmu_<numeric id>
> >
> > I don't think the above is needed. This is true for all PMU devices.
> >
>
> Acked, I will remove it on next version.
>
> > > + *
> > > + * The user should refer to the vendor technical documentation to get details
> > > + * about the supported events.
> > > + *
> > > + * Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
> > > + *
> > > + */
> > > +
> > > +#include <linux/acpi.h>
> > > +#include <linux/cacheinfo.h>
> > > +#include <linux/ctype.h>
> > > +#include <linux/interrupt.h>
> > > +#include <linux/io-64-nonatomic-lo-hi.h>
> > > +#include <linux/module.h>
> > > +#include <linux/mod_devicetable.h>
> > > +#include <linux/perf_event.h>
> > > +#include <linux/platform_device.h>
> > > +#include <acpi/processor.h>
> > > +
> > > +#include "arm_coresight_pmu.h"
> > > +
> > > +#define PMUNAME "arm_system_pmu"
> > > +
> >
> > If we have decied to call this arm_system_pmu, (which I am perfectly
> > happy with), could we please stick to that name for functions that we
> > export ?
> >
> > e.g,
> > s/coresight_pmu_sysfs_event_show/arm_system_pmu_event_show()/
> >
>
> Just want to confirm, is it just the public functions or do we need to replace
> all that has "coresight" naming ? Including the static functions, structs, filename.
I think all references to "coresight" should be changed to "arm_system_pmu",
including filenames. That way there is no doubt this IP block is not
related to, and does not interoperate with, any of the "coresight" IP blocks
already supported[1] in the kernel.
I have looked at the documentation[2] in the cover letter and I agree
with an earlier comment from Sudeep that this IP has very little to do with any
of the other CoreSight IP blocks found in the CoreSight framework[1]. Using the
"coresight" naming convention in this driver would be _extremely_ confusing,
especially when it comes to exported functions.
Thanks,
Mathieu
[1]. drivers/hwtracing/coresight/
[2]. https://developer.arm.com/documentation/ihi0091/latest
>
> > > +#define CORESIGHT_CPUMASK_ATTR(_name, _config) \
> > > + CORESIGHT_EXT_ATTR(_name, coresight_pmu_cpumask_show, \
> > > + (unsigned long)_config)
> > > +
> > > +/*
> > > + * Register offsets based on CoreSight Performance Monitoring Unit
> > Architecture
> > > + * Document number: ARM-ECM-0640169 00alp6
> > > + */
> > > +#define PMEVCNTR_LO 0x0
> > > +#define PMEVCNTR_HI 0x4
> > > +#define PMEVTYPER 0x400
> > > +#define PMCCFILTR 0x47C
> > > +#define PMEVFILTR 0xA00
> > > +#define PMCNTENSET 0xC00
> > > +#define PMCNTENCLR 0xC20
> > > +#define PMINTENSET 0xC40
> > > +#define PMINTENCLR 0xC60
> > > +#define PMOVSCLR 0xC80
> > > +#define PMOVSSET 0xCC0
> > > +#define PMCFGR 0xE00
> > > +#define PMCR 0xE04
> > > +#define PMIIDR 0xE08
> > > +
> > > +/* PMCFGR register field */
> > > +#define PMCFGR_NCG_SHIFT 28
> > > +#define PMCFGR_NCG_MASK 0xf
> >
> > Please could we instead do :
> >
> > PMCFGR_NCG GENMASK(31, 28)
> > and similarly for all the other fields and use :
> >
> > FIELD_PREP(), FIELD_GET() helpers ?
> >
>
> Sure thanks for pointing me to these helpers.
>
> > > +#define PMCFGR_HDBG BIT(24)
> > > +#define PMCFGR_TRO BIT(23)
> > > +#define PMCFGR_SS BIT(22)
> > > +#define PMCFGR_FZO BIT(21)
> > > +#define PMCFGR_MSI BIT(20)
> > > +#define PMCFGR_UEN BIT(19)
> > > +#define PMCFGR_NA BIT(17)
> > > +#define PMCFGR_EX BIT(16)
> > > +#define PMCFGR_CCD BIT(15)
> > > +#define PMCFGR_CC BIT(14)
> > > +#define PMCFGR_SIZE_SHIFT 8
> > > +#define PMCFGR_SIZE_MASK 0x3f
> > > +#define PMCFGR_N_SHIFT 0
> > > +#define PMCFGR_N_MASK 0xff
> > > +
> > > +/* PMCR register field */
> > > +#define PMCR_TRO BIT(11)
> > > +#define PMCR_HDBG BIT(10)
> > > +#define PMCR_FZO BIT(9)
> > > +#define PMCR_NA BIT(8)
> > > +#define PMCR_DP BIT(5)
> > > +#define PMCR_X BIT(4)
> > > +#define PMCR_D BIT(3)
> > > +#define PMCR_C BIT(2)
> > > +#define PMCR_P BIT(1)
> > > +#define PMCR_E BIT(0)
> > > +
> > > +/* PMIIDR register field */
> > > +#define PMIIDR_IMPLEMENTER_MASK 0xFFF
> > > +#define PMIIDR_PRODUCTID_MASK 0xFFF
> > > +#define PMIIDR_PRODUCTID_SHIFT 20
> > > +
> > > +/* Each SET/CLR register supports up to 32 counters. */
> > > +#define CORESIGHT_SET_CLR_REG_COUNTER_NUM 32
> > > +#define CORESIGHT_SET_CLR_REG_COUNTER_SHIFT 5
> >
> > minor nits:
> >
> > Could we rename such that s/CORESIGHT/PMU ? Without the PMU, it could
> > be confused as a CORESIGHT architecture specific thing.
> >
> > Also
> >
> > #define CORESIGHT_SET_CLR_REG_COUNTER_NUM (1 << CORESIGHT_SET_CLR_REG_COUNTER_SHIFT)
> >
> >
>
> Acked, I will make the change on the next version.
>
> > > +
> > > +/* The number of 32-bit SET/CLR register that can be supported. */
> > > +#define CORESIGHT_SET_CLR_REG_MAX_NUM	((PMCNTENCLR - PMCNTENSET) / sizeof(u32))
> > > +
> > > +static_assert((CORESIGHT_SET_CLR_REG_MAX_NUM *
> > > + CORESIGHT_SET_CLR_REG_COUNTER_NUM) >=
> > > + CORESIGHT_PMU_MAX_HW_CNTRS);
> > > +
> > > +/* Convert counter idx into SET/CLR register number. */
> > > +#define CORESIGHT_IDX_TO_SET_CLR_REG_ID(idx) \
> > > + (idx >> CORESIGHT_SET_CLR_REG_COUNTER_SHIFT)
> > > +
> > > +/* Convert counter idx into SET/CLR register bit. */
> > > +#define CORESIGHT_IDX_TO_SET_CLR_REG_BIT(idx) \
> > > + (idx & (CORESIGHT_SET_CLR_REG_COUNTER_NUM - 1))
> >
> > super minor nit:
> >
> > COUNTER_TO_SET_CLR_REG_IDX
> > COUNTER_TO_SET_CLR_REG_BIT ?
> >
> > I don't see a need for prefixing them with CORESIGHT and making them
> > unnecessary longer.
> >
>
> Acked.
>
> > > +
> > > +#define CORESIGHT_ACTIVE_CPU_MASK 0x0
> > > +#define CORESIGHT_ASSOCIATED_CPU_MASK 0x1
> > > +
> > > +
> > > +/* Check if field f in flags is set with value v */
> > > +#define CHECK_APMT_FLAG(flags, f, v) \
> > > +	((flags & (ACPI_APMT_FLAGS_ ## f)) == (ACPI_APMT_FLAGS_ ## f ## _ ## v))
> > > +
> > > +static unsigned long coresight_pmu_cpuhp_state;
> > > +
> > > +/*
> > > + * In CoreSight PMU architecture, all of the MMIO registers are 32-bit except
> > > + * counter register. The counter register can be implemented as 32-bit or 64-bit
> > > + * register depending on the value of PMCFGR.SIZE field. For 64-bit access,
> > > + * single-copy 64-bit atomic support is implementation defined. APMT node flag
> > > + * is used to identify if the PMU supports 64-bit single copy atomic. If 64-bit
> > > + * single copy atomic is not supported, the driver treats the register as a pair
> > > + * of 32-bit register.
> > > + */
> > > +
> > > +/*
> > > + * Read 64-bit register as a pair of 32-bit registers using hi-lo-hi sequence.
> > > + */
> > > +static u64 read_reg64_hilohi(const void __iomem *addr)
> > > +{
> > > + u32 val_lo, val_hi;
> > > + u64 val;
> > > +
> > > + /* Use high-low-high sequence to avoid tearing */
> > > + do {
> > > + val_hi = readl(addr + 4);
> > > + val_lo = readl(addr);
> > > + } while (val_hi != readl(addr + 4));
> > > +
> > > + val = (((u64)val_hi << 32) | val_lo);
> > > +
> > > + return val;
> > > +}
> > > +
> > > +/* Check if PMU supports 64-bit single copy atomic. */
> > > +static inline bool support_atomic(const struct coresight_pmu *coresight_pmu)
> >
> > minor nit: supports_64bit_atomics() ?
> >
>
> Acked.
>
> > > +{
> > > +	return CHECK_APMT_FLAG(coresight_pmu->apmt_node->flags, ATOMIC, SUPP);
> > > +}
> > > +
> > > +/* Check if cycle counter is supported. */
> > > +static inline bool support_cc(const struct coresight_pmu *coresight_pmu)
> >
> > minor nit: supports_cycle_counter() ?
> >
>
> Acked.
>
> > > +{
> > > + return (coresight_pmu->pmcfgr & PMCFGR_CC);
> > > +}
> > > +
> > > +/* Get counter size. */
> > > +static inline u32 pmcfgr_size(const struct coresight_pmu *coresight_pmu)
> >
> >
> > Please could we change this to
> >
> > counter_size(const struct coresight_pmu *coresight_pmu)
> > {
> > /* Counter size is PMCFGR_SIZE + 1 */
> > return FIELD_GET(PMCFGR_SIZE, pmu->pmcfgr) + 1;
> > }
> >
> > and also add :
> >
> > u64 counter_mask(..)
> > {
> > return GENMASK_ULL(counter_size(pmu) - 1, 0);
> > }
> >
> > See more on this below.
> >
>
> Acked.
>
> > > +{
> > > +	return (coresight_pmu->pmcfgr >> PMCFGR_SIZE_SHIFT) & PMCFGR_SIZE_MASK;
> > > +}
> >
> >
> >
> > > +/* Check if counter is implemented as 64-bit register. */
> > > +static inline bool
> > > +use_64b_counter_reg(const struct coresight_pmu *coresight_pmu)
> > > +{
> > > + return (pmcfgr_size(coresight_pmu) > 31);
> > > +}
> > > +
> > > +/* Get number of counters, minus one. */
> > > +static inline u32 pmcfgr_n(const struct coresight_pmu *coresight_pmu)
> > > +{
> > > +	return (coresight_pmu->pmcfgr >> PMCFGR_N_SHIFT) & PMCFGR_N_MASK;
> > > +}
> > > +
> > > +ssize_t coresight_pmu_sysfs_event_show(struct device *dev,
> > > + struct device_attribute *attr, char *buf)
> > > +{
> > > + struct dev_ext_attribute *eattr =
> > > + container_of(attr, struct dev_ext_attribute, attr);
> > > + return sysfs_emit(buf, "event=0x%llx\n",
> > > + (unsigned long long)eattr->var);
> > > +}
> > > +EXPORT_SYMBOL_GPL(coresight_pmu_sysfs_event_show);
> >
> >
> > > +
> > > +/* Default event list. */
> > > +static struct attribute *coresight_pmu_event_attrs[] = {
> > > +	CORESIGHT_EVENT_ATTR(cycles, CORESIGHT_PMU_EVT_CYCLES_DEFAULT),
> > > + NULL,
> > > +};
> > > +
> > > +struct attribute **
> > > +coresight_pmu_get_event_attrs(const struct coresight_pmu *coresight_pmu)
> > > +{
> > > + return coresight_pmu_event_attrs;
> > > +}
> > > +EXPORT_SYMBOL_GPL(coresight_pmu_get_event_attrs);
> > > +
> > > +umode_t coresight_pmu_event_attr_is_visible(struct kobject *kobj,
> > > + struct attribute *attr, int unused)
> > > +{
> > > + struct device *dev = kobj_to_dev(kobj);
> > > + struct coresight_pmu *coresight_pmu =
> > > + to_coresight_pmu(dev_get_drvdata(dev));
> > > + struct perf_pmu_events_attr *eattr;
> > > +
> > > + eattr = container_of(attr, typeof(*eattr), attr.attr);
> > > +
> > > + /* Hide cycle event if not supported */
> > > + if (!support_cc(coresight_pmu) &&
> > > + eattr->id == CORESIGHT_PMU_EVT_CYCLES_DEFAULT) {
> > > + return 0;
> > > + }
> > > +
> > > + return attr->mode;
> > > +}
> > > +EXPORT_SYMBOL_GPL(coresight_pmu_event_attr_is_visible);
> > > +
> > > +ssize_t coresight_pmu_sysfs_format_show(struct device *dev,
> > > + struct device_attribute *attr,
> > > + char *buf)
> > > +{
> > > + struct dev_ext_attribute *eattr =
> > > + container_of(attr, struct dev_ext_attribute, attr);
> > > + return sysfs_emit(buf, "%s\n", (char *)eattr->var);
> > > +}
> > > +EXPORT_SYMBOL_GPL(coresight_pmu_sysfs_format_show);
> > > +
> > > +static struct attribute *coresight_pmu_format_attrs[] = {
> > > + CORESIGHT_FORMAT_EVENT_ATTR,
> > > + CORESIGHT_FORMAT_FILTER_ATTR,
> > > + NULL,
> > > +};
> > > +
> > > +struct attribute **
> > > +coresight_pmu_get_format_attrs(const struct coresight_pmu *coresight_pmu)
> > > +{
> > > + return coresight_pmu_format_attrs;
> > > +}
> > > +EXPORT_SYMBOL_GPL(coresight_pmu_get_format_attrs);
> > > +
> > > +u32 coresight_pmu_event_type(const struct perf_event *event)
> > > +{
> > > + return event->attr.config & CORESIGHT_EVENT_MASK;
> > > +}
> > > +EXPORT_SYMBOL_GPL(coresight_pmu_event_type);
> >
> > Why do we need to export these ?
> >
>
> As you asked on the other question, we don't need to export it if the driver can
> implicitly fall back to the default routine when the vendor does not provide a
> callback. I will add a comment on coresight_pmu_impl_ops describing this expectation.
>
> > > +
> > > +u32 coresight_pmu_event_filter(const struct perf_event *event)
> > > +{
> > > + return event->attr.config1 & CORESIGHT_FILTER_MASK;
> > > +}
> > > +EXPORT_SYMBOL_GPL(coresight_pmu_event_filter);
> > > +
> > > +static ssize_t coresight_pmu_identifier_show(struct device *dev,
> > > + struct device_attribute *attr,
> > > + char *page)
> > > +{
> > > + struct coresight_pmu *coresight_pmu =
> > > + to_coresight_pmu(dev_get_drvdata(dev));
> > > +
> > > + return sysfs_emit(page, "%s\n", coresight_pmu->identifier);
> > > +}
> > > +
> > > +static struct device_attribute coresight_pmu_identifier_attr =
> > > + __ATTR(identifier, 0444, coresight_pmu_identifier_show, NULL);
> > > +
> > > +static struct attribute *coresight_pmu_identifier_attrs[] = {
> > > + &coresight_pmu_identifier_attr.attr,
> > > + NULL,
> > > +};
> > > +
> > > +static struct attribute_group coresight_pmu_identifier_attr_group = {
> > > + .attrs = coresight_pmu_identifier_attrs,
> > > +};
> > > +
> > > +const char *
> > > +coresight_pmu_get_identifier(const struct coresight_pmu *coresight_pmu)
> > > +{
> > > + const char *identifier =
> > > + devm_kasprintf(coresight_pmu->dev, GFP_KERNEL, "%x",
> > > + coresight_pmu->impl.pmiidr);
> > > + return identifier;
> > > +}
> > > +EXPORT_SYMBOL_GPL(coresight_pmu_get_identifier);
> > > +
> > > +static const char *coresight_pmu_type_str[ACPI_APMT_NODE_TYPE_COUNT] = {
> > > + "mc",
> > > + "smmu",
> > > + "pcie",
> > > + "acpi",
> > > + "cache",
> > > +};
> > > +
> > > +const char *coresight_pmu_get_name(const struct coresight_pmu *coresight_pmu)
> > > +{
> > > + struct device *dev;
> > > + u8 pmu_type;
> > > + char *name;
> > > + char acpi_hid_string[ACPI_ID_LEN] = { 0 };
> > > + static atomic_t pmu_idx[ACPI_APMT_NODE_TYPE_COUNT] = { 0 };
> > > +
> > > + dev = coresight_pmu->dev;
> > > + pmu_type = coresight_pmu->apmt_node->type;
> >
> > Please could we use a variable to refer to "coresight_pmu->apmt_node"?
> > It helps with code reading below and also gives us a hint about the
> > type of that field, which we refer to multiple times below.
> >
>
> Acked.
>
> > > +
> > > + if (pmu_type >= ACPI_APMT_NODE_TYPE_COUNT) {
> > > + dev_err(dev, "unsupported PMU type-%u\n", pmu_type);
> > > + return NULL;
> > > + }
> > > +
> > > + if (pmu_type == ACPI_APMT_NODE_TYPE_ACPI) {
> > > + memcpy(acpi_hid_string,
> > > + &coresight_pmu->apmt_node->inst_primary,
> > > + sizeof(coresight_pmu->apmt_node->inst_primary));
> > > +		name = devm_kasprintf(dev, GFP_KERNEL, "arm_%s_pmu_%s_%u",
> > > + coresight_pmu_type_str[pmu_type],
> > > + acpi_hid_string,
> > > + coresight_pmu->apmt_node->inst_secondary);
> > > + } else {
> > > + name = devm_kasprintf(dev, GFP_KERNEL, "arm_%s_pmu_%d",
> > > + coresight_pmu_type_str[pmu_type],
> > > + atomic_fetch_inc(&pmu_idx[pmu_type]));
> > > + }
> > > +
> > > + return name;
> > > +}
> > > +EXPORT_SYMBOL_GPL(coresight_pmu_get_name);
> > > +
> > > +static ssize_t coresight_pmu_cpumask_show(struct device *dev,
> > > + struct device_attribute *attr,
> > > + char *buf)
> > > +{
> > > + struct pmu *pmu = dev_get_drvdata(dev);
> > > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(pmu);
> > > + struct dev_ext_attribute *eattr =
> > > + container_of(attr, struct dev_ext_attribute, attr);
> > > + unsigned long mask_id = (unsigned long)eattr->var;
> > > + const cpumask_t *cpumask;
> > > +
> > > + switch (mask_id) {
> > > + case CORESIGHT_ACTIVE_CPU_MASK:
> > > + cpumask = &coresight_pmu->active_cpu;
> > > + break;
> > > + case CORESIGHT_ASSOCIATED_CPU_MASK:
> > > + cpumask = &coresight_pmu->associated_cpus;
> > > + break;
> > > + default:
> > > + return 0;
> > > + }
> > > + return cpumap_print_to_pagebuf(true, buf, cpumask);
> > > +}
> > > +
> > > +static struct attribute *coresight_pmu_cpumask_attrs[] = {
> > > +	CORESIGHT_CPUMASK_ATTR(cpumask, CORESIGHT_ACTIVE_CPU_MASK),
> > > +	CORESIGHT_CPUMASK_ATTR(associated_cpus, CORESIGHT_ASSOCIATED_CPU_MASK),
> > > + NULL,
> > > +};
> > > +
> > > +static struct attribute_group coresight_pmu_cpumask_attr_group = {
> > > + .attrs = coresight_pmu_cpumask_attrs,
> > > +};
> > > +
> > > +static const struct coresight_pmu_impl_ops default_impl_ops = {
> > > + .get_event_attrs = coresight_pmu_get_event_attrs,
> > > + .get_format_attrs = coresight_pmu_get_format_attrs,
> > > + .get_identifier = coresight_pmu_get_identifier,
> > > + .get_name = coresight_pmu_get_name,
> > > + .is_cc_event = coresight_pmu_is_cc_event,
> > > + .event_type = coresight_pmu_event_type,
> > > + .event_filter = coresight_pmu_event_filter,
> > > + .event_attr_is_visible = coresight_pmu_event_attr_is_visible
> > > +};
> > > +
> > > +struct impl_match {
> > > + u32 pmiidr;
> > > + u32 mask;
> > > + int (*impl_init_ops)(struct coresight_pmu *coresight_pmu);
> > > +};
> > > +
> > > +static const struct impl_match impl_match[] = {
> > > + {}
> > > +};
> > > +
> > > +static int coresight_pmu_init_impl_ops(struct coresight_pmu *coresight_pmu)
> > > +{
> > > + int idx, ret;
> > > + struct acpi_apmt_node *apmt_node = coresight_pmu->apmt_node;
> > > + const struct impl_match *match = impl_match;
> > > +
> > > + /*
> > > + * Get PMU implementer and product id from APMT node.
> > > + * If APMT node doesn't have implementer/product id, try get it
> > > + * from PMIIDR.
> > > + */
> > > + coresight_pmu->impl.pmiidr =
> > > + (apmt_node->impl_id) ? apmt_node->impl_id :
> > > + readl(coresight_pmu->base0 + PMIIDR);
> > > +
> > > + /* Find implementer specific attribute ops. */
> > > + for (idx = 0; match->pmiidr; match++, idx++) {
> >
> > idx is unused, please remove it.
> >
>
> Acked, thanks for catching this.
>
> > > + if ((match->pmiidr & match->mask) ==
> > > + (coresight_pmu->impl.pmiidr & match->mask)) {
> > > + ret = match->impl_init_ops(coresight_pmu);
> > > + if (ret)
> > > + return ret;
> > > +
> > > + return 0;
> > > + }
> > > + }
> > > +
> > > + /* No implementer specific attribute ops found, use the defaults. */
> > > + coresight_pmu->impl.ops = &default_impl_ops;
> > > + return 0;
> > > +}
> > > +
> > > +static struct attribute_group *
> > > +coresight_pmu_alloc_event_attr_group(struct coresight_pmu *coresight_pmu)
> > > +{
> > > + struct attribute_group *event_group;
> > > + struct device *dev = coresight_pmu->dev;
> > > +
> > > + event_group =
> > > + devm_kzalloc(dev, sizeof(struct attribute_group), GFP_KERNEL);
> > > + if (!event_group)
> > > + return NULL;
> > > +
> > > + event_group->name = "events";
> > > + event_group->attrs =
> > > + coresight_pmu->impl.ops->get_event_attrs(coresight_pmu);
> >
> >
> > Is it mandatory for the PMUs to implement get_event_attrs() and all the
> > callbacks ? Does it make sense to check and optionally call it, if the
> > backend driver implements it ? Or at least we should make sure the
> > required operations are filled in by the backend.
> >
> >
> > > + event_group->is_visible =
> > > + coresight_pmu->impl.ops->event_attr_is_visible;
> > > +
> > > + return event_group;
> > > +}
> > > +
> > > +static struct attribute_group *
> > > +coresight_pmu_alloc_format_attr_group(struct coresight_pmu *coresight_pmu)
> > > +{
> > > + struct attribute_group *format_group;
> > > + struct device *dev = coresight_pmu->dev;
> > > +
> > > + format_group =
> > > + devm_kzalloc(dev, sizeof(struct attribute_group), GFP_KERNEL);
> > > + if (!format_group)
> > > + return NULL;
> > > +
> > > + format_group->name = "format";
> > > + format_group->attrs =
> > > + coresight_pmu->impl.ops->get_format_attrs(coresight_pmu);
> >
> > Same as above.
> >
> > > +
> > > + return format_group;
> > > +}
> > > +
> > > +static struct attribute_group **
> > > +coresight_pmu_alloc_attr_group(struct coresight_pmu *coresight_pmu)
> > > +{
> > > + const struct coresight_pmu_impl_ops *impl_ops;
> > > + struct attribute_group **attr_groups = NULL;
> > > + struct device *dev = coresight_pmu->dev;
> > > + int ret;
> > > +
> > > + ret = coresight_pmu_init_impl_ops(coresight_pmu);
> > > + if (ret)
> > > + return NULL;
> > > +
> > > + impl_ops = coresight_pmu->impl.ops;
> > > +
> > > + coresight_pmu->identifier = impl_ops->get_identifier(coresight_pmu);
> > > + coresight_pmu->name = impl_ops->get_name(coresight_pmu);
> >
> > If the implementor doesn't have a naming scheme, could we default to the
> > generic scheme ? In general, we could fall back to the "generic"
> > callbacks here if the implementor doesn't provide a specific callback.
> > That way, we could skip exporting a lot of the callbacks.
> >
>
> Yeah, we could implicitly call the default callback if the vendor does not provide one.
> This also applies to other vendor callbacks. I will add notes on coresight_pmu_impl_ops.
>
> > > +
> > > + if (!coresight_pmu->name)
> > > + return NULL;
> > > +
> > > + attr_groups = devm_kzalloc(dev, 5 * sizeof(struct attribute_group *),
> > > + GFP_KERNEL);
> >
> > nit: devm_kcalloc() ?
> >
>
> Sure.
>
> > > + if (!attr_groups)
> > > + return NULL;
> > > +
> > > + attr_groups[0] = coresight_pmu_alloc_event_attr_group(coresight_pmu);
> > > + attr_groups[1] = coresight_pmu_alloc_format_attr_group(coresight_pmu);
> >
> > What if one of the above returned NULL and you wouldn't get the
> > following listed ?
> >
>
> Yes, I missed the check. Thanks for catching this.
>
> > > + attr_groups[2] = &coresight_pmu_identifier_attr_group;
> > > + attr_groups[3] = &coresight_pmu_cpumask_attr_group;
> > > +
> > > + return attr_groups;
> > > +}
> > > +
> > > +static inline void
> > > +coresight_pmu_reset_counters(struct coresight_pmu *coresight_pmu)
> > > +{
> > > + u32 pmcr = 0;
> > > +
> > > + pmcr |= PMCR_P;
> > > + pmcr |= PMCR_C;
> > > + writel(pmcr, coresight_pmu->base0 + PMCR);
> > > +}
> > > +
> > > +static inline void
> > > +coresight_pmu_start_counters(struct coresight_pmu *coresight_pmu)
> > > +{
> > > + u32 pmcr;
> > > +
> > > + pmcr = PMCR_E;
> > > + writel(pmcr, coresight_pmu->base0 + PMCR);
> >
> > nit: Isn't it easier to read as:
> >
> > writel(PMCR_E, ...);
>
> Acked.
>
> > > +}
> > > +
> > > +static inline void
> > > +coresight_pmu_stop_counters(struct coresight_pmu *coresight_pmu)
> > > +{
> > > + u32 pmcr;
> > > +
> > > + pmcr = 0;
> > > + writel(pmcr, coresight_pmu->base0 + PMCR);
> >
> > Similar to above,
> > writel(0, ...);
> > ?
> >
>
> Acked.
>
> > > +}
> > > +
> > > +static void coresight_pmu_enable(struct pmu *pmu)
> > > +{
> > > + bool disabled;
> > > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(pmu);
> > > +
> > > + disabled = bitmap_empty(coresight_pmu->hw_events.used_ctrs,
> > > + coresight_pmu->num_logical_counters);
> > > +
> > > + if (disabled)
> > > + return;
> > > +
> > > + coresight_pmu_start_counters(coresight_pmu);
> > > +}
> > > +
> > > +static void coresight_pmu_disable(struct pmu *pmu)
> > > +{
> > > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(pmu);
> > > +
> > > + coresight_pmu_stop_counters(coresight_pmu);
> > > +}
> > > +
> > > +bool coresight_pmu_is_cc_event(const struct perf_event *event)
> > > +{
> > > + return (event->attr.config == CORESIGHT_PMU_EVT_CYCLES_DEFAULT);
> > > +}
> > > +EXPORT_SYMBOL_GPL(coresight_pmu_is_cc_event);
> > > +
> > > +static int
> > > +coresight_pmu_get_event_idx(struct coresight_pmu_hw_events *hw_events,
> > > + struct perf_event *event)
> > > +{
> > > + int idx;
> > > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> > > +
> > > + if (support_cc(coresight_pmu)) {
> > > + if (coresight_pmu->impl.ops->is_cc_event(event)) {
> > > + /* Search for available cycle counter. */
> > > + if (test_and_set_bit(coresight_pmu->cc_logical_idx,
> > > + hw_events->used_ctrs))
> > > + return -EAGAIN;
> > > +
> > > + return coresight_pmu->cc_logical_idx;
> > > + }
> > > +
> > > + /*
> > > + * Search a regular counter from the used counter bitmap.
> > > + * The cycle counter divides the bitmap into two parts. Search
> > > + * the first then second half to exclude the cycle counter bit.
> > > + */
> > > + idx = find_first_zero_bit(hw_events->used_ctrs,
> > > + coresight_pmu->cc_logical_idx);
> > > + if (idx >= coresight_pmu->cc_logical_idx) {
> > > + idx = find_next_zero_bit(
> > > + hw_events->used_ctrs,
> > > + coresight_pmu->num_logical_counters,
> > > + coresight_pmu->cc_logical_idx + 1);
> > > + }
> > > + } else {
> > > + idx = find_first_zero_bit(hw_events->used_ctrs,
> > > + coresight_pmu->num_logical_counters);
> > > + }
> > > +
> > > + if (idx >= coresight_pmu->num_logical_counters)
> > > + return -EAGAIN;
> > > +
> > > + set_bit(idx, hw_events->used_ctrs);
> > > +
> > > + return idx;
> > > +}
> > > +
> > > +static bool
> > > +coresight_pmu_validate_event(struct pmu *pmu,
> > > + struct coresight_pmu_hw_events *hw_events,
> > > + struct perf_event *event)
> > > +{
> > > + if (is_software_event(event))
> > > + return true;
> > > +
> > > + /* Reject groups spanning multiple HW PMUs. */
> > > + if (event->pmu != pmu)
> > > + return false;
> > > +
> > > + return (coresight_pmu_get_event_idx(hw_events, event) >= 0);
> > > +}
> > > +
> > > +/*
> > > + * Make sure the group of events can be scheduled at once
> > > + * on the PMU.
> > > + */
> > > +static bool coresight_pmu_validate_group(struct perf_event *event)
> > > +{
> > > + struct perf_event *sibling, *leader = event->group_leader;
> > > + struct coresight_pmu_hw_events fake_hw_events;
> > > +
> > > + if (event->group_leader == event)
> > > + return true;
> > > +
> > > + memset(&fake_hw_events, 0, sizeof(fake_hw_events));
> > > +
> > > + if (!coresight_pmu_validate_event(event->pmu, &fake_hw_events, leader))
> > > + return false;
> > > +
> > > + for_each_sibling_event(sibling, leader) {
> > > + if (!coresight_pmu_validate_event(event->pmu, &fake_hw_events,
> > > + sibling))
> > > + return false;
> > > + }
> > > +
> > > + return coresight_pmu_validate_event(event->pmu, &fake_hw_events, event);
> > > +}
> > > +
> > > +static int coresight_pmu_event_init(struct perf_event *event)
> > > +{
> > > + struct coresight_pmu *coresight_pmu;
> > > + struct hw_perf_event *hwc = &event->hw;
> > > +
> > > + coresight_pmu = to_coresight_pmu(event->pmu);
> > > +
> > > + /*
> > > + * Following other "uncore" PMUs, we do not support sampling mode or
> > > + * attach to a task (per-process mode).
> > > + */
> > > + if (is_sampling_event(event)) {
> > > + dev_dbg(coresight_pmu->pmu.dev,
> > > + "Can't support sampling events\n");
> > > + return -EOPNOTSUPP;
> > > + }
> > > +
> > > + if (event->cpu < 0 || event->attach_state & PERF_ATTACH_TASK) {
> > > + dev_dbg(coresight_pmu->pmu.dev,
> > > + "Can't support per-task counters\n");
> > > + return -EINVAL;
> > > + }
> > > +
> > > + /*
> > > + * Make sure the CPU assignment is on one of the CPUs associated with
> > > + * this PMU.
> > > + */
> > > + if (!cpumask_test_cpu(event->cpu, &coresight_pmu->associated_cpus)) {
> > > + dev_dbg(coresight_pmu->pmu.dev,
> > > + "Requested cpu is not associated with the PMU\n");
> > > + return -EINVAL;
> > > + }
> > > +
> > > + /* Enforce the current active CPU to handle the events in this PMU. */
> > > + event->cpu = cpumask_first(&coresight_pmu->active_cpu);
> > > + if (event->cpu >= nr_cpu_ids)
> > > + return -EINVAL;
> > > +
> > > + if (!coresight_pmu_validate_group(event))
> > > + return -EINVAL;
> > > +
> > > + /*
> > > + * The logical counter id is tracked with hw_perf_event.extra_reg.idx.
> > > + * The physical counter id is tracked with hw_perf_event.idx.
> > > + * We don't assign an index until we actually place the event onto
> > > + * hardware. Use -1 to signify that we haven't decided where to put it
> > > + * yet.
> > > + */
> > > + hwc->idx = -1;
> > > + hwc->extra_reg.idx = -1;
> > > + hwc->config_base = coresight_pmu->impl.ops->event_type(event);
> >
> > nit: This is a bit confusing combined with pmu->base0, when we
> > write the event config.
> >
> > Could we use hwc->config instead ?
> >
>
> I was following other drivers. I can change it to hwc->config since it is not used anywhere else.
>
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +static inline u32 counter_offset(u32 reg_sz, u32 ctr_idx)
> > > +{
> > > + return (PMEVCNTR_LO + (reg_sz * ctr_idx));
> > > +}
> > > +
> > > +static void coresight_pmu_write_counter(struct perf_event *event, u64 val)
> > > +{
> > > + u32 offset;
> > > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> > > +
> > > + if (use_64b_counter_reg(coresight_pmu)) {
> > > + offset = counter_offset(sizeof(u64), event->hw.idx);
> > > +
> > > + writeq(val, coresight_pmu->base1 + offset);
> > > + } else {
> > > + offset = counter_offset(sizeof(u32), event->hw.idx);
> > > +
> > > + writel(lower_32_bits(val), coresight_pmu->base1 + offset);
> > > + }
> > > +}
> > > +
> > > +static u64 coresight_pmu_read_counter(struct perf_event *event)
> > > +{
> > > + u32 offset;
> > > + const void __iomem *counter_addr;
> > > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> > > +
> > > + if (use_64b_counter_reg(coresight_pmu)) {
> > > + offset = counter_offset(sizeof(u64), event->hw.idx);
> > > + counter_addr = coresight_pmu->base1 + offset;
> > > +
> > > + return support_atomic(coresight_pmu) ?
> > > + readq(counter_addr) :
> > > + read_reg64_hilohi(counter_addr);
> > > + }
> > > +
> > > + offset = counter_offset(sizeof(u32), event->hw.idx);
> > > + return readl(coresight_pmu->base1 + offset);
> > > +}
> > > +
> > > +/*
> > > + * coresight_pmu_set_event_period: Set the period for the counter.
> > > + *
> > > + * To handle cases of extreme interrupt latency, we program
> > > + * the counter with half of the max count for the counters.
> > > + */
> > > +static void coresight_pmu_set_event_period(struct perf_event *event)
> > > +{
> > > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> > > + u64 val = GENMASK_ULL(pmcfgr_size(coresight_pmu), 0) >> 1;
> >
> > Please could we add a helper to get the "counter mask" ? Otherwise this
> > is a bit confusing. We would expect
> > pmcfgr_size() = 64 for 64-bit counters, but it is really 63 instead,
> > and that made me stare at the GENMASK_ULL() and trace it back
> > to the spec. Please see my suggestion at the top to add counter_mask().
> >
>
> Sure will do. I apologize for the confusion.
>
> > > +
> > > + local64_set(&event->hw.prev_count, val);
> > > + coresight_pmu_write_counter(event, val);
> > > +}
> > > +
> > > +static void coresight_pmu_enable_counter(struct coresight_pmu *coresight_pmu,
> > > + int idx)
> > > +{
> > > + u32 reg_id, reg_bit, inten_off, cnten_off;
> > > +
> > > + reg_id = CORESIGHT_IDX_TO_SET_CLR_REG_ID(idx);
> > > + reg_bit = CORESIGHT_IDX_TO_SET_CLR_REG_BIT(idx);
> > > +
> > > + inten_off = PMINTENSET + (4 * reg_id);
> > > + cnten_off = PMCNTENSET + (4 * reg_id);
> > > +
> > > + writel(BIT(reg_bit), coresight_pmu->base0 + inten_off);
> > > + writel(BIT(reg_bit), coresight_pmu->base0 + cnten_off);
> > > +}
> > > +
> > > +static void coresight_pmu_disable_counter(struct coresight_pmu *coresight_pmu,
> > > + int idx)
> > > +{
> > > + u32 reg_id, reg_bit, inten_off, cnten_off;
> > > +
> > > + reg_id = CORESIGHT_IDX_TO_SET_CLR_REG_ID(idx);
> > > + reg_bit = CORESIGHT_IDX_TO_SET_CLR_REG_BIT(idx);
> > > +
> > > + inten_off = PMINTENCLR + (4 * reg_id);
> > > + cnten_off = PMCNTENCLR + (4 * reg_id);
> > > +
> > > + writel(BIT(reg_bit), coresight_pmu->base0 + cnten_off);
> > > + writel(BIT(reg_bit), coresight_pmu->base0 + inten_off);
> > > +}
> > > +
> > > +static void coresight_pmu_event_update(struct perf_event *event)
> > > +{
> > > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> > > + struct hw_perf_event *hwc = &event->hw;
> > > + u64 delta, prev, now;
> > > +
> > > + do {
> > > + prev = local64_read(&hwc->prev_count);
> > > + now = coresight_pmu_read_counter(event);
> > > + } while (local64_cmpxchg(&hwc->prev_count, prev, now) != prev);
> > > +
> > > + delta = (now - prev) & GENMASK_ULL(pmcfgr_size(coresight_pmu), 0);
> > > + local64_add(delta, &event->count);
> > > +}
> > > +
> > > +static inline void coresight_pmu_set_event(struct coresight_pmu *coresight_pmu,
> > > + struct hw_perf_event *hwc)
> > > +{
> > > + u32 offset = PMEVTYPER + (4 * hwc->idx);
> > > +
> > > + writel(hwc->config_base, coresight_pmu->base0 + offset);
> > > +}
> > > +
> > > +static inline void
> > > +coresight_pmu_set_ev_filter(struct coresight_pmu *coresight_pmu,
> > > + struct hw_perf_event *hwc, u32 filter)
> > > +{
> > > + u32 offset = PMEVFILTR + (4 * hwc->idx);
> > > +
> > > + writel(filter, coresight_pmu->base0 + offset);
> > > +}
> > > +
> > > +static inline void
> > > +coresight_pmu_set_cc_filter(struct coresight_pmu *coresight_pmu, u32 filter)
> > > +{
> > > + u32 offset = PMCCFILTR;
> > > +
> > > + writel(filter, coresight_pmu->base0 + offset);
> > > +}
> > > +
> > > +static void coresight_pmu_start(struct perf_event *event, int pmu_flags)
> > > +{
> > > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> > > + struct hw_perf_event *hwc = &event->hw;
> > > + u32 filter;
> > > +
> > > + /* We always reprogram the counter */
> > > + if (pmu_flags & PERF_EF_RELOAD)
> > > + WARN_ON(!(hwc->state & PERF_HES_UPTODATE));
> > > +
> > > + coresight_pmu_set_event_period(event);
> > > +
> > > + filter = coresight_pmu->impl.ops->event_filter(event);
> > > +
> > > + if (event->hw.extra_reg.idx == coresight_pmu->cc_logical_idx) {
> > > + coresight_pmu_set_cc_filter(coresight_pmu, filter);
> > > + } else {
> > > + coresight_pmu_set_event(coresight_pmu, hwc);
> > > + coresight_pmu_set_ev_filter(coresight_pmu, hwc, filter);
> > > + }
> > > +
> > > + hwc->state = 0;
> > > +
> > > + coresight_pmu_enable_counter(coresight_pmu, hwc->idx);
> > > +}
> > > +
> > > +static void coresight_pmu_stop(struct perf_event *event, int pmu_flags)
> > > +{
> > > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> > > + struct hw_perf_event *hwc = &event->hw;
> > > +
> > > + if (hwc->state & PERF_HES_STOPPED)
> > > + return;
> > > +
> > > + coresight_pmu_disable_counter(coresight_pmu, hwc->idx);
> > > + coresight_pmu_event_update(event);
> > > +
> > > + hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE;
> > > +}
> > > +
> > > +static inline u32 to_phys_idx(struct coresight_pmu *coresight_pmu, u32 idx)
> > > +{
> > > + return (idx == coresight_pmu->cc_logical_idx) ?
> > > + CORESIGHT_PMU_IDX_CCNTR : idx;
> > > +}
> > > +
> > > +static int coresight_pmu_add(struct perf_event *event, int flags)
> > > +{
> > > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> > > + struct coresight_pmu_hw_events *hw_events = &coresight_pmu->hw_events;
> > > + struct hw_perf_event *hwc = &event->hw;
> > > + int idx;
> > > +
> > > + if (WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(),
> > > + &coresight_pmu->associated_cpus)))
> > > + return -ENOENT;
> > > +
> > > + idx = coresight_pmu_get_event_idx(hw_events, event);
> > > + if (idx < 0)
> > > + return idx;
> > > +
> > > + hw_events->events[idx] = event;
> > > + hwc->idx = to_phys_idx(coresight_pmu, idx);
> > > + hwc->extra_reg.idx = idx;
> > > + hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
> > > +
> > > + if (flags & PERF_EF_START)
> > > + coresight_pmu_start(event, PERF_EF_RELOAD);
> > > +
> > > + /* Propagate changes to the userspace mapping. */
> > > + perf_event_update_userpage(event);
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +static void coresight_pmu_del(struct perf_event *event, int flags)
> > > +{
> > > + struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
> > > + struct coresight_pmu_hw_events *hw_events = &coresight_pmu->hw_events;
> > > + struct hw_perf_event *hwc = &event->hw;
> > > + int idx = hwc->extra_reg.idx;
> > > +
> > > + coresight_pmu_stop(event, PERF_EF_UPDATE);
> > > +
> > > + hw_events->events[idx] = NULL;
> > > +
> > > + clear_bit(idx, hw_events->used_ctrs);
> > > +
> > > + perf_event_update_userpage(event);
> > > +}
> > > +
> > > +static void coresight_pmu_read(struct perf_event *event)
> > > +{
> > > + coresight_pmu_event_update(event);
> > > +}
> > > +
> > > +static int coresight_pmu_alloc(struct platform_device *pdev,
> > > + struct coresight_pmu **coresight_pmu)
> > > +{
> > > + struct acpi_apmt_node *apmt_node;
> > > + struct device *dev;
> > > + struct coresight_pmu *pmu;
> > > +
> > > + dev = &pdev->dev;
> > > + apmt_node = *(struct acpi_apmt_node **)dev_get_platdata(dev);
> > > + if (!apmt_node) {
> > > + dev_err(dev, "failed to get APMT node\n");
> > > + return -ENOMEM;
> > > + }
> > > +
> > > + pmu = devm_kzalloc(dev, sizeof(*pmu), GFP_KERNEL);
> > > + if (!pmu)
> > > + return -ENOMEM;
> > > +
> > > + *coresight_pmu = pmu;
> > > +
> > > + pmu->dev = dev;
> > > + pmu->apmt_node = apmt_node;
> > > +
> > > + platform_set_drvdata(pdev, pmu);
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +static int coresight_pmu_init_mmio(struct coresight_pmu *coresight_pmu)
> > > +{
> > > + struct device *dev;
> > > + struct platform_device *pdev;
> > > + struct acpi_apmt_node *apmt_node;
> > > +
> > > + dev = coresight_pmu->dev;
> > > + pdev = to_platform_device(dev);
> > > + apmt_node = coresight_pmu->apmt_node;
> > > +
> > > + /* Base address for page 0. */
> > > + coresight_pmu->base0 = devm_platform_ioremap_resource(pdev, 0);
> > > + if (IS_ERR(coresight_pmu->base0)) {
> > > + dev_err(dev, "ioremap failed for page-0 resource\n");
> > > + return PTR_ERR(coresight_pmu->base0);
> > > + }
> > > +
> > > + /* Base address for page 1 if supported. Otherwise point it to page 0. */
> > > + coresight_pmu->base1 = coresight_pmu->base0;
> > > + if (CHECK_APMT_FLAG(apmt_node->flags, DUAL_PAGE, SUPP)) {
> > > + coresight_pmu->base1 = devm_platform_ioremap_resource(pdev, 1);
> > > + if (IS_ERR(coresight_pmu->base1)) {
> > > + dev_err(dev, "ioremap failed for page-1 resource\n");
> > > + return PTR_ERR(coresight_pmu->base1);
> > > + }
> > > + }
> > > +
> > > + coresight_pmu->pmcfgr = readl(coresight_pmu->base0 + PMCFGR);
> > > +
> > > + coresight_pmu->num_logical_counters = pmcfgr_n(coresight_pmu) + 1;
> > > +
> > > + coresight_pmu->cc_logical_idx = CORESIGHT_PMU_MAX_HW_CNTRS;
> > > +
> > > + if (support_cc(coresight_pmu)) {
> > > + /*
> > > + * The last logical counter is mapped to cycle counter if
> > > + * there is a gap between regular and cycle counter. Otherwise,
> > > + * logical and physical have 1-to-1 mapping.
> > > + */
> > > + coresight_pmu->cc_logical_idx =
> > > + (coresight_pmu->num_logical_counters <=
> > > + CORESIGHT_PMU_IDX_CCNTR) ?
> > > + coresight_pmu->num_logical_counters - 1 :
> > > + CORESIGHT_PMU_IDX_CCNTR;
> > > + }
> > > +
> > > + coresight_pmu->num_set_clr_reg =
> > > + DIV_ROUND_UP(coresight_pmu->num_logical_counters,
> > > + CORESIGHT_SET_CLR_REG_COUNTER_NUM);
> > > +
> > > + coresight_pmu->hw_events.events =
> > > + devm_kzalloc(dev,
> > > + sizeof(*coresight_pmu->hw_events.events) *
> > > + coresight_pmu->num_logical_counters,
> > > + GFP_KERNEL);
> > > +
> > > + if (!coresight_pmu->hw_events.events)
> > > + return -ENOMEM;
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +static inline int
> > > +coresight_pmu_get_reset_overflow(struct coresight_pmu *coresight_pmu,
> > > + u32 *pmovs)
> > > +{
> > > + int i;
> > > + u32 pmovclr_offset = PMOVSCLR;
> > > + u32 has_overflowed = 0;
> > > +
> > > + for (i = 0; i < coresight_pmu->num_set_clr_reg; ++i) {
> > > + pmovs[i] = readl(coresight_pmu->base1 + pmovclr_offset);
> > > + has_overflowed |= pmovs[i];
> > > + writel(pmovs[i], coresight_pmu->base1 + pmovclr_offset);
> > > + pmovclr_offset += sizeof(u32);
> > > + }
> > > +
> > > + return has_overflowed != 0;
> > > +}
> > > +
> > > +static irqreturn_t coresight_pmu_handle_irq(int irq_num, void *dev)
> > > +{
> > > + int idx, has_overflowed;
> > > + struct perf_event *event;
> > > + struct coresight_pmu *coresight_pmu = dev;
> > > + u32 pmovs[CORESIGHT_SET_CLR_REG_MAX_NUM] = { 0 };
> > > + bool handled = false;
> > > +
> > > + coresight_pmu_stop_counters(coresight_pmu);
> > > +
> > > + has_overflowed = coresight_pmu_get_reset_overflow(coresight_pmu, pmovs);
> > > + if (!has_overflowed)
> > > + goto done;
> > > +
> > > + for_each_set_bit(idx, coresight_pmu->hw_events.used_ctrs,
> > > + coresight_pmu->num_logical_counters) {
> > > + event = coresight_pmu->hw_events.events[idx];
> > > +
> > > + if (!event)
> > > + continue;
> > > +
> > > + if (!test_bit(event->hw.idx, (unsigned long *)pmovs))
> > > + continue;
> > > +
> > > + coresight_pmu_event_update(event);
> > > + coresight_pmu_set_event_period(event);
> > > +
> > > + handled = true;
> > > + }
> > > +
> > > +done:
> > > + coresight_pmu_start_counters(coresight_pmu);
> > > + return IRQ_RETVAL(handled);
> > > +}
> > > +
> > > +static int coresight_pmu_request_irq(struct coresight_pmu *coresight_pmu)
> > > +{
> > > + int irq, ret;
> > > + struct device *dev;
> > > + struct platform_device *pdev;
> > > + struct acpi_apmt_node *apmt_node;
> > > +
> > > + dev = coresight_pmu->dev;
> > > + pdev = to_platform_device(dev);
> > > + apmt_node = coresight_pmu->apmt_node;
> > > +
> > > + /* Skip IRQ request if the PMU does not support overflow interrupt. */
> > > + if (apmt_node->ovflw_irq == 0)
> > > + return 0;
> > > +
> > > + irq = platform_get_irq(pdev, 0);
> > > + if (irq < 0)
> > > + return irq;
> > > +
> > > + ret = devm_request_irq(dev, irq, coresight_pmu_handle_irq,
> > > + IRQF_NOBALANCING | IRQF_NO_THREAD, dev_name(dev),
> > > + coresight_pmu);
> > > + if (ret) {
> > > + dev_err(dev, "Could not request IRQ %d\n", irq);
> > > + return ret;
> > > + }
> > > +
> > > + coresight_pmu->irq = irq;
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +static inline int coresight_pmu_find_cpu_container(int cpu, u32 container_uid)
> > > +{
> > > + u32 acpi_uid;
> > > + struct device *cpu_dev = get_cpu_device(cpu);
> > > + struct acpi_device *acpi_dev = ACPI_COMPANION(cpu_dev);
> > > +
> > > + if (!cpu_dev)
> > > + return -ENODEV;
> > > +
> > > + while (acpi_dev) {
> > > + if (!strcmp(acpi_device_hid(acpi_dev),
> > > + ACPI_PROCESSOR_CONTAINER_HID) &&
> > > + !kstrtouint(acpi_device_uid(acpi_dev), 0, &acpi_uid) &&
> > > + acpi_uid == container_uid)
> > > + return 0;
> > > +
> > > + acpi_dev = acpi_dev->parent;
> > > + }
> > > +
> > > + return -ENODEV;
> > > +}
> > > +
> > > +static int coresight_pmu_get_cpus(struct coresight_pmu *coresight_pmu)
> > > +{
> > > + struct acpi_apmt_node *apmt_node;
> > > + int affinity_flag;
> > > + int cpu;
> > > +
> > > + apmt_node = coresight_pmu->apmt_node;
> > > + affinity_flag = apmt_node->flags & ACPI_APMT_FLAGS_AFFINITY;
> > > +
> > > + if (affinity_flag == ACPI_APMT_FLAGS_AFFINITY_PROC) {
> > > + for_each_possible_cpu(cpu) {
> > > + if (apmt_node->proc_affinity ==
> > > + get_acpi_id_for_cpu(cpu)) {
> >
> > minor nit:
> >
> > There is one another user of this reverse search for CPU by acpi_id.
> > See get_cpu_for_acpi_id() in arch/arm64/kernel/acpi_numa.c.
> >
> > May be we should make that a generic helper and move it to common code.
> >
>
> Thanks for the reference. Is it okay to defer this to a follow-up patch ?
>
> > > + cpumask_set_cpu(
> > > + cpu, &coresight_pmu->associated_cpus);
> > > + break;
> > > + }
> > > + }
> > > + } else {
> > > + for_each_possible_cpu(cpu) {
> > > + if (coresight_pmu_find_cpu_container(
> > > + cpu, apmt_node->proc_affinity))
> > > + continue;
> > > +
> > > + cpumask_set_cpu(cpu, &coresight_pmu->associated_cpus);
> > > + }
> > > + }
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +static int coresight_pmu_register_pmu(struct coresight_pmu *coresight_pmu)
> > > +{
> > > + int ret;
> > > + struct attribute_group **attr_groups;
> > > +
> > > + attr_groups = coresight_pmu_alloc_attr_group(coresight_pmu);
> > > + if (!attr_groups) {
> > > + ret = -ENOMEM;
> > > + return ret;
> > > + }
> > > +
> > > + ret = cpuhp_state_add_instance(coresight_pmu_cpuhp_state,
> > > + &coresight_pmu->cpuhp_node);
> > > + if (ret)
> > > + return ret;
> > > +
> > > + coresight_pmu->pmu = (struct pmu){
> > > + .task_ctx_nr = perf_invalid_context,
> > > + .module = THIS_MODULE,
> > > + .pmu_enable = coresight_pmu_enable,
> > > + .pmu_disable = coresight_pmu_disable,
> > > + .event_init = coresight_pmu_event_init,
> > > + .add = coresight_pmu_add,
> > > + .del = coresight_pmu_del,
> > > + .start = coresight_pmu_start,
> > > + .stop = coresight_pmu_stop,
> > > + .read = coresight_pmu_read,
> > > + .attr_groups = (const struct attribute_group **)attr_groups,
> > > + .capabilities = PERF_PMU_CAP_NO_EXCLUDE,
> > > + };
> > > +
> > > + /* Hardware counter init */
> > > + coresight_pmu_stop_counters(coresight_pmu);
> > > + coresight_pmu_reset_counters(coresight_pmu);
> > > +
> > > + ret = perf_pmu_register(&coresight_pmu->pmu, coresight_pmu->name, -1);
> > > + if (ret) {
> > > + cpuhp_state_remove_instance(coresight_pmu_cpuhp_state,
> > > + &coresight_pmu->cpuhp_node);
> > > + }
> > > +
> > > + return ret;
> > > +}
> > > +
> > > +static int coresight_pmu_device_probe(struct platform_device *pdev)
> > > +{
> > > + int ret;
> > > + struct coresight_pmu *coresight_pmu;
> > > +
> > > + ret = coresight_pmu_alloc(pdev, &coresight_pmu);
> > > + if (ret)
> > > + return ret;
> > > +
> > > + ret = coresight_pmu_init_mmio(coresight_pmu);
> > > + if (ret)
> > > + return ret;
> > > +
> > > + ret = coresight_pmu_request_irq(coresight_pmu);
> > > + if (ret)
> > > + return ret;
> > > +
> > > + ret = coresight_pmu_get_cpus(coresight_pmu);
> > > + if (ret)
> > > + return ret;
> > > +
> > > + ret = coresight_pmu_register_pmu(coresight_pmu);
> > > + if (ret)
> > > + return ret;
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +static int coresight_pmu_device_remove(struct platform_device *pdev)
> > > +{
> > > + struct coresight_pmu *coresight_pmu = platform_get_drvdata(pdev);
> > > +
> > > + perf_pmu_unregister(&coresight_pmu->pmu);
> > > + cpuhp_state_remove_instance(coresight_pmu_cpuhp_state,
> > > + &coresight_pmu->cpuhp_node);
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +static struct platform_driver coresight_pmu_driver = {
> > > + .driver = {
> > > + .name = "arm-system-pmu",
> > > + .suppress_bind_attrs = true,
> > > + },
> > > + .probe = coresight_pmu_device_probe,
> > > + .remove = coresight_pmu_device_remove,
> > > +};
> > > +
> > > +static void coresight_pmu_set_active_cpu(int cpu,
> > > + struct coresight_pmu *coresight_pmu)
> > > +{
> > > + cpumask_set_cpu(cpu, &coresight_pmu->active_cpu);
> > > + WARN_ON(irq_set_affinity(coresight_pmu->irq,
> > > + &coresight_pmu->active_cpu));
> > > +}
> > > +
> > > +static int coresight_pmu_cpu_online(unsigned int cpu, struct hlist_node *node)
> > > +{
> > > + struct coresight_pmu *coresight_pmu =
> > > + hlist_entry_safe(node, struct coresight_pmu, cpuhp_node);
> > > +
> > > + if (!cpumask_test_cpu(cpu, &coresight_pmu->associated_cpus))
> > > + return 0;
> > > +
> > > + /* If the PMU is already managed, there is nothing to do */
> > > + if (!cpumask_empty(&coresight_pmu->active_cpu))
> > > + return 0;
> > > +
> > > + /* Use this CPU for event counting */
> > > + coresight_pmu_set_active_cpu(cpu, coresight_pmu);
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +static int coresight_pmu_cpu_teardown(unsigned int cpu, struct hlist_node *node)
> > > +{
> > > + int dst;
> > > + struct cpumask online_supported;
> > > +
> > > + struct coresight_pmu *coresight_pmu =
> > > + hlist_entry_safe(node, struct coresight_pmu, cpuhp_node);
> > > +
> > > + /* Nothing to do if this CPU doesn't own the PMU */
> > > + if (!cpumask_test_and_clear_cpu(cpu, &coresight_pmu->active_cpu))
> > > + return 0;
> > > +
> > > + /* Choose a new CPU to migrate ownership of the PMU to */
> > > + cpumask_and(&online_supported, &coresight_pmu->associated_cpus,
> > > + cpu_online_mask);
> > > + dst = cpumask_any_but(&online_supported, cpu);
> > > + if (dst >= nr_cpu_ids)
> > > + return 0;
> > > +
> > > + /* Use this CPU for event counting */
> > > + perf_pmu_migrate_context(&coresight_pmu->pmu, cpu, dst);
> > > + coresight_pmu_set_active_cpu(dst, coresight_pmu);
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +static int __init coresight_pmu_init(void)
> > > +{
> > > + int ret;
> > > +
> > > + ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, PMUNAME,
> > > + coresight_pmu_cpu_online,
> > > + coresight_pmu_cpu_teardown);
> > > + if (ret < 0)
> > > + return ret;
> > > + coresight_pmu_cpuhp_state = ret;
> > > + return platform_driver_register(&coresight_pmu_driver);
> > > +}
> > > +
> > > +static void __exit coresight_pmu_exit(void)
> > > +{
> > > + platform_driver_unregister(&coresight_pmu_driver);
> > > + cpuhp_remove_multi_state(coresight_pmu_cpuhp_state);
> > > +}
> > > +
> > > +module_init(coresight_pmu_init);
> > > +module_exit(coresight_pmu_exit);
> > > diff --git a/drivers/perf/coresight_pmu/arm_coresight_pmu.h b/drivers/perf/coresight_pmu/arm_coresight_pmu.h
> > > new file mode 100644
> > > index 000000000000..88fb4cd3dafa
> > > --- /dev/null
> > > +++ b/drivers/perf/coresight_pmu/arm_coresight_pmu.h
> > > @@ -0,0 +1,177 @@
> > > +/* SPDX-License-Identifier: GPL-2.0
> > > + *
> > > + * ARM CoreSight PMU driver.
> > > + * Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
> > > + *
> > > + */
> > > +
> > > +#ifndef __ARM_CORESIGHT_PMU_H__
> > > +#define __ARM_CORESIGHT_PMU_H__
> > > +
> > > +#include <linux/acpi.h>
> > > +#include <linux/bitfield.h>
> > > +#include <linux/cpumask.h>
> > > +#include <linux/device.h>
> > > +#include <linux/kernel.h>
> > > +#include <linux/module.h>
> > > +#include <linux/perf_event.h>
> > > +#include <linux/platform_device.h>
> > > +#include <linux/types.h>
> > > +
> > > +#define to_coresight_pmu(p) (container_of(p, struct coresight_pmu, pmu))
> > > +
> > > +#define CORESIGHT_EXT_ATTR(_name, _func, _config) \
> > > + (&((struct dev_ext_attribute[]){ \
> > > + { \
> > > + .attr = __ATTR(_name, 0444, _func, NULL), \
> > > + .var = (void *)_config \
> > > + } \
> > > + })[0].attr.attr)
> > > +
> > > +#define CORESIGHT_FORMAT_ATTR(_name, _config) \
> > > + CORESIGHT_EXT_ATTR(_name, coresight_pmu_sysfs_format_show, \
> > > + (char *)_config)
> > > +
> > > +#define CORESIGHT_EVENT_ATTR(_name, _config) \
> > > + PMU_EVENT_ATTR_ID(_name, coresight_pmu_sysfs_event_show, _config)
> > > +
> > > +
> > > +/* Default event id mask */
> > > +#define CORESIGHT_EVENT_MASK 0xFFFFFFFFULL
> >
> > Please use GENMASK_ULL(), everywhere.
> >
>
> Acked.
>
> Thanks for all your comments, I will address it on the next version.
>
> Regards,
> Besar
>
>
> > Thanks
> > Suzuki
On 2022-07-12 17:36, Mathieu Poirier wrote:
[...]
>>> If we have decided to call this arm_system_pmu (which I am perfectly
>>> happy with), could we please stick to that name for functions that we
>>> export ?
>>>
>>> e.g,
>>> s/coresight_pmu_sysfs_event_show/arm_system_pmu_event_show()/
>>>
>>
>> Just want to confirm: is it just the public functions, or do we need to replace
>> everything that has "coresight" naming? Including the static functions, structs, filenames.
>
> I think all references to "coresight" should be changed to "arm_system_pmu",
> including filenames. That way there is no doubt this IP block is not
> related to, and does not interoperate with, any of the "coresight" IP blocks
> already supported[1] in the kernel.
>
> I have looked at the documentation[2] in the cover letter and I agree
> with an earlier comment from Sudeep that this IP has very little to do with any
> of the other CoreSight IP blocks found in the CoreSight framework[1]. Using the
> "coresight" naming convention in this driver would be _extremely_ confusing,
> especially when it comes to exported functions.
But conversely, how is it not confusing to make up completely different
names for things than what they're actually called? The CoreSight
Performance Monitoring Unit is a part of the Arm CoreSight architecture,
it says it right there on page 1. What if I instinctively associate the
name Mathieu with someone more familiar to me, so to avoid confusion I'd
prefer to call you Steve? Is that OK?
As it happens, Steve, I do actually agree with you that "coresight_" is
a bad prefix here, but only for the reason that it's too general. TBH I
think that's true of the existing Linux subsystem too, but that damage
is already done, and I'd concur that there's little value in trying to
unpick that now, despite the clear existence of products like CoreSight
DAP and CoreSight ELA which don't have all that much to do with program
trace either.
However, hindsight and inertia are hardly good reasons to double down on
poor decisions, so if I was going to vote for anything here it would be
"cspmu_", which is about as
obviously-related-to-the-thing-it-actually-is as we can get while also
being pleasantly concise.
[ And no, this isn't bikeshedding. Naming things right is *important* ]
Cheers,
Robin.
>
> Thanks,
> Steve
>
> [1]. drivers/hwtracing/coresight/
> [2]. https://developer.arm.com/documentation/ihi0091/latest
On Tue, 12 Jul 2022 at 14:13, Robin Murphy <[email protected]> wrote:
>
> On 2022-07-12 17:36, Mathieu Poirier wrote:
> [...]
> >>> If we have decied to call this arm_system_pmu, (which I am perfectly
> >>> happy with), could we please stick to that name for functions that we
> >>> export ?
> >>>
> >>> e.g,
> >>> s/coresight_pmu_sysfs_event_show/arm_system_pmu_event_show()/
> >>>
> >>
> >> Just want to confirm, is it just the public functions or do we need to replace
> >> all that has "coresight" naming ? Including the static functions, structs, filename.
> >
> > I think all references to "coresight" should be changed to "arm_system_pmu",
> > including filenames. That way there is no doubt this IP block is not
> > related, and does not interoperate, with the any of the "coresight" IP blocks
> > already supported[1] in the kernel.
> >
> > I have looked at the documentation[2] in the cover letter and I agree
> > with an earlier comment from Sudeep that this IP has very little to do with any
> > of the other CoreSight IP blocks found in the CoreSight framework[1]. Using the
> > "coresight" naming convention in this driver would be _extremely_ confusing,
> > especially when it comes to exported functions.
>
> But conversely, how is it not confusing to make up completely different
> names for things than what they're actually called? The CoreSight
> Performance Monitoring Unit is a part of the Arm CoreSight architecture,
> it says it right there on page 1. What if I instinctively associate the
> name Mathieu with someone more familiar to me, so to avoid confusion I'd
> prefer to call you Steve? Is that OK?
>
Not sure how the above helps moving the conversation forward.
> As it happens, Steve, I do actually agree with you that "coresight_" is
> a bad prefix here, but only for the reason that it's too general. TBH I
> think that's true of the existing Linux subsystem too, but that damage
> is already done, and I'd concur that there's little value in trying to
> unpick that now, despite the clear existence of products like CoreSight
> DAP and CoreSight ELA which don't have all that much to do with program
> trace either.
>
Happy to see that we are in complete agreement.
> However, hindsight and inertia are hardly good reasons to double down on
> poor decisions, so if I was going to vote for anything here it would be
> "cspmu_", which is about as
> obviously-related-to-the-thing-it-actually-is as we can get while also
> being pleasantly concise.
>
> [ And no, this isn't bikeshedding. Naming things right is *important* ]
> -----Original Message-----
> From: Robin Murphy <[email protected]>
> Sent: Wednesday, July 13, 2022 3:13 AM
> Subject: Re: [RESEND PATCH v3 1/2] perf: coresight_pmu: Add support for
> ARM CoreSight PMU driver
>
>
> On 2022-07-12 17:36, Mathieu Poirier wrote:
> [...]
>
> But conversely, how is it not confusing to make up completely different
> names for things than what they're actually called? The CoreSight
> Performance Monitoring Unit is a part of the Arm CoreSight architecture,
> it says it right there on page 1. What if I instinctively associate the
> name Mathieu with someone more familiar to me, so to avoid confusion I'd
> prefer to call you Steve? Is that OK?
>
What is the naming convention for modules under drivers/perf?
In my observation, the names there correspond to the part monitored by
the PMU. The confusion with the "coresight_pmu" naming could be that
people may think the PMU monitors the CoreSight system, i.e. the trace
system under hwtracing. However, the driver in this patch is for a new PMU
standard that monitors uncore parts. "Uncore" was considered Intel terminology, so "system" was picked instead.
Please see this thread for reference:
https://lore.kernel.org/linux-arm-kernel/20220510111318.GD27557@willie-the-truck/
> As it happens, Steve, I do actually agree with you that "coresight_" is
> a bad prefix here, but only for the reason that it's too general. TBH I
> think that's true of the existing Linux subsystem too, but that damage
> is already done, and I'd concur that there's little value in trying to
> unpick that now, despite the clear existence of products like CoreSight
> DAP and CoreSight ELA which don't have all that much to do with program
> trace either.
>
> However, hindsight and inertia are hardly good reasons to double down on
> poor decisions, so if I was going to vote for anything here it would be
> "cspmu_", which is about as
> obviously-related-to-the-thing-it-actually-is as we can get while also
> being pleasantly concise.
>
> [ And no, this isn't bikeshedding. Naming things right is *important* ]
>
I agree having the correct name is important, especially at this early stage.
A direction of what the naming should describe would be very helpful here.
> Cheers,
> Robin.
>
> >
> > Thanks,
> > Steve
> >
> > [1]. drivers/hwtracing/coresight/
> > [2]. https://developer.arm.com/documentation/ihi0091/latest
Hi Reviewers,
We still have disagreement on the naming; how do we resolve it and move forward?
Thanks,
Besar
> -----Original Message-----
> From: Besar Wicaksono
> Sent: Thursday, July 14, 2022 11:47 AM
> Subject: RE: [RESEND PATCH v3 1/2] perf: coresight_pmu: Add support for
> ARM CoreSight PMU driver
>
> [...]
Hi
On 14/07/2022 05:47, Besar Wicaksono wrote:
>
>> [...]
>>
>
> What is the naming convention for modules under drivers/perf ?
> In my observation, the names there correspond to the part monitored by
> the PMU. The confusion on using "coresight_pmu" naming could be that
> people may think the PMU monitors coresight system, i.e the trace system under hwtracing.
> However, the driver in this patch is for a new PMU standard that monitors uncore
> parts. Uncore was considered as terminology from Intel, so "system" was picked instead.
> Please see this thread for reference:
> https://lore.kernel.org/linux-arm-kernel/20220510111318.GD27557@willie-the-truck/
I think we all understand the state of affairs.
- We have an architecture specification for PMUs, the Arm CoreSight PMU
Architecture, which has absolutely no relationship with:
either CoreSight Self-Hosted Tracing (handled by the "coresight"
subsystem in the kernel under drivers/hwtracing/coresight/, with a
user-visible PMU named "cs_etm"),
or the CoreSight Architecture (except for the name). This is of little
significance in general, but has a significant impact on the "name"
users might expect for the driver/Kconfig etc.
- We want to be able to make it easier for the users/developers to
choose what they want without causing confusion.
For an end-user: Having the PMU instance named after the "System IP"
(as implemented in the driver) solves the problem, and falling back to
arm_system_pmu is a good enough choice. So let us stick with that.
Kconfig: Maybe we can choose
CONFIG_ARM_CORESIGHT_PMU_ARCH_PMU
or even
CONFIG_ARM_CORESIGHT_PMU_ARCH_SYSTEM_PMU
with appropriate help text stressing what this is and what this is
not.
Now the remaining contention is about the name of the "subsystem" and
also the dir/files. This may sound insignificant, but it is also
important to get right: e.g., it helps reviewers unambiguously
identify a change, and maintainers accepting pull requests (remember
these two PMUs (cs_etm and this one) go via different trees). Not
everyone who deals with this in the community may be aware of how
they are different.
We could choose arm_cspmu_ or simply cspmu. Given that only the
"normal" users care about the "association" with the "architecture",
and more advanced users (e.g. developers) can easily map "Kconfig"
to driver files, maybe we could even stick to "arm_syspmu"
(from "arm system pmu")?
Suzuki
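
A hypothetical Kconfig entry along these lines (the symbol name follows the
second suggestion above; the dependencies and help text are illustrative only,
not from the patch):

```kconfig
config ARM_CORESIGHT_PMU_ARCH_SYSTEM_PMU
	tristate "ARM CoreSight PMU Architecture System PMU driver"
	depends on ARM64 && ACPI
	help
	  Enables support for uncore PMUs based on the ARM CoreSight PMU
	  architecture. Note that despite the name, these PMUs have no
	  relation to the CoreSight self-hosted tracing subsystem found
	  under drivers/hwtracing/coresight/.
```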
On Thu, 21 Jul 2022 at 03:19, Suzuki K Poulose <[email protected]> wrote:
>
> [...]
>
> We could choose arm_cspmu_ or simply cspmu. Given that only the
> "normal" users care about the "association" with the "architecture",
> and more advanced users (e.g. developers) can easily map "Kconfig"
> to driver files, maybe we could even stick to "arm_syspmu"
> (from "arm system pmu")?
>
+1 on "arm_syspmu"
> Suzuki
Hi
> -----Original Message-----
> From: Mathieu Poirier <[email protected]>
> Sent: Thursday, July 21, 2022 10:36 AM
> Subject: Re: [RESEND PATCH v3 1/2] perf: coresight_pmu: Add support for
> ARM CoreSight PMU driver
>
>
> On Thu, 21 Jul 2022 at 03:19, Suzuki K Poulose <[email protected]>
> wrote:
> >
> > Hi
> >
> > On 14/07/2022 05:47, Besar Wicaksono wrote:
> > >
> > >
> > >> -----Original Message-----
> > >> From: Robin Murphy <[email protected]>
> > >> Sent: Wednesday, July 13, 2022 3:13 AM
> > >> To: Mathieu Poirier <[email protected]>; Besar Wicaksono
> > >> <[email protected]>
> > >> Cc: Suzuki K Poulose <[email protected]>;
> [email protected];
> > >> [email protected]; [email protected]; linux-arm-
> > >> [email protected]; [email protected]; linux-
> > >> [email protected]; [email protected];
> > >> [email protected]; [email protected]; Thierry
> Reding
> > >> <[email protected]>; Jonathan Hunter <[email protected]>;
> Vikram
> > >> Sethi <[email protected]>; [email protected]; [email protected]
> > >> Subject: Re: [RESEND PATCH v3 1/2] perf: coresight_pmu: Add support
> for
> > >> ARM CoreSight PMU driver
> > >>
> > >> External email: Use caution opening links or attachments
> > >>
> > >>
> > >> On 2022-07-12 17:36, Mathieu Poirier wrote:
> > >> [...]
> > >>>>> If we have decied to call this arm_system_pmu, (which I am
> perfectly
> > >>>>> happy with), could we please stick to that name for functions that
> we
> > >>>>> export ?
> > >>>>>
> > >>>>> e.g,
> > >>>>>
> > >>
> s/coresight_pmu_sysfs_event_show/arm_system_pmu_event_show()/
> > >>>>>
> > >>>>
> > >>>> Just want to confirm, is it just the public functions or do we need to
> > >> replace
> > >>>> all that has "coresight" naming ? Including the static functions, structs,
> > >> filename.
> > >>>
> > >>> I think all references to "coresight" should be changed to
> > >> "arm_system_pmu",
> > >>> including filenames. That way there is no doubt this IP block is not
> > >>> related, and does not interoperate, with the any of the "coresight" IP
> > >> blocks
> > >>> already supported[1] in the kernel.
> > >>>
> > >>> I have looked at the documentation[2] in the cover letter and I agree
> > >>> with an earlier comment from Sudeep that this IP has very little to do
> with
> > >> any
> > >>> of the other CoreSight IP blocks found in the CoreSight framework[1].
> > >> Using the
> > >>> "coresight" naming convention in this driver would be _extremely_
> > >> confusing,
> > >>> especially when it comes to exported functions.
> > >>
> > >> But conversely, how is it not confusing to make up completely different
> > >> names for things than what they're actually called? The CoreSight
> > >> Performance Monitoring Unit is a part of the Arm CoreSight
> architecture,
> > >> it says it right there on page 1. What if I instinctively associate the
> > >> name Mathieu with someone more familiar to me, so to avoid confusion
> I'd
> > >> prefer to call you Steve? Is that OK?
> > >>
> > >
> > > What is the naming convention for modules under drivers/perf ?
> > > In my observation, the names there correspond to the part monitored by
> > > the PMU. The confusion on using "coresight_pmu" naming could be that
> > > people may think the PMU monitors the coresight system, i.e. the trace
> system under hwtracing.
> > > However, the driver in this patch is for a new PMU standard that
> monitors uncore
> > > parts. "Uncore" was considered Intel terminology, so "system" was
> picked instead.
> > > Please see this thread for reference:
> > > https://lore.kernel.org/linux-arm-
> kernel/20220510111318.GD27557@willie-the-truck/
> >
> > I think we all understand the state of affairs.
> >
> > - We have an architecture specification for PMUs, the Arm CoreSight PMU
> > Architecture, which has absolutely no relationship with:
> >
> > either CoreSight Self-Hosted Tracing (handled by "coresight"
> > subsystem in the kernel under drivers/hwtracing/coresight/, with a user
> > visible pmu as "cs_etm")
> >
> > or the CoreSight Architecture (except for the name). This is of less
> > significance in general. But has a significant impact on the "name"
> > users might expect for the driver/Kconfig etc.
> >
> > - We want to be able to make it easier for the users/developers to
> > choose what they want without causing confusion.
> >
> > For an end-user: Having the PMU instance named after the "System IP"
> > (as implemented in the driver) solves the problem, and falling back to
> > arm_system_pmu is a good enough choice. So let us stick with that.
> >
> > Kconfig: Maybe we can choose
> > CONFIG_ARM_CORESIGHT_PMU_ARCH_PMU
> > or even
> > CONFIG_ARM_CORESIGHT_PMU_ARCH_SYSTEM_PMU
> >
> > with appropriate help text stressing what this is and what it is not
> > would be sufficient.
> >
CONFIG_ARM_CORESIGHT_PMU_ARCH_SYSTEM_PMU sounds good to me.
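For reference, a sketch of how the suggested Kconfig entry might look. The option name is taken from the thread; the tristate prompt, dependency line, and help text wording below are illustrative, not from the actual patch:

```
config ARM_CORESIGHT_PMU_ARCH_SYSTEM_PMU
	tristate "ARM CoreSight PMU Architecture system PMU driver"
	depends on ARM64 || COMPILE_TEST
	help
	  Support for uncore/system PMUs implementing the ARM CoreSight
	  PMU architecture and discovered via the ACPI APMT table.

	  Despite the name, this has no relation to the CoreSight
	  self-hosted trace subsystem (drivers/hwtracing/coresight)
	  or its user-visible "cs_etm" perf PMU.
```

The help text carries the "what this is and what this is not" stress discussed above.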
> > Now the remaining contention is about the name of the "subsystem" and
> > also the dir/files. This may sound insignificant. But it is also
> > important to get this right, e.g., it helps reviewers unambiguously
> > identify the change or maintainers accepting pull requests (remember
> > these two PMUs (cs_etm and this one) go via different trees.). Not
> > everyone who deals with this in the community may be aware of how
> > these are different.
> >
> > We could choose arm_cspmu_ or simply cspmu. Given that only the
> > "normal" users care about the "association" with the "architecture"
> > and more advanced users (e.g, developers) can easily map "Kconfig"
> > to driver files, maybe we could even stick to the "arm_syspmu"
> > (from "arm system pmu") ?
> >
>
> +1 on "arm_syspmu"
>
I am fine too with arm_syspmu.
If there is no objection, I am going to post a new update by the end of this week
or early next week.
Thanks,
Besar
> > Suzuki
> >
> >
> > >
> > >> As it happens, Steve, I do actually agree with you that "coresight_" is
> > >> a bad prefix here, but only for the reason that it's too general. TBH I
> > >> think that's true of the existing Linux subsystem too, but that damage
> > >> is already done, and I'd concur that there's little value in trying to
> > >> unpick that now, despite the clear existence of products like CoreSight
> > >> DAP and CoreSight ELA which don't have all that much to do with
> program
> > >> trace either.
> > >>
> > >> However, hindsight and inertia are hardly good reasons to double down
> on
> > >> poor decisions, so if I was going to vote for anything here it would be
> > >> "cspmu_", which is about as
> > >> obviously-related-to-the-thing-it-actually-is as we can get while also
> > >> being pleasantly concise.
> > >>
> > >> [ And no, this isn't bikeshedding. Naming things right is *important* ]
> > >>
> > >
> > > I agree having the correct name is important, especially at this early stage.
> > > A direction of what the naming should describe would be very helpful
> here.
> > >
> > >> Cheers,
> > >> Robin.
> > >>
> > >>>
> > >>> Thanks,
> > >>> Steve
> > >>>
> > >>> [1]. drivers/hwtracing/coresight/
> > >>> [2]. https://developer.arm.com/documentation/ihi0091/latest
> >
Hi
On 01/08/2022 23:27, Besar Wicaksono wrote:
> Hi
>
>> -----Original Message-----
>> From: Mathieu Poirier <[email protected]>
>> Sent: Thursday, July 21, 2022 10:36 AM
>> To: Suzuki K Poulose <[email protected]>
>> Cc: Besar Wicaksono <[email protected]>; Robin Murphy
>> <[email protected]>; [email protected]; [email protected];
>> [email protected]; [email protected]; linux-
>> [email protected]; [email protected];
>> [email protected]; [email protected];
>> [email protected]; Thierry Reding <[email protected]>; Jonathan
>> Hunter <[email protected]>; Vikram Sethi <[email protected]>;
>> [email protected]; [email protected]
>> Subject: Re: [RESEND PATCH v3 1/2] perf: coresight_pmu: Add support for
>> ARM CoreSight PMU driver
>>
>> On Thu, 21 Jul 2022 at 03:19, Suzuki K Poulose <[email protected]>
>> wrote:
>>>
>>> Hi
>>>
>>> On 14/07/2022 05:47, Besar Wicaksono wrote:
>>>>
>>>>
>>>>> -----Original Message-----
>>>>> From: Robin Murphy <[email protected]>
>>>>> Sent: Wednesday, July 13, 2022 3:13 AM
>>>>> To: Mathieu Poirier <[email protected]>; Besar Wicaksono
>>>>> <[email protected]>
>>>>> Cc: Suzuki K Poulose <[email protected]>;
>> [email protected];
>>>>> [email protected]; [email protected]; linux-arm-
>>>>> [email protected]; [email protected]; linux-
>>>>> [email protected]; [email protected];
>>>>> [email protected]; [email protected]; Thierry
>> Reding
>>>>> <[email protected]>; Jonathan Hunter <[email protected]>;
>> Vikram
>>>>> Sethi <[email protected]>; [email protected]; [email protected]
>>>>> Subject: Re: [RESEND PATCH v3 1/2] perf: coresight_pmu: Add support
>> for
>>>>> ARM CoreSight PMU driver
>>>>>
>>>>> On 2022-07-12 17:36, Mathieu Poirier wrote:
>>>>> [...]
>>>>>>>> If we have decided to call this arm_system_pmu (which I am
>> perfectly
>>>>>>>> happy with), could we please stick to that name for functions that
>> we
>>>>>>>> export ?
>>>>>>>>
>>>>>>>> e.g,
>>>>>>>>
>>>>>
>> s/coresight_pmu_sysfs_event_show/arm_system_pmu_event_show()/
>>>>>>>>
>>>>>>>
>>>>>>> Just want to confirm, is it just the public functions or do we need to
>>>>> replace
>>>>>>> all that have "coresight" naming? Including the static functions, structs,
>>>>> filename.
>>>>>>
>>>>>> I think all references to "coresight" should be changed to
>>>>> "arm_system_pmu",
>>>>>> including filenames. That way there is no doubt this IP block is not
>>>>>> related, and does not interoperate, with any of the "coresight" IP
>>>>> blocks
>>>>>> already supported[1] in the kernel.
>>>>>>
>>>>>> I have looked at the documentation[2] in the cover letter and I agree
>>>>>> with an earlier comment from Sudeep that this IP has very little to do
>> with
>>>>> any
>>>>>> of the other CoreSight IP blocks found in the CoreSight framework[1].
>>>>> Using the
>>>>>> "coresight" naming convention in this driver would be _extremely_
>>>>> confusing,
>>>>>> especially when it comes to exported functions.
>>>>>
>>>>> But conversely, how is it not confusing to make up completely different
>>>>> names for things than what they're actually called? The CoreSight
>>>>> Performance Monitoring Unit is a part of the Arm CoreSight
>> architecture,
>>>>> it says it right there on page 1. What if I instinctively associate the
>>>>> name Mathieu with someone more familiar to me, so to avoid confusion
>> I'd
>>>>> prefer to call you Steve? Is that OK?
>>>>>
>>>>
>>>> What is the naming convention for modules under drivers/perf ?
>>>> In my observation, the names there correspond to the part monitored by
>>>> the PMU. The confusion on using "coresight_pmu" naming could be that
>>>> people may think the PMU monitors the coresight system, i.e. the trace
>> system under hwtracing.
>>>> However, the driver in this patch is for a new PMU standard that
>> monitors uncore
>>>> parts. "Uncore" was considered Intel terminology, so "system" was
>> picked instead.
>>>> Please see this thread for reference:
>>>> https://lore.kernel.org/linux-arm-
>> kernel/20220510111318.GD27557@willie-the-truck/
>>>
>>> I think we all understand the state of affairs.
>>>
>>> - We have an architecture specification for PMUs, the Arm CoreSight PMU
>>> Architecture, which has absolutely no relationship with:
>>>
>>> either CoreSight Self-Hosted Tracing (handled by "coresight"
>>> subsystem in the kernel under drivers/hwtracing/coresight/, with a user
>>> visible pmu as "cs_etm")
>>>
>>> or the CoreSight Architecture (except for the name). This is of less
>>> significance in general. But has a significant impact on the "name"
>>> users might expect for the driver/Kconfig etc.
>>>
>>> - We want to be able to make it easier for the users/developers to
>>> choose what they want without causing confusion.
>>>
>>> For an end-user: Having the PMU instance named after the "System IP"
>>> (as implemented in the driver) solves the problem, and falling back to
>>> arm_system_pmu is a good enough choice. So let us stick with that.
>>>
>>> Kconfig: Maybe we can choose
>>> CONFIG_ARM_CORESIGHT_PMU_ARCH_PMU
>>> or even
>>> CONFIG_ARM_CORESIGHT_PMU_ARCH_SYSTEM_PMU
>>>
>>> with appropriate help text stressing what this is and what it is not
>>> would be sufficient.
>>>
>
> CONFIG_ARM_CORESIGHT_PMU_ARCH_SYSTEM_PMU sounds good to me.
>
>>> Now the remaining contention is about the name of the "subsystem" and
>>> also the dir/files. This may sound insignificant. But it is also
>>> important to get this right, e.g., it helps reviewers unambiguously
>>> identify the change or maintainers accepting pull requests (remember
>>> these two PMUs (cs_etm and this one) go via different trees.). Not
>>> everyone who deals with this in the community may be aware of how
>>> these are different.
>>>
>>> We could choose arm_cspmu_ or simply cspmu. Given that only the
>>> "normal" users care about the "association" with the "architecture"
>>> and more advanced users (e.g, developers) can easily map "Kconfig"
>>> to driver files, maybe we could even stick to the "arm_syspmu"
>>> (from "arm system pmu") ?
>>>
>>
>> +1 on "arm_syspmu"
>>
>
> I am fine too with arm_syspmu.
>
> If there is no objection, I am going to post a new update by the end of this week
> or early next week.
Unfortunately, I have been told that we have a potential problem with
"arm_syspmu" and even choosing "arm-system-pmu" for the name as it may
conflict with something that is coming soon. So we may have to go back
to something else, to avoid this exact same conversation in the near
future. Apologies for that.
Could we use "arm_cspmu" for the code/subsystem and maybe
"arm-csarch-pmu" / "arm-cs-arch-pmu" for the device name ?
Other suggestions ?
Suzuki
>
> Thanks,
> Besar
>
>>> Suzuki
>>>
>>>
>>>>
>>>>> As it happens, Steve, I do actually agree with you that "coresight_" is
>>>>> a bad prefix here, but only for the reason that it's too general. TBH I
>>>>> think that's true of the existing Linux subsystem too, but that damage
>>>>> is already done, and I'd concur that there's little value in trying to
>>>>> unpick that now, despite the clear existence of products like CoreSight
>>>>> DAP and CoreSight ELA which don't have all that much to do with
>> program
>>>>> trace either.
>>>>>
>>>>> However, hindsight and inertia are hardly good reasons to double down
>> on
>>>>> poor decisions, so if I was going to vote for anything here it would be
>>>>> "cspmu_", which is about as
>>>>> obviously-related-to-the-thing-it-actually-is as we can get while also
>>>>> being pleasantly concise.
>>>>>
>>>>> [ And no, this isn't bikeshedding. Naming things right is *important* ]
>>>>>
>>>>
>>>> I agree having the correct name is important, especially at this early stage.
>>>> A direction of what the naming should describe would be very helpful
>> here.
>>>>
>>>>> Cheers,
>>>>> Robin.
>>>>>
>>>>>>
>>>>>> Thanks,
>>>>>> Steve
>>>>>>
>>>>>> [1]. drivers/hwtracing/coresight/
>>>>>> [2]. https://developer.arm.com/documentation/ihi0091/latest
>>>
Hi
> -----Original Message-----
> From: Suzuki K Poulose <[email protected]>
> Sent: Tuesday, August 2, 2022 4:39 AM
> To: Besar Wicaksono <[email protected]>; Mathieu Poirier
> <[email protected]>
> Cc: Robin Murphy <[email protected]>; [email protected];
> [email protected]; [email protected]; linux-arm-
> [email protected]; [email protected]; linux-
> [email protected]; [email protected];
> [email protected]; [email protected]; Thierry Reding
> <[email protected]>; Jonathan Hunter <[email protected]>; Vikram
> Sethi <[email protected]>; [email protected]; [email protected]
> Subject: Re: [RESEND PATCH v3 1/2] perf: coresight_pmu: Add support for
> ARM CoreSight PMU driver
>
> Hi
>
> On 01/08/2022 23:27, Besar Wicaksono wrote:
> > Hi
> >
> >> -----Original Message-----
> >> From: Mathieu Poirier <[email protected]>
> >> Sent: Thursday, July 21, 2022 10:36 AM
> >> To: Suzuki K Poulose <[email protected]>
> >> Cc: Besar Wicaksono <[email protected]>; Robin Murphy
> >> <[email protected]>; [email protected]; [email protected];
> >> [email protected]; [email protected]; linux-
> >> [email protected]; [email protected];
> >> [email protected]; [email protected];
> >> [email protected]; Thierry Reding <[email protected]>;
> Jonathan
> >> Hunter <[email protected]>; Vikram Sethi <[email protected]>;
> >> [email protected]; [email protected]
> >> Subject: Re: [RESEND PATCH v3 1/2] perf: coresight_pmu: Add support for
> >> ARM CoreSight PMU driver
> >>
> >> On Thu, 21 Jul 2022 at 03:19, Suzuki K Poulose <[email protected]>
> >> wrote:
> >>>
> >>> Hi
> >>>
> >>> On 14/07/2022 05:47, Besar Wicaksono wrote:
> >>>>
> >>>>
> >>>>> -----Original Message-----
> >>>>> From: Robin Murphy <[email protected]>
> >>>>> Sent: Wednesday, July 13, 2022 3:13 AM
> >>>>> To: Mathieu Poirier <[email protected]>; Besar Wicaksono
> >>>>> <[email protected]>
> >>>>> Cc: Suzuki K Poulose <[email protected]>;
> >> [email protected];
> >>>>> [email protected]; [email protected]; linux-arm-
> >>>>> [email protected]; [email protected]; linux-
> >>>>> [email protected]; [email protected];
> >>>>> [email protected]; [email protected]; Thierry
> >> Reding
> >>>>> <[email protected]>; Jonathan Hunter <[email protected]>;
> >> Vikram
> >>>>> Sethi <[email protected]>; [email protected];
> [email protected]
> >>>>> Subject: Re: [RESEND PATCH v3 1/2] perf: coresight_pmu: Add
> support
> >> for
> >>>>> ARM CoreSight PMU driver
> >>>>>
> >>>>> On 2022-07-12 17:36, Mathieu Poirier wrote:
> >>>>> [...]
> >>>>>>>> If we have decided to call this arm_system_pmu (which I am
> >> perfectly
> >>>>>>>> happy with), could we please stick to that name for functions that
> >> we
> >>>>>>>> export ?
> >>>>>>>>
> >>>>>>>> e.g,
> >>>>>>>>
> >>>>>
> >> s/coresight_pmu_sysfs_event_show/arm_system_pmu_event_show()/
> >>>>>>>>
> >>>>>>>
> >>>>>>> Just want to confirm, is it just the public functions or do we need to
> >>>>> replace
> >>>>>>> all that have "coresight" naming? Including the static functions,
> structs,
> >>>>> filename.
> >>>>>>
> >>>>>> I think all references to "coresight" should be changed to
> >>>>> "arm_system_pmu",
> >>>>>> including filenames. That way there is no doubt this IP block is not
> >>>>>> related, and does not interoperate, with any of the "coresight"
> IP
> >>>>> blocks
> >>>>>> already supported[1] in the kernel.
> >>>>>>
> >>>>>> I have looked at the documentation[2] in the cover letter and I agree
> >>>>>> with an earlier comment from Sudeep that this IP has very little to do
> >> with
> >>>>> any
> >>>>>> of the other CoreSight IP blocks found in the CoreSight
> framework[1].
> >>>>> Using the
> >>>>>> "coresight" naming convention in this driver would be _extremely_
> >>>>> confusing,
> >>>>>> especially when it comes to exported functions.
> >>>>>
> >>>>> But conversely, how is it not confusing to make up completely
> different
> >>>>> names for things than what they're actually called? The CoreSight
> >>>>> Performance Monitoring Unit is a part of the Arm CoreSight
> >> architecture,
> >>>>> it says it right there on page 1. What if I instinctively associate the
> >>>>> name Mathieu with someone more familiar to me, so to avoid
> confusion
> >> I'd
> >>>>> prefer to call you Steve? Is that OK?
> >>>>>
> >>>>
> >>>> What is the naming convention for modules under drivers/perf ?
> >>>> In my observation, the names there correspond to the part monitored
> by
> >>>> the PMU. The confusion on using "coresight_pmu" naming could be
> that
> >>>> people may think the PMU monitors the coresight system, i.e. the trace
> >> system under hwtracing.
> >>>> However, the driver in this patch is for a new PMU standard that
> >> monitors uncore
> >>>> parts. "Uncore" was considered Intel terminology, so "system"
> was
> >> picked instead.
> >>>> Please see this thread for reference:
> >>>> https://lore.kernel.org/linux-arm-
> >> kernel/20220510111318.GD27557@willie-the-truck/
> >>>
> >>> I think we all understand the state of affairs.
> >>>
> >>> - We have an architecture specification for PMUs, the Arm CoreSight PMU
> >>> Architecture, which has absolutely no relationship with:
> >>>
> >>> either CoreSight Self-Hosted Tracing (handled by "coresight"
> >>> subsystem in the kernel under drivers/hwtracing/coresight/, with a user
> >>> visible pmu as "cs_etm")
> >>>
> >>> or the CoreSight Architecture (except for the name). This is of less
> >>> significance in general. But has a significant impact on the "name"
> >>> users might expect for the driver/Kconfig etc.
> >>>
> >>> - We want to be able to make it easier for the users/developers to
> >>> choose what they want without causing confusion.
> >>>
> >>> For an end-user: Having the PMU instance named after the "System IP"
> >>> (as implemented in the driver) solves the problem, and falling back to
> >>> arm_system_pmu is a good enough choice. So let us stick with that.
> >>>
> >>> Kconfig: Maybe we can choose
> >>> CONFIG_ARM_CORESIGHT_PMU_ARCH_PMU
> >>> or even
> >>> CONFIG_ARM_CORESIGHT_PMU_ARCH_SYSTEM_PMU
> >>>
> >>> with appropriate help text stressing what this is and what it is not
> >>> would be sufficient.
> >>>
> >
> > CONFIG_ARM_CORESIGHT_PMU_ARCH_SYSTEM_PMU sounds good to
> me.
> >
> >>> Now the remaining contention is about the name of the "subsystem"
> and
> >>> also the dir/files. This may sound insignificant. But it is also
> >>> important to get this right, e.g., it helps reviewers unambiguously
> >>> identify the change or maintainers accepting pull requests (remember
> >>> these two PMUs (cs_etm and this one) go via different trees.). Not
> >>> everyone who deals with this in the community may be aware of how
> >>> these are different.
> >>>
> >>> We could choose arm_cspmu_ or simply cspmu. Given that only the
> >>> "normal" users care about the "association" with the "architecture"
> >>> and more advanced users (e.g, developers) can easily map "Kconfig"
> >>> to driver files, maybe we could even stick to the "arm_syspmu"
> >>> (from "arm system pmu") ?
> >>>
> >>
> >> +1 on "arm_syspmu"
> >>
> >
> > I am fine too with arm_syspmu.
> >
> > If there is no objection, I am going to post a new update by the end of this week
> > or early next week.
>
> Unfortunately, I have been told that we have a potential problem with
> "arm_syspmu" and even choosing "arm-system-pmu" for the name as it may
> conflict with something that is coming soon. So we may have to go back
> to something else, to avoid this exact same conversation in the near
> future. Apologies for that.
>
> Could we use "arm_cspmu" for the code/subsystem and maybe
> "arm-csarch-pmu" / "arm-cs-arch-pmu" for the device name ?
>
In that case, including "CoreSight Architecture" in the name makes more sense.
I am fine with "arm_cspmu" and "arm-cs-arch-pmu".
Thanks,
Besar
> Other suggestions ?
>
> Suzuki
>
>
> >
> > Thanks,
> > Besar
> >
> >>> Suzuki
> >>>
> >>>
> >>>>
> >>>>> As it happens, Steve, I do actually agree with you that "coresight_" is
> >>>>> a bad prefix here, but only for the reason that it's too general. TBH I
> >>>>> think that's true of the existing Linux subsystem too, but that damage
> >>>>> is already done, and I'd concur that there's little value in trying to
> >>>>> unpick that now, despite the clear existence of products like
> CoreSight
> >>>>> DAP and CoreSight ELA which don't have all that much to do with
> >> program
> >>>>> trace either.
> >>>>>
> >>>>> However, hindsight and inertia are hardly good reasons to double
> down
> >> on
> >>>>> poor decisions, so if I was going to vote for anything here it would be
> >>>>> "cspmu_", which is about as
> >>>>> obviously-related-to-the-thing-it-actually-is as we can get while also
> >>>>> being pleasantly concise.
> >>>>>
> >>>>> [ And no, this isn't bikeshedding. Naming things right is *important* ]
> >>>>>
> >>>>
> >>>> I agree having the correct name is important, especially at this early
> stage.
> >>>> A direction of what the naming should describe would be very helpful
> >> here.
> >>>>
> >>>>> Cheers,
> >>>>> Robin.
> >>>>>
> >>>>>>
> >>>>>> Thanks,
> >>>>>> Steve
> >>>>>>
> >>>>>> [1]. drivers/hwtracing/coresight/
> >>>>>> [2]. https://developer.arm.com/documentation/ihi0091/latest
> >>>