2017-08-04 19:59:38

by Leeder, Neil

Subject: [PATCH 0/2] arm64 SMMUv3 PMU driver with IORT support

This adds a driver for the SMMUv3 PMU into the perf framework.
It includes an IORT update to support PM Counter Groups.

IORT has no mechanism for determining device names, so PMUs
are named based on their physical address.

Tested on Qualcomm QDF2400. perf_fuzzer ran for 4+ hours
with no failures.

Neil Leeder (2):
acpi: arm64: add iort support for PMCG
perf: add arm64 smmuv3 pmu driver

drivers/acpi/arm64/iort.c | 54 +++
drivers/perf/Kconfig | 9 +
drivers/perf/Makefile | 1 +
drivers/perf/arm_smmuv3_pmu.c | 823 ++++++++++++++++++++++++++++++++++++++++++
include/acpi/actbl2.h | 9 +-
5 files changed, 895 insertions(+), 1 deletion(-)
create mode 100644 drivers/perf/arm_smmuv3_pmu.c

--
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm Technologies Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project.


2017-08-04 19:59:49

by Leeder, Neil

Subject: [PATCH 1/2] acpi: arm64: add iort support for PMCG

Add support for the SMMU Performance Monitor Counter Group
information from ACPI. This is in preparation for its use
in the SMMU v3 PMU driver.

Signed-off-by: Neil Leeder <[email protected]>
---
drivers/acpi/arm64/iort.c | 54 +++++++++++++++++++++++++++++++++++++++++++++++
include/acpi/actbl2.h | 9 +++++++-
2 files changed, 62 insertions(+), 1 deletion(-)

diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
index a3215ee..5a998cd 100644
--- a/drivers/acpi/arm64/iort.c
+++ b/drivers/acpi/arm64/iort.c
@@ -970,6 +970,40 @@ static bool __init arm_smmu_is_coherent(struct acpi_iort_node *node)
return smmu->flags & ACPI_IORT_SMMU_COHERENT_WALK;
}

+static int __init arm_smmu_pmu_count_resources(struct acpi_iort_node *node)
+{
+ struct acpi_iort_pmcg *pmcg;
+
+ /* Retrieve PMCG specific data */
+ pmcg = (struct acpi_iort_pmcg *)node->node_data;
+
+ /*
+ * There are always 2 memory resources.
+ * If the overflow_gsiv is present then add that for a total of 3.
+ */
+ return pmcg->overflow_gsiv > 0 ? 3 : 2;
+}
+
+static void __init arm_smmu_pmu_init_resources(struct resource *res,
+ struct acpi_iort_node *node)
+{
+ struct acpi_iort_pmcg *pmcg;
+
+ /* Retrieve PMCG specific data */
+ pmcg = (struct acpi_iort_pmcg *)node->node_data;
+
+ res[0].start = pmcg->base_address;
+ res[0].end = pmcg->base_address + SZ_4K - 1;
+ res[0].flags = IORESOURCE_MEM;
+ res[1].start = pmcg->base_address + SZ_64K;
+ res[1].end = pmcg->base_address + SZ_64K + SZ_4K - 1;
+ res[1].flags = IORESOURCE_MEM;
+
+ if (pmcg->overflow_gsiv)
+ acpi_iort_register_irq(pmcg->overflow_gsiv, "overflow",
+ ACPI_EDGE_SENSITIVE, &res[2]);
+}
+
struct iort_iommu_config {
const char *name;
int (*iommu_init)(struct acpi_iort_node *node);
@@ -993,6 +1027,12 @@ struct iort_iommu_config {
.iommu_init_resources = arm_smmu_init_resources
};

+static const struct iort_iommu_config iort_arm_smmu_pmcg_cfg __initconst = {
+ .name = "arm-smmu-pmu",
+ .iommu_count_resources = arm_smmu_pmu_count_resources,
+ .iommu_init_resources = arm_smmu_pmu_init_resources
+};
+
static __init
const struct iort_iommu_config *iort_get_iommu_cfg(struct acpi_iort_node *node)
{
@@ -1001,6 +1041,8 @@ const struct iort_iommu_config *iort_get_iommu_cfg(struct acpi_iort_node *node)
return &iort_arm_smmu_v3_cfg;
case ACPI_IORT_NODE_SMMU:
return &iort_arm_smmu_cfg;
+ case ACPI_IORT_NODE_PMCG:
+ return &iort_arm_smmu_pmcg_cfg;
default:
return NULL;
}
@@ -1056,6 +1098,15 @@ static int __init iort_add_smmu_platform_device(struct acpi_iort_node *node)
if (ret)
goto dev_put;

+ /* End of init for PMCG */
+ if (node->type == ACPI_IORT_NODE_PMCG) {
+ ret = platform_device_add(pdev);
+ if (ret)
+ goto dev_put;
+
+ return 0;
+ }
+
/*
* We expect the dma masks to be equivalent for
* all SMMUs set-ups
@@ -1131,6 +1182,9 @@ static void __init iort_init_platform_devices(void)
acpi_free_fwnode_static(fwnode);
return;
}
+ } else if (iort_node->type == ACPI_IORT_NODE_PMCG) {
+ if (iort_add_smmu_platform_device(iort_node))
+ return;
}

iort_node = ACPI_ADD_PTR(struct acpi_iort_node, iort_node,
diff --git a/include/acpi/actbl2.h b/include/acpi/actbl2.h
index 707dda74..2169b6f 100644
--- a/include/acpi/actbl2.h
+++ b/include/acpi/actbl2.h
@@ -695,7 +695,8 @@ enum acpi_iort_node_type {
ACPI_IORT_NODE_NAMED_COMPONENT = 0x01,
ACPI_IORT_NODE_PCI_ROOT_COMPLEX = 0x02,
ACPI_IORT_NODE_SMMU = 0x03,
- ACPI_IORT_NODE_SMMU_V3 = 0x04
+ ACPI_IORT_NODE_SMMU_V3 = 0x04,
+ ACPI_IORT_NODE_PMCG = 0x05
};

struct acpi_iort_id_mapping {
@@ -811,6 +812,12 @@ struct acpi_iort_smmu_v3 {
#define ACPI_IORT_SMMU_V3_COHACC_OVERRIDE (1)
#define ACPI_IORT_SMMU_V3_HTTU_OVERRIDE (1<<1)

+struct acpi_iort_pmcg {
+ u64 base_address; /* PMCG base address */
+ u32 overflow_gsiv;
+ u32 node_reference;
+};
+
/*******************************************************************************
*
* IVRS - I/O Virtualization Reporting Structure
--
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm Technologies Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project.

2017-08-04 20:00:01

by Leeder, Neil

Subject: [PATCH 2/2] perf: add arm64 smmuv3 pmu driver

Adds a new driver to support the SMMU v3 PMU and adds it into the
perf events framework.

Each SMMU node may have multiple PMUs associated with it, each of
which may support different events.

PMUs are named smmu_0_<phys_addr_page> where <phys_addr_page>
is the physical page address of the SMMU PMCG.
For example, the SMMU PMCG at 0xff88840000 is named smmu_0_ff88840.

Filtering by stream id is done by specifying filtering parameters
with the event. Options are:
filter_enable - 0 = no filtering, 1 = filtering enabled
filter_span - 0 = exact match, 1 = pattern match
filter_sec - applies to non-secure (0) or secure (1) namespace
filter_stream_id - pattern to filter against
Further filtering information is available in the SMMU documentation.

Example: perf stat -e smmu_0_ff88840/transaction,filter_enable=1,
filter_span=1,filter_stream_id=0x42/ -a pwd
Applies filter pattern 0x42 to transaction events.
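With the config1 format bindings defined in the driver (filter_enable
at bit 34, filter_span at bit 32, filter_stream_id in bits 0-31), this
example corresponds to config1 = (1 << 34) | (1 << 32) | 0x42.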

SMMU events are not attributable to a CPU, so task mode and sampling
are not supported.

Signed-off-by: Neil Leeder <[email protected]>
---
drivers/perf/Kconfig | 9 +
drivers/perf/Makefile | 1 +
drivers/perf/arm_smmuv3_pmu.c | 813 ++++++++++++++++++++++++++++++++++++++++++
3 files changed, 823 insertions(+)
create mode 100644 drivers/perf/arm_smmuv3_pmu.c

diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig
index e5197ff..e7721d1 100644
--- a/drivers/perf/Kconfig
+++ b/drivers/perf/Kconfig
@@ -17,6 +17,15 @@ config ARM_PMU_ACPI
depends on ARM_PMU && ACPI
def_bool y

+config ARM_SMMUV3_PMU
+ bool "ARM SMMUv3 PMU"
+ depends on PERF_EVENTS && ARM64 && ACPI
+ help
+ Provides support for the SMMU version 3 performance monitor unit (PMU)
+ on ARM-based systems.
+ Adds the SMMU PMU into the perf events subsystem for
+ monitoring SMMU performance events.
+
config QCOM_L2_PMU
bool "Qualcomm Technologies L2-cache PMU"
depends on ARCH_QCOM && ARM64 && ACPI
diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile
index 6420bd4..3012f5e 100644
--- a/drivers/perf/Makefile
+++ b/drivers/perf/Makefile
@@ -1,5 +1,6 @@
obj-$(CONFIG_ARM_PMU) += arm_pmu.o arm_pmu_platform.o
obj-$(CONFIG_ARM_PMU_ACPI) += arm_pmu_acpi.o
+obj-$(CONFIG_ARM_SMMUV3_PMU) += arm_smmuv3_pmu.o
obj-$(CONFIG_QCOM_L2_PMU) += qcom_l2_pmu.o
obj-$(CONFIG_QCOM_L3_PMU) += qcom_l3_pmu.o
obj-$(CONFIG_XGENE_PMU) += xgene_pmu.o
diff --git a/drivers/perf/arm_smmuv3_pmu.c b/drivers/perf/arm_smmuv3_pmu.c
new file mode 100644
index 0000000..1e70791
--- /dev/null
+++ b/drivers/perf/arm_smmuv3_pmu.c
@@ -0,0 +1,813 @@
+/* Copyright (c) 2017 The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/*
+ * This driver adds support for perf events to use the Performance
+ * Monitor Counter Groups (PMCG) associated with an SMMUv3 node
+ * to monitor that node.
+ *
+ * Devices are named smmu_0_<phys_addr_page> where <phys_addr_page>
+ * is the physical page address of the SMMU PMCG.
+ * For example, the SMMU PMCG at 0xff88840000 is named smmu_0_ff88840.
+ *
+ * Filtering by stream id is done by specifying filtering parameters
+ * with the event. Options are:
+ * filter_enable - 0 = no filtering, 1 = filtering enabled
+ * filter_span - 0 = exact match, 1 = pattern match
+ * filter_sec - filter applies to non-secure (0) or secure (1) namespace
+ * filter_stream_id - pattern to filter against
+ * Further filtering information is available in the SMMU documentation.
+ *
+ * Example: perf stat -e smmu_0_ff88840/transaction,filter_enable=1,
+ * filter_span=1,filter_stream_id=0x42/ -a pwd
+ * Applies filter pattern 0x42 to transaction events.
+ *
+ * SMMU events are not attributable to a CPU, so task mode and sampling
+ * are not supported.
+ */
+
+#include <linux/acpi.h>
+#include <linux/acpi_iort.h>
+#include <linux/bitops.h>
+#include <linux/cpuhotplug.h>
+#include <linux/cpumask.h>
+#include <linux/device.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/msi.h>
+#include <linux/perf_event.h>
+#include <linux/platform_device.h>
+#include <linux/smp.h>
+#include <linux/sysfs.h>
+#include <linux/types.h>
+
+#include <asm/local64.h>
+
+#define SMMU_PMCG_EVCNTR0 0x0
+#define SMMU_PMCG_EVCNTR(n, stride) (SMMU_PMCG_EVCNTR0 + (n) * (stride))
+#define SMMU_PMCG_EVTYPER0 0x400
+#define SMMU_PMCG_EVTYPER(n) (SMMU_PMCG_EVTYPER0 + (n) * 4)
+#define SMMU_PMCG_EVTYPER_SEC_SID_SHIFT 30
+#define SMMU_PMCG_EVTYPER_SID_SPAN_SHIFT 29
+#define SMMU_PMCG_EVTYPER_EVENT_MASK GENMASK(15, 0)
+#define SMMU_PMCG_SVR0 0x600
+#define SMMU_PMCG_SVR(n, stride) (SMMU_PMCG_SVR0 + (n) * (stride))
+#define SMMU_PMCG_SMR0 0xA00
+#define SMMU_PMCG_SMR(n) (SMMU_PMCG_SMR0 + (n) * 4)
+#define SMMU_PMCG_CNTENSET0 0xC00
+#define SMMU_PMCG_CNTENCLR0 0xC20
+#define SMMU_PMCG_INTENSET0 0xC40
+#define SMMU_PMCG_INTENCLR0 0xC60
+#define SMMU_PMCG_OVSCLR0 0xC80
+#define SMMU_PMCG_OVSSET0 0xCC0
+#define SMMU_PMCG_CAPR 0xD88
+#define SMMU_PMCG_SCR 0xDF8
+#define SMMU_PMCG_CFGR 0xE00
+#define SMMU_PMCG_CFGR_SID_FILTER_TYPE BIT(23)
+#define SMMU_PMCG_CFGR_CAPTURE BIT(22)
+#define SMMU_PMCG_CFGR_MSI BIT(21)
+#define SMMU_PMCG_CFGR_RELOC_CTRS BIT(20)
+#define SMMU_PMCG_CFGR_SIZE_MASK GENMASK(13, 8)
+#define SMMU_PMCG_CFGR_SIZE_SHIFT 8
+#define SMMU_PMCG_CFGR_COUNTER_SIZE_32 31
+#define SMMU_PMCG_CFGR_NCTR_MASK GENMASK(5, 0)
+#define SMMU_PMCG_CFGR_NCTR_SHIFT 0
+#define SMMU_PMCG_CR 0xE04
+#define SMMU_PMCG_CR_ENABLE BIT(0)
+#define SMMU_PMCG_CEID0 0xE20
+#define SMMU_PMCG_CEID1 0xE28
+#define SMMU_PMCG_IRQ_CTRL 0xE50
+#define SMMU_PMCG_IRQ_CTRL_IRQEN BIT(0)
+#define SMMU_PMCG_IRQ_CTRLACK 0xE54
+#define SMMU_PMCG_IRQ_CTRLACK_IRQEN BIT(0)
+#define SMMU_PMCG_IRQ_CFG0 0xE58
+#define SMMU_PMCG_IRQ_CFG0_ADDR_MASK GENMASK(51, 2)
+#define SMMU_PMCG_IRQ_CFG1 0xE60
+#define SMMU_PMCG_IRQ_CFG2 0xE64
+#define SMMU_PMCG_IRQ_STATUS 0xE68
+#define SMMU_PMCG_IRQ_STATUS_IRQ_ABT BIT(0)
+#define SMMU_PMCG_AIDR 0xE70
+
+#define SMMU_COUNTER_RELOAD BIT(31)
+#define SMMU_DEFAULT_FILTER_SEC 0
+#define SMMU_DEFAULT_FILTER_SPAN 1
+#define SMMU_DEFAULT_FILTER_STREAM_ID GENMASK(31, 0)
+
+#define SMMU_MAX_COUNTERS 64
+#define SMMU_MAX_EVENT_ID 128
+#define SMMU_NUM_EVENTS_U32 (SMMU_MAX_EVENT_ID / (sizeof(u32) * BITS_PER_BYTE))
+
+#define SMMU_PA_SHIFT 12
+
+/* Events */
+#define SMMU_PMU_CYCLES 0
+#define SMMU_PMU_TRANSACTION 1
+#define SMMU_PMU_TLB_MISS 2
+#define SMMU_PMU_CONFIG_CACHE_MISS 3
+#define SMMU_PMU_TRANS_TABLE_WALK 4
+#define SMMU_PMU_CONFIG_STRUCT_ACCESS 5
+#define SMMU_PMU_PCIE_ATS_TRANS_RQ 6
+#define SMMU_PMU_PCIE_ATS_TRANS_PASSED 7
+
+static int cpuhp_state_num;
+
+struct smmu_pmu {
+ struct hlist_node node;
+ struct perf_event *events[SMMU_MAX_COUNTERS];
+ DECLARE_BITMAP(used_counters, SMMU_MAX_COUNTERS);
+ DECLARE_BITMAP(supported_events, SMMU_MAX_EVENT_ID);
+ unsigned int irq;
+ unsigned int on_cpu;
+ struct pmu pmu;
+ unsigned int num_counters;
+ struct platform_device *pdev;
+ void __iomem *reg_base;
+ void __iomem *reloc_base;
+ u64 counter_present_mask;
+ u64 counter_mask;
+ bool reg_size_32;
+};
+
+#define to_smmu_pmu(p) (container_of(p, struct smmu_pmu, pmu))
+
+#define SMMU_PMU_EVENT_ATTR_EXTRACTOR(_name, _config, _size, _shift) \
+ static inline u32 get_##_name(struct perf_event *event) \
+ { \
+ return (event->attr._config >> (_shift)) & \
+ GENMASK_ULL((_size) - 1, 0); \
+ }
+
+SMMU_PMU_EVENT_ATTR_EXTRACTOR(event, config, 16, 0);
+SMMU_PMU_EVENT_ATTR_EXTRACTOR(filter_stream_id, config1, 32, 0);
+SMMU_PMU_EVENT_ATTR_EXTRACTOR(filter_span, config1, 1, 32);
+SMMU_PMU_EVENT_ATTR_EXTRACTOR(filter_sec, config1, 1, 33);
+SMMU_PMU_EVENT_ATTR_EXTRACTOR(filter_enable, config1, 1, 34);
+
+static inline void smmu_pmu_enable(struct pmu *pmu)
+{
+ struct smmu_pmu *smmu_pmu = to_smmu_pmu(pmu);
+
+ writel(SMMU_PMCG_CR_ENABLE, smmu_pmu->reg_base + SMMU_PMCG_CR);
+ writel(SMMU_PMCG_IRQ_CTRL_IRQEN,
+ smmu_pmu->reg_base + SMMU_PMCG_IRQ_CTRL);
+}
+
+static inline void smmu_pmu_disable(struct pmu *pmu)
+{
+ struct smmu_pmu *smmu_pmu = to_smmu_pmu(pmu);
+
+ writel(0, smmu_pmu->reg_base + SMMU_PMCG_CR);
+ writel(0, smmu_pmu->reg_base + SMMU_PMCG_IRQ_CTRL);
+}
+
+static inline void smmu_pmu_counter_set_value(struct smmu_pmu *smmu_pmu,
+ u32 idx, u64 value)
+{
+ if (smmu_pmu->reg_size_32)
+ writel(value, smmu_pmu->reloc_base + SMMU_PMCG_EVCNTR(idx, 4));
+ else
+ writeq(value, smmu_pmu->reloc_base + SMMU_PMCG_EVCNTR(idx, 8));
+}
+
+static inline u64 smmu_pmu_counter_get_value(struct smmu_pmu *smmu_pmu, u32 idx)
+{
+ u64 value;
+
+ if (smmu_pmu->reg_size_32)
+ value = readl(smmu_pmu->reloc_base + SMMU_PMCG_EVCNTR(idx, 4));
+ else
+ value = readq(smmu_pmu->reloc_base + SMMU_PMCG_EVCNTR(idx, 8));
+
+ return value;
+}
+
+static inline void smmu_pmu_counter_enable(struct smmu_pmu *smmu_pmu, u32 idx)
+{
+ writeq(BIT(idx), smmu_pmu->reg_base + SMMU_PMCG_CNTENSET0);
+}
+
+static inline void smmu_pmu_counter_disable(struct smmu_pmu *smmu_pmu, u32 idx)
+{
+ writeq(BIT(idx), smmu_pmu->reg_base + SMMU_PMCG_CNTENCLR0);
+}
+
+static inline void smmu_pmu_interrupt_enable(struct smmu_pmu *smmu_pmu, u32 idx)
+{
+ writeq(BIT(idx), smmu_pmu->reg_base + SMMU_PMCG_INTENSET0);
+}
+
+static inline void smmu_pmu_interrupt_disable(struct smmu_pmu *smmu_pmu,
+ u32 idx)
+{
+ writeq(BIT(idx), smmu_pmu->reg_base + SMMU_PMCG_INTENCLR0);
+}
+
+static void smmu_pmu_reset(struct smmu_pmu *smmu_pmu)
+{
+ unsigned int i;
+
+ for (i = 0; i < smmu_pmu->num_counters; i++) {
+ smmu_pmu_counter_disable(smmu_pmu, i);
+ smmu_pmu_interrupt_disable(smmu_pmu, i);
+ }
+ smmu_pmu_disable(&smmu_pmu->pmu);
+}
+
+static inline void smmu_pmu_set_evtyper(struct smmu_pmu *smmu_pmu, u32 idx,
+ u32 val)
+{
+ writel(val, smmu_pmu->reg_base + SMMU_PMCG_EVTYPER(idx));
+}
+
+static inline void smmu_pmu_set_smr(struct smmu_pmu *smmu_pmu, u32 idx, u32 val)
+{
+ writel(val, smmu_pmu->reg_base + SMMU_PMCG_SMR(idx));
+}
+
+static inline u64 smmu_pmu_getreset_ovsr(struct smmu_pmu *smmu_pmu)
+{
+ u64 result = readq_relaxed(smmu_pmu->reloc_base + SMMU_PMCG_OVSSET0);
+
+ writeq(result, smmu_pmu->reloc_base + SMMU_PMCG_OVSCLR0);
+ return result;
+}
+
+static inline bool smmu_pmu_has_overflowed(struct smmu_pmu *smmu_pmu, u64 ovsr)
+{
+ return !!(ovsr & smmu_pmu->counter_present_mask);
+}
+
+static void smmu_pmu_event_update(struct perf_event *event)
+{
+ struct hw_perf_event *hwc = &event->hw;
+ struct smmu_pmu *smmu_pmu = to_smmu_pmu(event->pmu);
+ u64 delta, prev, now;
+ u32 idx = hwc->idx;
+
+ do {
+ prev = local64_read(&hwc->prev_count);
+ now = smmu_pmu_counter_get_value(smmu_pmu, idx);
+ } while (local64_cmpxchg(&hwc->prev_count, prev, now) != prev);
+
+ /* handle overflow. */
+ delta = now - prev;
+ delta &= smmu_pmu->counter_mask;
+
+ local64_add(delta, &event->count);
+}
+
+static void smmu_pmu_set_period(struct smmu_pmu *smmu_pmu,
+ struct hw_perf_event *hwc)
+{
+ u32 idx = hwc->idx;
+ u64 new;
+
+ /*
+ * We limit the max period to half the max counter value of the smallest
+ * counter size, so that even in the case of extreme interrupt latency
+ * the counter will (hopefully) not wrap past its initial value.
+ */
+ new = SMMU_COUNTER_RELOAD;
+
+ local64_set(&hwc->prev_count, new);
+ smmu_pmu_counter_set_value(smmu_pmu, idx, new);
+}
+
+static irqreturn_t smmu_pmu_handle_irq(int irq_num, void *data)
+{
+ struct smmu_pmu *smmu_pmu = data;
+ u64 ovsr;
+ unsigned int idx;
+
+ ovsr = smmu_pmu_getreset_ovsr(smmu_pmu);
+ if (!smmu_pmu_has_overflowed(smmu_pmu, ovsr))
+ return IRQ_NONE;
+
+ for_each_set_bit(idx, (unsigned long *)&ovsr, smmu_pmu->num_counters) {
+ struct perf_event *event = smmu_pmu->events[idx];
+ struct hw_perf_event *hwc;
+
+ if (WARN_ON_ONCE(!event))
+ continue;
+
+ smmu_pmu_event_update(event);
+ hwc = &event->hw;
+
+ smmu_pmu_set_period(smmu_pmu, hwc);
+ }
+
+ return IRQ_HANDLED;
+}
+
+static int smmu_pmu_get_event_idx(struct smmu_pmu *smmu_pmu)
+{
+ unsigned int idx;
+ unsigned int num_ctrs = smmu_pmu->num_counters;
+
+ idx = find_first_zero_bit(smmu_pmu->used_counters, num_ctrs);
+ if (idx == num_ctrs)
+ /* The counters are all in use. */
+ return -EAGAIN;
+
+ set_bit(idx, smmu_pmu->used_counters);
+
+ return idx;
+}
+
+/*
+ * Implementation of abstract pmu functionality required by
+ * the core perf events code.
+ */
+
+static int smmu_pmu_event_init(struct perf_event *event)
+{
+ struct hw_perf_event *hwc = &event->hw;
+ struct perf_event *sibling;
+ struct smmu_pmu *smmu_pmu;
+ u32 event_id;
+
+ if (event->attr.type != event->pmu->type)
+ return -ENOENT;
+
+ smmu_pmu = to_smmu_pmu(event->pmu);
+
+ if (hwc->sample_period) {
+ dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
+ "Sampling not supported\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (event->cpu < 0) {
+ dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
+ "Per-task mode not supported\n");
+ return -EOPNOTSUPP;
+ }
+
+ /* We cannot filter accurately so we just don't allow it. */
+ if (event->attr.exclude_user || event->attr.exclude_kernel ||
+ event->attr.exclude_hv || event->attr.exclude_idle) {
+ dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
+ "Can't exclude execution levels\n");
+ return -EOPNOTSUPP;
+ }
+
+ /* Verify specified event is supported on this PMU */
+ event_id = get_event(event);
+ if ((event_id >= SMMU_MAX_EVENT_ID) ||
+ (!test_bit(event_id, smmu_pmu->supported_events))) {
+ dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
+ "Invalid event %d for this PMU\n",
+ event_id);
+ return -EINVAL;
+ }
+
+ /* Don't allow groups with mixed PMUs, except for s/w events */
+ if (event->group_leader->pmu != event->pmu &&
+ !is_software_event(event->group_leader)) {
+ dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
+ "Can't create mixed PMU group\n");
+ return -EINVAL;
+ }
+
+ list_for_each_entry(sibling, &event->group_leader->sibling_list,
+ group_entry)
+ if (sibling->pmu != event->pmu &&
+ !is_software_event(sibling)) {
+ dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
+ "Can't create mixed PMU group\n");
+ return -EINVAL;
+ }
+
+ /* Ensure all events in a group are on the same cpu */
+ if ((event->group_leader != event) &&
+ (event->cpu != event->group_leader->cpu)) {
+ dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
+ "Can't create group on CPUs %d and %d",
+ event->cpu, event->group_leader->cpu);
+ return -EINVAL;
+ }
+
+ hwc->idx = -1;
+
+ /*
+ * Ensure all events are on the same cpu so all events are in the
+ * same cpu context, to avoid races on pmu_enable etc.
+ */
+ event->cpu = smmu_pmu->on_cpu;
+
+ return 0;
+}
+
+static void smmu_pmu_event_start(struct perf_event *event, int flags)
+{
+ struct smmu_pmu *smmu_pmu = to_smmu_pmu(event->pmu);
+ struct hw_perf_event *hwc = &event->hw;
+ int idx = hwc->idx;
+ u32 evtyper;
+ u32 filter_sec;
+ u32 filter_span;
+ u32 filter_stream_id;
+
+ hwc->state = 0;
+
+ smmu_pmu_set_period(smmu_pmu, hwc);
+
+ if (get_filter_enable(event)) {
+ filter_sec = get_filter_sec(event);
+ filter_span = get_filter_span(event);
+ filter_stream_id = get_filter_stream_id(event);
+ } else {
+ filter_sec = SMMU_DEFAULT_FILTER_SEC;
+ filter_span = SMMU_DEFAULT_FILTER_SPAN;
+ filter_stream_id = SMMU_DEFAULT_FILTER_STREAM_ID;
+ }
+
+ evtyper = get_event(event) |
+ filter_span << SMMU_PMCG_EVTYPER_SID_SPAN_SHIFT |
+ filter_sec << SMMU_PMCG_EVTYPER_SEC_SID_SHIFT;
+
+ smmu_pmu_set_evtyper(smmu_pmu, idx, evtyper);
+ smmu_pmu_set_smr(smmu_pmu, idx, filter_stream_id);
+ smmu_pmu_interrupt_enable(smmu_pmu, idx);
+ smmu_pmu_counter_enable(smmu_pmu, idx);
+}
+
+static void smmu_pmu_event_stop(struct perf_event *event, int flags)
+{
+ struct smmu_pmu *smmu_pmu = to_smmu_pmu(event->pmu);
+ struct hw_perf_event *hwc = &event->hw;
+ int idx = hwc->idx;
+
+ if (hwc->state & PERF_HES_STOPPED)
+ return;
+
+ smmu_pmu_interrupt_disable(smmu_pmu, idx);
+ smmu_pmu_counter_disable(smmu_pmu, idx);
+
+ if (flags & PERF_EF_UPDATE)
+ smmu_pmu_event_update(event);
+ hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE;
+}
+
+static int smmu_pmu_event_add(struct perf_event *event, int flags)
+{
+ struct hw_perf_event *hwc = &event->hw;
+ int idx;
+ struct smmu_pmu *smmu_pmu = to_smmu_pmu(event->pmu);
+
+ idx = smmu_pmu_get_event_idx(smmu_pmu);
+ if (idx < 0)
+ return idx;
+
+ hwc->idx = idx;
+ hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
+ smmu_pmu->events[idx] = event;
+ local64_set(&hwc->prev_count, 0);
+
+ if (flags & PERF_EF_START)
+ smmu_pmu_event_start(event, flags);
+
+ /* Propagate changes to the userspace mapping. */
+ perf_event_update_userpage(event);
+
+ return 0;
+}
+
+static void smmu_pmu_event_del(struct perf_event *event, int flags)
+{
+ struct hw_perf_event *hwc = &event->hw;
+ struct smmu_pmu *smmu_pmu = to_smmu_pmu(event->pmu);
+ int idx = hwc->idx;
+
+ smmu_pmu_event_stop(event, flags | PERF_EF_UPDATE);
+ smmu_pmu->events[idx] = NULL;
+ clear_bit(idx, smmu_pmu->used_counters);
+
+ perf_event_update_userpage(event);
+}
+
+static void smmu_pmu_event_read(struct perf_event *event)
+{
+ smmu_pmu_event_update(event);
+}
+
+/* cpumask */
+
+static ssize_t smmu_pmu_cpumask_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct smmu_pmu *smmu_pmu = to_smmu_pmu(dev_get_drvdata(dev));
+
+ return cpumap_print_to_pagebuf(true, buf, cpumask_of(smmu_pmu->on_cpu));
+}
+
+static struct device_attribute smmu_pmu_cpumask_attr =
+ __ATTR(cpumask, 0444, smmu_pmu_cpumask_show, NULL);
+
+static struct attribute *smmu_pmu_cpumask_attrs[] = {
+ &smmu_pmu_cpumask_attr.attr,
+ NULL,
+};
+
+static struct attribute_group smmu_pmu_cpumask_group = {
+ .attrs = smmu_pmu_cpumask_attrs,
+};
+
+/* Events */
+
+ssize_t smmu_pmu_event_show(struct device *dev,
+ struct device_attribute *attr, char *page)
+{
+ struct perf_pmu_events_attr *pmu_attr;
+
+ pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr);
+
+ return sprintf(page, "event=0x%02llx\n", pmu_attr->id);
+}
+
+#define SMMU_EVENT_ATTR(_name, _id) \
+ (&((struct perf_pmu_events_attr[]) { \
+ { .attr = __ATTR(_name, 0444, smmu_pmu_event_show, NULL), \
+ .id = _id, } \
+ })[0].attr.attr)
+
+static struct attribute *smmu_pmu_events[] = {
+ SMMU_EVENT_ATTR(cycles, SMMU_PMU_CYCLES),
+ SMMU_EVENT_ATTR(transaction, SMMU_PMU_TRANSACTION),
+ SMMU_EVENT_ATTR(tlb_miss, SMMU_PMU_TLB_MISS),
+ SMMU_EVENT_ATTR(config_cache_miss, SMMU_PMU_CONFIG_CACHE_MISS),
+ SMMU_EVENT_ATTR(trans_table_walk, SMMU_PMU_TRANS_TABLE_WALK),
+ SMMU_EVENT_ATTR(config_struct_access, SMMU_PMU_CONFIG_STRUCT_ACCESS),
+ SMMU_EVENT_ATTR(pcie_ats_trans_rq, SMMU_PMU_PCIE_ATS_TRANS_RQ),
+ SMMU_EVENT_ATTR(pcie_ats_trans_passed, SMMU_PMU_PCIE_ATS_TRANS_PASSED),
+ NULL
+};
+
+static umode_t smmu_pmu_event_is_visible(struct kobject *kobj,
+ struct attribute *attr, int unused)
+{
+ struct device *dev = kobj_to_dev(kobj);
+ struct smmu_pmu *smmu_pmu = to_smmu_pmu(dev_get_drvdata(dev));
+ struct perf_pmu_events_attr *pmu_attr;
+
+ pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr.attr);
+
+ if (test_bit(pmu_attr->id, smmu_pmu->supported_events))
+ return attr->mode;
+
+ return 0;
+}
+static struct attribute_group smmu_pmu_events_group = {
+ .name = "events",
+ .attrs = smmu_pmu_events,
+ .is_visible = smmu_pmu_event_is_visible,
+};
+
+/* Formats */
+PMU_FORMAT_ATTR(event, "config:0-15");
+PMU_FORMAT_ATTR(filter_stream_id, "config1:0-31");
+PMU_FORMAT_ATTR(filter_span, "config1:32");
+PMU_FORMAT_ATTR(filter_sec, "config1:33");
+PMU_FORMAT_ATTR(filter_enable, "config1:34");
+
+static struct attribute *smmu_pmu_formats[] = {
+ &format_attr_event.attr,
+ &format_attr_filter_stream_id.attr,
+ &format_attr_filter_span.attr,
+ &format_attr_filter_sec.attr,
+ &format_attr_filter_enable.attr,
+ NULL
+};
+
+static struct attribute_group smmu_pmu_format_group = {
+ .name = "format",
+ .attrs = smmu_pmu_formats,
+};
+
+static const struct attribute_group *smmu_pmu_attr_grps[] = {
+ &smmu_pmu_cpumask_group,
+ &smmu_pmu_events_group,
+ &smmu_pmu_format_group,
+ NULL,
+};
+
+/*
+ * Generic device handlers
+ */
+
+static unsigned int get_num_counters(struct smmu_pmu *smmu_pmu)
+{
+ u32 cfgr = readl(smmu_pmu->reg_base + SMMU_PMCG_CFGR);
+
+ return ((cfgr & SMMU_PMCG_CFGR_NCTR_MASK) >> SMMU_PMCG_CFGR_NCTR_SHIFT)
+ + 1;
+}
+
+static int smmu_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
+{
+ struct smmu_pmu *smmu_pmu;
+ unsigned int target;
+
+ smmu_pmu = hlist_entry_safe(node, struct smmu_pmu, node);
+ if (cpu != smmu_pmu->on_cpu)
+ return 0;
+
+ target = cpumask_any_but(cpu_online_mask, cpu);
+ if (target >= nr_cpu_ids)
+ return 0;
+
+ perf_pmu_migrate_context(&smmu_pmu->pmu, cpu, target);
+ smmu_pmu->on_cpu = target;
+ WARN_ON(irq_set_affinity(smmu_pmu->irq, cpumask_of(target)));
+
+ return 0;
+}
+
+static int smmu_pmu_probe(struct platform_device *pdev)
+{
+ struct smmu_pmu *smmu_pmu;
+ struct resource *mem_resource_0, *mem_resource_1;
+ void __iomem *mem_map_0, *mem_map_1;
+ unsigned int reg_size;
+ int err;
+ int irq;
+ u32 ceid[SMMU_NUM_EVENTS_U32];
+ u64 ceid_64;
+
+ smmu_pmu = devm_kzalloc(&pdev->dev, sizeof(*smmu_pmu), GFP_KERNEL);
+ if (!smmu_pmu)
+ return -ENOMEM;
+
+ platform_set_drvdata(pdev, smmu_pmu);
+ smmu_pmu->pmu = (struct pmu) {
+ .task_ctx_nr = perf_invalid_context,
+ .pmu_enable = smmu_pmu_enable,
+ .pmu_disable = smmu_pmu_disable,
+ .event_init = smmu_pmu_event_init,
+ .add = smmu_pmu_event_add,
+ .del = smmu_pmu_event_del,
+ .start = smmu_pmu_event_start,
+ .stop = smmu_pmu_event_stop,
+ .read = smmu_pmu_event_read,
+ .attr_groups = smmu_pmu_attr_grps,
+ };
+
+ mem_resource_0 = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ mem_map_0 = devm_ioremap_resource(&pdev->dev, mem_resource_0);
+
+ if (IS_ERR(mem_map_0)) {
+ dev_err(&pdev->dev, "Can't map SMMU PMU @%pa\n",
+ &mem_resource_0->start);
+ return PTR_ERR(mem_map_0);
+ }
+
+ smmu_pmu->reg_base = mem_map_0;
+ smmu_pmu->pmu.name =
+ devm_kasprintf(&pdev->dev, GFP_KERNEL, "smmu_0_%llx",
+ (mem_resource_0->start) >> SMMU_PA_SHIFT);
+
+ if (!smmu_pmu->pmu.name) {
+ dev_err(&pdev->dev, "Failed to create PMU name\n");
+ return -ENOMEM;
+ }
+
+ ceid_64 = readq(smmu_pmu->reg_base + SMMU_PMCG_CEID0);
+ ceid[0] = ceid_64 & GENMASK(31, 0);
+ ceid[1] = ceid_64 >> 32;
+ ceid_64 = readq(smmu_pmu->reg_base + SMMU_PMCG_CEID1);
+ ceid[2] = ceid_64 & GENMASK(31, 0);
+ ceid[3] = ceid_64 >> 32;
+ bitmap_from_u32array(smmu_pmu->supported_events, SMMU_MAX_EVENT_ID,
+ ceid, SMMU_NUM_EVENTS_U32);
+
+ /* Determine if page 1 is present */
+ if (readl(smmu_pmu->reg_base + SMMU_PMCG_CFGR) &
+ SMMU_PMCG_CFGR_RELOC_CTRS) {
+ mem_resource_1 = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ mem_map_1 = devm_ioremap_resource(&pdev->dev, mem_resource_1);
+
+ if (IS_ERR(mem_map_1)) {
+ dev_err(&pdev->dev, "Can't map SMMU PMU @%pa\n",
+ &mem_resource_1->start);
+ return PTR_ERR(mem_map_1);
+ }
+ smmu_pmu->reloc_base = mem_map_1;
+ } else {
+ smmu_pmu->reloc_base = smmu_pmu->reg_base;
+ }
+
+ irq = platform_get_irq(pdev, 0);
+ if (irq < 0) {
+ dev_err(&pdev->dev,
+ "Failed to get valid irq for smmu @%pa\n",
+ &mem_resource_0->start);
+ return irq;
+ }
+
+ err = devm_request_irq(&pdev->dev, irq, smmu_pmu_handle_irq,
+ IRQF_NOBALANCING | IRQF_SHARED | IRQF_NO_THREAD,
+ "smmu-pmu", smmu_pmu);
+ if (err) {
+ dev_err(&pdev->dev,
+ "Unable to request IRQ%d for SMMU PMU counters\n", irq);
+ return err;
+ }
+
+ smmu_pmu->irq = irq;
+
+ /* Pick one CPU to be the preferred one to use */
+ smmu_pmu->on_cpu = smp_processor_id();
+ WARN_ON(irq_set_affinity(smmu_pmu->irq, cpumask_of(smmu_pmu->on_cpu)));
+
+ smmu_pmu->num_counters = get_num_counters(smmu_pmu);
+ smmu_pmu->pdev = pdev;
+ smmu_pmu->counter_present_mask = GENMASK(smmu_pmu->num_counters - 1, 0);
+ reg_size = (readl(smmu_pmu->reg_base + SMMU_PMCG_CFGR) &
+ SMMU_PMCG_CFGR_SIZE_MASK) >> SMMU_PMCG_CFGR_SIZE_SHIFT;
+ smmu_pmu->reg_size_32 = (reg_size == SMMU_PMCG_CFGR_COUNTER_SIZE_32);
+ smmu_pmu->counter_mask = GENMASK_ULL(reg_size, 0);
+
+ smmu_pmu_reset(smmu_pmu);
+
+ err = cpuhp_state_add_instance_nocalls(cpuhp_state_num,
+ &smmu_pmu->node);
+ if (err) {
+ dev_err(&pdev->dev, "Error %d registering hotplug", err);
+ return err;
+ }
+
+ err = perf_pmu_register(&smmu_pmu->pmu, smmu_pmu->pmu.name, -1);
+ if (err) {
+ dev_err(&pdev->dev, "Error %d registering SMMU PMU\n", err);
+ goto out_unregister;
+ }
+
+ dev_info(&pdev->dev, "Registered SMMU PMU @ %pa using %d counters\n",
+ &mem_resource_0->start, smmu_pmu->num_counters);
+
+ return err;
+
+out_unregister:
+ cpuhp_state_remove_instance_nocalls(cpuhp_state_num, &smmu_pmu->node);
+ return err;
+}
+
+static int smmu_pmu_remove(struct platform_device *pdev)
+{
+ struct smmu_pmu *smmu_pmu = platform_get_drvdata(pdev);
+
+ perf_pmu_unregister(&smmu_pmu->pmu);
+ cpuhp_state_remove_instance_nocalls(cpuhp_state_num, &smmu_pmu->node);
+
+ return 0;
+}
+
+static void smmu_pmu_shutdown(struct platform_device *pdev)
+{
+ struct smmu_pmu *smmu_pmu = platform_get_drvdata(pdev);
+
+ smmu_pmu_disable(&smmu_pmu->pmu);
+}
+
+static struct platform_driver smmu_pmu_driver = {
+ .driver = {
+ .name = "arm-smmu-pmu",
+ },
+ .probe = smmu_pmu_probe,
+ .remove = smmu_pmu_remove,
+ .shutdown = smmu_pmu_shutdown,
+};
+
+static int __init arm_smmu_pmu_init(void)
+{
+ cpuhp_state_num = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN,
+ "perf/arm/smmupmu:online",
+ NULL,
+ smmu_pmu_offline_cpu);
+ if (cpuhp_state_num < 0)
+ return cpuhp_state_num;
+
+ return platform_driver_register(&smmu_pmu_driver);
+}
+module_init(arm_smmu_pmu_init);
+
+static void __exit arm_smmu_pmu_exit(void)
+{
+ platform_driver_unregister(&smmu_pmu_driver);
+}
+
+module_exit(arm_smmu_pmu_exit);
+MODULE_LICENSE("GPL v2");
--
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm Technologies Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project.

2017-08-07 11:17:20

by Robin Murphy

Subject: Re: [PATCH 1/2] acpi: arm64: add iort support for PMCG

Hi Neil,

On 04/08/17 20:59, Neil Leeder wrote:
> Add support for the SMMU Performance Monitor Counter Group
> information from ACPI. This is in preparation for its use
> in the SMMU v3 PMU driver.
>
> Signed-off-by: Neil Leeder <[email protected]>
> ---
> drivers/acpi/arm64/iort.c | 54 +++++++++++++++++++++++++++++++++++++++++++++++
> include/acpi/actbl2.h | 9 +++++++-
> 2 files changed, 62 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
> index a3215ee..5a998cd 100644
> --- a/drivers/acpi/arm64/iort.c
> +++ b/drivers/acpi/arm64/iort.c
> @@ -970,6 +970,40 @@ static bool __init arm_smmu_is_coherent(struct acpi_iort_node *node)
> return smmu->flags & ACPI_IORT_SMMU_COHERENT_WALK;
> }
>
> +static int __init arm_smmu_pmu_count_resources(struct acpi_iort_node *node)
> +{
> + struct acpi_iort_pmcg *pmcg;
> +
> + /* Retrieve PMCG specific data */
> + pmcg = (struct acpi_iort_pmcg *)node->node_data;
> +
> + /*
> + * There are always 2 memory resources.
> + * If the overflow_gsiv is present then add that for a total of 3.
> + */
> + return pmcg->overflow_gsiv > 0 ? 3 : 2;
> +}
> +
> +static void __init arm_smmu_pmu_init_resources(struct resource *res,
> + struct acpi_iort_node *node)
> +{
> + struct acpi_iort_pmcg *pmcg;
> +
> + /* Retrieve PMCG specific data */
> + pmcg = (struct acpi_iort_pmcg *)node->node_data;
> +
> + res[0].start = pmcg->base_address;
> + res[0].end = pmcg->base_address + SZ_4K - 1;
> + res[0].flags = IORESOURCE_MEM;
> + res[1].start = pmcg->base_address + SZ_64K;

Ugh, I see there's a nasty spec hole here - IORT only defines one base
address, but SMMUv3 says *both* pages have imp-def base addresses. I
guess this assumption of them being in consecutive 64K regions holds for
your platform, but as things stand it doesn't appear possible to rely on
it generally :(

I'll follow up internally to see if we need to get IORT and/or SBSA
updated or clarified.

> + res[1].end = pmcg->base_address + SZ_64K + SZ_4K - 1;
> + res[1].flags = IORESOURCE_MEM;
> +
> + if (pmcg->overflow_gsiv)
> + acpi_iort_register_irq(pmcg->overflow_gsiv, "overflow",
> + ACPI_EDGE_SENSITIVE, &res[2]);
> +}
> +
> struct iort_iommu_config {
> const char *name;
> int (*iommu_init)(struct acpi_iort_node *node);
> @@ -993,6 +1027,12 @@ struct iort_iommu_config {
> .iommu_init_resources = arm_smmu_init_resources
> };
>
> +static const struct iort_iommu_config iort_arm_smmu_pmcg_cfg __initconst = {
> + .name = "arm-smmu-pmu",
> + .iommu_count_resources = arm_smmu_pmu_count_resources,
> + .iommu_init_resources = arm_smmu_pmu_init_resources
> +};
> +
> static __init
> const struct iort_iommu_config *iort_get_iommu_cfg(struct acpi_iort_node *node)
> {
> @@ -1001,6 +1041,8 @@ const struct iort_iommu_config *iort_get_iommu_cfg(struct acpi_iort_node *node)
> return &iort_arm_smmu_v3_cfg;
> case ACPI_IORT_NODE_SMMU:
> return &iort_arm_smmu_cfg;
> + case ACPI_IORT_NODE_PMCG:
> + return &iort_arm_smmu_pmcg_cfg;
> default:
> return NULL;
> }
> @@ -1056,6 +1098,15 @@ static int __init iort_add_smmu_platform_device(struct acpi_iort_node *node)
> if (ret)
> goto dev_put;
>
> + /* End of init for PMCG */
> + if (node->type == ACPI_IORT_NODE_PMCG) {
> + ret = platform_device_add(pdev);
> + if (ret)
> + goto dev_put;
> +
> + return 0;
> + }
> +
> /*
> * We expect the dma masks to be equivalent for
> * all SMMUs set-ups
> @@ -1131,6 +1182,9 @@ static void __init iort_init_platform_devices(void)
> acpi_free_fwnode_static(fwnode);
> return;
> }
> + } else if (iort_node->type == ACPI_IORT_NODE_PMCG) {
> + if (iort_add_smmu_platform_device(iort_node))
> + return;
> }
>
> iort_node = ACPI_ADD_PTR(struct acpi_iort_node, iort_node,
> diff --git a/include/acpi/actbl2.h b/include/acpi/actbl2.h
> index 707dda74..2169b6f 100644
> --- a/include/acpi/actbl2.h
> +++ b/include/acpi/actbl2.h

Hopefully these changes are merely here for reference, but just in case
it needs to be said, they should go as a separate patch via ACPICA
(presumably there's also some corresponding iASL stuff to compile PMCG
nodes in the first place).

Robin.

> @@ -695,7 +695,8 @@ enum acpi_iort_node_type {
> ACPI_IORT_NODE_NAMED_COMPONENT = 0x01,
> ACPI_IORT_NODE_PCI_ROOT_COMPLEX = 0x02,
> ACPI_IORT_NODE_SMMU = 0x03,
> - ACPI_IORT_NODE_SMMU_V3 = 0x04
> + ACPI_IORT_NODE_SMMU_V3 = 0x04,
> + ACPI_IORT_NODE_PMCG = 0x05
> };
>
> struct acpi_iort_id_mapping {
> @@ -811,6 +812,12 @@ struct acpi_iort_smmu_v3 {
> #define ACPI_IORT_SMMU_V3_COHACC_OVERRIDE (1)
> #define ACPI_IORT_SMMU_V3_HTTU_OVERRIDE (1<<1)
>
> +struct acpi_iort_pmcg {
> + u64 base_address; /* PMCG base address */
> + u32 overflow_gsiv;
> + u32 node_reference;
> +};
> +
> /*******************************************************************************
> *
> * IVRS - I/O Virtualization Reporting Structure
>

2017-08-07 14:31:30

by Robin Murphy

Subject: Re: [PATCH 2/2] perf: add arm64 smmuv3 pmu driver

On 04/08/17 20:59, Neil Leeder wrote:
> Adds a new driver to support the SMMU v3 PMU and adds it into the
> perf events framework.
>
> Each SMMU node may have multiple PMUs associated with it, each of
> which may support different events.
>
> PMUs are named smmu_0_<phys_addr_page> where <phys_addr_page>
> is the physical page address of the SMMU PMCG.
> For example, the SMMU PMCG at 0xff88840000 is named smmu_0_ff88840.

This seems a bit rough - is it at all feasible to at least chase the node
reference and namespace them by the associated component, e.g. something
like "arm-smmu-v3.x:pmcg.y"? The user can always dig the associated
physical address out of /proc/iomem if necessary.

> Filtering by stream id is done by specifying filtering parameters
> with the event. Options are:
> filter_enable - 0 = no filtering, 1 = filtering enabled
> filter_span - 0 = exact match, 1 = pattern match
> filter_sec - applies to non-secure (0) or secure (1) namespace

I'm a little dubious as to how useful it is to expose this, since we
can't see the true value of SCR.SO so have no way of knowing what we'll
actually end up counting.

> filter_stream_id - pattern to filter against
> Further filtering information is available in the SMMU documentation.
>
> Example: perf stat -e smmu_0_ff88840/transaction,filter_enable=1,
> filter_span=1,filter_stream_id=0x42/ -a pwd
> Applies filter pattern 0x42 to transaction events.
>
> SMMU events are not attributable to a CPU, so task mode and sampling
> are not supported.
>
> Signed-off-by: Neil Leeder <[email protected]>
> ---
> drivers/perf/Kconfig | 9 +
> drivers/perf/Makefile | 1 +
> drivers/perf/arm_smmuv3_pmu.c | 813 ++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 823 insertions(+)
> create mode 100644 drivers/perf/arm_smmuv3_pmu.c
>
> diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig
> index e5197ff..e7721d1 100644
> --- a/drivers/perf/Kconfig
> +++ b/drivers/perf/Kconfig
> @@ -17,6 +17,15 @@ config ARM_PMU_ACPI
> depends on ARM_PMU && ACPI
> def_bool y
>
> +config ARM_SMMUV3_PMU
> + bool "ARM SMMUv3 PMU"
> + depends on PERF_EVENTS && ARM64 && ACPI

PERF_EVENTS is already a top-level dependency now.
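i.e. this can simply be:

        depends on ARM64 && ACPI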

> + help
> + Provides support for the SMMU version 3 performance monitor unit (PMU)
> + on ARM-based systems.
> + Adds the SMMU PMU into the perf events subsystem for
> + monitoring SMMU performance events.
> +
> config QCOM_L2_PMU
> bool "Qualcomm Technologies L2-cache PMU"
> depends on ARCH_QCOM && ARM64 && ACPI
> diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile
> index 6420bd4..3012f5e 100644
> --- a/drivers/perf/Makefile
> +++ b/drivers/perf/Makefile
> @@ -1,5 +1,6 @@
> obj-$(CONFIG_ARM_PMU) += arm_pmu.o arm_pmu_platform.o
> obj-$(CONFIG_ARM_PMU_ACPI) += arm_pmu_acpi.o
> +obj-$(CONFIG_ARM_SMMUV3_PMU) += arm_smmuv3_pmu.o
> obj-$(CONFIG_QCOM_L2_PMU) += qcom_l2_pmu.o
> obj-$(CONFIG_QCOM_L3_PMU) += qcom_l3_pmu.o
> obj-$(CONFIG_XGENE_PMU) += xgene_pmu.o
> diff --git a/drivers/perf/arm_smmuv3_pmu.c b/drivers/perf/arm_smmuv3_pmu.c
> new file mode 100644
> index 0000000..1e70791
> --- /dev/null
> +++ b/drivers/perf/arm_smmuv3_pmu.c
> @@ -0,0 +1,813 @@
> +/* Copyright (c) 2017 The Linux Foundation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 and
> + * only version 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + */
> +
> +/*
> + * This driver adds support for perf events to use the Performance
> + * Monitor Counter Groups (PMCG) associated with an SMMUv3 node
> + * to monitor that node.
> + *
> + * Devices are named smmu_0_<phys_addr_page> where <phys_addr_page>
> + * is the physical page address of the SMMU PMCG.
> + * For example, the SMMU PMCG at 0xff88840000 is named smmu_0_ff88840.
> + *
> + * Filtering by stream id is done by specifying filtering parameters
> + * with the event. Options are:
> + * filter_enable - 0 = no filtering, 1 = filtering enabled
> + * filter_span - 0 = exact match, 1 = pattern match
> + * filter_sec - filter applies to non-secure (0) or secure (1) namespace
> + * filter_stream_id - pattern to filter against
> + * Further filtering information is available in the SMMU documentation.
> + *
> + * Example: perf stat -e smmu_0_ff88840/transaction,filter_enable=1,
> + * filter_span=1,filter_stream_id=0x42/ -a pwd
> + * Applies filter pattern 0x42 to transaction events.
> + *
> + * SMMU events are not attributable to a CPU, so task mode and sampling
> + * are not supported.
> + */
> +
> +#include <linux/acpi.h>
> +#include <linux/acpi_iort.h>
> +#include <linux/bitops.h>
> +#include <linux/cpuhotplug.h>
> +#include <linux/cpumask.h>
> +#include <linux/device.h>
> +#include <linux/errno.h>
> +#include <linux/interrupt.h>
> +#include <linux/irq.h>
> +#include <linux/kernel.h>
> +#include <linux/list.h>
> +#include <linux/msi.h>

Is MSI support planned?

> +#include <linux/perf_event.h>
> +#include <linux/platform_device.h>
> +#include <linux/smp.h>
> +#include <linux/sysfs.h>
> +#include <linux/types.h>
> +
> +#include <asm/local64.h>
> +
> +#define SMMU_PMCG_EVCNTR0 0x0
> +#define SMMU_PMCG_EVCNTR(n, stride) (SMMU_PMCG_EVCNTR0 + (n) * (stride))
> +#define SMMU_PMCG_EVTYPER0 0x400
> +#define SMMU_PMCG_EVTYPER(n) (SMMU_PMCG_EVTYPER0 + (n) * 4)
> +#define SMMU_PMCG_EVTYPER_SEC_SID_SHIFT 30
> +#define SMMU_PMCG_EVTYPER_SID_SPAN_SHIFT 29
> +#define SMMU_PMCG_EVTYPER_EVENT_MASK GENMASK(15, 0)
> +#define SMMU_PMCG_SVR0 0x600
> +#define SMMU_PMCG_SVR(n, stride) (SMMU_PMCG_SVR0 + (n) * (stride))
> +#define SMMU_PMCG_SMR0 0xA00
> +#define SMMU_PMCG_SMR(n) (SMMU_PMCG_SMR0 + (n) * 4)
> +#define SMMU_PMCG_CNTENSET0 0xC00
> +#define SMMU_PMCG_CNTENCLR0 0xC20
> +#define SMMU_PMCG_INTENSET0 0xC40
> +#define SMMU_PMCG_INTENCLR0 0xC60
> +#define SMMU_PMCG_OVSCLR0 0xC80
> +#define SMMU_PMCG_OVSSET0 0xCC0
> +#define SMMU_PMCG_CAPR 0xD88
> +#define SMMU_PMCG_SCR 0xDF8
> +#define SMMU_PMCG_CFGR 0xE00
> +#define SMMU_PMCG_CFGR_SID_FILTER_TYPE BIT(23)
> +#define SMMU_PMCG_CFGR_CAPTURE BIT(22)
> +#define SMMU_PMCG_CFGR_MSI BIT(21)
> +#define SMMU_PMCG_CFGR_RELOC_CTRS BIT(20)
> +#define SMMU_PMCG_CFGR_SIZE_MASK GENMASK(13, 8)
> +#define SMMU_PMCG_CFGR_SIZE_SHIFT 8
> +#define SMMU_PMCG_CFGR_COUNTER_SIZE_32 31
> +#define SMMU_PMCG_CFGR_NCTR_MASK GENMASK(5, 0)
> +#define SMMU_PMCG_CFGR_NCTR_SHIFT 0
> +#define SMMU_PMCG_CR 0xE04
> +#define SMMU_PMCG_CR_ENABLE BIT(0)
> +#define SMMU_PMCG_CEID0 0xE20
> +#define SMMU_PMCG_CEID1 0xE28
> +#define SMMU_PMCG_IRQ_CTRL 0xE50
> +#define SMMU_PMCG_IRQ_CTRL_IRQEN BIT(0)
> +#define SMMU_PMCG_IRQ_CTRLACK 0xE54
> +#define SMMU_PMCG_IRQ_CTRLACK_IRQEN BIT(0)
> +#define SMMU_PMCG_IRQ_CFG0 0xE58
> +#define SMMU_PMCG_IRQ_CFG0_ADDR_MASK GENMASK(51, 2)
> +#define SMMU_PMCG_IRQ_CFG1 0xE60
> +#define SMMU_PMCG_IRQ_CFG2 0xE64
> +#define SMMU_PMCG_IRQ_STATUS 0xE68
> +#define SMMU_PMCG_IRQ_STATUS_IRQ_ABT BIT(0)
> +#define SMMU_PMCG_AIDR 0xE70

Several of these are unused (although at least the IRQ_CFG registers
probably should be, to zero them to a known state if we aren't
supporting MSIs).
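E.g. at probe time, something along these lines (just a sketch):

        /* No MSI support yet, so put the IRQ config in a known state */
        writeq(0, smmu_pmu->reg_base + SMMU_PMCG_IRQ_CFG0);
        writel(0, smmu_pmu->reg_base + SMMU_PMCG_IRQ_CFG1);
        writel(0, smmu_pmu->reg_base + SMMU_PMCG_IRQ_CFG2);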

> +#define SMMU_COUNTER_RELOAD BIT(31)
> +#define SMMU_DEFAULT_FILTER_SEC 0
> +#define SMMU_DEFAULT_FILTER_SPAN 1
> +#define SMMU_DEFAULT_FILTER_STREAM_ID GENMASK(31, 0)
> +
> +#define SMMU_MAX_COUNTERS 64
> +#define SMMU_MAX_EVENT_ID 128
> +#define SMMU_NUM_EVENTS_U32 (SMMU_MAX_EVENT_ID / (sizeof(u32) * BITS_PER_BYTE))
> +
> +#define SMMU_PA_SHIFT 12
> +
> +/* Events */
> +#define SMMU_PMU_CYCLES 0
> +#define SMMU_PMU_TRANSACTION 1
> +#define SMMU_PMU_TLB_MISS 2
> +#define SMMU_PMU_CONFIG_CACHE_MISS 3
> +#define SMMU_PMU_TRANS_TABLE_WALK 4
> +#define SMMU_PMU_CONFIG_STRUCT_ACCESS 5
> +#define SMMU_PMU_PCIE_ATS_TRANS_RQ 6
> +#define SMMU_PMU_PCIE_ATS_TRANS_PASSED 7
> +
> +static int cpuhp_state_num;
> +
> +struct smmu_pmu {
> + struct hlist_node node;
> + struct perf_event *events[SMMU_MAX_COUNTERS];
> + DECLARE_BITMAP(used_counters, SMMU_MAX_COUNTERS);
> + DECLARE_BITMAP(supported_events, SMMU_MAX_EVENT_ID);
> + unsigned int irq;
> + unsigned int on_cpu;
> + struct pmu pmu;
> + unsigned int num_counters;
> + struct platform_device *pdev;
> + void __iomem *reg_base;
> + void __iomem *reloc_base;
> + u64 counter_present_mask;
> + u64 counter_mask;
> + bool reg_size_32;

This guy is redundant...

> +};
> +
> +#define to_smmu_pmu(p) (container_of(p, struct smmu_pmu, pmu))
> +
> +#define SMMU_PMU_EVENT_ATTR_EXTRACTOR(_name, _config, _size, _shift) \
> + static inline u32 get_##_name(struct perf_event *event) \
> + { \
> + return (event->attr._config >> (_shift)) & \
> + GENMASK_ULL((_size) - 1, 0); \
> + }
> +
> +SMMU_PMU_EVENT_ATTR_EXTRACTOR(event, config, 16, 0);
> +SMMU_PMU_EVENT_ATTR_EXTRACTOR(filter_stream_id, config1, 32, 0);
> +SMMU_PMU_EVENT_ATTR_EXTRACTOR(filter_span, config1, 1, 32);
> +SMMU_PMU_EVENT_ATTR_EXTRACTOR(filter_sec, config1, 1, 33);
> +SMMU_PMU_EVENT_ATTR_EXTRACTOR(filter_enable, config1, 1, 34);
> +
> +static inline void smmu_pmu_enable(struct pmu *pmu)
> +{
> + struct smmu_pmu *smmu_pmu = to_smmu_pmu(pmu);
> +
> + writel(SMMU_PMCG_CR_ENABLE, smmu_pmu->reg_base + SMMU_PMCG_CR);
> + writel(SMMU_PMCG_IRQ_CTRL_IRQEN,
> + smmu_pmu->reg_base + SMMU_PMCG_IRQ_CTRL);
> +}
> +
> +static inline void smmu_pmu_disable(struct pmu *pmu)
> +{
> + struct smmu_pmu *smmu_pmu = to_smmu_pmu(pmu);
> +
> + writel(0, smmu_pmu->reg_base + SMMU_PMCG_CR);
> + writel(0, smmu_pmu->reg_base + SMMU_PMCG_IRQ_CTRL);
> +}
> +
> +static inline void smmu_pmu_counter_set_value(struct smmu_pmu *smmu_pmu,
> + u32 idx, u64 value)
> +{
> + if (smmu_pmu->reg_size_32)

...since it would be just as efficient to directly test
smmu_pmu->counter_mask & BIT(32) here and below.
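E.g. (sketch, untested):

        if (smmu_pmu->counter_mask & BIT(32))
                writeq(value, smmu_pmu->reloc_base + SMMU_PMCG_EVCNTR(idx, 8));
        else
                writel(value, smmu_pmu->reloc_base + SMMU_PMCG_EVCNTR(idx, 4));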

> + writel(value, smmu_pmu->reloc_base + SMMU_PMCG_EVCNTR(idx, 4));
> + else
> + writeq(value, smmu_pmu->reloc_base + SMMU_PMCG_EVCNTR(idx, 8));
> +}
> +
> +static inline u64 smmu_pmu_counter_get_value(struct smmu_pmu *smmu_pmu, u32 idx)
> +{
> + u64 value;
> +
> + if (smmu_pmu->reg_size_32)
> + value = readl(smmu_pmu->reloc_base + SMMU_PMCG_EVCNTR(idx, 4));
> + else
> + value = readq(smmu_pmu->reloc_base + SMMU_PMCG_EVCNTR(idx, 8));
> +
> + return value;
> +}
> +
> +static inline void smmu_pmu_counter_enable(struct smmu_pmu *smmu_pmu, u32 idx)
> +{
> + writeq(BIT(idx), smmu_pmu->reg_base + SMMU_PMCG_CNTENSET0);
> +}
> +
> +static inline void smmu_pmu_counter_disable(struct smmu_pmu *smmu_pmu, u32 idx)
> +{
> + writeq(BIT(idx), smmu_pmu->reg_base + SMMU_PMCG_CNTENCLR0);
> +}
> +
> +static inline void smmu_pmu_interrupt_enable(struct smmu_pmu *smmu_pmu, u32 idx)
> +{
> + writeq(BIT(idx), smmu_pmu->reg_base + SMMU_PMCG_INTENSET0);
> +}
> +
> +static inline void smmu_pmu_interrupt_disable(struct smmu_pmu *smmu_pmu,
> + u32 idx)
> +{
> + writeq(BIT(idx), smmu_pmu->reg_base + SMMU_PMCG_INTENCLR0);
> +}
> +
> +static void smmu_pmu_reset(struct smmu_pmu *smmu_pmu)
> +{
> + unsigned int i;
> +
> + for (i = 0; i < smmu_pmu->num_counters; i++) {
> + smmu_pmu_counter_disable(smmu_pmu, i);
> + smmu_pmu_interrupt_disable(smmu_pmu, i);
> + }

Surely it would be far quicker and simpler to do this?

writeq(smmu_pmu->counter_present_mask,
smmu_pmu->reg_base + SMMU_PMCG_CNTENCLR0);
writeq(smmu_pmu->counter_present_mask,
smmu_pmu->reg_base + SMMU_PMCG_INTENCLR0);

> + smmu_pmu_disable(&smmu_pmu->pmu);
> +}
> +
> +static inline void smmu_pmu_set_evtyper(struct smmu_pmu *smmu_pmu, u32 idx,
> + u32 val)
> +{
> + writel(val, smmu_pmu->reg_base + SMMU_PMCG_EVTYPER(idx));
> +}
> +
> +static inline void smmu_pmu_set_smr(struct smmu_pmu *smmu_pmu, u32 idx, u32 val)
> +{
> + writel(val, smmu_pmu->reg_base + SMMU_PMCG_SMR(idx));
> +}
> +
> +static inline u64 smmu_pmu_getreset_ovsr(struct smmu_pmu *smmu_pmu)
> +{
> + u64 result = readq_relaxed(smmu_pmu->reloc_base + SMMU_PMCG_OVSSET0);
> +
> + writeq(result, smmu_pmu->reloc_base + SMMU_PMCG_OVSCLR0);
> + return result;
> +}
> +
> +static inline bool smmu_pmu_has_overflowed(struct smmu_pmu *smmu_pmu, u64 ovsr)
> +{
> + return !!(ovsr & smmu_pmu->counter_present_mask);
> +}

Personally, I find these helpers abstracting simple reads/writes
actually make the code harder to follow, especially when they're each
used a grand total of once. That may well just be me, though.

> +static void smmu_pmu_event_update(struct perf_event *event)
> +{
> + struct hw_perf_event *hwc = &event->hw;
> + struct smmu_pmu *smmu_pmu = to_smmu_pmu(event->pmu);
> + u64 delta, prev, now;
> + u32 idx = hwc->idx;
> +
> + do {
> + prev = local64_read(&hwc->prev_count);
> + now = smmu_pmu_counter_get_value(smmu_pmu, idx);
> + } while (local64_cmpxchg(&hwc->prev_count, prev, now) != prev);
> +
> + /* handle overflow. */
> + delta = now - prev;
> + delta &= smmu_pmu->counter_mask;
> +
> + local64_add(delta, &event->count);
> +}
> +
> +static void smmu_pmu_set_period(struct smmu_pmu *smmu_pmu,
> + struct hw_perf_event *hwc)
> +{
> + u32 idx = hwc->idx;
> + u64 new;
> +
> + /*
> + * We limit the max period to half the max counter value of the smallest
> + * counter size, so that even in the case of extreme interrupt latency
> + * the counter will (hopefully) not wrap past its initial value.
> + */

Having once fought to properly understand the underlying logic I despise
this unhelpfully-vague comment, but that's not your fault ;)

> + new = SMMU_COUNTER_RELOAD;

Given that we *are* following the "use the top counter bit as an
implicit overflow bit" pattern of arm_pmu, it feels a bit weird to not
use the actual half-maximum value here (especially since it's easily
computable from counter_mask). I'm about 85% sure it probably still
works, but as ever inconsistency breeds uncertainty.
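Something like the following would at least be consistent (sketch):

        new = smmu_pmu->counter_mask >> 1;

i.e. half the maximum value of whatever counter size is actually
implemented.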
> + local64_set(&hwc->prev_count, new);
> + smmu_pmu_counter_set_value(smmu_pmu, idx, new);
> +}
> +
> +static irqreturn_t smmu_pmu_handle_irq(int irq_num, void *data)
> +{
> + struct smmu_pmu *smmu_pmu = data;
> + u64 ovsr;
> + unsigned int idx;
> +
> + ovsr = smmu_pmu_getreset_ovsr(smmu_pmu);
> + if (!smmu_pmu_has_overflowed(smmu_pmu, ovsr))

You have an architectural guarantee that unimplemented bits of OVSSET0
are RES0, so checking !ovsr is sufficient.
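i.e. simply:

        if (!ovsr)
                return IRQ_NONE;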

> + return IRQ_NONE;
> +
> + for_each_set_bit(idx, (unsigned long *)&ovsr, smmu_pmu->num_counters) {
> + struct perf_event *event = smmu_pmu->events[idx];
> + struct hw_perf_event *hwc;
> +
> + if (WARN_ON_ONCE(!event))
> + continue;
> +
> + smmu_pmu_event_update(event);
> + hwc = &event->hw;
> +
> + smmu_pmu_set_period(smmu_pmu, hwc);
> + }
> +
> + return IRQ_HANDLED;
> +}
> +
> +static int smmu_pmu_get_event_idx(struct smmu_pmu *smmu_pmu)
> +{
> + unsigned int idx;
> + unsigned int num_ctrs = smmu_pmu->num_counters;
> +
> + idx = find_first_zero_bit(smmu_pmu->used_counters, num_ctrs);
> + if (idx == num_ctrs)
> + /* The counters are all in use. */
> + return -EAGAIN;
> +
> + set_bit(idx, smmu_pmu->used_counters);
> +
> + return idx;
> +}
> +
> +/*
> + * Implementation of abstract pmu functionality required by
> + * the core perf events code.
> + */
> +
> +static int smmu_pmu_event_init(struct perf_event *event)
> +{
> + struct hw_perf_event *hwc = &event->hw;
> + struct perf_event *sibling;
> + struct smmu_pmu *smmu_pmu;
> + u32 event_id;
> +
> + if (event->attr.type != event->pmu->type)
> + return -ENOENT;
> +
> + smmu_pmu = to_smmu_pmu(event->pmu);
> +
> + if (hwc->sample_period) {
> + dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
> + "Sampling not supported\n");
> + return -EOPNOTSUPP;
> + }
> +
> + if (event->cpu < 0) {
> + dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
> + "Per-task mode not supported\n");
> + return -EOPNOTSUPP;
> + }
> +
> + /* We cannot filter accurately so we just don't allow it. */
> + if (event->attr.exclude_user || event->attr.exclude_kernel ||
> + event->attr.exclude_hv || event->attr.exclude_idle) {
> + dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
> + "Can't exclude execution levels\n");
> + return -EOPNOTSUPP;
> + }
> +
> + /* Verify specified event is supported on this PMU */
> + event_id = get_event(event);
> + if ((event_id >= SMMU_MAX_EVENT_ID) ||

What about raw imp-def events?

> + (!test_bit(event_id, smmu_pmu->supported_events))) {
> + dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
> + "Invalid event %d for this PMU\n",
> + event_id);
> + return -EINVAL;
> + }
> +
> + /* Don't allow groups with mixed PMUs, except for s/w events */
> + if (event->group_leader->pmu != event->pmu &&
> + !is_software_event(event->group_leader)) {
> + dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
> + "Can't create mixed PMU group\n");
> + return -EINVAL;
> + }
> +
> + list_for_each_entry(sibling, &event->group_leader->sibling_list,
> + group_entry)
> + if (sibling->pmu != event->pmu &&
> + !is_software_event(sibling)) {
> + dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
> + "Can't create mixed PMU group\n");
> + return -EINVAL;
> + }
> +
> + /* Ensure all events in a group are on the same cpu */
> + if ((event->group_leader != event) &&
> + (event->cpu != event->group_leader->cpu)) {
> + dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
> + "Can't create group on CPUs %d and %d",
> + event->cpu, event->group_leader->cpu);
> + return -EINVAL;
> + }
> +
> + hwc->idx = -1;
> +
> + /*
> + * Ensure all events are on the same cpu so all events are in the
> + * same cpu context, to avoid races on pmu_enable etc.
> + */
> + event->cpu = smmu_pmu->on_cpu;
> +
> + return 0;
> +}
> +
> +static void smmu_pmu_event_start(struct perf_event *event, int flags)
> +{
> + struct smmu_pmu *smmu_pmu = to_smmu_pmu(event->pmu);
> + struct hw_perf_event *hwc = &event->hw;
> + int idx = hwc->idx;
> + u32 evtyper;
> + u32 filter_sec;
> + u32 filter_span;
> + u32 filter_stream_id;
> +
> + hwc->state = 0;
> +
> + smmu_pmu_set_period(smmu_pmu, hwc);
> +
> + if (get_filter_enable(event)) {
> + filter_sec = get_filter_sec(event);
> + filter_span = get_filter_span(event);
> + filter_stream_id = get_filter_stream_id(event);
> + } else {
> + filter_sec = SMMU_DEFAULT_FILTER_SEC;
> + filter_span = SMMU_DEFAULT_FILTER_SPAN;
> + filter_stream_id = SMMU_DEFAULT_FILTER_STREAM_ID;
> + }
> +
> + evtyper = get_event(event) |
> + filter_span << SMMU_PMCG_EVTYPER_SID_SPAN_SHIFT |
> + filter_sec << SMMU_PMCG_EVTYPER_SEC_SID_SHIFT;
> +
> + smmu_pmu_set_evtyper(smmu_pmu, idx, evtyper);
> + smmu_pmu_set_smr(smmu_pmu, idx, filter_stream_id);
> + smmu_pmu_interrupt_enable(smmu_pmu, idx);
> + smmu_pmu_counter_enable(smmu_pmu, idx);
> +}
> +
> +static void smmu_pmu_event_stop(struct perf_event *event, int flags)
> +{
> + struct smmu_pmu *smmu_pmu = to_smmu_pmu(event->pmu);
> + struct hw_perf_event *hwc = &event->hw;
> + int idx = hwc->idx;
> +
> + if (hwc->state & PERF_HES_STOPPED)
> + return;
> +
> + smmu_pmu_interrupt_disable(smmu_pmu, idx);
> + smmu_pmu_counter_disable(smmu_pmu, idx);
> +
> + if (flags & PERF_EF_UPDATE)
> + smmu_pmu_event_update(event);
> + hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE;
> +}
> +
> +static int smmu_pmu_event_add(struct perf_event *event, int flags)
> +{
> + struct hw_perf_event *hwc = &event->hw;
> + int idx;
> + struct smmu_pmu *smmu_pmu = to_smmu_pmu(event->pmu);
> +
> + idx = smmu_pmu_get_event_idx(smmu_pmu);
> + if (idx < 0)
> + return idx;
> +
> + hwc->idx = idx;
> + hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
> + smmu_pmu->events[idx] = event;
> + local64_set(&hwc->prev_count, 0);
> +
> + if (flags & PERF_EF_START)
> + smmu_pmu_event_start(event, flags);
> +
> + /* Propagate changes to the userspace mapping. */
> + perf_event_update_userpage(event);
> +
> + return 0;
> +}
> +
> +static void smmu_pmu_event_del(struct perf_event *event, int flags)
> +{
> + struct hw_perf_event *hwc = &event->hw;
> + struct smmu_pmu *smmu_pmu = to_smmu_pmu(event->pmu);
> + int idx = hwc->idx;
> +
> + smmu_pmu_event_stop(event, flags | PERF_EF_UPDATE);
> + smmu_pmu->events[idx] = NULL;
> + clear_bit(idx, smmu_pmu->used_counters);
> +
> + perf_event_update_userpage(event);
> +}
> +
> +static void smmu_pmu_event_read(struct perf_event *event)
> +{
> + smmu_pmu_event_update(event);
> +}
> +
> +/* cpumask */
> +
> +static ssize_t smmu_pmu_cpumask_show(struct device *dev,
> + struct device_attribute *attr,
> + char *buf)
> +{
> + struct smmu_pmu *smmu_pmu = to_smmu_pmu(dev_get_drvdata(dev));
> +
> + return cpumap_print_to_pagebuf(true, buf, cpumask_of(smmu_pmu->on_cpu));
> +}
> +
> +static struct device_attribute smmu_pmu_cpumask_attr =
> + __ATTR(cpumask, 0444, smmu_pmu_cpumask_show, NULL);
> +
> +static struct attribute *smmu_pmu_cpumask_attrs[] = {
> + &smmu_pmu_cpumask_attr.attr,
> + NULL,
> +};
> +
> +static struct attribute_group smmu_pmu_cpumask_group = {
> + .attrs = smmu_pmu_cpumask_attrs,
> +};
> +
> +/* Events */
> +
> +ssize_t smmu_pmu_event_show(struct device *dev,
> + struct device_attribute *attr, char *page)
> +{
> + struct perf_pmu_events_attr *pmu_attr;
> +
> + pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr);
> +
> + return sprintf(page, "event=0x%02llx\n", pmu_attr->id);
> +}
> +
> +#define SMMU_EVENT_ATTR(_name, _id) \
> + (&((struct perf_pmu_events_attr[]) { \
> + { .attr = __ATTR(_name, 0444, smmu_pmu_event_show, NULL), \
> + .id = _id, } \
> + })[0].attr.attr)
> +
> +static struct attribute *smmu_pmu_events[] = {
> + SMMU_EVENT_ATTR(cycles, SMMU_PMU_CYCLES),
> + SMMU_EVENT_ATTR(transaction, SMMU_PMU_TRANSACTION),
> + SMMU_EVENT_ATTR(tlb_miss, SMMU_PMU_TLB_MISS),
> + SMMU_EVENT_ATTR(config_cache_miss, SMMU_PMU_CONFIG_CACHE_MISS),
> + SMMU_EVENT_ATTR(trans_table_walk, SMMU_PMU_TRANS_TABLE_WALK),
> + SMMU_EVENT_ATTR(config_struct_access, SMMU_PMU_CONFIG_STRUCT_ACCESS),
> + SMMU_EVENT_ATTR(pcie_ats_trans_rq, SMMU_PMU_PCIE_ATS_TRANS_RQ),
> + SMMU_EVENT_ATTR(pcie_ats_trans_passed, SMMU_PMU_PCIE_ATS_TRANS_PASSED),
> + NULL
> +};
> +
> +static umode_t smmu_pmu_event_is_visible(struct kobject *kobj,
> + struct attribute *attr, int unused)
> +{
> + struct device *dev = kobj_to_dev(kobj);
> + struct smmu_pmu *smmu_pmu = to_smmu_pmu(dev_get_drvdata(dev));
> + struct perf_pmu_events_attr *pmu_attr;
> +
> + pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr.attr);
> +
> + if (test_bit(pmu_attr->id, smmu_pmu->supported_events))
> + return attr->mode;
> +
> + return 0;
> +}
> +static struct attribute_group smmu_pmu_events_group = {
> + .name = "events",
> + .attrs = smmu_pmu_events,
> + .is_visible = smmu_pmu_event_is_visible,
> +};
> +
> +/* Formats */
> +PMU_FORMAT_ATTR(event, "config:0-15");
> +PMU_FORMAT_ATTR(filter_stream_id, "config1:0-31");
> +PMU_FORMAT_ATTR(filter_span, "config1:32");
> +PMU_FORMAT_ATTR(filter_sec, "config1:33");
> +PMU_FORMAT_ATTR(filter_enable, "config1:34");
> +
> +static struct attribute *smmu_pmu_formats[] = {
> + &format_attr_event.attr,
> + &format_attr_filter_stream_id.attr,
> + &format_attr_filter_span.attr,
> + &format_attr_filter_sec.attr,
> + &format_attr_filter_enable.attr,
> + NULL
> +};
> +
> +static struct attribute_group smmu_pmu_format_group = {
> + .name = "format",
> + .attrs = smmu_pmu_formats,
> +};
> +
> +static const struct attribute_group *smmu_pmu_attr_grps[] = {
> + &smmu_pmu_cpumask_group,
> + &smmu_pmu_events_group,
> + &smmu_pmu_format_group,
> + NULL,
> +};
> +
> +/*
> + * Generic device handlers
> + */
> +
> +static unsigned int get_num_counters(struct smmu_pmu *smmu_pmu)
> +{
> + u32 cfgr = readl(smmu_pmu->reg_base + SMMU_PMCG_CFGR);
> +
> + return ((cfgr & SMMU_PMCG_CFGR_NCTR_MASK) >> SMMU_PMCG_CFGR_NCTR_SHIFT)
> + + 1;
> +}
> +
> +static int smmu_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
> +{
> + struct smmu_pmu *smmu_pmu;
> + unsigned int target;
> +
> + smmu_pmu = hlist_entry_safe(node, struct smmu_pmu, node);

Is it ever valid for node to be NULL? If we can't trust it to be one of
the PMUs we registered I think all bets are off anyway.
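(FWIW, if we can trust it, a plain

	smmu_pmu = hlist_entry(node, struct smmu_pmu, node);

would say as much.)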

> + if (cpu != smmu_pmu->on_cpu)
> + return 0;
> +
> + target = cpumask_any_but(cpu_online_mask, cpu);
> + if (target >= nr_cpu_ids)
> + return 0;
> +
> + perf_pmu_migrate_context(&smmu_pmu->pmu, cpu, target);
> + smmu_pmu->on_cpu = target;
> + WARN_ON(irq_set_affinity(smmu_pmu->irq, cpumask_of(target)));
> +
> + return 0;
> +}
> +
> +static int smmu_pmu_probe(struct platform_device *pdev)
> +{
> + struct smmu_pmu *smmu_pmu;
> + struct resource *mem_resource_0, *mem_resource_1;
> + void __iomem *mem_map_0, *mem_map_1;
> + unsigned int reg_size;
> + int err;
> + int irq;
> + u32 ceid[SMMU_NUM_EVENTS_U32];
> + u64 ceid_64;
> +
> + smmu_pmu = devm_kzalloc(&pdev->dev, sizeof(*smmu_pmu), GFP_KERNEL);
> + if (!smmu_pmu)
> + return -ENOMEM;
> +
> + platform_set_drvdata(pdev, smmu_pmu);
> + smmu_pmu->pmu = (struct pmu) {
> + .task_ctx_nr = perf_invalid_context,
> + .pmu_enable = smmu_pmu_enable,
> + .pmu_disable = smmu_pmu_disable,
> + .event_init = smmu_pmu_event_init,
> + .add = smmu_pmu_event_add,
> + .del = smmu_pmu_event_del,
> + .start = smmu_pmu_event_start,
> + .stop = smmu_pmu_event_stop,
> + .read = smmu_pmu_event_read,
> + .attr_groups = smmu_pmu_attr_grps,
> + };
> +
> + mem_resource_0 = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> + mem_map_0 = devm_ioremap_resource(&pdev->dev, mem_resource_0);
> +
> + if (IS_ERR(mem_map_0)) {
> + dev_err(&pdev->dev, "Can't map SMMU PMU @%pa\n",
> + &mem_resource_0->start);
> + return PTR_ERR(mem_map_0);
> + }
> +
> + smmu_pmu->reg_base = mem_map_0;
> + smmu_pmu->pmu.name =
> + devm_kasprintf(&pdev->dev, GFP_KERNEL, "smmu_0_%llx",
> + (mem_resource_0->start) >> SMMU_PA_SHIFT);
> +
> + if (!smmu_pmu->pmu.name) {
> + dev_err(&pdev->dev, "Failed to create PMU name");
> + return -EINVAL;
> + }
> +
> + ceid_64 = readq(smmu_pmu->reg_base + SMMU_PMCG_CEID0);
> + ceid[0] = ceid_64 & GENMASK(31, 0);

It took a second look to determine that that masking does nothing...

> + ceid[1] = ceid_64 >> 32;
> + ceid_64 = readq(smmu_pmu->reg_base + SMMU_PMCG_CEID1);
> + ceid[2] = ceid_64 & GENMASK(31, 0);
> + ceid[3] = ceid_64 >> 32;
> + bitmap_from_u32array(smmu_pmu->supported_events, SMMU_MAX_EVENT_ID,
> + ceid, SMMU_NUM_EVENTS_U32);

...but then the whole lot might be cleaner and simpler with a u64[2]
cast to u32* (or unioned to u32[4]) as necessary.
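Say, as a rough sketch (untested; relies on the usual little-endian
arm64 layout so that the low half of CEID0 covers events 0-31):

	union {
		u64 u64s[2];
		u32 u32s[4];
	} ceid;

	ceid.u64s[0] = readq(smmu_pmu->reg_base + SMMU_PMCG_CEID0);
	ceid.u64s[1] = readq(smmu_pmu->reg_base + SMMU_PMCG_CEID1);
	bitmap_from_u32array(smmu_pmu->supported_events, SMMU_MAX_EVENT_ID,
			     ceid.u32s, ARRAY_SIZE(ceid.u32s));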

> +
> + /* Determine if page 1 is present */
> + if (readl(smmu_pmu->reg_base + SMMU_PMCG_CFGR) &
> + SMMU_PMCG_CFGR_RELOC_CTRS) {
> + mem_resource_1 = platform_get_resource(pdev, IORESOURCE_MEM, 1);
> + mem_map_1 = devm_ioremap_resource(&pdev->dev, mem_resource_1);
> +
> + if (IS_ERR(mem_map_1)) {
> + dev_err(&pdev->dev, "Can't map SMMU PMU @%pa\n",
> + &mem_resource_1->start);
> + return PTR_ERR(mem_map_1);
> + }
> + smmu_pmu->reloc_base = mem_map_1;
> + } else {
> + smmu_pmu->reloc_base = smmu_pmu->reg_base;
> + }
> +
> + irq = platform_get_irq(pdev, 0);
> + if (irq < 0) {
> + dev_err(&pdev->dev,
> + "Failed to get valid irq for smmu @%pa\n",
> + &mem_resource_0->start);
> + return irq;
> + }
> +
> + err = devm_request_irq(&pdev->dev, irq, smmu_pmu_handle_irq,
> + IRQF_NOBALANCING | IRQF_SHARED | IRQF_NO_THREAD,
> + "smmu-pmu", smmu_pmu);
> + if (err) {
> + dev_err(&pdev->dev,
> + "Unable to request IRQ%d for SMMU PMU counters\n", irq);
> + return err;
> + }
> +
> + smmu_pmu->irq = irq;
> +
> + /* Pick one CPU to be the preferred one to use */
> + smmu_pmu->on_cpu = smp_processor_id();
> + WARN_ON(irq_set_affinity(smmu_pmu->irq, cpumask_of(smmu_pmu->on_cpu)));
> +
> + smmu_pmu->num_counters = get_num_counters(smmu_pmu);
> + smmu_pmu->pdev = pdev;
> + smmu_pmu->counter_present_mask = GENMASK(smmu_pmu->num_counters - 1, 0);
> + reg_size = (readl(smmu_pmu->reg_base + SMMU_PMCG_CFGR) &
> + SMMU_PMCG_CFGR_SIZE_MASK) >> SMMU_PMCG_CFGR_SIZE_SHIFT;
> + smmu_pmu->reg_size_32 = (reg_size == SMMU_PMCG_CFGR_COUNTER_SIZE_32);
> + smmu_pmu->counter_mask = GENMASK_ULL(reg_size, 0);
> +
> + smmu_pmu_reset(smmu_pmu);
> +
> + err = cpuhp_state_add_instance_nocalls(cpuhp_state_num,
> + &smmu_pmu->node);
> + if (err) {
> + dev_err(&pdev->dev, "Error %d registering hotplug", err);
> + return err;
> + }
> +
> + err = perf_pmu_register(&smmu_pmu->pmu, smmu_pmu->pmu.name, -1);
> + if (err) {
> + dev_err(&pdev->dev, "Error %d registering SMMU PMU\n", err);
> + goto out_unregister;
> + }
> +
> + dev_info(&pdev->dev, "Registered SMMU PMU @ %pa using %d counters\n",
> + &mem_resource_0->start, smmu_pmu->num_counters);
> +
> + return err;
> +
> +out_unregister:
> + cpuhp_state_remove_instance_nocalls(cpuhp_state_num, &smmu_pmu->node);
> + return err;
> +}
> +
> +static int smmu_pmu_remove(struct platform_device *pdev)
> +{
> + struct smmu_pmu *smmu_pmu = platform_get_drvdata(pdev);
> +
> + perf_pmu_unregister(&smmu_pmu->pmu);
> + cpuhp_state_remove_instance_nocalls(cpuhp_state_num, &smmu_pmu->node);
> +
> + return 0;
> +}
> +
> +static void smmu_pmu_shutdown(struct platform_device *pdev)
> +{
> + struct smmu_pmu *smmu_pmu = platform_get_drvdata(pdev);
> +
> + smmu_pmu_disable(&smmu_pmu->pmu);
> +}
> +
> +static struct platform_driver smmu_pmu_driver = {
> + .driver = {
> + .name = "arm-smmu-pmu",

Nit: "arm-smmu-v3-pmu" please, for consistency with the IOMMU driver
naming. There is a SMMUv2 PMU driver in the works, too ;)

Robin.

> + },
> + .probe = smmu_pmu_probe,
> + .remove = smmu_pmu_remove,
> + .shutdown = smmu_pmu_shutdown,
> +};
> +
> +static int __init arm_smmu_pmu_init(void)
> +{
> + cpuhp_state_num = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN,
> + "perf/arm/smmupmu:online",
> + NULL,
> + smmu_pmu_offline_cpu);
> + if (cpuhp_state_num < 0)
> + return cpuhp_state_num;
> +
> + return platform_driver_register(&smmu_pmu_driver);
> +}
> +module_init(arm_smmu_pmu_init);
> +
> +static void __exit arm_smmu_pmu_exit(void)
> +{
> + platform_driver_unregister(&smmu_pmu_driver);
> +}
> +
> +module_exit(arm_smmu_pmu_exit);
> +MODULE_LICENSE("GPL v2");
>

2017-08-07 16:42:05

by Lorenzo Pieralisi

[permalink] [raw]
Subject: Re: [PATCH 1/2] acpi: arm64: add iort support for PMCG

On Fri, Aug 04, 2017 at 03:59:13PM -0400, Neil Leeder wrote:
> Add support for the SMMU Performance Monitor Counter Group
> information from ACPI. This is in preparation for its use
> in the SMMU v3 PMU driver.
>
> Signed-off-by: Neil Leeder <[email protected]>
> ---
> drivers/acpi/arm64/iort.c | 54 +++++++++++++++++++++++++++++++++++++++++++++++
> include/acpi/actbl2.h | 9 +++++++-
> 2 files changed, 62 insertions(+), 1 deletion(-)

Please CC me for next versions.

> diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
> index a3215ee..5a998cd 100644
> --- a/drivers/acpi/arm64/iort.c
> +++ b/drivers/acpi/arm64/iort.c
> @@ -970,6 +970,40 @@ static bool __init arm_smmu_is_coherent(struct acpi_iort_node *node)
> return smmu->flags & ACPI_IORT_SMMU_COHERENT_WALK;
> }
>
> +static int __init arm_smmu_pmu_count_resources(struct acpi_iort_node *node)
> +{
> + struct acpi_iort_pmcg *pmcg;
> +
> + /* Retrieve PMCG specific data */
> + pmcg = (struct acpi_iort_pmcg *)node->node_data;
> +
> + /*
> + * There are always 2 memory resources.
> + * If the overflow_gsiv is present then add that for a total of 3.
> + */
> + return pmcg->overflow_gsiv > 0 ? 3 : 2;
> +}
> +
> +static void __init arm_smmu_pmu_init_resources(struct resource *res,
> + struct acpi_iort_node *node)
> +{
> + struct acpi_iort_pmcg *pmcg;
> +
> + /* Retrieve PMCG specific data */
> + pmcg = (struct acpi_iort_pmcg *)node->node_data;
> +
> + res[0].start = pmcg->base_address;
> + res[0].end = pmcg->base_address + SZ_4K - 1;
> + res[0].flags = IORESOURCE_MEM;
> + res[1].start = pmcg->base_address + SZ_64K;
> + res[1].end = pmcg->base_address + SZ_64K + SZ_4K - 1;
> + res[1].flags = IORESOURCE_MEM;
> +
> + if (pmcg->overflow_gsiv)
> + acpi_iort_register_irq(pmcg->overflow_gsiv, "overflow",
> + ACPI_EDGE_SENSITIVE, &res[2]);
> +}
> +
> struct iort_iommu_config {
> const char *name;
> int (*iommu_init)(struct acpi_iort_node *node);
> @@ -993,6 +1027,12 @@ struct iort_iommu_config {
> .iommu_init_resources = arm_smmu_init_resources
> };
>
> +static const struct iort_iommu_config iort_arm_smmu_pmcg_cfg __initconst = {
> + .name = "arm-smmu-pmu",
> + .iommu_count_resources = arm_smmu_pmu_count_resources,
> + .iommu_init_resources = arm_smmu_pmu_init_resources
> +};
> +
> static __init
> const struct iort_iommu_config *iort_get_iommu_cfg(struct acpi_iort_node *node)
> {
> @@ -1001,6 +1041,8 @@ const struct iort_iommu_config *iort_get_iommu_cfg(struct acpi_iort_node *node)
> return &iort_arm_smmu_v3_cfg;
> case ACPI_IORT_NODE_SMMU:
> return &iort_arm_smmu_cfg;
> + case ACPI_IORT_NODE_PMCG:
> + return &iort_arm_smmu_pmcg_cfg;
> default:
> return NULL;
> }
> @@ -1056,6 +1098,15 @@ static int __init iort_add_smmu_platform_device(struct acpi_iort_node *node)
> if (ret)
> goto dev_put;
>
> + /* End of init for PMCG */
> + if (node->type == ACPI_IORT_NODE_PMCG) {
> + ret = platform_device_add(pdev);
> + if (ret)
> + goto dev_put;
> +
> + return 0;
> + }
> +
> /*
> * We expect the dma masks to be equivalent for
> * all SMMUs set-ups
> @@ -1131,6 +1182,9 @@ static void __init iort_init_platform_devices(void)
> acpi_free_fwnode_static(fwnode);
> return;
> }
> + } else if (iort_node->type == ACPI_IORT_NODE_PMCG) {
> + if (iort_add_smmu_platform_device(iort_node))
> + return;


It is becoming a bit messy; it probably makes sense to rework the
iommu platform device creation to make room for more generic behaviour
(ie platform device creation is not really IOMMU specific) that can
accommodate the PMCG too, so it can be made cleaner.

I do not know if we can make it for this cycle but I am happy to put
a patch together shortly with this in mind.

> }
>
> iort_node = ACPI_ADD_PTR(struct acpi_iort_node, iort_node,
> diff --git a/include/acpi/actbl2.h b/include/acpi/actbl2.h
> index 707dda74..2169b6f 100644
> --- a/include/acpi/actbl2.h
> +++ b/include/acpi/actbl2.h
> @@ -695,7 +695,8 @@ enum acpi_iort_node_type {
> ACPI_IORT_NODE_NAMED_COMPONENT = 0x01,
> ACPI_IORT_NODE_PCI_ROOT_COMPLEX = 0x02,
> ACPI_IORT_NODE_SMMU = 0x03,
> - ACPI_IORT_NODE_SMMU_V3 = 0x04
> + ACPI_IORT_NODE_SMMU_V3 = 0x04,
> + ACPI_IORT_NODE_PMCG = 0x05
> };
>
> struct acpi_iort_id_mapping {
> @@ -811,6 +812,12 @@ struct acpi_iort_smmu_v3 {
> #define ACPI_IORT_SMMU_V3_COHACC_OVERRIDE (1)
> #define ACPI_IORT_SMMU_V3_HTTU_OVERRIDE (1<<1)
>
> +struct acpi_iort_pmcg {
> + u64 base_address; /* PMCG base address */
> + u32 overflow_gsiv;
> + u32 node_reference;
> +};

As Robin already said this hunk should be made an ACPICA pull but
NOT before seeking IORT specs clarification as per his comments.

Thanks,
Lorenzo

> +
> /*******************************************************************************
> *
> * IVRS - I/O Virtualization Reporting Structure
> --
> Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm Technologies Inc.
> Qualcomm Technologies, Inc. is a member of the Code Aurora Forum,
> a Linux Foundation Collaborative Project.
>

2017-08-07 20:52:06

by Leeder, Neil

[permalink] [raw]
Subject: Re: [PATCH 1/2] acpi: arm64: add iort support for PMCG

Hi Robin,
Thank you for the review.

On 8/7/2017 7:17 AM, Robin Murphy wrote:
> Hi Neil,
>
> On 04/08/17 20:59, Neil Leeder wrote:
[...]
>> + res[1].start = pmcg->base_address + SZ_64K;
>
> Ugh, I see there's a nasty spec hole here - IORT only defines one base
> address, but SMMUv3 says *both* pages have imp-def base addresses. I
> guess this assumption of them being in consecutive 64K regions holds for
> your platform, but as things stand it doesn't appear possible to rely on
> it generally :(
>
> I'll follow up internally to see if we need to get IORT and/or SBSA
> updated or clarified.
>
Thanks. I'll wait for the outcome before submitting a new patch.

[...]
>> diff --git a/include/acpi/actbl2.h b/include/acpi/actbl2.h
>> index 707dda74..2169b6f 100644
>> --- a/include/acpi/actbl2.h
>> +++ b/include/acpi/actbl2.h
>
> Hopefully these changes are merely here for reference, but just in case
> it needs to be said, they should go as a separate patch via ACPICA
> (presumably there's also some corresponding iASL stuff to compile PMCG
> nodes in the first place).
>
Yes, I'll submit this to ACPICA once the IORT addresses get figured out.

Neil
--
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm Technologies Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project.

2017-08-07 21:00:45

by Leeder, Neil

[permalink] [raw]
Subject: Re: [PATCH 1/2] acpi: arm64: add iort support for PMCG

Hi Lorenzo,

On 8/7/2017 12:44 PM, Lorenzo Pieralisi wrote:
> On Fri, Aug 04, 2017 at 03:59:13PM -0400, Neil Leeder wrote:
[...]
>> + } else if (iort_node->type == ACPI_IORT_NODE_PMCG) {
>> + if (iort_add_smmu_platform_device(iort_node))
>> + return;
>
>
> It is becoming a bit messy; it probably makes sense to rework the
> iommu platform device creation to make room for more generic behaviour
> (ie platform device creation is not really IOMMU specific) that can
> accommodate the PMCG too, so it can be made cleaner.
>
> I do not know if we can make it for this cycle but I am happy to put
> a patch together shortly with this in mind.
>
Ok, I will rebase on top of it when it's ready.

>> }
>>
>> iort_node = ACPI_ADD_PTR(struct acpi_iort_node, iort_node,
>> diff --git a/include/acpi/actbl2.h b/include/acpi/actbl2.h
>> index 707dda74..2169b6f 100644
>> --- a/include/acpi/actbl2.h
>> +++ b/include/acpi/actbl2.h
>> @@ -695,7 +695,8 @@ enum acpi_iort_node_type {
>> ACPI_IORT_NODE_NAMED_COMPONENT = 0x01,
>> ACPI_IORT_NODE_PCI_ROOT_COMPLEX = 0x02,
>> ACPI_IORT_NODE_SMMU = 0x03,
>> - ACPI_IORT_NODE_SMMU_V3 = 0x04
>> + ACPI_IORT_NODE_SMMU_V3 = 0x04,
>> + ACPI_IORT_NODE_PMCG = 0x05
>> };
>>
>> struct acpi_iort_id_mapping {
>> @@ -811,6 +812,12 @@ struct acpi_iort_smmu_v3 {
>> #define ACPI_IORT_SMMU_V3_COHACC_OVERRIDE (1)
>> #define ACPI_IORT_SMMU_V3_HTTU_OVERRIDE (1<<1)
>>
>> +struct acpi_iort_pmcg {
>> + u64 base_address; /* PMCG base address */
>> + u32 overflow_gsiv;
>> + u32 node_reference;
>> +};
>
> As Robin already said this hunk should be made an ACPICA pull but
> NOT before seeking IORT specs clarification as per his comments.
>
OK, I will add this through ACPICA once the IORT decision is made.

Thanks,

Neil
--
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm Technologies Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project.

2017-08-07 21:18:43

by Leeder, Neil

[permalink] [raw]
Subject: Re: [PATCH 2/2] perf: add arm64 smmuv3 pmu driver

Hi Robin,
Thank you for your comments.

On 8/7/2017 10:31 AM, Robin Murphy wrote:
> On 04/08/17 20:59, Neil Leeder wrote:
>> PMUs are named smmu_0_<phys_addr_page> where <phys_addr_page>
>> is the physical page address of the SMMU PMCG.
>> For example, the SMMU PMCG at 0xff88840000 is named smmu_0_ff88840
>
> This seems a bit rough - is it at all feasible to at least chase the node
> reference and namespace them by the associated component, e.g. something
> like "arm-smmu-v3.x:pmcg.y"? The user can always dig the associated
> physical address out of /proc/iomem if necessary.
>
That looks like it may be better - I'll look into it.
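As a first rough sketch, with underscores since the name becomes a sysfs
directory and a perf event name (smmu_index and pmcg_index are
placeholders for whatever we can chase out of the IORT node reference):

	smmu_pmu->pmu.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
					    "arm_smmu_v3_%u_pmcg_%u",
					    smmu_index, pmcg_index);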

>> Filtering by stream id is done by specifying filtering parameters
>> with the event. Options are:
>> filter_enable - 0 = no filtering, 1 = filtering enabled
>> filter_span - 0 = exact match, 1 = pattern match
>> filter_sec - applies to non-secure (0) or secure (1) namespace
>
> I'm a little dubious as to how useful it is to expose this, since we
> can't see the true value of SCR.SO so have no way of knowing what we'll
> actually end up counting.

I can remove the sec filter.

>> +config ARM_SMMUV3_PMU
>> + bool "ARM SMMUv3 PMU"
>> + depends on PERF_EVENTS && ARM64 && ACPI
>
> PERF_EVENTS is already a top-level dependency now.
>
OK

>> +#include <linux/msi.h>
>
> Is MSI support planned?
>
Not in this patchset. I'll remove the include.

>> +#define SMMU_PMCG_EVCNTR0 0x0
>> +#define SMMU_PMCG_EVCNTR(n, stride) (SMMU_PMCG_EVCNTR0 + (n) * (stride))
>> +#define SMMU_PMCG_EVTYPER0 0x400
>> +#define SMMU_PMCG_EVTYPER(n) (SMMU_PMCG_EVTYPER0 + (n) * 4)
>> +#define SMMU_PMCG_EVTYPER_SEC_SID_SHIFT 30
>> +#define SMMU_PMCG_EVTYPER_SID_SPAN_SHIFT 29
>> +#define SMMU_PMCG_EVTYPER_EVENT_MASK GENMASK(15, 0)
>> +#define SMMU_PMCG_SVR0 0x600
>> +#define SMMU_PMCG_SVR(n, stride) (SMMU_PMCG_SVR0 + (n) * (stride))
>> +#define SMMU_PMCG_SMR0 0xA00
>> +#define SMMU_PMCG_SMR(n) (SMMU_PMCG_SMR0 + (n) * 4)
>> +#define SMMU_PMCG_CNTENSET0 0xC00
>> +#define SMMU_PMCG_CNTENCLR0 0xC20
>> +#define SMMU_PMCG_INTENSET0 0xC40
>> +#define SMMU_PMCG_INTENCLR0 0xC60
>> +#define SMMU_PMCG_OVSCLR0 0xC80
>> +#define SMMU_PMCG_OVSSET0 0xCC0
>> +#define SMMU_PMCG_CAPR 0xD88
>> +#define SMMU_PMCG_SCR 0xDF8
>> +#define SMMU_PMCG_CFGR 0xE00
>> +#define SMMU_PMCG_CFGR_SID_FILTER_TYPE BIT(23)
>> +#define SMMU_PMCG_CFGR_CAPTURE BIT(22)
>> +#define SMMU_PMCG_CFGR_MSI BIT(21)
>> +#define SMMU_PMCG_CFGR_RELOC_CTRS BIT(20)
>> +#define SMMU_PMCG_CFGR_SIZE_MASK GENMASK(13, 8)
>> +#define SMMU_PMCG_CFGR_SIZE_SHIFT 8
>> +#define SMMU_PMCG_CFGR_COUNTER_SIZE_32 31
>> +#define SMMU_PMCG_CFGR_NCTR_MASK GENMASK(5, 0)
>> +#define SMMU_PMCG_CFGR_NCTR_SHIFT 0
>> +#define SMMU_PMCG_CR 0xE04
>> +#define SMMU_PMCG_CR_ENABLE BIT(0)
>> +#define SMMU_PMCG_CEID0 0xE20
>> +#define SMMU_PMCG_CEID1 0xE28
>> +#define SMMU_PMCG_IRQ_CTRL 0xE50
>> +#define SMMU_PMCG_IRQ_CTRL_IRQEN BIT(0)
>> +#define SMMU_PMCG_IRQ_CTRLACK 0xE54
>> +#define SMMU_PMCG_IRQ_CTRLACK_IRQEN BIT(0)
>> +#define SMMU_PMCG_IRQ_CFG0 0xE58
>> +#define SMMU_PMCG_IRQ_CFG0_ADDR_MASK GENMASK(51, 2)
>> +#define SMMU_PMCG_IRQ_CFG1 0xE60
>> +#define SMMU_PMCG_IRQ_CFG2 0xE64
>> +#define SMMU_PMCG_IRQ_STATUS 0xE68
>> +#define SMMU_PMCG_IRQ_STATUS_IRQ_ABT BIT(0)
>> +#define SMMU_PMCG_AIDR 0xE70
>
> Several of these are unused (although at least IRQ0_CFG1 probably should
> be, to zero it to a known state if we aren't supporting MSIs).
>
I can remove the unused defines and clear IRQ_CFG1.
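e.g. in the reset path, something like this sketch (IRQ_CFG0 being the
64-bit MSI address register, so clear all three to a known no-MSI state):

	writeq(0, smmu_pmu->reg_base + SMMU_PMCG_IRQ_CFG0);
	writel(0, smmu_pmu->reg_base + SMMU_PMCG_IRQ_CFG1);
	writel(0, smmu_pmu->reg_base + SMMU_PMCG_IRQ_CFG2);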

>> + bool reg_size_32;
>
> This guy is redundant...
>
[...]
>> + if (smmu_pmu->reg_size_32)
>
> ...since it would be just as efficient to directly test
> smmu_pmu->counter_mask & BIT(32) here and below.
>
OK
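i.e. a sketch for the set_value side (get_value would mirror it):

	if (smmu_pmu->counter_mask & BIT(32))
		writeq(value, smmu_pmu->reloc_base + SMMU_PMCG_EVCNTR(idx, 8));
	else
		writel(value, smmu_pmu->reloc_base + SMMU_PMCG_EVCNTR(idx, 4));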

>> +
>> + for (i = 0; i < smmu_pmu->num_counters; i++) {
>> + smmu_pmu_counter_disable(smmu_pmu, i);
>> + smmu_pmu_interrupt_disable(smmu_pmu, i);
>> + }
>
> Surely it would be far quicker and simpler to do this?
>
> writeq(smmu_pmu->counter_present_mask,
> smmu_pmu->reg_base + SMMU_PMCG_CNTENCLR0);
> writeq(smmu_pmu->counter_present_mask,
> smmu_pmu->reg_base + SMMU_PMCG_INTENCLR0);
>
OK

>> +static inline bool smmu_pmu_has_overflowed(struct smmu_pmu *smmu_pmu, u64 ovsr)
>> +{
>> + return !!(ovsr & smmu_pmu->counter_present_mask);
>> +}
>
> Personally, I find these helpers abstracting simple reads/writes
> actually make the code harder to follow, especially when they're each
> used a grand total of once. That may well just be me, though.
>
At least this one will go away with the below change to the interrupt handler.

>> + new = SMMU_COUNTER_RELOAD;
>
> Given that we *are* following the "use the top counter bit as an
> implicit overflow bit" pattern of arm_pmu, it feels a bit weird to not
> use the actual half-maximum value here (especially since it's easily
> computable from counter_mask). I'm about 85% sure it probably still
> works, but as ever inconsistency breeds uncertainty.

I thought that if we're happy with BIT(31) working fine with 32-bit registers,
it should also work for larger registers, so there was no need to waste more of
their bits. But I can change it to be half-max for all of them.
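i.e. derive the reload value from the mask instead of hardcoding BIT(31),
along these lines (sketch):

	/* half the counter's range, whatever the implemented size */
	new = smmu_pmu->counter_mask >> 1;
	local64_set(&hwc->prev_count, new);
	smmu_pmu_counter_set_value(smmu_pmu, idx, new);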

>> +static irqreturn_t smmu_pmu_handle_irq(int irq_num, void *data)
>> +{
>> + struct smmu_pmu *smmu_pmu = data;
>> + u64 ovsr;
>> + unsigned int idx;
>> +
>> + ovsr = smmu_pmu_getreset_ovsr(smmu_pmu);
>> + if (!smmu_pmu_has_overflowed(smmu_pmu, ovsr))
>
> You have an architectural guarantee that unimplemented bits of OVSSET0
> are RES0, so checking !ovsr is sufficient.
>
OK
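i.e. (sketch):

	ovsr = smmu_pmu_getreset_ovsr(smmu_pmu);
	if (!ovsr)
		return IRQ_NONE;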

>> + /* Verify specified event is supported on this PMU */
>> + event_id = get_event(event);
>> + if ((event_id >= SMMU_MAX_EVENT_ID) ||
>
> What about raw imp-def events?
>
I can keep the check for common events, but also allow any raw event
in the imp-def range.
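e.g. a sketch where IDs at or above SMMU_MAX_EVENT_ID are treated as raw
imp-def events and bypass the CEID check:

	event_id = get_event(event);
	if (event_id < SMMU_MAX_EVENT_ID &&
	    !test_bit(event_id, smmu_pmu->supported_events)) {
		dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
				    "Invalid event %d for this PMU\n",
				    event_id);
		return -EINVAL;
	}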

>> + (!test_bit(event_id, smmu_pmu->supported_events))) {
>> + dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
>> + "Invalid event %d for this PMU\n",
>> + event_id);
>> + return -EINVAL;
>> + }
[...]

>> +static int smmu_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
>> +{
>> + struct smmu_pmu *smmu_pmu;
>> + unsigned int target;
>> +
>> + smmu_pmu = hlist_entry_safe(node, struct smmu_pmu, node);
>
> Is it ever valid for node to be NULL? If we can't trust it to be one of
> the PMUs we registered I think all bets are off anyway.
>
I was following the logic in arm-ccn.c and arm-cci.c. If it works for them
I would hope it works here too.

>> +
>> + ceid_64 = readq(smmu_pmu->reg_base + SMMU_PMCG_CEID0);
>> + ceid[0] = ceid_64 & GENMASK(31, 0);
>
> It took a second look to determine that that masking does nothing...
>
>> + ceid[1] = ceid_64 >> 32;
>> + ceid_64 = readq(smmu_pmu->reg_base + SMMU_PMCG_CEID1);
>> + ceid[2] = ceid_64 & GENMASK(31, 0);
>> + ceid[3] = ceid_64 >> 32;
>> + bitmap_from_u32array(smmu_pmu->supported_events, SMMU_MAX_EVENT_ID,
>> + ceid, SMMU_NUM_EVENTS_U32);
>
> ...but then the whole lot might be cleaner and simpler with a u64[2]
> cast to u32* (or unioned to u32[4]) as necessary.
>
I've rewritten this about 4 different ways and didn't love any of them,
including this one. I can re-do this as you suggest.

>> +static struct platform_driver smmu_pmu_driver = {
>> + .driver = {
>> + .name = "arm-smmu-pmu",
>
> Nit: "arm-smmu-v3-pmu" please, for consistency with the IOMMU driver
> naming. There is a SMMUv2 PMU driver in the works, too ;)
>
ok

Thanks,

Neil
--
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm Technologies Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project.

2017-08-09 07:56:34

by Hanjun Guo

[permalink] [raw]
Subject: Re: [PATCH 0/2] arm64 SMMUv3 PMU driver with IORT support

Hi Neil,

On 2017/8/5 3:59, Neil Leeder wrote:
> This adds a driver for the SMMUv3 PMU into the perf framework.
> It includes an IORT update to support PM Counter Groups.
>
> IORT has no mechanism for determining device names so PMUs
> are named based on their physical address.
>
> Tested on Qualcomm QDF2400. perf_fuzzer ran for 4+ hours
> with no failures.
>
> Neil Leeder (2):
> acpi: arm64: add iort support for PMCG
> perf: add arm64 smmuv3 pmu driver
>
> drivers/acpi/arm64/iort.c | 54 +++

I would like to be Cced for next version.

> drivers/perf/Kconfig | 9 +
> drivers/perf/Makefile | 1 +
> drivers/perf/arm_smmuv3_pmu.c | 823 ++++++++++++++++++++++++++++++++++++++++++
> include/acpi/actbl2.h | 9 +-

Do you have the acpica support code? I'm currently
working on SMMUv3 MSI support and I would like to test
it with MSI support for PMCG as well.

Thanks
Hanjun

> 5 files changed, 895 insertions(+), 1 deletion(-)
> create mode 100644 drivers/perf/arm_smmuv3_pmu.c
>

2017-08-09 15:48:50

by Leeder, Neil

[permalink] [raw]
Subject: Re: [PATCH 0/2] arm64 SMMUv3 PMU driver with IORT support

Hi Hanjun,

On 8/9/2017 3:56 AM, Hanjun Guo wrote:
> Hi Neil,
>
> On 2017/8/5 3:59, Neil Leeder wrote:
>> This adds a driver for the SMMUv3 PMU into the perf framework.
>> It includes an IORT update to support PM Counter Groups.
>>
>> IORT has no mechanism for determining device names so PMUs
>> are named based on their physical address.
>>
>> Tested on Qualcomm QDF2400. perf_fuzzer ran for 4+ hours
>> with no failures.
>>
>> Neil Leeder (2):
>> acpi: arm64: add iort support for PMCG
>> perf: add arm64 smmuv3 pmu driver
>>
>> drivers/acpi/arm64/iort.c | 54 +++
>
> I would like to be Cced for next version.

I will. I apologise for not including all the interested parties on this patchset.

>
>> drivers/perf/Kconfig | 9 +
>> drivers/perf/Makefile | 1 +
>> drivers/perf/arm_smmuv3_pmu.c | 823 ++++++++++++++++++++++++++++++++++++++++++
>> include/acpi/actbl2.h | 9 +-
>
> Do you have the acpica support code? I'm currently
> working on SMMUv3 MSI support and I would like to test
> it with MSI support for PMCG as well.

I don't have any code other than what was posted here.
What additional ACPICA support code is needed?

Neil
--
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm Technologies Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project.

2017-08-10 01:26:28

by Hanjun Guo

[permalink] [raw]
Subject: Re: [PATCH 0/2] arm64 SMMUv3 PMU driver with IORT support

On 2017/8/9 23:48, Leeder, Neil wrote:
>>> drivers/perf/Kconfig | 9 +
>>> drivers/perf/Makefile | 1 +
>>> drivers/perf/arm_smmuv3_pmu.c | 823 ++++++++++++++++++++++++++++++++++++++++++
>>> include/acpi/actbl2.h | 9 +-
>> Do you have the acpica support code? I'm currently
>> working on SMMUv3 MSI support and I would like to test
>> it with MSI support for PMCG as well.
> I don't have any code other than what was posted here.
> What additional ACPICA support code is needed?

Sorry for not being clear, I mean the ACPICA code for iASL and the
other tools: github.com/acpica/acpica.git

With that code, I can compile my IORT table with PMCG node.

Thanks
Hanjun

2017-08-11 03:28:35

by Leeder, Neil

[permalink] [raw]
Subject: Re: [PATCH 0/2] arm64 SMMUv3 PMU driver with IORT support

On 8/9/2017 9:26 PM, Hanjun Guo wrote:
> On 2017/8/9 23:48, Leeder, Neil wrote:
>>>> drivers/perf/Kconfig | 9 +
>>>> drivers/perf/Makefile | 1 +
>>>> drivers/perf/arm_smmuv3_pmu.c | 823 ++++++++++++++++++++++++++++++++++++++++++
>>>> include/acpi/actbl2.h | 9 +-
>>> Do you have the acpica support code? I'm currently
>>> working on SMMUv3 MSI support and I would like to test
>>> it with MSI support for PMCG as well.
>> I don't have any code other than what was posted here.
>> What additional ACPICA support code is needed?
>
> Sorry for not clear, I mean the acpica code for iASL and
> other tool: github.com/acpica/acpica.git
>
> With that code, I can compile my IORT table with PMCG node.

Unfortunately it looks like I'm the first person from Qualcomm Datacenter
Technologies to try to add something to ACPICA, which means I'll have to
kick off an internal legal process which may take some time. Unless someone
else wants to take a crack at it - and really, there's nothing more than is
in the ARM IORT spec - it could be a while before I can do that.

Neil
--
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm Technologies Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project.

2017-12-05 05:02:19

by Linu Cherian

[permalink] [raw]
Subject: Re: [PATCH 2/2] perf: add arm64 smmuv3 pmu driver

Hi Robin,

On Mon Aug 07, 2017 at 03:31:24PM +0100, Robin Murphy wrote:
> On 04/08/17 20:59, Neil Leeder wrote:
> > Adds a new driver to support the SMMU v3 PMU and add it into the
> > perf events framework.
> >
> > Each SMMU node may have multiple PMUs associated with it, each of
> > which may support different events.
> >
> > PMUs are named smmu_0_<phys_addr_page> where <phys_addr_page>
> > is the physical page address of the SMMU PMCG.
> > For example, the SMMU PMCG at 0xff88840000 is named smmu_0_ff88840
>
> This seems a bit rough - is it at all feasible to at least chase the node
> reference and namespace them by the associated component, e.g. something
> like "arm-smmu-v3.x:pmcg.y"? The user can always dig the associated
> physical address out of /proc/iomem if necessary.
>

Assuming x indicates the SMMU v3 node id and y indicates the PMCG group id.
If we are making such an assumption,

# Would like clarification on whether the architecture guarantees that a SMMU PMCG group
is always associated with a specific SMMU node.

From Section 10.2 in the SMMU Architecture Specification:
"A counter group is conceptually free-standing and has no interdependency on the behavior of other counter
groups or even on the SMMU configuration itself"

Does the above, and also Fig 19 from the same section,
imply that a PMCG group need not be restricted to a SMMU node?

> > Filtering by stream id is done by specifying filtering parameters
> > with the event. Options are:
> > filter_enable - 0 = no filtering, 1 = filtering enabled
> > filter_span - 0 = exact match, 1 = pattern match
> > filter_sec - applies to non-secure (0) or secure (1) namespace
>
> I'm a little dubious as to how useful it is to expose this, since we
> can't see the true value of SCR.SO so have no way of knowing what we'll
> actually end up counting.
>
> > filter_stream_id - pattern to filter against
> > Further filtering information is available in the SMMU documentation.
> >
> > Example: perf stat -e smmu_0_ff88840/transaction,filter_enable=1,
> > filter_span=1,filter_stream_id=0x42/ -a pwd
> > Applies filter pattern 0x42 to transaction events.
> >
> > SMMU events are not attributable to a CPU, so task mode and sampling
> > are not supported.
> >
> > Signed-off-by: Neil Leeder <[email protected]>
> > ---
> > drivers/perf/Kconfig | 9 +
> > drivers/perf/Makefile | 1 +
> > drivers/perf/arm_smmuv3_pmu.c | 813 ++++++++++++++++++++++++++++++++++++++++++
> > 3 files changed, 823 insertions(+)
> > create mode 100644 drivers/perf/arm_smmuv3_pmu.c
> >
> > diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig
> > index e5197ff..e7721d1 100644
> > --- a/drivers/perf/Kconfig
> > +++ b/drivers/perf/Kconfig
> > @@ -17,6 +17,15 @@ config ARM_PMU_ACPI
> > depends on ARM_PMU && ACPI
> > def_bool y
> >
> > +config ARM_SMMUV3_PMU
> > + bool "ARM SMMUv3 PMU"
> > + depends on PERF_EVENTS && ARM64 && ACPI
>
> PERF_EVENTS is already a top-level dependency now.
>
> > + help
> > + Provides support for the SMMU version 3 performance monitor unit (PMU)
> > + on ARM-based systems.
> > + Adds the SMMU PMU into the perf events subsystem for
> > + monitoring SMMU performance events.
> > +
> > config QCOM_L2_PMU
> > bool "Qualcomm Technologies L2-cache PMU"
> > depends on ARCH_QCOM && ARM64 && ACPI
> > diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile
> > index 6420bd4..3012f5e 100644
> > --- a/drivers/perf/Makefile
> > +++ b/drivers/perf/Makefile
> > @@ -1,5 +1,6 @@
> > obj-$(CONFIG_ARM_PMU) += arm_pmu.o arm_pmu_platform.o
> > obj-$(CONFIG_ARM_PMU_ACPI) += arm_pmu_acpi.o
> > +obj-$(CONFIG_ARM_SMMUV3_PMU) += arm_smmuv3_pmu.o
> > obj-$(CONFIG_QCOM_L2_PMU) += qcom_l2_pmu.o
> > obj-$(CONFIG_QCOM_L3_PMU) += qcom_l3_pmu.o
> > obj-$(CONFIG_XGENE_PMU) += xgene_pmu.o
> > diff --git a/drivers/perf/arm_smmuv3_pmu.c b/drivers/perf/arm_smmuv3_pmu.c
> > new file mode 100644
> > index 0000000..1e70791
> > --- /dev/null
> > +++ b/drivers/perf/arm_smmuv3_pmu.c
> > @@ -0,0 +1,813 @@
> > +/* Copyright (c) 2017 The Linux Foundation. All rights reserved.
> > + *
> > + * This program is free software; you can redistribute it and/or modify
> > + * it under the terms of the GNU General Public License version 2 and
> > + * only version 2 as published by the Free Software Foundation.
> > + *
> > + * This program is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> > + * GNU General Public License for more details.
> > + */
> > +
> > +/*
> > + * This driver adds support for perf events to use the Performance
> > + * Monitor Counter Groups (PMCG) associated with an SMMUv3 node
> > + * to monitor that node.
> > + *
> > + * Devices are named smmu_0_<phys_addr_page> where <phys_addr_page>
> > + * is the physical page address of the SMMU PMCG.
> > + * For example, the SMMU PMCG at 0xff88840000 is named smmu_0_ff88840
> > + *
> > + * Filtering by stream id is done by specifying filtering parameters
> > + * with the event. options are:
> > + * filter_enable - 0 = no filtering, 1 = filtering enabled
> > + * filter_span - 0 = exact match, 1 = pattern match
> > + * filter_sec - filter applies to non-secure (0) or secure (1) namespace
> > + * filter_stream_id - pattern to filter against
> > + * Further filtering information is available in the SMMU documentation.
> > + *
> > + * Example: perf stat -e smmu_0_ff88840/transaction,filter_enable=1,
> > + * filter_span=1,filter_stream_id=0x42/ -a pwd
> > + * Applies filter pattern 0x42 to transaction events.
> > + *
> > + * SMMU events are not attributable to a CPU, so task mode and sampling
> > + * are not supported.
> > + */
> > +
> > +#include <linux/acpi.h>
> > +#include <linux/acpi_iort.h>
> > +#include <linux/bitops.h>
> > +#include <linux/cpuhotplug.h>
> > +#include <linux/cpumask.h>
> > +#include <linux/device.h>
> > +#include <linux/errno.h>
> > +#include <linux/interrupt.h>
> > +#include <linux/irq.h>
> > +#include <linux/kernel.h>
> > +#include <linux/list.h>
> > +#include <linux/msi.h>
>
> Is MSI support planned?
>
> > +#include <linux/perf_event.h>
> > +#include <linux/platform_device.h>
> > +#include <linux/smp.h>
> > +#include <linux/sysfs.h>
> > +#include <linux/types.h>
> > +
> > +#include <asm/local64.h>
> > +
> > +#define SMMU_PMCG_EVCNTR0 0x0
> > +#define SMMU_PMCG_EVCNTR(n, stride) (SMMU_PMCG_EVCNTR0 + (n) * (stride))
> > +#define SMMU_PMCG_EVTYPER0 0x400
> > +#define SMMU_PMCG_EVTYPER(n) (SMMU_PMCG_EVTYPER0 + (n) * 4)
> > +#define SMMU_PMCG_EVTYPER_SEC_SID_SHIFT 30
> > +#define SMMU_PMCG_EVTYPER_SID_SPAN_SHIFT 29
> > +#define SMMU_PMCG_EVTYPER_EVENT_MASK GENMASK(15, 0)
> > +#define SMMU_PMCG_SVR0 0x600
> > +#define SMMU_PMCG_SVR(n, stride) (SMMU_PMCG_SVR0 + (n) * (stride))
> > +#define SMMU_PMCG_SMR0 0xA00
> > +#define SMMU_PMCG_SMR(n) (SMMU_PMCG_SMR0 + (n) * 4)
> > +#define SMMU_PMCG_CNTENSET0 0xC00
> > +#define SMMU_PMCG_CNTENCLR0 0xC20
> > +#define SMMU_PMCG_INTENSET0 0xC40
> > +#define SMMU_PMCG_INTENCLR0 0xC60
> > +#define SMMU_PMCG_OVSCLR0 0xC80
> > +#define SMMU_PMCG_OVSSET0 0xCC0
> > +#define SMMU_PMCG_CAPR 0xD88
> > +#define SMMU_PMCG_SCR 0xDF8
> > +#define SMMU_PMCG_CFGR 0xE00
> > +#define SMMU_PMCG_CFGR_SID_FILTER_TYPE BIT(23)
> > +#define SMMU_PMCG_CFGR_CAPTURE BIT(22)
> > +#define SMMU_PMCG_CFGR_MSI BIT(21)
> > +#define SMMU_PMCG_CFGR_RELOC_CTRS BIT(20)
> > +#define SMMU_PMCG_CFGR_SIZE_MASK GENMASK(13, 8)
> > +#define SMMU_PMCG_CFGR_SIZE_SHIFT 8
> > +#define SMMU_PMCG_CFGR_COUNTER_SIZE_32 31
> > +#define SMMU_PMCG_CFGR_NCTR_MASK GENMASK(5, 0)
> > +#define SMMU_PMCG_CFGR_NCTR_SHIFT 0
> > +#define SMMU_PMCG_CR 0xE04
> > +#define SMMU_PMCG_CR_ENABLE BIT(0)
> > +#define SMMU_PMCG_CEID0 0xE20
> > +#define SMMU_PMCG_CEID1 0xE28
> > +#define SMMU_PMCG_IRQ_CTRL 0xE50
> > +#define SMMU_PMCG_IRQ_CTRL_IRQEN BIT(0)
> > +#define SMMU_PMCG_IRQ_CTRLACK 0xE54
> > +#define SMMU_PMCG_IRQ_CTRLACK_IRQEN BIT(0)
> > +#define SMMU_PMCG_IRQ_CFG0 0xE58
> > +#define SMMU_PMCG_IRQ_CFG0_ADDR_MASK GENMASK(51, 2)
> > +#define SMMU_PMCG_IRQ_CFG1 0xE60
> > +#define SMMU_PMCG_IRQ_CFG2 0xE64
> > +#define SMMU_PMCG_IRQ_STATUS 0xE68
> > +#define SMMU_PMCG_IRQ_STATUS_IRQ_ABT BIT(0)
> > +#define SMMU_PMCG_AIDR 0xE70
>
> Several of these are unused (although at least IRQ0_CFG1 probably should
> be, to zero it to a known state if we aren't supporting MSIs).
>
> > +#define SMMU_COUNTER_RELOAD BIT(31)
> > +#define SMMU_DEFAULT_FILTER_SEC 0
> > +#define SMMU_DEFAULT_FILTER_SPAN 1
> > +#define SMMU_DEFAULT_FILTER_STREAM_ID GENMASK(31, 0)
> > +
> > +#define SMMU_MAX_COUNTERS 64
> > +#define SMMU_MAX_EVENT_ID 128
> > +#define SMMU_NUM_EVENTS_U32 (SMMU_MAX_EVENT_ID / sizeof(u32))
> > +
> > +#define SMMU_PA_SHIFT 12
> > +
> > +/* Events */
> > +#define SMMU_PMU_CYCLES 0
> > +#define SMMU_PMU_TRANSACTION 1
> > +#define SMMU_PMU_TLB_MISS 2
> > +#define SMMU_PMU_CONFIG_CACHE_MISS 3
> > +#define SMMU_PMU_TRANS_TABLE_WALK 4
> > +#define SMMU_PMU_CONFIG_STRUCT_ACCESS 5
> > +#define SMMU_PMU_PCIE_ATS_TRANS_RQ 6
> > +#define SMMU_PMU_PCIE_ATS_TRANS_PASSED 7
> > +
> > +static int cpuhp_state_num;
> > +
> > +struct smmu_pmu {
> > + struct hlist_node node;
> > + struct perf_event *events[SMMU_MAX_COUNTERS];
> > + DECLARE_BITMAP(used_counters, SMMU_MAX_COUNTERS);
> > + DECLARE_BITMAP(supported_events, SMMU_MAX_EVENT_ID);
> > + unsigned int irq;
> > + unsigned int on_cpu;
> > + struct pmu pmu;
> > + unsigned int num_counters;
> > + struct platform_device *pdev;
> > + void __iomem *reg_base;
> > + void __iomem *reloc_base;
> > + u64 counter_present_mask;
> > + u64 counter_mask;
> > + bool reg_size_32;
>
> This guy is redundant...
>
> > +};
> > +
> > +#define to_smmu_pmu(p) (container_of(p, struct smmu_pmu, pmu))
> > +
> > +#define SMMU_PMU_EVENT_ATTR_EXTRACTOR(_name, _config, _size, _shift) \
> > + static inline u32 get_##_name(struct perf_event *event) \
> > + { \
> > + return (event->attr._config >> (_shift)) & \
> > + GENMASK_ULL((_size) - 1, 0); \
> > + }
> > +
> > +SMMU_PMU_EVENT_ATTR_EXTRACTOR(event, config, 16, 0);
> > +SMMU_PMU_EVENT_ATTR_EXTRACTOR(filter_stream_id, config1, 32, 0);
> > +SMMU_PMU_EVENT_ATTR_EXTRACTOR(filter_span, config1, 1, 32);
> > +SMMU_PMU_EVENT_ATTR_EXTRACTOR(filter_sec, config1, 1, 33);
> > +SMMU_PMU_EVENT_ATTR_EXTRACTOR(filter_enable, config1, 1, 34);
> > +
> > +static inline void smmu_pmu_enable(struct pmu *pmu)
> > +{
> > + struct smmu_pmu *smmu_pmu = to_smmu_pmu(pmu);
> > +
> > + writel(SMMU_PMCG_CR_ENABLE, smmu_pmu->reg_base + SMMU_PMCG_CR);
> > + writel(SMMU_PMCG_IRQ_CTRL_IRQEN,
> > + smmu_pmu->reg_base + SMMU_PMCG_IRQ_CTRL);
> > +}
> > +
> > +static inline void smmu_pmu_disable(struct pmu *pmu)
> > +{
> > + struct smmu_pmu *smmu_pmu = to_smmu_pmu(pmu);
> > +
> > + writel(0, smmu_pmu->reg_base + SMMU_PMCG_CR);
> > + writel(0, smmu_pmu->reg_base + SMMU_PMCG_IRQ_CTRL);
> > +}
> > +
> > +static inline void smmu_pmu_counter_set_value(struct smmu_pmu *smmu_pmu,
> > + u32 idx, u64 value)
> > +{
> > + if (smmu_pmu->reg_size_32)
>
> ...since it would be just as efficient to directly test
> smmu_pmu->counter_mask & BIT(32) here and below.
>
> > + writel(value, smmu_pmu->reloc_base + SMMU_PMCG_EVCNTR(idx, 4));
> > + else
> > + writeq(value, smmu_pmu->reloc_base + SMMU_PMCG_EVCNTR(idx, 8));
> > +}
> > +
> > +static inline u64 smmu_pmu_counter_get_value(struct smmu_pmu *smmu_pmu, u32 idx)
> > +{
> > + u64 value;
> > +
> > + if (smmu_pmu->reg_size_32)
> > + value = readl(smmu_pmu->reloc_base + SMMU_PMCG_EVCNTR(idx, 4));
> > + else
> > + value = readq(smmu_pmu->reloc_base + SMMU_PMCG_EVCNTR(idx, 8));
> > +
> > + return value;
> > +}
> > +
> > +static inline void smmu_pmu_counter_enable(struct smmu_pmu *smmu_pmu, u32 idx)
> > +{
> > + writeq(BIT(idx), smmu_pmu->reg_base + SMMU_PMCG_CNTENSET0);
> > +}
> > +
> > +static inline void smmu_pmu_counter_disable(struct smmu_pmu *smmu_pmu, u32 idx)
> > +{
> > + writeq(BIT(idx), smmu_pmu->reg_base + SMMU_PMCG_CNTENCLR0);
> > +}
> > +
> > +static inline void smmu_pmu_interrupt_enable(struct smmu_pmu *smmu_pmu, u32 idx)
> > +{
> > + writeq(BIT(idx), smmu_pmu->reg_base + SMMU_PMCG_INTENSET0);
> > +}
> > +
> > +static inline void smmu_pmu_interrupt_disable(struct smmu_pmu *smmu_pmu,
> > + u32 idx)
> > +{
> > + writeq(BIT(idx), smmu_pmu->reg_base + SMMU_PMCG_INTENCLR0);
> > +}
> > +
> > +static void smmu_pmu_reset(struct smmu_pmu *smmu_pmu)
> > +{
> > + unsigned int i;
> > +
> > + for (i = 0; i < smmu_pmu->num_counters; i++) {
> > + smmu_pmu_counter_disable(smmu_pmu, i);
> > + smmu_pmu_interrupt_disable(smmu_pmu, i);
> > + }
>
> Surely it would be far quicker and simpler to do this?
>
> writeq(smmu_pmu->counter_present_mask,
> smmu_pmu->reg_base + SMMU_PMCG_CNTENCLR0);
> writeq(smmu_pmu->counter_present_mask,
> smmu_pmu->reg_base + SMMU_PMCG_INTENCLR0);
>
> > + smmu_pmu_disable(&smmu_pmu->pmu);
> > +}
> > +
> > +static inline void smmu_pmu_set_evtyper(struct smmu_pmu *smmu_pmu, u32 idx,
> > + u32 val)
> > +{
> > + writel(val, smmu_pmu->reg_base + SMMU_PMCG_EVTYPER(idx));
> > +}
> > +
> > +static inline void smmu_pmu_set_smr(struct smmu_pmu *smmu_pmu, u32 idx, u32 val)
> > +{
> > + writel(val, smmu_pmu->reg_base + SMMU_PMCG_SMR(idx));
> > +}
> > +
> > +static inline u64 smmu_pmu_getreset_ovsr(struct smmu_pmu *smmu_pmu)
> > +{
> > + u64 result = readq_relaxed(smmu_pmu->reloc_base + SMMU_PMCG_OVSSET0);
> > +
> > + writeq(result, smmu_pmu->reloc_base + SMMU_PMCG_OVSCLR0);
> > + return result;
> > +}
> > +
> > +static inline bool smmu_pmu_has_overflowed(struct smmu_pmu *smmu_pmu, u64 ovsr)
> > +{
> > + return !!(ovsr & smmu_pmu->counter_present_mask);
> > +}
>
> Personally, I find these helpers abstracting simple reads/writes
> actually make the code harder to follow, especially when they're each
> used a grand total of once. That may well just be me, though.
>
> > +static void smmu_pmu_event_update(struct perf_event *event)
> > +{
> > + struct hw_perf_event *hwc = &event->hw;
> > + struct smmu_pmu *smmu_pmu = to_smmu_pmu(event->pmu);
> > + u64 delta, prev, now;
> > + u32 idx = hwc->idx;
> > +
> > + do {
> > + prev = local64_read(&hwc->prev_count);
> > + now = smmu_pmu_counter_get_value(smmu_pmu, idx);
> > + } while (local64_cmpxchg(&hwc->prev_count, prev, now) != prev);
> > +
> > + /* handle overflow. */
> > + delta = now - prev;
> > + delta &= smmu_pmu->counter_mask;
> > +
> > + local64_add(delta, &event->count);
> > +}
> > +
> > +static void smmu_pmu_set_period(struct smmu_pmu *smmu_pmu,
> > + struct hw_perf_event *hwc)
> > +{
> > + u32 idx = hwc->idx;
> > + u64 new;
> > +
> > + /*
> > + * We limit the max period to half the max counter value of the smallest
> > + * counter size, so that even in the case of extreme interrupt latency
> > + * the counter will (hopefully) not wrap past its initial value.
> > + */
>
> Having once fought to properly understand the underlying logic I despise
> this unhelpfully-vague comment, but that's not your fault ;)
>
> > + new = SMMU_COUNTER_RELOAD;
>
> Given that we *are* following the "use the top counter bit as an
> implicit overflow bit" pattern of arm_pmu, it feels a bit weird to not
> use the actual half-maximum value here (especially since it's easily
> computable from counter_mask). I'm about 85% sure it probably still
> works, but as ever inconsistency breeds uncertainty.
> > + local64_set(&hwc->prev_count, new);
> > + smmu_pmu_counter_set_value(smmu_pmu, idx, new);
> > +}
> > +
> > +static irqreturn_t smmu_pmu_handle_irq(int irq_num, void *data)
> > +{
> > + struct smmu_pmu *smmu_pmu = data;
> > + u64 ovsr;
> > + unsigned int idx;
> > +
> > + ovsr = smmu_pmu_getreset_ovsr(smmu_pmu);
> > + if (!smmu_pmu_has_overflowed(smmu_pmu, ovsr))
>
> You have an architectural guarantee that unimplemented bits of OVSSET0
> are RES0, so checking !ovsr is sufficient.
>
> > + return IRQ_NONE;
> > +
> > + for_each_set_bit(idx, (unsigned long *)&ovsr, smmu_pmu->num_counters) {
> > + struct perf_event *event = smmu_pmu->events[idx];
> > + struct hw_perf_event *hwc;
> > +
> > + if (WARN_ON_ONCE(!event))
> > + continue;
> > +
> > + smmu_pmu_event_update(event);
> > + hwc = &event->hw;
> > +
> > + smmu_pmu_set_period(smmu_pmu, hwc);
> > + }
> > +
> > + return IRQ_HANDLED;
> > +}
> > +
> > +static unsigned int smmu_pmu_get_event_idx(struct smmu_pmu *smmu_pmu)
> > +{
> > + unsigned int idx;
> > + unsigned int num_ctrs = smmu_pmu->num_counters;
> > +
> > + idx = find_first_zero_bit(smmu_pmu->used_counters, num_ctrs);
> > + if (idx == num_ctrs)
> > + /* The counters are all in use. */
> > + return -EAGAIN;
> > +
> > + set_bit(idx, smmu_pmu->used_counters);
> > +
> > + return idx;
> > +}
> > +
> > +/*
> > + * Implementation of abstract pmu functionality required by
> > + * the core perf events code.
> > + */
> > +
> > +static int smmu_pmu_event_init(struct perf_event *event)
> > +{
> > + struct hw_perf_event *hwc = &event->hw;
> > + struct perf_event *sibling;
> > + struct smmu_pmu *smmu_pmu;
> > + u32 event_id;
> > +
> > + if (event->attr.type != event->pmu->type)
> > + return -ENOENT;
> > +
> > + smmu_pmu = to_smmu_pmu(event->pmu);
> > +
> > + if (hwc->sample_period) {
> > + dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
> > + "Sampling not supported\n");
> > + return -EOPNOTSUPP;
> > + }
> > +
> > + if (event->cpu < 0) {
> > + dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
> > + "Per-task mode not supported\n");
> > + return -EOPNOTSUPP;
> > + }
> > +
> > + /* We cannot filter accurately so we just don't allow it. */
> > + if (event->attr.exclude_user || event->attr.exclude_kernel ||
> > + event->attr.exclude_hv || event->attr.exclude_idle) {
> > + dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
> > + "Can't exclude execution levels\n");
> > + return -EOPNOTSUPP;
> > + }
> > +
> > + /* Verify specified event is supported on this PMU */
> > + event_id = get_event(event);
> > + if ((event_id >= SMMU_MAX_EVENT_ID) ||
>
> What about raw imp-def events?
>
> > + (!test_bit(event_id, smmu_pmu->supported_events))) {
> > + dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
> > + "Invalid event %d for this PMU\n",
> > + event_id);
> > + return -EINVAL;
> > + }
> > +
> > + /* Don't allow groups with mixed PMUs, except for s/w events */
> > + if (event->group_leader->pmu != event->pmu &&
> > + !is_software_event(event->group_leader)) {
> > + dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
> > + "Can't create mixed PMU group\n");
> > + return -EINVAL;
> > + }
> > +
> > + list_for_each_entry(sibling, &event->group_leader->sibling_list,
> > + group_entry)
> > + if (sibling->pmu != event->pmu &&
> > + !is_software_event(sibling)) {
> > + dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
> > + "Can't create mixed PMU group\n");
> > + return -EINVAL;
> > + }
> > +
> > + /* Ensure all events in a group are on the same cpu */
> > + if ((event->group_leader != event) &&
> > + (event->cpu != event->group_leader->cpu)) {
> > + dev_dbg_ratelimited(&smmu_pmu->pdev->dev,
> > + "Can't create group on CPUs %d and %d",
> > + event->cpu, event->group_leader->cpu);
> > + return -EINVAL;
> > + }
> > +
> > + hwc->idx = -1;
> > +
> > + /*
> > + * Ensure all events are on the same cpu so all events are in the
> > + * same cpu context, to avoid races on pmu_enable etc.
> > + */
> > + event->cpu = smmu_pmu->on_cpu;
> > +
> > + return 0;
> > +}
> > +
> > +static void smmu_pmu_event_start(struct perf_event *event, int flags)
> > +{
> > + struct smmu_pmu *smmu_pmu = to_smmu_pmu(event->pmu);
> > + struct hw_perf_event *hwc = &event->hw;
> > + int idx = hwc->idx;
> > + u32 evtyper;
> > + u32 filter_sec;
> > + u32 filter_span;
> > + u32 filter_stream_id;
> > +
> > + hwc->state = 0;
> > +
> > + smmu_pmu_set_period(smmu_pmu, hwc);
> > +
> > + if (get_filter_enable(event)) {
> > + filter_sec = get_filter_sec(event);
> > + filter_span = get_filter_span(event);
> > + filter_stream_id = get_filter_stream_id(event);
> > + } else {
> > + filter_sec = SMMU_DEFAULT_FILTER_SEC;
> > + filter_span = SMMU_DEFAULT_FILTER_SPAN;
> > + filter_stream_id = SMMU_DEFAULT_FILTER_STREAM_ID;
> > + }
> > +
> > + evtyper = get_event(event) |
> > + filter_span << SMMU_PMCG_EVTYPER_SID_SPAN_SHIFT |
> > + filter_sec << SMMU_PMCG_EVTYPER_SEC_SID_SHIFT;
> > +
> > + smmu_pmu_set_evtyper(smmu_pmu, idx, evtyper);
> > + smmu_pmu_set_smr(smmu_pmu, idx, filter_stream_id);
> > + smmu_pmu_interrupt_enable(smmu_pmu, idx);
> > + smmu_pmu_counter_enable(smmu_pmu, idx);
> > +}
> > +
> > +static void smmu_pmu_event_stop(struct perf_event *event, int flags)
> > +{
> > + struct smmu_pmu *smmu_pmu = to_smmu_pmu(event->pmu);
> > + struct hw_perf_event *hwc = &event->hw;
> > + int idx = hwc->idx;
> > +
> > + if (hwc->state & PERF_HES_STOPPED)
> > + return;
> > +
> > + smmu_pmu_interrupt_disable(smmu_pmu, idx);
> > + smmu_pmu_counter_disable(smmu_pmu, idx);
> > +
> > + if (flags & PERF_EF_UPDATE)
> > + smmu_pmu_event_update(event);
> > + hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE;
> > +}
> > +
> > +static int smmu_pmu_event_add(struct perf_event *event, int flags)
> > +{
> > + struct hw_perf_event *hwc = &event->hw;
> > + int idx;
> > + struct smmu_pmu *smmu_pmu = to_smmu_pmu(event->pmu);
> > +
> > + idx = smmu_pmu_get_event_idx(smmu_pmu);
> > + if (idx < 0)
> > + return idx;
> > +
> > + hwc->idx = idx;
> > + hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
> > + smmu_pmu->events[idx] = event;
> > + local64_set(&hwc->prev_count, 0);
> > +
> > + if (flags & PERF_EF_START)
> > + smmu_pmu_event_start(event, flags);
> > +
> > + /* Propagate changes to the userspace mapping. */
> > + perf_event_update_userpage(event);
> > +
> > + return 0;
> > +}
> > +
> > +static void smmu_pmu_event_del(struct perf_event *event, int flags)
> > +{
> > + struct hw_perf_event *hwc = &event->hw;
> > + struct smmu_pmu *smmu_pmu = to_smmu_pmu(event->pmu);
> > + int idx = hwc->idx;
> > +
> > + smmu_pmu_event_stop(event, flags | PERF_EF_UPDATE);
> > + smmu_pmu->events[idx] = NULL;
> > + clear_bit(idx, smmu_pmu->used_counters);
> > +
> > + perf_event_update_userpage(event);
> > +}
> > +
> > +static void smmu_pmu_event_read(struct perf_event *event)
> > +{
> > + smmu_pmu_event_update(event);
> > +}
> > +
> > +/* cpumask */
> > +
> > +static ssize_t smmu_pmu_cpumask_show(struct device *dev,
> > + struct device_attribute *attr,
> > + char *buf)
> > +{
> > + struct smmu_pmu *smmu_pmu = to_smmu_pmu(dev_get_drvdata(dev));
> > +
> > + return cpumap_print_to_pagebuf(true, buf, cpumask_of(smmu_pmu->on_cpu));
> > +}
> > +
> > +static struct device_attribute smmu_pmu_cpumask_attr =
> > + __ATTR(cpumask, 0444, smmu_pmu_cpumask_show, NULL);
> > +
> > +static struct attribute *smmu_pmu_cpumask_attrs[] = {
> > + &smmu_pmu_cpumask_attr.attr,
> > + NULL,
> > +};
> > +
> > +static struct attribute_group smmu_pmu_cpumask_group = {
> > + .attrs = smmu_pmu_cpumask_attrs,
> > +};
> > +
> > +/* Events */
> > +
> > +ssize_t smmu_pmu_event_show(struct device *dev,
> > + struct device_attribute *attr, char *page)
> > +{
> > + struct perf_pmu_events_attr *pmu_attr;
> > +
> > + pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr);
> > +
> > + return sprintf(page, "event=0x%02llx\n", pmu_attr->id);
> > +}
> > +
> > +#define SMMU_EVENT_ATTR(_name, _id) \
> > + (&((struct perf_pmu_events_attr[]) { \
> > + { .attr = __ATTR(_name, 0444, smmu_pmu_event_show, NULL), \
> > + .id = _id, } \
> > + })[0].attr.attr)
> > +
> > +static struct attribute *smmu_pmu_events[] = {
> > + SMMU_EVENT_ATTR(cycles, SMMU_PMU_CYCLES),
> > + SMMU_EVENT_ATTR(transaction, SMMU_PMU_TRANSACTION),
> > + SMMU_EVENT_ATTR(tlb_miss, SMMU_PMU_TLB_MISS),
> > + SMMU_EVENT_ATTR(config_cache_miss, SMMU_PMU_CONFIG_CACHE_MISS),
> > + SMMU_EVENT_ATTR(trans_table_walk, SMMU_PMU_TRANS_TABLE_WALK),
> > + SMMU_EVENT_ATTR(config_struct_access, SMMU_PMU_CONFIG_STRUCT_ACCESS),
> > + SMMU_EVENT_ATTR(pcie_ats_trans_rq, SMMU_PMU_PCIE_ATS_TRANS_RQ),
> > + SMMU_EVENT_ATTR(pcie_ats_trans_passed, SMMU_PMU_PCIE_ATS_TRANS_PASSED),
> > + NULL
> > +};
> > +
> > +static umode_t smmu_pmu_event_is_visible(struct kobject *kobj,
> > + struct attribute *attr, int unused)
> > +{
> > + struct device *dev = kobj_to_dev(kobj);
> > + struct smmu_pmu *smmu_pmu = to_smmu_pmu(dev_get_drvdata(dev));
> > + struct perf_pmu_events_attr *pmu_attr;
> > +
> > + pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr.attr);
> > +
> > + if (test_bit(pmu_attr->id, smmu_pmu->supported_events))
> > + return attr->mode;
> > +
> > + return 0;
> > +}
> > +static struct attribute_group smmu_pmu_events_group = {
> > + .name = "events",
> > + .attrs = smmu_pmu_events,
> > + .is_visible = smmu_pmu_event_is_visible,
> > +};
> > +
> > +/* Formats */
> > +PMU_FORMAT_ATTR(event, "config:0-15");
> > +PMU_FORMAT_ATTR(filter_stream_id, "config1:0-31");
> > +PMU_FORMAT_ATTR(filter_span, "config1:32");
> > +PMU_FORMAT_ATTR(filter_sec, "config1:33");
> > +PMU_FORMAT_ATTR(filter_enable, "config1:34");
> > +
> > +static struct attribute *smmu_pmu_formats[] = {
> > + &format_attr_event.attr,
> > + &format_attr_filter_stream_id.attr,
> > + &format_attr_filter_span.attr,
> > + &format_attr_filter_sec.attr,
> > + &format_attr_filter_enable.attr,
> > + NULL
> > +};
> > +
> > +static struct attribute_group smmu_pmu_format_group = {
> > + .name = "format",
> > + .attrs = smmu_pmu_formats,
> > +};
> > +
> > +static const struct attribute_group *smmu_pmu_attr_grps[] = {
> > + &smmu_pmu_cpumask_group,
> > + &smmu_pmu_events_group,
> > + &smmu_pmu_format_group,
> > + NULL,
> > +};
> > +
> > +/*
> > + * Generic device handlers
> > + */
> > +
> > +static unsigned int get_num_counters(struct smmu_pmu *smmu_pmu)
> > +{
> > + u32 cfgr = readl(smmu_pmu->reg_base + SMMU_PMCG_CFGR);
> > +
> > + return ((cfgr & SMMU_PMCG_CFGR_NCTR_MASK) >> SMMU_PMCG_CFGR_NCTR_SHIFT)
> > + + 1;
> > +}
> > +
> > +static int smmu_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
> > +{
> > + struct smmu_pmu *smmu_pmu;
> > + unsigned int target;
> > +
> > + smmu_pmu = hlist_entry_safe(node, struct smmu_pmu, node);
>
> Is it ever valid for node to be NULL? If we can't trust it to be one of
> the PMUs we registered I think all bets are off anyway.
>
> > + if (cpu != smmu_pmu->on_cpu)
> > + return 0;
> > +
> > + target = cpumask_any_but(cpu_online_mask, cpu);
> > + if (target >= nr_cpu_ids)
> > + return 0;
> > +
> > + perf_pmu_migrate_context(&smmu_pmu->pmu, cpu, target);
> > + smmu_pmu->on_cpu = target;
> > + WARN_ON(irq_set_affinity(smmu_pmu->irq, cpumask_of(target)));
> > +
> > + return 0;
> > +}
> > +
> > +static int smmu_pmu_probe(struct platform_device *pdev)
> > +{
> > + struct smmu_pmu *smmu_pmu;
> > + struct resource *mem_resource_0, *mem_resource_1;
> > + void __iomem *mem_map_0, *mem_map_1;
> > + unsigned int reg_size;
> > + int err;
> > + int irq;
> > + u32 ceid[SMMU_NUM_EVENTS_U32];
> > + u64 ceid_64;
> > +
> > + smmu_pmu = devm_kzalloc(&pdev->dev, sizeof(*smmu_pmu), GFP_KERNEL);
> > + if (!smmu_pmu)
> > + return -ENOMEM;
> > +
> > + platform_set_drvdata(pdev, smmu_pmu);
> > + smmu_pmu->pmu = (struct pmu) {
> > + .task_ctx_nr = perf_invalid_context,
> > + .pmu_enable = smmu_pmu_enable,
> > + .pmu_disable = smmu_pmu_disable,
> > + .event_init = smmu_pmu_event_init,
> > + .add = smmu_pmu_event_add,
> > + .del = smmu_pmu_event_del,
> > + .start = smmu_pmu_event_start,
> > + .stop = smmu_pmu_event_stop,
> > + .read = smmu_pmu_event_read,
> > + .attr_groups = smmu_pmu_attr_grps,
> > + };
> > +
> > + mem_resource_0 = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> > + mem_map_0 = devm_ioremap_resource(&pdev->dev, mem_resource_0);
> > +
> > + if (IS_ERR(mem_map_0)) {
> > + dev_err(&pdev->dev, "Can't map SMMU PMU @%pa\n",
> > + &mem_resource_0->start);
> > + return PTR_ERR(mem_map_0);
> > + }
> > +
> > + smmu_pmu->reg_base = mem_map_0;
> > + smmu_pmu->pmu.name =
> > + devm_kasprintf(&pdev->dev, GFP_KERNEL, "smmu_0_%llx",
> > + (mem_resource_0->start) >> SMMU_PA_SHIFT);
> > +
> > + if (!smmu_pmu->pmu.name) {
> > + dev_err(&pdev->dev, "Failed to create PMU name");
> > + return -EINVAL;
> > + }
> > +
> > + ceid_64 = readq(smmu_pmu->reg_base + SMMU_PMCG_CEID0);
> > + ceid[0] = ceid_64 & GENMASK(31, 0);
>
> It took a second look to determine that that masking does nothing...
>
> > + ceid[1] = ceid_64 >> 32;
> > + ceid_64 = readq(smmu_pmu->reg_base + SMMU_PMCG_CEID1);
> > + ceid[2] = ceid_64 & GENMASK(31, 0);
> > + ceid[3] = ceid_64 >> 32;
> > + bitmap_from_u32array(smmu_pmu->supported_events, SMMU_MAX_EVENT_ID,
> > + ceid, SMMU_NUM_EVENTS_U32);
>
> ...but then the whole lot might be cleaner and simpler with a u64[2]
> cast to u32* (or unioned to u32[4]) as necessary.
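>
> Something like this, perhaps - an untested sketch of the union variant,
> keeping the names from the patch (and assuming the usual little-endian
> arm64 register layout):
>
> 	union {
> 		u64 w64[2];
> 		u32 w32[4];
> 	} ceid;
>
> 	ceid.w64[0] = readq(smmu_pmu->reg_base + SMMU_PMCG_CEID0);
> 	ceid.w64[1] = readq(smmu_pmu->reg_base + SMMU_PMCG_CEID1);
> 	bitmap_from_u32array(smmu_pmu->supported_events, SMMU_MAX_EVENT_ID,
> 			     ceid.w32, SMMU_NUM_EVENTS_U32);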
>
> > +
> > + /* Determine if page 1 is present */
> > + if (readl(smmu_pmu->reg_base + SMMU_PMCG_CFGR) &
> > + SMMU_PMCG_CFGR_RELOC_CTRS) {
> > + mem_resource_1 = platform_get_resource(pdev, IORESOURCE_MEM, 1);
> > + mem_map_1 = devm_ioremap_resource(&pdev->dev, mem_resource_1);
> > +
> > + if (IS_ERR(mem_map_1)) {
> > + dev_err(&pdev->dev, "Can't map SMMU PMU @%pa\n",
> > + &mem_resource_1->start);
> > + return PTR_ERR(mem_map_1);
> > + }
> > + smmu_pmu->reloc_base = mem_map_1;
> > + } else {
> > + smmu_pmu->reloc_base = smmu_pmu->reg_base;
> > + }
> > +
> > + irq = platform_get_irq(pdev, 0);
> > + if (irq < 0) {
> > + dev_err(&pdev->dev,
> > + "Failed to get valid irq for smmu @%pa\n",
> > + &mem_resource_0->start);
> > + return irq;
> > + }
> > +
> > + err = devm_request_irq(&pdev->dev, irq, smmu_pmu_handle_irq,
> > + IRQF_NOBALANCING | IRQF_SHARED | IRQF_NO_THREAD,
> > + "smmu-pmu", smmu_pmu);
> > + if (err) {
> > + dev_err(&pdev->dev,
> > + "Unable to request IRQ%d for SMMU PMU counters\n", irq);
> > + return err;
> > + }
> > +
> > + smmu_pmu->irq = irq;
> > +
> > + /* Pick one CPU to be the preferred one to use */
> > + smmu_pmu->on_cpu = smp_processor_id();
> > + WARN_ON(irq_set_affinity(smmu_pmu->irq, cpumask_of(smmu_pmu->on_cpu)));
> > +
> > + smmu_pmu->num_counters = get_num_counters(smmu_pmu);
> > + smmu_pmu->pdev = pdev;
> > + smmu_pmu->counter_present_mask = GENMASK(smmu_pmu->num_counters - 1, 0);
> > + reg_size = (readl(smmu_pmu->reg_base + SMMU_PMCG_CFGR) &
> > + SMMU_PMCG_CFGR_SIZE_MASK) >> SMMU_PMCG_CFGR_SIZE_SHIFT;
> > + smmu_pmu->reg_size_32 = (reg_size == SMMU_PMCG_CFGR_COUNTER_SIZE_32);
> > + smmu_pmu->counter_mask = GENMASK_ULL(reg_size, 0);
> > +
> > + smmu_pmu_reset(smmu_pmu);
> > +
> > + err = cpuhp_state_add_instance_nocalls(cpuhp_state_num,
> > + &smmu_pmu->node);
> > + if (err) {
> > + dev_err(&pdev->dev, "Error %d registering hotplug", err);
> > + return err;
> > + }
> > +
> > + err = perf_pmu_register(&smmu_pmu->pmu, smmu_pmu->pmu.name, -1);
> > + if (err) {
> > + dev_err(&pdev->dev, "Error %d registering SMMU PMU\n", err);
> > + goto out_unregister;
> > + }
> > +
> > + dev_info(&pdev->dev, "Registered SMMU PMU @ %pa using %d counters\n",
> > + &mem_resource_0->start, smmu_pmu->num_counters);
> > +
> > + return err;
> > +
> > +out_unregister:
> > + cpuhp_state_remove_instance_nocalls(cpuhp_state_num, &smmu_pmu->node);
> > + return err;
> > +}
> > +
> > +static int smmu_pmu_remove(struct platform_device *pdev)
> > +{
> > + struct smmu_pmu *smmu_pmu = platform_get_drvdata(pdev);
> > +
> > + perf_pmu_unregister(&smmu_pmu->pmu);
> > + cpuhp_state_remove_instance_nocalls(cpuhp_state_num, &smmu_pmu->node);
> > +
> > + return 0;
> > +}
> > +
> > +static void smmu_pmu_shutdown(struct platform_device *pdev)
> > +{
> > + struct smmu_pmu *smmu_pmu = platform_get_drvdata(pdev);
> > +
> > + smmu_pmu_disable(&smmu_pmu->pmu);
> > +}
> > +
> > +static struct platform_driver smmu_pmu_driver = {
> > + .driver = {
> > + .name = "arm-smmu-pmu",
>
> Nit: "arm-smmu-v3-pmu" please, for consistency with the IOMMU driver
> naming. There is a SMMUv2 PMU driver in the works, too ;)
>
> Robin.
>
> > + },
> > + .probe = smmu_pmu_probe,
> > + .remove = smmu_pmu_remove,
> > + .shutdown = smmu_pmu_shutdown,
> > +};
> > +
> > +static int __init arm_smmu_pmu_init(void)
> > +{
> > + cpuhp_state_num = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN,
> > + "perf/arm/smmupmu:online",
> > + NULL,
> > + smmu_pmu_offline_cpu);
> > + if (cpuhp_state_num < 0)
> > + return cpuhp_state_num;
> > +
> > + return platform_driver_register(&smmu_pmu_driver);
> > +}
> > +module_init(arm_smmu_pmu_init);
> > +
> > +static void __exit arm_smmu_pmu_exit(void)
> > +{
> > + platform_driver_unregister(&smmu_pmu_driver);
> > +}
> > +
> > +module_exit(arm_smmu_pmu_exit);
> > +MODULE_LICENSE("GPL v2");
> >
>
>

--
Linu cherian

2017-12-10 02:36:03

by Linu Cherian

[permalink] [raw]
Subject: Re: [PATCH 0/2] arm64 SMMUv3 PMU driver with IORT support

Hi,


On Fri Aug 04, 2017 at 03:59:12PM -0400, Neil Leeder wrote:
> This adds a driver for the SMMUv3 PMU into the perf framework.
> It includes an IORT update to support PM Counter Groups.
>

In one of Cavium's upcoming SoCs, the SMMU PMCG is implemented in such a way
that the platform bus id (Device ID in ITS terminology) is shared with that of the SMMU.
This is a matter of concern for software if the SMMU and SMMU PMCG blocks
are managed by two independent drivers.

The problem arises when we want to alloc/free MSIs for these devices
using the APIs platform_msi_domain_alloc/free_irqs.
The platform bus id being the same for these two hardware blocks, they end up sharing
the same ITT (Interrupt Translation Table) in the GIC ITS, and hence the alloc, free and
management of this shared ITT becomes a problem when they are managed by two
independent drivers.

We were looking into the option of keeping the SMMU PMCG nodes as sub-nodes under
the SMMUv3 node, so that the SMMUv3 driver could probe, figure out the total vectors
required for the SMMU PMCG devices, and make a single platform_msi_domain_alloc/free_irqs
call for all devices that share the platform bus id (a rough sketch of the idea follows).
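
Something along these lines is what we had in mind - a minimal sketch only;
the helper name is ours, not existing code, and it assumes the SMMUv3 driver
has already totalled the vectors needed by itself and its PMCG sub-nodes:

static int smmu_alloc_shared_msis(struct device *smmu_dev,
				  unsigned int smmu_nvec,
				  unsigned int pmcg_nvec,
				  irq_write_msi_msg_t write_msg)
{
	/* one allocation covers every block behind the shared Device ID */
	return platform_msi_domain_alloc_irqs(smmu_dev,
					      smmu_nvec + pmcg_nvec,
					      write_msg);
}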

Would like to know your expert opinion on what would be the right approach
to handle this case?

Thanks.



> IORT has no mechanism for determining device names so PMUs
> are named based on their physical address.
>
> Tested on Qualcomm QDF2400. perf_fuzzer ran for 4+ hours
> with no failures.
>
> Neil Leeder (2):
> acpi: arm64: add iort support for PMCG
> perf: add arm64 smmuv3 pmu driver
>
> drivers/acpi/arm64/iort.c | 54 +++
> drivers/perf/Kconfig | 9 +
> drivers/perf/Makefile | 1 +
> drivers/perf/arm_smmuv3_pmu.c | 823 ++++++++++++++++++++++++++++++++++++++++++
> include/acpi/actbl2.h | 9 +-
> 5 files changed, 895 insertions(+), 1 deletion(-)
> create mode 100644 drivers/perf/arm_smmuv3_pmu.c
>
> --
> Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm Technologies Inc.
> Qualcomm Technologies, Inc. is a member of the Code Aurora Forum,
> a Linux Foundation Collaborative Project.
>
>

--
Linu cherian

2017-12-18 14:48:24

by Robin Murphy

[permalink] [raw]
Subject: Re: [PATCH 0/2] arm64 SMMUv3 PMU driver with IORT support

On 10/12/17 02:35, Linu Cherian wrote:
> Hi,
>
>
> On Fri Aug 04, 2017 at 03:59:12PM -0400, Neil Leeder wrote:
>> This adds a driver for the SMMUv3 PMU into the perf framework.
>> It includes an IORT update to support PM Counter Groups.
>>
>
> In one of Cavium's upcoming SoCs, the SMMU PMCG is implemented in such a way
> that the platform bus id (Device ID in ITS terminology) is shared with that of the SMMU.
> This is a matter of concern for software if the SMMU and SMMU PMCG blocks
> are managed by two independent drivers.
>
> The problem arises when we want to alloc/free MSIs for these devices
> using the APIs platform_msi_domain_alloc/free_irqs.
> The platform bus id being the same for these two hardware blocks, they end up sharing
> the same ITT (Interrupt Translation Table) in the GIC ITS, and hence the alloc, free and
> management of this shared ITT becomes a problem when they are managed by two
> independent drivers.

What is the problem exactly? IIRC resizing a possibly-live ITT is a
right pain in the bum to do - is it just that?

> We were looking into the option of keeping the SMMU PMCG nodes as sub-nodes under
> the SMMUv3 node, so that the SMMUv3 driver could probe, figure out the total vectors
> required for the SMMU PMCG devices, and make a single platform_msi_domain_alloc/free_irqs
> call for all devices that share the platform bus id.

I'm not sure how scalable that approach would be, since it's not
entirely obvious how to handle PMCGs associated with named components or
root complexes (rather than directly with SMMU instances). We certainly
don't want to end up spraying similar PMCG DevID logic around all manner
of GPU/accelerator/etc. drivers in future (whilst PMCGs for device TLBs
will be expected to have distinct IDs from their host devices, they
could reasonably still overlap with other PMCGs/SMMUs).

> Would like to know your expert opinion on what would be the right approach
> to handle this case?

My gut feeling says the way to deal with this properly is in the ITS
code, but I appreciate that that's a lot easier said than done :/

Robin.

2017-12-18 15:39:50

by Marc Zyngier

[permalink] [raw]
Subject: Re: [PATCH 0/2] arm64 SMMUv3 PMU driver with IORT support

Thanks for putting me in the loop Robin.

On 18/12/17 14:48, Robin Murphy wrote:
> On 10/12/17 02:35, Linu Cherian wrote:
>> Hi,
>>
>>
>> On Fri Aug 04, 2017 at 03:59:12PM -0400, Neil Leeder wrote:
>>> This adds a driver for the SMMUv3 PMU into the perf framework.
>>> It includes an IORT update to support PM Counter Groups.
>>>
>>
>> In one of Cavium's upcoming SoCs, the SMMU PMCG is implemented in such a way
>> that the platform bus id (Device ID in ITS terminology) is shared with that of the SMMU.
>> This is a matter of concern for software if the SMMU and SMMU PMCG blocks
>> are managed by two independent drivers.
>>
>> The problem arises when we want to alloc/free MSIs for these devices
>> using the APIs platform_msi_domain_alloc/free_irqs.
>> The platform bus id being the same for these two hardware blocks, they end up sharing
>> the same ITT (Interrupt Translation Table) in the GIC ITS, and hence the alloc, free and
>> management of this shared ITT becomes a problem when they are managed by two
>> independent drivers.
>
> What is the problem exactly? IIRC resizing a possibly-live ITT is a
> right pain in the bum to do - is it just that?

Understatement of the day! ;-) Yes, it is a massive headache, and will
either result in a temporary loss of interrupts (at some point you have
to unmap the ITT), or in spurious interrupts (you generate interrupts
for all the MSIs you've blackholed when unmapping the ITT).

>
>> We were looking into the option of keeping the SMMU PMCG nodes as sub-nodes under
>> the SMMUv3 node, so that the SMMUv3 driver could probe, figure out the total vectors
>> required for the SMMU PMCG devices, and make a single platform_msi_domain_alloc/free_irqs
>> call for all devices that share the platform bus id.
>
> I'm not sure how scalable that approach would be, since it's not
> entirely obvious how to handle PMCGs associated with named components or
> root complexes (rather than directly with SMMU instances). We certainly
> don't want to end up spraying similar PMCG DevID logic around all manner
> of GPU/accelerator/etc. drivers in future (whilst PMCGs for device TLBs
> will be expected to have distinct IDs from their host devices, they
> could reasonably still overlap with other PMCGs/SMMUs).
>
>> Would like to know your expert opinion on what would be the right approach
>> to handle this case?
>
> My gut feeling says the way to deal with this properly is in the ITS
> code, but I appreciate that that's a lot easier said than done :/

I can revive the hack I once wrote for that (and that I was hoping to
forever forget), but that's pretty disgusting, and subject to the above
caveat.

Thanks,

M.
--
Jazz is not dead. It just smells funny...

2017-12-19 06:37:25

by Linu Cherian

[permalink] [raw]
Subject: Re: [PATCH 0/2] arm64 SMMUv3 PMU driver with IORT support

Hi Robin,

On Mon Dec 18, 2017 at 02:48:14PM +0000, Robin Murphy wrote:
> On 10/12/17 02:35, Linu Cherian wrote:
> >Hi,
> >
> >
> >On Fri Aug 04, 2017 at 03:59:12PM -0400, Neil Leeder wrote:
> >>This adds a driver for the SMMUv3 PMU into the perf framework.
> >>It includes an IORT update to support PM Counter Groups.
> >>
> >
> >In one of Cavium's upcoming SoCs, the SMMU PMCG is implemented in such a way
> >that the platform bus id (Device ID in ITS terminology) is shared with that of the SMMU.
> >This is a matter of concern for software if the SMMU and SMMU PMCG blocks
> >are managed by two independent drivers.
> >
> >The problem arises when we want to alloc/free MSIs for these devices
> >using the APIs platform_msi_domain_alloc/free_irqs.
> >The platform bus id being the same for these two hardware blocks, they end up sharing
> >the same ITT (Interrupt Translation Table) in the GIC ITS, and hence the alloc, free and
> >management of this shared ITT becomes a problem when they are managed by two
> >independent drivers.
>
> What is the problem exactly? IIRC resizing a possibly-live ITT is a
> right pain in the bum to do - is it just that?

Yes, exactly. Resizing the ITT was the problem with sharing.


> >We were looking into the option of keeping the SMMU PMCG nodes as sub-nodes under
> >the SMMUv3 node, so that the SMMUv3 driver could probe, figure out the total vectors
> >required for the SMMU PMCG devices, and make a single platform_msi_domain_alloc/free_irqs
> >call for all devices that share the platform bus id.
>
> I'm not sure how scalable that approach would be, since it's not
> entirely obvious how to handle PMCGs associated with named
> components or root complexes (rather than directly with SMMU
> instances). We certainly don't want to end up spraying similar PMCG
> DevID logic around all manner of GPU/accelerator/etc. drivers in
> future (whilst PMCGs for device TLBs will be expected to have
> distinct IDs from their host devices, they could reasonably still
> overlap with other PMCGs/SMMUs).
>

OK.

While trying the above approach, we also felt that the code would become
a lot messier than we had initially thought.

> >Would like to know your expert opinion on what would be the right approach
> >to handle this case?
>
> My gut feeling says the way to deal with this properly is in the ITS
> code, but I appreciate that that's a lot easier said than done :/
>

Yes, correct.

> Robin.

--
Linu cherian

2017-12-19 06:56:02

by Linu Cherian

[permalink] [raw]
Subject: Re: [PATCH 0/2] arm64 SMMUv3 PMU driver with IORT support

Hi Marc,

On Mon Dec 18, 2017 at 03:39:22PM +0000, Marc Zyngier wrote:
> Thanks for putting me in the loop Robin.
>
> On 18/12/17 14:48, Robin Murphy wrote:
> > On 10/12/17 02:35, Linu Cherian wrote:
> >> Hi,
> >>
> >>
> >> On Fri Aug 04, 2017 at 03:59:12PM -0400, Neil Leeder wrote:
> >>> This adds a driver for the SMMUv3 PMU into the perf framework.
> >>> It includes an IORT update to support PM Counter Groups.
> >>>
> >>
> >> In one of Cavium's upcoming SoCs, the SMMU PMCG is implemented in such a way
> >> that the platform bus id (Device ID in ITS terminology) is shared with that of the SMMU.
> >> This is a matter of concern for software if the SMMU and SMMU PMCG blocks
> >> are managed by two independent drivers.
> >>
> >> The problem arises when we want to alloc/free MSIs for these devices
> >> using the APIs platform_msi_domain_alloc/free_irqs.
> >> The platform bus id being the same for these two hardware blocks, they end up sharing
> >> the same ITT (Interrupt Translation Table) in the GIC ITS, and hence the alloc, free and
> >> management of this shared ITT becomes a problem when they are managed by two
> >> independent drivers.
> >
> > What is the problem exactly? IIRC resizing a possibly-live ITT is a
> > right pain in the bum to do - is it just that?
>
> Understatement of the day! ;-) Yes, it is a massive headache, and will
> either result in a temporary loss of interrupts (at some point you have
> to unmap the ITT), or in spurious interrupts (you generate interrupts
> for all the MSIs you've blackholed when unmapping the ITT).

Just sharing a thought.

If the firmware, through device tree/ACPI,
can provide the following input to the ITS driver
(for example, as part of the msi-parent property in the device tree):

1. a flag indicating that the ITT (Device ID) is being shared
2. the maximum number of vectors that need to be allocated for this ITT

then the resizing of the ITT and the associated complexities could be avoided
(a purely illustrative sketch follows).
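
As a rough illustration only - the property name and the helper below are
invented for this example and are not part of any binding or of the ITS
driver:

	u32 nvec;

	/*
	 * "arm,msi-max-vectors" is NOT a real binding and
	 * its_prealloc_itt() is NOT an existing function; this only
	 * shows where such firmware-provided data could be consumed.
	 */
	if (!of_property_read_u32(its_np, "arm,msi-max-vectors", &nvec))
		its_prealloc_itt(its_dev, nvec); /* size the ITT once, up front */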


>
> >
> >> We were looking into the option of keeping the SMMU PMCG nodes as sub-nodes under
> >> the SMMUv3 node, so that the SMMUv3 driver could probe, figure out the total vectors
> >> required for the SMMU PMCG devices, and make a single platform_msi_domain_alloc/free_irqs
> >> call for all devices that share the platform bus id.
> >
> > I'm not sure how scalable that approach would be, since it's not
> > entirely obvious how to handle PMCGs associated with named components or
> > root complexes (rather than directly with SMMU instances). We certainly
> > don't want to end up spraying similar PMCG DevID logic around all manner
> > of GPU/accelerator/etc. drivers in future (whilst PMCGs for device TLBs
> > will be expected to have distinct IDs from their host devices, they
> > could reasonably still overlap with other PMCGs/SMMUs).
> >
> >> Would like to know your expert opinion on what would be the right approach
> >> to handle this case?
> >
> > My gut feeling says the way to deal with this properly is in the ITS
> > code, but I appreciate that that's a lot easier said than done :/
>
> I can revive the hack I once wrote for that (and that I was hoping to
> forever forget), but that's pretty disgusting, and subject to the above
> caveat.
>
> Thanks,
>
> M.
> --
> Jazz is not dead. It just smells funny...

--
Linu cherian

2017-12-19 12:11:32

by Marc Zyngier

[permalink] [raw]
Subject: Re: [PATCH 0/2] arm64 SMMUv3 PMU driver with IORT support

On 19/12/17 06:55, Linu Cherian wrote:
> Hi Marc,
>
> On Mon Dec 18, 2017 at 03:39:22PM +0000, Marc Zyngier wrote:
>> Thanks for putting me in the loop Robin.
>>
>> On 18/12/17 14:48, Robin Murphy wrote:
>>> On 10/12/17 02:35, Linu Cherian wrote:
>>>> Hi,
>>>>
>>>>
>>>> On Fri Aug 04, 2017 at 03:59:12PM -0400, Neil Leeder wrote:
>>>>> This adds a driver for the SMMUv3 PMU into the perf framework.
>>>>> It includes an IORT update to support PM Counter Groups.
>>>>>
>>>>
>>>> In one of Cavium's upcoming SoCs, the SMMU PMCG is implemented in such a way
>>>> that the platform bus id (Device ID in ITS terminology) is shared with that of the SMMU.
>>>> This is a matter of concern for software if the SMMU and SMMU PMCG blocks
>>>> are managed by two independent drivers.
>>>>
>>>> The problem arises when we want to alloc/free MSIs for these devices
>>>> using the APIs platform_msi_domain_alloc/free_irqs.
>>>> The platform bus id being the same for these two hardware blocks, they end up sharing
>>>> the same ITT (Interrupt Translation Table) in the GIC ITS, and hence the alloc, free and
>>>> management of this shared ITT becomes a problem when they are managed by two
>>>> independent drivers.
>>>
>>> What is the problem exactly? IIRC resizing a possibly-live ITT is a
>>> right pain in the bum to do - is it just that?
>>
>> Understatement of the day! ;-) Yes, it is a massive headache, and will
>> either result in a temporary loss of interrupts (at some point you have
>> to unmap the ITT), or in spurious interrupts (you generate interrupts
>> for all the MSIs you've blackholed when unmapping the ITT).
>
> Just sharing a thought.
>
> If the firmware, through device tree/ACPI,
> can provide the following input to the ITS driver
> (for example, as part of the msi-parent property in the device tree):
>
> 1. a flag indicating that the ITT (Device ID) is being shared
> 2. the maximum number of vectors that need to be allocated for this ITT
>
> then the resizing of the ITT and the associated complexities could be avoided.

I'm not sure it is that simple.

First, this is a change of the spec, and we need to support the current
states of ACPI and DT. In any case, this would need to affect all nodes.

Then, MSIs are very dynamic, and there may be decisions that SW makes at
runtime that would change the parameters of the ITT allocation
(platform_msi_domain_alloc_irqs does take an nvec parameter that could
override firmware data -- what if all the drivers do that?).

Finally, and assuming we still want to go in that direction, I'd rather
have each node describing its maximum MSI allocation and let the ITS
driver sum it up, much like we do on PCI.

Thanks,

M.
--
Jazz is not dead. It just smells funny...

Subject: RE: [PATCH 1/2] acpi: arm64: add iort support for PMCG

Hi Neil/Lorenzo,

> -----Original Message-----
> From: linux-arm-kernel [mailto:[email protected]]
> On Behalf Of Neil Leeder
> Sent: Friday, August 04, 2017 8:59 PM
> To: Will Deacon <[email protected]>; Mark Rutland
> <[email protected]>
> Cc: Mark Langsdorf <[email protected]>; Jon Masters
> <[email protected]>; Timur Tabi <[email protected]>; linux-
> [email protected]; Mark Brown <[email protected]>; Mark Salter
> <[email protected]>; [email protected]; linux-arm-
> [email protected]
> Subject: [PATCH 1/2] acpi: arm64: add iort support for PMCG
>
> Add support for the SMMU Performance Monitor Counter Group
> information from ACPI. This is in preparation for its use
> in the SMMU v3 PMU driver.

[...]

> static __init
> const struct iort_iommu_config *iort_get_iommu_cfg(struct acpi_iort_node *node)
> {
> @@ -1001,6 +1041,8 @@ const struct iort_iommu_config *iort_get_iommu_cfg(struct acpi_iort_node *node)
> return &iort_arm_smmu_v3_cfg;
> case ACPI_IORT_NODE_SMMU:
> return &iort_arm_smmu_cfg;
> + case ACPI_IORT_NODE_PMCG:
> + return &iort_arm_smmu_pmcg_cfg;

I understand this series will be revised based on the IORT spec update.

This is to clarify a few queries we have with respect to one of our HiSilicon
platforms which has support for the SMMUv3 PMCG. In our implementation
the base address for the PMCG is defined at an IMP DEF address offset
within the SMMUv3 page 0 address space. And as per the SMMU spec,

" The Performance Monitor Counter Groups are standalone monitoring
facilities and, as such, can be implemented in separate components that
are all associated with (but not necessarily part of) an SMMU"

It looks like the PMCG can be part of the SMMU, but it is not clear whether this kind
of address mapping is forbidden or not. If this is an accepted scenario, then
the devm_ioremap_resource() call in the SMMUv3 probe will fail as it reports a conflict.

As far as I can see, the above code just checks that the IORT node type is PMCG
and assumes that it is an SMMU PMCG. As per the IORT spec, it makes sense to check
the node reference field before making that call (see the sketch below).
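
Roughly what we mean - an untested sketch only, reusing the names from this
patch and from iort.c:

static bool __init iort_pmcg_references_smmu_v3(struct acpi_iort_node *node)
{
	struct acpi_iort_pmcg *pmcg = (struct acpi_iort_pmcg *)node->node_data;
	struct acpi_iort_node *ref;

	if (!pmcg->node_reference)
		return false;

	/* node_reference is an offset from the start of the IORT table */
	ref = ACPI_ADD_PTR(struct acpi_iort_node, iort_table,
			   pmcg->node_reference);

	return ref->type == ACPI_IORT_NODE_SMMU_V3;
}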

And to fix the resource conflict issue we have, is it possible to treat the PMCG
node as a child of the SMMU and call platform_device_add() appropriately
to avoid the conflict? I am not sure whether this will fix the issue, and the order
in which the SMMU and PMCG devices are populated may also have an impact on this.

Please let me know your thoughts on this.

Thanks,
Shameer

> default:
> return NULL;
> }
> @@ -1056,6 +1098,15 @@ static int __init
> iort_add_smmu_platform_device(struct acpi_iort_node *node)
> if (ret)
> goto dev_put;
>
> + /* End of init for PMCG */
> + if (node->type == ACPI_IORT_NODE_PMCG) {
> + ret = platform_device_add(pdev);
> + if (ret)
> + goto dev_put;
> +
> + return 0;
> + }
> +
> /*
> * We expect the dma masks to be equivalent for
> * all SMMUs set-ups
> @@ -1131,6 +1182,9 @@ static void __init iort_init_platform_devices(void)
> acpi_free_fwnode_static(fwnode);
> return;
> }
> + } else if (iort_node->type == ACPI_IORT_NODE_PMCG) {
> + if (iort_add_smmu_platform_device(iort_node))
> + return;
> }
>
> iort_node = ACPI_ADD_PTR(struct acpi_iort_node, iort_node,
> diff --git a/include/acpi/actbl2.h b/include/acpi/actbl2.h
> index 707dda74..2169b6f 100644
> --- a/include/acpi/actbl2.h
> +++ b/include/acpi/actbl2.h
> @@ -695,7 +695,8 @@ enum acpi_iort_node_type {
> ACPI_IORT_NODE_NAMED_COMPONENT = 0x01,
> ACPI_IORT_NODE_PCI_ROOT_COMPLEX = 0x02,
> ACPI_IORT_NODE_SMMU = 0x03,
> - ACPI_IORT_NODE_SMMU_V3 = 0x04
> + ACPI_IORT_NODE_SMMU_V3 = 0x04,
> + ACPI_IORT_NODE_PMCG = 0x05
> };
>
> struct acpi_iort_id_mapping {
> @@ -811,6 +812,12 @@ struct acpi_iort_smmu_v3 {
> #define ACPI_IORT_SMMU_V3_COHACC_OVERRIDE (1)
> #define ACPI_IORT_SMMU_V3_HTTU_OVERRIDE (1<<1)
>
> +struct acpi_iort_pmcg {
> + u64 base_address; /* PMCG base address */
> + u32 overflow_gsiv;
> + u32 node_reference;
> +};
> +
>
> /*******************************************************************************
> *
> * IVRS - I/O Virtualization Reporting Structure
> --
> Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm
> Technologies Inc.
> Qualcomm Technologies, Inc. is a member of the Code Aurora Forum,
> a Linux Foundation Collaborative Project.
>
>

2018-01-30 19:03:41

by Lorenzo Pieralisi

[permalink] [raw]
Subject: Re: [PATCH 1/2] acpi: arm64: add iort support for PMCG

On Tue, Jan 30, 2018 at 10:39:20AM +0000, Shameerali Kolothum Thodi wrote:
> Hi Neil/Lorenzo,
>
> > -----Original Message-----
> > From: linux-arm-kernel [mailto:[email protected]]
> > On Behalf Of Neil Leeder
> > Sent: Friday, August 04, 2017 8:59 PM
> > To: Will Deacon <[email protected]>; Mark Rutland
> > <[email protected]>
> > Cc: Mark Langsdorf <[email protected]>; Jon Masters
> > <[email protected]>; Timur Tabi <[email protected]>; linux-
> > [email protected]; Mark Brown <[email protected]>; Mark Salter
> > <[email protected]>; [email protected]; linux-arm-
> > [email protected]
> > Subject: [PATCH 1/2] acpi: arm64: add iort support for PMCG
> >
> > Add support for the SMMU Performance Monitor Counter Group
> > information from ACPI. This is in preparation for its use
> > in the SMMU v3 PMU driver.
>
> [...]
>
> > static __init
> > const struct iort_iommu_config *iort_get_iommu_cfg(struct acpi_iort_node *node)
> > {
> > @@ -1001,6 +1041,8 @@ const struct iort_iommu_config *iort_get_iommu_cfg(struct acpi_iort_node *node)
> > case ACPI_IORT_NODE_SMMU:
> > return &iort_arm_smmu_cfg;
> > + case ACPI_IORT_NODE_PMCG:
> > + return &iort_arm_smmu_pmcg_cfg;
>
> I understand this series will be revised based on the IORT spec update.
>
> This is to clarify a few queries we have with respect to one of our HiSilicon
> platforms which has support for the SMMUv3 PMCG. In our implementation
> the base address for the PMCG is defined at an IMP DEF address offset
> within the SMMUv3 page 0 address space. And as per the SMMU spec,
>
> " The Performance Monitor Counter Groups are standalone monitoring
> facilities and, as such, can be implemented in separate components that
> are all associated with (but not necessarily part of) an SMMU"
>
> It looks like the PMCG can be part of the SMMU, but it is not clear whether this kind
> of address mapping is forbidden or not. If this is an accepted scenario, then
> the devm_ioremap_resource() call in the SMMUv3 probe will fail as it reports a conflict.
>
> As far as I can see, the above code just checks that the IORT node type is PMCG
> and assumes that it is an SMMU PMCG. As per the IORT spec, it makes sense to check
> the node reference field before making that call.
>
> And to fix the resource conflict issue we have, is it possible to treat the PMCG
> node as a child of the SMMU and call platform_device_add() appropriately
> to avoid the conflict? I am not sure whether this will fix the issue, and the order
> in which the SMMU and PMCG devices are populated may also have an impact on this.
>
> Please let me know your thoughts on this.

I went back and re-read the patches; I think the point here is that the
perf driver (i.e. PATCH 2 which, by the way, is not mainline) uses
devm_ioremap_resource() to map the counters, and that's what is causing
failures when the PMCG is part of the SMMUv3 registers.

It is the resource hierarchy that is wrong and, in turn, I do not think
devm_request_mem_region() is the right way for the PMCG driver to request
the region.

I need to look into this, but I suspect it is something that should
be handled in the PMCG driver, which has to request the memory region
_differently_ (i.e. ioremap copes with this overlap - it is the
devm_request_mem_region() in devm_ioremap_resource() that fails, correct?).

Certainly we need to create the resources in IORT with the correct
hierarchy (I reckon DT does this already if the right device hierarchy is
specified).

Lorenzo

>
> Thanks,
> Shameer
>
> > default:
> > return NULL;
> > }
> > @@ -1056,6 +1098,15 @@ static int __init
> > iort_add_smmu_platform_device(struct acpi_iort_node *node)
> > if (ret)
> > goto dev_put;
> >
> > + /* End of init for PMCG */
> > + if (node->type == ACPI_IORT_NODE_PMCG) {
> > + ret = platform_device_add(pdev);
> > + if (ret)
> > + goto dev_put;
> > +
> > + return 0;
> > + }
> > +
> > /*
> > * We expect the dma masks to be equivalent for
> > * all SMMUs set-ups
> > @@ -1131,6 +1182,9 @@ static void __init iort_init_platform_devices(void)
> > acpi_free_fwnode_static(fwnode);
> > return;
> > }
> > + } else if (iort_node->type == ACPI_IORT_NODE_PMCG) {
> > + if (iort_add_smmu_platform_device(iort_node))
> > + return;
> > }
> >
> > iort_node = ACPI_ADD_PTR(struct acpi_iort_node, iort_node,
> > diff --git a/include/acpi/actbl2.h b/include/acpi/actbl2.h
> > index 707dda74..2169b6f 100644
> > --- a/include/acpi/actbl2.h
> > +++ b/include/acpi/actbl2.h
> > @@ -695,7 +695,8 @@ enum acpi_iort_node_type {
> > ACPI_IORT_NODE_NAMED_COMPONENT = 0x01,
> > ACPI_IORT_NODE_PCI_ROOT_COMPLEX = 0x02,
> > ACPI_IORT_NODE_SMMU = 0x03,
> > - ACPI_IORT_NODE_SMMU_V3 = 0x04
> > + ACPI_IORT_NODE_SMMU_V3 = 0x04,
> > + ACPI_IORT_NODE_PMCG = 0x05
> > };
> >
> > struct acpi_iort_id_mapping {
> > @@ -811,6 +812,12 @@ struct acpi_iort_smmu_v3 {
> > #define ACPI_IORT_SMMU_V3_COHACC_OVERRIDE (1)
> > #define ACPI_IORT_SMMU_V3_HTTU_OVERRIDE (1<<1)
> >
> > +struct acpi_iort_pmcg {
> > + u64 base_address; /* PMCG base address */
> > + u32 overflow_gsiv;
> > + u32 node_reference;
> > +};
> > +
> >
> > /*******************************************************************************
> > *
> > * IVRS - I/O Virtualization Reporting Structure
> > --
> > Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm
> > Technologies Inc.
> > Qualcomm Technologies, Inc. is a member of the Code Aurora Forum,
> > a Linux Foundation Collaborative Project.
> >
> >

Subject: RE: [PATCH 1/2] acpi: arm64: add iort support for PMCG

Hi Lorenzo,

> -----Original Message-----
> From: Lorenzo Pieralisi [mailto:[email protected]]
> Sent: Tuesday, January 30, 2018 6:00 PM
> To: Shameerali Kolothum Thodi <[email protected]>
> Cc: Neil Leeder <[email protected]>; Mark Langsdorf
> <[email protected]>; Jon Masters <[email protected]>; Timur Tabi
> <[email protected]>; [email protected]; Mark Brown
> <[email protected]>; Mark Salter <[email protected]>; linux-arm-
> [email protected]; Will Deacon <[email protected]>; Mark
> Rutland <[email protected]>; Linuxarm <[email protected]>
> Subject: Re: [PATCH 1/2] acpi: arm64: add iort support for PMCG
>
> On Tue, Jan 30, 2018 at 10:39:20AM +0000, Shameerali Kolothum Thodi wrote:
> > Hi Neil/Lorenzo,
> >
> > > -----Original Message-----
> > > From: linux-arm-kernel [mailto:linux-arm-kernel-
> [email protected]]
> > > On Behalf Of Neil Leeder
> > > Sent: Friday, August 04, 2017 8:59 PM
> > > To: Will Deacon <[email protected]>; Mark Rutland
> > > <[email protected]>
> > > Cc: Mark Langsdorf <[email protected]>; Jon Masters
> > > <[email protected]>; Timur Tabi <[email protected]>; linux-
> > > [email protected]; Mark Brown <[email protected]>; Mark Salter
> > > <[email protected]>; [email protected]; linux-arm-
> > > [email protected]
> > > Subject: [PATCH 1/2] acpi: arm64: add iort support for PMCG
> > >
> > > Add support for the SMMU Performance Monitor Counter Group
> > > information from ACPI. This is in preparation for its use
> > > in the SMMU v3 PMU driver.
> >
> > [...]
> >
> > > static __init
> > > const struct iort_iommu_config *iort_get_iommu_cfg(struct acpi_iort_node *node)
> > > {
> > > @@ -1001,6 +1041,8 @@ const struct iort_iommu_config *iort_get_iommu_cfg(struct acpi_iort_node *node)
> > > return &iort_arm_smmu_v3_cfg;
> > > case ACPI_IORT_NODE_SMMU:
> > > return &iort_arm_smmu_cfg;
> > > + case ACPI_IORT_NODE_PMCG:
> > > + return &iort_arm_smmu_pmcg_cfg;
> >
> > I understand this series will be revised based on the IORT spec update.
> >
> > This is to clarify a few queries we have with respect to one of our HiSilicon
> > platforms which has support for the SMMUv3 PMCG. In our implementation
> > the base address for the PMCG is defined at an IMP DEF address offset
> > within the SMMUv3 page 0 address space. And as per the SMMU spec,
> >
> > " The Performance Monitor Counter Groups are standalone monitoring
> > facilities and, as such, can be implemented in separate components that
> > are all associated with (but not necessarily part of) an SMMU"
> >
> > It looks like the PMCG can be part of the SMMU, but it is not clear whether this kind
> > of address mapping is forbidden or not. If this is an accepted scenario, then
> > the devm_ioremap_resource() call in the SMMUv3 probe will fail as it reports a conflict.
> >
> > As far as I can see, the above code just checks that the IORT node type is PMCG
> > and assumes that it is an SMMU PMCG. As per the IORT spec, it makes sense to check
> > the node reference field before making that call.
> >
> > And to fix the resource conflict issue we have, is it possible to treat the PMCG
> > node as a child of the SMMU and call platform_device_add() appropriately
> > to avoid the conflict? I am not sure whether this will fix the issue, and the order
> > in which the SMMU and PMCG devices are populated may also have an impact on this.
> >
> > Please let me know your thoughts on this.
>
> I went back and re-read the patches; I think the point here is that the
> perf driver (i.e. PATCH 2 which, by the way, is not mainline) uses
> devm_ioremap_resource() to map the counters, and that's what is causing
> failures when the PMCG is part of the SMMUv3 registers.

Thanks for going through this. No, this is not where we are seeing the failure.
Maybe I was not clear in my earlier mail. The failure happens in the SMMUv3
driver probe function when it calls devm_ioremap_resource().

> It is the resource hierarchy that is wrong and, in turn, I do not think
> devm_request_mem_region() is the right way for the PMCG driver to request
> the region.
>
> I need to look into this, but I suspect it is something that should
> be handled in the PMCG driver, which has to request the memory region
> _differently_ (i.e. ioremap copes with this overlap - it is the
> devm_request_mem_region() in devm_ioremap_resource() that fails, correct?).

It looks like, in the IORT code,

iort_add_platform_device() --> platform_device_add() --> insert_resource() inserts
both the SMMUv3 and PMCG resources into the resource tree, and then when the probe
of the SMMUv3 is called, it detects the conflict.

[ 85.548749] arm-smmu-v3 arm-smmu-v3.0.auto: can't request region for resource [mem 0x148000000-0x14801ffff]

Of course, changing devm_ioremap_resource() to devm_ioremap() in the SMMUv3
driver probe solves the issue for us, but we are not sure whether that is the right
approach (a sketch of the change is below).
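
For reference, the change we tried looks roughly like this (a sketch
assuming the usual probe-time variables, not a tested patch):

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	if (!res)
		return -ENODEV;

	/*
	 * devm_ioremap() does not claim the region, so an overlapping
	 * PMCG window no longer makes the SMMUv3 probe fail; whether
	 * skipping the reservation is acceptable is the open question.
	 */
	smmu->base = devm_ioremap(dev, res->start, resource_size(res));
	if (!smmu->base)
		return -ENOMEM;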

Thanks,
Shameer

> Certainly we need to create in IORT the resources with the correct
> hierarchy (I reckon DT does it already if the right device hierarchy is
> specified).



2018-01-31 13:06:19

by Lorenzo Pieralisi

[permalink] [raw]
Subject: Re: [PATCH 1/2] acpi: arm64: add iort support for PMCG

On Wed, Jan 31, 2018 at 12:10:47PM +0000, Shameerali Kolothum Thodi wrote:

[...]

> > I went back and re-read the patches; I think the point here is that the
> > perf driver (i.e. PATCH 2 which, by the way, is not mainline) uses
> > devm_ioremap_resource() to map the counters, and that's what is causing
> > failures when the PMCG is part of the SMMUv3 registers.
>
> Thanks for going through this. No, this is not where we are seeing the failure.
> Maybe I was not clear in my earlier mail. The failure happens in the SMMUv3
> driver probe function when it calls devm_ioremap_resource().

Understood - because the PMU PMCG driver calls it first, that's what
I was referring to.

My point is that:

- the PMCG platform device resources should be built with the correct
resource hierarchy (see the sketch after this list)
- and even then, I do not think that using devm_ioremap_resource() in
the PMCG PMU driver is the correct way of handling its resource
reservation (i.e. the kernel must be able to detect that a resource is
contained in a parent one, but I am not sure devm_ioremap_resource()
is the way to handle this correctly)
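
To illustrate the first point - a sketch only, with the window addresses
taken from the error message quoted below and everything else invented:

/*
 * Insert the PMCG window as a child of the SMMU window, so the
 * kernel records containment instead of reporting a conflict.
 */
static struct resource smmu_res = DEFINE_RES_MEM(0x148000000, SZ_128K);
static struct resource pmcg_res = DEFINE_RES_MEM(0x148001000, SZ_4K);

static int __init pmcg_hierarchy_example(void)
{
	int ret;

	ret = insert_resource(&iomem_resource, &smmu_res);
	if (ret)
		return ret;

	/* child of the SMMU region, not a conflicting sibling */
	return insert_resource(&smmu_res, &pmcg_res);
}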

> > It is the resource hierarchy that is wrong and, in turn, I do not think
> > devm_request_mem_region() is the right way for the PMCG driver to request
> > the region.
> >
> > I need to look into this, but I suspect it is something that should
> > be handled in the PMCG driver, which has to request the memory region
> > _differently_ (i.e. ioremap copes with this overlap - it is the
> > devm_request_mem_region() in devm_ioremap_resource() that fails, correct?).
>
> It looks like, in the IORT code,
>
> iort_add_platform_device() --> platform_device_add() --> insert_resource() inserts
> both the SMMUv3 and PMCG resources into the resource tree, and then when the probe
> of the SMMUv3 is called, it detects the conflict.
>
> [ 85.548749] arm-smmu-v3 arm-smmu-v3.0.auto: can't request region for resource [mem 0x148000000-0x14801ffff]
>
> Of course, changing devm_ioremap_resource() to devm_ioremap() in the SMMUv3
> driver probe solves the issue for us, but we are not sure whether that is the right approach.

See above.

Lorenzo

2018-03-29 07:05:38

by Yisheng Xie

[permalink] [raw]
Subject: Re: [PATCH 2/2] perf: add arm64 smmuv3 pmu driver

Hi Neil,

On 2017/8/5 3:59, Neil Leeder wrote:
> + mem_resource_0 = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> + mem_map_0 = devm_ioremap_resource(&pdev->dev, mem_resource_0);
> +
Can we use devm_ioremap() instead? The reg_base of smmu_pmu is
IMPLEMENTATION DEFINED. If the regs of smmu_pmu are inside the smmu,
devm_ioremap_resource() will fail and return -EBUSY, e.g.:

smmu reg ranges: 0x180000000 ~ 0x1801fffff
its smmu_pmu reg ranges: 0x180001000 ~ 0x180001fff

> + if (IS_ERR(mem_map_0)) {
> + dev_err(&pdev->dev, "Can't map SMMU PMU @%pa\n",
> + &mem_resource_0->start);
> + return PTR_ERR(mem_map_0);
> + }
> +
> + smmu_pmu->reg_base = mem_map_0;
> + smmu_pmu->pmu.name =
> + devm_kasprintf(&pdev->dev, GFP_KERNEL, "smmu_0_%llx",
> + (mem_resource_0->start) >> SMMU_PA_SHIFT);
> +
> + if (!smmu_pmu->pmu.name) {
> + dev_err(&pdev->dev, "Failed to create PMU name");
> + return -EINVAL;
> + }
> +
> + ceid_64 = readq(smmu_pmu->reg_base + SMMU_PMCG_CEID0);
> + ceid[0] = ceid_64 & GENMASK(31, 0);
> + ceid[1] = ceid_64 >> 32;
> + ceid_64 = readq(smmu_pmu->reg_base + SMMU_PMCG_CEID1);
> + ceid[2] = ceid_64 & GENMASK(31, 0);
> + ceid[3] = ceid_64 >> 32;
> + bitmap_from_u32array(smmu_pmu->supported_events, SMMU_MAX_EVENT_ID,
> + ceid, SMMU_NUM_EVENTS_U32);
> +
> + /* Determine if page 1 is present */
> + if (readl(smmu_pmu->reg_base + SMMU_PMCG_CFGR) &
> + SMMU_PMCG_CFGR_RELOC_CTRS) {
> + mem_resource_1 = platform_get_resource(pdev, IORESOURCE_MEM, 1);
> + mem_map_1 = devm_ioremap_resource(&pdev->dev, mem_resource_1);
> +
The same as above.

Thanks
Yisheng

> + if (IS_ERR(mem_map_1)) {
> + dev_err(&pdev->dev, "Can't map SMMU PMU @%pa\n",
> + &mem_resource_1->start);
> + return PTR_ERR(mem_map_1);
> + }
> + smmu_pmu->reloc_base = mem_map_1;
> + } else {
> + smmu_pmu->reloc_base = smmu_pmu->reg_base;
> + }
> +
> + irq = platform_get_irq(pdev, 0);
> + if (irq < 0) {
> + dev_err(&pdev->dev,
> + "Failed to get valid irq for smmu @%pa\n",
> + &mem_resource_0->start);
> + return irq;
> + }


2018-04-01 05:46:13

by Neil Leeder

[permalink] [raw]
Subject: Re: [PATCH 2/2] perf: add arm64 smmuv3 pmu driver

Hi Yisheng Xie,

On 3/29/2018 03:03 AM, Yisheng Xie wrote:
>
> Hi Neil,
>
> On 2017/8/5 3:59, Neil Leeder wrote:
>> +    mem_resource_0 = platform_get_resource(pdev, IORESOURCE_MEM, 0);
>> +    mem_map_0 = devm_ioremap_resource(&pdev->dev, mem_resource_0);
>> +
> Can we use devm_ioremap() instead? The reg_base of smmu_pmu is
> IMPLEMENTATION DEFINED. If the regs of smmu_pmu are inside the smmu,
> devm_ioremap_resource() will fail and return -EBUSY, e.g.:
>
> smmu reg ranges: 0x180000000 ~ 0x1801fffff
> its smmu_pmu reg ranges: 0x180001000 ~ 0x180001fff
>
Just to let you know that I no longer work at Qualcomm and I won't be
able to provide updates to this patchset. I expect that others from my
former team at Qualcomm will pick up ownership.

Neil

2018-04-02 06:39:30

by Yisheng Xie

[permalink] [raw]
Subject: Re: [PATCH 2/2] perf: add arm64 smmuv3 pmu driver

Hi Neil,

On 2018/4/1 13:44, Neil Leeder wrote:
> Hi Yisheng Xie,
>
> On 3/29/2018 03:03 AM, Yisheng Xie wrote:
>>
>> Hi Neil,
>>
>> On 2017/8/5 3:59, Neil Leeder wrote:
>>> + mem_resource_0 = platform_get_resource(pdev, IORESOURCE_MEM, 0);
>>> + mem_map_0 = devm_ioremap_resource(&pdev->dev, mem_resource_0);
>>> +
>> Can we use devm_ioremap() instead? The reg_base of smmu_pmu is
>> IMPLEMENTATION DEFINED. If the regs of smmu_pmu are inside the smmu,
>> devm_ioremap_resource() will fail and return -EBUSY, e.g.:
>>
>> smmu reg ranges: 0x180000000 ~ 0x1801fffff
>> its smmu_pmu reg ranges: 0x180001000 ~ 0x180001fff
>>
> Just to let you know that I no longer work at Qualcomm and I won't be able to provide updates to this patchset. I expect that others from my former team at Qualcomm will pick up ownership.

Thanks for this information.

Hi Agustin and Timur,

Is there any update on the status of this patchset?

Thanks
Yisheng

>
> Neil
>
> .
>


2018-04-02 14:27:19

by Hanjun Guo

[permalink] [raw]
Subject: Re: [PATCH 2/2] perf: add arm64 smmuv3 pmu driver

On 2018/4/2 14:37, Yisheng Xie wrote:
> Hi Neil,
>
> On 2018/4/1 13:44, Neil Leeder wrote:
>> Hi Yisheng Xie,
>>
>> On 3/29/2018 03:03 AM, Yisheng Xie wrote:
>>>
>>> Hi Neil,
>>>
>>> On 2017/8/5 3:59, Neil Leeder wrote:
>>>> + mem_resource_0 = platform_get_resource(pdev, IORESOURCE_MEM, 0);
>>>> + mem_map_0 = devm_ioremap_resource(&pdev->dev, mem_resource_0);
>>>> +
>>> Can we use devm_ioremap() instead? The reg_base of smmu_pmu is
>>> IMPLEMENTATION DEFINED. If the regs of smmu_pmu are inside the smmu,
>>> devm_ioremap_resource() will fail and return -EBUSY, e.g.:
>>>
>>> smmu reg ranges: 0x180000000 ~ 0x1801fffff
>>> its smmu_pmu reg ranges: 0x180001000 ~ 0x180001fff
>>>
>> Just to let you know that I no longer work at Qualcomm and I won't be able to provide updates to this patchset. I expect that others from my former team at Qualcomm will pick up ownership.
>
> Thanks for this information.
>
> Hi Agustin and Timur,
>
> Is there any update on the status of this patchset?

I think we need to wait for the new version of the IORT spec,
which includes the fix for the two base addresses for the SMMUv3
PMCG (currently it just represents one).

Thanks
Hanjun


2018-04-02 18:02:41

by Neil Leeder

[permalink] [raw]
Subject: Re: [PATCH 2/2] perf: add arm64 smmuv3 pmu driver

Hi Hanjun,

On 4/2/2018 10:24 AM, Hanjun Guo wrote:
<SNIP>
>
> I think we need to wait for the new version of IORT spec,
> which includes the fix for the two base address for SMMUv3
> PMCG (now just represent one).
>
> Thanks
> Hanjun
>
It's in rev D which is available now:
http://infocenter.arm.com/help/topic/com.arm.doc.den0049d/DEN0049D_IO_Remapping_Table.pdf

Neil

2018-04-03 01:18:26

by Hanjun Guo

[permalink] [raw]
Subject: Re: [PATCH 2/2] perf: add arm64 smmuv3 pmu driver

[+Cc Lorenzo]

Hi Neil,

On 2018/4/3 1:59, Neil Leeder wrote:
> Hi Hanjun,
>
> On 4/2/2018 10:24 AM, Hanjun Guo wrote:
> <SNIP>
>>
>> I think we need to wait for the new version of the IORT spec,
>> which includes the fix for the two base addresses for the SMMUv3
>> PMCG (currently it just represents one).
>>
>> Thanks
>> Hanjun
>>
> It's in rev D which is available now:
> http://infocenter.arm.com/help/topic/com.arm.doc.den0049d/DEN0049D_IO_Remapping_Table.pdf

Thanks for letting me know, I didn't notice the update :)

Lorenzo, do you have a plan for the ACPICA changes?

Thanks
Hanjun


2018-04-04 11:36:55

by Lorenzo Pieralisi

[permalink] [raw]
Subject: Re: [PATCH 2/2] perf: add arm64 smmuv3 pmu driver

On Tue, Apr 03, 2018 at 09:15:11AM +0800, Hanjun Guo wrote:
> [+Cc Lorenzo]
>
> Hi Neil,
>
> On 2018/4/3 1:59, Neil Leeder wrote:
> > Hi Hanjun,
> >
> > On 4/2/2018 10:24 AM, Hanjun Guo wrote:
> > <SNIP>
> >>
> >> I think we need to wait for the new version of the IORT spec,
> >> which includes the fix for the two base addresses for the SMMUv3
> >> PMCG (currently it just represents one).
> >>
> >> Thanks
> >> Hanjun
> >>
> > It's in rev D which is available now:
> > http://infocenter.arm.com/help/topic/com.arm.doc.den0049d/DEN0049D_IO_Remapping_Table.pdf
>
> Thanks for letting me know, I didn't notice the update :)
>
> Lorenzo, do you have a plan for the ACPICA changes?

We should be able to send them upstream very soon.

Thanks,
Lorenzo

Subject: RE: [PATCH 2/2] perf: add arm64 smmuv3 pmu driver



> -----Original Message-----
> From: linux-arm-kernel [mailto:[email protected]]
> On Behalf Of Yisheng Xie
> Sent: Thursday, March 29, 2018 8:04 AM
> To: Neil Leeder <[email protected]>; Will Deacon
> <[email protected]>; Mark Rutland <[email protected]>
> Cc: Mark Langsdorf <[email protected]>; Jon Masters
> <[email protected]>; Timur Tabi <[email protected]>; linux-
> [email protected]; Mark Brown <[email protected]>; Mark Salter
> <[email protected]>; [email protected]
> Subject: Re: [PATCH 2/2] perf: add arm64 smmuv3 pmu driver
>
> Hi Neil,
>
> On 2017/8/5 3:59, Neil Leeder wrote:
> > + mem_resource_0 = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> > + mem_map_0 = devm_ioremap_resource(&pdev->dev, mem_resource_0);
> > +
> Can we use devm_ioremap() instead? The reg_base of smmu_pmu is
> IMPLEMENTATION DEFINED. If the regs of smmu_pmu are inside the smmu,
> devm_ioremap_resource() will fail and return -EBUSY, e.g.:
>
> smmu reg ranges: 0x180000000 ~ 0x1801fffff
> its smmu_pmu reg ranges: 0x180001000 ~ 0x180001fff

I think this will not solve the issue completely, as the SMMUv3 driver
currently uses devm_ioremap_resource() and that will fail because of
the overlap.

Please find the discussion here:
https://lkml.org/lkml/2018/1/31/235

Thanks,
Shameer

> > + if (IS_ERR(mem_map_0)) {
> > + dev_err(&pdev->dev, "Can't map SMMU PMU @%pa\n",
> > + &mem_resource_0->start);
> > + return PTR_ERR(mem_map_0);
> > + }
> > +
> > + smmu_pmu->reg_base = mem_map_0;
> > + smmu_pmu->pmu.name =
> > + devm_kasprintf(&pdev->dev, GFP_KERNEL, "smmu_0_%llx",
> > + (mem_resource_0->start) >> SMMU_PA_SHIFT);
> > +
> > + if (!smmu_pmu->pmu.name) {
> > + dev_err(&pdev->dev, "Failed to create PMU name");
> > + return -EINVAL;
> > + }
> > +
> > + ceid_64 = readq(smmu_pmu->reg_base + SMMU_PMCG_CEID0);
> > + ceid[0] = ceid_64 & GENMASK(31, 0);
> > + ceid[1] = ceid_64 >> 32;
> > + ceid_64 = readq(smmu_pmu->reg_base + SMMU_PMCG_CEID1);
> > + ceid[2] = ceid_64 & GENMASK(31, 0);
> > + ceid[3] = ceid_64 >> 32;
> > + bitmap_from_u32array(smmu_pmu->supported_events, SMMU_MAX_EVENT_ID,
> > + ceid, SMMU_NUM_EVENTS_U32);
> > +
> > + /* Determine if page 1 is present */
> > + if (readl(smmu_pmu->reg_base + SMMU_PMCG_CFGR) &
> > + SMMU_PMCG_CFGR_RELOC_CTRS) {
> > + mem_resource_1 = platform_get_resource(pdev, IORESOURCE_MEM, 1);
> > + mem_map_1 = devm_ioremap_resource(&pdev->dev, mem_resource_1);
> > +
> The same as above.
>
> Thanks
> Yisheng
>
> > + if (IS_ERR(mem_map_1)) {
> > + dev_err(&pdev->dev, "Can't map SMMU PMU @%pa\n",
> > + &mem_resource_1->start);
> > + return PTR_ERR(mem_map_1);
> > + }
> > + smmu_pmu->reloc_base = mem_map_1;
> > + } else {
> > + smmu_pmu->reloc_base = smmu_pmu->reg_base;
> > + }
> > +
> > + irq = platform_get_irq(pdev, 0);
> > + if (irq < 0) {
> > + dev_err(&pdev->dev,
> > + "Failed to get valid irq for smmu @%pa\n",
> > + &mem_resource_0->start);
> > + return irq;
> > + }
>
>

2018-04-19 01:19:53

by Yisheng Xie

[permalink] [raw]
Subject: Re: [PATCH 2/2] perf: add arm64 smmuv3 pmu driver

Hi Shameerali,

On 2018/4/18 19:05, Shameerali Kolothum Thodi wrote:
>
>
>> -----Original Message-----
>> From: linux-arm-kernel [mailto:[email protected]]
>> On Behalf Of Yisheng Xie
>> Sent: Thursday, March 29, 2018 8:04 AM
>> To: Neil Leeder <[email protected]>; Will Deacon
>> <[email protected]>; Mark Rutland <[email protected]>
>> Cc: Mark Langsdorf <[email protected]>; Jon Masters
>> <[email protected]>; Timur Tabi <[email protected]>; linux-
>> [email protected]; Mark Brown <[email protected]>; Mark Salter
>> <[email protected]>; [email protected]
>> Subject: Re: [PATCH 2/2] perf: add arm64 smmuv3 pmu driver
>>
>> Hi Neil,
>>
>> On 2017/8/5 3:59, Neil Leeder wrote:
>>> + mem_resource_0 = platform_get_resource(pdev, IORESOURCE_MEM, 0);
>>> + mem_map_0 = devm_ioremap_resource(&pdev->dev, mem_resource_0);
>>> +
>> Can we use devm_ioremap() instead? The reg_base of smmu_pmu is
>> IMPLEMENTATION DEFINED. If the regs of smmu_pmu are inside the smmu,
>> devm_ioremap_resource() will fail and return -EBUSY, e.g.:
>>
>> smmu reg ranges: 0x180000000 ~ 0x1801fffff
>> its smmu_pmu reg ranges: 0x180001000 ~ 0x180001fff
>
> I think this will not solve the issue completely, as the SMMUv3 driver
> currently uses devm_ioremap_resource() and that will fail because of
> the overlap.

Right, I get your point.

>
> Please find the discussion here:
> https://lkml.org/lkml/2018/1/31/235

Thanks for the information.

Thanks
Yisheng

>
> Thanks,
> Shameer
>
>>> + if (IS_ERR(mem_map_0)) {
>>> + dev_err(&pdev->dev, "Can't map SMMU PMU @%pa\n",
>>> + &mem_resource_0->start);
>>> + return PTR_ERR(mem_map_0);
>>> + }
>>> +
>>> + smmu_pmu->reg_base = mem_map_0;
>>> + smmu_pmu->pmu.name =
>>> + devm_kasprintf(&pdev->dev, GFP_KERNEL, "smmu_0_%llx",
>>> + (mem_resource_0->start) >> SMMU_PA_SHIFT);
>>> +
>>> + if (!smmu_pmu->pmu.name) {
>>> + dev_err(&pdev->dev, "Failed to create PMU name");
>>> + return -EINVAL;
>>> + }
>>> +
>>> + ceid_64 = readq(smmu_pmu->reg_base + SMMU_PMCG_CEID0);
>>> + ceid[0] = ceid_64 & GENMASK(31, 0);
>>> + ceid[1] = ceid_64 >> 32;
>>> + ceid_64 = readq(smmu_pmu->reg_base + SMMU_PMCG_CEID1);
>>> + ceid[2] = ceid_64 & GENMASK(31, 0);
>>> + ceid[3] = ceid_64 >> 32;
>>> + bitmap_from_u32array(smmu_pmu->supported_events, SMMU_MAX_EVENT_ID,
>>> + ceid, SMMU_NUM_EVENTS_U32);
>>> +
>>> + /* Determine if page 1 is present */
>>> + if (readl(smmu_pmu->reg_base + SMMU_PMCG_CFGR) &
>>> + SMMU_PMCG_CFGR_RELOC_CTRS) {
>>> + mem_resource_1 = platform_get_resource(pdev, IORESOURCE_MEM, 1);
>>> + mem_map_1 = devm_ioremap_resource(&pdev->dev, mem_resource_1);
>>> +
>> The same as above.
>>
>> Thanks
>> Yisheng
>>
>>> + if (IS_ERR(mem_map_1)) {
>>> + dev_err(&pdev->dev, "Can't map SMMU PMU @%pa\n",
>>> + &mem_resource_1->start);
>>> + return PTR_ERR(mem_map_1);
>>> + }
>>> + smmu_pmu->reloc_base = mem_map_1;
>>> + } else {
>>> + smmu_pmu->reloc_base = smmu_pmu->reg_base;
>>> + }
>>> +
>>> + irq = platform_get_irq(pdev, 0);
>>> + if (irq < 0) {
>>> + dev_err(&pdev->dev,
>>> + "Failed to get valid irq for smmu @%pa\n",
>>> + &mem_resource_0->start);
>>> + return irq;
>>> + }
>>
>>
>
> .
>


2018-05-02 14:22:41

by Agustin Vega-Frias

[permalink] [raw]
Subject: Re: [PATCH 2/2] perf: add arm64 smmuv3 pmu driver

On 2018-04-02 02:37, Yisheng Xie wrote:
> Hi Neil,
>
> On 2018/4/1 13:44, Neil Leeder wrote:
>> Hi Yisheng Xie,
>>
>> On 3/29/2018 03:03 AM, Yisheng Xie wrote:
>>>
>>> Hi Neil,
>>>
>>> On 2017/8/5 3:59, Neil Leeder wrote:
>>>> + mem_resource_0 = platform_get_resource(pdev, IORESOURCE_MEM, 0);
>>>> + mem_map_0 = devm_ioremap_resource(&pdev->dev, mem_resource_0);
>>>> +
>>> Can we use devm_ioremap() instead? The reg_base of smmu_pmu is
>>> IMPLEMENTATION DEFINED. If the regs of smmu_pmu are inside the smmu,
>>> devm_ioremap_resource() will fail and return -EBUSY, e.g.:
>>>
>>> smmu reg ranges: 0x180000000 ~ 0x1801fffff
>>> its smmu_pmu reg ranges: 0x180001000 ~ 0x180001fff
>>>
>> Just to let you know that I no longer work at Qualcomm and I won't be
>> able to provide updates to this patchset. I expect that others from my
>> former team at Qualcomm will pick up ownership.
>
> Thanks for this information.
>
> Hi Agustin and Timur,
>
> Is there any update on the status of this patchset?
>

Hi,

Apologies for the slow response.
We are having some internal discussions about when/if to do this.
I expect to have more clarity within a few weeks.

For what it's worth, let me take the opportunity to outline the approach
we would like to see for a V2, whether developed by us or by somebody else
in the community:

1. Rework to comply with the IORT spec changes.

2. Rework probing to extract extra information from the IORT table
about SMMU/device associations.

With this information and some perf user-space work I think it's possible
to have a single dynamic PMU node and use a similar approach to what is
used in the CoreSight drivers to pass the device we want to monitor and
for the driver to find the PMU/PMCG. E.g.:

$ lspci
0001:00:00.0 PCI bridge: Airgo Networks, Inc. Device 0401
0002:00:00.0 PCI bridge: Airgo Networks, Inc. Device 0401
0002:01:00.0 Ethernet controller: Mellanox Technologies MT27500 Family [ConnectX-3]
0003:00:00.0 PCI bridge: Airgo Networks, Inc. Device 0401
0003:01:00.0 Ethernet controller: Mellanox Technologies MT27500 Family [ConnectX-3]

# Monitor TLB misses on root complex 2 (no stream filter is applied)
perf stat -a -e smmu/tlb_miss,@0002:00:00.0/ <workload>

# Monitor TLB misses on a device on root complex 2 (derive the stream
number from the RID)
perf stat -a -e smmu/tlb_miss,@0002:01:00.0/ <workload>

Thanks,
Agustín

--
Qualcomm Datacenter Technologies, Inc. on behalf of the Qualcomm
Technologies, Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a
Linux Foundation Collaborative Project.

Subject: RE: [PATCH 2/2] perf: add arm64 smmuv3 pmu driver



> -----Original Message-----
> From: linux-arm-kernel [mailto:[email protected]]
> On Behalf Of Agustin Vega-Frias
> Sent: Wednesday, May 02, 2018 3:20 PM
> To: xieyisheng (A) <[email protected]>
> Cc: Mark Rutland <[email protected]>; Mark Langsdorf
> <[email protected]>; Neil Leeder <[email protected]>; Jon
> Masters <[email protected]>; Timur Tabi <[email protected]>; Will
> Deacon <[email protected]>; [email protected]; Mark Brown
> <[email protected]>; Mark Salter <[email protected]>; linux-arm-
> [email protected]
> Subject: Re: [PATCH 2/2] perf: add arm64 smmuv3 pmu driver
>
> On 2018-04-02 02:37, Yisheng Xie wrote:
> > Hi Neil,
> >
> > On 2018/4/1 13:44, Neil Leeder wrote:
> >> Hi Yisheng Xie,
> >>
> >> On 3/29/2018 03:03 AM, Yisheng Xie wrote:
> >>>
> >>> Hi Neil,
> >>>
> >>> On 2017/8/5 3:59, Neil Leeder wrote:
> >>>> + mem_resource_0 = platform_get_resource(pdev,
> IORESOURCE_MEM,
> >>>> 0);
> >>>> + mem_map_0 = devm_ioremap_resource(&pdev->dev,
> mem_resource_0);
> >>>> +
> >>> Can we use devm_ioremap instead? The reg_base of smmu_pmu is
> >>> IMPLEMENTATION DEFINED, so if the smmu_pmu registers are inside the
> >>> smmu region, devm_ioremap_resource will fail and return -EBUSY, e.g.:
> >>>
> >>> smmu reg ranges: 0x180000000 ~ 0x1801fffff
> >>> its smmu_pmu reg ranges: 0x180001000 ~ 0x180001fff
> >>>
> >> Just to let you know that I no longer work at Qualcomm and I won't be
> >> able to provide updates to this patchset. I expect that others from my
> >> former team at Qualcomm will pick up ownership.
> >
> > Thanks for this information.
> >
> > Hi Agustin and Timur,
> >
> > Is there any update on the status of this patchset?
> >
>
> Hi,
>
> Apologies for the slow response.
> We are having some internal discussions about when/if to do this.
> I expect to have more clarity within a few weeks.
>
> For what it's worth, let me take the opportunity to outline the approach
> we would like to see for a V2 either developed by us or somebody else
> in the community:
>
> 1. Rework to comply with the IORT spec changes.
>
> 2. Rework probing to extract extra information from the IORT table
> about SMMU/device associations.

Thanks for coming back on this. It would be good to address cases where
the PMCG base address is at an IMP DEF address offset within the
associated SMMUv3 page address space. As things stand with the PMU v1
driver currently, the SMMUv3 PMU driver probe will fail. Please find the
discussion here[1].

Thanks,
Shameer
[1] https://lkml.org/lkml/2018/1/31/235
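
A minimal sketch of the mapping workaround implied by [1], assuming the
PMCG pages may overlap a region the SMMUv3 driver has already claimed
(the helper name is hypothetical, not from the posted patch):

#include <linux/err.h>
#include <linux/io.h>
#include <linux/platform_device.h>

/* devm_ioremap_resource() requests the memory region before mapping it,
 * so it returns -EBUSY when the PMCG pages fall inside the SMMU's
 * already-claimed range. Mapping without claiming avoids the conflict,
 * at the cost of losing the exclusive-ownership check. */
static void __iomem *smmu_pmu_map_regs(struct platform_device *pdev)
{
        struct resource *res;
        void __iomem *base;

        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        if (!res)
                return ERR_PTR(-ENODEV);

        base = devm_ioremap(&pdev->dev, res->start, resource_size(res));
        return base ? base : ERR_PTR(-ENOMEM);
}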

> With this information and some perf user space work I think it's
> possible to have a single dynamic PMU node and use a similar approach
> to what is used in the Coresight drivers to pass the device we want
> to monitor and for the driver to find the PMU/PMCG. E.g.:
>
> $ lspci
> 0001:00:00.0 PCI bridge: Airgo Networks, Inc. Device 0401
> 0002:00:00.0 PCI bridge: Airgo Networks, Inc. Device 0401
> 0002:01:00.0 Ethernet controller: Mellanox Technologies MT27500 Family
> [ConnectX-3]
> 0003:00:00.0 PCI bridge: Airgo Networks, Inc. Device 0401
> 0003:01:00.0 Ethernet controller: Mellanox Technologies MT27500 Family
> [ConnectX-3]
>
> # Monitor TLB misses on root complex 2 (no stream filter is applied)
> perf stat -a -e smmu/tlb_miss,@0002:00:00.0/ <workload>
>
> # Monitor TLB misses on a device on root complex 2 (derive the stream
> number from the RID)
> perf stat -a -e smmu/tlb_miss,@0002:01:00.0/ <workload>
>
> Thanks,
> Agustín
>
> --
> Qualcomm Datacenter Technologies, Inc. on behalf of the Qualcomm
> Technologies, Inc.
> Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a
> Linux Foundation Collaborative Project.
>
> _______________________________________________
> linux-arm-kernel mailing list
> [email protected]
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

2017-11-02 20:39:24

by Leeder, Neil

[permalink] [raw]
Subject: Re: [PATCH 0/2] arm64 SMMUv3 PMU driver with IORT support

Hi Yury,

On 10/31/2017 7:33 PM, Yury Norov wrote:
> Hi Neil,
>
> On Fri, Aug 04, 2017 at 03:59:12PM -0400, Neil Leeder wrote:
>> This adds a driver for the SMMUv3 PMU into the perf framework.
>> It includes an IORT update to support PM Counter Groups.
>>
>> IORT has no mechanism for determining device names so PMUs
>> are named based on their physical address.
>>
>> Tested on Qualcomm QDF2400. perf_fuzzer ran for 4+ hours
>> with no failures.
>>
>> Neil Leeder (2):
>> acpi: arm64: add iort support for PMCG
>> perf: add arm64 smmuv3 pmu driver
>>
>> drivers/acpi/arm64/iort.c | 54 +++
>> drivers/perf/Kconfig | 9 +
>> drivers/perf/Makefile | 1 +
>> drivers/perf/arm_smmuv3_pmu.c | 823 ++++++++++++++++++++++++++++++++++++++++++
>> include/acpi/actbl2.h | 9 +-
>> 5 files changed, 895 insertions(+), 1 deletion(-)
>> create mode 100644 drivers/perf/arm_smmuv3_pmu.c
>
> I tried to run your driver on ThunderX2, but perf list doesn't show the
> new events, and the example in the description of patch 2 also doesn't work:
> yury@VAL1-25:~/linux$ tools/perf/perf stat -e smmu_0_ff88840/transaction,filter_enable=1, filter_span=1,filter_stream_id=0x42/ -a pwd
> event syntax error: '..ter_enable=1,'
> \___ parser error
> Run 'perf list' for a list of valid events
>
> Usage: perf stat [<options>] [<command>]
>
> -e, --event <event> event selector. use 'perf list' to list available events
>
> I'm running a v4.14-rc7 kernel plus this series. The config is attached.
> I found that platform_match() never returns 1 for arm-smmu-pmu, so
> the driver is never probed.
>
> Maybe it's a local configuration issue on my side?

Thanks for testing this driver. As some of the review comments pointed out, there are
some changes needed, which will be addressed in a future patchset. Notably, the IORT
spec needs updating to handle the two base addresses. The current driver assumes the
second memory area is adjacent to the first, which may not be the case in all
implementations.

In order to make the driver probe correctly, you'll need to update your ACPI tables
with entries for the PMCG, node type 0x5. This ACPI information will include the two
base addresses. I've heard that the updated spec, which will include the second memory
area, will be available soon. At that point I will release another patchset for review.
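
For reference, a sketch of how the PMCG node data might carry the second
base address explicitly once the updated spec is published (field names
and layout are assumptions, not from a released spec revision):

/* Hypothetical PMCG node layout with an explicit second base address,
 * replacing the current base + 64K adjacency assumption. */
struct acpi_iort_pmcg {
        u64 page0_base_address;  /* Page 0: counters and configuration */
        u32 overflow_gsiv;       /* Overflow interrupt, if present */
        u32 node_reference;      /* Associated SMMU or named component */
        u64 page1_base_address;  /* Page 1: may be at an IMP DEF offset */
};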

Neil
--
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm Technologies Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project.


2017-10-31 23:36:19

by Yury Norov

[permalink] [raw]
Subject: Re: [PATCH 0/2] arm64 SMMUv3 PMU driver with IORT support

Hi Neil,

On Fri, Aug 04, 2017 at 03:59:12PM -0400, Neil Leeder wrote:
> This adds a driver for the SMMUv3 PMU into the perf framework.
> It includes an IORT update to support PM Counter Groups.
>
> IORT has no mechanism for determining device names so PMUs
> are named based on their physical address.
>
> Tested on Qualcomm QDF2400. perf_fuzzer ran for 4+ hours
> with no failures.
>
> Neil Leeder (2):
> acpi: arm64: add iort support for PMCG
> perf: add arm64 smmuv3 pmu driver
>
> drivers/acpi/arm64/iort.c | 54 +++
> drivers/perf/Kconfig | 9 +
> drivers/perf/Makefile | 1 +
> drivers/perf/arm_smmuv3_pmu.c | 823 ++++++++++++++++++++++++++++++++++++++++++
> include/acpi/actbl2.h | 9 +-
> 5 files changed, 895 insertions(+), 1 deletion(-)
> create mode 100644 drivers/perf/arm_smmuv3_pmu.c

I tried to run your driver on ThunderX2, but perf list doesn't show the
new events, and the example in the description of patch 2 also doesn't work:
yury@VAL1-25:~/linux$ tools/perf/perf stat -e smmu_0_ff88840/transaction,filter_enable=1, filter_span=1,filter_stream_id=0x42/ -a pwd
event syntax error: '..ter_enable=1,'
\___ parser error
Run 'perf list' for a list of valid events

Usage: perf stat [<options>] [<command>]

-e, --event <event> event selector. use 'perf list' to list available events

I'm running a v4.14-rc7 kernel plus this series. The config is attached.
I found that platform_match() never returns 1 for arm-smmu-pmu, so
the driver is never probed.

Maybe it's a local configuration issue on my side?

Thanks for any help,
Yury


Attachments:
(No filename) (1.61 kB)
t99.config.gz (34.25 kB)
Download all attachments

2017-10-12 11:16:21

by Hanjun Guo

[permalink] [raw]
Subject: Re: [PATCH 0/2] arm64 SMMUv3 PMU driver with IORT support

On 2017/10/12 19:05, Lorenzo Pieralisi wrote:
> On Thu, Oct 12, 2017 at 06:58:33PM +0800, Hanjun Guo wrote:
>> On 2017/8/11 11:28, Leeder, Neil wrote:
>>> On 8/9/2017 9:26 PM, Hanjun Guo wrote:
>>>>> On 2017/8/9 23:48, Leeder, Neil wrote:
>>>>>>>>>>> drivers/perf/Kconfig | 9 +
>>>>>>>>>>> drivers/perf/Makefile | 1 +
>>>>>>>>>>> drivers/perf/arm_smmuv3_pmu.c | 823 ++++++++++++++++++++++++++++++++++++++++++
>>>>>>>>>>> include/acpi/actbl2.h | 9 +-
>>>>>>>>> Do you have the acpica support code? I'm currently
>>>>>>>>> working on SMMUv3 MSI support and I would like to test
>>>>>>>>> it with MSI support for PMCG as well.
>>>>>>> I don't have any code other than what was posted here.
>>>>>>> What additional ACPICA support code is needed?
>>>>> Sorry for not being clear; I mean the ACPICA code for iASL and
>>>>> other tools: github.com/acpica/acpica.git
>>>>>
>>>>> With that code, I can compile my IORT table with PMCG node.
>>> Unfortunately it looks like I'm the first person from Qualcomm Datacenter
>>> Technologies to try to add something to ACPICA, which means I'll have to
>>> kick off an internal legal process which may take some time. Unless someone
>>> else wants to take a crack at it - and really, there's nothing more than is
>>> in the ARM IORT spec - it could be a while before I can do that.
>> Just an update: I sent a pull request to Bob for the ACPICA changes;
>> please take a look:
>>
>> https://github.com/acpica/acpica/pull/327
> You should not have done that. We need to update IORT specifications
> for IORT PMCG before merging ACPICA PMCG support.
>
> Ask Robert to drop it straight away or I will.

Sorry, I will do that, thanks for the reminder.

Thanks
Hanjun



2017-10-12 11:06:23

by Lorenzo Pieralisi

[permalink] [raw]
Subject: Re: [PATCH 0/2] arm64 SMMUv3 PMU driver with IORT support

On Thu, Oct 12, 2017 at 06:58:33PM +0800, Hanjun Guo wrote:
> On 2017/8/11 11:28, Leeder, Neil wrote:
> > On 8/9/2017 9:26 PM, Hanjun Guo wrote:
> >> > On 2017/8/9 23:48, Leeder, Neil wrote:
> >>>>> >>>> drivers/perf/Kconfig | 9 +
> >>>>> >>>> drivers/perf/Makefile | 1 +
> >>>>> >>>> drivers/perf/arm_smmuv3_pmu.c | 823 ++++++++++++++++++++++++++++++++++++++++++
> >>>>> >>>> include/acpi/actbl2.h | 9 +-
> >>>> >>> Do you have the acpica support code? I'm currently
> >>>> >>> working on SMMUv3 MSI support and I would like to test
> >>>> >>> it with MSI support for PMCG as well.
> >>> >> I don't have any code other than what was posted here.
> >>> >> What additional ACPICA support code is needed?
> >> >
> >> > Sorry for not being clear; I mean the ACPICA code for iASL and
> >> > other tools: github.com/acpica/acpica.git
> >> >
> >> > With that code, I can compile my IORT table with PMCG node.
> > Unfortunately it looks like I'm the first person from Qualcomm Datacenter
> > Technologies to try to add something to ACPICA, which means I'll have to
> > kick off an internal legal process which may take some time. Unless someone
> > else wants to take a crack at it - and really, there's nothing more than is
> > in the ARM IORT spec - it could be a while before I can do that.
>
> Just an update: I sent a pull request to Bob for the ACPICA changes;
> please take a look:
>
> https://github.com/acpica/acpica/pull/327

You should not have done that. We need to update IORT specifications
for IORT PMCG before merging ACPICA PMCG support.

Ask Robert to drop it straight away or I will.

Lorenzo


2017-10-12 11:00:42

by Hanjun Guo

[permalink] [raw]
Subject: Re: [PATCH 0/2] arm64 SMMUv3 PMU driver with IORT support

On 2017/8/11 11:28, Leeder, Neil wrote:
> On 8/9/2017 9:26 PM, Hanjun Guo wrote:
>> > On 2017/8/9 23:48, Leeder, Neil wrote:
>>>>> >>>> drivers/perf/Kconfig | 9 +
>>>>> >>>> drivers/perf/Makefile | 1 +
>>>>> >>>> drivers/perf/arm_smmuv3_pmu.c | 823 ++++++++++++++++++++++++++++++++++++++++++
>>>>> >>>> include/acpi/actbl2.h | 9 +-
>>>> >>> Do you have the acpica support code? I'm currently
>>>> >>> working on SMMUv3 MSI support and I would like to test
>>>> >>> it with MSI support for PMCG as well.
>>> >> I don't have any code other than what was posted here.
>>> >> What additional ACPICA support code is needed?
>> >
>> > Sorry for not being clear; I mean the ACPICA code for iASL and
>> > other tools: github.com/acpica/acpica.git
>> >
>> > With that code, I can compile my IORT table with PMCG node.
> Unfortunately it looks like I'm the first person from Qualcomm Datacenter
> Technologies to try to add something to ACPICA, which means I'll have to
> kick off an internal legal process which may take some time. Unless someone
> else wants to take a crack at it - and really, there's nothing more than is
> in the ARM IORT spec - it could be a while before I can do that.

Just an update: I sent a pull request to Bob for the ACPICA changes;
please take a look:

https://github.com/acpica/acpica/pull/327

Thanks
Hanjun

