2022-02-24 14:54:12

by Hector Martin

Subject: [PATCH v2 0/7] irqchip/apple-aic: Add support for AICv2

Hi folks,

In the t6000/t6001 (M1 Pro / Max) SoCs, Apple introduced a new version
of their interrupt controller. This is a significant departure from
AICv1 and seems designed to better scale to larger chips. This series
adds support for it to the existing AIC driver.

Gone are CPU affinities; instead there seems to be some kind of
"automagic" dispatch to willing CPU cores, and cores can also opt-out
via an IMP-DEF sysreg (!). Right now the bootloader just sets up all
cores to accept IRQs, and we ignore all this and let the magic
algorithm pick a CPU to accept the IRQ. In the future, we might start
making use of these finer-grained capabilities for e.g. better
real-time guarantees (CPUs running RT threads might opt out of IRQs).

Legacy IPI support is also gone, so this implements Fast IPI support.
Fast IPIs are implemented entirely in the CPU core complexes, using
FIQs and IMP-DEF sysregs. This is also supported on t8103/M1, so we
enable it there too, but we keep the legacy AIC IPI codepath in case
it is useful for backporting to older chips.

This also adds support for multi-die AIC2 controllers. While no
multi-die products exist yet, the AIC2 in t600x is built to support
up to 2 dies, and it's pretty clear how it works, so let's implement
it. If we're lucky, when multi-die products roll around, this will
let us support them with only DT changes. In order to support the
extra die dimension, this introduces a 4-argument IRQ phandle form
(3-argument is always supported and just implies die 0).
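To make the extra die cell concrete, here is a minimal standalone sketch (plain C, detached from the kernel's irq_fwspec machinery) of how a 3- or 4-cell specifier decodes; the helper name and error value are illustrative, but the cell layout follows the aic_irq_domain_translate() change in patch 6:

```c
#include <stdint.h>

/*
 * Decode a 3-cell <type num flags> or 4-cell <type die num flags> AIC
 * interrupt specifier. A 3-cell specifier implies die 0. This mirrors
 * the argument shuffling in patch 6; the function itself is only an
 * illustration, not kernel code.
 */
static int aic_decode_spec(const uint32_t *param, int count,
			   uint32_t *die, uint32_t *num, uint32_t *flags)
{
	const uint32_t *args;

	if (count < 3 || count > 4)
		return -1;

	*die = 0;
	args = &param[1];	/* skip the type cell (AIC_IRQ / AIC_FIQ) */

	if (count == 4) {
		*die = args[0];	/* 4-cell form carries the die ID first */
		args++;
	}

	*num = args[0];		/* hardware IRQ number within the die */
	*flags = args[1];	/* trigger type (IRQ_TYPE_*) */
	return 0;
}
```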

All register offsets are computed based on capability register values,
which should allow forward-compatibility with future AIC2 variants...
except for one. For some inexplicable reason, the number of actually
implemented die register sets is nowhere to be found (t600x has 2,
but claims 1 die in use and 8 dies max, neither of which is what we
need), and this is necessary to compute the event register offset,
which is page-aligned after the die register sets. We have no choice
but to stick this offset in the device tree... which is the same thing
Apple do in their ADT.
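As a back-of-the-envelope illustration, the per-die register block size (and where the page-aligned event registers would start, if the implemented die count were knowable) can be computed as below. The layout follows the register comment in patch 7; the concrete numbers in the usage are assumptions for illustration, not values read from real hardware:

```c
#include <stdint.h>

/*
 * Size of one die's register block, per the layout in patch 7:
 * IRQ_CFG is one u32 per IRQ, followed by five bitmap register sets
 * (SW_SET/SW_CLR/MASK_SET/MASK_CLR/HW_STATE) of one bit per IRQ each.
 */
static uint32_t aic2_die_stride(uint32_t max_irq)
{
	return 4 * max_irq +		/* IRQ_CFG */
	       5 * 4 * (max_irq / 32);	/* five 1-bit-per-IRQ register sets */
}

/* First 16K-page-aligned offset after nr_die register blocks. */
static uint32_t aic2_event_offset(uint32_t irq_cfg_base, uint32_t nr_die,
				  uint32_t max_irq)
{
	uint32_t end = irq_cfg_base + nr_die * aic2_die_stride(max_irq);

	return (end + 0x3fff) & ~(uint32_t)0x3fff;
}
```

Since the implemented die count (nr_die here) is exactly the value the capability registers do not expose, the driver cannot perform this computation on its own, which is why the event register offset ends up in the device tree.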

Changes since v1:
- Split off the DT binding
- Changed fast-ipi codepath selection to use a static key for performance
- Added fix for PCI driver to support the new 4-cell IRQ form
- Minor style / review feedback fixes

Hector Martin (7):
PCI: apple: Change MSI handling to handle 4-cell AIC fwspec form
dt-bindings: interrupt-controller: apple,aic2: New binding for AICv2
irqchip/apple-aic: Add Fast IPI support
irqchip/apple-aic: Switch to irq_domain_create_tree and sparse hwirqs
irqchip/apple-aic: Dynamically compute register offsets
irqchip/apple-aic: Support multiple dies
irqchip/apple-aic: Add support for AICv2

.../interrupt-controller/apple,aic2.yaml | 99 ++++
MAINTAINERS | 2 +-
drivers/irqchip/irq-apple-aic.c | 432 +++++++++++++++---
drivers/pci/controller/pcie-apple.c | 2 +-
4 files changed, 458 insertions(+), 77 deletions(-)
create mode 100644 Documentation/devicetree/bindings/interrupt-controller/apple,aic2.yaml

--
2.33.0


2022-02-24 15:10:19

by Hector Martin


Subject: [PATCH v2 7/7] irqchip/apple-aic: Add support for AICv2

Introduce support for the new AICv2 hardware block in t6000/t6001 SoCs.

It seems these blocks are missing the information required to compute
the event register offset in the capability registers, so we specify
that in the DT.

Signed-off-by: Hector Martin <[email protected]>
---
drivers/irqchip/irq-apple-aic.c | 148 ++++++++++++++++++++++++++++----
1 file changed, 129 insertions(+), 19 deletions(-)

diff --git a/drivers/irqchip/irq-apple-aic.c b/drivers/irqchip/irq-apple-aic.c
index 93c622435ba2..602c8b274170 100644
--- a/drivers/irqchip/irq-apple-aic.c
+++ b/drivers/irqchip/irq-apple-aic.c
@@ -103,6 +103,57 @@

#define AIC_MAX_IRQ 0x400

+/*
+ * AIC v2 registers (MMIO)
+ */
+
+#define AIC2_VERSION 0x0000
+#define AIC2_VERSION_VER GENMASK(7, 0)
+
+#define AIC2_INFO1 0x0004
+#define AIC2_INFO1_NR_IRQ GENMASK(15, 0)
+#define AIC2_INFO1_LAST_DIE GENMASK(27, 24)
+
+#define AIC2_INFO2 0x0008
+
+#define AIC2_INFO3 0x000c
+#define AIC2_INFO3_MAX_IRQ GENMASK(15, 0)
+#define AIC2_INFO3_MAX_DIE GENMASK(27, 24)
+
+#define AIC2_RESET 0x0010
+#define AIC2_RESET_RESET BIT(0)
+
+#define AIC2_CONFIG 0x0014
+#define AIC2_CONFIG_ENABLE BIT(0)
+#define AIC2_CONFIG_PREFER_PCPU BIT(28)
+
+#define AIC2_TIMEOUT 0x0028
+#define AIC2_CLUSTER_PRIO 0x0030
+#define AIC2_DELAY_GROUPS 0x0100
+
+#define AIC2_IRQ_CFG 0x2000
+
+/*
+ * AIC2 registers are laid out like this, starting at AIC2_IRQ_CFG:
+ *
+ * Repeat for each die:
+ * IRQ_CFG: u32 * MAX_IRQS
+ * SW_SET: u32 * (MAX_IRQS / 32)
+ * SW_CLR: u32 * (MAX_IRQS / 32)
+ * MASK_SET: u32 * (MAX_IRQS / 32)
+ * MASK_CLR: u32 * (MAX_IRQS / 32)
+ * HW_STATE: u32 * (MAX_IRQS / 32)
+ *
+ * This is followed by a set of event registers, each 16K page aligned.
+ * The first one is the AP event register we will use. Unfortunately,
+ * the actual implemented die count is not specified anywhere in the
+ * capability registers, so we have to explicitly specify the event
+ * register offset in the device tree to remain forward-compatible.
+ */
+
+#define AIC2_IRQ_CFG_TARGET GENMASK(3, 0)
+#define AIC2_IRQ_CFG_DELAY_IDX GENMASK(7, 5)
+
#define MASK_REG(x) (4 * ((x) >> 5))
#define MASK_BIT(x) BIT((x) & GENMASK(4, 0))

@@ -193,6 +244,7 @@ struct aic_info {
/* Register offsets */
u32 event;
u32 target_cpu;
+ u32 irq_cfg;
u32 sw_set;
u32 sw_clr;
u32 mask_set;
@@ -220,6 +272,14 @@ static const struct aic_info aic1_fipi_info = {
.fast_ipi = true,
};

+static const struct aic_info aic2_info = {
+ .version = 2,
+
+ .irq_cfg = AIC2_IRQ_CFG,
+
+ .fast_ipi = true,
+};
+
static const struct of_device_id aic_info_match[] = {
{
.compatible = "apple,t8103-aic",
@@ -229,6 +289,10 @@ static const struct of_device_id aic_info_match[] = {
.compatible = "apple,aic",
.data = &aic1_info,
},
+ {
+ .compatible = "apple,aic2",
+ .data = &aic2_info,
+ },
{}
};

@@ -373,6 +437,14 @@ static struct irq_chip aic_chip = {
.irq_set_type = aic_irq_set_type,
};

+static struct irq_chip aic2_chip = {
+ .name = "AIC2",
+ .irq_mask = aic_irq_mask,
+ .irq_unmask = aic_irq_unmask,
+ .irq_eoi = aic_irq_eoi,
+ .irq_set_type = aic_irq_set_type,
+};
+
/*
* FIQ irqchip
*/
@@ -529,10 +601,15 @@ static struct irq_chip fiq_chip = {
static int aic_irq_domain_map(struct irq_domain *id, unsigned int irq,
irq_hw_number_t hw)
{
+ struct aic_irq_chip *ic = id->host_data;
u32 type = FIELD_GET(AIC_EVENT_TYPE, hw);
+ struct irq_chip *chip = &aic_chip;
+
+ if (ic->info.version == 2)
+ chip = &aic2_chip;

if (type == AIC_EVENT_TYPE_IRQ) {
- irq_domain_set_info(id, irq, hw, &aic_chip, id->host_data,
+ irq_domain_set_info(id, irq, hw, chip, id->host_data,
handle_fasteoi_irq, NULL, NULL);
irqd_set_single_target(irq_desc_get_irq_data(irq_to_desc(irq)));
} else {
@@ -888,24 +965,26 @@ static int aic_init_cpu(unsigned int cpu)
/* Commit all of the above */
isb();

- /*
- * Make sure the kernel's idea of logical CPU order is the same as AIC's
- * If we ever end up with a mismatch here, we will have to introduce
- * a mapping table similar to what other irqchip drivers do.
- */
- WARN_ON(aic_ic_read(aic_irqc, AIC_WHOAMI) != smp_processor_id());
+ if (aic_irqc->info.version == 1) {
+ /*
+ * Make sure the kernel's idea of logical CPU order is the same as AIC's
+ * If we ever end up with a mismatch here, we will have to introduce
+ * a mapping table similar to what other irqchip drivers do.
+ */
+ WARN_ON(aic_ic_read(aic_irqc, AIC_WHOAMI) != smp_processor_id());

- /*
- * Always keep IPIs unmasked at the hardware level (except auto-masking
- * by AIC during processing). We manage masks at the vIPI level.
- * These registers only exist on AICv1, AICv2 always uses fast IPIs.
- */
- aic_ic_write(aic_irqc, AIC_IPI_ACK, AIC_IPI_SELF | AIC_IPI_OTHER);
- if (static_branch_likely(&use_fast_ipi)) {
- aic_ic_write(aic_irqc, AIC_IPI_MASK_SET, AIC_IPI_SELF | AIC_IPI_OTHER);
- } else {
- aic_ic_write(aic_irqc, AIC_IPI_MASK_SET, AIC_IPI_SELF);
- aic_ic_write(aic_irqc, AIC_IPI_MASK_CLR, AIC_IPI_OTHER);
+ /*
+ * Always keep IPIs unmasked at the hardware level (except auto-masking
+ * by AIC during processing). We manage masks at the vIPI level.
+ * These registers only exist on AICv1, AICv2 always uses fast IPIs.
+ */
+ aic_ic_write(aic_irqc, AIC_IPI_ACK, AIC_IPI_SELF | AIC_IPI_OTHER);
+ if (static_branch_likely(&use_fast_ipi)) {
+ aic_ic_write(aic_irqc, AIC_IPI_MASK_SET, AIC_IPI_SELF | AIC_IPI_OTHER);
+ } else {
+ aic_ic_write(aic_irqc, AIC_IPI_MASK_SET, AIC_IPI_SELF);
+ aic_ic_write(aic_irqc, AIC_IPI_MASK_CLR, AIC_IPI_OTHER);
+ }
}

/* Initialize the local mask state */
@@ -960,6 +1039,29 @@ static int __init aic_of_ic_init(struct device_node *node, struct device_node *p

break;
}
+ case 2: {
+ u32 info1, info3;
+
+ info1 = aic_ic_read(irqc, AIC2_INFO1);
+ info3 = aic_ic_read(irqc, AIC2_INFO3);
+
+ irqc->nr_irq = FIELD_GET(AIC2_INFO1_NR_IRQ, info1);
+ irqc->max_irq = FIELD_GET(AIC2_INFO3_MAX_IRQ, info3);
+ irqc->nr_die = FIELD_GET(AIC2_INFO1_LAST_DIE, info1) + 1;
+ irqc->max_die = FIELD_GET(AIC2_INFO3_MAX_DIE, info3);
+
+ off = start_off = irqc->info.irq_cfg;
+ off += sizeof(u32) * irqc->max_irq; /* IRQ_CFG */
+
+ if (of_property_read_u32(node, "apple,event-reg", &irqc->info.event) < 0) {
+ pr_err("Failed to get apple,event-reg property");
+ iounmap(irqc->base);
+ kfree(irqc);
+ return -ENODEV;
+ }
+
+ break;
+ }
}

irqc->info.sw_set = off;
@@ -1011,6 +1113,13 @@ static int __init aic_of_ic_init(struct device_node *node, struct device_node *p
off += irqc->info.die_stride;
}

+ if (irqc->info.version == 2) {
+ u32 config = aic_ic_read(irqc, AIC2_CONFIG);
+
+ config |= AIC2_CONFIG_ENABLE;
+ aic_ic_write(irqc, AIC2_CONFIG, config);
+ }
+
if (!is_kernel_in_hyp_mode())
pr_info("Kernel running in EL1, mapping interrupts");

@@ -1029,4 +1138,5 @@ static int __init aic_of_ic_init(struct device_node *node, struct device_node *p
return 0;
}

-IRQCHIP_DECLARE(apple_m1_aic, "apple,aic", aic_of_ic_init);
+IRQCHIP_DECLARE(apple_aic, "apple,aic", aic_of_ic_init);
+IRQCHIP_DECLARE(apple_aic2, "apple,aic2", aic_of_ic_init);
--
2.33.0

2022-02-24 15:18:21

by Hector Martin

Subject: [PATCH v2 4/7] irqchip/apple-aic: Switch to irq_domain_create_tree and sparse hwirqs

This allows us to directly use the hardware event number as the hwirq
number. Since IRQ events have bit 16 set (type=1), FIQs now move to
starting at hwirq number 0.

This will become more important once multi-die support is introduced in
a later commit.
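The resulting hwirq encoding can be sketched as plain bit arithmetic; the masks follow the AIC_EVENT_TYPE / AIC_EVENT_NUM fields in this patch (type in bits 31:16, event number in bits 15:0), with FIQs using the software-only type 0. This is an illustration detached from the driver, not kernel code:

```c
#include <stdint.h>

#define AIC_EVENT_TYPE_FIQ 0	/* software use: FIQs start at hwirq 0 */
#define AIC_EVENT_TYPE_IRQ 1	/* hardware IRQ events have bit 16 set */

static uint32_t aic_irq_hwirq(uint32_t num)
{
	return ((uint32_t)AIC_EVENT_TYPE_IRQ << 16) | (num & 0xffff);
}

static uint32_t aic_fiq_hwirq(uint32_t num)
{
	return ((uint32_t)AIC_EVENT_TYPE_FIQ << 16) | (num & 0xffff);
}

static uint32_t aic_hwirq_irq(uint32_t hwirq)
{
	return hwirq & 0xffff;	/* recover the event number */
}
```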

Signed-off-by: Hector Martin <[email protected]>
---
drivers/irqchip/irq-apple-aic.c | 71 ++++++++++++++++++---------------
1 file changed, 39 insertions(+), 32 deletions(-)

diff --git a/drivers/irqchip/irq-apple-aic.c b/drivers/irqchip/irq-apple-aic.c
index 613e0ebdabdc..96480389195d 100644
--- a/drivers/irqchip/irq-apple-aic.c
+++ b/drivers/irqchip/irq-apple-aic.c
@@ -68,7 +68,7 @@
*/

#define AIC_INFO 0x0004
-#define AIC_INFO_NR_HW GENMASK(15, 0)
+#define AIC_INFO_NR_IRQ GENMASK(15, 0)

#define AIC_CONFIG 0x0010

@@ -77,7 +77,8 @@
#define AIC_EVENT_TYPE GENMASK(31, 16)
#define AIC_EVENT_NUM GENMASK(15, 0)

-#define AIC_EVENT_TYPE_HW 1
+#define AIC_EVENT_TYPE_FIQ 0 /* Software use */
+#define AIC_EVENT_TYPE_IRQ 1
#define AIC_EVENT_TYPE_IPI 4
#define AIC_EVENT_IPI_OTHER 1
#define AIC_EVENT_IPI_SELF 2
@@ -160,6 +161,11 @@
#define MPIDR_CPU(x) MPIDR_AFFINITY_LEVEL(x, 0)
#define MPIDR_CLUSTER(x) MPIDR_AFFINITY_LEVEL(x, 1)

+#define AIC_IRQ_HWIRQ(x) (FIELD_PREP(AIC_EVENT_TYPE, AIC_EVENT_TYPE_IRQ) | \
+ FIELD_PREP(AIC_EVENT_NUM, x))
+#define AIC_FIQ_HWIRQ(x) (FIELD_PREP(AIC_EVENT_TYPE, AIC_EVENT_TYPE_FIQ) | \
+ FIELD_PREP(AIC_EVENT_NUM, x))
+#define AIC_HWIRQ_IRQ(x) FIELD_GET(AIC_EVENT_NUM, x)
#define AIC_NR_FIQ 4
#define AIC_NR_SWIPI 32

@@ -213,7 +219,7 @@ struct aic_irq_chip {
void __iomem *base;
struct irq_domain *hw_domain;
struct irq_domain *ipi_domain;
- int nr_hw;
+ int nr_irq;

struct aic_info info;
};
@@ -243,18 +249,22 @@ static void aic_ic_write(struct aic_irq_chip *ic, u32 reg, u32 val)

static void aic_irq_mask(struct irq_data *d)
{
+ irq_hw_number_t hwirq = irqd_to_hwirq(d);
struct aic_irq_chip *ic = irq_data_get_irq_chip_data(d);

- aic_ic_write(ic, AIC_MASK_SET + MASK_REG(irqd_to_hwirq(d)),
- MASK_BIT(irqd_to_hwirq(d)));
+ u32 irq = AIC_HWIRQ_IRQ(hwirq);
+
+ aic_ic_write(ic, AIC_MASK_SET + MASK_REG(irq), MASK_BIT(irq));
}

static void aic_irq_unmask(struct irq_data *d)
{
+ irq_hw_number_t hwirq = irqd_to_hwirq(d);
struct aic_irq_chip *ic = irq_data_get_irq_chip_data(d);

- aic_ic_write(ic, AIC_MASK_CLR + MASK_REG(d->hwirq),
- MASK_BIT(irqd_to_hwirq(d)));
+ u32 irq = AIC_HWIRQ_IRQ(hwirq);
+
+ aic_ic_write(ic, AIC_MASK_CLR + MASK_REG(irq), MASK_BIT(irq));
}

static void aic_irq_eoi(struct irq_data *d)
@@ -281,8 +291,8 @@ static void __exception_irq_entry aic_handle_irq(struct pt_regs *regs)
type = FIELD_GET(AIC_EVENT_TYPE, event);
irq = FIELD_GET(AIC_EVENT_NUM, event);

- if (type == AIC_EVENT_TYPE_HW)
- generic_handle_domain_irq(aic_irqc->hw_domain, irq);
+ if (type == AIC_EVENT_TYPE_IRQ)
+ generic_handle_domain_irq(aic_irqc->hw_domain, event);
else if (type == AIC_EVENT_TYPE_IPI && irq == 1)
aic_handle_ipi(regs);
else if (event != 0)
@@ -314,7 +324,7 @@ static int aic_irq_set_affinity(struct irq_data *d,
else
cpu = cpumask_any_and(mask_val, cpu_online_mask);

- aic_ic_write(ic, AIC_TARGET_CPU + hwirq * 4, BIT(cpu));
+ aic_ic_write(ic, AIC_TARGET_CPU + AIC_HWIRQ_IRQ(hwirq) * 4, BIT(cpu));
irq_data_update_effective_affinity(d, cpumask_of(cpu));

return IRQ_SET_MASK_OK;
@@ -344,9 +354,7 @@ static struct irq_chip aic_chip = {

static unsigned long aic_fiq_get_idx(struct irq_data *d)
{
- struct aic_irq_chip *ic = irq_data_get_irq_chip_data(d);
-
- return irqd_to_hwirq(d) - ic->nr_hw;
+ return AIC_HWIRQ_IRQ(irqd_to_hwirq(d));
}

static void aic_fiq_set_mask(struct irq_data *d)
@@ -434,11 +442,11 @@ static void __exception_irq_entry aic_handle_fiq(struct pt_regs *regs)

if (TIMER_FIRING(read_sysreg(cntp_ctl_el0)))
generic_handle_domain_irq(aic_irqc->hw_domain,
- aic_irqc->nr_hw + AIC_TMR_EL0_PHYS);
+ AIC_FIQ_HWIRQ(AIC_TMR_EL0_PHYS));

if (TIMER_FIRING(read_sysreg(cntv_ctl_el0)))
generic_handle_domain_irq(aic_irqc->hw_domain,
- aic_irqc->nr_hw + AIC_TMR_EL0_VIRT);
+ AIC_FIQ_HWIRQ(AIC_TMR_EL0_VIRT));

if (is_kernel_in_hyp_mode()) {
uint64_t enabled = read_sysreg_s(SYS_IMP_APL_VM_TMR_FIQ_ENA_EL2);
@@ -446,12 +454,12 @@ static void __exception_irq_entry aic_handle_fiq(struct pt_regs *regs)
if ((enabled & VM_TMR_FIQ_ENABLE_P) &&
TIMER_FIRING(read_sysreg_s(SYS_CNTP_CTL_EL02)))
generic_handle_domain_irq(aic_irqc->hw_domain,
- aic_irqc->nr_hw + AIC_TMR_EL02_PHYS);
+ AIC_FIQ_HWIRQ(AIC_TMR_EL02_PHYS));

if ((enabled & VM_TMR_FIQ_ENABLE_V) &&
TIMER_FIRING(read_sysreg_s(SYS_CNTV_CTL_EL02)))
generic_handle_domain_irq(aic_irqc->hw_domain,
- aic_irqc->nr_hw + AIC_TMR_EL02_VIRT);
+ AIC_FIQ_HWIRQ(AIC_TMR_EL02_VIRT));
}

if ((read_sysreg_s(SYS_IMP_APL_PMCR0_EL1) & (PMCR0_IMODE | PMCR0_IACT)) ==
@@ -496,9 +504,9 @@ static struct irq_chip fiq_chip = {
static int aic_irq_domain_map(struct irq_domain *id, unsigned int irq,
irq_hw_number_t hw)
{
- struct aic_irq_chip *ic = id->host_data;
+ u32 type = FIELD_GET(AIC_EVENT_TYPE, hw);

- if (hw < ic->nr_hw) {
+ if (type == AIC_EVENT_TYPE_IRQ) {
irq_domain_set_info(id, irq, hw, &aic_chip, id->host_data,
handle_fasteoi_irq, NULL, NULL);
irqd_set_single_target(irq_desc_get_irq_data(irq_to_desc(irq)));
@@ -523,14 +531,14 @@ static int aic_irq_domain_translate(struct irq_domain *id,

switch (fwspec->param[0]) {
case AIC_IRQ:
- if (fwspec->param[1] >= ic->nr_hw)
+ if (fwspec->param[1] >= ic->nr_irq)
return -EINVAL;
- *hwirq = fwspec->param[1];
+ *hwirq = AIC_IRQ_HWIRQ(fwspec->param[1]);
break;
case AIC_FIQ:
if (fwspec->param[1] >= AIC_NR_FIQ)
return -EINVAL;
- *hwirq = ic->nr_hw + fwspec->param[1];
+ *hwirq = AIC_FIQ_HWIRQ(fwspec->param[1]);

/*
* In EL1 the non-redirected registers are the guest's,
@@ -539,10 +547,10 @@ static int aic_irq_domain_translate(struct irq_domain *id,
if (!is_kernel_in_hyp_mode()) {
switch (fwspec->param[1]) {
case AIC_TMR_GUEST_PHYS:
- *hwirq = ic->nr_hw + AIC_TMR_EL0_PHYS;
+ *hwirq = AIC_FIQ_HWIRQ(AIC_TMR_EL0_PHYS);
break;
case AIC_TMR_GUEST_VIRT:
- *hwirq = ic->nr_hw + AIC_TMR_EL0_VIRT;
+ *hwirq = AIC_FIQ_HWIRQ(AIC_TMR_EL0_VIRT);
break;
case AIC_TMR_HV_PHYS:
case AIC_TMR_HV_VIRT:
@@ -900,16 +908,15 @@ static int __init aic_of_ic_init(struct device_node *node, struct device_node *p
aic_irqc = irqc;

info = aic_ic_read(irqc, AIC_INFO);
- irqc->nr_hw = FIELD_GET(AIC_INFO_NR_HW, info);
+ irqc->nr_irq = FIELD_GET(AIC_INFO_NR_IRQ, info);

if (irqc->info.fast_ipi)
static_branch_enable(&use_fast_ipi);
else
static_branch_disable(&use_fast_ipi);

- irqc->hw_domain = irq_domain_create_linear(of_node_to_fwnode(node),
- irqc->nr_hw + AIC_NR_FIQ,
- &aic_irq_domain_ops, irqc);
+ irqc->hw_domain = irq_domain_create_tree(of_node_to_fwnode(node),
+ &aic_irq_domain_ops, irqc);
if (WARN_ON(!irqc->hw_domain)) {
iounmap(irqc->base);
kfree(irqc);
@@ -928,11 +935,11 @@ static int __init aic_of_ic_init(struct device_node *node, struct device_node *p
set_handle_irq(aic_handle_irq);
set_handle_fiq(aic_handle_fiq);

- for (i = 0; i < BITS_TO_U32(irqc->nr_hw); i++)
+ for (i = 0; i < BITS_TO_U32(irqc->nr_irq); i++)
aic_ic_write(irqc, AIC_MASK_SET + i * 4, U32_MAX);
- for (i = 0; i < BITS_TO_U32(irqc->nr_hw); i++)
+ for (i = 0; i < BITS_TO_U32(irqc->nr_irq); i++)
aic_ic_write(irqc, AIC_SW_CLR + i * 4, U32_MAX);
- for (i = 0; i < irqc->nr_hw; i++)
+ for (i = 0; i < irqc->nr_irq; i++)
aic_ic_write(irqc, AIC_TARGET_CPU + i * 4, 1);

if (!is_kernel_in_hyp_mode())
@@ -948,7 +955,7 @@ static int __init aic_of_ic_init(struct device_node *node, struct device_node *p
vgic_set_kvm_info(&vgic_info);

pr_info("Initialized with %d IRQs, %d FIQs, %d vIPIs\n",
- irqc->nr_hw, AIC_NR_FIQ, AIC_NR_SWIPI);
+ irqc->nr_irq, AIC_NR_FIQ, AIC_NR_SWIPI);

return 0;
}
--
2.33.0

2022-02-24 16:00:57

by Hector Martin

Subject: [PATCH v2 6/7] irqchip/apple-aic: Support multiple dies

Multi-die support in AICv2 uses several sets of IRQ registers. Introduce
a die count and compute the register group offset based on the die ID
field of the hwirq number, as reported by the hardware.
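In hwirq terms, the die ID occupies its own bit-field above the (narrowed) type field. A minimal encode/decode sketch, with the field positions taken from the AIC_EVENT_DIE / AIC_EVENT_TYPE / AIC_EVENT_NUM definitions in this patch (die in bits 31:24, type in 23:16, number in 15:0); this is illustrative standalone code, not the driver's:

```c
#include <stdint.h>

#define AIC_EVENT_TYPE_IRQ 1	/* hardware IRQ event type */

static uint32_t aic_irq_hwirq(uint32_t die, uint32_t irq)
{
	return ((die & 0xff) << 24) |
	       ((uint32_t)AIC_EVENT_TYPE_IRQ << 16) |
	       (irq & 0xffff);
}

static uint32_t aic_hwirq_die(uint32_t hwirq)
{
	return (hwirq >> 24) & 0xff;	/* die ID field */
}

static uint32_t aic_hwirq_irq(uint32_t hwirq)
{
	return hwirq & 0xffff;		/* event number field */
}
```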

Signed-off-by: Hector Martin <[email protected]>
---
drivers/irqchip/irq-apple-aic.c | 77 +++++++++++++++++++++++----------
1 file changed, 54 insertions(+), 23 deletions(-)

diff --git a/drivers/irqchip/irq-apple-aic.c b/drivers/irqchip/irq-apple-aic.c
index 4b1ba732f476..93c622435ba2 100644
--- a/drivers/irqchip/irq-apple-aic.c
+++ b/drivers/irqchip/irq-apple-aic.c
@@ -74,7 +74,8 @@

#define AIC_WHOAMI 0x2000
#define AIC_EVENT 0x2004
-#define AIC_EVENT_TYPE GENMASK(31, 16)
+#define AIC_EVENT_DIE GENMASK(31, 24)
+#define AIC_EVENT_TYPE GENMASK(23, 16)
#define AIC_EVENT_NUM GENMASK(15, 0)

#define AIC_EVENT_TYPE_FIQ 0 /* Software use */
@@ -159,11 +160,13 @@
#define MPIDR_CPU(x) MPIDR_AFFINITY_LEVEL(x, 0)
#define MPIDR_CLUSTER(x) MPIDR_AFFINITY_LEVEL(x, 1)

-#define AIC_IRQ_HWIRQ(x) (FIELD_PREP(AIC_EVENT_TYPE, AIC_EVENT_TYPE_IRQ) | \
- FIELD_PREP(AIC_EVENT_NUM, x))
+#define AIC_IRQ_HWIRQ(die, irq) (FIELD_PREP(AIC_EVENT_DIE, die) | \
+ FIELD_PREP(AIC_EVENT_TYPE, AIC_EVENT_TYPE_IRQ) | \
+ FIELD_PREP(AIC_EVENT_NUM, irq))
#define AIC_FIQ_HWIRQ(x) (FIELD_PREP(AIC_EVENT_TYPE, AIC_EVENT_TYPE_FIQ) | \
FIELD_PREP(AIC_EVENT_NUM, x))
#define AIC_HWIRQ_IRQ(x) FIELD_GET(AIC_EVENT_NUM, x)
+#define AIC_HWIRQ_DIE(x) FIELD_GET(AIC_EVENT_DIE, x)
#define AIC_NR_FIQ 4
#define AIC_NR_SWIPI 32

@@ -195,6 +198,8 @@ struct aic_info {
u32 mask_set;
u32 mask_clr;

+ u32 die_stride;
+
/* Features */
bool fast_ipi;
};
@@ -234,6 +239,8 @@ struct aic_irq_chip {

int nr_irq;
int max_irq;
+ int nr_die;
+ int max_die;

struct aic_info info;
};
@@ -266,9 +273,10 @@ static void aic_irq_mask(struct irq_data *d)
irq_hw_number_t hwirq = irqd_to_hwirq(d);
struct aic_irq_chip *ic = irq_data_get_irq_chip_data(d);

+ u32 off = AIC_HWIRQ_DIE(hwirq) * ic->info.die_stride;
u32 irq = AIC_HWIRQ_IRQ(hwirq);

- aic_ic_write(ic, ic->info.mask_set + MASK_REG(irq), MASK_BIT(irq));
+ aic_ic_write(ic, ic->info.mask_set + off + MASK_REG(irq), MASK_BIT(irq));
}

static void aic_irq_unmask(struct irq_data *d)
@@ -276,9 +284,10 @@ static void aic_irq_unmask(struct irq_data *d)
irq_hw_number_t hwirq = irqd_to_hwirq(d);
struct aic_irq_chip *ic = irq_data_get_irq_chip_data(d);

+ u32 off = AIC_HWIRQ_DIE(hwirq) * ic->info.die_stride;
u32 irq = AIC_HWIRQ_IRQ(hwirq);

- aic_ic_write(ic, ic->info.mask_clr + MASK_REG(irq), MASK_BIT(irq));
+ aic_ic_write(ic, ic->info.mask_clr + off + MASK_REG(irq), MASK_BIT(irq));
}

static void aic_irq_eoi(struct irq_data *d)
@@ -541,27 +550,41 @@ static int aic_irq_domain_translate(struct irq_domain *id,
unsigned int *type)
{
struct aic_irq_chip *ic = id->host_data;
+ u32 *args;
+ u32 die = 0;

- if (fwspec->param_count != 3 || !is_of_node(fwspec->fwnode))
+ if (fwspec->param_count < 3 || fwspec->param_count > 4 ||
+ !is_of_node(fwspec->fwnode))
return -EINVAL;

+ args = &fwspec->param[1];
+
+ if (fwspec->param_count == 4) {
+ die = args[0];
+ args++;
+ }
+
switch (fwspec->param[0]) {
case AIC_IRQ:
- if (fwspec->param[1] >= ic->nr_irq)
+ if (die >= ic->nr_die)
return -EINVAL;
- *hwirq = AIC_IRQ_HWIRQ(fwspec->param[1]);
+ if (args[0] >= ic->nr_irq)
+ return -EINVAL;
+ *hwirq = AIC_IRQ_HWIRQ(die, args[0]);
break;
case AIC_FIQ:
- if (fwspec->param[1] >= AIC_NR_FIQ)
+ if (die != 0)
+ return -EINVAL;
+ if (args[0] >= AIC_NR_FIQ)
return -EINVAL;
- *hwirq = AIC_FIQ_HWIRQ(fwspec->param[1]);
+ *hwirq = AIC_FIQ_HWIRQ(args[0]);

/*
* In EL1 the non-redirected registers are the guest's,
* not EL2's, so remap the hwirqs to match.
*/
if (!is_kernel_in_hyp_mode()) {
- switch (fwspec->param[1]) {
+ switch (args[0]) {
case AIC_TMR_GUEST_PHYS:
*hwirq = AIC_FIQ_HWIRQ(AIC_TMR_EL0_PHYS);
break;
@@ -580,7 +603,7 @@ static int aic_irq_domain_translate(struct irq_domain *id,
return -EINVAL;
}

- *type = fwspec->param[2] & IRQ_TYPE_SENSE_MASK;
+ *type = args[1] & IRQ_TYPE_SENSE_MASK;

return 0;
}
@@ -899,8 +922,8 @@ static struct gic_kvm_info vgic_info __initdata = {

static int __init aic_of_ic_init(struct device_node *node, struct device_node *parent)
{
- int i;
- u32 off;
+ int i, die;
+ u32 off, start_off;
void __iomem *regs;
struct aic_irq_chip *irqc;
const struct of_device_id *match;
@@ -930,8 +953,9 @@ static int __init aic_of_ic_init(struct device_node *node, struct device_node *p
info = aic_ic_read(irqc, AIC_INFO);
irqc->nr_irq = FIELD_GET(AIC_INFO_NR_IRQ, info);
irqc->max_irq = AIC_MAX_IRQ;
+ irqc->nr_die = irqc->max_die = 1;

- off = irqc->info.target_cpu;
+ off = start_off = irqc->info.target_cpu;
off += sizeof(u32) * irqc->max_irq; /* TARGET_CPU */

break;
@@ -953,6 +977,8 @@ static int __init aic_of_ic_init(struct device_node *node, struct device_node *p
else
static_branch_disable(&use_fast_ipi);

+ irqc->info.die_stride = off - start_off;
+
irqc->hw_domain = irq_domain_create_tree(of_node_to_fwnode(node),
&aic_irq_domain_ops, irqc);
if (WARN_ON(!irqc->hw_domain)) {
@@ -973,12 +999,17 @@ static int __init aic_of_ic_init(struct device_node *node, struct device_node *p
set_handle_irq(aic_handle_irq);
set_handle_fiq(aic_handle_fiq);

- for (i = 0; i < BITS_TO_U32(irqc->nr_irq); i++)
- aic_ic_write(irqc, irqc->info.mask_set + i * 4, U32_MAX);
- for (i = 0; i < BITS_TO_U32(irqc->nr_irq); i++)
- aic_ic_write(irqc, irqc->info.sw_clr + i * 4, U32_MAX);
- for (i = 0; i < irqc->nr_irq; i++)
- aic_ic_write(irqc, irqc->info.target_cpu + i * 4, 1);
+ off = 0;
+ for (die = 0; die < irqc->nr_die; die++) {
+ for (i = 0; i < BITS_TO_U32(irqc->nr_irq); i++)
+ aic_ic_write(irqc, irqc->info.mask_set + off + i * 4, U32_MAX);
+ for (i = 0; i < BITS_TO_U32(irqc->nr_irq); i++)
+ aic_ic_write(irqc, irqc->info.sw_clr + off + i * 4, U32_MAX);
+ if (irqc->info.target_cpu)
+ for (i = 0; i < irqc->nr_irq; i++)
+ aic_ic_write(irqc, irqc->info.target_cpu + off + i * 4, 1);
+ off += irqc->info.die_stride;
+ }

if (!is_kernel_in_hyp_mode())
pr_info("Kernel running in EL1, mapping interrupts");
@@ -992,8 +1023,8 @@ static int __init aic_of_ic_init(struct device_node *node, struct device_node *p

vgic_set_kvm_info(&vgic_info);

- pr_info("Initialized with %d/%d IRQs, %d FIQs, %d vIPIs",
- irqc->nr_irq, irqc->max_irq, AIC_NR_FIQ, AIC_NR_SWIPI);
+ pr_info("Initialized with %d/%d IRQs * %d/%d die(s), %d FIQs, %d vIPIs",
+ irqc->nr_irq, irqc->max_irq, irqc->nr_die, irqc->max_die, AIC_NR_FIQ, AIC_NR_SWIPI);

return 0;
}
--
2.33.0

2022-02-24 16:15:12

by Hector Martin

Subject: [PATCH v2 1/7] PCI: apple: Change MSI handling to handle 4-cell AIC fwspec form

AIC2 changes the IRQ fwspec to add a cell. Always use the second-to-last
cell for the MSI handling, so it will work for both AIC1 and AIC2 devices.
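The fix amounts to indexing the interrupt-number cell from the end of the specifier instead of from the front. A standalone sketch using a plain array instead of struct irq_fwspec (the helper name is illustrative):

```c
#include <stdint.h>

/*
 * Add an allocated MSI hwirq to the interrupt-number cell. For both the
 * 3-cell AIC1 form <type num flags> and the 4-cell AIC2 form
 * <type die num flags>, the number is the second-to-last cell.
 */
static void apple_msi_fixup_spec(uint32_t *param, int count, uint32_t hwirq)
{
	param[count - 2] += hwirq;
}
```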

Signed-off-by: Hector Martin <[email protected]>
---
drivers/pci/controller/pcie-apple.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/pci/controller/pcie-apple.c b/drivers/pci/controller/pcie-apple.c
index 854d95163112..a2c3c207a04b 100644
--- a/drivers/pci/controller/pcie-apple.c
+++ b/drivers/pci/controller/pcie-apple.c
@@ -219,7 +219,7 @@ static int apple_msi_domain_alloc(struct irq_domain *domain, unsigned int virq,
if (hwirq < 0)
return -ENOSPC;

- fwspec.param[1] += hwirq;
+ fwspec.param[fwspec.param_count - 2] += hwirq;

ret = irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, &fwspec);
if (ret)
--
2.33.0

2022-02-24 19:20:38

by Marc Zyngier

Subject: Re: [PATCH v2 0/7] irqchip/apple-aic: Add support for AICv2

On Thu, 24 Feb 2022 18:26:41 +0000,
Mark Rutland <[email protected]> wrote:
>
> On Thu, Feb 24, 2022 at 10:07:34PM +0900, Hector Martin wrote:
> > Hi folks,
>
> Hi Hector,
>
> > In the t6000/t6001 (M1 Pro / Max) SoCs, Apple introduced a new version
> > of their interrupt controller. This is a significant departure from
> > AICv1 and seems designed to better scale to larger chips. This series
> > adds support for it to the existing AIC driver.
> >
> > Gone are CPU affinities; instead there seems to be some kind of
> > "automagic" dispatch to willing CPU cores, and cores can also opt-out
> > via an IMP-DEF sysreg (!). Right now the bootloader just sets up all
> > cores to accept IRQs, and we ignore all this and let the magic
> > algorithm pick a CPU to accept the IRQ.
>
> Maybe that's ok for the set of peripherals attached, but in general that
> violates existing expectations regarding affinity, and I fear there'll
> be some subtle brokenness resulting from this automatic target
> selection.
>
> For example, in the perf events subsystem there are PMU drivers (even
> those for "uncore" or "system" devices which are shared by many/all
> CPUs) which rely on a combination of interrupt affinity and local IRQ
> masking (without any other locking) to provide exclusion between a PMU's
> IRQ handler and any other management operations for that PMU (which are
> all handled from the same CPU).

It will definitely break anything that relies on managed interrupts,
where the kernel expects to allocate interrupts that have a strict
affinity. Drivers using this feature can legitimately expect that they
can keep their state in per-CPU pointers, and that obviously breaks.

This may affect any PCIe device with more than a couple of queues.
Maybe users of this HW do not care (yet), but we'll have to find a way
to tell drivers of the limitation.

M.

--
Without deviation from the norm, progress is not possible.

2022-02-24 19:46:07

by Mark Rutland

Subject: Re: [PATCH v2 0/7] irqchip/apple-aic: Add support for AICv2

On Thu, Feb 24, 2022 at 10:07:34PM +0900, Hector Martin wrote:
> Hi folks,

Hi Hector,

> In the t6000/t6001 (M1 Pro / Max) SoCs, Apple introduced a new version
> of their interrupt controller. This is a significant departure from
> AICv1 and seems designed to better scale to larger chips. This series
> adds support for it to the existing AIC driver.
>
> Gone are CPU affinities; instead there seems to be some kind of
> "automagic" dispatch to willing CPU cores, and cores can also opt-out
> via an IMP-DEF sysreg (!). Right now the bootloader just sets up all
> cores to accept IRQs, and we ignore all this and let the magic
> algorithm pick a CPU to accept the IRQ.

Maybe that's ok for the set of peripherals attached, but in general that
violates existing expectations regarding affinity, and I fear there'll
be some subtle brokenness resulting from this automatic target
selection.

For example, in the perf events subsystem there are PMU drivers (even
those for "uncore" or "system" devices which are shared by many/all
CPUs) which rely on a combination of interrupt affinity and local IRQ
masking (without any other locking) to provide exclusion between a PMU's
IRQ handler and any other management operations for that PMU (which are
all handled from the same CPU).

> In the future, we might start making use of these finer-grained
> capabilities for e.g. better real-time guarantees (CPUs running RT
> threads might opt out of IRQs).

What mechanism does the HW have for affinity selection? The wording
above makes it sound like each CPU has to opt-out rather than having a
central affinity selection. Is there a mechanism to select a single
target?

Thanks,
Mark.

> Legacy IPI support is also gone, so this implements Fast IPI support.
> Fast IPIs are implemented entirely in the CPU core complexes, using
> FIQs and IMP-DEF sysregs. This is also supported on t8103/M1, so we
> enable it there too, but we keep the legacy AIC IPI codepath in case
> it is useful for backporting to older chips.
>
> This also adds support for multi-die AIC2 controllers. While no
> multi-die products exist yet, the AIC2 in t600x is built to support
> up to 2 dies, and it's pretty clear how it works, so let's implement
> it. If we're lucky, when multi-die products roll around, this will
> let us support them with only DT changes. In order to support the
> extra die dimension, this introduces a 4-argument IRQ phandle form
> (3-argument is always supported and just implies die 0).
>
> All register offsets are computed based on capability register values,
> which should allow forward-compatibility with future AIC2 variants...
> except for one. For some inexplicable reason, the number of actually
> implemented die register sets is nowhere to be found (t600x has 2,
> but claims 1 die in use and 8 dies max, neither of which is what we
> need), and this is necessary to compute the event register offset,
> which is page-aligned after the die register sets. We have no choice
> but to stick this offset in the device tree... which is the same thing
> Apple do in their ADT.
>
> Changes since v1:
> - Split off the DT binding
> - Changed fast-ipi codepath selection to use a static key for performance
> - Added fix for PCI driver to support the new 4-cell IRQ form
> - Minor style / review feedback fixes
>
> Hector Martin (7):
> PCI: apple: Change MSI handling to handle 4-cell AIC fwspec form
> dt-bindings: interrupt-controller: apple,aic2: New binding for AICv2
> irqchip/apple-aic: Add Fast IPI support
> irqchip/apple-aic: Switch to irq_domain_create_tree and sparse hwirqs
> irqchip/apple-aic: Dynamically compute register offsets
> irqchip/apple-aic: Support multiple dies
> irqchip/apple-aic: Add support for AICv2
>
> .../interrupt-controller/apple,aic2.yaml | 99 ++++
> MAINTAINERS | 2 +-
> drivers/irqchip/irq-apple-aic.c | 432 +++++++++++++++---
> drivers/pci/controller/pcie-apple.c | 2 +-
> 4 files changed, 458 insertions(+), 77 deletions(-)
> create mode 100644 Documentation/devicetree/bindings/interrupt-controller/apple,aic2.yaml
>
> --
> 2.33.0
>

2022-02-25 05:22:53

by Hector Martin

Subject: Re: [PATCH v2 0/7] irqchip/apple-aic: Add support for AICv2

On 25/02/2022 04.06, Marc Zyngier wrote:
> On Thu, 24 Feb 2022 18:26:41 +0000,
> Mark Rutland <[email protected]> wrote:
>>
>> On Thu, Feb 24, 2022 at 10:07:34PM +0900, Hector Martin wrote:
>>> Hi folks,
>>
>> Hi Hector,
>>
>>> In the t6000/t6001 (M1 Pro / Max) SoCs, Apple introduced a new version
>>> of their interrupt controller. This is a significant departure from
>>> AICv1 and seems designed to better scale to larger chips. This series
>>> adds support for it to the existing AIC driver.
>>>
>>> Gone are CPU affinities; instead there seems to be some kind of
>>> "automagic" dispatch to willing CPU cores, and cores can also opt-out
>>> via an IMP-DEF sysreg (!). Right now the bootloader just sets up all
>>> cores to accept IRQs, and we ignore all this and let the magic
>>> algorithm pick a CPU to accept the IRQ.
>>
>> Maybe that's ok for the set of peripherals attached, but in general that
>> violates existing expectations regarding affinity, and I fear there'll
>> be some subtle brokenness resulting from this automatic target
>> selection.
>>
>> For example, in the perf events subsystem there are PMU drivers (even
>> those for "uncore" or "system" devices which are shared by many/all
>> CPUs) which rely on a combination of interrupt affinity and local IRQ
>> masking (without any other locking) to provide exclusion between a PMU's
>> IRQ handler and any other management operations for that PMU (which are
>> all handled from the same CPU).
>
> It will definitely break anything that relies on managed interrupts,
> where the kernel expects to allocate interrupts that have a strict
> affinity. Drivers using this feature can legitimately expect that they
> can keep their state in per-CPU pointers, and that obviously breaks.
>
> This may affect any PCIe device with more than a couple of queues.
> Maybe users of this HW do not care (yet), but we'll have to find a way
> to tell drivers of the limitation.

Yes, we already had a brief discussion about this in the v1 thread:

https://lore.kernel.org/linux-arm-kernel/[email protected]/

TL;DR: there is no explicit per-IRQ affinity control, nor does a hidden
one seem possible, since there just aren't enough bits for it in the
per-IRQ registers. AICv1 had that, but AICv2 got rid of it in favor of
heuristic magic and coarse per-CPU controls.

This hasn't actually been fully tested yet, but the current hypothesis
is that the mapping goes:

1 IRQ -> group (0-7) -> priority (0-3?) -> 1 CPU (local priority threshold)

This is based on the fact that the per-IRQ group field is 3 bits, and
the per-CPU mask IMP-DEF sysreg is 2 bits. There may or may not be
per-IRQ cluster controls. But that still leaves all IRQs funnelled into,
at most, 3-4 classes per CPU cluster, and 8 groups globally, so there's
no way to implement proper per-IRQ affinity (since we have 10 CPUs on
these platforms).

My guess is Apple has bet on heuristic magic to optimize IRQ delivery
for power, avoiding wakeups of (deeply?) sleeping CPUs on low-priority
events, and has forgone the strict per-CPU queues that many drivers use
to optimize for performance. This makes some sense: these are largely
consumer/prosumer platforms, many of them battery-powered, not 128-CPU
datacenter monsters with multiple 10GbE interfaces, so they can
probably get away without the hard multiqueue stuff.

This won't be an issue for PMU interrupts (including the uncore PMU),
since those do not go through AIC per se but rather the FIQ path (which
is inherently per-CPU), same as the local timers. Marc's PMU support
patch set already takes care of adding support for those FIQ sources.
But it will indeed break some PCIe drivers for devices that users might
have arbitrarily attached through Thunderbolt.

Since we do not support Thunderbolt yet, I suggest we kick this can down
the road until we have test cases for how this breaks and how to fix it :-)

There are also other fun things to be done with the local CPU masking,
e.g. directing low-priority IRQs away from CPUs running real-time
threads. I definitely want to take a look in more detail at the controls
we *do* have, especially since I have a personal interest in RT for
audio production (and these platforms have no SMM/TEE, no latency
spikes, and fast cpufreq, woo!). But for now this works and brings up
the platform, so that yak is probably best shaved in the future. Let me
know if you're interested in having more discussions about RT-centric
features, though. I suspect we'll need some new kernel
mechanisms/interfaces to handle e.g. the CPU IMPDEF mask/prio stuff...

Aside, I wonder how they'll handle multi-die devices... for a single
die you can probably get away with no CPU pinning, but for
multi-die, are they going to do NUMA? If so, they'd want at least die
controls to avoid bouncing cache lines between dies too much... though
for some reason, I'm getting the feeling they're just going to
interleave memory and pretend it's UMA. Good chance we find out next
month...

--
Hector Martin ([email protected])
Public Key: https://mrcn.st/pub

2022-02-26 01:36:44

by Hector Martin

Subject: Re: [PATCH v2 7/7] irqchip/apple-aic: Add support for AICv2

On 26/02/2022 00.27, Marc Zyngier wrote:
> On Thu, 24 Feb 2022 13:07:41 +0000,
> Hector Martin <[email protected]> wrote:
>> - /*
>> - * Make sure the kernel's idea of logical CPU order is the same as AIC's
>> - * If we ever end up with a mismatch here, we will have to introduce
>> - * a mapping table similar to what other irqchip drivers do.
>> - */
>> - WARN_ON(aic_ic_read(aic_irqc, AIC_WHOAMI) != smp_processor_id());
>> + if (aic_irqc->info.version == 1) {
>> + /*
>> + * Make sure the kernel's idea of logical CPU order is the same as AIC's
>> + * If we ever end up with a mismatch here, we will have to introduce
>> + * a mapping table similar to what other irqchip drivers do.
>> + */
>> + WARN_ON(aic_ic_read(aic_irqc, AIC_WHOAMI) != smp_processor_id());
>
> Don't you have a similar issue with AICv2? Or is it that AICv2
> doesn't have this register?

No concept of individual CPUs in AICv2 at all, so no WHOAMI register
either :)

--
Hector Martin ([email protected])
Public Key: https://mrcn.st/pub

2022-02-26 01:50:00

by Marc Zyngier

Subject: Re: [PATCH v2 7/7] irqchip/apple-aic: Add support for AICv2

On Thu, 24 Feb 2022 13:07:41 +0000,
Hector Martin <[email protected]> wrote:
>
> Introduce support for the new AICv2 hardware block in t6000/t6001 SoCs.
>
> It seems these blocks are missing the information required to compute
> the event register offset in the capability registers, so we specify
> that in the DT.
>
> Signed-off-by: Hector Martin <[email protected]>
> ---
> drivers/irqchip/irq-apple-aic.c | 148 ++++++++++++++++++++++++++++----
> 1 file changed, 129 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/irqchip/irq-apple-aic.c b/drivers/irqchip/irq-apple-aic.c
> index 93c622435ba2..602c8b274170 100644
> --- a/drivers/irqchip/irq-apple-aic.c
> +++ b/drivers/irqchip/irq-apple-aic.c
> @@ -103,6 +103,57 @@
>
> #define AIC_MAX_IRQ 0x400
>
> +/*
> + * AIC v2 registers (MMIO)
> + */
> +
> +#define AIC2_VERSION 0x0000
> +#define AIC2_VERSION_VER GENMASK(7, 0)
> +
> +#define AIC2_INFO1 0x0004
> +#define AIC2_INFO1_NR_IRQ GENMASK(15, 0)
> +#define AIC2_INFO1_LAST_DIE GENMASK(27, 24)
> +
> +#define AIC2_INFO2 0x0008
> +
> +#define AIC2_INFO3 0x000c
> +#define AIC2_INFO3_MAX_IRQ GENMASK(15, 0)
> +#define AIC2_INFO3_MAX_DIE GENMASK(27, 24)
> +
> +#define AIC2_RESET 0x0010
> +#define AIC2_RESET_RESET BIT(0)
> +
> +#define AIC2_CONFIG 0x0014
> +#define AIC2_CONFIG_ENABLE BIT(0)
> +#define AIC2_CONFIG_PREFER_PCPU BIT(28)
> +
> +#define AIC2_TIMEOUT 0x0028
> +#define AIC2_CLUSTER_PRIO 0x0030
> +#define AIC2_DELAY_GROUPS 0x0100
> +
> +#define AIC2_IRQ_CFG 0x2000
> +
> +/*
> + * AIC2 registers are laid out like this, starting at AIC2_IRQ_CFG:
> + *
> + * Repeat for each die:
> + * IRQ_CFG: u32 * MAX_IRQS
> + * SW_SET: u32 * (MAX_IRQS / 32)
> + * SW_CLR: u32 * (MAX_IRQS / 32)
> + * MASK_SET: u32 * (MAX_IRQS / 32)
> + * MASK_CLR: u32 * (MAX_IRQS / 32)
> + * HW_STATE: u32 * (MAX_IRQS / 32)
> + *
> + * This is followed by a set of event registers, each 16K page aligned.
> + * The first one is the AP event register we will use. Unfortunately,
> + * the actual implemented die count is not specified anywhere in the
> + * capability registers, so we have to explicitly specify the event
> + * register offset in the device tree to remain forward-compatible.
> + */
> +
> +#define AIC2_IRQ_CFG_TARGET GENMASK(3, 0)
> +#define AIC2_IRQ_CFG_DELAY_IDX GENMASK(7, 5)
> +
> #define MASK_REG(x) (4 * ((x) >> 5))
> #define MASK_BIT(x) BIT((x) & GENMASK(4, 0))
>
> @@ -193,6 +244,7 @@ struct aic_info {
> /* Register offsets */
> u32 event;
> u32 target_cpu;
> + u32 irq_cfg;
> u32 sw_set;
> u32 sw_clr;
> u32 mask_set;
> @@ -220,6 +272,14 @@ static const struct aic_info aic1_fipi_info = {
> .fast_ipi = true,
> };
>
> +static const struct aic_info aic2_info = {
> + .version = 2,
> +
> + .irq_cfg = AIC2_IRQ_CFG,
> +
> + .fast_ipi = true,
> +};
> +
> static const struct of_device_id aic_info_match[] = {
> {
> .compatible = "apple,t8103-aic",
> @@ -229,6 +289,10 @@ static const struct of_device_id aic_info_match[] = {
> .compatible = "apple,aic",
> .data = &aic1_info,
> },
> + {
> + .compatible = "apple,aic2",
> + .data = &aic2_info,
> + },
> {}
> };
>
> @@ -373,6 +437,14 @@ static struct irq_chip aic_chip = {
> .irq_set_type = aic_irq_set_type,
> };
>
> +static struct irq_chip aic2_chip = {
> + .name = "AIC2",
> + .irq_mask = aic_irq_mask,
> + .irq_unmask = aic_irq_unmask,
> + .irq_eoi = aic_irq_eoi,
> + .irq_set_type = aic_irq_set_type,
> +};
> +
> /*
> * FIQ irqchip
> */
> @@ -529,10 +601,15 @@ static struct irq_chip fiq_chip = {
> static int aic_irq_domain_map(struct irq_domain *id, unsigned int irq,
> irq_hw_number_t hw)
> {
> + struct aic_irq_chip *ic = id->host_data;
> u32 type = FIELD_GET(AIC_EVENT_TYPE, hw);
> + struct irq_chip *chip = &aic_chip;
> +
> + if (ic->info.version == 2)
> + chip = &aic2_chip;
>
> if (type == AIC_EVENT_TYPE_IRQ) {
> - irq_domain_set_info(id, irq, hw, &aic_chip, id->host_data,
> + irq_domain_set_info(id, irq, hw, chip, id->host_data,
> handle_fasteoi_irq, NULL, NULL);
> irqd_set_single_target(irq_desc_get_irq_data(irq_to_desc(irq)));
> } else {
> @@ -888,24 +965,26 @@ static int aic_init_cpu(unsigned int cpu)
> /* Commit all of the above */
> isb();
>
> - /*
> - * Make sure the kernel's idea of logical CPU order is the same as AIC's
> - * If we ever end up with a mismatch here, we will have to introduce
> - * a mapping table similar to what other irqchip drivers do.
> - */
> - WARN_ON(aic_ic_read(aic_irqc, AIC_WHOAMI) != smp_processor_id());
> + if (aic_irqc->info.version == 1) {
> + /*
> + * Make sure the kernel's idea of logical CPU order is the same as AIC's
> + * If we ever end up with a mismatch here, we will have to introduce
> + * a mapping table similar to what other irqchip drivers do.
> + */
> + WARN_ON(aic_ic_read(aic_irqc, AIC_WHOAMI) != smp_processor_id());

Don't you have a similar issue with AICv2? Or is it that AICv2
doesn't have this register?

Thanks,

M.

--
Without deviation from the norm, progress is not possible.