2023-02-09 13:31:12

by Johan Hovold

Subject: [PATCH v5 19/19] irqdomain: Switch to per-domain locking

The IRQ domain structures are currently protected by the global
irq_domain_mutex. Switch to using more fine-grained per-domain locking,
which can speed up parallel probing by reducing lock contention.

On a recent arm64 laptop, the total time spent waiting for the locks
during boot drops from 160 to 40 ms on average, while the maximum
aggregate wait time drops from 550 to 90 ms over ten runs for example.

Note that the domain lock of the root domain (innermost domain) must be
used for hierarchical domains. For non-hierarchical domains (as for root
domains), the new root pointer is set to the domain itself so that
domain->root->mutex can be used in shared code paths.

Also note that hierarchical domains should be constructed using
irq_domain_create_hierarchy() (or irq_domain_add_hierarchy()) to avoid
poking at irqdomain internals. As a safeguard, the lockdep assertion in
irq_domain_set_mapping() will catch any offenders that fail to set the
root domain pointer.

Tested-by: Hsin-Yi Wang <[email protected]>
Tested-by: Mark-PK Tsai <[email protected]>
Signed-off-by: Johan Hovold <[email protected]>
---
include/linux/irqdomain.h | 4 +++
kernel/irq/irqdomain.c | 61 +++++++++++++++++++++++++--------------
2 files changed, 44 insertions(+), 21 deletions(-)

diff --git a/include/linux/irqdomain.h b/include/linux/irqdomain.h
index 16399de00b48..cad47737a052 100644
--- a/include/linux/irqdomain.h
+++ b/include/linux/irqdomain.h
@@ -125,6 +125,8 @@ struct irq_domain_chip_generic;
* core code.
* @flags: Per irq_domain flags
* @mapcount: The number of mapped interrupts
+ * @mutex: Domain lock, hierarhical domains use root domain's lock
+ * @root: Pointer to root domain, or containing structure if non-hierarchical
*
* Optional elements:
* @fwnode: Pointer to firmware node associated with the irq_domain. Pretty easy
@@ -152,6 +154,8 @@ struct irq_domain {
void *host_data;
unsigned int flags;
unsigned int mapcount;
+ struct mutex mutex;
+ struct irq_domain *root;

/* Optional data */
struct fwnode_handle *fwnode;
diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index 804d316329c8..c96aa5e5a94b 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -226,6 +226,17 @@ struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int s

domain->revmap_size = size;

+ /*
+ * Hierarchical domains use the domain lock of the root domain
+ * (innermost domain).
+ *
+ * For non-hierarchical domains (as for root domains), the root
+ * pointer is set to the domain itself so that domain->root->mutex
+ * can be used in shared code paths.
+ */
+ mutex_init(&domain->mutex);
+ domain->root = domain;
+
irq_domain_check_hierarchy(domain);

mutex_lock(&irq_domain_mutex);
@@ -503,7 +514,7 @@ static bool irq_domain_is_nomap(struct irq_domain *domain)
static void irq_domain_clear_mapping(struct irq_domain *domain,
irq_hw_number_t hwirq)
{
- lockdep_assert_held(&irq_domain_mutex);
+ lockdep_assert_held(&domain->root->mutex);

if (irq_domain_is_nomap(domain))
return;
@@ -518,7 +529,11 @@ static void irq_domain_set_mapping(struct irq_domain *domain,
irq_hw_number_t hwirq,
struct irq_data *irq_data)
{
- lockdep_assert_held(&irq_domain_mutex);
+ /*
+ * This also makes sure that all domains point to the same root when
+ * called from irq_domain_insert_irq() for each domain in a hierarchy.
+ */
+ lockdep_assert_held(&domain->root->mutex);

if (irq_domain_is_nomap(domain))
return;
@@ -540,7 +555,7 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)

hwirq = irq_data->hwirq;

- mutex_lock(&irq_domain_mutex);
+ mutex_lock(&domain->mutex);

irq_set_status_flags(irq, IRQ_NOREQUEST);

@@ -562,7 +577,7 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
/* Clear reverse map for this hwirq */
irq_domain_clear_mapping(domain, hwirq);

- mutex_unlock(&irq_domain_mutex);
+ mutex_unlock(&domain->mutex);
}

static int irq_domain_associate_locked(struct irq_domain *domain, unsigned int virq,
@@ -612,9 +627,9 @@ int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
{
int ret;

- mutex_lock(&irq_domain_mutex);
+ mutex_lock(&domain->mutex);
ret = irq_domain_associate_locked(domain, virq, hwirq);
- mutex_unlock(&irq_domain_mutex);
+ mutex_unlock(&domain->mutex);

return ret;
}
@@ -731,7 +746,7 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
return 0;
}

- mutex_lock(&irq_domain_mutex);
+ mutex_lock(&domain->mutex);

/* Check if mapping already exists */
virq = irq_find_mapping(domain, hwirq);
@@ -742,7 +757,7 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,

virq = irq_create_mapping_affinity_locked(domain, hwirq, affinity);
out:
- mutex_unlock(&irq_domain_mutex);
+ mutex_unlock(&domain->mutex);

return virq;
}
@@ -811,7 +826,7 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
if (WARN_ON(type & ~IRQ_TYPE_SENSE_MASK))
type &= IRQ_TYPE_SENSE_MASK;

- mutex_lock(&irq_domain_mutex);
+ mutex_lock(&domain->root->mutex);

/*
* If we've already configured this interrupt,
@@ -864,11 +879,11 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
/* Store trigger type */
irqd_set_trigger_type(irq_data, type);
out:
- mutex_unlock(&irq_domain_mutex);
+ mutex_unlock(&domain->root->mutex);

return virq;
err:
- mutex_unlock(&irq_domain_mutex);
+ mutex_unlock(&domain->root->mutex);

return 0;
}
@@ -1132,6 +1147,7 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
else
domain = irq_domain_create_tree(fwnode, ops, host_data);
if (domain) {
+ domain->root = parent->root;
domain->parent = parent;
domain->flags |= flags;
}
@@ -1528,10 +1544,10 @@ int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
return -EINVAL;
}

- mutex_lock(&irq_domain_mutex);
+ mutex_lock(&domain->root->mutex);
ret = irq_domain_alloc_irqs_locked(domain, irq_base, nr_irqs, node, arg,
realloc, affinity);
- mutex_unlock(&irq_domain_mutex);
+ mutex_unlock(&domain->root->mutex);

return ret;
}
@@ -1542,7 +1558,7 @@ static void irq_domain_fix_revmap(struct irq_data *d)
{
void __rcu **slot;

- lockdep_assert_held(&irq_domain_mutex);
+ lockdep_assert_held(&d->domain->root->mutex);

if (irq_domain_is_nomap(d->domain))
return;
@@ -1608,7 +1624,7 @@ int irq_domain_push_irq(struct irq_domain *domain, int virq, void *arg)
if (!parent_irq_data)
return -ENOMEM;

- mutex_lock(&irq_domain_mutex);
+ mutex_lock(&domain->root->mutex);

/* Copy the original irq_data. */
*parent_irq_data = *irq_data;
@@ -1636,7 +1652,7 @@ int irq_domain_push_irq(struct irq_domain *domain, int virq, void *arg)
irq_domain_fix_revmap(parent_irq_data);
irq_domain_set_mapping(domain, irq_data->hwirq, irq_data);
error:
- mutex_unlock(&irq_domain_mutex);
+ mutex_unlock(&domain->root->mutex);

return rv;
}
@@ -1691,7 +1707,7 @@ int irq_domain_pop_irq(struct irq_domain *domain, int virq)
if (WARN_ON(!parent_irq_data))
return -EINVAL;

- mutex_lock(&irq_domain_mutex);
+ mutex_lock(&domain->root->mutex);

irq_data->parent_data = NULL;

@@ -1703,7 +1719,7 @@ int irq_domain_pop_irq(struct irq_domain *domain, int virq)

irq_domain_fix_revmap(irq_data);

- mutex_unlock(&irq_domain_mutex);
+ mutex_unlock(&domain->root->mutex);

kfree(parent_irq_data);

@@ -1719,17 +1735,20 @@ EXPORT_SYMBOL_GPL(irq_domain_pop_irq);
void irq_domain_free_irqs(unsigned int virq, unsigned int nr_irqs)
{
struct irq_data *data = irq_get_irq_data(virq);
+ struct irq_domain *domain;
int i;

if (WARN(!data || !data->domain || !data->domain->ops->free,
"NULL pointer, cannot free irq\n"))
return;

- mutex_lock(&irq_domain_mutex);
+ domain = data->domain;
+
+ mutex_lock(&domain->root->mutex);
for (i = 0; i < nr_irqs; i++)
irq_domain_remove_irq(virq + i);
- irq_domain_free_irqs_hierarchy(data->domain, virq, nr_irqs);
- mutex_unlock(&irq_domain_mutex);
+ irq_domain_free_irqs_hierarchy(domain, virq, nr_irqs);
+ mutex_unlock(&domain->root->mutex);

irq_domain_free_irq_data(virq, nr_irqs);
irq_free_descs(virq, nr_irqs);
--
2.39.1
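
To illustrate the construction pattern the commit message prescribes, here is
a minimal sketch of how a hypothetical driver would create a child domain
after this change; the example_* names and the size constant are placeholders,
not part of the patch. Going through irq_domain_create_hierarchy() lets the
core set domain->root from the parent, so the shared code paths can take
domain->root->mutex:

#include <linux/irqdomain.h>

#define EXAMPLE_NR_IRQS	32	/* hypothetical linear-map size */

static struct irq_domain *example_create_child(struct irq_domain *parent,
					       struct fwnode_handle *fwnode,
					       const struct irq_domain_ops *ops,
					       void *host_data)
{
	/*
	 * The helper sets domain->root = parent->root and domain->parent
	 * before returning; open-coding those assignments instead would
	 * trip the lockdep assertion in irq_domain_set_mapping().
	 */
	return irq_domain_create_hierarchy(parent, 0, EXAMPLE_NR_IRQS,
					   fwnode, ops, host_data);
}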



2023-02-09 16:01:06

by Marc Zyngier

Subject: Re: [PATCH v5 19/19] irqdomain: Switch to per-domain locking

On Thu, 09 Feb 2023 13:23:23 +0000,
Johan Hovold <[email protected]> wrote:
>
> The IRQ domain structures are currently protected by the global
> irq_domain_mutex. Switch to using more fine-grained per-domain locking,
> which can speed up parallel probing by reducing lock contention.
>
> On a recent arm64 laptop, the total time spent waiting for the locks
> during boot drops from 160 to 40 ms on average, while the maximum
> aggregate wait time drops from 550 to 90 ms over ten runs for example.
>
> Note that the domain lock of the root domain (innermost domain) must be
> used for hierarchical domains. For non-hierarchical domains (as for root
> domains), the new root pointer is set to the domain itself so that
> domain->root->mutex can be used in shared code paths.
>
> Also note that hierarchical domains should be constructed using
> irq_domain_create_hierarchy() (or irq_domain_add_hierarchy()) to avoid
> poking at irqdomain internals. As a safeguard, the lockdep assertion in
> irq_domain_set_mapping() will catch any offenders that fail to set the
> root domain pointer.
>
> Tested-by: Hsin-Yi Wang <[email protected]>
> Tested-by: Mark-PK Tsai <[email protected]>
> Signed-off-by: Johan Hovold <[email protected]>
> ---
> include/linux/irqdomain.h | 4 +++
> kernel/irq/irqdomain.c | 61 +++++++++++++++++++++++++--------------
> 2 files changed, 44 insertions(+), 21 deletions(-)
>
> diff --git a/include/linux/irqdomain.h b/include/linux/irqdomain.h
> index 16399de00b48..cad47737a052 100644
> --- a/include/linux/irqdomain.h
> +++ b/include/linux/irqdomain.h
> @@ -125,6 +125,8 @@ struct irq_domain_chip_generic;
> * core code.
> * @flags: Per irq_domain flags
> * @mapcount: The number of mapped interrupts
> + * @mutex: Domain lock, hierarhical domains use root domain's lock

nit: hierarchical

> + * @root: Pointer to root domain, or containing structure if non-hierarchical
> *
> * Optional elements:
> * @fwnode: Pointer to firmware node associated with the irq_domain. Pretty easy
> @@ -152,6 +154,8 @@ struct irq_domain {
> void *host_data;
> unsigned int flags;
> unsigned int mapcount;
> + struct mutex mutex;
> + struct irq_domain *root;
>
> /* Optional data */
> struct fwnode_handle *fwnode;
> diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
> index 804d316329c8..c96aa5e5a94b 100644
> --- a/kernel/irq/irqdomain.c
> +++ b/kernel/irq/irqdomain.c
> @@ -226,6 +226,17 @@ struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int s
>
> domain->revmap_size = size;
>
> + /*
> + * Hierarchical domains use the domain lock of the root domain
> + * (innermost domain).
> + *
> + * For non-hierarchical domains (as for root domains), the root
> + * pointer is set to the domain itself so that domain->root->mutex
> + * can be used in shared code paths.
> + */
> + mutex_init(&domain->mutex);
> + domain->root = domain;
> +
> irq_domain_check_hierarchy(domain);
>
> mutex_lock(&irq_domain_mutex);
> @@ -503,7 +514,7 @@ static bool irq_domain_is_nomap(struct irq_domain *domain)
> static void irq_domain_clear_mapping(struct irq_domain *domain,
> irq_hw_number_t hwirq)
> {
> - lockdep_assert_held(&irq_domain_mutex);
> + lockdep_assert_held(&domain->root->mutex);
>
> if (irq_domain_is_nomap(domain))
> return;
> @@ -518,7 +529,11 @@ static void irq_domain_set_mapping(struct irq_domain *domain,
> irq_hw_number_t hwirq,
> struct irq_data *irq_data)
> {
> - lockdep_assert_held(&irq_domain_mutex);
> + /*
> + * This also makes sure that all domains point to the same root when
> + * called from irq_domain_insert_irq() for each domain in a hierarchy.
> + */
> + lockdep_assert_held(&domain->root->mutex);
>
> if (irq_domain_is_nomap(domain))
> return;
> @@ -540,7 +555,7 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
>
> hwirq = irq_data->hwirq;
>
> - mutex_lock(&irq_domain_mutex);
> + mutex_lock(&domain->mutex);

So you made that point about being able to uniformly use root->mutex,
which I think is a good invariant. Yet you hardly make use of it. Why?

>
> irq_set_status_flags(irq, IRQ_NOREQUEST);
>
> @@ -562,7 +577,7 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
> /* Clear reverse map for this hwirq */
> irq_domain_clear_mapping(domain, hwirq);
>
> - mutex_unlock(&irq_domain_mutex);
> + mutex_unlock(&domain->mutex);
> }
>
> static int irq_domain_associate_locked(struct irq_domain *domain, unsigned int virq,
> @@ -612,9 +627,9 @@ int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
> {
> int ret;
>
> - mutex_lock(&irq_domain_mutex);
> + mutex_lock(&domain->mutex);
> ret = irq_domain_associate_locked(domain, virq, hwirq);
> - mutex_unlock(&irq_domain_mutex);
> + mutex_unlock(&domain->mutex);
>
> return ret;
> }
> @@ -731,7 +746,7 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
> return 0;
> }
>
> - mutex_lock(&irq_domain_mutex);
> + mutex_lock(&domain->mutex);
>
> /* Check if mapping already exists */
> virq = irq_find_mapping(domain, hwirq);
> @@ -742,7 +757,7 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
>
> virq = irq_create_mapping_affinity_locked(domain, hwirq, affinity);
> out:
> - mutex_unlock(&irq_domain_mutex);
> + mutex_unlock(&domain->mutex);
>
> return virq;
> }
> @@ -811,7 +826,7 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
> if (WARN_ON(type & ~IRQ_TYPE_SENSE_MASK))
> type &= IRQ_TYPE_SENSE_MASK;
>
> - mutex_lock(&irq_domain_mutex);
> + mutex_lock(&domain->root->mutex);
>
> /*
> * If we've already configured this interrupt,
> @@ -864,11 +879,11 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
> /* Store trigger type */
> irqd_set_trigger_type(irq_data, type);
> out:
> - mutex_unlock(&irq_domain_mutex);
> + mutex_unlock(&domain->root->mutex);
>
> return virq;
> err:
> - mutex_unlock(&irq_domain_mutex);
> + mutex_unlock(&domain->root->mutex);
>
> return 0;
> }
> @@ -1132,6 +1147,7 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
> else
> domain = irq_domain_create_tree(fwnode, ops, host_data);
> if (domain) {
> + domain->root = parent->root;
> domain->parent = parent;
> domain->flags |= flags;

So we still have a bug here, as we have published a domain that we
keep updating. A parallel probing could find it in the interval and do
something completely wrong.

Splitting the work would help, as per the following patch.

Thanks,

M.

diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index c96aa5e5a94b..6c675ef672e9 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -126,23 +126,12 @@ void irq_domain_free_fwnode(struct fwnode_handle *fwnode)
}
EXPORT_SYMBOL_GPL(irq_domain_free_fwnode);

-/**
- * __irq_domain_add() - Allocate a new irq_domain data structure
- * @fwnode: firmware node for the interrupt controller
- * @size: Size of linear map; 0 for radix mapping only
- * @hwirq_max: Maximum number of interrupts supported by controller
- * @direct_max: Maximum value of direct maps; Use ~0 for no limit; 0 for no
- * direct mapping
- * @ops: domain callbacks
- * @host_data: Controller private data pointer
- *
- * Allocates and initializes an irq_domain structure.
- * Returns pointer to IRQ domain, or NULL on failure.
- */
-struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int size,
- irq_hw_number_t hwirq_max, int direct_max,
- const struct irq_domain_ops *ops,
- void *host_data)
+static struct irq_domain *__irq_domain_create(struct fwnode_handle *fwnode,
+ unsigned int size,
+ irq_hw_number_t hwirq_max,
+ int direct_max,
+ const struct irq_domain_ops *ops,
+ void *host_data)
{
struct irqchip_fwid *fwid;
struct irq_domain *domain;
@@ -239,12 +228,44 @@ struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int s

irq_domain_check_hierarchy(domain);

+ return domain;
+}
+
+static void __irq_domain_publish(struct irq_domain *domain)
+{
mutex_lock(&irq_domain_mutex);
debugfs_add_domain_dir(domain);
list_add(&domain->link, &irq_domain_list);
mutex_unlock(&irq_domain_mutex);

pr_debug("Added domain %s\n", domain->name);
+}
+
+/**
+ * __irq_domain_add() - Allocate a new irq_domain data structure
+ * @fwnode: firmware node for the interrupt controller
+ * @size: Size of linear map; 0 for radix mapping only
+ * @hwirq_max: Maximum number of interrupts supported by controller
+ * @direct_max: Maximum value of direct maps; Use ~0 for no limit; 0 for no
+ * direct mapping
+ * @ops: domain callbacks
+ * @host_data: Controller private data pointer
+ *
+ * Allocates and initializes an irq_domain structure.
+ * Returns pointer to IRQ domain, or NULL on failure.
+ */
+struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int size,
+ irq_hw_number_t hwirq_max, int direct_max,
+ const struct irq_domain_ops *ops,
+ void *host_data)
+{
+ struct irq_domain *domain;
+
+ domain = __irq_domain_create(fwnode, size, hwirq_max, direct_max,
+ ops, host_data);
+ if (domain)
+ __irq_domain_publish(domain);
+
return domain;
}
EXPORT_SYMBOL_GPL(__irq_domain_add);
@@ -1143,13 +1164,16 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
struct irq_domain *domain;

if (size)
- domain = irq_domain_create_linear(fwnode, size, ops, host_data);
+ domain = __irq_domain_create(fwnode, size, size, 0, ops, host_data);
else
- domain = irq_domain_create_tree(fwnode, ops, host_data);
+ domain = __irq_domain_create(fwnode, 0, ~0, 0, ops, host_data);
+
if (domain) {
domain->root = parent->root;
domain->parent = parent;
domain->flags |= flags;
+
+ __irq_domain_publish(domain);
}

return domain;

--
Without deviation from the norm, progress is not possible.

2023-02-10 09:55:31

by Johan Hovold

Subject: Re: [PATCH v5 19/19] irqdomain: Switch to per-domain locking

On Thu, Feb 09, 2023 at 04:00:55PM +0000, Marc Zyngier wrote:
> On Thu, 09 Feb 2023 13:23:23 +0000,
> Johan Hovold <[email protected]> wrote:
> >
> > The IRQ domain structures are currently protected by the global
> > irq_domain_mutex. Switch to using more fine-grained per-domain locking,
> > which can speed up parallel probing by reducing lock contention.
> >
> > On a recent arm64 laptop, the total time spent waiting for the locks
> > during boot drops from 160 to 40 ms on average, while the maximum
> > aggregate wait time drops from 550 to 90 ms over ten runs for example.
> >
> > Note that the domain lock of the root domain (innermost domain) must be
> > used for hierarchical domains. For non-hierarchical domains (as for root
> > domains), the new root pointer is set to the domain itself so that
> > domain->root->mutex can be used in shared code paths.
> >
> > Also note that hierarchical domains should be constructed using
> > irq_domain_create_hierarchy() (or irq_domain_add_hierarchy()) to avoid
> > poking at irqdomain internals. As a safeguard, the lockdep assertion in
> > irq_domain_set_mapping() will catch any offenders that fail to set the
> > root domain pointer.
> >
> > Tested-by: Hsin-Yi Wang <[email protected]>
> > Tested-by: Mark-PK Tsai <[email protected]>
> > Signed-off-by: Johan Hovold <[email protected]>
> > ---
> > include/linux/irqdomain.h | 4 +++
> > kernel/irq/irqdomain.c | 61 +++++++++++++++++++++++++--------------
> > 2 files changed, 44 insertions(+), 21 deletions(-)
> >
> > diff --git a/include/linux/irqdomain.h b/include/linux/irqdomain.h
> > index 16399de00b48..cad47737a052 100644
> > --- a/include/linux/irqdomain.h
> > +++ b/include/linux/irqdomain.h
> > @@ -125,6 +125,8 @@ struct irq_domain_chip_generic;
> > * core code.
> > * @flags: Per irq_domain flags
> > * @mapcount: The number of mapped interrupts
> > + * @mutex: Domain lock, hierarhical domains use root domain's lock
>
> nit: hierarchical
>
> > + * @root: Pointer to root domain, or containing structure if non-hierarchical

> > @@ -226,6 +226,17 @@ struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int s
> >
> > domain->revmap_size = size;
> >
> > + /*
> > + * Hierarchical domains use the domain lock of the root domain
> > + * (innermost domain).
> > + *
> > + * For non-hierarchical domains (as for root domains), the root
> > + * pointer is set to the domain itself so that domain->root->mutex
> > + * can be used in shared code paths.
> > + */
> > + mutex_init(&domain->mutex);
> > + domain->root = domain;
> > +
> > irq_domain_check_hierarchy(domain);
> >
> > mutex_lock(&irq_domain_mutex);

> > @@ -518,7 +529,11 @@ static void irq_domain_set_mapping(struct irq_domain *domain,
> > irq_hw_number_t hwirq,
> > struct irq_data *irq_data)
> > {
> > - lockdep_assert_held(&irq_domain_mutex);
> > + /*
> > + * This also makes sure that all domains point to the same root when
> > + * called from irq_domain_insert_irq() for each domain in a hierarchy.
> > + */
> > + lockdep_assert_held(&domain->root->mutex);
> >
> > if (irq_domain_is_nomap(domain))
> > return;
> > @@ -540,7 +555,7 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
> >
> > hwirq = irq_data->hwirq;
> >
> > - mutex_lock(&irq_domain_mutex);
> > + mutex_lock(&domain->mutex);
>
> So you made that point about being able to uniformly use root->mutex,
> which I think is a good invariant. Yet you hardly make use of it. Why?

I went back and forth over that a bit, but decided to only use
domain->root->mutex in paths that can be called for hierarchical
domains (i.e. the "shared code paths" mentioned above).

Using it in paths that are clearly only called for non-hierarchical
domains where domain->root == domain felt a bit lazy.

The counter argument is of course that using domain->root->mutex allows
people to think less about the code they are changing, but that's not
necessarily always a good thing.

Also note that the lockdep asserts in the revmap helpers would catch
anyone using domain->mutex where they should not (i.e. using
domain->mutex for a hierarchical domain).
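
To make the distinction concrete, a sketch of the two locking patterns being
discussed; the example_* helpers are illustrative only and mirror the hunks
quoted above:

#include <linux/irqdomain.h>
#include <linux/mutex.h>

static void example_shared_path(struct irq_domain *domain)
{
	/* May be reached for hierarchical domains: take the root lock. */
	mutex_lock(&domain->root->mutex);
	/* ... mapping/revmap updates ... */
	mutex_unlock(&domain->root->mutex);
}

static void example_nonhierarchical_path(struct irq_domain *domain)
{
	/*
	 * Only reachable for non-hierarchical domains, where
	 * domain->root == domain, so this is equivalent to taking
	 * domain->root->mutex.
	 */
	mutex_lock(&domain->mutex);
	/* ... */
	mutex_unlock(&domain->mutex);
}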

> > irq_set_status_flags(irq, IRQ_NOREQUEST);
> >
> > @@ -562,7 +577,7 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
> > /* Clear reverse map for this hwirq */
> > irq_domain_clear_mapping(domain, hwirq);
> >
> > - mutex_unlock(&irq_domain_mutex);
> > + mutex_unlock(&domain->mutex);
> > }
> >
> > static int irq_domain_associate_locked(struct irq_domain *domain, unsigned int virq,
> > @@ -612,9 +627,9 @@ int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
> > {
> > int ret;
> >
> > - mutex_lock(&irq_domain_mutex);
> > + mutex_lock(&domain->mutex);
> > ret = irq_domain_associate_locked(domain, virq, hwirq);
> > - mutex_unlock(&irq_domain_mutex);
> > + mutex_unlock(&domain->mutex);
> >
> > return ret;
> > }
> > @@ -731,7 +746,7 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
> > return 0;
> > }
> >
> > - mutex_lock(&irq_domain_mutex);
> > + mutex_lock(&domain->mutex);
> >
> > /* Check if mapping already exists */
> > virq = irq_find_mapping(domain, hwirq);
> > @@ -742,7 +757,7 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
> >
> > virq = irq_create_mapping_affinity_locked(domain, hwirq, affinity);
> > out:
> > - mutex_unlock(&irq_domain_mutex);
> > + mutex_unlock(&domain->mutex);
> >
> > return virq;
> > }
> > @@ -811,7 +826,7 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
> > if (WARN_ON(type & ~IRQ_TYPE_SENSE_MASK))
> > type &= IRQ_TYPE_SENSE_MASK;
> >
> > - mutex_lock(&irq_domain_mutex);
> > + mutex_lock(&domain->root->mutex);
> >
> > /*
> > * If we've already configured this interrupt,
> > @@ -864,11 +879,11 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
> > /* Store trigger type */
> > irqd_set_trigger_type(irq_data, type);
> > out:
> > - mutex_unlock(&irq_domain_mutex);
> > + mutex_unlock(&domain->root->mutex);
> >
> > return virq;
> > err:
> > - mutex_unlock(&irq_domain_mutex);
> > + mutex_unlock(&domain->root->mutex);
> >
> > return 0;
> > }
> > @@ -1132,6 +1147,7 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
> > else
> > domain = irq_domain_create_tree(fwnode, ops, host_data);
> > if (domain) {
> > + domain->root = parent->root;
> > domain->parent = parent;
> > domain->flags |= flags;
>
> So we still have a bug here, as we have published a domain that we
> keep updating. A parallel probing could find it in the interval and do
> something completely wrong.

Indeed we do, even if device links should make this harder to hit these
days.

> Splitting the work would help, as per the following patch.

Looks good to me. Do you want to submit that as a patch that I'll rebase
on or should I submit it as part of a v6?

Johan

2023-02-10 11:39:10

by Marc Zyngier

Subject: Re: [PATCH v5 19/19] irqdomain: Switch to per-domain locking

On Fri, 10 Feb 2023 09:56:03 +0000,
Johan Hovold <[email protected]> wrote:
>
> On Thu, Feb 09, 2023 at 04:00:55PM +0000, Marc Zyngier wrote:
> > On Thu, 09 Feb 2023 13:23:23 +0000,
> > Johan Hovold <[email protected]> wrote:
> > >
> > > The IRQ domain structures are currently protected by the global
> > > irq_domain_mutex. Switch to using more fine-grained per-domain locking,
> > > which can speed up parallel probing by reducing lock contention.
> > >
> > > On a recent arm64 laptop, the total time spent waiting for the locks
> > > during boot drops from 160 to 40 ms on average, while the maximum
> > > aggregate wait time drops from 550 to 90 ms over ten runs for example.
> > >
> > > Note that the domain lock of the root domain (innermost domain) must be
> > > used for hierarchical domains. For non-hierarchical domains (as for root
> > > domains), the new root pointer is set to the domain itself so that
> > > domain->root->mutex can be used in shared code paths.
> > >
> > > Also note that hierarchical domains should be constructed using
> > > irq_domain_create_hierarchy() (or irq_domain_add_hierarchy()) to avoid
> > > poking at irqdomain internals. As a safeguard, the lockdep assertion in
> > > irq_domain_set_mapping() will catch any offenders that fail to set the
> > > root domain pointer.
> > >
> > > Tested-by: Hsin-Yi Wang <[email protected]>
> > > Tested-by: Mark-PK Tsai <[email protected]>
> > > Signed-off-by: Johan Hovold <[email protected]>
> > > ---
> > > include/linux/irqdomain.h | 4 +++
> > > kernel/irq/irqdomain.c | 61 +++++++++++++++++++++++++--------------
> > > 2 files changed, 44 insertions(+), 21 deletions(-)
> > >
> > > diff --git a/include/linux/irqdomain.h b/include/linux/irqdomain.h
> > > index 16399de00b48..cad47737a052 100644
> > > --- a/include/linux/irqdomain.h
> > > +++ b/include/linux/irqdomain.h
> > > @@ -125,6 +125,8 @@ struct irq_domain_chip_generic;
> > > * core code.
> > > * @flags: Per irq_domain flags
> > > * @mapcount: The number of mapped interrupts
> > > + * @mutex: Domain lock, hierarhical domains use root domain's lock
> >
> > nit: hierarchical
> >
> > > + * @root: Pointer to root domain, or containing structure if non-hierarchical
>
> > > @@ -226,6 +226,17 @@ struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int s
> > >
> > > domain->revmap_size = size;
> > >
> > > + /*
> > > + * Hierarchical domains use the domain lock of the root domain
> > > + * (innermost domain).
> > > + *
> > > + * For non-hierarchical domains (as for root domains), the root
> > > + * pointer is set to the domain itself so that domain->root->mutex
> > > + * can be used in shared code paths.
> > > + */
> > > + mutex_init(&domain->mutex);
> > > + domain->root = domain;
> > > +
> > > irq_domain_check_hierarchy(domain);
> > >
> > > mutex_lock(&irq_domain_mutex);
>
> > > @@ -518,7 +529,11 @@ static void irq_domain_set_mapping(struct irq_domain *domain,
> > > irq_hw_number_t hwirq,
> > > struct irq_data *irq_data)
> > > {
> > > - lockdep_assert_held(&irq_domain_mutex);
> > > + /*
> > > + * This also makes sure that all domains point to the same root when
> > > + * called from irq_domain_insert_irq() for each domain in a hierarchy.
> > > + */
> > > + lockdep_assert_held(&domain->root->mutex);
> > >
> > > if (irq_domain_is_nomap(domain))
> > > return;
> > > @@ -540,7 +555,7 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
> > >
> > > hwirq = irq_data->hwirq;
> > >
> > > - mutex_lock(&irq_domain_mutex);
> > > + mutex_lock(&domain->mutex);
> >
> > So you made that point about being able to uniformly use root->mutex,
> > which I think is a good invariant. Yet you hardly make use of it. Why?
>
> I went back and forth over that a bit, but decided to only use
> domain->root->mutex in paths that can be called for hierarchical
> domains (i.e. the "shared code paths" mentioned above).
>
> Using it in paths that are clearly only called for non-hierarchical
> domains where domain->root == domain felt a bit lazy.

My concern here is that as this code gets further refactored, it may
become much harder to reason about what is the correct level of
locking.

> The counter argument is of course that using domain->root->mutex allows
> people to think less about the code they are changing, but that's not
> necessarily always a good thing.

Eventually, non-hierarchical domains should simply die and be replaced
with a single-level hierarchy. Having unified locking in place will
definitely make the required work clearer.

> Also note that the lockdep asserts in the revmap helpers would catch
> anyone using domain->mutex where they should not (i.e. using
> domain->mutex for a hierarchical domain).

Lockdep is great, but lockdep is a runtime thing. It doesn't help
reasoning about what gets locked when changing this code.

> > > @@ -1132,6 +1147,7 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
> > > else
> > > domain = irq_domain_create_tree(fwnode, ops, host_data);
> > > if (domain) {
> > > + domain->root = parent->root;
> > > domain->parent = parent;
> > > domain->flags |= flags;
> >
> > So we still have a bug here, as we have published a domain that we
> > keep updating. A parallel probing could find it in the interval and do
> > something completely wrong.
>
> Indeed we do, even if device links should make this harder to hit these
> days.
>
> > Splitting the work would help, as per the following patch.
>
> Looks good to me. Do you want to submit that as a patch that I'll rebase
> on or should I submit it as part of a v6?

Just take it directly.

Thanks,

M.

--
Without deviation from the norm, progress is not possible.

2023-02-10 12:57:12

by Johan Hovold

Subject: Re: [PATCH v5 19/19] irqdomain: Switch to per-domain locking

On Fri, Feb 10, 2023 at 11:38:58AM +0000, Marc Zyngier wrote:
> On Fri, 10 Feb 2023 09:56:03 +0000,
> Johan Hovold <[email protected]> wrote:
> >
> > On Thu, Feb 09, 2023 at 04:00:55PM +0000, Marc Zyngier wrote:
> > > On Thu, 09 Feb 2023 13:23:23 +0000,
> > > Johan Hovold <[email protected]> wrote:

> > I went back and forth over that a bit, but decided to only use
> > domain->root->mutex in paths that can be called for hierarchical
> > domains (i.e. the "shared code paths" mentioned above).
> >
> > Using it in paths that are clearly only called for non-hierarchical
> > domains where domain->root == domain felt a bit lazy.
>
> My concern here is that as this code gets further refactored, it may
> become much harder to reason about what is the correct level of
> locking.

Yeah, that's conceivable.

> > The counter argument is of course that using domain->root->mutex allows
> > people to think less about the code they are changing, but that's not
> > necessarily always a good thing.
>
> Eventually, non-hierarchical domains should simply die and be replaced
> with a single-level hierarchy. Having unified locking in place will
> definitely make the required work clearer.
>
> > Also note that the lockdep asserts in the revmap helpers would catch
> > anyone using domain->mutex where they should not (i.e. using
> > domain->mutex for a hierarchical domain).
>
> Lockdep is great, but lockdep is a runtime thing. It doesn't help
> reasoning about what gets locked when changing this code.

Contributors are expected to test their changes with lockdep enabled,
right?

But sure, using domain->root->mutex throughout may prevent
people from getting this wrong.

I'll update this for v6.

> > > > @@ -1132,6 +1147,7 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
> > > > else
> > > > domain = irq_domain_create_tree(fwnode, ops, host_data);
> > > > if (domain) {
> > > > + domain->root = parent->root;
> > > > domain->parent = parent;
> > > > domain->flags |= flags;
> > >
> > > So we still have a bug here, as we have published a domain that we
> > > keep updating. A parallel probing could find it in the interval and do
> > > something completely wrong.
> >
> > Indeed we do, even if device links should make this harder to hit these
> > days.
> >
> > > Splitting the work would help, as per the following patch.
> >
> > Looks good to me. Do you want to submit that as a patch that I'll rebase
> > on or should I submit it as part of a v6?
>
> Just take it directly.

Ok, thanks.

I guess this turns the "Use irq_domain_create_hierarchy()" patches into
fixes that should be backported as well.

But note that your proposed diff may not be sufficient to prevent
lookups from racing with domain registration generally. Many drivers
still update the bus token after the domain has been added (and, I just
noticed, some apparently still set flags after creating hierarchies,
e.g. amd_iommu_create_irq_domain).

It seems we'd need to expose a separate allocation and registration
interface, or at least pass in the bus token to a new combined
interface.
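
For illustration, one shape such a combined interface could take, building on
the __irq_domain_create()/__irq_domain_publish() split from the diff above;
irq_domain_add_tagged() is hypothetical and not an existing kernel API:

static struct irq_domain *irq_domain_add_tagged(struct fwnode_handle *fwnode,
						unsigned int size,
						irq_hw_number_t hwirq_max,
						const struct irq_domain_ops *ops,
						void *host_data,
						enum irq_domain_bus_token bus_token)
{
	struct irq_domain *domain;

	domain = __irq_domain_create(fwnode, size, hwirq_max, 0, ops,
				     host_data);
	if (!domain)
		return NULL;

	/* Fully initialise, including the bus token, before publishing. */
	domain->bus_token = bus_token;
	__irq_domain_publish(domain);

	return domain;
}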

Johan

2023-02-10 15:07:09

by Marc Zyngier

Subject: Re: [PATCH v5 19/19] irqdomain: Switch to per-domain locking

On Fri, 10 Feb 2023 12:57:40 +0000,
Johan Hovold <[email protected]> wrote:
>
> On Fri, Feb 10, 2023 at 11:38:58AM +0000, Marc Zyngier wrote:
> > On Fri, 10 Feb 2023 09:56:03 +0000,
> > Johan Hovold <[email protected]> wrote:
> > >
> > > On Thu, Feb 09, 2023 at 04:00:55PM +0000, Marc Zyngier wrote:
> > > > On Thu, 09 Feb 2023 13:23:23 +0000,
> > > > Johan Hovold <[email protected]> wrote:
>
> > > I went back and forth over that a bit, but decided to only use
> > > domain->root->mutex in paths that can be called for hierarchical
> > > domains (i.e. the "shared code paths" mentioned above).
> > >
> > > Using it in paths that are clearly only called for non-hierarchical
> > > domains where domain->root == domain felt a bit lazy.
> >
> > My concern here is that as this code gets further refactored, it may
> > become much harder to reason about what is the correct level of
> > locking.
>
> Yeah, that's conceivable.
>
> > > The counter argument is of course that using domain->root->mutex allows
> > > people to think less about the code they are changing, but that's not
> > > necessarily always a good thing.
> >
> > Eventually, non-hierarchical domains should simply die and be replaced
> > with a single-level hierarchy. Having unified locking in place will
> > definitely make the required work clearer.
> >
> > > Also note that the lockdep asserts in the revmap helpers would catch
> > > anyone using domain->mutex where they should not (i.e. using
> > > domain->mutex for a hierarchical domain).
> >
> > Lockdep is great, but lockdep is a runtime thing. It doesn't help
> > reasoning about what gets locked when changing this code.
>
> Contributors are expected to test their changes with lockdep enabled,
> right?
>
> But sure, using domain->root->mutex throughout may prevent
> people from getting this wrong.
>
> I'll update this for v6.
>
> > > > > @@ -1132,6 +1147,7 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
> > > > > else
> > > > > domain = irq_domain_create_tree(fwnode, ops, host_data);
> > > > > if (domain) {
> > > > > + domain->root = parent->root;
> > > > > domain->parent = parent;
> > > > > domain->flags |= flags;
> > > >
> > > > So we still have a bug here, as we have published a domain that we
> > > > keep updating. A parallel probing could find it in the interval and do
> > > > something completely wrong.
> > >
> > > Indeed we do, even if device links should make this harder to hit these
> > > days.
> > >
> > > > Splitting the work would help, as per the following patch.
> > >
> > > Looks good to me. Do you want to submit that as a patch that I'll rebase
> > > on or should I submit it as part of a v6?
> >
> > Just take it directly.
>
> Ok, thanks.
>
> I guess this turns the "Use irq_domain_create_hierarchy()" patches into
> fixes that should be backported as well.

Maybe. Backports are not my immediate concern.

> But note that your proposed diff may not be sufficient to prevent
> lookups from racing with domain registration generally. Many drivers
> still update the bus token after the domain has been added (and, I just
> noticed, some apparently still set flags after creating hierarchies,
> e.g. amd_iommu_create_irq_domain).

The bus token should only rarely be a problem, as it is often set on
an intermediate level which isn't directly looked up by anything else.
And if it did happen, it would probably result in the domain not
being found.

Flags, on the other hand, are more problematic. But I consider this a
driver bug which should be fixed independently.

> It seems we'd need to expose a separate allocation and registration
> interface, or at least pass in the bus token to a new combined
> interface.

Potentially, yes. But this could come later down the line. I'm more
concerned with getting this series into -next, as the merge window is
fast approaching.

Thanks,

M.

--
Without deviation from the norm, progress is not possible.

2023-02-11 11:35:00

by Johan Hovold

Subject: Re: [PATCH v5 19/19] irqdomain: Switch to per-domain locking

On Fri, Feb 10, 2023 at 03:06:37PM +0000, Marc Zyngier wrote:
> On Fri, 10 Feb 2023 12:57:40 +0000,
> Johan Hovold <[email protected]> wrote:
> >
> > On Fri, Feb 10, 2023 at 11:38:58AM +0000, Marc Zyngier wrote:
> > > On Fri, 10 Feb 2023 09:56:03 +0000,
> > > Johan Hovold <[email protected]> wrote:

> > > > > > @@ -1132,6 +1147,7 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
> > > > > > else
> > > > > > domain = irq_domain_create_tree(fwnode, ops, host_data);
> > > > > > if (domain) {
> > > > > > + domain->root = parent->root;
> > > > > > domain->parent = parent;
> > > > > > domain->flags |= flags;
> > > > >
> > > > > So we still have a bug here, as we have published a domain that we
> > > > > keep updating. A parallel probing could find it in the interval and do
> > > > > something completely wrong.
> > > >
> > > > Indeed we do, even if device links should make this harder to hit these
> > > > days.
> > > >
> > > > > Splitting the work would help, as per the following patch.
> > > >
> > > > Looks good to me. Do you want to submit that as a patch that I'll rebase
> > > > on or should I submit it as part of a v6?
> > >
> > > Just take it directly.
> >
> > Ok, thanks.

I've added a commit message and turned it into a patch to include in v6
now:

commit 3af395aa894c7df94ef2337e572e5e1710b4bbda (HEAD -> work)
Author: Marc Zyngier <[email protected]>
Date: Thu Feb 9 16:00:55 2023 +0000

irqdomain: Fix domain registration race

Hierarchical domains created using irq_domain_create_hierarchy() are
currently added to the domain list before having been fully initialised.

This specifically means that a racing allocation request might fail to
allocate irq data for the inner domains of a hierarchy in case the
parent domain pointer has not yet been set up.

Note that this is not really any issue for irqchip drivers that are
registered early via IRQCHIP_DECLARE() or IRQCHIP_ACPI_DECLARE(), but
could potentially cause trouble with drivers that are registered later
(e.g. when using IRQCHIP_PLATFORM_DRIVER_BEGIN(), gpiochip drivers,
etc.).

Fixes: afb7da83b9f4 ("irqdomain: Introduce helper function irq_domain_add_hierarchy()")
Cc: [email protected] # 3.19
...
[ johan: add a commit message ]
Signed-off-by: Johan Hovold <[email protected]>

Could you just give your SoB for the diff here so I can credit you as
author?

> > I guess this turns the "Use irq_domain_create_hierarchy()" patches into
> > fixes that should be backported as well.
>
> Maybe. Backports are not my immediate concern.

Turns out all of those drivers are registered early via
IRQCHIP_DECLARE() or IRQCHIP_ACPI_DECLARE() so there shouldn't really be
any risk of hitting this race for those.

> > But note that your proposed diff may not be sufficient to prevent
> > lookups from racing with domain registration generally. Many drivers
> > still update the bus token after the domain has been added (and, I just
> > noticed, some apparently still set flags after creating hierarchies,
> > e.g. amd_iommu_create_irq_domain).
>
> The bus token should only rarely be a problem, as it is often set on
> an intermediate level which isn't directly looked up by anything else.
> And if it did happen, it would probably result in the domain not
> being found.
>
> Flags, on the other hand, are more problematic. But I consider this a
> driver bug which should be fixed independently.

I agree.

> > It seems we'd need to expose a separate allocation and registration
> > interface, or at least pass in the bus token to a new combined
> > interface.
>
> Potentially, yes. But this could come later down the line. I'm more
> concerned with getting this series into -next, as the merge window is
> fast approaching.

I'll post a v6 first thing Monday if you can give me that SoB before
then.

Johan

2023-02-11 12:52:33

by Marc Zyngier

Subject: Re: [PATCH v5 19/19] irqdomain: Switch to per-domain locking

On Sat, 11 Feb 2023 11:35:32 +0000,
Johan Hovold <[email protected]> wrote:
>
> On Fri, Feb 10, 2023 at 03:06:37PM +0000, Marc Zyngier wrote:
> > On Fri, 10 Feb 2023 12:57:40 +0000,
> > Johan Hovold <[email protected]> wrote:
> > >
> > > On Fri, Feb 10, 2023 at 11:38:58AM +0000, Marc Zyngier wrote:
> > > > On Fri, 10 Feb 2023 09:56:03 +0000,
> > > > Johan Hovold <[email protected]> wrote:
>
> > > > > > > @@ -1132,6 +1147,7 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
> > > > > > > else
> > > > > > > domain = irq_domain_create_tree(fwnode, ops, host_data);
> > > > > > > if (domain) {
> > > > > > > + domain->root = parent->root;
> > > > > > > domain->parent = parent;
> > > > > > > domain->flags |= flags;
> > > > > >
> > > > > > So we still have a bug here, as we have published a domain that we
> > > > > > keep updating. A parallel probing could find it in the interval and do
> > > > > > something completely wrong.
> > > > >
> > > > > Indeed we do, even if device links should make this harder to hit these
> > > > > days.
> > > > >
> > > > > > Splitting the work would help, as per the following patch.
> > > > >
> > > > > Looks good to me. Do you want to submit that as a patch that I'll rebase
> > > > > on or should I submit it as part of a v6?
> > > >
> > > > Just take it directly.
> > >
> > > Ok, thanks.
>
> I've added a commit message and turned it into a patch to include in v6
> now:
>
> commit 3af395aa894c7df94ef2337e572e5e1710b4bbda (HEAD -> work)
> Author: Marc Zyngier <[email protected]>
> Date: Thu Feb 9 16:00:55 2023 +0000
>
> irqdomain: Fix domain registration race
>
> Hierarchical domains created using irq_domain_create_hierarchy() are
> currently added to the domain list before having been fully initialised.
>
> This specifically means that a racing allocation request might fail to
> allocate irq data for the inner domains of a hierarchy in case the
> parent domain pointer has not yet been set up.
>
> Note that this is not really any issue for irqchip drivers that are
> registered early via IRQCHIP_DECLARE() or IRQCHIP_ACPI_DECLARE(), but
> could potentially cause trouble with drivers that are registered later
> (e.g. when using IRQCHIP_PLATFORM_DRIVER_BEGIN(), gpiochip drivers,
> etc.).
>
> Fixes: afb7da83b9f4 ("irqdomain: Introduce helper function irq_domain_add_hierarchy()")
> Cc: [email protected] # 3.19
> ...
> [ johan: add a commit message ]
> Signed-off-by: Johan Hovold <[email protected]>
>
> Could you just give your SoB for the diff here so I can credit you as
> author?

Thanks for that. Feel free to add:

Signed-off-by: Marc Zyngier <[email protected]>

>
> > > I guess this turns the "Use irq_domain_create_hierarchy()" patches into
> > > fixes that should be backported as well.
> >
> > Maybe. Backports are not my immediate concern.
>
> Turns out all of those drivers are registered early via
> IRQCHIP_DECLARE() or IRQCHIP_ACPI_DECLARE() so there shouldn't really be
> any risk of hitting this race for those.
>
> > > But note that your proposed diff may not be sufficient to prevent
> > > lookups from racing with domain registration generally. Many drivers
> > > still update the bus token after the domain has been added (and, I just
> > > noticed, some apparently still set flags after creating hierarchies,
> > > e.g. amd_iommu_create_irq_domain).
> >
> > The bus token should only rarely be a problem, as it is often set on
> > an intermediate level which isn't directly looked up by anything else.
> > And if it did happen, it would probably result in the domain not
> > being found.
> >
> > Flags, on the other hand, are more problematic. But I consider this a
> > driver bug which should be fixed independently.
>
> I agree.
>
> > > It seems we'd need to expose a separate allocation and registration
> > > interface, or at least pass in the bus token to a new combined
> > > interface.
> >
> > Potentially, yes. But this could come later down the line. I'm more
> > concerned with getting this series into -next, as the merge window is
> > fast approaching.
>
> I'll post a v6 first thing Monday if you can give me that SoB before
> then.

You should be all set. Please post the series at your earliest
convenience, and I'll let it simmer in -next for a bit.

Thanks,

M.

--
Without deviation from the norm, progress is not possible.