2023-05-30 21:59:22

by Radu Rendec

Subject: [RFC PATCH 0/5] irq: sysfs interface improvements for SMP affinity control

This patch set implements new sysfs interfaces that facilitate SMP
affinity control of chained interrupts. It follows the guidelines in
https://lore.kernel.org/all/[email protected]/ with slight
deviations, which are explained below.

The assumption is that irqbalance must be aware of the chained interrupt
topology regardless of how it is exposed to userspace, for the following
reasons:
- Interrupt counters are not updated for the parent interrupt. Counters
  must be read separately for each of the chained interrupts and summed
  up to assess the CPU usage impact of the group as a whole.
- The affinity setting is shared by all multiplexed interrupts (and the
  parent interrupt) and cannot be changed individually.

Since irqbalance must be aware of the topology anyway, it is easier to
move parts of the problem there and reduce the complexity of the kernel
changes.
- Instead of creating a new affinity interface for chained interrupts
  that has different semantics from the existing procfs interface (and
  changes the affinity of the parent interrupt in the case of muxed
  interrupts), it is easier to let irqbalance set the affinity of the
  parent interrupt by itself (since it already knows who the parent is).
- Tracking groups of interrupts in the kernel creates additional
  synchronization challenges that are otherwise unnecessary. The kernel
  already has a (struct irq_desc).parent_irq field that can be (re)used
  for this purpose (see below).

Brief description of the patches in this set:
- Patch 1/5 makes the (struct irq_desc).parent_irq field available
  unconditionally. So far, it has been used for IRQ-resend and depended
  on CONFIG_HARDIRQS_SW_RESEND. But it can be (re)used to track chained
  interrupt parents for the general use case, without any changes to the
  existing IRQ chip drivers.
- Patch 2/5 is trivial and just exposes (struct irq_desc).parent_irq in
  debugfs.
- Patch 3/5 exposes the chained interrupt topology in sysfs in two ways:
  the muxed_irqs directory (as described in the original email thread)
  and the parent_irq symlink. From a userspace perspective, they are
  redundant. However, in the first case the synchronization is likely
  incomplete/broken and not so easy to fix.
- Patch 4/5 moves the SMP affinity write handlers from procfs code to
  generic code, with the intention to reuse them for a new sysfs
  interface.
- Patch 5/5 creates a sysfs interface for the affinity, with identical
  semantics to the existing procfs interface. The sole purpose is to
  allow userspace (irqbalance) to control the affinity of the parent
  interrupt, which is typically *not* visible in procfs.

The only required change to existing chained IRQ chip drivers in order
to support the new affinity control is to call irq_set_parent() in their
.map domain op. If they use the newer hierarchical API, they should call
irq_set_parent() in their .alloc domain op instead. This doesn't affect
the existing procfs based affinity interface in any way.

A few IRQ chip drivers already call irq_set_parent() in their .map
domain op to implement IRQ-resend. No change is required to those
drivers to support the new affinity control.

Last but not least, it turns out that hierarchical domains are entirely
out of the scope of these changes (unless chained interrupts are used
along the path). In the case of hierarchical domains, each interrupt in
the outermost domain has a *single* corresponding Linux virq (that is
mapped to each domain in the hierarchy). That makes it perfectly safe to
implement the .irq_set_affinity chip op as irq_chip_set_affinity_parent
and delegate affinity control to the parent chip/domain. This will *not*
suddenly change the affinity of a different interrupt behind anyone's
back simply because there cannot be another interrupt that shares the
same affinity setting.

Note: I still need to update the Documentation/ directory for the new
      sysfs interface, and I will address that in a future version.
      At this point, I just want to get feedback about the current
      approach.

Radu Rendec (5):
  irq: Always enable parent interrupt tracking
  irq: Show the parent chained interrupt in debugfs
  irq: Expose chained interrupt parents in sysfs
  irq: Move SMP affinity write handler out of proc.c
  irq: Add smp_affinity/list attributes to sysfs

 include/linux/irq.h     |   9 +-
 include/linux/irqdesc.h |   1 +
 kernel/irq/debugfs.c    |   1 +
 kernel/irq/internals.h  |  10 ++
 kernel/irq/irqdesc.c    | 206 +++++++++++++++++++++++++++++++++++++---
 kernel/irq/irqdomain.c  |  15 +++
 kernel/irq/manage.c     |  20 +++-
 kernel/irq/proc.c       |  72 +-------------
 8 files changed, 244 insertions(+), 90 deletions(-)

--
2.40.1



2023-05-30 21:59:28

by Radu Rendec

Subject: [RFC PATCH 1/5] irq: Always enable parent interrupt tracking

The kernel already has some support for tracking the parent interrupt in
the case of chained interrupts, but it is currently used only by the
IRQ-resend code.

This patch enables the parent interrupt tracking code unconditionally.
The IRQ-resend code still depends on CONFIG_HARDIRQS_SW_RESEND.

The intention is to (re)use the existing parent interrupt tracking
support for different purposes, more specifically to expose chained
interrupt topology to userspace. That, in turn, makes it possible to
control the SMP affinity of chained interrupts in a way that does not
break the existing interface and the promises it makes.

Signed-off-by: Radu Rendec <[email protected]>
---
 include/linux/irq.h | 7 -------
 kernel/irq/manage.c | 2 --
 2 files changed, 9 deletions(-)

diff --git a/include/linux/irq.h b/include/linux/irq.h
index b1b28affb32a7..7710f157e12de 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -641,14 +641,7 @@ static inline void irq_force_complete_move(struct irq_desc *desc) { }
 
 extern int no_irq_affinity;
 
-#ifdef CONFIG_HARDIRQS_SW_RESEND
 int irq_set_parent(int irq, int parent_irq);
-#else
-static inline int irq_set_parent(int irq, int parent_irq)
-{
-	return 0;
-}
-#endif
 
 /*
  * Built-in IRQ handlers for various IRQ types,
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index eb862b5f91c42..49683e55261eb 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1004,7 +1004,6 @@ int __irq_set_trigger(struct irq_desc *desc, unsigned long flags)
 	return ret;
 }
 
-#ifdef CONFIG_HARDIRQS_SW_RESEND
 int irq_set_parent(int irq, int parent_irq)
 {
 	unsigned long flags;
@@ -1019,7 +1018,6 @@ int irq_set_parent(int irq, int parent_irq)
 	return 0;
 }
 EXPORT_SYMBOL_GPL(irq_set_parent);
-#endif
 
 /*
  * Default primary interrupt handler for threaded interrupts. Is
--
2.40.1


2023-05-30 22:00:18

by Radu Rendec

Subject: [RFC PATCH 2/5] irq: Show the parent chained interrupt in debugfs

This is a trivial change to expose the parent chained interrupt. The
intention is to make it easier to debug chained interrupts, particularly
in the context of setting the SMP affinity.

Signed-off-by: Radu Rendec <[email protected]>
---
kernel/irq/debugfs.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/kernel/irq/debugfs.c b/kernel/irq/debugfs.c
index bbcaac64038ef..3ada976df8612 100644
--- a/kernel/irq/debugfs.c
+++ b/kernel/irq/debugfs.c
@@ -177,6 +177,7 @@ static int irq_debug_show(struct seq_file *m, void *p)
 			    ARRAY_SIZE(irqdesc_istates));
 	seq_printf(m, "ddepth: %u\n", desc->depth);
 	seq_printf(m, "wdepth: %u\n", desc->wake_depth);
+	seq_printf(m, "parent: %d\n", desc->parent_irq);
 	seq_printf(m, "dstate: 0x%08x\n", irqd_get(data));
 	irq_debug_show_bits(m, 0, irqd_get(data), irqdata_states,
 			    ARRAY_SIZE(irqdata_states));
--
2.40.1


2023-05-30 22:25:51

by Radu Rendec

Subject: [RFC PATCH 5/5] irq: Add smp_affinity/list attributes to sysfs

This patch adds the smp_affinity and smp_affinity_list attributes to the
sysfs interrupt interface. The implementation is identical to procfs,
and the attributes are visible only when CONFIG_SMP is enabled.

The intention is to allow SMP affinity to be controlled for chained
interrupt parents, which are typically not visible in procfs because
they are not requested through request_irq().

Signed-off-by: Radu Rendec <[email protected]>
---
kernel/irq/irqdesc.c | 55 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 55 insertions(+)

diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
index a46a76c29b8d1..5b014df9fd730 100644
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -210,6 +210,9 @@ static struct kobject *irq_kobj_base;
 #define IRQ_ATTR_RO(_name) \
 static struct kobj_attribute _name##_attr = __ATTR_RO(_name)
 
+#define IRQ_ATTR_RW(_name) \
+static struct kobj_attribute _name##_attr = __ATTR_RW(_name)
+
 static ssize_t per_cpu_count_show(struct kobject *kobj,
 				  struct kobj_attribute *attr, char *buf)
 {
@@ -332,6 +335,54 @@ static ssize_t actions_show(struct kobject *kobj,
 }
 IRQ_ATTR_RO(actions);
 
+#ifdef CONFIG_SMP
+static ssize_t smp_affinity_show(struct kobject *kobj,
+				 struct kobj_attribute *attr, char *buf)
+{
+	struct irq_desc *desc = container_of(kobj, struct irq_desc, kobj);
+	const struct cpumask *mask = desc->irq_common_data.affinity;
+
+#ifdef CONFIG_GENERIC_PENDING_IRQ
+	if (irqd_is_setaffinity_pending(&desc->irq_data))
+		mask = desc->pending_mask;
+#endif
+
+	return scnprintf(buf, PAGE_SIZE, "%*pb\n", cpumask_pr_args(mask));
+}
+static ssize_t smp_affinity_store(struct kobject *kobj,
+				  struct kobj_attribute *attr,
+				  const char *buf, size_t count)
+{
+	struct irq_desc *desc = container_of(kobj, struct irq_desc, kobj);
+
+	return write_irq_affinity(desc->irq_data.irq, buf, count, false, false);
+}
+IRQ_ATTR_RW(smp_affinity);
+
+static ssize_t smp_affinity_list_show(struct kobject *kobj,
+				      struct kobj_attribute *attr, char *buf)
+{
+	struct irq_desc *desc = container_of(kobj, struct irq_desc, kobj);
+	const struct cpumask *mask = desc->irq_common_data.affinity;
+
+#ifdef CONFIG_GENERIC_PENDING_IRQ
+	if (irqd_is_setaffinity_pending(&desc->irq_data))
+		mask = desc->pending_mask;
+#endif
+
+	return scnprintf(buf, PAGE_SIZE, "%*pbl\n", cpumask_pr_args(mask));
+}
+static ssize_t smp_affinity_list_store(struct kobject *kobj,
+				       struct kobj_attribute *attr,
+				       const char *buf, size_t count)
+{
+	struct irq_desc *desc = container_of(kobj, struct irq_desc, kobj);
+
+	return write_irq_affinity(desc->irq_data.irq, buf, count, true, false);
+}
+IRQ_ATTR_RW(smp_affinity_list);
+#endif
+
 static struct attribute *irq_attrs[] = {
 	&per_cpu_count_attr.attr,
 	&chip_name_attr.attr,
@@ -340,6 +391,10 @@ static struct attribute *irq_attrs[] = {
 	&wakeup_attr.attr,
 	&name_attr.attr,
 	&actions_attr.attr,
+#ifdef CONFIG_SMP
+	&smp_affinity_attr.attr,
+	&smp_affinity_list_attr.attr,
+#endif
 	NULL
 };
 ATTRIBUTE_GROUPS(irq);
--
2.40.1


2023-05-30 22:36:06

by Radu Rendec

Subject: [RFC PATCH 4/5] irq: Move SMP affinity write handler out of proc.c

This patch prepares the ground for setting the SMP affinity from sysfs.
The bulk of the code is identical for procfs and sysfs, except for the
cpumask parsing functions, where procfs requires the _user variants.

Summary of changes:
- irq_select_affinity_usr() and write_irq_affinity() are moved from
  proc.c to irqdesc.c
- write_irq_affinity() is slightly modified to allow using the other
  variant of cpumask parsing functions
- the definition of no_irq_affinity is moved from proc.c to manage.c
  and is available only when CONFIG_SMP is enabled
- the declaration of no_irq_affinity is available only when CONFIG_SMP
  is enabled

Note that all existing use cases of no_irq_affinity were already
confined within CONFIG_SMP preprocessor conditionals.

Signed-off-by: Radu Rendec <[email protected]>
---
 include/linux/irq.h    |  2 ++
 kernel/irq/internals.h |  2 ++
 kernel/irq/irqdesc.c   | 67 +++++++++++++++++++++++++++++++++++++++
 kernel/irq/manage.c    |  2 ++
 kernel/irq/proc.c      | 72 +++---------------------------------------
 5 files changed, 78 insertions(+), 67 deletions(-)

diff --git a/include/linux/irq.h b/include/linux/irq.h
index 7710f157e12de..0393fc02cfd46 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -639,7 +639,9 @@ static inline void irq_move_masked_irq(struct irq_data *data) { }
 static inline void irq_force_complete_move(struct irq_desc *desc) { }
 #endif
 
+#ifdef CONFIG_SMP
 extern int no_irq_affinity;
+#endif
 
 int irq_set_parent(int irq, int parent_irq);
 
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index c75cd836155c9..381a0b4c1d381 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -147,6 +147,8 @@ extern int irq_do_set_affinity(struct irq_data *data,
 
 #ifdef CONFIG_SMP
 extern int irq_setup_affinity(struct irq_desc *desc);
+extern ssize_t write_irq_affinity(unsigned int irq, const char __user *buffer,
+				  size_t count, bool is_list, bool is_user);
 #else
 static inline int irq_setup_affinity(struct irq_desc *desc) { return 0; }
 #endif
diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
index ec52b8b41002e..a46a76c29b8d1 100644
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -133,6 +133,73 @@ EXPORT_SYMBOL_GPL(nr_irqs);
 static DEFINE_MUTEX(sparse_irq_lock);
 static DECLARE_BITMAP(allocated_irqs, IRQ_BITMAP_BITS);
 
+#ifdef CONFIG_SMP
+
+#ifndef CONFIG_AUTO_IRQ_AFFINITY
+static inline int irq_select_affinity_usr(unsigned int irq)
+{
+	/*
+	 * If the interrupt is started up already then this fails. The
+	 * interrupt is assigned to an online CPU already. There is no
+	 * point to move it around randomly. Tell user space that the
+	 * selected mask is bogus.
+	 *
+	 * If not then any change to the affinity is pointless because the
+	 * startup code invokes irq_setup_affinity() which will select
+	 * a online CPU anyway.
+	 */
+	return -EINVAL;
+}
+#else
+/* ALPHA magic affinity auto selector. Keep it for historical reasons. */
+static inline int irq_select_affinity_usr(unsigned int irq)
+{
+	return irq_select_affinity(irq);
+}
+#endif
+
+ssize_t write_irq_affinity(unsigned int irq, const char __user *buffer,
+			   size_t count, bool is_list, bool is_user)
+{
+	cpumask_var_t mask;
+	int err;
+
+	if (!irq_can_set_affinity_usr(irq) || no_irq_affinity)
+		return -EIO;
+
+	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
+		return -ENOMEM;
+
+	if (is_user)
+		err = is_list ? cpumask_parselist_user(buffer, count, mask) :
+				cpumask_parse_user(buffer, count, mask);
+	else
+		err = is_list ? cpulist_parse(buffer, mask) :
+				cpumask_parse(buffer, mask);
+	if (err)
+		goto free_cpumask;
+
+	/*
+	 * Do not allow disabling IRQs completely - it's a too easy
+	 * way to make the system unusable accidentally :-) At least
+	 * one online CPU still has to be targeted.
+	 */
+	if (!cpumask_intersects(mask, cpu_online_mask)) {
+		/*
+		 * Special case for empty set - allow the architecture code
+		 * to set default SMP affinity.
+		 */
+		err = irq_select_affinity_usr(irq) ? -EINVAL : count;
+	} else {
+		err = irq_set_affinity(irq, mask) ?: count;
+	}
+
+free_cpumask:
+	free_cpumask_var(mask);
+	return err;
+}
+#endif
+
 #ifdef CONFIG_SPARSE_IRQ
 
 static void irq_kobj_release(struct kobject *kobj);
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index eec9b94747439..91cee7270d221 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -143,6 +143,8 @@ EXPORT_SYMBOL(synchronize_irq);
 #ifdef CONFIG_SMP
 cpumask_var_t irq_default_affinity;
 
+int no_irq_affinity;
+
 static bool __irq_can_set_affinity(struct irq_desc *desc)
 {
 	if (!desc || !irqd_can_balance(&desc->irq_data) ||
diff --git a/kernel/irq/proc.c b/kernel/irq/proc.c
index 623b8136e9af3..76f0dda1f26b8 100644
--- a/kernel/irq/proc.c
+++ b/kernel/irq/proc.c
@@ -100,7 +100,6 @@ static int irq_affinity_hint_proc_show(struct seq_file *m, void *v)
 	return 0;
 }
 
-int no_irq_affinity;
 static int irq_affinity_proc_show(struct seq_file *m, void *v)
 {
 	return show_irq_affinity(AFFINITY, m);
@@ -111,81 +110,20 @@ static int irq_affinity_list_proc_show(struct seq_file *m, void *v)
 	return show_irq_affinity(AFFINITY_LIST, m);
 }
 
-#ifndef CONFIG_AUTO_IRQ_AFFINITY
-static inline int irq_select_affinity_usr(unsigned int irq)
-{
-	/*
-	 * If the interrupt is started up already then this fails. The
-	 * interrupt is assigned to an online CPU already. There is no
-	 * point to move it around randomly. Tell user space that the
-	 * selected mask is bogus.
-	 *
-	 * If not then any change to the affinity is pointless because the
-	 * startup code invokes irq_setup_affinity() which will select
-	 * a online CPU anyway.
-	 */
-	return -EINVAL;
-}
-#else
-/* ALPHA magic affinity auto selector. Keep it for historical reasons. */
-static inline int irq_select_affinity_usr(unsigned int irq)
-{
-	return irq_select_affinity(irq);
-}
-#endif
-
-static ssize_t write_irq_affinity(int type, struct file *file,
+static ssize_t irq_affinity_proc_write(struct file *file,
 		const char __user *buffer, size_t count, loff_t *pos)
 {
 	unsigned int irq = (int)(long)pde_data(file_inode(file));
-	cpumask_var_t new_value;
-	int err;
-
-	if (!irq_can_set_affinity_usr(irq) || no_irq_affinity)
-		return -EIO;
-
-	if (!zalloc_cpumask_var(&new_value, GFP_KERNEL))
-		return -ENOMEM;
-
-	if (type)
-		err = cpumask_parselist_user(buffer, count, new_value);
-	else
-		err = cpumask_parse_user(buffer, count, new_value);
-	if (err)
-		goto free_cpumask;
 
-	/*
-	 * Do not allow disabling IRQs completely - it's a too easy
-	 * way to make the system unusable accidentally :-) At least
-	 * one online CPU still has to be targeted.
-	 */
-	if (!cpumask_intersects(new_value, cpu_online_mask)) {
-		/*
-		 * Special case for empty set - allow the architecture code
-		 * to set default SMP affinity.
-		 */
-		err = irq_select_affinity_usr(irq) ? -EINVAL : count;
-	} else {
-		err = irq_set_affinity(irq, new_value);
-		if (!err)
-			err = count;
-	}
-
-free_cpumask:
-	free_cpumask_var(new_value);
-	return err;
-}
-
-static ssize_t irq_affinity_proc_write(struct file *file,
-		const char __user *buffer, size_t count, loff_t *pos)
-{
-	return write_irq_affinity(0, file, buffer, count, pos);
+	return write_irq_affinity(irq, buffer, count, false, true);
 }
 
 static ssize_t irq_affinity_list_proc_write(struct file *file,
 		const char __user *buffer, size_t count, loff_t *pos)
 {
-	return write_irq_affinity(1, file, buffer, count, pos);
+	unsigned int irq = (int)(long)pde_data(file_inode(file));
+
+	return write_irq_affinity(irq, buffer, count, true, true);
 }
 
 static int irq_affinity_proc_open(struct inode *inode, struct file *file)
--
2.40.1


2023-05-31 13:32:12

by Thomas Gleixner

Subject: Re: [RFC PATCH 0/5] irq: sysfs interface improvements for SMP affinity control

On Tue, May 30 2023 at 17:45, Radu Rendec wrote:
> Note: I still need to update the Documentation/ directory for the new
>       sysfs interface, and I will address that in a future version.
>       At this point, I just want to get feedback about the current
>       approach.

From a conceptual POV I understand why this is required, which makes me
hate this chained mechanism even more.

Aside of having no visibility (counters, affinity, etc) the worst thing
about these chained hidden interrupts is:

  There is no control of runaway interrupts as they circumvent the core,
  which has caused hard to diagnose system hangups in the past. See

    ba714a9c1dea ("pinctrl/amd: Use regular interrupt instead of chained")

  for demonstration.

The argument I heard for this chained interrupt muck is that it's so
much more performant than using regular interrupt handlers via
request_irq(). It's obviously less overhead, but whether it matters and
most importantly whether it justifies the downsides is a different
question.

There is also the argument about double accounting. Right now the
chained handler is not accounted and only the childs are.

Though that is inconsistent with other demultiplex handlers which _must_
use regular interrupt handlers (threaded) to demultiplex interrupt chips
which sit behind SPI/I2C...

The sum of child interrupts is also not necessarily the number of parent
interrupts, unless there is always exactly only one child handler to
invoke.

Quite some of those demux handlers are also not RT compatible.

AFAICT, there is no real technical reason at least not for the vast
majority of usage sites why the demultiplex handler cannot run inside a
regular interrupt handler context.

So I personally would prefer to get rid of this chained oddball and just
have consistent mechanisms for dealing with interrupts, which would just
avoid exposing the affinity files in two different places.

Providing information about child/parent relationship is an orthogonal
issue.

If there is some good reason (aside of the chained muck) to have sysfs
based affinity management, then I'm not objecting as long as the
functionality is the same, i.e. effective affinity needs be exposed too.

Thanks,

tglx

2023-05-31 13:41:35

by Thomas Gleixner

Subject: Re: [RFC PATCH 1/5] irq: Always enable parent interrupt tracking

Radu!

On Tue, May 30 2023 at 17:45, Radu Rendec wrote:

Please read and follow:

https://www.kernel.org/doc/html/latest/process/maintainer-tip.html

Especially the section about change logs.

Thanks,

tglx

2023-05-31 22:59:38

by Radu Rendec

Subject: Re: [RFC PATCH 0/5] irq: sysfs interface improvements for SMP affinity control

On Wed, 2023-05-31 at 15:09 +0200, Thomas Gleixner wrote:
> From a conceptual POV I understand why this is required, which makes me
> hate this chained mechanism even more.
>
> [cut]
>
> AFAICT, there is no real technical reason at least not for the vast
> majority of usage sites why the demultiplex handler cannot run inside a
> regular interrupt handler context.

Thanks for taking the time to explain everything in such detail.
Much appreciated!

> So I personally would prefer to get rid of this chained oddball and just
> have consistent mechanisms for dealing with interrupts, which would just
> avoid exposing the affinity files in two different places.

Does this mean that if I came across a chained driver that lacked
affinity support, then changing it to use regular interrupts via
request_irq() would be a viable solution to enable affinity support
and you would consider accepting such a patch?

Taking a step back, in the case of hierarchical domains where *no*
multiplexing is involved, do you consider setting .irq_set_affinity()
to irq_chip_set_affinity_parent() a good way to enable affinity support
(assuming, of course, the driver lacks such support originally)?

At the end of the day, what I'm trying to do is find a way to enable
affinity support for a few specific drivers where it's lacking. My
initial impression, after reading Marc's message[1], was that the fix
lay at the system level, at least for multiplexing drivers. Hence my
naive attempt at a system-level fix. It is now becoming clear that the
fix will have to be evaluated/addressed at the driver level, on a case
by case basis.

As a side note, one thing I particularly like about your approach is
that it doesn't require any changes to irqbalance.

> Providing information about child/parent relationship is an orthogonal
> issue.

Agreed. Do you see any value in doing that? And, if yes, is reusing
(struct irq_desc).parent_irq and irq_set_parent() a good way of doing
it? FWIW, a multiplexing driver could do that regardless of how it
registers a handler for the parent interrupt (chained/regular).

> If there is some good reason (aside of the chained muck) to have sysfs
> based affinity management, then I'm not objecting as long as the
> functionality is the same, i.e. effective affinity needs be exposed too.

I can't think of any other reason. AFAICT, chained interrupts are the
only interrupts that are active but not visible in procfs. For any
other purpose, the procfs interface is good as it is.

Thanks,
Radu

[1] https://lore.kernel.org/all/[email protected]/


2023-05-31 23:18:38

by Radu Rendec

Subject: Re: [RFC PATCH 1/5] irq: Always enable parent interrupt tracking

On Wed, 2023-05-31 at 15:10 +0200, Thomas Gleixner wrote:
> Please read and follow:
>
>     https://www.kernel.org/doc/html/latest/process/maintainer-tip.html
>
> Especially the section about change logs.

Duly noted. Thanks for pointing me to the relevant documentation!

Best regards,
Radu


2023-06-20 16:27:29

by Radu Rendec

Subject: Re: [RFC PATCH 0/5] irq: sysfs interface improvements for SMP affinity control

On Wed, 2023-05-31 at 15:09 +0200, Thomas Gleixner wrote:
> On Tue, May 30 2023 at 17:45, Radu Rendec wrote:
> > Note: I still need to update the Documentation/ directory for the new
> >       sysfs interface, and I will address that in a future version.
> >       At this point, I just want to get feedback about the current
> >       approach.
>
> From a conceptual POV I understand why this is required, which makes me
> hate this chained mechanism even more.
>
> Aside of having no visibility (counters, affinity, etc) the worst thing
> about these chained hidden interrupts is:
>
>   There is no control of runaway interrupts as they circumvent the core,
>   which has caused hard to diagnose system hangups in the past. See
>   
>     ba714a9c1dea ("pinctrl/amd: Use regular interrupt instead of chained")
>
>   for demonstration.
>
> The argument I heard for this chained interrupt muck is that it's so
> much more performant than using regular interrupt handlers via
> request_irq(). It's obviously less overhead, but whether it matters and
> most importantly whether it justifies the downsides is a different
> question.
>
> There is also the argument about double accounting. Right now the
> chained handler is not accounted and only the childs are.
>
> Though that is inconsistent with other demultiplex handlers which _must_
> use regular interrupt handlers (threaded) to demultiplex interrupt chips
> which sit behind SPI/I2C...
>
> The sum of child interrupts is also not necessarily the number of parent
> interrupts, unless there is always exactly only one child handler to
> invoke.
>
> Quite some of those demux handlers are also not RT compatible.
>
> AFAICT, there is no real technical reason at least not for the vast
> majority of usage sites why the demultiplex handler cannot run inside a
> regular interrupt handler context.

I was about to send an RFC patch that converts a multiplexing IRQ chip
driver from chained to regular interrupts, when I realized I had come
full circle. Marc rejected a similar patch in the past [1] and the main
argument was that exposing the parent interrupt in procfs is backwards
incompatible. Quote:

  The problem of changing affinities for chained (or multiplexing)
  interrupts is, to make it short, that it breaks the existing userspace
  ABI that a change in affinity affects only the interrupt userspace
  acts upon, and no other. Which is why we don't expose any affinity
  setting for such an interrupt, as by definition changing its affinity
  affects all the interrupts that are muxed onto it.

So, there are two contradictory arguments here:
* Chained interrupts are "muck", mainly because they circumvent the
  interrupt core and prevent the interrupt storm detector from kicking
  in when hardware misbehaves (and for other reasons as well).
* Exposing the parent interrupt affinity control in procfs breaks the
  userspace ABI because all child interrupts inherently share the same
  affinity settings. This implies regular interrupts cannot be used.

Meanwhile, both interrupt storms and the lack of affinity control
support for multiplexing drivers are real problems, and my team and I
came across both. FWIW, the interrupt storm is the one we found more
recently, and it's the reason why I wanted to send that RFC patch I
mentioned before.

Is Marc's argument still relevant? Perhaps with both arguments in the
same email it becomes clearer that there needs to be some alignment at
the maintainer level. I would be more than happy to send patches and
help fix both issues. But I think that's not possible until you come up
with a strategy that you *both* agree to.

[1] https://lore.kernel.org/all/[email protected]/

Thanks,
Radu

> So I personally would prefer to get rid of this chained oddball and just
> have consistent mechanisms for dealing with interrupts, which would just
> avoid exposing the affinity files in two different places.
>
> Providing information about child/parent relationship is an orthogonal
> issue.
>
> If there is some good reason (aside of the chained muck) to have sysfs
> based affinity management, then I'm not objecting as long as the
> functionality is the same, i.e. effective affinity needs be exposed too.
>
> Thanks,
>
>         tglx