2020-08-11 06:20:48

by Yunfeng Ye

Subject: [PATCH] genirq/affinity: show managed irq affinity correctly

Support for "managed_irq" with isolcpus was added by commit
11ea68f553e2 ("genirq, sched/isolation: Isolate from handling managed
interrupts"), but the interrupt affinity shown in the proc directory
is still the original affinity.

Update the interrupt affinity shown for a managed_irq accordingly.

Signed-off-by: yeyunfeng <[email protected]>
---
kernel/irq/manage.c | 38 ++++++++++++++++++++++++--------------
1 file changed, 24 insertions(+), 14 deletions(-)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index d55ba625d426..6ad4fe01942a 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -218,8 +218,8 @@ static inline void irq_init_effective_affinity(struct irq_data *data,
const struct cpumask *mask) { }
#endif

-int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
- bool force)
+static int irq_chip_set_affinity(struct irq_data *data,
+ const struct cpumask *mask, bool force)
{
struct irq_desc *desc = irq_data_to_desc(data);
struct irq_chip *chip = irq_data_get_irq_chip(data);
@@ -228,6 +228,26 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
if (!chip || !chip->irq_set_affinity)
return -EINVAL;

+ ret = chip->irq_set_affinity(data, mask, force);
+ switch (ret) {
+ case IRQ_SET_MASK_OK:
+ case IRQ_SET_MASK_OK_DONE:
+ cpumask_copy(desc->irq_common_data.affinity, mask);
+ /* fall through */
+ case IRQ_SET_MASK_OK_NOCOPY:
+ irq_validate_effective_affinity(data);
+ irq_set_thread_affinity(desc);
+ ret = 0;
+ }
+
+ return ret;
+}
+
+int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
+ bool force)
+{
+ int ret;
+
/*
* If this is a managed interrupt and housekeeping is enabled on
* it check whether the requested affinity mask intersects with
@@ -262,20 +282,10 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
prog_mask = mask;
else
prog_mask = &tmp_mask;
- ret = chip->irq_set_affinity(data, prog_mask, force);
+ ret = irq_chip_set_affinity(data, prog_mask, force);
raw_spin_unlock(&tmp_mask_lock);
} else {
- ret = chip->irq_set_affinity(data, mask, force);
- }
- switch (ret) {
- case IRQ_SET_MASK_OK:
- case IRQ_SET_MASK_OK_DONE:
- cpumask_copy(desc->irq_common_data.affinity, mask);
- /* fall through */
- case IRQ_SET_MASK_OK_NOCOPY:
- irq_validate_effective_affinity(data);
- irq_set_thread_affinity(desc);
- ret = 0;
+ ret = irq_chip_set_affinity(data, mask, force);
}

return ret;
--
1.8.3.1


2020-08-13 08:10:05

by Thomas Gleixner

Subject: Re: [PATCH] genirq/affinity: show managed irq affinity correctly

Yunfeng Ye <[email protected]> writes:

> Support for "managed_irq" with isolcpus was added by commit
> 11ea68f553e2 ("genirq, sched/isolation: Isolate from handling managed
> interrupts"), but the interrupt affinity shown in the proc directory
> is still the original affinity.
>
> Update the interrupt affinity shown for a managed_irq accordingly.

I really have no idea what you are trying to achieve here.

1) Why are you moving the !chip || !chip->irq_set_affinity check out of
irq_do_set_affinity()?

   That just makes the whole computation happen for nothing before
   returning an error late.

2) Modifying irqdata->common->affinity is wrong to begin with. It's the
possible affinity mask. Your change causes the managed affinity mask
to become invalid in the worst case.

irq->affinity = 0x0C; // CPU 2 - 3
hkmask = 0x07; // CPU 0 - 2

Invocation #1:
online_mask = 0xFF; // CPU 0 - 7

cpumask_and(&tmp_mask, mask, hk_mask);
--> tmp_mask == 0x04 // CPU 2

irq->affinity = tmp_mask; // CPU 2

CPU 2 goes offline

migrate_one_irq()

affinity = irq->affinity; // CPU 2
online_mask = 0xFB; // CPU 0-1, 3-7

if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) {
/*
* If the interrupt is managed, then shut it down and leave
* the affinity untouched.
*/
if (irqd_affinity_is_managed(d)) {
irqd_set_managed_shutdown(d);
irq_shutdown_and_deactivate(desc);
return false;
}

So the interrupt is shut down, which is incorrect. The isolation
logic in irq_do_set_affinity() was clearly designed to prefer
housekeeping CPUs, not to remove the other CPUs from the possible
mask.

You are looking at the wrong file. /proc/irq/$IRQ/smp_affinity* is the
possible mask. If you want to know to which CPU an interrupt is affine
then look at /proc/irq/$IRQ/effective_affinity*

If effective_affinity* is not showing the correct value, then the irq
chip affinity setter is broken and needs to be fixed.

Thanks,

tglx

2020-08-15 21:57:33

by Marc Zyngier

Subject: Re: [PATCH] genirq/affinity: show managed irq affinity correctly

On 2020-08-13 09:08, Thomas Gleixner wrote:
> Yunfeng Ye <[email protected]> writes:

[...]

> You are looking at the wrong file. /proc/irq/$IRQ/smp_affinity* is the
> possible mask. If you want to know to which CPU an interrupt is affine
> then look at /proc/irq/$IRQ/effective_affinity*
>
> If effective_affinity* is not showing the correct value, then the irq
> chip affinity setter is broken and needs to be fixed.

In order to reassure myself that nothing was untoward in GIC-land,
I went in and looked at an ITS-based VM running whatever is in
Linus' tree today. I see the effective affinity being correctly
set up, and being, as expected, a subset of the affinity. This is
without isolcpus, though.

In any case, I'd be interested in understanding what this patch is
trying to solve, really.

M.
--
Who you jivin' with that Cosmik Debris?