Currently, the code uses the conjunction of the IRQ affinity mask and the CPU
online mask to find the CPU id to map an interrupt to.
It looks like the intention was to make sure that an IRQ won't be mapped to an
offline CPU.
Although it works correctly today, there are two problems with it:
1. The IRQ affinity mask already consists only of online CPUs, so intersecting
it with the online CPU mask is redundant.
2. Should the IRQ affinity mask ever contain offline CPUs in the future,
cpumask_first_and() can return nr_cpu_ids, in which case the current
implementation will likely crash the kernel in hv_map_interrupt() due to an
attempt to use an invalid CPU id when getting the VP set.
This patch fixes the logic by taking the first bit of the affinity mask as the
CPU to map the IRQ to.
It also adds a WARN_ON_ONCE() as a paranoia check for the case where the
affinity mask contains offline CPUs.
Signed-off-by: Stanislav Kinsburskii <[email protected]>
CC: "K. Y. Srinivasan" <[email protected]>
CC: Haiyang Zhang <[email protected]>
CC: Wei Liu <[email protected]>
CC: Dexuan Cui <[email protected]>
CC: Thomas Gleixner <[email protected]>
CC: Ingo Molnar <[email protected]>
CC: Borislav Petkov <[email protected]>
CC: Dave Hansen <[email protected]>
CC: [email protected]
CC: "H. Peter Anvin" <[email protected]>
CC: Joerg Roedel <[email protected]>
CC: Will Deacon <[email protected]>
CC: Robin Murphy <[email protected]>
CC: [email protected]
CC: [email protected]
CC: [email protected]
---
arch/x86/hyperv/irqdomain.c | 7 ++++---
drivers/iommu/hyperv-iommu.c | 7 ++++---
2 files changed, 8 insertions(+), 6 deletions(-)
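Note for reviewers: a minimal illustrative sketch of the CPU selection pattern
before and after this change. The helper pick_irq_cpu() below is hypothetical
and exists only to show the pattern in isolation; the authoritative changes are
in the hunks that follow.

#include <linux/cpumask.h>
#include <linux/irq.h>

/* Illustrative helper, not part of this patch. */
static int pick_irq_cpu(struct irq_data *data)
{
	const struct cpumask *affinity = irq_data_get_effective_affinity_mask(data);
	int cpu;

	/*
	 * Old pattern: ANDing with cpu_online_mask is redundant while the
	 * effective affinity only contains online CPUs, and it can return
	 * nr_cpu_ids (an invalid CPU id) if the masks ever become disjoint:
	 *
	 *	cpu = cpumask_first_and(affinity, cpu_online_mask);
	 *
	 * New pattern: take the first CPU of the effective affinity and
	 * warn if it is unexpectedly offline.
	 */
	cpu = cpumask_first(affinity);
	WARN_ON_ONCE(!cpumask_test_cpu(cpu, cpu_online_mask));

	return cpu;
}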
diff --git a/arch/x86/hyperv/irqdomain.c b/arch/x86/hyperv/irqdomain.c
index 42c70d28ef27..759774b5ab2f 100644
--- a/arch/x86/hyperv/irqdomain.c
+++ b/arch/x86/hyperv/irqdomain.c
@@ -192,7 +192,6 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
struct pci_dev *dev;
struct hv_interrupt_entry out_entry, *stored_entry;
struct irq_cfg *cfg = irqd_cfg(data);
- const cpumask_t *affinity;
int cpu;
u64 status;
@@ -204,8 +203,10 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
return;
}
- affinity = irq_data_get_effective_affinity_mask(data);
- cpu = cpumask_first_and(affinity, cpu_online_mask);
+ cpu = cpumask_first(irq_data_get_effective_affinity_mask(data));
+
+ /* Paranoia check: the cpu must be online */
+ WARN_ON_ONCE(!cpumask_test_cpu(cpu, cpu_online_mask));
if (data->chip_data) {
/*
diff --git a/drivers/iommu/hyperv-iommu.c b/drivers/iommu/hyperv-iommu.c
index 8302db7f783e..632e9c123bbf 100644
--- a/drivers/iommu/hyperv-iommu.c
+++ b/drivers/iommu/hyperv-iommu.c
@@ -197,15 +197,16 @@ hyperv_root_ir_compose_msi_msg(struct irq_data *irq_data, struct msi_msg *msg)
u32 vector;
struct irq_cfg *cfg;
int ioapic_id;
- const struct cpumask *affinity;
int cpu;
struct hv_interrupt_entry entry;
struct hyperv_root_ir_data *data = irq_data->chip_data;
struct IO_APIC_route_entry e;
cfg = irqd_cfg(irq_data);
- affinity = irq_data_get_effective_affinity_mask(irq_data);
- cpu = cpumask_first_and(affinity, cpu_online_mask);
+ cpu = cpumask_first(irq_data_get_effective_affinity_mask(irq_data));
+
+ /* Paranoia check: the cpu must be online */
+ WARN_ON_ONCE(!cpumask_test_cpu(cpu, cpu_online_mask));
vector = cfg->vector;
ioapic_id = data->ioapic_id;