Message-Id: <20170619235444.691345468@linutronix.de>
User-Agent: quilt/0.63-1
Date: Tue, 20 Jun 2017 01:37:19 +0200
From: Thomas Gleixner
To: LKML
Cc: Marc Zyngier, Christoph Hellwig, Ingo Molnar, Peter Zijlstra,
    Michael Ellerman, Jens Axboe, Keith Busch
Subject: [patch 19/55] genirq: Provide irq_fixup_move_pending()
References: <20170619233700.547167146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-15
Content-Disposition: inline; filename=genirq--Provide-irq_fixup_move_pending--.patch

If a CPU goes offline, the interrupts are migrated away, but a possibly
pending interrupt move, which has not yet been made effective, is kept
pending even if the outgoing CPU is the sole target of the pending affinity
mask. What's worse is that the pending affinity mask is discarded even if it
would contain a valid subset of the online CPUs.

Implement a helper function which allows avoiding these issues.

Signed-off-by: Thomas Gleixner
---
 include/linux/irq.h    |    5 +++++
 kernel/irq/migration.c |   30 ++++++++++++++++++++++++++++++
 2 files changed, 35 insertions(+)

--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -490,9 +490,14 @@ extern void irq_migrate_all_off_this_cpu
 #if defined(CONFIG_SMP) && defined(CONFIG_GENERIC_PENDING_IRQ)
 void irq_move_irq(struct irq_data *data);
 void irq_move_masked_irq(struct irq_data *data);
+bool irq_fixup_move_pending(struct irq_desc *desc, bool force_clear);
 #else
 static inline void irq_move_irq(struct irq_data *data) { }
 static inline void irq_move_masked_irq(struct irq_data *data) { }
+static inline bool irq_fixup_move_pending(struct irq_desc *desc, bool fclear)
+{
+	return false;
+}
 #endif
 
 extern int no_irq_affinity;
--- a/kernel/irq/migration.c
+++ b/kernel/irq/migration.c
@@ -4,6 +4,36 @@
 
 #include "internals.h"
 
+/**
+ * irq_fixup_move_pending - Cleanup irq move pending from a dying CPU
+ * @desc:		Interrupt descriptor to clean up
+ * @force_clear:	If set, clear the move pending bit unconditionally.
+ *			If not set, clear it only when the dying CPU is the
+ *			last one in the pending mask.
+ *
+ * Returns true if the pending bit was set and the pending mask contains an
+ * online CPU other than the dying CPU.
+ */
+bool irq_fixup_move_pending(struct irq_desc *desc, bool force_clear)
+{
+	struct irq_data *data = irq_desc_get_irq_data(desc);
+
+	if (!irqd_is_setaffinity_pending(data))
+		return false;
+
+	/*
+	 * The outgoing CPU might be the last online target in a pending
+	 * interrupt move. If that's the case clear the pending move bit.
+	 */
+	if (cpumask_any_and(desc->pending_mask, cpu_online_mask) >= nr_cpu_ids) {
+		irqd_clr_move_pending(data);
+		return false;
+	}
+	if (force_clear)
+		irqd_clr_move_pending(data);
+	return true;
+}
+
 void irq_move_masked_irq(struct irq_data *idata)
 {
 	struct irq_desc *desc = irq_data_to_desc(idata);
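
As an aside, the discard-or-keep decision the helper makes can be sketched
with a small stand-alone C model. This is only an illustration, not part of
the patch: plain bitmasks stand in for struct cpumask, and the names
fixup_move_pending, fake_desc, pending_mask and online_mask are made up for
the example rather than being kernel API.

/* Illustrative model only -- not kernel code. Bit i of each mask = CPU i. */
#include <stdbool.h>
#include <stdio.h>

struct fake_desc {
	bool          move_pending;	/* models irqd_is_setaffinity_pending() */
	unsigned long pending_mask;	/* models desc->pending_mask             */
};

/*
 * Mirror of the helper's logic: return true only if a pending move exists
 * and still has an online target; clear the pending state when the only
 * remaining target went offline, or unconditionally when force_clear is set.
 */
static bool fixup_move_pending(struct fake_desc *d, unsigned long online_mask,
			       bool force_clear)
{
	if (!d->move_pending)
		return false;

	/* No online CPU left in the pending mask: drop the stale move. */
	if (!(d->pending_mask & online_mask)) {
		d->move_pending = false;
		return false;
	}
	if (force_clear)
		d->move_pending = false;
	return true;
}

int main(void)
{
	/* CPU3 is going down; CPUs 0-2 remain online. */
	unsigned long online = 0x7;

	struct fake_desc only_dying = { true, 1UL << 3 };	/* pending: {3}   */
	struct fake_desc has_others = { true, (1UL << 3) | 2 };	/* pending: {1,3} */

	printf("only dying CPU pending  -> keep? %d\n",
	       fixup_move_pending(&only_dying, online, false));	/* 0: discarded */
	printf("other online CPU pending -> keep? %d\n",
	       fixup_move_pending(&has_others, online, false));	/* 1: kept      */
	return 0;
}

In the kernel the same emptiness check is expressed as
cpumask_any_and(desc->pending_mask, cpu_online_mask) >= nr_cpu_ids, since
cpumask_any_and() returns a value >= nr_cpu_ids when the two masks have no
bit in common.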