Message-ID: <5538A0DF.50401@siemens.com>
Date: Thu, 23 Apr 2015 09:35:59 +0200
From: Jan Kiszka
To: Sebastian Andrzej Siewior
CC: RT, Linux Kernel Mailing List, Steven Rostedt, Mike Galbraith
Subject: [PATCH v2 RT 3.18] irq_work: Provide a soft-irq based queue

Instead of turning all irq_work requests into lazy ones on -rt, just move
their execution from hard into soft-irq context. This resolves deadlocks
of ftrace, which queues work from arbitrary contexts, including contexts
that hold locks needed for raising a soft-irq.
Signed-off-by: Jan Kiszka
---
Changes in v2:
 - fix execution of raised list (discovered by Mike Galbraith)
 - added comment on irq_work_run (derived from Mike's suggestion)

 kernel/irq_work.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/kernel/irq_work.c b/kernel/irq_work.c
index 9dda38a..171dfac 100644
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -85,12 +85,9 @@ bool irq_work_queue_on(struct irq_work *work, int cpu)
 		raise_irqwork = llist_add(&work->llnode,
 					  &per_cpu(hirq_work_list, cpu));
 	else
-		raise_irqwork = llist_add(&work->llnode,
-					  &per_cpu(lazy_list, cpu));
-#else
+#endif
 		raise_irqwork = llist_add(&work->llnode,
 					  &per_cpu(raised_list, cpu));
-#endif
 
 	if (raise_irqwork)
 		arch_send_call_function_single_ipi(cpu);
@@ -114,21 +111,20 @@ bool irq_work_queue(struct irq_work *work)
 	if (work->flags & IRQ_WORK_HARD_IRQ) {
 		if (llist_add(&work->llnode, this_cpu_ptr(&hirq_work_list)))
 			arch_irq_work_raise();
-	} else {
+	} else
+#endif
+	if (work->flags & IRQ_WORK_LAZY) {
 		if (llist_add(&work->llnode, this_cpu_ptr(&lazy_list)) &&
 		    tick_nohz_tick_stopped())
+#ifdef CONFIG_PREEMPT_RT_FULL
 			raise_softirq(TIMER_SOFTIRQ);
-	}
 #else
-	if (work->flags & IRQ_WORK_LAZY) {
-		if (llist_add(&work->llnode, this_cpu_ptr(&lazy_list)) &&
-		    tick_nohz_tick_stopped())
 			arch_irq_work_raise();
+#endif
 	} else {
 		if (llist_add(&work->llnode, this_cpu_ptr(&raised_list)))
 			arch_irq_work_raise();
 	}
-#endif
 
 	preempt_enable();
 
@@ -202,6 +198,13 @@ void irq_work_run(void)
 {
 #ifdef CONFIG_PREEMPT_RT_FULL
 	irq_work_run_list(this_cpu_ptr(&hirq_work_list));
+	/*
+	 * NOTE: we raise softirq via IPI for safety (caller may hold locks
+	 * that raise_softirq needs) and execute in irq_work_tick() to move
+	 * the overhead from hard to soft irq context.
+	 */
+	if (!llist_empty(this_cpu_ptr(&raised_list)))
+		raise_softirq(TIMER_SOFTIRQ);
 #else
 	irq_work_run_list(this_cpu_ptr(&raised_list));
 	irq_work_run_list(this_cpu_ptr(&lazy_list));
@@ -211,15 +214,12 @@ EXPORT_SYMBOL_GPL(irq_work_run);
 void irq_work_tick(void)
 {
-#ifdef CONFIG_PREEMPT_RT_FULL
-	irq_work_run_list(this_cpu_ptr(&lazy_list));
-#else
-	struct llist_head *raised = &__get_cpu_var(raised_list);
+	struct llist_head *raised = this_cpu_ptr(&raised_list);
 
-	if (!llist_empty(raised) && !arch_irq_work_has_interrupt())
+	if (!llist_empty(raised) && (!arch_irq_work_has_interrupt() ||
+				     IS_ENABLED(CONFIG_PREEMPT_RT_FULL)))
 		irq_work_run_list(raised);
 
-	irq_work_run_list(&__get_cpu_var(lazy_list));
-#endif
+	irq_work_run_list(this_cpu_ptr(&lazy_list));
 }
 
 /*
-- 
2.1.4