From: Yong Zhang
To: Christophe Huriaux
Cc: Uwe Kleine-König, linux-rt-users@vger.kernel.org, Thomas Gleixner,
	Steven Rostedt, linux-kernel@vger.kernel.org
Date: Sun, 20 May 2012 13:27:31 +0800
Subject: [PATCH] genirq: don't sync irq thread if current happens to be the very irq thread
Message-ID: <20120520052731.GA3864@zhy>

On Thu, May 10, 2012 at 03:17:17PM +0200, Christophe Huriaux wrote:
> 2012/5/9 Uwe Kleine-König:
> > If you enable CONFIG_KALLSYMS you get a more usable backtrace.
> > Alternatively you can use
> >
> >	$CROSS_COMPILE-addr2line -e vmlinux 0xc000e90c
> >
> > to get the file and line that resulted in the code at that address.
>
> Thanks, I was wondering which config option would enable that. The
> complete backtrace is much more usable:

Actually I don't think this is an -rt issue; you could also trigger this
warning with a vanilla kernel if you boot with 'threadirqs'. Could you
please try the following patch?
Thanks,
Yong

---
From: Yong Zhang
Date: Sun, 20 May 2012 12:56:46 +0800
Subject: [PATCH] genirq: don't sync irq thread if current happens to be the very irq thread

Christophe reported against -rt:

BUG: scheduling while atomic: irq/37-s3c-mci/253/0x00000102
Modules linked in:
[] (unwind_backtrace+0x0/0x12c) from [] (__schedule+0x58/0x2c0)
[] (__schedule+0x58/0x2c0) from [] (schedule+0x8c/0xb0)
[] (schedule+0x8c/0xb0) from [] (synchronize_irq+0xbc/0xd8)
[] (synchronize_irq+0xbc/0xd8) from [] (pio_tasklet+0x34/0x11c)
[] (pio_tasklet+0x34/0x11c) from [] (__tasklet_action+0x68/0x80)
[] (__tasklet_action+0x68/0x80) from [] (__do_softirq+0x88/0x130)
[] (__do_softirq+0x88/0x130) from [] (do_softirq+0x48/0x54)
[] (do_softirq+0x48/0x54) from [] (local_bh_enable+0x8c/0xc0)
[] (local_bh_enable+0x8c/0xc0) from [] (irq_forced_thread_fn+0x4c/0x54)
[] (irq_forced_thread_fn+0x4c/0x54) from [] (irq_thread+0xa0/0x1c0)
[] (irq_thread+0xa0/0x1c0) from [] (kthread+0x84/0x8c)
[] (kthread+0x84/0x8c) from [] (kernel_thread_exit+0x0/0x8)

When looking at this issue, I found a typical deadlock scenario with a
forced threaded irq: the irq thread ends up waiting for itself to
finish.

irq_forced_thread_fn()
  local_bh_enable();
    do_softirq();
      disable_irq();
        synchronize_irq();
          wait_event();	/* DEAD: waiting for our own thread */

Cure it by skipping the sync if current happens to be the very irq
thread.

Reported-by: Christophe Huriaux
Signed-off-by: Yong Zhang
Cc: Steven Rostedt
Cc: Thomas Gleixner
---
 kernel/irq/manage.c |   10 ++++++++++
 1 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 89a3ea8..d5b96e7 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -41,6 +41,7 @@ early_param("threadirqs", setup_forced_irqthreads);
 void synchronize_irq(unsigned int irq)
 {
 	struct irq_desc *desc = irq_to_desc(irq);
+	struct irqaction *action;
 	bool inprogress;
 
 	if (!desc)
@@ -67,7 +68,16 @@ void synchronize_irq(unsigned int irq)
 	/*
 	 * We made sure that no hardirq handler is running. Now verify
 	 * that no threaded handlers are active.
+	 * But for a threaded irq, don't sync if current happens to be
+	 * the irq thread; otherwise we would deadlock.
 	 */
+	action = desc->action;
+	while (action) {
+		if (action->thread && action->thread == current)
+			return;
+		action = action->next;
+	}
+
 	wait_event(desc->wait_for_threads,
 		   !atomic_read(&desc->threads_active));
 }
 EXPORT_SYMBOL(synchronize_irq);
-- 
1.7.1