Date: Mon, 20 Aug 2012 17:42:22 +0800
From: Michael Wang
To: Fengguang Wu
CC: Thomas Gleixner, Yinghai Lu, Suresh Siddha, LKML
Subject: Re: BUG: scheduling while atomic, under native_smp_prepare_cpus()
Message-ID: <5032067E.20505@linux.vnet.ibm.com>
In-Reply-To: <20120820092744.GA3668@localhost>
References: <20120817134944.GA539@localhost> <50320163.5080703@linux.vnet.ibm.com> <20120820092744.GA3668@localhost>

On 08/20/2012 05:27 PM, Fengguang Wu wrote:
> Hi Michael,
>
> On Mon, Aug 20, 2012 at 05:20:35PM +0800, Michael Wang wrote:
>> On 08/17/2012 09:49 PM, Fengguang Wu wrote:
>>
>> Hi, FengGuang
>>
>> native_smp_prepare_cpus() has already disabled preemption before we
>> reach __irq_alloc_descs(), and sleeping in mutex_lock() causes the bug.
>>
>> Maybe the following patch could help to solve the issue (actually I
>> think the real problem is in _cond_resched...).
>
> Is this a debug patch? Since what it does is to conditionally disable
> the warning.

No, I intend this as a solution; it should work, since the bug is
reported during the boot process, before init_post() is called.

We have a precedent in __might_sleep(), which also skips the check when
the system has not fully booted, so I think this approach is acceptable,
but I'm not the one to make the decision...
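For reference, the bail-out at the top of __might_sleep() looks roughly
like this (quoting kernel/sched/core.c of this era from memory, so
please double-check the exact condition):

	/* __might_sleep(): return silently when we are not actually
	 * atomic (preempt count matches the expected offset and irqs
	 * are on), when we are oopsing, or when the system has not
	 * finished booting yet. */
	if ((preempt_count_equals(preempt_offset) && !irqs_disabled()) ||
	    system_state != SYSTEM_RUNNING || oops_in_progress)
		return;

The patch below applies the same system_state == SYSTEM_RUNNING idea to
schedule_debug().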
>
>> I can't test it myself since I can't reproduce the issue on my
>> machine; the kernel_init thread never gets the need-resched flag set
>> at that moment in my case...
>
> I'll try it and report back :)

Appreciate it :)

Regards,
Michael Wang

>
> Thanks,
> Fengguang
>
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 4376c9f..3396c33 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -3321,7 +3321,8 @@ static inline void schedule_debug(struct task_struct *prev)
>>  	 * schedule() atomically, we ignore that path for now.
>>  	 * Otherwise, whine if we are scheduling when we should not be.
>>  	 */
>> -	if (unlikely(in_atomic_preempt_off() && !prev->exit_state))
>> +	if (unlikely(in_atomic_preempt_off() && !prev->exit_state
>> +		&& system_state == SYSTEM_RUNNING))
>>  		__schedule_bug(prev);
>>  	rcu_sleep_check();
>>
>>
>>> Trace one (full config/dmesg attached):
>>>
>>> [ 0.042794] init IO_APIC IRQs
>>> [ 0.043305]  apic 2 pin 0 not connected
>>> [ 0.043953] BUG: scheduling while atomic: swapper/0/1/0x10000002
>>> [ 0.044017] no locks held by swapper/0/1.
>>> [ 0.044692] Pid: 1, comm: swapper/0 Not tainted 3.6.0-rc1-00420-gb7aebb9 #34
>>> [ 0.045861] Call Trace:
>>> [ 0.048071]  [] __schedule_bug+0x5e/0x70
>>> [ 0.048890]  [] __schedule+0x91/0xb10
>>> [ 0.049660]  [] ? vsnprintf+0x33a/0x450
>>> [ 0.050444]  [] ? lg_local_lock+0x6/0x70
>>> [ 0.051256]  [] ? wait_for_xmitr+0x31/0x90
>>> [ 0.052019]  [] ? do_raw_spin_unlock+0xa5/0xf0
>>> [ 0.052903]  [] ? _raw_spin_unlock+0x22/0x30
>>> [ 0.053759]  [] ? up+0x1b/0x70
>>> [ 0.054421]  [] __cond_resched+0x1b/0x30
>>> [ 0.055228]  [] _cond_resched+0x45/0x50
>>> [ 0.056020]  [] mutex_lock_nested+0x28/0x370
>>> [ 0.056884]  [] ? console_unlock+0x3a2/0x4e0
>>> [ 0.057741]  [] __irq_alloc_descs+0x39/0x1c0
>>> [ 0.058589]  [] io_apic_setup_irq_pin+0x2c/0x310
>>> [ 0.060042]  [] setup_IO_APIC+0x101/0x744
>>> [ 0.060878]  [] ? clear_IO_APIC+0x31/0x50
>>> [ 0.061695]  [] native_smp_prepare_cpus+0x538/0x680
>>> [ 0.062644]  [] ? do_one_initcall+0x12c/0x12c
>>> [ 0.063517]  [] ? do_one_initcall+0x12c/0x12c
>>> [ 0.064016]  [] kernel_init+0x4b/0x17f
>>> [ 0.064790]  [] ? do_one_initcall+0x12c/0x12c
>>> [ 0.065660]  [] kernel_thread_helper+0x6/0x10
>>> [ 0.066592] IOAPIC[0]: Set routing entry (2-1 -> 0x41 -> IRQ 1 Mode:0 Active:0 Dest:1)
>>> [ 0.068045] IOAPIC[0]: Set routing entry (2-2 -> 0x51 -> IRQ 0 Mode:0 Active:0 Dest:1)
>>>
>>> Trace two (triggered by another config):
>>>
>>> [ 0.288018] tlb_flushall_shift is 0xffffffff
>>> [ 0.316019] Freeing SMP alternatives: 20k freed
>>> [ 0.364022] BUG: scheduling while atomic: swapper/0/1/0x10000002
>>> [ 0.364022] no locks held by swapper/0/1.
>>> [ 0.368023] Pid: 1, comm: swapper/0 Not tainted 3.6.0-rc1 #1
>>> [ 0.368023] Call Trace:
>>> [ 0.368023]  [<79812e23>] __schedule_bug+0x41/0x53
>>> [ 0.372023]  [<79820393>] __schedule+0x62/0x488
>>> [ 0.376023]  [<792d17ae>] ? radix_tree_lookup+0xa/0xc
>>> [ 0.376023]  [<79071f4e>] ? rcu_irq_exit+0x61/0x66
>>> [ 0.376023]  [<79026be7>] ? irq_exit+0x60/0x6c
>>> [ 0.376023]  [<790035df>] ? do_IRQ+0x6c/0x80
>>> [ 0.380023]  [<7903d794>] __cond_resched+0x16/0x26
>>> [ 0.380023]  [<79820888>] _cond_resched+0x13/0x1c
>>> [ 0.380023]  [<7909f30a>] slab_pre_alloc_hook.isra.44+0x2e/0x33
>>> [ 0.380023]  [<790a09c6>] kmem_cache_alloc+0x1b/0xbb
>>> [ 0.384024]  [<792ce416>] ? alloc_cpumask_var_node+0x1a/0x72
>>> [ 0.384024]  [<792ce416>] alloc_cpumask_var_node+0x1a/0x72
>>> [ 0.384024]  [<792ce486>] alloc_cpumask_var+0xb/0xd
>>> [ 0.388024]  [<792ce493>] zalloc_cpumask_var+0xb/0xd
>>> [ 0.388024]  [<79bfe1fd>] native_smp_prepare_cpus+0x93/0x380
>>> [ 0.388024]  [<79bf6a7e>] ? do_one_initcall+0x10c/0x10c
>>> [ 0.388024]  [<79bf6ac6>] kernel_init+0x48/0x16e
>>> [ 0.392024]  [<79bf6a7e>] ? do_one_initcall+0x10c/0x10c
>>> [ 0.392024]  [<7982241e>] kernel_thread_helper+0x6/0xd
>>> [ 0.400025] smpboot: SMP disabled
>>> [ 0.400025] Performance Events:
>>>
>>> Thanks, Fengguang
>>>
>
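A note on the two traces above, since they look different at first
glance: both sleep inside the preempt_disable()/preempt_enable() pair
that brackets native_smp_prepare_cpus(). A rough sketch of the shape of
that function (arch/x86/kernel/smpboot.c of this era, from memory, so
the details may differ):

	void __init native_smp_prepare_cpus(unsigned int max_cpus)
	{
		unsigned int i;

		preempt_disable();	/* preemption stays off until "out:" */
		...
		for_each_possible_cpu(i) {
			/* GFP_KERNEL allocations that may sleep -> trace two */
			zalloc_cpumask_var(&per_cpu(cpu_sibling_map, i), GFP_KERNEL);
			...
		}
		...
		/* setup_IO_APIC() is also reached from in here, and its
		 * __irq_alloc_descs() -> mutex_lock() path gives trace one */
		...
	out:
		preempt_enable();
	}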