Date: Thu, 23 Oct 2014 01:40:32 +0300
From: Yanko Kaneti
To: Josh Boyer
Cc: Paul McKenney, "Eric W. Biederman", Cong Wang, Kevin Fenzi, netdev,
 "Linux-Kernel@Vger. Kernel. Org"
Subject: Re: localed stuck in recent 3.18 git in copy_net_ns?
Message-ID: <20141022224032.GA1240@declera.com>
References: <20141020204326.GA25668@redhat.com>
 <20141020145359.565fe5e6@voldemort.scrye.com>
 <20141021151225.5df96645@voldemort.scrye.com>
 <8738aghtyj.fsf@x220.int.ebiederm.org>
 <20141022181135.GH4977@linux.vnet.ibm.com>
 <87d29kezby.fsf@x220.int.ebiederm.org>
 <20141022185511.GI4977@linux.vnet.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)

On Wed, Oct 22, 2014 at 15:33, Josh Boyer wrote:
> On Wed, Oct 22, 2014 at 2:55 PM, Paul E. McKenney wrote:
> > On Wed, Oct 22, 2014 at 01:25:37PM -0500, Eric W. Biederman wrote:
> >> "Paul E. McKenney" writes:
> >>
> >> > On Wed, Oct 22, 2014 at 12:53:24PM -0500, Eric W. Biederman wrote:
> >> >> Cong Wang writes:
> >> >>
> >> >> > (Adding Paul and Eric in Cc)
> >> >> >
> >> >> > On Wed, Oct 22, 2014 at 10:12 AM, Josh Boyer wrote:
> >> >> >>
> >> >> >> Someone else is seeing this when they try and modprobe ppp_generic:
> >> >> >>
> >> >> >> [ 240.599195] INFO: task kworker/u16:5:100 blocked for more than 120 seconds.
> >> >> >> [ 240.599338] Not tainted 3.18.0-0.rc1.git2.1.fc22.x86_64 #1
> >> >> >> [ 240.599446] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> >> >> >> [ 240.599583] kworker/u16:5 D ffff8802202db480 12400 100 2 0x00000000
> >> >> >> [ 240.599744] Workqueue: netns cleanup_net
> >> >> >> [ 240.599823] ffff8802202eb9e8 0000000000000096 ffff8802202db480 00000000001d5f00
> >> >> >> [ 240.600066] ffff8802202ebfd8 00000000001d5f00 ffff8800368c3480 ffff8802202db480
> >> >> >> [ 240.600228] ffffffff81ee2690 7fffffffffffffff ffffffff81ee2698 ffffffff81ee2690
> >> >> >> [ 240.600386] Call Trace:
> >> >> >> [ 240.600445] [] schedule+0x29/0x70
> >> >> >> [ 240.600541] [] schedule_timeout+0x26c/0x410
> >> >> >> [ 240.600651] [] ? retint_restore_args+0x13/0x13
> >> >> >> [ 240.600765] [] ? _raw_spin_unlock_irq+0x34/0x50
> >> >> >> [ 240.600879] [] wait_for_completion+0x10c/0x150
> >> >> >> [ 240.601025] [] ? wake_up_state+0x20/0x20
> >> >> >> [ 240.601133] [] _rcu_barrier+0x159/0x200
> >> >> >> [ 240.601237] [] rcu_barrier+0x15/0x20
> >> >> >> [ 240.601335] [] netdev_run_todo+0x6f/0x310
> >> >> >> [ 240.601442] [] ? rollback_registered_many+0x265/0x2e0
> >> >> >> [ 240.601564] [] rtnl_unlock+0xe/0x10
> >> >> >> [ 240.601660] [] default_device_exit_batch+0x156/0x180
> >> >> >> [ 240.601781] [] ? abort_exclusive_wait+0xb0/0xb0
> >> >> >> [ 240.601895] [] ops_exit_list.isra.1+0x53/0x60
> >> >> >> [ 240.602028] [] cleanup_net+0x100/0x1f0
> >> >> >> [ 240.602131] [] process_one_work+0x218/0x850
> >> >> >> [ 240.602241] [] ? process_one_work+0x17f/0x850
> >> >> >> [ 240.602350] [] ? worker_thread+0xe7/0x4a0
> >> >> >> [ 240.602454] [] worker_thread+0x6b/0x4a0
> >> >> >> [ 240.602555] [] ? process_one_work+0x850/0x850
> >> >> >> [ 240.602665] [] kthread+0x10b/0x130
> >> >> >> [ 240.602762] [] ? sched_clock+0x9/0x10
> >> >> >> [ 240.602862] [] ? kthread_create_on_node+0x250/0x250
> >> >> >> [ 240.603004] [] ret_from_fork+0x7c/0xb0
> >> >> >> [ 240.603106] [] ? kthread_create_on_node+0x250/0x250
> >> >> >> [ 240.603224] 4 locks held by kworker/u16:5/100:
> >> >> >> [ 240.603304] #0: ("%s""netns"){.+.+.+}, at: [] process_one_work+0x17f/0x850
> >> >> >> [ 240.603495] #1: (net_cleanup_work){+.+.+.}, at: [] process_one_work+0x17f/0x850
> >> >> >> [ 240.603691] #2: (net_mutex){+.+.+.}, at: [] cleanup_net+0x8c/0x1f0
> >> >> >> [ 240.603869] #3: (rcu_sched_state.barrier_mutex){+.+...}, at: [] _rcu_barrier+0x35/0x200
> >> >> >> [ 240.604211] INFO: task modprobe:1387 blocked for more than 120 seconds.
> >> >> >> [ 240.604329] Not tainted 3.18.0-0.rc1.git2.1.fc22.x86_64 #1
> >> >> >> [ 240.604434] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> >> >> >> [ 240.604570] modprobe D ffff8800cb4f1a40 13112 1387 1386 0x00000080
> >> >> >> [ 240.604719] ffff8800cafbbbe8 0000000000000096 ffff8800cb4f1a40 00000000001d5f00
> >> >> >> [ 240.604878] ffff8800cafbbfd8 00000000001d5f00 ffff880223280000 ffff8800cb4f1a40
> >> >> >> [ 240.605068] ffff8800cb4f1a40 ffffffff81f8fb48 0000000000000246 ffff8800cb4f1a40
> >> >> >> [ 240.605228] Call Trace:
> >> >> >> [ 240.605283] [] schedule_preempt_disabled+0x31/0x80
> >> >> >> [ 240.605400] [] mutex_lock_nested+0x183/0x440
> >> >> >> [ 240.605510] [] ? register_pernet_subsys+0x1f/0x50
> >> >> >> [ 240.605626] [] ? register_pernet_subsys+0x1f/0x50
> >> >> >> [ 240.605757] [] ? 0xffffffffa0701000
> >> >> >> [ 240.605854] [] register_pernet_subsys+0x1f/0x50
> >> >> >> [ 240.606005] [] br_init+0x48/0xd3 [bridge]
> >> >> >> [ 240.606112] [] do_one_initcall+0xd8/0x210
> >> >> >> [ 240.606224] [] load_module+0x20c2/0x2870
> >> >> >> [ 240.606327] [] ? store_uevent+0x70/0x70
> >> >> >> [ 240.606433] [] ? lock_release_non_nested+0x3c6/0x3d0
> >> >> >> [ 240.606557] [] SyS_init_module+0xe7/0x140
> >> >> >> [ 240.606664] [] system_call_fastpath+0x12/0x17
> >> >> >> [ 240.606773] 1 lock held by modprobe/1387:
> >> >> >> [ 240.606845] #0: (net_mutex){+.+.+.}, at: [] register_pernet_subsys+0x1f/0x50
> >> >> >> [ 240.607114] INFO: task modprobe:1466 blocked for more than 120 seconds.
> >> >> >> [ 240.607231] Not tainted 3.18.0-0.rc1.git2.1.fc22.x86_64 #1
> >> >> >> [ 240.607337] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> >> >> >> [ 240.607473] modprobe D ffff88020fbab480 13096 1466 1399 0x00000084
> >> >> >> [ 240.607622] ffff88020d1bbbe8 0000000000000096 ffff88020fbab480 00000000001d5f00
> >> >> >> [ 240.607791] ffff88020d1bbfd8 00000000001d5f00 ffffffff81e1b580 ffff88020fbab480
> >> >> >> [ 240.607949] ffff88020fbab480 ffffffff81f8fb48 0000000000000246 ffff88020fbab480
> >> >> >> [ 240.608138] Call Trace:
> >> >> >> [ 240.608193] [] schedule_preempt_disabled+0x31/0x80
> >> >> >> [ 240.608316] [] mutex_lock_nested+0x183/0x440
> >> >> >> [ 240.608425] [] ? register_pernet_device+0x1d/0x70
> >> >> >> [ 240.608542] [] ? register_pernet_device+0x1d/0x70
> >> >> >> [ 240.608662] [] ? 0xffffffffa071d000
> >> >> >> [ 240.608759] [] register_pernet_device+0x1d/0x70
> >> >> >> [ 240.608881] [] ppp_init+0x20/0x1000 [ppp_generic]
> >> >> >> [ 240.609021] [] do_one_initcall+0xd8/0x210
> >> >> >> [ 240.609131] [] load_module+0x20c2/0x2870
> >> >> >> [ 240.609235] [] ? store_uevent+0x70/0x70
> >> >> >> [ 240.609339] [] ? lock_release_non_nested+0x3c6/0x3d0
> >> >> >> [ 240.609462] [] SyS_init_module+0xe7/0x140
> >> >> >> [ 240.609568] [] system_call_fastpath+0x12/0x17
> >> >> >> [ 240.609677] 1 lock held by modprobe/1466:
> >> >> >> [ 240.609749] #0: (net_mutex){+.+.+.}, at: [] register_pernet_device+0x1d/0x70
> >> >> >>
> >> >> >> Looks like contention on net_mutex or something, but I honestly have
> >> >> >> no idea yet. I can't recreate it myself at the moment or I would
> >> >> >> bisect.
> >> >> >>
> >> >> >> Has nobody else run into this with the pre-3.18 kernels? Fedora isn't
> >> >> >> carrying any patches in this area.
> >> >>
> >> >> > I am not aware of any change in net/core/dev.c related here,
> >> >> > so I guess it's a bug in rcu_barrier().
> >> >>
> >> >> From the limited trace data I see in this email I have to agree.
> >> >>
> >> >> It looks like for some reason rcu_barrier is taking forever
> >> >> while the rtnl_lock is held in cleanup_net. Because the
> >> >> rtnl_lock is held, modprobe of the ppp driver is getting stuck.
> >> >>
> >> >> Is it possible we have an AB-BA deadlock between the rtnl_lock
> >> >> and rcu, with something the module loading code assumes?
> >> >
> >> > I am not aware of RCU ever acquiring rtnl_lock, not directly, anyway.
> >>
> >> Does the module loading code do something strange with rcu? Perhaps
> >> blocking an rcu grace period until the module loading completes?
> >>
> >> If the module loading somehow blocks an rcu grace period, that would
> >> create an AB deadlock, because loading the ppp module grabs the
> >> rtnl_lock. And elsewhere we have the rtnl_lock waiting for an rcu grace
> >> period.
> >>
> >> I would think trying and failing to get the rtnl_lock would sleep and
> >> thus let any rcu grace period happen, but shrug.
> >>
> >> It looks like something is holding up the rcu grace period, and causing
> >> this. Although it is possible that something is causing cleanup_net
> >> to run slowly and we are just seeing that slowness show up in
> >> rcu_barrier, as that is one of the slower bits. With a single trace I
> >> can't definitely say that the rcu barrier is getting stuck, but it
> >> certainly looks that way.
> >
> > Don't get me wrong -- the fact that this kthread appears to have
> > blocked within rcu_barrier() for 120 seconds means that something is
> > most definitely wrong here. I am surprised that there are no RCU CPU
> > stall warnings, but perhaps the blockage is in the callback execution
> > rather than grace-period completion. Or something is preventing this
> > kthread from starting up after the wake-up callback executes. Or...
> >
> > Is this thing reproducible?
>
> I've added Yanko on CC, who reported the backtrace above and can
> recreate it reliably. Apparently reverting the RCU merge commit
> (d6dd50e) and rebuilding the latest after that does not show the
> issue. I'll let Yanko explain more and answer any questions you have.

- It is reproducible.
- I've done another build here to double-check, and it's definitely the rcu
  merge that's causing it.

Don't think I'll be able to dig deeper, but I can do testing if needed.
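
For reference, a minimal userspace sketch of the blocking pattern the lock lists
above show: the netns kworker holds net_mutex while it waits in rcu_barrier(),
so both modprobe tasks block in register_pernet_subsys()/register_pernet_device().
This is an illustrative pthreads model only, not kernel code; the names
cleanup_net_model, modprobe_model and rcu_callbacks_done are made up here,
while net_mutex, cleanup_net, rcu_barrier and the register_pernet_* calls come
from the traces.

/* Build with: gcc -pthread hang_model.c -o hang_model */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t net_mutex   = PTHREAD_MUTEX_INITIALIZER; /* models net_mutex        */
static pthread_mutex_t barrier_mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  barrier_cv  = PTHREAD_COND_INITIALIZER;
static int rcu_callbacks_done;      /* never set: models the barrier that never completes  */

/* Models the "netns" kworker: cleanup_net() takes net_mutex, then
 * rtnl_unlock() -> netdev_run_todo() -> rcu_barrier() waits indefinitely. */
static void *cleanup_net_model(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&net_mutex);
    pthread_mutex_lock(&barrier_mtx);
    while (!rcu_callbacks_done)                  /* stand-in for the stuck rcu_barrier()    */
        pthread_cond_wait(&barrier_cv, &barrier_mtx);
    pthread_mutex_unlock(&barrier_mtx);
    pthread_mutex_unlock(&net_mutex);
    return NULL;
}

/* Models modprobe bridge/ppp_generic: register_pernet_subsys() or
 * register_pernet_device() blocks on net_mutex, which is never released. */
static void *modprobe_model(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&net_mutex);
    puts("pernet registration finished");        /* never reached in this model             */
    pthread_mutex_unlock(&net_mutex);
    return NULL;
}

int main(void)
{
    pthread_t kworker, modprobe;

    pthread_create(&kworker, NULL, cleanup_net_model, NULL);
    sleep(1);                                    /* let the cleanup thread take net_mutex   */
    pthread_create(&modprobe, NULL, modprobe_model, NULL);
    sleep(2);
    puts("both threads still blocked, matching the hung-task reports above");
    return 0;                                    /* exiting the process ends the demo       */
}

If the wait inside rcu_barrier() really is the stuck step, this simple
"mutex held across an indefinite wait" pattern would account for all three
hung tasks above without needing a classic AB-BA lock inversion.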
--Yanko