Date: Tue, 2 Oct 2012 23:27:04 +0200 (CEST)
From: Jiri Kosina
To: "Paul E. McKenney"
Cc: "Paul E. McKenney", Josh Triplett, linux-kernel@vger.kernel.org
Subject: Re: Lockdep complains about commit 1331e7a1bb ("rcu: Remove _rcu_barrier() dependency on __stop_machine()")
In-Reply-To: <20121002170149.GC2465@linux.vnet.ibm.com>

On Tue, 2 Oct 2012, Paul E. McKenney wrote:

> > 1331e7a1bbe1f11b19c4327ba0853bee2a606543 is the first bad commit
> > commit 1331e7a1bbe1f11b19c4327ba0853bee2a606543
> > Author: Paul E. McKenney
> > Date:   Thu Aug 2 17:43:50 2012 -0700
> >
> >     rcu: Remove _rcu_barrier() dependency on __stop_machine()
> >
> >     Currently, _rcu_barrier() relies on preempt_disable() to prevent
> >     any CPU from going offline, which in turn depends on CPU hotplug's
> >     use of __stop_machine().
> >
> >     This patch therefore makes _rcu_barrier() use get_online_cpus() to
> >     block CPU-hotplug operations.  This has the added benefit of removing
> >     the need for _rcu_barrier() to adopt callbacks: because CPU-hotplug
> >     operations are excluded, there can be no callbacks to adopt.  This
> >     commit simplifies the code accordingly.
> >
> >     Signed-off-by: Paul E. McKenney
> >     Signed-off-by: Paul E. McKenney
> >     Reviewed-by: Josh Triplett
> > ==
> >
> > is causing lockdep to complain (see the full trace below).
> > I haven't yet had time to analyze what exactly is happening, and
> > probably will not have time to do so until tomorrow, so just sending
> > this as a heads-up in case anyone sees the culprit immediately.
>
> Hmmm...  Does the following patch help?  It swaps the order in which
> rcu_barrier() acquires the hotplug and rcu_barrier locks.

It changed the report slightly (see, for example, the change in the
possible unsafe locking scenario: rcu_sched_state.barrier_mutex vanished
and it is now directly about cpu_hotplug.lock). With the patch applied
I get:

======================================================
[ INFO: possible circular locking dependency detected ]
3.6.0-03888-g3f99f3b #145 Not tainted
-------------------------------------------------------
kworker/u:3/43 is trying to acquire lock:
 (cpu_hotplug.lock){+.+.+.}, at: [] get_online_cpus+0x37/0x50

but task is already holding lock:
 (slab_mutex){+.+.+.}, at: [] kmem_cache_destroy+0x45/0xe0

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (slab_mutex){+.+.+.}:
       [] validate_chain+0x632/0x720
       [] __lock_acquire+0x359/0x580
       [] lock_acquire+0x121/0x190
       [] __mutex_lock_common+0x5c/0x450
       [] mutex_lock_nested+0x3e/0x50
       [] cpuup_callback+0x2f/0xbe
       [] notifier_call_chain+0x93/0x140
       [] __raw_notifier_call_chain+0x9/0x10
       [] _cpu_up+0xc9/0x162
       [] cpu_up+0xbc/0x11b
       [] smp_init+0x6b/0x9f
       [] kernel_init+0x147/0x1dc
       [] kernel_thread_helper+0x4/0x10

-> #0 (cpu_hotplug.lock){+.+.+.}:
       [] check_prev_add+0x3de/0x440
       [] validate_chain+0x632/0x720
       [] __lock_acquire+0x359/0x580
       [] lock_acquire+0x121/0x190
       [] __mutex_lock_common+0x5c/0x450
       [] mutex_lock_nested+0x3e/0x50
       [] get_online_cpus+0x37/0x50
       [] _rcu_barrier+0x22/0x1f0
       [] rcu_barrier_sched+0x10/0x20
       [] rcu_barrier+0x9/0x10
       [] kmem_cache_destroy+0xd1/0xe0
       [] nf_conntrack_cleanup_net+0xe4/0x110 [nf_conntrack]
       [] nf_conntrack_cleanup+0x2a/0x70 [nf_conntrack]
       [] nf_conntrack_net_exit+0x5e/0x80 [nf_conntrack]
       [] ops_exit_list+0x39/0x60
       [] cleanup_net+0xfb/0x1b0
       [] process_one_work+0x26b/0x4c0
       [] worker_thread+0x12e/0x320
       [] kthread+0xde/0xf0
       [] kernel_thread_helper+0x4/0x10

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(slab_mutex);
                               lock(cpu_hotplug.lock);
                               lock(slab_mutex);
  lock(cpu_hotplug.lock);

 *** DEADLOCK ***

4 locks held by kworker/u:3/43:
 #0:  (netns){.+.+.+}, at: [] process_one_work+0x1a2/0x4c0
 #1:  (net_cleanup_work){+.+.+.}, at: [] process_one_work+0x1a2/0x4c0
 #2:  (net_mutex){+.+.+.}, at: [] cleanup_net+0x80/0x1b0
 #3:  (slab_mutex){+.+.+.}, at: [] kmem_cache_destroy+0x45/0xe0

stack backtrace:
Pid: 43, comm: kworker/u:3 Not tainted 3.6.0-03888-g3f99f3b #145
Call Trace:
 [] print_circular_bug+0x10f/0x120
 [] check_prev_add+0x3de/0x440
 [] validate_chain+0x632/0x720
 [] __lock_acquire+0x359/0x580
 [] lock_acquire+0x121/0x190
 [] ? get_online_cpus+0x37/0x50
 [] __mutex_lock_common+0x5c/0x450
 [] ? get_online_cpus+0x37/0x50
 [] ? mark_held_locks+0x80/0x120
 [] ? get_online_cpus+0x37/0x50
 [] mutex_lock_nested+0x3e/0x50
 [] get_online_cpus+0x37/0x50
 [] _rcu_barrier+0x22/0x1f0
 [] rcu_barrier_sched+0x10/0x20
 [] rcu_barrier+0x9/0x10
 [] kmem_cache_destroy+0xd1/0xe0
 [] nf_conntrack_cleanup_net+0xe4/0x110 [nf_conntrack]
 [] nf_conntrack_cleanup+0x2a/0x70 [nf_conntrack]
 [] nf_conntrack_net_exit+0x5e/0x80 [nf_conntrack]
 [] ops_exit_list+0x39/0x60
 [] cleanup_net+0xfb/0x1b0
 [] process_one_work+0x26b/0x4c0
 [] ? process_one_work+0x1a2/0x4c0
 [] ? worker_thread+0x59/0x320
 [] ? net_drop_ns+0x40/0x40
 [] worker_thread+0x12e/0x320
 [] ? manage_workers+0x1a0/0x1a0
 [] kthread+0xde/0xf0
 [] kernel_thread_helper+0x4/0x10
 [] ? retint_restore_args+0x13/0x13
 [] ? __init_kthread_worker+0x70/0x70
 [] ? gs_change+0x13/0x13

--
Jiri Kosina
SUSE Labs