Date: Tue, 8 Oct 2013 16:01:44 -0700
From: "Paul E. McKenney"
To: Dave Chinner
Cc: Dave Jones, Linux Kernel, xfs@oss.sgi.com
Subject: Re: xfs lockdep trace after unlink
Message-ID: <20131008230144.GI5790@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
References: <20131008212056.GA7467@redhat.com> <20131008213742.GB4446@dastard> <20131008215439.GA5790@linux.vnet.ibm.com> <20131008222557.GC4446@dastard>
In-Reply-To: <20131008222557.GC4446@dastard>

On Wed, Oct 09, 2013 at 09:25:57AM +1100, Dave Chinner wrote:
> On Tue, Oct 08, 2013 at 02:54:39PM -0700, Paul E. McKenney wrote:
> > On Wed, Oct 09, 2013 at 08:37:42AM +1100, Dave Chinner wrote:
> > > On Tue, Oct 08, 2013 at 05:20:56PM -0400, Dave Jones wrote:
> > > > I was deleting a kernel tree, when this happened..
> > > > RCU, or xfs ?
> > > >
> > > > BUG: MAX_LOCKDEP_CHAINS too low!
> > >
> > > Or lockdep?
> > >
> > > > turning off the locking correctness validator.
> > > > Please attach the output of /proc/lock_stat to the bug report
> > > > CPU: 2 PID: 8109 Comm: rm Not tainted 3.12.0-rc4+ #96
> > > >  ffffffff824bc2e0 ffff880026f416a8 ffffffff8172d798 41e619a2e5098827
> > > >  ffff880026f41768 ffffffff810cc91f 0000000000000002 000010a02977b643
> > > >  ffff880026f416d8 0000000000000212 ffff880026f416f8 ffffffff810c7329
> > > > Call Trace:
> > > >  [] dump_stack+0x4e/0x82
> > > >  [] __lock_acquire+0x1b7f/0x1be0
> > > >  [] ? get_lock_stats+0x19/0x60
> > > >  [] ? lock_release_holdtime.part.29+0x9d/0x160
> > > >  [] lock_acquire+0x93/0x200
> > > >  [] ? try_to_wake_up+0x22a/0x350
> > > >  [] _raw_spin_lock+0x40/0x80
> > > >  [] ? try_to_wake_up+0x22a/0x350
> > > >  [] try_to_wake_up+0x22a/0x350
> > > >  [] default_wake_function+0x12/0x20
> > > >  [] autoremove_wake_function+0x18/0x40
> > > >  [] ? __wake_up+0x23/0x50
> > > >  [] __wake_up_common+0x58/0x90
> > > >  [] __wake_up+0x39/0x50
> > > >  [] rcu_report_qs_rsp+0x48/0x70
> > > >  [] rcu_report_unblock_qs_rnp+0x84/0x90
> > > >  [] ? rcu_read_unlock_special+0x9f/0x4e0
> > > >  [] rcu_read_unlock_special+0x334/0x4e0
> > > >  [] ? trace_hardirqs_off_caller+0x1f/0xc0
> > > >  [] __rcu_read_unlock+0x8c/0x90
> > > >  [] xfs_perag_get+0xde/0x2a0 [xfs]
> > > >  [] ? xfs_perag_get+0x5/0x2a0 [xfs]
> > > >  [] _xfs_buf_find+0xd6/0x480 [xfs]
> > > >  [] xfs_buf_get_map+0x2a/0x260 [xfs]
> > > >  [] xfs_buf_read_map+0x2c/0x200 [xfs]
> > > >  [] xfs_trans_read_buf_map+0x4b9/0xa70 [xfs]
> > > >  [] xfs_da_read_buf+0xb8/0x340 [xfs]
> > > >  [] ? mark_held_locks+0xbb/0x140
> > > >  [] xfs_dir3_block_read+0x39/0x80 [xfs]
> > > >  [] xfs_dir2_block_lookup_int+0x40/0x260 [xfs]
> > > >  [] xfs_dir2_block_removename+0x3d/0x390 [xfs]
> > > >  [] ? xfs_bmap_last_offset+0x4a/0xa0 [xfs]
> > > >  [] xfs_dir_removename+0x11c/0x180 [xfs]
> > > >  [] xfs_remove+0x2e5/0x510 [xfs]
> > > >  [] xfs_vn_unlink+0x4b/0x90 [xfs]
> > > >  [] vfs_unlink+0x90/0x100
> > > >  [] do_unlinkat+0x17f/0x240
> > > >  [] ? syscall_trace_enter+0x145/0x2a0
> > > >  [] SyS_unlinkat+0x1b/0x40
> > > >  [] tracesys+0xdd/0xe2
> > >
> > > It's hard to see what in XFS is causing this. You're reading a
> > > single block directory, which means we're holding two inode locks
> > > here, and then we've done a lookup on a radix tree under
> > > rcu_read_lock(). Hence I can't see how we've overrun the lockdep
> > > chain depth in the XFS code path. FWIW, it's thrown this warning
> > > when calling rcu_read_unlock() here:
> > >
> > > struct xfs_perag *
> > > xfs_perag_get(
> > > 	struct xfs_mount	*mp,
> > > 	xfs_agnumber_t		agno)
> > > {
> > > 	struct xfs_perag	*pag;
> > > 	int			ref = 0;
> > >
> > > 	rcu_read_lock();
> > > 	pag = radix_tree_lookup(&mp->m_perag_tree, agno);
> > > 	if (pag) {
> > > 		ASSERT(atomic_read(&pag->pag_ref) >= 0);
> > > 		ref = atomic_inc_return(&pag->pag_ref);
> > > 	}
> > > >>>>>>	rcu_read_unlock();
> >
> > Would xfs be holding one of the scheduler's rq or pi locks at this
> > point? That could result in deadlock.
>
> XFS doesn't have any hooks into the scheduler at all. So if there is
> a problem with scheduler locks, then it's been leaked by the scheduler
> or something intimately familiar with the scheduler...

OK, that is what I thought, but had to ask.

> > But I doubt that this is the problem, unless radix_tree_lookup() grabs
> > one and returns with it held.
>
> Same thing - if the radix tree code returns with a scheduler lock
> held then there's a bug in the scheduler somewhere...
>
> > Otherwise, you have interrupts disabled
> > throughout the RCU read-side critical section, and thus are guaranteed
> > to take the lockless fastpath through rcu_read_unlock().  (As opposed
> > to merely having extremely high probability of taking that fastpath.)
>
> XFS doesn't disable interrupts anywhere itself, so I'm assuming that
> you are talking about something that is done internally in the
> rcu_read_lock()/unlock() calls?

No, it is just that the scheduler locks of concern are irq-disabled
spinlocks.  So unless interrupts are disabled, you cannot (or at least
really should not) be holding them.

In short, looks like something else is going on here.

							Thanx, Paul
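
To make the failure mode under discussion concrete, here is a minimal
sketch of the two patterns Paul describes. It is an illustration only,
not code from XFS or from this thread: fake_rq_lock, hazardous_pattern()
and safe_pattern() are hypothetical names, with the lock standing in for
a scheduler rq/pi lock (which in the real kernel is always taken with
interrupts disabled).

#include <linux/spinlock.h>
#include <linux/rcupdate.h>

/* Hypothetical stand-in for a scheduler rq/pi lock (illustration only). */
static DEFINE_RAW_SPINLOCK(fake_rq_lock);

/*
 * Hazardous pattern under preemptible RCU: if the task is preempted
 * inside the read-side critical section and then acquires a
 * scheduler-style lock, rcu_read_unlock() must take the slowpath,
 * rcu_read_unlock_special(), which -- as in the trace above -- can
 * reach rcu_report_qs_rsp() -> __wake_up() -> try_to_wake_up() and
 * thus try to acquire scheduler locks, possibly one already held.
 */
static void hazardous_pattern(void)
{
	unsigned long flags;

	rcu_read_lock();
	/* ...preemption can occur here... */
	raw_spin_lock_irqsave(&fake_rq_lock, flags);
	rcu_read_unlock();	/* slowpath may want scheduler locks */
	raw_spin_unlock_irqrestore(&fake_rq_lock, flags);
}

/*
 * Safe pattern, per Paul's point above: with interrupts disabled across
 * the *entire* read-side critical section, the section cannot be
 * preempted, so rcu_read_unlock() is guaranteed to take the lockless
 * fastpath and never touches scheduler locks.
 */
static void safe_pattern(void)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&fake_rq_lock, flags);
	rcu_read_lock();
	/* ...read-side accesses... */
	rcu_read_unlock();	/* lockless fastpath */
	raw_spin_unlock_irqrestore(&fake_rq_lock, flags);
}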