Date: Sun, 27 Dec 2009 13:06:22 +0100
From: Andi Kleen
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, penberg@cs.helsinki.fi
Subject: lockdep possible recursive lock in slab parent->list->rlock in rc2
Message-ID: <20091227120622.GA480@basil.fritz.box>

I get this on an NFS root system while booting. It must be a recent change from the last week; I didn't see it in a post-rc1 git* kernel from last week (I haven't done an exact bisect).

It's triggered by the r8169 driver's close function, but it looks more like a slab problem. I haven't checked in detail whether the locks are really different or lockdep just doesn't know enough classes.
-Andi

=============================================
[ INFO: possible recursive locking detected ]
2.6.33-rc2 #19
---------------------------------------------
swapper/1 is trying to acquire lock:
 (&(&parent->list_lock)->rlock){-.-...}, at: [] cache_flusharray+0x55/0x10a

but task is already holding lock:
 (&(&parent->list_lock)->rlock){-.-...}, at: [] cache_flusharray+0x55/0x10a

other info that might help us debug this:
2 locks held by swapper/1:
 #0:  (rtnl_mutex){+.+.+.}, at: [] rtnl_lock+0x12/0x14
 #1:  (&(&parent->list_lock)->rlock){-.-...}, at: [] cache_flusharray+0x55/0x10a

stack backtrace:
Pid: 1, comm: swapper Not tainted 2.6.33-rc2-MCE6 #19
Call Trace:
 [] __lock_acquire+0xf94/0x1771
 [] ? mark_held_locks+0x4d/0x6b
 [] ? trace_hardirqs_on_caller+0x10b/0x12f
 [] ? sched_clock_local+0x1c/0x80
 [] ? sched_clock_local+0x1c/0x80
 [] lock_acquire+0xbc/0xd9
 [] ? cache_flusharray+0x55/0x10a
 [] _raw_spin_lock+0x31/0x66
 [] ? cache_flusharray+0x55/0x10a
 [] ? kfree_debugcheck+0x11/0x2d
 [] cache_flusharray+0x55/0x10a
 [] ? debug_check_no_locks_freed+0x119/0x12f
 [] kmem_cache_free+0x18f/0x1f2
 [] slab_destroy+0x12b/0x138
 [] free_block+0x161/0x1a2
 [] cache_flusharray+0x9d/0x10a
 [] ? debug_check_no_locks_freed+0x119/0x12f
 [] kfree+0x204/0x23b
 [] ? trace_hardirqs_on+0xd/0xf
 [] skb_release_data+0xc6/0xcb
 [] __kfree_skb+0x19/0x86
 [] consume_skb+0x2b/0x2d
 [] rtl8169_rx_clear+0x7f/0xbb
 [] rtl8169_down+0x12c/0x13b
 [] rtl8169_close+0x30/0x131
 [] ? dev_deactivate+0x168/0x198
 [] dev_close+0x8c/0xae
 [] dev_change_flags+0xba/0x180
 [] ic_close_devs+0x2e/0x48
 [] ip_auto_config+0x914/0xe1e
 [] ? sched_clock_local+0x1c/0x80
 [] ? trace_hardirqs_off+0xd/0xf
 [] ? cpu_clock+0x2d/0x3f
 [] ? lock_release_holdtime+0x24/0x181
 [] ? tcp_congestion_default+0x0/0x12
 [] ? _raw_spin_unlock+0x26/0x2b
 [] ? tcp_congestion_default+0x0/0x12
 [] ? ip_auto_config+0x0/0xe1e
 [] do_one_initcall+0x5a/0x14f
 [] kernel_init+0x141/0x197
 [] kernel_thread_helper+0x4/0x10
 [] ? restore_args+0x0/0x30
 [] ? kernel_init+0x0/0x197
 [] ? kernel_thread_helper+0x0/0x10

IP-Config: Retrying forever (NFS root)...
r8169: eth0: link up

-- 
ak@linux.intel.com -- Speaking for myself only.