Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1946744AbWKAJ6P (ORCPT ); Wed, 1 Nov 2006 04:58:15 -0500
Received: (majordomo@vger.kernel.org) by vger.kernel.org
	id S1946747AbWKAJ6O (ORCPT ); Wed, 1 Nov 2006 04:58:14 -0500
Received: from 80-218-222-94.dclient.hispeed.ch ([80.218.222.94]:21465 "EHLO steudten.com")
	by vger.kernel.org with ESMTP id S1946744AbWKAJ6O (ORCPT );
	Wed, 1 Nov 2006 04:58:14 -0500
Message-ID: <45486FAA.3070209@steudten.org>
Date: Wed, 01 Nov 2006 10:58:02 +0100
From: "alpha @ steudten Engineering" 
Organization: Steudten Engineering
MIME-Version: 1.0
To: LKML 
Subject: INFO: possible circular locking dependency detected 2.6.18-1.2798
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
X-Mailer: Mailer
X-Check: bc5ad08d3824f7c7972eaa539235374c on steudten.com
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 4305
Lines: 113

FC5 kernel 2.6.18-1.2798

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.18-1.2798self #1
-------------------------------------------------------
kswapd0/201 is trying to acquire lock:
 (&inode->i_mutex){--..}, at: [] mutex_lock+0x1c/0x1f

but task is already holding lock:
 (iprune_mutex){--..}, at: [] mutex_lock+0x1c/0x1f

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #3 (iprune_mutex){--..}:
       [] __lock_acquire+0x82c/0x904
       [] lock_acquire+0x4b/0x6c
       [] __mutex_lock_slowpath+0xb3/0x226
       [] mutex_lock+0x1c/0x1f
       [] invalidate_inodes+0x20/0xcd
       [] generic_shutdown_super+0x45/0xf7
       [] kill_block_super+0x20/0x32
       [] deactivate_super+0x5d/0x6f
       [] mntput_no_expire+0x42/0x71
       [] path_release_on_umount+0x15/0x18
       [] sys_umount+0x1e7/0x21b
       [] sys_oldumount+0xd/0xf
       [] syscall_call+0x7/0xb
       [] 0xffffffff

-> #2 (&type->s_lock_key#9){--..}:
       [] __lock_acquire+0x82c/0x904
       [] lock_acquire+0x4b/0x6c
       [] __mutex_lock_slowpath+0xb3/0x226
       [] mutex_lock+0x1c/0x1f
       [] ext3_orphan_add+0x32/0x1d0
       [] ext3_setattr+0x152/0x1e1
       [] notify_change+0x137/0x2cc
       [] do_truncate+0x53/0x6c
       [] may_open+0x1b6/0x204
       [] open_namei+0x286/0x638
       [] do_filp_open+0x1f/0x35
       [] do_sys_open+0x40/0xb5
       [] sys_open+0x16/0x18
       [] syscall_call+0x7/0xb
       [] 0xffffffff

-> #1 (&inode->i_alloc_sem){--..}:
       [] __lock_acquire+0x82c/0x904
       [] lock_acquire+0x4b/0x6c
       [] down_write+0x28/0x42
       [] notify_change+0xef/0x2cc
       [] do_truncate+0x53/0x6c
       [] may_open+0x1b6/0x204
       [] open_namei+0x286/0x638
       [] do_filp_open+0x1f/0x35
       [] do_sys_open+0x40/0xb5
       [] sys_open+0x16/0x18
       [] syscall_call+0x7/0xb
       [] 0xffffffff

-> #0 (&inode->i_mutex){--..}:
       [] __lock_acquire+0x760/0x904
       [] lock_acquire+0x4b/0x6c
       [] __mutex_lock_slowpath+0xb3/0x226
       [] mutex_lock+0x1c/0x1f
       [] ntfs_put_inode+0x3d/0x75 [ntfs]
       [] iput+0x33/0x6a
       [] ntfs_clear_big_inode+0x99/0xb2 [ntfs]
       [] clear_inode+0xce/0x11f
       [] dispose_list+0x4c/0xd1
       [] shrink_icache_memory+0x18a/0x1b2
       [] shrink_slab+0xd0/0x14a
       [] kswapd+0x2a2/0x379
       [] kthread+0xb0/0xdd
       [] kernel_thread_helper+0x7/0x10
       [] 0xffffffff

other info that might help us debug this:

2 locks held by kswapd0/201:
 #0:  (shrinker_rwsem){----}, at: [] shrink_slab+0x25/0x14a
 #1:  (iprune_mutex){--..}, at: [] mutex_lock+0x1c/0x1f

stack backtrace:
 [] show_trace_log_lvl+0x12/0x25
 [] show_trace+0xd/0x10
 [] dump_stack+0x19/0x1b
 [] print_circular_bug_tail+0x59/0x64
 [] __lock_acquire+0x760/0x904
 [] lock_acquire+0x4b/0x6c
 [] __mutex_lock_slowpath+0xb3/0x226
 [] mutex_lock+0x1c/0x1f
 [] ntfs_put_inode+0x3d/0x75 [ntfs]
 [] iput+0x33/0x6a
 [] ntfs_clear_big_inode+0x99/0xb2 [ntfs]
 [] clear_inode+0xce/0x11f
 [] dispose_list+0x4c/0xd1
 [] shrink_icache_memory+0x18a/0x1b2
 [] shrink_slab+0xd0/0x14a
 [] kswapd+0x2a2/0x379
 [] kthread+0xb0/0xdd
 [] kernel_thread_helper+0x7/0x10
=======================
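
For anyone not used to reading these reports: lockdep is saying that the
reclaim path (chain #0) holds iprune_mutex and then, through the NTFS
inode teardown in ntfs_put_inode(), tries to take an inode->i_mutex,
while the earlier chains (#1-#3, the open/truncate and umount paths)
establish the opposite overall ordering between those lock classes. That
is the classic AB-BA inversion. The fragment below is only a minimal
userspace sketch of that pattern -- made-up names, pthread mutexes, not
the actual NTFS/VFS code -- just to show the shape of the cycle being
reported:

/*
 * Hypothetical illustration only, not kernel code.
 * Two threads take the same pair of mutexes in opposite orders,
 * which is the kind of cycle the lockdep report above describes.
 * Build with: cc -pthread ab_ba.c
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* stands in for iprune_mutex */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* stands in for inode->i_mutex */

/* Path 1, roughly "kswapd": A is held, then B is wanted. */
static void *reclaim_path(void *arg)
{
	pthread_mutex_lock(&lock_a);
	usleep(1000);                /* widen the race window */
	pthread_mutex_lock(&lock_b); /* can block forever against open_path() */
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

/* Path 2, roughly the open/truncate chain: B is held, then A is wanted. */
static void *open_path(void *arg)
{
	pthread_mutex_lock(&lock_b);
	usleep(1000);
	pthread_mutex_lock(&lock_a); /* opposite order: this closes the cycle */
	pthread_mutex_unlock(&lock_a);
	pthread_mutex_unlock(&lock_b);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, reclaim_path, NULL);
	pthread_create(&t2, NULL, open_path, NULL);
	pthread_join(t1, NULL);  /* with both threads racing, this will usually hang */
	pthread_join(t2, NULL);
	puts("got lucky, no deadlock on this run");
	return 0;
}

Note that lockdep complains as soon as it has seen both acquisition
orders, even if the two paths never actually race, which is why the
machine keeps running after printing the report.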