Date: Tue, 25 Nov 2008 06:43:57 -0500
From: Dan Noé
To: linux-kernel@vger.kernel.org
Subject: Lockdep warning for iprune_mutex at shrink_icache_memory
Message-ID: <20081125064357.5a4f1420@tuna>

I have experienced the following lockdep warning on 2.6.28-rc6. I would
be happy to help debug, but I don't know this section of code at all.
(My attempt at a plain reading of the inversion is in the P.S. at the
end.)

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.28-rc6git #1
-------------------------------------------------------
rsync/21485 is trying to acquire lock:
 (iprune_mutex){--..}, at: [] shrink_icache_memory+0x84/0x290

but task is already holding lock:
 (&(&ip->i_iolock)->mr_lock){----}, at: [] xfs_ilock+0x75/0xb0 [xfs]

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&(&ip->i_iolock)->mr_lock){----}:
       [] __lock_acquire+0xd49/0x11a0
       [] lock_acquire+0x91/0xc0
       [] down_write_nested+0x57/0x90
       [] xfs_ilock+0xa5/0xb0 [xfs]
       [] xfs_ireclaim+0x46/0x90 [xfs]
       [] xfs_finish_reclaim+0x5e/0x1a0 [xfs]
       [] xfs_reclaim+0x11b/0x120 [xfs]
       [] xfs_fs_clear_inode+0xee/0x120 [xfs]
       [] clear_inode+0xb1/0x130
       [] dispose_list+0x38/0x120
       [] shrink_icache_memory+0x243/0x290
       [] shrink_slab+0x125/0x180
       [] kswapd+0x52a/0x680
       [] kthread+0x4e/0x90
       [] child_rip+0xa/0x11
       [] 0xffffffffffffffff

-> #0 (iprune_mutex){--..}:
       [] __lock_acquire+0xe10/0x11a0
       [] lock_acquire+0x91/0xc0
       [] __mutex_lock_common+0xb3/0x390
       [] mutex_lock_nested+0x44/0x50
       [] shrink_icache_memory+0x84/0x290
       [] shrink_slab+0x125/0x180
       [] do_try_to_free_pages+0x2bb/0x460
       [] try_to_free_pages+0x67/0x70
       [] __alloc_pages_internal+0x23a/0x530
       [] alloc_pages_current+0xad/0x110
       [] new_slab+0x2ab/0x350
       [] __slab_alloc+0x33c/0x440
       [] kmem_cache_alloc+0xd6/0xe0
       [] radix_tree_preload+0x3b/0xb0
       [] add_to_page_cache_locked+0x68/0x110
       [] add_to_page_cache_lru+0x31/0x90
       [] mpage_readpages+0x9f/0x120
       [] xfs_vm_readpages+0x1f/0x30 [xfs]
       [] __do_page_cache_readahead+0x1a1/0x250
       [] ondemand_readahead+0x1cb/0x250
       [] page_cache_async_readahead+0xa9/0xc0
       [] generic_file_aio_read+0x447/0x6c0
       [] xfs_read+0x12f/0x2c0 [xfs]
       [] xfs_file_aio_read+0x56/0x60 [xfs]
       [] do_sync_read+0xf9/0x140
       [] vfs_read+0xc8/0x180
       [] sys_read+0x55/0x90
       [] system_call_fastpath+0x16/0x1b
       [] 0xffffffffffffffff

other info that might help us debug this:

2 locks held by rsync/21485:
 #0:  (&(&ip->i_iolock)->mr_lock){----}, at: [] xfs_ilock+0x75/0xb0 [xfs]
 #1:  (shrinker_rwsem){----}, at: [] shrink_slab+0x37/0x180

stack backtrace:
Pid: 21485, comm: rsync Not tainted 2.6.28-rc6git #1
Call Trace:
 [] print_circular_bug_tail+0xa7/0xf0
 [] __lock_acquire+0xe10/0x11a0
 [] lock_acquire+0x91/0xc0
 [] ? shrink_icache_memory+0x84/0x290
 [] __mutex_lock_common+0xb3/0x390
 [] ? shrink_icache_memory+0x84/0x290
 [] ? shrink_icache_memory+0x84/0x290
 [] ? native_sched_clock+0x13/0x60
 [] mutex_lock_nested+0x44/0x50
 [] shrink_icache_memory+0x84/0x290
 [] shrink_slab+0x125/0x180
 [] do_try_to_free_pages+0x2bb/0x460
 [] try_to_free_pages+0x67/0x70
 [] ? isolate_pages_global+0x0/0x260
 [] __alloc_pages_internal+0x23a/0x530
 [] alloc_pages_current+0xad/0x110
 [] new_slab+0x2ab/0x350
 [] ? __slab_alloc+0x32d/0x440
 [] __slab_alloc+0x33c/0x440
 [] ? radix_tree_preload+0x3b/0xb0
 [] ? ftrace_call+0x5/0x2b
 [] ? radix_tree_preload+0x3b/0xb0
 [] kmem_cache_alloc+0xd6/0xe0
 [] radix_tree_preload+0x3b/0xb0
 [] add_to_page_cache_locked+0x68/0x110
 [] add_to_page_cache_lru+0x31/0x90
 [] mpage_readpages+0x9f/0x120
 [] ? xfs_get_blocks+0x0/0x20 [xfs]
 [] ? __alloc_pages_internal+0xf3/0x530
 [] ? xfs_get_blocks+0x0/0x20 [xfs]
 [] xfs_vm_readpages+0x1f/0x30 [xfs]
 [] __do_page_cache_readahead+0x1a1/0x250
 [] ? __do_page_cache_readahead+0xca/0x250
 [] ondemand_readahead+0x1cb/0x250
 [] ? raid1_congested+0x0/0xf0 [raid1]
 [] ? ftrace_call+0x5/0x2b
 [] page_cache_async_readahead+0xa9/0xc0
 [] generic_file_aio_read+0x447/0x6c0
 [] ? _spin_unlock_irqrestore+0x44/0x70
 [] ? xfs_ilock+0x75/0xb0 [xfs]
 [] xfs_read+0x12f/0x2c0 [xfs]
 [] xfs_file_aio_read+0x56/0x60 [xfs]
 [] do_sync_read+0xf9/0x140
 [] ? autoremove_wake_function+0x0/0x40
 [] ? ftrace_call+0x5/0x2b
 [] ? cap_file_permission+0x9/0x10
 [] ? security_file_permission+0x16/0x20
 [] vfs_read+0xc8/0x180
 [] sys_read+0x55/0x90
 [] system_call_fastpath+0x16/0x1b

Cheers,
Dan

--
/--------------- - - - - - -
| Dan Noé
| http://isomerica.net/~dpn/
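P.S. For anyone who, like me, does not read lockdep reports every day:
as far as I can tell the two chains above reduce to a plain AB-BA
inversion between iprune_mutex and the per-inode XFS iolock. Chain #1
is kswapd taking iprune_mutex in shrink_icache_memory() and then
xfs_ilock() while reclaiming an inode; chain #0 is my rsync holding the
iolock for a read when an allocation drops into direct reclaim and
shrink_icache_memory() wants iprune_mutex. Below is a minimal userspace
sketch of just that shape. The lock names and the pthread framing are
mine, purely for illustration; this is obviously not the kernel code.

/* Illustrative sketch of the reported inversion, nothing more.
 * "iprune" stands in for iprune_mutex, "iolock" for the XFS
 * per-inode i_iolock.  Build with: gcc -pthread abba.c */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t iprune = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t iolock = PTHREAD_MUTEX_INITIALIZER;

/* Chain #1: kswapd -> shrink_icache_memory() takes iprune_mutex,
 * then dispose_list() -> xfs_reclaim() -> xfs_ilock() takes the
 * inode iolock. */
static void *kswapd_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&iprune);	/* shrink_icache_memory() */
	pthread_mutex_lock(&iolock);	/* xfs_ilock() during reclaim */
	pthread_mutex_unlock(&iolock);
	pthread_mutex_unlock(&iprune);
	return NULL;
}

/* Chain #0: the read path takes the iolock via xfs_ilock(), then an
 * allocation falls into direct reclaim, and shrink_icache_memory()
 * wants iprune_mutex. */
static void *read_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&iolock);	/* xfs_read() -> xfs_ilock() */
	pthread_mutex_lock(&iprune);	/* direct reclaim path */
	pthread_mutex_unlock(&iprune);
	pthread_mutex_unlock(&iolock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	/* Run both orders concurrently.  With unlucky timing each
	 * thread grabs its first lock and then blocks on the other's,
	 * which is the deadlock lockdep warns about before it ever
	 * happens. */
	pthread_create(&a, NULL, kswapd_path, NULL);
	pthread_create(&b, NULL, read_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	puts("no deadlock this time");
	return 0;
}

If that reading is right, I'd guess the fix is either not taking the
iolock under iprune_mutex in the reclaim path, or keeping filesystem
reclaim out of allocations made while the iolock is held, but I'll
leave that to people who actually know this code.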