Date: Thu, 14 Jun 2012 11:20:26 +1000
From: Dave Chinner
To: Fengguang Wu
Cc: Christoph Hellwig, linux-fsdevel@vger.kernel.org, LKML
Subject: Re: xfs ip->i_lock: inconsistent {IN-RECLAIM_FS-W} -> {RECLAIM_FS-ON-W} usage
Message-ID: <20120614012026.GL3019@devil.redhat.com>
References: <20120612012134.GA7706@localhost> <20120613123932.GA1445@localhost>
In-Reply-To: <20120613123932.GA1445@localhost>

On Wed, Jun 13, 2012 at 08:39:32PM +0800, Fengguang Wu wrote:
> Hi Christoph, Dave,
>
> I got this lockdep warning on XFS when running the xfs tests:
>
> [ 704.832019] =================================
> [ 704.832019] [ INFO: inconsistent lock state ]
> [ 704.832019] 3.5.0-rc1+ #8 Tainted: G W
> [ 704.832019] ---------------------------------
> [ 704.832019] inconsistent {IN-RECLAIM_FS-W} -> {RECLAIM_FS-ON-W} usage.
> [ 704.832019] fsstress/11619 [HC0[0]:SC0[0]:HE1:SE1] takes:
> [ 704.832019]  (&(&ip->i_lock)->mr_lock){++++?.}, at: [] xfs_ilock_nowait+0xd7/0x1d0
> [ 704.832019] {IN-RECLAIM_FS-W} state was registered at:
> [ 704.832019]   [] mark_irqflags+0x12d/0x13e
> [ 704.832019]   [] __lock_acquire+0x243/0x3f9
> [ 704.832019]   [] lock_acquire+0x112/0x13d
> [ 704.832019]   [] down_write_nested+0x54/0x8b
> [ 704.832019]   [] xfs_ilock+0xd8/0x17d
> [ 704.832019]   [] xfs_reclaim_inode+0x4a/0x2cb
> [ 704.832019]   [] xfs_reclaim_inodes_ag+0x1b5/0x28e
> [ 704.832019]   [] xfs_reclaim_inodes_nr+0x33/0x3a
> [ 704.832019]   [] xfs_fs_free_cached_objects+0x15/0x17
> [ 704.832019]   [] prune_super+0x103/0x154
> [ 704.832019]   [] shrink_slab+0x1ec/0x316
> [ 704.832019]   [] balance_pgdat+0x308/0x618
> [ 704.832019]   [] kswapd+0x1c3/0x1dc
> [ 704.832019]   [] kthread+0xaf/0xb7
> [ 704.832019]   [] kernel_thread_helper+0x4/0x10
......
> [ 704.832019] stack backtrace:
> [ 704.832019] Pid: 11619, comm: fsstress Tainted: G W 3.5.0-rc1+ #8
> [ 704.832019] Call Trace:
> [ 704.832019]  [] print_usage_bug+0x1f5/0x206
> [ 704.832019]  [] ? check_usage_forwards+0xa6/0xa6
> [ 704.832019]  [] mark_lock_irq+0x6f/0x120
> [ 704.832019]  [] mark_lock+0xaf/0x122
> [ 704.832019]  [] mark_held_locks+0x6d/0x95
> [ 704.832019]  [] ? local_clock+0x36/0x4d
> [ 704.832019]  [] __lockdep_trace_alloc+0x6d/0x6f
> [ 704.832019]  [] lockdep_trace_alloc+0x3d/0x57
> [ 704.832019]  [] kmem_cache_alloc_node_trace+0x47/0x1b4
> [ 704.832019]  [] ? lock_release_nested+0x9f/0xa6
> [ 704.832019]  [] ? _xfs_buf_find+0xaa/0x302
> [ 704.832019]  [] ? new_vmap_block.constprop.18+0x3a/0x1de
> [ 704.832019]  [] new_vmap_block.constprop.18+0x3a/0x1de
> [ 704.832019]  [] vb_alloc.constprop.16+0x204/0x225
> [ 704.832019]  [] vm_map_ram+0x32/0xaa
> [ 704.832019]  [] _xfs_buf_map_pages+0xb3/0xf5
> [ 704.832019]  [] xfs_buf_get+0xd3/0x1ac
> [ 704.832019]  [] xfs_trans_get_buf+0x180/0x244
> [ 704.832019]  [] xfs_da_do_buf+0x2a0/0x5cc
> [ 704.832019]  [] xfs_da_get_buf+0x21/0x23
> [ 704.832019]  [] xfs_dir2_data_init+0x44/0xf9
> [ 704.832019]  [] xfs_dir2_sf_to_block+0x1ef/0x5d8

This is a bug in vm_map_ram(): it makes an unconditional GFP_KERNEL
allocation here, while we are in a GFP_NOFS context. There is no way
to pass a gfp_mask to vm_map_ram(), so until it grows one we can't
fix this on the XFS side...

Cheers,

Dave.
--
Dave Chinner
dchinner@redhat.com
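
For anyone following the trace, a minimal sketch of the constraint being
described, assuming the 3.5-era vm_map_ram() signature; map_dir_buffer()
is a hypothetical stand-in for _xfs_buf_map_pages(), not the actual XFS
code:

/*
 * Sketch only: illustrates why a GFP_NOFS caller cannot safely use
 * vm_map_ram().  map_dir_buffer() is a hypothetical simplification
 * of _xfs_buf_map_pages() from the trace above.
 */
#include <linux/vmalloc.h>
#include <linux/mm.h>

static void *map_dir_buffer(struct page **pages, unsigned int count)
{
	/*
	 * The caller is inside a transaction and may hold ip->i_lock,
	 * so any allocation made from here must be GFP_NOFS: entering
	 * filesystem reclaim could recurse onto the same lock, which
	 * is exactly the inconsistency lockdep reports above.
	 *
	 * vm_map_ram() takes no gfp_mask.  When it needs a new vmap
	 * block (vb_alloc() -> new_vmap_block() in the trace), it
	 * allocates with GFP_KERNEL regardless of the caller's
	 * context, and there is no parameter to override that.
	 */
	return vm_map_ram(pages, count, -1 /* any node */, PAGE_KERNEL);
}

The sketch makes the shape of the fix visible: either vm_map_ram() gains
a gfp_mask argument, or callers need some way to mark the task as
unable to recurse into filesystem reclaim before calling it.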