Date: Wed, 12 Feb 2014 08:08:41 +1100
From: Dave Chinner <david@fromorbit.com>
To: Dave Jones, Linux Kernel, xfs@oss.sgi.com
Subject: Re: 3.14-rc2 XFS backtrace because irqs_disabled.
Message-ID: <20140211210841.GM13647@dastard>
References: <20140211172707.GA1749@redhat.com>
In-Reply-To: <20140211172707.GA1749@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Feb 11, 2014 at 12:27:07PM -0500, Dave Jones wrote:
> BUG: sleeping function called from invalid context at mm/mempool.c:203
> in_atomic(): 0, irqs_disabled(): 1, pid: 27511, name: trinity-c3
> 5 locks held by trinity-c3/27511:
>  #0: (sb_writers#9){......}, at: [] mnt_want_write+0x24/0x50
>  #1: (&type->i_mutex_dir_key#3){......}, at: [] do_last.isra.51+0x294/0x11f0
>  #2: (sb_internal#2){......}, at: [] xfs_trans_alloc+0x24/0x40 [xfs]
>  #3: (&(&ip->i_lock)->mr_lock/1){......}, at: [] xfs_ilock+0x16f/0x1b0 [xfs]
>  #4: (&(&ip->i_lock)->mr_lock){......}, at: [] xfs_ilock_nowait+0x184/0x200 [xfs]
> CPU: 3 PID: 27511 Comm: trinity-c3 Not tainted 3.14.0-rc2+ #111
>  ffffffff83a3fd56 0000000045a3849a ffff88009f368f60 ffffffff8372afea
>  0000000000000000 ffff88009f368f88 ffffffff8309ddb5 0000000000000010
>  ffff880243566288 0000000000000008 ffff88009f369008 ffffffff831534d3
> Call Trace:
>  [] dump_stack+0x4e/0x7a
>  [] __might_sleep+0x105/0x150
>  [] mempool_alloc+0xa3/0x170
>  [] ? deactivate_slab+0x51a/0x590
>  [] bio_alloc_bioset+0x156/0x210
>  [] _xfs_buf_ioapply+0x1c1/0x3c0 [xfs]
>  [] ? xlog_bdstrat+0x22/0x60 [xfs]
>  [] xfs_buf_iorequest+0x6b/0xf0 [xfs]
>  [] xlog_bdstrat+0x22/0x60 [xfs]
>  [] xlog_sync+0x3a7/0x5b0 [xfs]
>  [] xlog_state_release_iclog+0x10f/0x120 [xfs]
>  [] xlog_write+0x6f0/0x800 [xfs]
>  [] xlog_cil_push+0x2f1/0x410 [xfs]
>  [] xlog_cil_force_lsn+0x1d8/0x210 [xfs]
>  [] ? xfs_bmbt_get_all+0x18/0x20 [xfs]
>  [] _xfs_log_force+0x70/0x290 [xfs]
>  [] ? get_parent_ip+0xd/0x50
>  [] xfs_log_force+0x26/0xb0 [xfs]
>  [] ? _xfs_buf_find+0x1f6/0x3c0 [xfs]
>  [] xfs_buf_lock+0x133/0x140 [xfs]
>  [] _xfs_buf_find+0x1f6/0x3c0 [xfs]
>  [] xfs_buf_get_map+0x2a/0x1b0 [xfs]
>  [] xfs_trans_get_buf_map+0x1a1/0x240 [xfs]
>  [] xfs_da_get_buf+0xbd/0x100 [xfs]
>  [] xfs_dir3_data_init+0x59/0x1d0 [xfs]
>  [] ? xfs_dir2_grow_inode+0x13b/0x150 [xfs]
>  [] xfs_dir2_sf_to_block+0x17e/0x7b0 [xfs]
>  [] ? xfs_dir2_sfe_get_ino+0x1a/0x20 [xfs]
>  [] ? xfs_dir2_sf_check.isra.7+0x114/0x1b0 [xfs]
>  [] ? xfs_da_compname+0x1f/0x30 [xfs]
>  [] ? xfs_dir2_sf_lookup+0x303/0x310 [xfs]
>  [] xfs_dir2_sf_addname+0x348/0x6d0 [xfs]
>  [] ? xfs_setup_inode+0x1cd/0x320 [xfs]
>  [] xfs_dir_createname+0x184/0x1e0 [xfs]
>  [] xfs_create+0x469/0x580 [xfs]
>  [] xfs_vn_mknod+0xc4/0x1e0 [xfs]
>  [] xfs_vn_create+0x13/0x20 [xfs]
>  [] vfs_create+0x95/0xc0
>  [] do_last.isra.51+0x9f8/0x11f0
>  [] ? link_path_walk+0x81/0x870
>  [] path_openat+0xc9/0x620
>  [] ? put_dec+0x72/0x90
>  [] do_filp_open+0x4d/0xb0
>  [] file_open_name+0xfe/0x160
>  [] filp_open+0x44/0x60
>  [] do_coredump+0x602/0xf60
>  [] get_signal_to_deliver+0x2b8/0x6b0
>  [] do_signal+0x57/0x9d0
>  [] ? __acct_update_integrals+0x8e/0x120
>  [] ? __schedule+0x60/0x850
>  [] ? preempt_count_sub+0x6b/0xf0
>  [] ? _raw_spin_unlock+0x31/0x50
>  [] ? vtime_account_user+0x91/0xa0
>  [] ? context_tracking_user_exit+0x9b/0x100
>  [] do_notify_resume+0x5c/0xa0
>  [] retint_signal+0x46/0x90

There's nowhere in this XFS stack that disables interrupts, so I think
it's either above or below XFS that is causing this problem.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel"
in the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/