Date: Tue, 09 Jun 2015 13:54:15 +0200
From: Morten Stevens
To: linux-kernel@vger.kernel.org
Subject: linux 4.1-rc7 deadlock
Message-ID: <5576D3E7.40302@fedoraproject.org>

Hi,

This deadlock appears directly after each restart:

[ 28.177939] ======================================================
[ 28.177959] [ INFO: possible circular locking dependency detected ]
[ 28.177980] 4.1.0-0.rc7.git0.1.fc23.x86_64+debug #1 Tainted: G W
[ 28.178002] -------------------------------------------------------
[ 28.178022] sshd/1764 is trying to acquire lock:
[ 28.178037]  (&isec->lock){+.+.+.}, at: [] inode_doinit_with_dentry+0xc5/0x6a0
[ 28.178078] but task is already holding lock:
[ 28.178097]  (&mm->mmap_sem){++++++}, at: [] vm_mmap_pgoff+0x8f/0xf0
[ 28.178131] which lock already depends on the new lock.
[ 28.178157] the existing dependency chain (in reverse order) is:
[ 28.178180] -> #2 (&mm->mmap_sem){++++++}:
[ 28.178201]        [] lock_acquire+0xc7/0x2a0
[ 28.178225]        [] might_fault+0x8c/0xb0
[ 28.178248]        [] filldir+0x9a/0x130
[ 28.178269]        [] xfs_dir2_block_getdents.isra.12+0x1a6/0x1d0 [xfs]
[ 28.178330]        [] xfs_readdir+0x1c4/0x360 [xfs]
[ 28.178368]        [] xfs_file_readdir+0x2b/0x30 [xfs]
[ 28.178404]        [] iterate_dir+0x9a/0x140
[ 28.178425]        [] SyS_getdents+0x91/0x120
[ 28.178447]        [] system_call_fastpath+0x12/0x76
[ 28.178471] -> #1 (&xfs_dir_ilock_class){++++.+}:
[ 28.178494]        [] lock_acquire+0xc7/0x2a0
[ 28.178515]        [] down_read_nested+0x57/0xa0
[ 28.178538]        [] xfs_ilock+0x171/0x390 [xfs]
[ 28.178579]        [] xfs_ilock_attr_map_shared+0x38/0x50 [xfs]
[ 28.178618]        [] xfs_attr_get+0xbd/0x1b0 [xfs]
[ 28.178651]        [] xfs_xattr_get+0x3d/0x80 [xfs]
[ 28.178688]        [] generic_getxattr+0x4f/0x70
[ 28.178711]        [] inode_doinit_with_dentry+0x172/0x6a0
[ 28.178737]        [] sb_finish_set_opts+0xdb/0x260
[ 28.178759]        [] selinux_set_mnt_opts+0x331/0x670
[ 28.178783]        [] superblock_doinit+0x77/0xf0
[ 28.178804]        [] delayed_superblock_init+0x10/0x20
[ 28.178849]        [] iterate_supers+0xba/0x120
[ 28.178872]        [] selinux_complete_init+0x33/0x40
[ 28.178897]        [] security_load_policy+0x103/0x640
[ 28.178920]        [] sel_write_load+0xb6/0x790
[ 28.179482]        [] __vfs_write+0x37/0x110
[ 28.180047]        [] vfs_write+0xa9/0x1c0
[ 28.180630]        [] SyS_write+0x5c/0xd0
[ 28.181168]        [] system_call_fastpath+0x12/0x76
[ 28.181740] -> #0 (&isec->lock){+.+.+.}:
[ 28.182808]        [] __lock_acquire+0x1b31/0x1e40
[ 28.183347]        [] lock_acquire+0xc7/0x2a0
[ 28.183897]        [] mutex_lock_nested+0x7d/0x460
[ 28.184427]        [] inode_doinit_with_dentry+0xc5/0x6a0
[ 28.184944]        [] selinux_d_instantiate+0x1c/0x20
[ 28.185470]        [] security_d_instantiate+0x1b/0x30
[ 28.185980]        [] d_instantiate+0x54/0x80
[ 28.186495]        [] __shmem_file_setup+0xdc/0x250
[ 28.186990]        [] shmem_zero_setup+0x28/0x70
[ 28.187500]        [] mmap_region+0x66c/0x680
[ 28.188006]        [] do_mmap_pgoff+0x323/0x410
[ 28.188500]        [] vm_mmap_pgoff+0xb0/0xf0
[ 28.189005]        [] SyS_mmap_pgoff+0x116/0x2b0
[ 28.189490]        [] SyS_mmap+0x1b/0x30
[ 28.189975]        [] system_call_fastpath+0x12/0x76
[ 28.190474] other info that might help us debug this:
[ 28.191901] Chain exists of:
  &isec->lock --> &xfs_dir_ilock_class --> &mm->mmap_sem
[ 28.193327]  Possible unsafe locking scenario:
[ 28.194297]        CPU0                    CPU1
[ 28.194774]        ----                    ----
[ 28.195254]   lock(&mm->mmap_sem);
[ 28.195709]                                lock(&xfs_dir_ilock_class);
[ 28.196174]                                lock(&mm->mmap_sem);
[ 28.196654]   lock(&isec->lock);
[ 28.197108]  *** DEADLOCK ***
[ 28.198451] 1 lock held by sshd/1764:
[ 28.198900]  #0: (&mm->mmap_sem){++++++}, at: [] vm_mmap_pgoff+0x8f/0xf0
[ 28.199370] stack backtrace:
[ 28.200276] CPU: 2 PID: 1764 Comm: sshd Tainted: G W 4.1.0-0.rc7.git0.1.fc23.x86_64+debug #1
[ 28.200753] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/20/2014
[ 28.201246]  0000000000000000 00000000eda89a94 ffff8800a86a39c8 ffffffff81896375
[ 28.201771]  0000000000000000 ffffffff82a910d0 ffff8800a86a3a18 ffffffff8110fbd6
[ 28.202275]  0000000000000002 ffff8800a86a3a78 0000000000000001 ffff8800a897b008
[ 28.203099] Call Trace:
[ 28.204237]  [] dump_stack+0x4c/0x65
[ 28.205362]  [] print_circular_bug+0x206/0x280
[ 28.206502]  [] __lock_acquire+0x1b31/0x1e40
[ 28.207650]  [] lock_acquire+0xc7/0x2a0
[ 28.208758]  [] ? inode_doinit_with_dentry+0xc5/0x6a0
[ 28.209902]  [] mutex_lock_nested+0x7d/0x460
[ 28.211023]  [] ? inode_doinit_with_dentry+0xc5/0x6a0
[ 28.212162]  [] ? inode_doinit_with_dentry+0xc5/0x6a0
[ 28.213283]  [] ? native_sched_clock+0x2d/0xa0
[ 28.214403]  [] ? sched_clock+0x9/0x10
[ 28.215514]  [] inode_doinit_with_dentry+0xc5/0x6a0
[ 28.216656]  [] selinux_d_instantiate+0x1c/0x20
[ 28.217776]  [] security_d_instantiate+0x1b/0x30
[ 28.218902]  [] d_instantiate+0x54/0x80
[ 28.219992]  [] __shmem_file_setup+0xdc/0x250
[ 28.221112]  [] shmem_zero_setup+0x28/0x70
[ 28.222234]  [] mmap_region+0x66c/0x680
[ 28.223362]  [] do_mmap_pgoff+0x323/0x410
[ 28.224493]  [] ? vm_mmap_pgoff+0x8f/0xf0
[ 28.225643]  [] vm_mmap_pgoff+0xb0/0xf0
[ 28.226771]  [] SyS_mmap_pgoff+0x116/0x2b0
[ 28.227900]  [] ? SyS_fcntl+0x5de/0x760
[ 28.229042]  [] SyS_mmap+0x1b/0x30
[ 28.230156]  [] system_call_fastpath+0x12/0x76
[ 46.520367] Adjusting tsc more than 11% (5419175 vs 7179037)

Any ideas?

Best regards,

Morten