Message-ID: <55B00602.1060005@tycho.nsa.gov>
Date: Wed, 22 Jul 2015 17:07:14 -0400
From: Stephen Smalley
Organization: National Security Agency
To: Morten Stevens
CC: Daniel Wagner, Hugh Dickins, Linus Torvalds, Prarit Bhargava,
 Dave Chinner, Eric Paris, Eric Sandeen, Andrew Morton,
 linux-mm@kvack.org, Linux Kernel
Subject: Re: mm: shmem_zero_setup skip security check and lockdep conflict with XFS
References: <557E6C0C.3050802@monom.org>

On 07/22/2015 08:46 AM, Morten Stevens wrote:
> 2015-06-17 13:45 GMT+02:00 Morten Stevens:
>> 2015-06-15 8:09 GMT+02:00 Daniel Wagner:
>>> On 06/14/2015 06:48 PM, Hugh Dickins wrote:
>>>> It appears that, at some point last year, XFS made directory handling
>>>> changes which bring it into lockdep conflict with shmem_zero_setup():
>>>> it is surprising that mmap() can clone an inode while holding mmap_sem,
>>>> but that has been so for many years.
>>>>
>>>> Since those few lockdep traces that I've seen all implicated selinux,
>>>> I'm hoping that we can use the __shmem_file_setup(,,,S_PRIVATE) which
>>>> v3.13's commit c7277090927a ("security: shmem: implement kernel private
>>>> shmem inodes") introduced to avoid LSM checks on kernel-internal inodes:
>>>> the mmap("/dev/zero") cloned inode is indeed a kernel-internal detail.
>>>>
>>>> This also covers the !CONFIG_SHMEM use of ramfs to support /dev/zero
>>>> (and MAP_SHARED|MAP_ANONYMOUS). I thought there were also drivers
>>>> which cloned inode in mmap(), but if so, I cannot locate them now.
>>>>
>>>> Reported-and-tested-by: Prarit Bhargava
>>>> Reported-by: Daniel Wagner
>>>
>>> Reported-and-tested-by: Daniel Wagner
>>>
>>> Sorry for the long delay. It took me a while to figure out my original
>>> setup. I could verify that this patch made the lockdep message go away
>>> on 4.0-rc6 and also on 4.1-rc8.
>>
>> Yes, it's also fixed for me after applying this patch to 4.1-rc8.
>
> Here is another deadlock with the latest 4.2.0-rc3:
>
> Jul 22 14:36:40 fc23 kernel: ======================================================
> Jul 22 14:36:40 fc23 kernel: [ INFO: possible circular locking dependency detected ]
> Jul 22 14:36:40 fc23 kernel: 4.2.0-0.rc3.git0.1.fc24.x86_64+debug #1 Tainted: G        W
> Jul 22 14:36:40 fc23 kernel: -------------------------------------------------------
> Jul 22 14:36:40 fc23 kernel: httpd/1597 is trying to acquire lock:
> Jul 22 14:36:40 fc23 kernel:  (&ids->rwsem){+++++.}, at: [] shm_close+0x34/0x130
> Jul 22 14:36:40 fc23 kernel: but task is already holding lock:
> Jul 22 14:36:40 fc23 kernel:  (&mm->mmap_sem){++++++}, at: [] SyS_shmdt+0x4b/0x180
> Jul 22 14:36:40 fc23 kernel: which lock already depends on the new lock.
> Jul 22 14:36:40 fc23 kernel: the existing dependency chain (in reverse order) is:
> Jul 22 14:36:40 fc23 kernel: -> #3 (&mm->mmap_sem){++++++}:
> Jul 22 14:36:40 fc23 kernel:        [] lock_acquire+0xc7/0x270
> Jul 22 14:36:40 fc23 kernel:        [] __might_fault+0x7a/0xa0
> Jul 22 14:36:40 fc23 kernel:        [] filldir+0x9e/0x130
> Jul 22 14:36:40 fc23 kernel:        [] xfs_dir2_block_getdents.isra.12+0x198/0x1c0 [xfs]
> Jul 22 14:36:40 fc23 kernel:        [] xfs_readdir+0x1b4/0x330 [xfs]
> Jul 22 14:36:40 fc23 kernel:        [] xfs_file_readdir+0x2b/0x30 [xfs]
> Jul 22 14:36:40 fc23 kernel:        [] iterate_dir+0x97/0x130
> Jul 22 14:36:40 fc23 kernel:        [] SyS_getdents+0x91/0x120
> Jul 22 14:36:40 fc23 kernel:        [] entry_SYSCALL_64_fastpath+0x12/0x76
> Jul 22 14:36:40 fc23 kernel: -> #2 (&xfs_dir_ilock_class){++++.+}:
> Jul 22 14:36:40 fc23 kernel:        [] lock_acquire+0xc7/0x270
> Jul 22 14:36:40 fc23 kernel:        [] down_read_nested+0x57/0xa0
> Jul 22 14:36:40 fc23 kernel:        [] xfs_ilock+0x167/0x350 [xfs]
> Jul 22 14:36:40 fc23 kernel:        [] xfs_ilock_attr_map_shared+0x38/0x50 [xfs]
> Jul 22 14:36:40 fc23 kernel:        [] xfs_attr_get+0xbd/0x190 [xfs]
> Jul 22 14:36:40 fc23 kernel:        [] xfs_xattr_get+0x3d/0x70 [xfs]
> Jul 22 14:36:40 fc23 kernel:        [] generic_getxattr+0x4f/0x70
> Jul 22 14:36:40 fc23 kernel:        [] inode_doinit_with_dentry+0x162/0x670
> Jul 22 14:36:40 fc23 kernel:        [] sb_finish_set_opts+0xd9/0x230
> Jul 22 14:36:40 fc23 kernel:        [] selinux_set_mnt_opts+0x35c/0x660
> Jul 22 14:36:40 fc23 kernel:        [] superblock_doinit+0x77/0xf0
> Jul 22 14:36:40 fc23 kernel:        [] delayed_superblock_init+0x10/0x20
> Jul 22 14:36:40 fc23 kernel:        [] iterate_supers+0xb3/0x110
> Jul 22 14:36:40 fc23 kernel:        [] selinux_complete_init+0x2f/0x40
> Jul 22 14:36:40 fc23 kernel:        [] security_load_policy+0x103/0x600
> Jul 22 14:36:40 fc23 kernel:        [] sel_write_load+0xc1/0x750
> Jul 22 14:36:40 fc23 kernel:        [] __vfs_write+0x37/0x100
> Jul 22 14:36:40 fc23 kernel:        [] vfs_write+0xa9/0x1a0
> Jul 22 14:36:40 fc23 kernel:        [] SyS_write+0x58/0xd0
> Jul 22 14:36:40 fc23 kernel:        [] entry_SYSCALL_64_fastpath+0x12/0x76
> Jul 22 14:36:40 fc23 kernel: -> #1 (&isec->lock){+.+.+.}:
> Jul 22 14:36:40 fc23 kernel:        [] lock_acquire+0xc7/0x270
> Jul 22 14:36:40 fc23 kernel:        [] mutex_lock_nested+0x7f/0x3e0
> Jul 22 14:36:40 fc23 kernel:        [] inode_doinit_with_dentry+0xb9/0x670
> Jul 22 14:36:40 fc23 kernel:        [] selinux_d_instantiate+0x1c/0x20
> Jul 22 14:36:40 fc23 kernel:        [] security_d_instantiate+0x36/0x60
> Jul 22 14:36:40 fc23 kernel:        [] d_instantiate+0x54/0x70
> Jul 22 14:36:40 fc23 kernel:        [] __shmem_file_setup+0xdc/0x240
> Jul 22 14:36:40 fc23 kernel:        [] shmem_file_setup+0x10/0x20
> Jul 22 14:36:40 fc23 kernel:        [] newseg+0x290/0x3a0
> Jul 22 14:36:40 fc23 kernel:        [] ipcget+0x208/0x2d0
> Jul 22 14:36:40 fc23 kernel:        [] SyS_shmget+0x54/0x70
> Jul 22 14:36:40 fc23 kernel:        [] entry_SYSCALL_64_fastpath+0x12/0x76
> Jul 22 14:36:40 fc23 kernel: -> #0 (&ids->rwsem){+++++.}:
> Jul 22 14:36:40 fc23 kernel:        [] __lock_acquire+0x1a78/0x1d00
> Jul 22 14:36:40 fc23 kernel:        [] lock_acquire+0xc7/0x270
> Jul 22 14:36:40 fc23 kernel:        [] down_write+0x5a/0xc0
> Jul 22 14:36:40 fc23 kernel:        [] shm_close+0x34/0x130
> Jul 22 14:36:40 fc23 kernel:        [] remove_vma+0x45/0x80
> Jul 22 14:36:40 fc23 kernel:        [] do_munmap+0x2b0/0x460
> Jul 22 14:36:40 fc23 kernel:        [] SyS_shmdt+0xb5/0x180
> Jul 22 14:36:40 fc23 kernel:        [] entry_SYSCALL_64_fastpath+0x12/0x76
> Jul 22 14:36:40 fc23 kernel: other info that might help us debug this:
> Jul 22 14:36:40 fc23 kernel: Chain exists of:
> Jul 22 14:36:40 fc23 kernel:   &ids->rwsem --> &xfs_dir_ilock_class --> &mm->mmap_sem
> Jul 22 14:36:40 fc23 kernel: Possible unsafe locking scenario:
> Jul 22 14:36:40 fc23 kernel:        CPU0                    CPU1
> Jul 22 14:36:40 fc23 kernel:        ----                    ----
> Jul 22 14:36:40 fc23 kernel:   lock(&mm->mmap_sem);
> Jul 22 14:36:40 fc23 kernel:                                lock(&xfs_dir_ilock_class);
> Jul 22 14:36:40 fc23 kernel:                                lock(&mm->mmap_sem);
> Jul 22 14:36:40 fc23 kernel:   lock(&ids->rwsem);
> Jul 22 14:36:40 fc23 kernel:  *** DEADLOCK ***
> Jul 22 14:36:40 fc23 kernel: 1 lock held by httpd/1597:
> Jul 22 14:36:40 fc23 kernel:  #0:  (&mm->mmap_sem){++++++}, at: [] SyS_shmdt+0x4b/0x180
> Jul 22 14:36:40 fc23 kernel: stack backtrace:
> Jul 22 14:36:40 fc23 kernel: CPU: 7 PID: 1597 Comm: httpd Tainted: G        W 4.2.0-0.rc3.git0.1.fc24.x86_64+debug #1
> Jul 22 14:36:40 fc23 kernel: Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/20/2014
> Jul 22 14:36:40 fc23 kernel:  0000000000000000 000000006cb6fe9d ffff88019ff07c58 ffffffff81868175
> Jul 22 14:36:40 fc23 kernel:  0000000000000000 ffffffff82aea390 ffff88019ff07ca8 ffffffff81105903
> Jul 22 14:36:40 fc23 kernel:  ffff88019ff07c78 ffff88019ff07d08 0000000000000001 ffff8800b75108f0
> Jul 22 14:36:40 fc23 kernel: Call Trace:
> Jul 22 14:36:40 fc23 kernel:  [] dump_stack+0x4c/0x65
> Jul 22 14:36:40 fc23 kernel:  [] print_circular_bug+0x1e3/0x250
> Jul 22 14:36:40 fc23 kernel:  [] __lock_acquire+0x1a78/0x1d00
> Jul 22 14:36:40 fc23 kernel:  [] ? unlink_file_vma+0x33/0x60
> Jul 22 14:36:40 fc23 kernel:  [] lock_acquire+0xc7/0x270
> Jul 22 14:36:40 fc23 kernel:  [] ? shm_close+0x34/0x130
> Jul 22 14:36:40 fc23 kernel:  [] down_write+0x5a/0xc0
> Jul 22 14:36:40 fc23 kernel:  [] ? shm_close+0x34/0x130
> Jul 22 14:36:40 fc23 kernel:  [] shm_close+0x34/0x130
> Jul 22 14:36:40 fc23 kernel:  [] remove_vma+0x45/0x80
> Jul 22 14:36:40 fc23 kernel:  [] do_munmap+0x2b0/0x460
> Jul 22 14:36:40 fc23 kernel:  [] ? SyS_shmdt+0x4b/0x180
> Jul 22 14:36:40 fc23 kernel:  [] SyS_shmdt+0xb5/0x180
> Jul 22 14:36:40 fc23 kernel:  [] entry_SYSCALL_64_fastpath+0x12/0x76
>
> Best regards,
>
> Morten

I would think that we could flip shm over to using
shmem_kernel_file_setup(), but you might still encounter the same
warning on a hugetlb segment, unless we also start marking those as
private (a rough sketch of what I mean is below).

I am a little concerned that we are playing whack-a-mole. What is the
original change that caused these deadlock scenarios to arise now? (I
see the original patch description mentions XFS directory handling
changes, but I am not clear on the details, nor on how this is relevant
to shmem inodes.) And are these real deadlock scenarios or merely false
positives that you are trying to eliminate?
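To make that concrete, here is a rough, untested sketch of the
non-hugetlb path of newseg() in ipc/shm.c, assuming it still sets up
name/size/acctflag the way 4.2-rc does.  shmem_kernel_file_setup()
takes the same arguments as shmem_file_setup() and came in with the
same c7277090927a commit quoted above:

	} else {
		/* ... size checks and acctflag setup unchanged ... */

		/*
		 * was: file = shmem_file_setup(name, size, acctflag);
		 *
		 * shmem_kernel_file_setup() passes S_PRIVATE down to
		 * __shmem_file_setup(), so security_d_instantiate()
		 * ignores the new inode and the &ids->rwsem -->
		 * &isec->lock dependency in chain #1 above is never
		 * recorded.
		 */
		file = shmem_kernel_file_setup(name, size, acctflag);
	}

The SHM_HUGETLB branch would still go through hugetlb_file_setup(),
whose inodes are not marked S_PRIVATE; that is the hugetlb caveat
above.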
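For anyone joining the thread here: as I read it, the earlier
shmem_zero_setup() fix that Hugh describes above boils down to roughly
the following in mm/shmem.c (paraphrased from memory, not the verbatim
patch; __shmem_file_setup() is static to that file):

	int shmem_zero_setup(struct vm_area_struct *vma)
	{
		struct file *file;
		loff_t size = vma->vm_end - vma->vm_start;

		/*
		 * Cloning a new file under mmap_sem is what collides
		 * with XFS directory locking; S_PRIVATE makes the LSM
		 * skip this kernel-internal inode, as it already does
		 * for shmem_kernel_file_setup() inodes.
		 */
		file = __shmem_file_setup("dev/zero", size, vma->vm_flags,
					  S_PRIVATE);
		if (IS_ERR(file))
			return PTR_ERR(file);

		if (vma->vm_file)
			fput(vma->vm_file);
		vma->vm_file = file;
		vma->vm_ops = &shmem_vm_ops;
		return 0;
	}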