Date: Fri, 18 Sep 2009 17:15:50 -0400
From: Joe Korty
To: Al Viro, Steve French, Jeff Layton
Cc: LKML, Trond Myklebust, Ingo Molnar
Subject: circular locking dependency detected panic in filldir when CONFIG_PROVE_LOCKING=y
Message-ID: <20090918211550.GA8258@tsunami.ccur.com>

I experienced a might_fault panic from NFS's use of filldir in a 2.6.31 kernel
compiled with CONFIG_PROVE_LOCKING=y.

Looking at filldir, I see it accesses user space with __put_user()s (which are
inatomic) and with one copy_to_user() (which is not). It is that single
copy_to_user() which is causing the might_fault panic.
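For reference, this is roughly the user-copy portion of filldir() from
fs/readdir.c. It is abridged and paraphrased from memory, so treat it as a
sketch of the 2.6.31 code rather than an exact quote:

	static int filldir(void *__buf, const char *name, int namlen,
			   loff_t offset, u64 ino, unsigned int d_type)
	{
		struct linux_dirent __user *dirent;
		struct getdents_callback *buf = __buf;
		unsigned long d_ino = ino;
		int reclen = ALIGN(NAME_OFFSET(dirent) + namlen + 2, sizeof(long));

		/* ... buffer-space and d_ino overflow checks elided ... */

		dirent = buf->previous;
		if (dirent) {
			/* __put_user(): no might_fault() annotation */
			if (__put_user(offset, &dirent->d_off))
				goto efault;
		}
		dirent = buf->current_dir;
		if (__put_user(d_ino, &dirent->d_ino))
			goto efault;
		if (__put_user(reclen, &dirent->d_reclen))
			goto efault;
		/* copy_to_user(): the one access that runs might_fault() */
		if (copy_to_user(dirent->d_name, name, namlen))
			goto efault;
		if (__put_user(0, dirent->d_name + namlen))
			goto efault;
		if (__put_user(d_type, (char __user *)dirent + reclen - 1))
			goto efault;

		/* ... bookkeeping for the next entry elided ... */
		return 0;
	efault:
		buf->error = -EFAULT;
		return -EFAULT;
	}

So every store except the name copy goes through __put_user(), and it is only
the copy_to_user() of d_name that carries the might_fault() annotation lockdep
trips over, as the trace below shows.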
It doesn't make any sense to be mixing inatomic and non-inatomic services in
filldir: either all of them should be the inatomic version, or none should be.
The might_fault condition being reported looks real to me, so I suspect the
wrong answer is converting everything to the inatomic version, since that
merely suppresses the circular-dependency check while leaving the circular
dependency itself in place.

Regards,
Joe

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.31-debug #1
-------------------------------------------------------
passwd/28023 is trying to acquire lock:
 (&sb->s_type->i_mutex_key#5){+.+.+.}, at: [] nfs_invalidate_mapping+0x24/0x55

but task is already holding lock:
 (&mm->mmap_sem){++++++}, at: [] sys_mmap2+0x74/0xa7

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&mm->mmap_sem){++++++}:
       [] check_prev_add+0x2dc/0x46e
       [] check_prevs_add+0x62/0xba
       [] validate_chain+0x33d/0x3eb
       [] __lock_acquire+0x508/0x57e
       [] lock_acquire+0xb2/0xcf
       [] might_fault+0x60/0x80
       [] copy_to_user+0x2d/0x41
       [] filldir+0x8d/0xcd
       [] nfs_do_filldir+0xd2/0x1be
       [] nfs_readdir+0x6bf/0x74f
       [] vfs_readdir+0x5b/0x87
       [] sys_getdents+0x64/0xa4
       [] sysenter_do_call+0x1b/0x48
       [] 0xffffffff

-> #0 (&sb->s_type->i_mutex_key#5){+.+.+.}:
       [] check_prev_add+0x78/0x46e
       [] check_prevs_add+0x62/0xba
       [] validate_chain+0x33d/0x3eb
       [] __lock_acquire+0x508/0x57e
       [] lock_acquire+0xb2/0xcf
       [] mutex_lock_nested+0x53/0x2e4
       [] nfs_invalidate_mapping+0x24/0x55
       [] nfs_revalidate_mapping+0x59/0x5e
       [] nfs_file_mmap+0x55/0x5d
       [] mmap_region+0x1dc/0x373
       [] do_mmap_pgoff+0x249/0x2ab
       [] sys_mmap2+0x8a/0xa7
       [] sysenter_do_call+0x1b/0x48
       [] 0xffffffff

other info that might help us debug this:

1 lock held by passwd/28023:
 #0:  (&mm->mmap_sem){++++++}, at: [] sys_mmap2+0x74/0xa7

stack backtrace:
Pid: 28023, comm: passwd Not tainted 2.6.31-debug #1
Call Trace:
 [] print_circular_bug_tail+0xa4/0xaf
 [] check_prev_add+0x78/0x46e
 [] ? __lock_acquire+0x508/0x57e
 [] check_prevs_add+0x62/0xba
 [] validate_chain+0x33d/0x3eb
 [] __lock_acquire+0x508/0x57e
 [] ? ____cache_alloc_node+0xf4/0x134
 [] lock_acquire+0xb2/0xcf
 [] ? nfs_invalidate_mapping+0x24/0x55
 [] mutex_lock_nested+0x53/0x2e4
 [] ? nfs_invalidate_mapping+0x24/0x55
 [] ? nfs_invalidate_mapping+0x24/0x55
 [] nfs_invalidate_mapping+0x24/0x55
 [] nfs_revalidate_mapping+0x59/0x5e
 [] nfs_file_mmap+0x55/0x5d
 [] mmap_region+0x1dc/0x373
 [] do_mmap_pgoff+0x249/0x2ab
 [] sys_mmap2+0x8a/0xa7
 [] sysenter_do_call+0x1b/0x48