Date: Tue, 7 Aug 2007 11:56:00 +0530
From: Dhaval Giani
To: menage@google.com
Cc: akpm@linux-foundation.com, Srivatsa Vaddagiri, ckrm-tech@lists.sourceforge.net, linux-kernel@vger.kernel.org
Subject: Circular Locking Dependency Chain detected in containers code
Message-ID: <20070807062600.GH31148@linux.vnet.ibm.com>

Hi Paul,

I have hit a circular locking dependency while doing an rmdir on a
directory inside the containers filesystem. I believe it is benign,
since no one should be able to rmdir a container directory while the
container filesystem is being mounted. To reproduce it, simply rmdir a
directory inside a mounted container hierarchy.

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.23-rc1-mm2-container #1
-------------------------------------------------------
rmdir/4321 is trying to acquire lock:
 (container_mutex){--..}, at: [] mutex_lock+0x21/0x24

but task is already holding lock:
 (&inode->i_mutex){--..}, at: [] mutex_lock+0x21/0x24

which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:

-> #1 (&inode->i_mutex){--..}:
       [] check_prev_add+0xae/0x18f
       [] check_prevs_add+0x5a/0xc5
       [] validate_chain+0x25e/0x2cd
       [] __lock_acquire+0x629/0x691
       [] lock_acquire+0x61/0x7e
       [] __mutex_lock_slowpath+0xc8/0x230
       [] mutex_lock+0x21/0x24
       [] container_get_sb+0x22c/0x283
       [] vfs_kern_mount+0x3a/0x73
       [] do_new_mount+0x7e/0xdc
       [] do_mount+0x178/0x191
       [] sys_mount+0x66/0x9d
       [] sysenter_past_esp+0x5f/0x99
       [] 0xffffffff

-> #0 (container_mutex){--..}:
       [] check_prev_add+0x2b/0x18f
       [] check_prevs_add+0x5a/0xc5
       [] validate_chain+0x25e/0x2cd
       [] __lock_acquire+0x629/0x691
       [] lock_acquire+0x61/0x7e
       [] __mutex_lock_slowpath+0xc8/0x230
       [] mutex_lock+0x21/0x24
       [] container_rmdir+0x15/0x163
       [] vfs_rmdir+0x59/0x8f
       [] do_rmdir+0x8c/0xbe
       [] sys_rmdir+0x10/0x12
       [] sysenter_past_esp+0x5f/0x99
       [] 0xffffffff

other info that might help us debug this:

2 locks held by rmdir/4321:
 #0:  (&inode->i_mutex/1){--..}, at: [] do_rmdir+0x6c/0xbe
 #1:  (&inode->i_mutex){--..}, at: [] mutex_lock+0x21/0x24

stack backtrace:
 [] show_trace_log_lvl+0x12/0x22
 [] show_trace+0xd/0xf
 [] dump_stack+0x14/0x16
 [] print_circular_bug_tail+0x5b/0x64
 [] check_prev_add+0x2b/0x18f
 [] check_prevs_add+0x5a/0xc5
 [] validate_chain+0x25e/0x2cd
 [] __lock_acquire+0x629/0x691
 [] lock_acquire+0x61/0x7e
 [] __mutex_lock_slowpath+0xc8/0x230
 [] mutex_lock+0x21/0x24
 [] container_rmdir+0x15/0x163
 [] vfs_rmdir+0x59/0x8f
 [] do_rmdir+0x8c/0xbe
 [] sys_rmdir+0x10/0x12
 [] sysenter_past_esp+0x5f/0x99
=======================

--
regards,
Dhaval

I would like to change the world but they don't give me the source code!