Date: Mon, 24 Dec 2007 16:58:42 +0530
From: Dhaval Giani
To: lkml
Cc: Ingo Molnar, Balbir Singh, Sudhir Kumar
Subject: Circular locking dependency
Message-ID: <20071224112842.GA8347@linux.vnet.ibm.com>

Hi,

Just hit this on sched-devel. (Not sure how to reproduce it yet; I can't
try right now. I believe I can hit it on mainline as well, since there is
nothing scheduler-specific about it.)

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.24-rc6 #1
-------------------------------------------------------
bash/17982 is trying to acquire lock:
 (&journal->j_list_lock){--..}, at: [] __journal_try_to_free_buffer+0x2a/0x8a

but task is already holding lock:
 (inode_lock){--..}, at: [] drop_pagecache_sb+0x12/0x74

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (inode_lock){--..}:
       [] check_prev_add+0xb8/0x1ad
       [] check_prevs_add+0x5d/0xcf
       [] validate_chain+0x286/0x300
       [] __lock_acquire+0x67f/0x6ff
       [] lock_acquire+0x71/0x8b
       [] _spin_lock+0x2b/0x38
       [] __mark_inode_dirty+0xd0/0x15b
       [] __set_page_dirty+0x10c/0x11b
       [] mark_buffer_dirty+0x9a/0xa1
       [] __journal_temp_unlink_buffer+0xbf/0xc3
       [] __journal_unfile_buffer+0xb/0x15
       [] __journal_refile_buffer+0x3c/0x86
       [] journal_commit_transaction+0x89c/0xa05
       [] kjournald+0xab/0x1ff
       [] kthread+0x37/0x59
       [] kernel_thread_helper+0x7/0x10
       [] 0xffffffff

-> #0 (&journal->j_list_lock){--..}:
       [] check_prev_add+0x2e/0x1ad
       [] check_prevs_add+0x5d/0xcf
       [] validate_chain+0x286/0x300
       [] __lock_acquire+0x67f/0x6ff
       [] lock_acquire+0x71/0x8b
       [] _spin_lock+0x2b/0x38
       [] __journal_try_to_free_buffer+0x2a/0x8a
       [] journal_try_to_free_buffers+0x61/0x9e
       [] ext3_releasepage+0x68/0x74
       [] try_to_release_page+0x33/0x47
       [] invalidate_complete_page+0x1e/0x35
       [] __invalidate_mapping_pages+0x6b/0xc2
       [] drop_pagecache_sb+0x4c/0x74
       [] drop_pagecache+0x4a/0x78
       [] drop_caches_sysctl_handler+0x36/0x4e
       [] proc_sys_write+0x6b/0x85
       [] vfs_write+0x8c/0x10b
       [] sys_write+0x3d/0x61
       [] sysenter_past_esp+0x5f/0xa5
       [] 0xffffffff

other info that might help us debug this:

2 locks held by bash/17982:
 #0:  (&type->s_umount_key#15){----}, at: [] drop_pagecache+0x3d/0x78
 #1:  (inode_lock){--..}, at: [] drop_pagecache_sb+0x12/0x74

stack backtrace:
Pid: 17982, comm: bash Not tainted 2.6.24-rc6 #1
 [] show_trace_log_lvl+0x19/0x2e
 [] show_trace+0x12/0x14
 [] dump_stack+0x6c/0x72
 [] print_circular_bug_tail+0x5f/0x68
 [] check_prev_add+0x2e/0x1ad
 [] check_prevs_add+0x5d/0xcf
 [] validate_chain+0x286/0x300
 [] __lock_acquire+0x67f/0x6ff
 [] lock_acquire+0x71/0x8b
 [] _spin_lock+0x2b/0x38
 [] __journal_try_to_free_buffer+0x2a/0x8a
 [] journal_try_to_free_buffers+0x61/0x9e
 [] ext3_releasepage+0x68/0x74
 [] try_to_release_page+0x33/0x47
 [] invalidate_complete_page+0x1e/0x35
 [] __invalidate_mapping_pages+0x6b/0xc2
 [] drop_pagecache_sb+0x4c/0x74
 [] drop_pagecache+0x4a/0x78
 [] drop_caches_sysctl_handler+0x36/0x4e
 [] proc_sys_write+0x6b/0x85
 [] vfs_write+0x8c/0x10b
 [] sys_write+0x3d/0x61
 [] sysenter_past_esp+0x5f/0xa5
=======================
[root@llm11 kernel]#
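If I am reading the two chains right, this is a plain AB-BA inversion:
kjournald takes j_list_lock and then inode_lock (via __mark_inode_dirty),
while the drop_caches path takes inode_lock and then j_list_lock (via
__journal_try_to_free_buffer). A minimal userspace model of just that
ordering follows; this is illustrative pthreads code with plain mutexes
named after the kernel locks, not the actual 2.6.24 code:

/*
 * Model of the inversion in the report above. The two threads mimic
 * the two lockdep chains; if their critical sections ever interleave,
 * the program hangs, which is exactly the deadlock lockdep warns about.
 * Build with: cc -pthread abba.c
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t inode_lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t j_list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Chain #1: journal commit -> __journal_refile_buffer ->
 * mark_buffer_dirty -> __mark_inode_dirty */
static void *kjournald_path(void *arg)
{
	(void)arg;
	for (int i = 0; i < 1000000; i++) {
		pthread_mutex_lock(&j_list_lock);   /* journal commit      */
		pthread_mutex_lock(&inode_lock);    /* __mark_inode_dirty  */
		pthread_mutex_unlock(&inode_lock);
		pthread_mutex_unlock(&j_list_lock);
	}
	return NULL;
}

/* Chain #0: drop_pagecache_sb -> ... -> __journal_try_to_free_buffer */
static void *drop_caches_path(void *arg)
{
	(void)arg;
	for (int i = 0; i < 1000000; i++) {
		pthread_mutex_lock(&inode_lock);    /* drop_pagecache_sb            */
		pthread_mutex_lock(&j_list_lock);   /* __journal_try_to_free_buffer */
		pthread_mutex_unlock(&j_list_lock);
		pthread_mutex_unlock(&inode_lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	pthread_create(&a, NULL, kjournald_path, NULL);
	pthread_create(&b, NULL, drop_caches_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	puts("no collision this run; rerun and it may well hang");
	return 0;
}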
The last thing I did was:

    echo 1 > /proc/sys/vm/drop_caches

(Not sure whom to cc; hopefully others will know better. Also, no time to
debug further, sorry!)

--
regards,
Dhaval
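For reference, the same trigger as a standalone program. This is a
sketch equivalent to the echo above, not something from the original
report; /proc/sys/vm/drop_caches accepts 1 to drop the clean page
cache (the path in the trace), 2 to drop dentry and inode caches, 3
for both, and it needs root:

/* Equivalent of `echo 1 > /proc/sys/vm/drop_caches`; run as root.
 * 1 = clean page cache, 2 = dentries and inodes, 3 = both. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/vm/drop_caches", "w");
	if (!f) {
		perror("/proc/sys/vm/drop_caches");
		return 1;
	}
	fputs("1\n", f);
	return fclose(f) ? 1 : 0;
}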