Message-ID: <45349BFB.8060201@steudten.org>
Date: Tue, 17 Oct 2006 11:01:47 +0200
From: "alpha @ steudten Engineering"
Organization: Steudten Engineering
To: LKML
Subject: [ INFO: possible circular locking dependency detected ] 2.6.18-1.2200_selfsmp #1

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.18-1.2200_selfsmp #1
-------------------------------------------------------
init/1 is trying to acquire lock:
 (&bdev_part_lock_key){--..}, at: [] bd_claim_by_disk+0x5d/0x166

but task is already holding lock:
 (&new->reconfig_mutex){--..}, at: [] autorun_devices+0x128/0x2b4

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&new->reconfig_mutex){--..}:
       [] add_lock_to_list+0x5e/0x79
       [] __lock_acquire+0x91c/0x9fd
       [] md_open+0x22/0x53
       [] __lock_acquire+0x9b6/0x9fd
       [] lock_acquire+0x6d/0x8a
       [] md_open+0x22/0x53
       [] __mutex_lock_interruptible_slowpath+0xde/0x286
       [] md_open+0x22/0x53
       [] __mutex_lock_slowpath+0x22a/0x232
       [] md_open+0x22/0x53
       [] do_open+0x85/0x2e2
       [] blkdev_open+0x0/0x42
       [] blkdev_open+0x1a/0x42
       [] __dentry_open+0xc7/0x1ab
       [] nameidata_to_filp+0x24/0x33
       [] do_filp_open+0x32/0x39
       [] _spin_unlock+0x14/0x1c
       [] get_unused_fd+0xb9/0xc3
       [] do_sys_open+0x42/0xbe
       [] sys_open+0x1c/0x1e
       [] syscall_call+0x7/0xb
       [] 0xffffffff

-> #1 (&bdev->bd_mutex){--..}:
       [] add_lock_to_list+0x5e/0x79
       [] __lock_acquire+0x91c/0x9fd
       [] do_open+0x58/0x2e2
       [] __mutex_unlock_slowpath+0x10a/0x113
       [] mark_held_locks+0x46/0x62
       [] lock_acquire+0x6d/0x8a
       [] do_open+0x58/0x2e2
       [] __mutex_lock_slowpath+0xde/0x232
       [] do_open+0x58/0x2e2
       [] kobj_lookup+0x10d/0x168
       [] do_open+0x58/0x2e2
       [] blkdev_get+0x53/0x5e
       [] do_open+0xfd/0x2e2
       [] blkdev_get+0x53/0x5e
       [] open_by_devnum+0x2d/0x38
       [] md_import_device+0x229/0x247
       [] task_has_capability+0x56/0x5e
       [] printk+0x1f/0xaf
       [] md_ioctl+0xbe/0x13a4
       [] __kernel_text_address+0x18/0x23
       [] dump_trace+0x87/0x91
       [] __kernel_text_address+0x18/0x23
       [] dump_trace+0x87/0x91
       [] __kernel_text_address+0x18/0x23
       [] dump_trace+0x87/0x91
       [] find_usage_backwards+0x64/0x88
       [] find_usage_backwards+0x64/0x88
       [] check_usage_backwards+0x19/0x41
       [] md_open+0x22/0x53
       [] mark_lock+0x324/0x3a0
       [] blkdev_driver_ioctl+0x4e/0x5e
       [] blkdev_ioctl+0x64c/0x69b
       [] trace_hardirqs_on+0x123/0x14d
       [] avc_has_perm+0x4e/0x58
       [] inode_has_perm+0x5b/0x63
       [] __lock_acquire+0x9b6/0x9fd
       [] set_close_on_exec+0x24/0x41
       [] fd_install+0x24/0x50
       [] file_has_perm+0x8c/0x94
       [] block_ioctl+0x18/0x1b
       [] block_ioctl+0x0/0x1b
       [] do_ioctl+0x1f/0x62
       [] vfs_ioctl+0x24a/0x25c
       [] sys_ioctl+0x4c/0x66
       [] syscall_call+0x7/0xb
       [] 0xffffffff

-> #0 (&bdev_part_lock_key){--..}:
       [] __lock_acquire+0x82a/0x9fd
       [] bd_claim_by_disk+0x5d/0x166
       [] cache_alloc_debugcheck_after+0xc4/0x13a
       [] lock_acquire+0x6d/0x8a
       [] bd_claim_by_disk+0x5d/0x166
       [] __mutex_lock_slowpath+0xde/0x232
       [] bd_claim_by_disk+0x5d/0x166
       [] bd_claim_by_disk+0x5d/0x166
       [] bind_rdev_to_array+0x205/0x223
       [] trace_hardirqs_on+0x123/0x14d
       [] autorun_devices+0x128/0x2b4
       [] autorun_devices+0x1d6/0x2b4
       [] printk+0x1f/0xaf
       [] md_ioctl+0x11f/0x13a4
       [] __kernel_text_address+0x18/0x23
       [] dump_trace+0x87/0x91
       [] __kernel_text_address+0x18/0x23
       [] dump_trace+0x87/0x91
       [] __kernel_text_address+0x18/0x23
       [] dump_trace+0x87/0x91
       [] find_usage_backwards+0x64/0x88
       [] find_usage_backwards+0x64/0x88
       [] check_usage_backwards+0x19/0x41
       [] md_open+0x22/0x53
       [] mark_lock+0x324/0x3a0
       [] blkdev_driver_ioctl+0x4e/0x5e
       [] blkdev_ioctl+0x64c/0x69b
       [] trace_hardirqs_on+0x123/0x14d
       [] avc_has_perm+0x4e/0x58
       [] inode_has_perm+0x5b/0x63
       [] __lock_acquire+0x9b6/0x9fd
       [] set_close_on_exec+0x24/0x41
       [] fd_install+0x24/0x50
       [] file_has_perm+0x8c/0x94
       [] block_ioctl+0x18/0x1b
       [] block_ioctl+0x0/0x1b
       [] do_ioctl+0x1f/0x62
       [] vfs_ioctl+0x24a/0x25c
       [] sys_ioctl+0x4c/0x66
       [] syscall_call+0x7/0xb
       [] 0xffffffff

other info that might help us debug this:

1 lock held by init/1:
 #0:  (&new->reconfig_mutex){--..}, at: [] autorun_devices+0x128/0x2b4

stack backtrace:
 [] print_circular_bug_tail+0x5d/0x65
 [] __lock_acquire+0x82a/0x9fd
 [] bd_claim_by_disk+0x5d/0x166
 [] cache_alloc_debugcheck_after+0xc4/0x13a
 [] lock_acquire+0x6d/0x8a
 [] bd_claim_by_disk+0x5d/0x166
 [] __mutex_lock_slowpath+0xde/0x232
 [] bd_claim_by_disk+0x5d/0x166
 [] bd_claim_by_disk+0x5d/0x166
 [] bind_rdev_to_array+0x205/0x223
 [] trace_hardirqs_on+0x123/0x14d
 [] autorun_devices+0x128/0x2b4
 [] autorun_devices+0x1d6/0x2b4
 [] printk+0x1f/0xaf
 [] md_ioctl+0x11f/0x13a4
 [] __kernel_text_address+0x18/0x23
 [] dump_trace+0x87/0x91
 [] __kernel_text_address+0x18/0x23
 [] dump_trace+0x87/0x91
 [] __kernel_text_address+0x18/0x23
 [] dump_trace+0x87/0x91
 [] find_usage_backwards+0x64/0x88
 [] find_usage_backwards+0x64/0x88
 [] check_usage_backwards+0x19/0x41
 [] md_open+0x22/0x53
 [] mark_lock+0x324/0x3a0
 [] blkdev_driver_ioctl+0x4e/0x5e
 [] blkdev_ioctl+0x64c/0x69b
 [] trace_hardirqs_on+0x123/0x14d
 [] avc_has_perm+0x4e/0x58
 [] inode_has_perm+0x5b/0x63
 [] __lock_acquire+0x9b6/0x9fd
 [] set_close_on_exec+0x24/0x41
 [] fd_install+0x24/0x50
 [] file_has_perm+0x8c/0x94
 [] block_ioctl+0x18/0x1b
 [] block_ioctl+0x0/0x1b
 [] do_ioctl+0x1f/0x62
 [] vfs_ioctl+0x24a/0x25c
 [] sys_ioctl+0x4c/0x66
 [] syscall_call+0x7/0xb
=======================
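Reading the chains above, the cycle appears to be: the md autorun path (md_ioctl -> autorun_devices -> bind_rdev_to_array -> bd_claim_by_disk) wants the block-device claim lock while already holding mddev->reconfig_mutex, while the block-device open path (do_open -> md_open) takes the bdev lock first and then reconfig_mutex. The sketch below is only a userspace illustration of that ABBA inversion, not the kernel code; the names in it (reconfig_mutex, bdev_lock, autorun_path, open_path) are stand-ins made up for the example.

/*
 * Userspace sketch of the ABBA lock-order inversion the lockdep report
 * above describes.  This is NOT the kernel code; the two pthread mutexes
 * are only stand-ins for mddev->reconfig_mutex and the bdev/bd_claim lock.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t reconfig_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t bdev_lock      = PTHREAD_MUTEX_INITIALIZER;
static pthread_barrier_t both_hold_first_lock;

/* Like the autorun path: reconfig_mutex is held, then the bdev lock is
 * wanted (autorun_devices -> bind_rdev_to_array -> bd_claim_by_disk). */
static void *autorun_path(void *unused)
{
	pthread_mutex_lock(&reconfig_mutex);
	pthread_barrier_wait(&both_hold_first_lock);
	if (pthread_mutex_trylock(&bdev_lock) != 0)
		printf("autorun path would block on the bdev lock "
		       "while holding reconfig_mutex\n");
	else
		pthread_mutex_unlock(&bdev_lock);
	pthread_mutex_unlock(&reconfig_mutex);
	return NULL;
}

/* Like the open path: the bdev lock is held, then reconfig_mutex is
 * wanted (do_open -> md_open). */
static void *open_path(void *unused)
{
	pthread_mutex_lock(&bdev_lock);
	pthread_barrier_wait(&both_hold_first_lock);
	if (pthread_mutex_trylock(&reconfig_mutex) != 0)
		printf("open path would block on reconfig_mutex "
		       "while holding the bdev lock\n");
	else
		pthread_mutex_unlock(&reconfig_mutex);
	pthread_mutex_unlock(&bdev_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_barrier_init(&both_hold_first_lock, NULL, 2);
	pthread_create(&a, NULL, autorun_path, NULL);
	pthread_create(&b, NULL, open_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	pthread_barrier_destroy(&both_hold_first_lock);
	/* With plain pthread_mutex_lock() instead of trylock, the two
	 * threads could end up waiting on each other forever: the
	 * deadlock the report above warns about before it happens. */
	return 0;
}

Build with "gcc -pthread abba.c" (the file name is arbitrary). Whichever thread reaches its trylock first is guaranteed to find the other thread's lock still held, which is the same inverted ordering lockdep is flagging in chains #0/#1/#2 before it can turn into a real deadlock.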