Date: Wed, 22 Aug 2012 19:02:53 -0400
From: Dave Jones
To: Linux Kernel
Cc: Al Viro
Subject: 3.6rc3 lockdep trace. tasklist_lock vs fasync
Message-ID: <20120822230253.GB18687@redhat.com>

This started showing up after today's pull.

======================================================
[ INFO: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected ]
3.6.0-rc3+ #35 Not tainted
------------------------------------------------------
trinity-main/5043 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
blocked: (tasklist_lock){.+.+..}, instance: ffffffff81c05098, at: [] send_sigio+0x4f/0x190

and this task is already holding:
held: (&(&new->fa_lock)->rlock){-.-...}, instance: ffff880141552958, at: [] kill_fasync+0xa6/0x2b0
which would create a new lock dependency:
 (&(&new->fa_lock)->rlock){-.-...} -> (tasklist_lock){.+.+..}

but this new dependency connects a HARDIRQ-irq-safe lock:
 (&(&new->fa_lock)->rlock){-.-...}
...
which became HARDIRQ-irq-safe at:
  [] __lock_acquire+0x78c/0x1ae0
  [] lock_acquire+0xad/0x220
  [] _raw_spin_lock_irqsave+0x7d/0xd0
  [] kill_fasync+0xa6/0x2b0
  [] credit_entropy_bits.part.8+0x15d/0x220
  [] add_interrupt_randomness+0x17a/0x2c0
  [] handle_irq_event_percpu+0xb5/0x3f0
  [] handle_irq_event+0x48/0x70
  [] handle_edge_irq+0x77/0x110
  [] handle_irq+0xbf/0x150
  [] do_IRQ+0x5a/0xe0
  [] ret_from_intr+0x0/0x1a
  [] jbd2_journal_file_buffer+0x5e/0xd0
  [] jbd2_journal_commit_transaction+0xdb9/0x2550
  [] kjournald2+0xe0/0x410
  [] kthread+0xb7/0xc0
  [] kernel_thread_helper+0x4/0x10

to a HARDIRQ-irq-unsafe lock:
 (&(&p->alloc_lock)->rlock){+.+...}
... which became HARDIRQ-irq-unsafe at:
...
  [] __lock_acquire+0x5b2/0x1ae0
  [] lock_acquire+0xad/0x220
  [] _raw_spin_lock+0x46/0x80
  [] set_task_comm+0x32/0x180
  [] kthreadd+0x38/0x2e0
  [] kernel_thread_helper+0x4/0x10

other info that might help us debug this:

Chain exists of:
  &(&new->fa_lock)->rlock --> tasklist_lock --> &(&p->alloc_lock)->rlock

 Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&(&p->alloc_lock)->rlock);
                               local_irq_disable();
                               lock(&(&new->fa_lock)->rlock);
                               lock(tasklist_lock);
  <Interrupt>
    lock(&(&new->fa_lock)->rlock);

 *** DEADLOCK ***

4 locks on stack by trinity-main/5043:
 #0: held:    (file_lock_lock){+.+...}, instance: ffffffff81c84438, at: [] lock_flocks+0x15/0x20
 #1: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c348e0, at: [] kill_fasync+0x27/0x2b0
 #2: held:    (&(&new->fa_lock)->rlock){-.-...}, instance: ffff880141552958, at: [] kill_fasync+0xa6/0x2b0
 #3: held:    (&f->f_owner.lock){.?.?..}, instance: ffff88011a2821a8, at: [] send_sigio+0x24/0x190

the dependencies between HARDIRQ-irq-safe lock and the holding lock:
-> (&(&new->fa_lock)->rlock){-.-...} ops: 12894 {
   IN-HARDIRQ-W at:
     [] __lock_acquire+0x78c/0x1ae0
     [] lock_acquire+0xad/0x220
     [] _raw_spin_lock_irqsave+0x7d/0xd0
     [] kill_fasync+0xa6/0x2b0
     [] credit_entropy_bits.part.8+0x15d/0x220
     [] add_interrupt_randomness+0x17a/0x2c0
     [] handle_irq_event_percpu+0xb5/0x3f0
     [] handle_irq_event+0x48/0x70
     [] handle_edge_irq+0x77/0x110
     [] handle_irq+0xbf/0x150
     [] do_IRQ+0x5a/0xe0
     [] ret_from_intr+0x0/0x1a
     [] jbd2_journal_file_buffer+0x5e/0xd0
     [] jbd2_journal_commit_transaction+0xdb9/0x2550
     [] kjournald2+0xe0/0x410
     [] kthread+0xb7/0xc0
     [] kernel_thread_helper+0x4/0x10
   IN-SOFTIRQ-W at:
     [] __lock_acquire+0x587/0x1ae0
     [] lock_acquire+0xad/0x220
     [] _raw_spin_lock_irqsave+0x7d/0xd0
     [] kill_fasync+0xa6/0x2b0
     [] credit_entropy_bits.part.8+0x15d/0x220
     [] add_timer_randomness+0x116/0x140
     [] add_disk_randomness+0x2e/0x30
     [] blk_update_bidi_request+0x6c/0xa0
     [] blk_end_bidi_request+0x2c/0x80
     [] blk_end_request+0x10/0x20
     [] scsi_io_completion+0xaf/0x650
     [] scsi_finish_command+0xcf/0x140
     [] scsi_softirq_done+0x13f/0x160
     [] blk_done_softirq+0xa0/0xc0
     [] __do_softirq+0x100/0x420
     [] call_softirq+0x1c/0x30
     [] do_softirq+0xd5/0x110
     [] irq_exit+0xd5/0xe0
     [] do_IRQ+0x63/0xe0
     [] ret_from_intr+0x0/0x1a
     [] jbd2_journal_file_buffer+0x5e/0xd0
     [] jbd2_journal_commit_transaction+0xdb9/0x2550
     [] kjournald2+0xe0/0x410
     [] kthread+0xb7/0xc0
     [] kernel_thread_helper+0x4/0x10
   INITIAL USE at:
     [] __lock_acquire+0x2ff/0x1ae0
     [] lock_acquire+0xad/0x220
     [] _raw_spin_lock_irqsave+0x7d/0xd0
     [] kill_fasync+0xa6/0x2b0
     [] wakeup_pipe_readers+0x45/0x50
     [] splice_to_pipe+0x1d8/0x250
     [] vmsplice_to_pipe+0x245/0x2c0
     [] sys_vmsplice+0x9d/0x210
     [] system_call_fastpath+0x1a/0x1f
 }
 ... key at: [] __key.31598+0x0/0x8
 ...
 ... acquired at:
   [] check_irq_usage+0x5b/0xe0
   [] __lock_acquire+0xd32/0x1ae0
   [] lock_acquire+0xad/0x220
   [] _raw_read_lock+0x49/0x80
   [] send_sigio+0x4f/0x190
   [] kill_fasync+0xd6/0x2b0
   [] lease_break_callback+0x1f/0x30
   [] __break_lease+0x138/0x2e0
   [] do_dentry_open+0x240/0x310
   [] finish_open+0x4b/0x70
   [] do_last+0x6ee/0xe20
   [] path_openat+0xbd/0x500
   [] do_filp_open+0x41/0xa0
   [] do_sys_open+0xf6/0x1e0
   [] sys_open+0x21/0x30
   [] system_call_fastpath+0x1a/0x1f

the dependencies between the lock to be acquired and HARDIRQ-irq-unsafe lock:
-> (&(&p->alloc_lock)->rlock){+.+...} ops: 27135388 {
   HARDIRQ-ON-W at:
     [] __lock_acquire+0x5b2/0x1ae0
     [] lock_acquire+0xad/0x220
     [] _raw_spin_lock+0x46/0x80
     [] set_task_comm+0x32/0x180
     [] kthreadd+0x38/0x2e0
     [] kernel_thread_helper+0x4/0x10
   SOFTIRQ-ON-W at:
     [] __lock_acquire+0x5e0/0x1ae0
     [] lock_acquire+0xad/0x220
     [] _raw_spin_lock+0x46/0x80
     [] set_task_comm+0x32/0x180
     [] kthreadd+0x38/0x2e0
     [] kernel_thread_helper+0x4/0x10
   INITIAL USE at:
     [] __lock_acquire+0x2ff/0x1ae0
     [] lock_acquire+0xad/0x220
     [] _raw_spin_lock+0x46/0x80
     [] set_task_comm+0x32/0x180
     [] kthreadd+0x38/0x2e0
     [] kernel_thread_helper+0x4/0x10
 }
 ... key at: [] __key.46250+0x0/0x8
 ...
 ... acquired at:
   [] lock_acquire+0xad/0x220
   [] _raw_spin_lock+0x46/0x80
   [] keyctl_session_to_parent+0xde/0x490
   [] sys_keyctl+0x6d/0x1a0
   [] system_call_fastpath+0x1a/0x1f

-> (tasklist_lock){.+.+..} ops: 294720 {
   HARDIRQ-ON-R at:
     [] __lock_acquire+0x4bf/0x1ae0
     [] lock_acquire+0xad/0x220
     [] _raw_read_lock+0x49/0x80
     [] do_wait+0xb2/0x360
     [] sys_wait4+0x75/0xf0
     [] wait_for_helper+0x82/0xb0
     [] kernel_thread_helper+0x4/0x10
   SOFTIRQ-ON-R at:
     [] __lock_acquire+0x5e0/0x1ae0
     [] lock_acquire+0xad/0x220
     [] _raw_read_lock+0x49/0x80
     [] do_wait+0xb2/0x360
     [] sys_wait4+0x75/0xf0
     [] wait_for_helper+0x82/0xb0
     [] kernel_thread_helper+0x4/0x10
   INITIAL USE at:
     [] __lock_acquire+0x2ff/0x1ae0
     [] lock_acquire+0xad/0x220
     [] _raw_write_lock_irq+0x5c/0xa0
     [] copy_process.part.22+0x1041/0x1740
     [] do_fork+0x140/0x4e0
     [] kernel_thread+0x76/0x80
     [] rest_init+0x26/0x154
     [] start_kernel+0x3fa/0x407
     [] x86_64_start_reservations+0x131/0x135
     [] x86_64_start_kernel+0x148/0x157
 }
 ... key at: [] tasklist_lock+0x18/0x80
 ... acquired at:
   [] check_irq_usage+0x5b/0xe0
   [] __lock_acquire+0xd32/0x1ae0
   [] lock_acquire+0xad/0x220
   [] _raw_read_lock+0x49/0x80
   [] send_sigio+0x4f/0x190
   [] kill_fasync+0xd6/0x2b0
   [] lease_break_callback+0x1f/0x30
   [] __break_lease+0x138/0x2e0
   [] do_dentry_open+0x240/0x310
   [] finish_open+0x4b/0x70
   [] do_last+0x6ee/0xe20
   [] path_openat+0xbd/0x500
   [] do_filp_open+0x41/0xa0
   [] do_sys_open+0xf6/0x1e0
   [] sys_open+0x21/0x30
   [] system_call_fastpath+0x1a/0x1f

stack backtrace:
Pid: 5043, comm: trinity-main Not tainted 3.6.0-rc3+ #35
Call Trace:
 [] check_usage+0x4e4/0x500
 [] ? local_clock+0x99/0xc0
 [] ? native_sched_clock+0x19/0x80
 [] check_irq_usage+0x5b/0xe0
 [] __lock_acquire+0xd32/0x1ae0
 [] ? graph_unlock+0x36/0xb0
 [] ? __lock_acquire+0xc19/0x1ae0
 [] ? __lock_acquire+0x2ff/0x1ae0
 [] lock_acquire+0xad/0x220
 [] ? send_sigio+0x4f/0x190
 [] _raw_read_lock+0x49/0x80
 [] ? send_sigio+0x4f/0x190
 [] send_sigio+0x4f/0x190
 [] ? kill_fasync+0xa6/0x2b0
 [] kill_fasync+0xd6/0x2b0
 [] ? kill_fasync+0x27/0x2b0
 [] ? _raw_spin_lock+0x72/0x80
 [] lease_break_callback+0x1f/0x30
 [] __break_lease+0x138/0x2e0
 [] do_dentry_open+0x240/0x310
 [] finish_open+0x4b/0x70
 [] do_last+0x6ee/0xe20
 [] ? inode_permission+0x18/0x50
 [] ? link_path_walk+0x23d/0x8a0
 [] path_openat+0xbd/0x500
 [] ? might_fault+0x53/0xb0
 [] ? native_sched_clock+0x19/0x80
 [] ? sched_clock+0x13/0x20
 [] ? local_clock+0x99/0xc0
 [] do_filp_open+0x41/0xa0
 [] ? alloc_fd+0x171/0x220
 [] do_sys_open+0xf6/0x1e0
 [] sys_open+0x21/0x30
 [] system_call_fastpath+0x1a/0x1f