Date: Tue, 22 May 2012 22:02:47 -0400
From: Dave Jones
To: Ming Lei
Cc: Linux Kernel, Alan Cox, Greg Kroah-Hartman
Subject: Re: 3.4+ tty lockdep trace
Message-ID: <20120523020247.GA5653@redhat.com>
References: <20120523002645.GA3490@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailing-List: linux-kernel@vger.kernel.org

A different one. This time with devpts.
(With the patch Ming Lei pointed to on top of Linus current)

	Dave

======================================================
[ INFO: possible circular locking dependency detected ]
3.4.0+ #25 Not tainted
-------------------------------------------------------
sshd/632 is trying to acquire lock:
 (devpts_mutex){+.+.+.}, at: [] pty_close+0x156/0x180

but task is already holding lock:
 (&tty->legacy_mutex){+.+.+.}, at: [] tty_lock_nested+0x42/0x90

which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:

-> #1 (&tty->legacy_mutex){+.+.+.}:
       [] lock_acquire+0x92/0x1f0
       [] mutex_lock_nested+0x71/0x3b0
       [] tty_lock_nested+0x42/0x90
       [] tty_lock+0x10/0x20
       [] tty_init_dev+0x6f/0x140
       [] ptmx_open+0xa6/0x180
       [] chrdev_open+0x9b/0x1b0
       [] __dentry_open+0x26b/0x380
       [] nameidata_to_filp+0x74/0x80
       [] do_last+0x468/0x900
       [] path_openat+0xd2/0x3f0
       [] do_filp_open+0x41/0xa0
       [] do_sys_open+0xed/0x1c0
       [] sys_open+0x21/0x30
       [] system_call_fastpath+0x16/0x1b

-> #0 (devpts_mutex){+.+.+.}:
       [] __lock_acquire+0x132e/0x1aa0
       [] lock_acquire+0x92/0x1f0
       [] mutex_lock_nested+0x71/0x3b0
       [] pty_close+0x156/0x180
       [] tty_release+0x183/0x5d0
       [] fput+0x12c/0x300
       [] filp_close+0x69/0xa0
       [] sys_close+0xad/0x1a0
       [] system_call_fastpath+0x16/0x1b

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&tty->legacy_mutex);
                               lock(devpts_mutex);
                               lock(&tty->legacy_mutex);
  lock(devpts_mutex);

 *** DEADLOCK ***

1 lock held by sshd/632:
 #0:  (&tty->legacy_mutex){+.+.+.}, at: [] tty_lock_nested+0x42/0x90

stack backtrace:
Pid: 632, comm: sshd Not tainted 3.4.0+ #25
Call Trace:
 [] print_circular_bug+0x1fb/0x20c
 [] __lock_acquire+0x132e/0x1aa0
 [] lock_acquire+0x92/0x1f0
 [] ? pty_close+0x156/0x180
 [] mutex_lock_nested+0x71/0x3b0
 [] ? pty_close+0x156/0x180
 [] ? sub_preempt_count+0x6d/0xd0
 [] ? pty_close+0x156/0x180
 [] ? _raw_spin_unlock_irqrestore+0x42/0x80
 [] ? __wake_up+0x53/0x70
 [] pty_close+0x156/0x180
 [] tty_release+0x183/0x5d0
 [] ? vfsmount_lock_local_unlock_cpu+0x70/0x70
 [] fput+0x12c/0x300
 [] filp_close+0x69/0xa0
 [] sys_close+0xad/0x1a0
 [] system_call_fastpath+0x16/0x1b

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/