Date: Tue, 27 Nov 2012 19:01:08 +0100
From: Sebastian Andrzej Siewior
To: Greg Kroah-Hartman
Cc: Alan Cox, linux-kernel@vger.kernel.org, linux-usb@vger.kernel.org, Alan Cox
Subject: [PATCH RESEND] tty: don't dead lock while flushing workqueue
Message-ID: <20121127180108.GA7376@linutronix.de>
References: <1353501542-14707-1-git-send-email-bigeasy@linutronix.de>
 <20121121140426.1860d093@pyramind.ukuu.org.uk>
 <20121127095357.GA3536@breakpoint.cc>
 <20121127172249.GB24592@kroah.com>
In-Reply-To: <20121127172249.GB24592@kroah.com>

Since commit 89c8d91e31f2 ("tty: localise the lock") I see a deadlock in
one of my dummy_hcd + g_nokia test cases. The first run was usually okay,
the second often resulted in a lockdep splat and the third usually in a
deadlock. Lockdep complained about tty->hangup_work and tty->legacy_mutex
being taken in both orders:

| ======================================================
| [ INFO: possible circular locking dependency detected ]
| 3.7.0-rc6+ #204 Not tainted
| -------------------------------------------------------
| kworker/2:1/35 is trying to acquire lock:
|  (&tty->legacy_mutex){+.+.+.}, at: [] tty_lock_nested+0x36/0x80
|
| but task is already holding lock:
|  ((&tty->hangup_work)){+.+...}, at: [] process_one_work+0x124/0x5e0
|
| which lock already depends on the new lock.
|
| the existing dependency chain (in reverse order) is:
|
| -> #2 ((&tty->hangup_work)){+.+...}:
|        [] lock_acquire+0x84/0x190
|        [] flush_work+0x3d/0x240
|        [] tty_ldisc_flush_works+0x16/0x30
|        [] tty_ldisc_release+0x21/0x70
|        [] tty_release+0x35c/0x470
|        [] __fput+0xd8/0x270
|        [] ____fput+0xd/0x10
|        [] task_work_run+0xb9/0xf0
|        [] do_notify_resume+0x51/0x80
|        [] work_notifysig+0x35/0x3b
|
| -> #1 (&tty->legacy_mutex/1){+.+...}:
|        [] lock_acquire+0x84/0x190
|        [] mutex_lock_nested+0x6c/0x2f0
|        [] tty_lock_nested+0x36/0x80
|        [] tty_lock_pair+0x29/0x70
|        [] tty_release+0x118/0x470
|        [] __fput+0xd8/0x270
|        [] ____fput+0xd/0x10
|        [] task_work_run+0xb9/0xf0
|        [] do_notify_resume+0x51/0x80
|        [] work_notifysig+0x35/0x3b
|
| -> #0 (&tty->legacy_mutex){+.+.+.}:
|        [] __lock_acquire+0x1189/0x16a0
|        [] lock_acquire+0x84/0x190
|        [] mutex_lock_nested+0x6c/0x2f0
|        [] tty_lock_nested+0x36/0x80
|        [] tty_lock+0xf/0x20
|        [] __tty_hangup+0x54/0x410
|        [] do_tty_hangup+0x12/0x20
|        [] process_one_work+0x1a3/0x5e0
|        [] worker_thread+0x119/0x3a0
|        [] kthread+0x94/0xa0
|        [] ret_from_kernel_thread+0x1b/0x28
|
| other info that might help us debug this:
|
| Chain exists of:
|   &tty->legacy_mutex --> &tty->legacy_mutex/1 --> (&tty->hangup_work)
|
|  Possible unsafe locking scenario:
|
|        CPU0                    CPU1
|        ----                    ----
|   lock((&tty->hangup_work));
|                                lock(&tty->legacy_mutex/1);
|                                lock((&tty->hangup_work));
|   lock(&tty->legacy_mutex);
|
|  *** DEADLOCK ***

Before the patch mentioned above, tty_ldisc_release() looked like this:

|        tty_ldisc_halt(tty);
|        tty_ldisc_flush_works(tty);
|        tty_lock();

As can be seen, it first flushed the workqueue and then grabbed the
tty_lock. Now we grab the lock first:

|        tty_lock_pair(tty, o_tty);
|        tty_ldisc_halt(tty);
|        tty_ldisc_flush_works(tty);

so lockdep's complaint seems valid. The other user of
tty_ldisc_flush_works() is tty_set_ldisc(), and I tried to mimic its
logic:
- grab the tty lock
- grab ldisc_mutex
- release the tty lock
- call tty_ldisc_halt()
- release ldisc_mutex
- call tty_ldisc_flush_works()

The code under tty_ldisc_kill() was executed earlier with the tty lock
held, so it is taken again here. I don't see any problems in my test
case.

Cc: stable@vger.kernel.org #v3.7
Acked-by: Alan Cox
Signed-off-by: Sebastian Andrzej Siewior
---
Greg, here is the resend. I added Alan Cox's Acked-by because he wrote

|This looks fine to me as by the time we call tty_ldisc_release we have
|already set TTY_CLOSING on both sides.

See http://lkml.org/lkml/2012/11/21/347

 drivers/tty/tty_ldisc.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/tty/tty_ldisc.c b/drivers/tty/tty_ldisc.c
index 0f2a2c5..fb76818 100644
--- a/drivers/tty/tty_ldisc.c
+++ b/drivers/tty/tty_ldisc.c
@@ -930,16 +930,21 @@ void tty_ldisc_release(struct tty_struct *tty, struct tty_struct *o_tty)
          */
 
         tty_lock_pair(tty, o_tty);
+        mutex_lock(&tty->ldisc_mutex);
+        tty_unlock_pair(tty, o_tty);
+
         tty_ldisc_halt(tty);
-        tty_ldisc_flush_works(tty);
-        if (o_tty) {
+        if (o_tty)
                 tty_ldisc_halt(o_tty);
+        mutex_unlock(&tty->ldisc_mutex);
+
+        tty_ldisc_flush_works(tty);
+        if (o_tty)
                 tty_ldisc_flush_works(o_tty);
-        }
 
+        tty_lock_pair(tty, o_tty);
         /* This will need doing differently if we need to lock */
         tty_ldisc_kill(tty);
-
         if (o_tty)
                 tty_ldisc_kill(o_tty);

-- 
1.7.10.4
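P.S.: for anyone who wants to poke at the inversion outside of the kernel,
here is a small stand-alone userspace sketch of the same pattern (plain
pthreads; the file name, lock name and function names are only borrowed as
labels, this is an analogy and not the tty code). main() plays the old
tty_release() path: take the lock, then wait for the "hangup work"; the
worker plays do_tty_hangup(), which wants that same lock. A trylock is used
so the demo reports the inversion instead of hanging.

/* deadlock-demo.c (name made up); build with: gcc -pthread deadlock-demo.c */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* stands in for tty->legacy_mutex / the tty lock */
static pthread_mutex_t legacy_mutex = PTHREAD_MUTEX_INITIALIZER;

/* stands in for do_tty_hangup() running off the workqueue */
static void *hangup_work(void *arg)
{
        (void)arg;
        sleep(1);       /* let the "closer" take the lock first */

        /* __tty_hangup() -> tty_lock(): with a blocking lock this would
         * wait forever, because the holder sits in pthread_join() below. */
        if (pthread_mutex_trylock(&legacy_mutex)) {
                printf("hangup work: lock held by the task flushing this work -> deadlock\n");
                return NULL;
        }
        pthread_mutex_unlock(&legacy_mutex);
        return NULL;
}

int main(void)
{
        pthread_t worker;

        pthread_create(&worker, NULL, hangup_work, NULL);

        /* old tty_release() order: take the lock, then flush the work */
        pthread_mutex_lock(&legacy_mutex);      /* tty_lock_pair()         */
        pthread_join(worker, NULL);             /* tty_ldisc_flush_works() */
        pthread_mutex_unlock(&legacy_mutex);

        return 0;
}

With the patch the order matches tty_set_ldisc(): the tty locks are dropped
again before the works are flushed, so the work item can take and release
them and the flush can finish.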