From: Ben Hutchings <ben@decadent.org.uk>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: akpm@linux-foundation.org, "Greg Kroah-Hartman", "Dmitry Vyukov", "Jiri Slaby"
Date: Sat, 11 Mar 2017 15:15:57 +0000
X-Mailer: LinuxStableQueue (scripts by bwh)
Subject: [PATCH 3.16 371/372] TTY: n_hdlc, fix lockdep false positive

3.16.42-rc2 review patch.  If anyone has any objections, please let me know.

------------------

From: Jiri Slaby

commit e9b736d88af1a143530565929390cadf036dc799 upstream.

The class of the 4 n_hdlc buf locks is the same because a single
function, n_hdlc_buf_list_init, is used to initialize all of them. But
since flush_tx_queue takes n_hdlc->tx_buf_list.spinlock and then calls
n_hdlc_buf_put, which takes n_hdlc->tx_free_buf_list.spinlock, lockdep
emits a warning:

=============================================
[ INFO: possible recursive locking detected ]
4.3.0-25.g91e30a7-default #1 Not tainted
---------------------------------------------
a.out/1248 is trying to acquire lock:
 (&(&list->spinlock)->rlock){......}, at: [] n_hdlc_buf_put+0x20/0x60 [n_hdlc]

but task is already holding lock:
 (&(&list->spinlock)->rlock){......}, at: [] n_hdlc_tty_ioctl+0x127/0x1d0 [n_hdlc]

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&(&list->spinlock)->rlock);
  lock(&(&list->spinlock)->rlock);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

2 locks held by a.out/1248:
 #0:  (&tty->ldisc_sem){++++++}, at: [] tty_ldisc_ref_wait+0x20/0x50
 #1:  (&(&list->spinlock)->rlock){......}, at: [] n_hdlc_tty_ioctl+0x127/0x1d0 [n_hdlc]
...
Call Trace:
...
 [] _raw_spin_lock_irqsave+0x50/0x70
 [] n_hdlc_buf_put+0x20/0x60 [n_hdlc]
 [] n_hdlc_tty_ioctl+0x144/0x1d0 [n_hdlc]
 [] tty_ioctl+0x3f1/0xe40
...

Fix it by initializing the spinlocks separately. This also removes a
redundant memset of the freshly kzallocated space.
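For readers not familiar with lockdep, here is a minimal illustrative
sketch of why the fix works (not part of the patch; the struct and
function names below are made up). With lockdep enabled, spin_lock_init()
is a macro that defines a static struct lock_class_key at its call site,
so every lock initialized through one shared helper ends up in the same
lock class, and taking two of them nested looks like recursive locking.
Initializing each lock at its own call site gives each one its own class:

#include <linux/spinlock.h>

struct demo_buf_list {			/* hypothetical, stands in for n_hdlc_buf_list */
	spinlock_t spinlock;
};

/* One call site for every lock: they all share a single lockdep class,
 * so holding a->spinlock while taking b->spinlock triggers the
 * "possible recursive locking detected" report quoted above. */
static void demo_shared_init(struct demo_buf_list *list)
{
	spin_lock_init(&list->spinlock);
}

/* One call site per lock: each lock gets its own class, and nesting
 * them is accepted by lockdep.  This is what the patch does. */
static void demo_separate_init(struct demo_buf_list *a, struct demo_buf_list *b)
{
	spin_lock_init(&a->spinlock);
	spin_lock_init(&b->spinlock);
}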
Signed-off-by: Jiri Slaby
Reported-by: Dmitry Vyukov
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Ben Hutchings
---
 drivers/tty/n_hdlc.c | 19 ++++---------------
 1 file changed, 4 insertions(+), 15 deletions(-)

diff --git a/drivers/tty/n_hdlc.c b/drivers/tty/n_hdlc.c
index bbc4ce66c2c1..bcaba17688f6 100644
--- a/drivers/tty/n_hdlc.c
+++ b/drivers/tty/n_hdlc.c
@@ -159,7 +159,6 @@ struct n_hdlc {
 /*
  * HDLC buffer list manipulation functions
  */
-static void n_hdlc_buf_list_init(struct n_hdlc_buf_list *list);
 static void n_hdlc_buf_put(struct n_hdlc_buf_list *list,
 			   struct n_hdlc_buf *buf);
 static struct n_hdlc_buf *n_hdlc_buf_get(struct n_hdlc_buf_list *list);
@@ -853,10 +852,10 @@ static struct n_hdlc *n_hdlc_alloc(void)
 	if (!n_hdlc)
 		return NULL;
 
-	n_hdlc_buf_list_init(&n_hdlc->rx_free_buf_list);
-	n_hdlc_buf_list_init(&n_hdlc->tx_free_buf_list);
-	n_hdlc_buf_list_init(&n_hdlc->rx_buf_list);
-	n_hdlc_buf_list_init(&n_hdlc->tx_buf_list);
+	spin_lock_init(&n_hdlc->rx_free_buf_list.spinlock);
+	spin_lock_init(&n_hdlc->tx_free_buf_list.spinlock);
+	spin_lock_init(&n_hdlc->rx_buf_list.spinlock);
+	spin_lock_init(&n_hdlc->tx_buf_list.spinlock);
 
 	/* allocate free rx buffer list */
 	for(i=0;i<DEFAULT_RX_BUF_COUNT;i++) {
@@ -885,16 +884,6 @@ static struct n_hdlc *n_hdlc_alloc(void)
 }	/* end of n_hdlc_alloc() */
 
 /**
- * n_hdlc_buf_list_init - initialize specified HDLC buffer list
- * @list - pointer to buffer list
- */
-static void n_hdlc_buf_list_init(struct n_hdlc_buf_list *list)
-{
-	memset(list, 0, sizeof(*list));
-	spin_lock_init(&list->spinlock);
-}	/* end of n_hdlc_buf_list_init() */
-
-/**
  * n_hdlc_buf_put - add specified HDLC buffer to tail of specified list
  * @list - pointer to buffer list
  * @buf - pointer to buffer
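As a design note, a simplified sketch of the nesting that triggered the
report (the field and function names are the ones quoted in the commit
message; the function below only abbreviates the driver's flush_tx_queue()
and is illustrative, not the literal driver code):

/* tx_buf_list.spinlock is held while n_hdlc_buf_put() takes
 * tx_free_buf_list.spinlock.  With both locks initialized from one call
 * site they shared a lockdep class and this nesting was reported as
 * possible recursive locking; after the patch the classes are distinct. */
static void flush_tx_queue_sketch(struct n_hdlc *n_hdlc, struct n_hdlc_buf *buf)
{
	unsigned long flags;

	spin_lock_irqsave(&n_hdlc->tx_buf_list.spinlock, flags);
	n_hdlc_buf_put(&n_hdlc->tx_free_buf_list, buf);	/* takes the second lock */
	spin_unlock_irqrestore(&n_hdlc->tx_buf_list.spinlock, flags);
}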