Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753110AbdFGXqW (ORCPT );
	Wed, 7 Jun 2017 19:46:22 -0400
Received: from wtarreau.pck.nerim.net ([62.212.114.60]:50531 "EHLO 1wt.eu"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752944AbdFGXEU (ORCPT );
	Wed, 7 Jun 2017 19:04:20 -0400
From: Willy Tarreau 
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org, linux@roeck-us.net
Cc: Jiri Slaby , Willy Tarreau 
Subject: [PATCH 3.10 244/250] TTY: n_hdlc, fix lockdep false positive
Date: Thu, 8 Jun 2017 01:00:30 +0200
Message-Id: <1496876436-32402-245-git-send-email-w@1wt.eu>
X-Mailer: git-send-email 2.8.0.rc2.1.gbe9624a
In-Reply-To: <1496876436-32402-1-git-send-email-w@1wt.eu>
References: <1496876436-32402-1-git-send-email-w@1wt.eu>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 3514
Lines: 100

From: Jiri Slaby 

commit e9b736d88af1a143530565929390cadf036dc799 upstream.

The class of the 4 n_hdlc buf locks is the same because a single
function, n_hdlc_buf_list_init, is used to initialize all of them. But
since flush_tx_queue takes n_hdlc->tx_buf_list.spinlock and then calls
n_hdlc_buf_put, which takes n_hdlc->tx_free_buf_list.spinlock, lockdep
emits a warning:

=============================================
[ INFO: possible recursive locking detected ]
4.3.0-25.g91e30a7-default #1 Not tainted
---------------------------------------------
a.out/1248 is trying to acquire lock:
 (&(&list->spinlock)->rlock){......}, at: [] n_hdlc_buf_put+0x20/0x60 [n_hdlc]

but task is already holding lock:
 (&(&list->spinlock)->rlock){......}, at: [] n_hdlc_tty_ioctl+0x127/0x1d0 [n_hdlc]

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&(&list->spinlock)->rlock);
  lock(&(&list->spinlock)->rlock);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

2 locks held by a.out/1248:
 #0:  (&tty->ldisc_sem){++++++}, at: [] tty_ldisc_ref_wait+0x20/0x50
 #1:  (&(&list->spinlock)->rlock){......}, at: [] n_hdlc_tty_ioctl+0x127/0x1d0 [n_hdlc]
...
Call Trace:
...
 [] _raw_spin_lock_irqsave+0x50/0x70
 [] n_hdlc_buf_put+0x20/0x60 [n_hdlc]
 [] n_hdlc_tty_ioctl+0x144/0x1d0 [n_hdlc]
 [] tty_ioctl+0x3f1/0xe40
...

Fix it by initializing the spinlocks separately. This also removes the
now-redundant memset of the freshly kzallocated space.
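As an illustration only (not part of the patch, and using made-up
demo_list/demo_list_init names), here is a minimal module-style sketch
of why one init helper triggers the false positive: spin_lock_init()
declares one static lock_class_key per call site, so every lock
initialized through the same helper lands in the same lockdep class,
and nesting two of them looks recursive.

	/* Hypothetical demo module, for illustration only. */
	#include <linux/init.h>
	#include <linux/module.h>
	#include <linux/spinlock.h>

	struct demo_list {
		spinlock_t lock;
	};

	static struct demo_list demo_free_list, demo_busy_list;

	/*
	 * One helper means one spin_lock_init() call site, hence one
	 * lock_class_key shared by every lock initialized here.
	 */
	static void demo_list_init(struct demo_list *list)
	{
		spin_lock_init(&list->lock);
	}

	static int __init demo_init(void)
	{
		unsigned long flags, flags2;

		demo_list_init(&demo_free_list);
		demo_list_init(&demo_busy_list);

		/*
		 * Two different spinlocks but one lockdep class: this
		 * nesting is reported as "possible recursive locking",
		 * just like flush_tx_queue() -> n_hdlc_buf_put().
		 */
		spin_lock_irqsave(&demo_busy_list.lock, flags);
		spin_lock_irqsave(&demo_free_list.lock, flags2);
		spin_unlock_irqrestore(&demo_free_list.lock, flags2);
		spin_unlock_irqrestore(&demo_busy_list.lock, flags);

		/*
		 * Calling spin_lock_init() on each lock at its own call
		 * site, as n_hdlc_alloc() does after this patch, gives
		 * each lock a distinct class and silences the warning.
		 */
		return 0;
	}

	static void __exit demo_exit(void)
	{
	}

	module_init(demo_init);
	module_exit(demo_exit);
	MODULE_LICENSE("GPL");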
Signed-off-by: Jiri Slaby 
Reported-by: Dmitry Vyukov 
Signed-off-by: Jiri Slaby 
Signed-off-by: Willy Tarreau 
---
 drivers/tty/n_hdlc.c | 19 ++++---------------
 1 file changed, 4 insertions(+), 15 deletions(-)

diff --git a/drivers/tty/n_hdlc.c b/drivers/tty/n_hdlc.c
index 1b2db9a..f26657c 100644
--- a/drivers/tty/n_hdlc.c
+++ b/drivers/tty/n_hdlc.c
@@ -159,7 +159,6 @@ struct n_hdlc {
 /*
  * HDLC buffer list manipulation functions
  */
-static void n_hdlc_buf_list_init(struct n_hdlc_buf_list *list);
 static void n_hdlc_buf_put(struct n_hdlc_buf_list *list,
 			   struct n_hdlc_buf *buf);
 static struct n_hdlc_buf *n_hdlc_buf_get(struct n_hdlc_buf_list *list);
@@ -855,10 +854,10 @@ static struct n_hdlc *n_hdlc_alloc(void)
 
 	memset(n_hdlc, 0, sizeof(*n_hdlc));
 
-	n_hdlc_buf_list_init(&n_hdlc->rx_free_buf_list);
-	n_hdlc_buf_list_init(&n_hdlc->tx_free_buf_list);
-	n_hdlc_buf_list_init(&n_hdlc->rx_buf_list);
-	n_hdlc_buf_list_init(&n_hdlc->tx_buf_list);
+	spin_lock_init(&n_hdlc->rx_free_buf_list.spinlock);
+	spin_lock_init(&n_hdlc->tx_free_buf_list.spinlock);
+	spin_lock_init(&n_hdlc->rx_buf_list.spinlock);
+	spin_lock_init(&n_hdlc->tx_buf_list.spinlock);
 	
 	/* allocate free rx buffer list */
 	for(i=0;i<DEFAULT_RX_BUF_COUNT;i++) {
@@ ... @@ static struct n_hdlc *n_hdlc_alloc(void)
 }	/* end of n_hdlc_alloc() */
 
 /**
- * n_hdlc_buf_list_init - initialize specified HDLC buffer list
- * @list - pointer to buffer list
- */
-static void n_hdlc_buf_list_init(struct n_hdlc_buf_list *list)
-{
-	memset(list, 0, sizeof(*list));
-	spin_lock_init(&list->spinlock);
-}	/* end of n_hdlc_buf_list_init() */
-
-/**
  * n_hdlc_buf_put - add specified HDLC buffer to tail of specified list
  * @list - pointer to buffer list
  * @buf - pointer to buffer
-- 
2.8.0.rc2.1.gbe9624a