From: Ying Xue
Subject: [PATCH net] tipc: eliminate possible recursive locking detected by LOCKDEP
Date: Thu, 11 Oct 2018 19:57:56 +0800
Message-ID: <1539259076-8562-1-git-send-email-ying.xue@windriver.com>
X-Mailer: git-send-email 2.7.4
X-Mailing-List: linux-kernel@vger.kernel.org

When booting a kernel built with the LOCKDEP option, the following
warning was reported:

WARNING: possible recursive locking detected
4.19.0-rc7+ #14 Not tainted
--------------------------------------------
swapper/0/1 is trying to acquire lock:
00000000dcfc0fc8 (&(&list->lock)->rlock#4){+...}, at: spin_lock_bh include/linux/spinlock.h:334 [inline]
00000000dcfc0fc8 (&(&list->lock)->rlock#4){+...}, at: tipc_link_reset+0x125/0xdf0 net/tipc/link.c:850

but task is already holding lock:
00000000cbb9b036 (&(&list->lock)->rlock#4){+...}, at: spin_lock_bh include/linux/spinlock.h:334 [inline]
00000000cbb9b036 (&(&list->lock)->rlock#4){+...}, at: tipc_link_reset+0xfa/0xdf0 net/tipc/link.c:849

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&(&list->lock)->rlock#4);
  lock(&(&list->lock)->rlock#4);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

2 locks held by swapper/0/1:
 #0: 00000000f7539d34 (pernet_ops_rwsem){+.+.}, at: register_pernet_subsys+0x19/0x40 net/core/net_namespace.c:1051
 #1: 00000000cbb9b036 (&(&list->lock)->rlock#4){+...}, at: spin_lock_bh include/linux/spinlock.h:334 [inline]
 #1: 00000000cbb9b036 (&(&list->lock)->rlock#4){+...}, at: tipc_link_reset+0xfa/0xdf0 net/tipc/link.c:849

stack backtrace:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.19.0-rc7+ #14
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x1af/0x295 lib/dump_stack.c:113
 print_deadlock_bug kernel/locking/lockdep.c:1759 [inline]
 check_deadlock kernel/locking/lockdep.c:1803 [inline]
 validate_chain kernel/locking/lockdep.c:2399 [inline]
 __lock_acquire+0xf1e/0x3c60 kernel/locking/lockdep.c:3411
 lock_acquire+0x1db/0x520 kernel/locking/lockdep.c:3900
 __raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
 _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:168
 spin_lock_bh include/linux/spinlock.h:334 [inline]
 tipc_link_reset+0x125/0xdf0 net/tipc/link.c:850
 tipc_link_bc_create+0xb5/0x1f0 net/tipc/link.c:526
 tipc_bcast_init+0x59b/0xab0 net/tipc/bcast.c:521
 tipc_init_net+0x472/0x610 net/tipc/core.c:82
 ops_init+0xf7/0x520 net/core/net_namespace.c:129
 __register_pernet_operations net/core/net_namespace.c:940 [inline]
 register_pernet_operations+0x453/0xac0 net/core/net_namespace.c:1011
 register_pernet_subsys+0x28/0x40 net/core/net_namespace.c:1052
 tipc_init+0x83/0x104 net/tipc/core.c:140
 do_one_initcall+0x109/0x70a init/main.c:885
 do_initcall_level init/main.c:953 [inline]
 do_initcalls init/main.c:961 [inline]
 do_basic_setup init/main.c:979 [inline]
 kernel_init_freeable+0x4bd/0x57f init/main.c:1144
 kernel_init+0x13/0x180 init/main.c:1063
 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:413

LOCKDEP complains because tipc_link_reset() acquires l->inputq->lock
while already holding l->wakeupq.lock. Both locks are the spinlock
embedded in struct sk_buff_head and therefore belong to the same lock
class, so to LOCKDEP the nesting looks like recursive locking even
though two distinct lock instances are involved. In fact there is no
need to hold both locks at once just to move skb buffers from the
wakeupq queue to the inputq queue: we can first splice the buffers from
l->wakeupq onto a temporary on-stack list under l->wakeupq.lock alone,
and then splice that list onto l->inputq under l->inputq->lock alone.
This is equally safe and never nests the two locks.
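For illustration, here is the locking pattern in isolation, with
LOCKDEP's view spelled out in the comments. This is a sketch only, not
part of the patch: splice_nested() and splice_unnested() are
hypothetical helper names, while struct sk_buff_head, spin_lock_bh()
and skb_queue_splice_init() are the same kernel APIs the patch uses.

/* Sketch only -- not part of the patch.  splice_nested() and
 * splice_unnested() are made-up helpers; everything they call is the
 * real kernel API used by tipc_link_reset().
 */
#include <linux/skbuff.h>
#include <linux/spinlock.h>

/* Old pattern: both queues are struct sk_buff_head, so src->lock and
 * dst->lock share one lockdep class.  Holding them nested makes LOCKDEP
 * report "possible recursive locking", because without a nesting
 * annotation it cannot tell the two instances apart.
 */
static void splice_nested(struct sk_buff_head *src, struct sk_buff_head *dst)
{
	spin_lock_bh(&src->lock);
	spin_lock_bh(&dst->lock);	/* same class: LOCKDEP warns here */
	skb_queue_splice_init(src, dst);
	spin_unlock_bh(&dst->lock);
	spin_unlock_bh(&src->lock);
}

/* New pattern: drain src into a local list under src->lock only, then
 * append that list to dst under dst->lock only.  The two same-class
 * locks are never held at the same time, so nothing is reported.
 */
static void splice_unnested(struct sk_buff_head *src, struct sk_buff_head *dst)
{
	struct sk_buff_head tmp;

	__skb_queue_head_init(&tmp);	/* on-stack list; its lock stays unused */

	spin_lock_bh(&src->lock);
	skb_queue_splice_init(src, &tmp);
	spin_unlock_bh(&src->lock);

	spin_lock_bh(&dst->lock);
	skb_queue_splice_init(&tmp, dst);
	spin_unlock_bh(&dst->lock);
}

The trade-off is that the transfer is no longer a single atomic step:
for a moment the buffers sit only on the local list. For a link that is
being reset this is harmless, which is what "equally safe" refers to
above.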
Fixes: 3f32d0be6c16 ("tipc: lock wakeup & inputq at tipc_link_reset()")
Reported-by: Dmitry Vyukov
Signed-off-by: Ying Xue
---
 net/tipc/link.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/net/tipc/link.c b/net/tipc/link.c
index fb886b5..1d21ae4 100644
--- a/net/tipc/link.c
+++ b/net/tipc/link.c
@@ -843,14 +843,21 @@ static void link_prepare_wakeup(struct tipc_link *l)
 
 void tipc_link_reset(struct tipc_link *l)
 {
+	struct sk_buff_head list;
+
+	__skb_queue_head_init(&list);
+
 	l->in_session = false;
 	l->session++;
 	l->mtu = l->advertised_mtu;
+
 	spin_lock_bh(&l->wakeupq.lock);
+	skb_queue_splice_init(&l->wakeupq, &list);
+	spin_unlock_bh(&l->wakeupq.lock);
+
 	spin_lock_bh(&l->inputq->lock);
-	skb_queue_splice_init(&l->wakeupq, l->inputq);
+	skb_queue_splice_init(&list, l->inputq);
 	spin_unlock_bh(&l->inputq->lock);
-	spin_unlock_bh(&l->wakeupq.lock);
 
 	__skb_queue_purge(&l->transmq);
 	__skb_queue_purge(&l->deferdq);
-- 
2.7.4