From: Zhiqiang Liu
To: Coly Li, linux-kernel@vger.kernel.org
CC: "wubo (T)", Mingfangsen, Yanxiaodan, linfeilong, renxudong
Subject: [PATCH V2] bcache: fix potential deadlock problem in btree_gc_coalesce
Date: Sun, 26 Apr 2020 16:06:27 +0800
Message-ID: <8a6f5fe3-33f9-48e2-e347-05781c3295fd@huawei.com>

coccicheck reports:
  drivers/md//bcache/btree.c:1538:1-7: preceding lock on line 1417

btree_gc_coalesce() is designed to coalesce two adjacent nodes in
new_nodes[GC_MERGE_NODES] and finally release one node. Every node's
write_lock, new_nodes[i]->write_lock, is held before the adjacent nodes
are coalesced, and released only after coalescing succeeds. However, if
the coalescing process fails, for example because new_nodes[1] does not
have enough space to fit all of the remaining keys in new_nodes[0], or
because reallocating the keylist fails, we jump to the out_nocoalesce
tag directly without releasing new_nodes[i]->write_lock. A deadlock then
occurs when btree_node_free() is called to free new_nodes[i], because it
also tries to acquire new_nodes[i]->write_lock.

Here, we add a new tag 'out_unlock_nocoalesce' before the out_nocoalesce
tag to release new_nodes[i]->write_lock when the coalescing process
fails.

--
V1->V2: rewrite commit log (suggested by Coly Li) and rename the patch

Fixes: 2a285686c1 ("bcache: btree locking rework")
Signed-off-by: Zhiqiang Liu
---
 drivers/md/bcache/btree.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index fa872df4e770..cad8b0b97e33 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -1447,7 +1447,7 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
 		if (__set_blocks(n1, n1->keys + n2->keys,
 				 block_bytes(b->c)) >
 		    btree_blocks(new_nodes[i]))
-			goto out_nocoalesce;
+			goto out_unlock_nocoalesce;
 
 		keys = n2->keys;
 		/* Take the key of the node we're getting rid of */
@@ -1476,7 +1476,7 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
 
 		if (__bch_keylist_realloc(&keylist,
 					  bkey_u64s(&new_nodes[i]->key)))
-			goto out_nocoalesce;
+			goto out_unlock_nocoalesce;
 
 		bch_btree_node_write(new_nodes[i], &cl);
 		bch_keylist_add(&keylist, &new_nodes[i]->key);
@@ -1522,6 +1522,10 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
 	/* Invalidated our iterator */
 	return -EINTR;
 
+out_unlock_nocoalesce:
+	for (i = 0; i < nodes; i++)
+		mutex_unlock(&new_nodes[i]->write_lock);
+
 out_nocoalesce:
 	closure_sync(&cl);
 
-- 
2.19.1
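
[Editorial illustration, not part of the patch] For readers less familiar
with the locking pattern the commit message describes, the sketch below
reduces the bug and the fix to plain pthread mutexes. It is a minimal,
hypothetical example, not bcache code; the names node, node_free(),
coalesce() and MAX_NODES are invented for illustration only.

#include <pthread.h>
#include <stdio.h>

#define MAX_NODES 2

struct node {
	pthread_mutex_t write_lock;
};

/* Like btree_node_free(), freeing a node re-acquires its write_lock. */
static void node_free(struct node *n)
{
	pthread_mutex_lock(&n->write_lock);	/* deadlocks if the caller still holds it */
	/* ... tear down the node ... */
	pthread_mutex_unlock(&n->write_lock);
}

static int coalesce(struct node *nodes, int n, int fail)
{
	int i;

	/* All write_locks are taken before the nodes are merged. */
	for (i = 0; i < n; i++)
		pthread_mutex_lock(&nodes[i].write_lock);

	if (fail) {
		/*
		 * The buggy version jumped straight to the error path here,
		 * skipping the unlock loop, so node_free() below would
		 * self-deadlock on a still-held write_lock.
		 */
		goto out_unlock_nocoalesce;	/* the fix: drop the locks first */
	}

	/* ... merge the nodes ... */

out_unlock_nocoalesce:
	for (i = 0; i < n; i++)
		pthread_mutex_unlock(&nodes[i].write_lock);

	/* Error path: freeing is safe now that the locks are released. */
	for (i = 0; i < n; i++)
		node_free(&nodes[i]);
	return fail ? -1 : 0;
}

int main(void)
{
	struct node nodes[MAX_NODES] = {
		{ PTHREAD_MUTEX_INITIALIZER },
		{ PTHREAD_MUTEX_INITIALIZER },
	};

	coalesce(nodes, MAX_NODES, 1);
	puts("no deadlock: write_locks released before node_free()");
	return 0;
}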