From: Chengfeng Ye <dg573847474@gmail.com>
To: mark@fasheh.com, jlbec@evilplan.org, joseph.qi@linux.alibaba.com,
	akpm@linux-foundation.org
Cc: ocfs2-devel@lists.linux.dev, linux-kernel@vger.kernel.org,
	Chengfeng Ye <dg573847474@gmail.com>
Subject: [PATCH v2] ocfs2: cluster: fix potential deadlock on &qs->qs_lock
Date: Wed, 2 Aug 2023 12:38:24 +0000
Message-Id: <20230802123824.15301-1-dg573847474@gmail.com>
X-Mailer: git-send-email 2.17.1
X-Mailing-List: linux-kernel@vger.kernel.org

&qs->qs_lock is acquired by the timer o2net_idle_timer() along the
following call chain. Timer callbacks run in bottom-half (softirq)
context, so any acquisition of the lock from process context must
disable bottom halves; otherwise a deadlock can occur if the timer
fires and preempts execution on the same CPU while the lock is held
in process context.

-> o2net_idle_timer()
 -> o2quo_conn_err()
  -> spin_lock(&qs->qs_lock)

Several acquisitions of &qs->qs_lock in process context disable
neither irqs nor bottom halves. This patch fixes these potential
deadlock scenarios by using spin_lock_bh() on &qs->qs_lock.

This flaw was found by an experimental static analysis tool I am
developing for irq-related deadlocks.

An x86_64 allmodconfig build with gcc shows no new warning.

Signed-off-by: Chengfeng Ye <dg573847474@gmail.com>

Changes in v2
- Consistently use spin_lock_bh() on all potential deadlock sites
  of &qs->qs_lock
---
 fs/ocfs2/cluster/quorum.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)
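For illustration only (not part of the patch), here is a minimal,
self-contained sketch of the generic pattern being fixed. All names
(demo_lock, demo_timer, demo_timer_fn, demo_process_path, demo_init)
are hypothetical; the real players are o2net_idle_timer() and the
o2quo_* functions changed below.

#include <linux/spinlock.h>
#include <linux/timer.h>
#include <linux/jiffies.h>

static DEFINE_SPINLOCK(demo_lock);
static struct timer_list demo_timer;

/* Timer callback: runs in softirq (bottom-half) context. */
static void demo_timer_fn(struct timer_list *t)
{
	spin_lock(&demo_lock);	/* plain spin_lock() is fine in BH context */
	/* ... touch state shared with process context ... */
	spin_unlock(&demo_lock);
}

/* Process-context path racing with the timer. */
static void demo_process_path(void)
{
	/*
	 * With plain spin_lock() here, the timer could fire on this CPU
	 * while the lock is held; the softirq would then spin on
	 * demo_lock forever and the CPU deadlocks. spin_lock_bh()
	 * disables bottom halves locally, deferring the timer callback
	 * until the lock is released.
	 */
	spin_lock_bh(&demo_lock);
	/* ... touch state shared with the timer ... */
	spin_unlock_bh(&demo_lock);
}

static void demo_init(void)
{
	timer_setup(&demo_timer, demo_timer_fn, 0);
	mod_timer(&demo_timer, jiffies + HZ);
}

Note that spin_lock_bh() suffices here, and is cheaper than
spin_lock_irqsave(): the only asynchronous acquirer of the lock is a
timer, which runs in softirq context, never in hard-irq context.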
"" : "un"); - spin_unlock(&qs->qs_lock); + spin_unlock_bh(&qs->qs_lock); } @@ -196,7 +196,7 @@ void o2quo_hb_up(u8 node) { struct o2quo_state *qs = &o2quo_state; - spin_lock(&qs->qs_lock); + spin_lock_bh(&qs->qs_lock); qs->qs_heartbeating++; mlog_bug_on_msg(qs->qs_heartbeating == O2NM_MAX_NODES, @@ -211,7 +211,7 @@ void o2quo_hb_up(u8 node) else o2quo_clear_hold(qs, node); - spin_unlock(&qs->qs_lock); + spin_unlock_bh(&qs->qs_lock); } /* hb going down releases any holds we might have had due to this node from @@ -220,7 +220,7 @@ void o2quo_hb_down(u8 node) { struct o2quo_state *qs = &o2quo_state; - spin_lock(&qs->qs_lock); + spin_lock_bh(&qs->qs_lock); qs->qs_heartbeating--; mlog_bug_on_msg(qs->qs_heartbeating < 0, @@ -233,7 +233,7 @@ void o2quo_hb_down(u8 node) o2quo_clear_hold(qs, node); - spin_unlock(&qs->qs_lock); + spin_unlock_bh(&qs->qs_lock); } /* this tells us that we've decided that the node is still heartbeating @@ -245,14 +245,14 @@ void o2quo_hb_still_up(u8 node) { struct o2quo_state *qs = &o2quo_state; - spin_lock(&qs->qs_lock); + spin_lock_bh(&qs->qs_lock); mlog(0, "node %u\n", node); qs->qs_pending = 1; o2quo_clear_hold(qs, node); - spin_unlock(&qs->qs_lock); + spin_unlock_bh(&qs->qs_lock); } /* This is analogous to hb_up. as a node's connection comes up we delay the @@ -264,7 +264,7 @@ void o2quo_conn_up(u8 node) { struct o2quo_state *qs = &o2quo_state; - spin_lock(&qs->qs_lock); + spin_lock_bh(&qs->qs_lock); qs->qs_connected++; mlog_bug_on_msg(qs->qs_connected == O2NM_MAX_NODES, @@ -279,7 +279,7 @@ void o2quo_conn_up(u8 node) else o2quo_clear_hold(qs, node); - spin_unlock(&qs->qs_lock); + spin_unlock_bh(&qs->qs_lock); } /* we've decided that we won't ever be connecting to the node again. if it's @@ -290,7 +290,7 @@ void o2quo_conn_err(u8 node) { struct o2quo_state *qs = &o2quo_state; - spin_lock(&qs->qs_lock); + spin_lock_bh(&qs->qs_lock); if (test_bit(node, qs->qs_conn_bm)) { qs->qs_connected--; @@ -307,7 +307,7 @@ void o2quo_conn_err(u8 node) mlog(0, "node %u, %d total\n", node, qs->qs_connected); - spin_unlock(&qs->qs_lock); + spin_unlock_bh(&qs->qs_lock); } void o2quo_init(void) -- 2.17.1