From: Yu Kuai <yukuai3@huawei.com>
Subject: [PATCH v2 02/11] md: don't ignore read-only array in md_check_recovery()
Date: Wed, 24 Jan 2024 17:14:12 +0800
Message-ID: <20240124091421.1261579-3-yukuai3@huawei.com>
In-Reply-To: <20240124091421.1261579-1-yukuai3@huawei.com>
References: <20240124091421.1261579-1-yukuai3@huawei.com>

Usually, if the array is not read-write, md_check_recovery() won't register a
new sync_thread in the first place. And if the array is read-write and a
sync_thread is registered, md_set_readonly() will unregister the sync_thread
before setting the array read-only. md/raid follows this behavior, hence there
is no problem.

After commit f52f5c71f3d4 ("md: fix stopping sync thread"), the following hang
can be triggered by the test shell/integrity-caching.sh:

1) The array is read-only, and dm-raid updates the super block:

 rs_update_sbs
  ro = mddev->ro
  mddev->ro = 0
   -> set array read-write
  md_update_sb

2) A new sync thread is registered concurrently.

3) dm-raid sets the array back to read-only:

 rs_update_sbs
  mddev->ro = ro

4) The array is stopped:

 raid_dtr
  md_stop
   stop_sync_thread
    set_bit(MD_RECOVERY_INTR, &mddev->recovery);
    md_wakeup_thread_directly(mddev->sync_thread);
    wait_event(..., !test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))

5) The sync thread finishes:

 md_do_sync
  set_bit(MD_RECOVERY_DONE, &mddev->recovery);
  md_wakeup_thread(mddev->thread);

6) The daemon thread can't unregister the sync thread:

 md_check_recovery
  if (!md_is_rdwr(mddev) &&
      !test_bit(MD_RECOVERY_NEEDED, &mddev->recovery))
   return;
  -> MD_RECOVERY_RUNNING can't be cleared, hence step 4) hangs;

The root cause is that dm-raid manipulates 'mddev->ro' by itself; dm-raid
really should stop the sync thread before setting the array read-only.
Unfortunately, I need to read more code before I can refactor the handling of
'mddev->ro' in dm-raid, so for now fix the problem the easy way to prevent a
dm-raid regression.

Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Closes: https://lore.kernel.org/all/9801e40-8ac7-e225-6a71-309dcf9dc9aa@redhat.com/
Fixes: ecbfb9f118bc ("dm raid: add raid level takeover support")
Fixes: f52f5c71f3d4 ("md: fix stopping sync thread")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/md.c | 31 ++++++++++++++++++-------------
 1 file changed, 18 insertions(+), 13 deletions(-)
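For readers unfamiliar with dm-raid, the superblock-update behaviour described
in steps 1) and 3) boils down to the pattern below. This is only a sketch
paraphrasing the call chains above, not the actual drivers/md/dm-raid.c source;
the raid_set/rs->md names are approximations of the dm-raid types:

	static void rs_update_sbs(struct raid_set *rs)
	{
		struct mddev *mddev = &rs->md;
		int ro = mddev->ro;

		mddev->ro = 0;          /* 1) temporarily make the array read-write */
		md_update_sb(mddev, 1); /* write out the superblocks; md may now
					 * register a sync thread concurrently (step 2) */
		mddev->ro = ro;         /* 3) back to read-only, without stopping a
					 * sync thread started in the meantime */
	}

Because nothing in this path stops a sync thread that md started while
mddev->ro was 0, the daemon thread can later find a read-only array with
MD_RECOVERY_RUNNING still set, which is exactly the case the hunks below handle.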
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 07b80278eaa5..6906d023f1d6 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -9445,6 +9445,20 @@ static void md_start_sync(struct work_struct *ws)
 	sysfs_notify_dirent_safe(mddev->sysfs_action);
 }
 
+static void unregister_sync_thread(struct mddev *mddev)
+{
+	if (!test_bit(MD_RECOVERY_DONE, &mddev->recovery)) {
+		/* resync/recovery still happening */
+		clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+		return;
+	}
+
+	if (WARN_ON_ONCE(!mddev->sync_thread))
+		return;
+
+	md_reap_sync_thread(mddev);
+}
+
 /*
  * This routine is regularly called by all per-raid-array threads to
  * deal with generic issues like resync and super-block update.
@@ -9482,7 +9496,8 @@ void md_check_recovery(struct mddev *mddev)
 	}
 
 	if (!md_is_rdwr(mddev) &&
-	    !test_bit(MD_RECOVERY_NEEDED, &mddev->recovery))
+	    !test_bit(MD_RECOVERY_NEEDED, &mddev->recovery) &&
+	    !test_bit(MD_RECOVERY_DONE, &mddev->recovery))
 		return;
 	if ( ! (
 		(mddev->sb_flags & ~ (1<<MD_SB_CHANGE_PENDING)) ||
@@ ... @@ void md_check_recovery(struct mddev *mddev)
 			if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery)) {
-				/* sync_work already queued. */
-				clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+				unregister_sync_thread(mddev);
 				goto unlock;
 			}
 
@@ -9568,16 +9582,7 @@ void md_check_recovery(struct mddev *mddev)
 	 * still set.
 	 */
 	if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery)) {
-		if (!test_bit(MD_RECOVERY_DONE, &mddev->recovery)) {
-			/* resync/recovery still happening */
-			clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
-			goto unlock;
-		}
-
-		if (WARN_ON_ONCE(!mddev->sync_thread))
-			goto unlock;
-
-		md_reap_sync_thread(mddev);
+		unregister_sync_thread(mddev);
 		goto unlock;
 	}
-- 
2.39.2
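To see the effect of the new MD_RECOVERY_DONE check in isolation, here is a
small self-contained illustration. It is deliberately not kernel code: the
fake_mddev struct and the daemon_continues_*() helpers are made-up stand-ins
that model only the early-return condition changed by this patch:

	/* demo.c - build with: cc -Wall -o demo demo.c */
	#include <stdbool.h>
	#include <stdio.h>

	struct fake_mddev {
		bool read_write;       /* stands in for md_is_rdwr() */
		bool recovery_needed;  /* MD_RECOVERY_NEEDED */
		bool recovery_done;    /* MD_RECOVERY_DONE */
	};

	/* Old check: a read-only array without MD_RECOVERY_NEEDED returns early. */
	static bool daemon_continues_old(const struct fake_mddev *m)
	{
		return m->read_write || m->recovery_needed;
	}

	/* New check: a pending MD_RECOVERY_DONE also lets the daemon continue, so
	 * it can reap the finished sync thread and clear MD_RECOVERY_RUNNING. */
	static bool daemon_continues_new(const struct fake_mddev *m)
	{
		return m->read_write || m->recovery_needed || m->recovery_done;
	}

	int main(void)
	{
		/* State after steps 3)-5): the array is read-only again and the
		 * sync thread has finished, so only MD_RECOVERY_DONE is set. */
		struct fake_mddev m = {
			.read_write = false,
			.recovery_needed = false,
			.recovery_done = true,
		};

		printf("old check: %s\n", daemon_continues_old(&m) ?
		       "daemon continues, sync thread gets reaped" :
		       "daemon returns early, MD_RECOVERY_RUNNING never clears (hang)");
		printf("new check: %s\n", daemon_continues_new(&m) ?
		       "daemon continues, sync thread gets reaped" :
		       "daemon returns early, MD_RECOVERY_RUNNING never clears (hang)");
		return 0;
	}

With the old condition the demo reports the early return (the hang in step 4);
with the new condition the daemon proceeds and ends up in
unregister_sync_thread(), which calls md_reap_sync_thread() once
MD_RECOVERY_DONE is set.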