From: Yu Kuai
To: mpatocka@redhat.com, heinzm@redhat.com, xni@redhat.com,
	blazej.kucman@linux.intel.com, agk@redhat.com, snitzer@kernel.org,
	dm-devel@lists.linux.dev, song@kernel.org, yukuai3@huawei.com,
	jbrassow@f14.redhat.com, neilb@suse.de, shli@fb.com, akpm@osdl.org
Cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org,
	yukuai1@huaweicloud.com, yi.zhang@huawei.com, yangerkun@huawei.com
Subject: [PATCH v5 00/14] dm-raid/md/raid: fix v6.7 regressions
Date: Thu, 1 Feb 2024 17:25:45 +0800
Message-Id: <20240201092559.910982-1-yukuai1@huaweicloud.com>
X-Mailer: git-send-email 2.39.2

From: Yu Kuai

Changes in v5:
 - remove the patch to wait for bio completion while removing the dm disk;
 - add patch 6;
 - reorder the patches: patches 1-8 are for md/raid, and patches 9-14 are
   related to dm-raid;

Changes in v4:
 - add patch 10 to fix a raid456 deadlock (for both md/raid and dm-raid);
 - add patch 13 to wait for inflight IO completion while removing the dm
   device;

Changes in v3:
 - fix a problem in patch 5;
 - add patch 12;

Changes in v2:
 - replace the revert changes for dm-raid with real fixes;
 - fix a dm-raid5 deadlock that has existed for a long time; it is only
   triggered now because another problem was fixed in raid5 (before v6.7,
   instead of the deadlock, the user would read wrong data), patches 9-11;

First regression, related to stopping the sync thread:

The lifetime of the sync_thread is designed as follows:

1) Decide to start the sync_thread, set MD_RECOVERY_NEEDED, and wake up the
   daemon thread;
2) The daemon thread detects that MD_RECOVERY_NEEDED is set, then sets
   MD_RECOVERY_RUNNING and registers the sync_thread;
3) md_do_sync() executes the actual work; when it is done or interrupted, it
   sets MD_RECOVERY_DONE and wakes up the daemon thread;
4) The daemon thread detects that MD_RECOVERY_DONE is set, then clears
   MD_RECOVERY_RUNNING and unregisters the sync_thread;

(A small illustrative sketch of this state machine follows the regression
descriptions below.)

In v6.7, md/raid was fixed to follow this design by commit f52f5c71f3d4
("md: fix stopping sync thread"); however, dm-raid was not considered at
that time, and the following tests hang:

shell/integrity-caching.sh
shell/lvconvert-raid-reshape.sh

This patch set fixes the broken tests with patches 1-4:
 - patch 1 fixes that step 4) is broken by a suspended array;
 - patch 2 fixes that step 4) is broken by a read-only array;
 - patch 3 fixes that step 3) is broken because md_do_sync() doesn't set
   MD_RECOVERY_DONE; note that this patch introduces a new problem that data
   will be corrupted, which is fixed in later patches.
 - patch 4 fixes that step 1) is broken because the sync_thread is
   registered and MD_RECOVERY_RUNNING is set directly; this is md/raid
   behaviour, not related to dm-raid;

With patches 1-4, the above tests no longer hang; however, they still fail
and complain that ext4 is corrupted.

Second regression, found by code review: an interrupted reshape concurrent
with IO can deadlock, patch 5.

Third regression: fix an 'active_io' leak, patch 6.

Fourth regression, related to the frozen sync thread:

Note that for raid456, if a reshape is interrupted, then calling
"pers->start_reshape" will corrupt data. dm-raid relies on md_do_sync() not
setting MD_RECOVERY_DONE so that a new sync_thread won't be registered, and
patch 3 breaks exactly this.

 - Patch 9 fixes this problem by interrupting the reshape and freezing the
   sync_thread in dm_suspend(), then unfreezing and continuing the reshape
   in dm_resume(). It is verified that the dm-raid tests no longer complain
   that ext4 is corrupted.
 - Patch 10 fixes the problem that raid_message() calls
   md_reap_sync_thread() directly, without holding 'reconfig_mutex'.

Last regression, related to dm-raid456 IO concurrent with reshape:

For raid456, if a reshape is still in progress, IO across the reshape
position waits for the reshape to make progress. However, for dm-raid, in
the following cases the reshape will never make progress, hence IO will
hang:

1) the array is read-only;
2) MD_RECOVERY_WAIT is set;
3) MD_RECOVERY_FROZEN is set;

After commit c467e97f079f ("md/raid6: use valid sector values to determine
if an I/O should wait on the reshape") fixed the problem that IO across the
reshape position doesn't wait for the reshape, the dm-raid test
shell/lvconvert-raid-reshape.sh started to hang at raid5_make_request().

For md/raid, the problem doesn't exist because:

1) if the array is read-only, it can be switched to read-write via
   ioctl/sysfs;
2) md/raid never sets MD_RECOVERY_WAIT;
3) if MD_RECOVERY_FROZEN is set, mddev_suspend() doesn't hold
   'reconfig_mutex' anymore, so the flag can be cleared and the reshape can
   continue via the sysfs api 'sync_action'.

However, I'm not sure yet how to avoid the problem in dm-raid.

 - Patches 11 and 12 fix this problem by detecting the above 3 cases in
   dm_suspend() and failing such IO directly. If users really hit the IO
   error, it means they were reading wrong data before c467e97f079f anyway,
   and it is safe to read/write the array once the reshape makes progress
   successfully.
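
To make that decision easier to follow, here is a minimal, self-contained C
sketch of it. This is not the actual kernel code: the struct, field and
function names below are hypothetical, and the check is simplified (it
ignores the reshape direction); it only illustrates "fail IO that crosses
the reshape position while the reshape is stuck in one of the three cases
above".

	#include <stdbool.h>
	#include <stdio.h>

	/* Hypothetical, simplified view of the state that matters here. */
	struct array_state {
		bool read_only;        /* case 1: array is read-only        */
		bool recovery_wait;    /* case 2: MD_RECOVERY_WAIT is set   */
		bool recovery_frozen;  /* case 3: MD_RECOVERY_FROZEN is set */
		long long reshape_pos; /* current reshape position          */
	};

	/* The reshape cannot make progress in any of the three cases. */
	static bool reshape_is_stuck(const struct array_state *s)
	{
		return s->read_only || s->recovery_wait || s->recovery_frozen;
	}

	/*
	 * IO that crosses the reshape position normally waits for the
	 * reshape to advance; if the reshape is stuck, failing the IO
	 * avoids an indefinite hang.
	 */
	static bool should_fail_io(const struct array_state *s,
				   long long io_sector)
	{
		return io_sector >= s->reshape_pos && reshape_is_stuck(s);
	}

	int main(void)
	{
		struct array_state s = { .read_only = true,
					 .reshape_pos = 1024 };

		printf("fail IO at sector 2048? %s\n",
		       should_fail_io(&s, 2048) ? "yes" : "no");
		return 0;
	}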
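
And here is the sketch of the sync_thread lifetime referenced at the top of
this letter: a minimal, self-contained C model of steps 1)-4) as a
flag-driven state machine. The flag names mirror the MD_RECOVERY_* bits
discussed above, but the types and functions are hypothetical and the
threads are collapsed into plain function calls; the point is only to make
explicit the ordering of the steps and which side sets or clears which flag.

	#include <stdbool.h>
	#include <stdio.h>

	/* Mirrors the MD_RECOVERY_* bits discussed above. */
	enum {
		RECOVERY_NEEDED  = 1 << 0,
		RECOVERY_RUNNING = 1 << 1,
		RECOVERY_DONE    = 1 << 2,
	};

	static unsigned int recovery;       /* shared flag word            */
	static bool sync_thread_registered; /* stands in for sync_thread   */

	/* Step 1): decide to start a sync_thread, poke the daemon thread. */
	static void request_sync(void)
	{
		recovery |= RECOVERY_NEEDED;
		/* ... wake up daemon thread ... */
	}

	/* Step 3): the sync_thread body; done or interrupted, set DONE. */
	static void do_sync(void)
	{
		/* ... actual resync/reshape work ... */
		recovery |= RECOVERY_DONE;
		/* ... wake up daemon thread ... */
	}

	/* Steps 2) and 4): what the daemon thread does when woken up. */
	static void daemon_thread_iteration(void)
	{
		if ((recovery & RECOVERY_NEEDED) &&
		    !(recovery & RECOVERY_RUNNING)) {
			recovery &= ~RECOVERY_NEEDED;
			recovery |= RECOVERY_RUNNING;
			sync_thread_registered = true; /* register it      */
			do_sync();                     /* own thread in md */
		}
		if (recovery & RECOVERY_DONE) {
			sync_thread_registered = false; /* unregister it   */
			recovery &= ~(RECOVERY_RUNNING | RECOVERY_DONE);
		}
	}

	int main(void)
	{
		request_sync();            /* step 1           */
		daemon_thread_iteration(); /* steps 2 and 3    */
		daemon_thread_iteration(); /* step 4           */
		printf("running=%d done=%d registered=%d\n",
		       !!(recovery & RECOVERY_RUNNING),
		       !!(recovery & RECOVERY_DONE),
		       sync_thread_registered);
		return 0;
	}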
There are also some other minor changes: patch 8 and patch 12.

Test result (for v4; I don't think it's necessary to test this patchset
again for v5: except for a new fix, patch 6, which is tested separately,
there are no other functional changes):

I applied this patchset on top of v6.8-rc1 and ran the lvm2 test suite with
the following cmd for 24 rounds (about 2 days):

for t in `ls test/shell`; do
	if cat test/shell/$t | grep raid &> /dev/null; then
		make check T=shell/$t
	fi
done

failed count    failed test
           1    ### failed: [ndev-vanilla] shell/dmsecuretest.sh
           1    ### failed: [ndev-vanilla] shell/dmsetup-integrity-keys.sh
           1    ### failed: [ndev-vanilla] shell/dmsetup-keyring.sh
           5    ### failed: [ndev-vanilla] shell/duplicate-pvs-md0.sh
           1    ### failed: [ndev-vanilla] shell/duplicate-vgid.sh
           2    ### failed: [ndev-vanilla] shell/duplicate-vgnames.sh
           1    ### failed: [ndev-vanilla] shell/fsadm-crypt.sh
           1    ### failed: [ndev-vanilla] shell/integrity.sh
           6    ### failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
           2    ### failed: [ndev-vanilla] shell/lvchange-rebuild-raid.sh
           5    ### failed: [ndev-vanilla] shell/lvconvert-raid-reshape-stripes-load-reload.sh
           4    ### failed: [ndev-vanilla] shell/lvconvert-raid-restripe-linear.sh
           1    ### failed: [ndev-vanilla] shell/lvconvert-raid1-split-trackchanges.sh
          20    ### failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
          20    ### failed: [ndev-vanilla] shell/lvcreate-large-raid.sh
          24    ### failed: [ndev-vanilla] shell/lvextend-raid.sh

And I randomly picked some of these tests and verified by hand that they
fail in v6.6 as well (not all of them):

shell/lvextend-raid.sh
shell/lvcreate-large-raid.sh
shell/lvconvert-repair-raid.sh
shell/lvchange-rebuild-raid.sh
shell/lvchange-raid1-writemostly.sh

Xiao Ni also tested the last version on a real machine, see [1].

[1] https://lore.kernel.org/all/CALTww29QO5kzmN6Vd+jT=-8W5F52tJjHKSgrfUc1Z1ZAeRKHHA@mail.gmail.com/

Yu Kuai (14):
  md: don't ignore suspended array in md_check_recovery()
  md: don't ignore read-only array in md_check_recovery()
  md: make sure md_do_sync() will set MD_RECOVERY_DONE
  md: don't register sync_thread for reshape directly
  md: don't suspend the array for interrupted reshape
  md: fix missing release of 'active_io' for flush
  md: export helpers to stop sync_thread
  md: export helper md_is_rdwr()
  dm-raid: really frozen sync_thread during suspend
  md/dm-raid: don't call md_reap_sync_thread() directly
  dm-raid: add a new helper prepare_suspend() in md_personality
  md/raid456: fix a deadlock for dm-raid456 while io concurrent with reshape
  dm-raid: fix lockdep waring in "pers->hot_add_disk"
  dm-raid: remove mddev_suspend/resume()

 drivers/md/dm-raid.c |  78 +++++++++++++++++++--------
 drivers/md/md.c      | 126 +++++++++++++++++++++++++++++--------------
 drivers/md/md.h      |  16 ++++++
 drivers/md/raid10.c  |  16 +-----
 drivers/md/raid5.c   |  61 +++++++++++----------
 5 files changed, 192 insertions(+), 105 deletions(-)

-- 
2.39.2