From: Xiao Ni
Date: Wed, 14 Jun 2023 11:47:31 +0800
Subject: Re: [dm-devel] [PATCH -next v2 4/6] md: refactor idle/frozen_sync_thread() to fix deadlock
To: Yu Kuai
Cc: guoqing.jiang@linux.dev, agk@redhat.com, snitzer@kernel.org,
    dm-devel@redhat.com, song@kernel.org, linux-raid@vger.kernel.org,
    yangerkun@huawei.com, linux-kernel@vger.kernel.org, yi.zhang@huawei.com,
    "yukuai (C)"
In-Reply-To: <74b404c4-4fdb-6eb3-93f1-0e640793bba6@huaweicloud.com>
References: <20230529132037.2124527-1-yukuai1@huaweicloud.com>
    <20230529132037.2124527-5-yukuai1@huaweicloud.com>
    <05aa3b09-7bb9-a65a-6231-4707b4b078a0@redhat.com>
    <74b404c4-4fdb-6eb3-93f1-0e640793bba6@huaweicloud.com>

On Wed, Jun 14, 2023 at 9:48 AM Yu Kuai wrote:
>
> Hi,
>
> On 2023/06/13 22:50, Xiao Ni wrote:
> >
> > On 2023/5/29 9:20 PM, Yu Kuai wrote:
> >> From: Yu Kuai
> >>
> >> Our test found the following deadlock in raid10:
> >>
> >> 1) Issue a normal write, and such write failed:
> >>
> >> raid10_end_write_request
> >>  set_bit(R10BIO_WriteError, &r10_bio->state)
> >>  one_write_done
> >>   reschedule_retry
> >>
> >> // later from md thread
> >> raid10d
> >>  handle_write_completed
> >>   list_add(&r10_bio->retry_list, &conf->bio_end_io_list)
> >>
> >> // later from md thread
> >> raid10d
> >>  if (!test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags))
> >>   list_move(conf->bio_end_io_list.prev, &tmp)
> >>   r10_bio = list_first_entry(&tmp, struct r10bio, retry_list)
> >>   raid_end_bio_io(r10_bio)
> >>
> >> Dependency chain 1: normal io is waiting for updating superblock
> >
> > Hi Kuai
> >
> > It looks like the above situation is more complex than it needs to
> > be. The deadlock only needs a normal write for which md_write_start
> > has to wait until the metadata is written to the member disks,
> > right? If so, there is no need to introduce a raid10 write failure
> > here. I guess the write failure comes from your test case. It would
> > be nice if you could put your test steps in the patch, but for the
> > analysis of the deadlock here, it's better to keep it simple.
>
> The test script can be found here; it's pretty easy to trigger:
>
> https://patchwork.kernel.org/project/linux-raid/patch/20230529132826.2125392-4-yukuai1@huaweicloud.com/

Thanks for this.

> While reviewing the related code, I found that io can only be added to
> the bio_end_io_list from handle_write_completed() if the io failed, so
> I think an io failure is needed to trigger the deadlock from the
> daemon thread.
>
> I think the key point is how MD_SB_CHANGE_PENDING is set:
>
> 1) raid10_error() and rdev_set_badblocks(), triggered by io failure;
> 2) raid10_write_request(), related to reshape;
> 3) md_write_start() and md_allow_write(), when mddev->in_sync is set;
>    however, I don't think this is a common case.
>
> 1) is used here because it's quite easy to trigger, and it's what we
> met in a real test. 3) is possible, but I'd say let's keep 1); I don't
> think it's necessary to reproduce this deadlock through another path
> again.

It makes sense.
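
To keep the rest of the discussion concrete, here is how I picture
dependency chain 1 as a small userspace model (this is my own sketch
with stdatomic, not the kernel code; raid10d_once() and
md_update_sb_model() are made-up names standing in for the retry-list
handling at the top of raid10d() and for md_update_sb()):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* set by the rdev_set_badblocks()/md_error() path after a write failure */
static atomic_bool sb_change_pending = true;
/* one failed write parked on bio_end_io_list by handle_write_completed() */
static int parked_bios = 1;

/* stands in for the retry-list handling at the top of raid10d() */
static void raid10d_once(void)
{
    if (!atomic_load(&sb_change_pending)) {
        /* list_move() off bio_end_io_list, then raid_end_bio_io() */
        parked_bios = 0;
        printf("io finished\n");
    } else {
        /* "normal io is waiting for updating superblock" */
        printf("io still parked, waiting for superblock update\n");
    }
}

/* stands in for md_update_sb(), which requires 'reconfig_mutex' */
static void md_update_sb_model(void)
{
    atomic_store(&sb_change_pending, false);
}

int main(void)
{
    raid10d_once();       /* parked: the flag is still set */
    md_update_sb_model(); /* in the deadlock, this step never runs */
    raid10d_once();       /* only now can the io complete */
    return 0;
}

In the deadlock, md_update_sb() never runs because 'reconfig_mutex' is
held, so the failed write stays parked forever.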

Let's go back to the first path mentioned in the patch.

> 1) Issue a normal write, and such write failed:
>
> raid10_end_write_request
>  set_bit(R10BIO_WriteError, &r10_bio->state)
>  one_write_done
>   reschedule_retry

This is good.

> // later from md thread
> raid10d
>  handle_write_completed
>   list_add(&r10_bio->retry_list, &conf->bio_end_io_list)

I have a question here. handle_write_completed should run
narrow_write_error for such a failed write. In the test case, does
narrow_write_error run successfully? Or does it fail and then call
rdev_set_badblocks and md_error, so that MD_SB_CHANGE_PENDING gets set?

> // later from md thread
> raid10d
>  if (!test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags))
>   list_move(conf->bio_end_io_list.prev, &tmp)
>   r10_bio = list_first_entry(&tmp, struct r10bio, retry_list)
>   raid_end_bio_io(r10_bio)
>
> Dependency chain 1: normal io is waiting for updating superblock

This is a little hard to follow, because the trace doesn't show how
normal io waits for the superblock update. Based on your last email, I
guess you mean that rdev_set_badblocks sets MD_SB_CHANGE_PENDING, but
the flag can't be cleared, so the bios can't be moved off
bio_end_io_list and the io requests can't be finished.

Regards
Xiao
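
P.S. Regarding my quoted question below about sync_seq: here is my
current understanding as a small userspace model (again my own sketch,
not the kernel code; I'm assuming, from my reading of md.c, that
md_reap_sync_thread() also does wake_up(&resync_wait) after bumping the
counter). Please correct me if this is wrong:

#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int sync_seq;          /* bumped once per reaped sync thread */
static atomic_bool running = true;   /* models MD_RECOVERY_RUNNING */

/* models md_reap_sync_thread(): the sync thread is reaped, counter advances */
static void reap(void)
{
    atomic_fetch_add(&sync_seq, 1);
    atomic_store(&running, false);
}

/* models a new sync thread starting right after the reap */
static void start_new_sync(void)
{
    atomic_store(&running, true);
}

/* the condition idle_sync_thread() waits on, against its snapshot */
static bool idle_done(int snapshot)
{
    return atomic_load(&sync_seq) != snapshot || !atomic_load(&running);
}

int main(void)
{
    int snap = atomic_load(&sync_seq); /* snapshot before stop_sync_thread() */

    reap();           /* the old sync thread is reaped... */
    start_new_sync(); /* ...and a new one starts immediately */

    /* without the counter, 'running' being set again would trap the
       waiter; the snapshot comparison lets it exit as intended */
    assert(idle_done(snap));
    return 0;
}

So the snapshot comparison is what lets 'idle' return even when a new
sync thread has already been started, which I think is what the commit
message means by "won't keep waiting on a newly started sync thread".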

> Thanks,
> Kuai
>
> >>
> >> 2) Trigger a recovery:
> >>
> >> raid10_sync_request
> >>  raise_barrier
> >>
> >> Dependency chain 2: sync thread is waiting for normal io
> >>
> >> 3) echo idle/frozen to sync_action:
> >>
> >> action_store
> >>  mddev_lock
> >>  md_unregister_thread
> >>   kthread_stop
> >>
> >> Dependency chain 3: dropping 'reconfig_mutex' is waiting for the
> >> sync thread
> >>
> >> 4) md thread can't update superblock:
> >>
> >> raid10d
> >>  md_check_recovery
> >>   if (mddev_trylock(mddev))
> >>    md_update_sb
> >>
> >> Dependency chain 4: updating the superblock is waiting for
> >> 'reconfig_mutex'
> >>
> >> Hence a cyclic dependency exists; to fix the problem, we must break
> >> one of the dependencies. Dependencies 1 and 2 can't be broken
> >> because they are fundamental to the design. Dependency 4 might be
> >> breakable if it could be guaranteed that no io is inflight, but this
> >> would require a new mechanism and seems complex. Dependency 3 is a
> >> good choice, because idle/frozen only requires the sync thread to
> >> finish, which can already be done asynchronously, and
> >> 'reconfig_mutex' is not needed for that anymore.
> >>
> >> This patch switches 'idle' and 'frozen' to waiting for the sync
> >> thread to be done asynchronously, and it also adds a sequence
> >> counter to record how many times the sync thread has finished, so
> >> that 'idle' won't keep waiting on a newly started sync thread.
> >
> > In the patch, sync_seq is incremented in md_reap_sync_thread. In
> > idle_sync_thread, if sync_seq isn't equal to mddev->sync_seq, it
> > should mean that someone has already stopped the sync thread, right?
> > Why do you say 'newly started sync thread' here?
> >
> > Regards
> > Xiao
> >
> >>
> >> Note that raid456 has a similar deadlock ([1]), and it has been
> >> verified [2] that that deadlock can be fixed by this patch as well.
> >>
> >> [1]
> >> https://lore.kernel.org/linux-raid/5ed54ffc-ce82-bf66-4eff-390cb23bc1ac@molgen.mpg.de/T/#t
> >>
> >> [2]
> >> https://lore.kernel.org/linux-raid/e9067438-d713-f5f3-0d3d-9e6b0e9efa0e@huaweicloud.com/
> >>
> >> Signed-off-by: Yu Kuai
> >> ---
> >>  drivers/md/md.c | 23 +++++++++++++++++++----
> >>  drivers/md/md.h |  2 ++
> >>  2 files changed, 21 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/drivers/md/md.c b/drivers/md/md.c
> >> index 63a993b52cd7..7912de0e4d12 100644
> >> --- a/drivers/md/md.c
> >> +++ b/drivers/md/md.c
> >> @@ -652,6 +652,7 @@ void mddev_init(struct mddev *mddev)
> >>      timer_setup(&mddev->safemode_timer, md_safemode_timeout, 0);
> >>      atomic_set(&mddev->active, 1);
> >>      atomic_set(&mddev->openers, 0);
> >> +    atomic_set(&mddev->sync_seq, 0);
> >>      spin_lock_init(&mddev->lock);
> >>      atomic_set(&mddev->flush_pending, 0);
> >>      init_waitqueue_head(&mddev->sb_wait);
> >> @@ -4776,19 +4777,27 @@ static void stop_sync_thread(struct mddev *mddev)
> >>      if (work_pending(&mddev->del_work))
> >>          flush_workqueue(md_misc_wq);
> >> -    if (mddev->sync_thread) {
> >> -        set_bit(MD_RECOVERY_INTR, &mddev->recovery);
> >> -        md_reap_sync_thread(mddev);
> >> -    }
> >> +    set_bit(MD_RECOVERY_INTR, &mddev->recovery);
> >> +    /*
> >> +     * Thread might be blocked waiting for metadata update which will now
> >> +     * never happen
> >> +     */
> >> +    md_wakeup_thread_directly(mddev->sync_thread);
> >>      mddev_unlock(mddev);
> >>  }
> >>
> >>  static void idle_sync_thread(struct mddev *mddev)
> >>  {
> >> +    int sync_seq = atomic_read(&mddev->sync_seq);
> >> +
> >>      mutex_lock(&mddev->sync_mutex);
> >>      clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
> >>      stop_sync_thread(mddev);
> >> +
> >> +    wait_event(resync_wait, sync_seq != atomic_read(&mddev->sync_seq) ||
> >> +               !test_bit(MD_RECOVERY_RUNNING, &mddev->recovery));
> >> +
> >>      mutex_unlock(&mddev->sync_mutex);
> >>  }
> >>
> >> @@ -4797,6 +4806,10 @@ static void frozen_sync_thread(struct mddev *mddev)
> >>      mutex_init(&mddev->delete_mutex);
> >>      set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
> >>      stop_sync_thread(mddev);
> >> +
> >> +    wait_event(resync_wait, mddev->sync_thread == NULL &&
> >> +               !test_bit(MD_RECOVERY_RUNNING, &mddev->recovery));
> >> +
> >>      mutex_unlock(&mddev->sync_mutex);
> >>  }
> >>
> >> @@ -9472,6 +9485,8 @@ void md_reap_sync_thread(struct mddev *mddev)
> >>      /* resync has finished, collect result */
> >>      md_unregister_thread(&mddev->sync_thread);
> >> +    atomic_inc(&mddev->sync_seq);
> >> +
> >>      if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery) &&
> >>          !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery) &&
> >>          mddev->degraded != mddev->raid_disks) {
> >> diff --git a/drivers/md/md.h b/drivers/md/md.h
> >> index 2fa903de5bd0..7cab9c7c45b8 100644
> >> --- a/drivers/md/md.h
> >> +++ b/drivers/md/md.h
> >> @@ -539,6 +539,8 @@ struct mddev {
> >>      /* Used to synchronize idle and frozen for action_store() */
> >>      struct mutex sync_mutex;
> >> +    /* The sequence number for sync thread */
> >> +    atomic_t sync_seq;
> >>
> >>      bool has_superblocks:1;
> >>      bool fail_last_dev:1;
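
P.P.S. For completeness, since dependency chain 3 starts with "echo
idle/frozen to sync_action": action_store() runs when userspace writes
to the array's sync_action attribute. A minimal sketch of that trigger
(the md0 device name is just an example):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* same effect as: echo frozen > /sys/block/md0/md/sync_action */
    int fd = open("/sys/block/md0/md/sync_action", O_WRONLY);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (write(fd, "frozen", 6) < 0)
        perror("write");
    close(fd);
    return 0;
}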