Subject: Re: [PATCH v2 2/4] md/raid10: improve code of mrdev in raid10_sync_request
To: linan666@huaweicloud.com, song@kernel.org, bingjingc@synology.com,
	allenpeng@synology.com, alexwu@synology.com, shli@fb.com, neilb@suse.de
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
	linan122@huawei.com, yi.zhang@huawei.com, houtao1@huawei.com,
	yangerkun@huawei.com, "yukuai (C)"
References: <20230526074551.669792-1-linan666@huaweicloud.com>
 <20230526074551.669792-3-linan666@huaweicloud.com>
From: Yu Kuai
Date: Sat, 27 May 2023 09:41:12 +0800
In-Reply-To: <20230526074551.669792-3-linan666@huaweicloud.com>

On 2023/05/26 15:45, linan666@huaweicloud.com wrote:
> From: Li Nan
>
> 'need_recover' and 'mrdev' are equivalent in raid10_sync_request(), and
> inc mrdev->nr_pending is unreasonable if don't need recovery. Replace
> 'need_recover' with 'mrdev', and only inc nr_pending when needed.

LGTM, feel free to add:

Reviewed-by: Yu Kuai

>
> Suggested-by: Yu Kuai
> Signed-off-by: Li Nan
> ---
>  drivers/md/raid10.c | 22 ++++++++++++----------
>  1 file changed, 12 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> index e21502c03b45..9de9eabff209 100644
> --- a/drivers/md/raid10.c
> +++ b/drivers/md/raid10.c
> @@ -3437,7 +3437,6 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
>  			sector_t sect;
>  			int must_sync;
>  			int any_working;
> -			int need_recover = 0;
>  			struct raid10_info *mirror = &conf->mirrors[i];
>  			struct md_rdev *mrdev, *mreplace;
>
> @@ -3446,14 +3445,14 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
>  			mreplace = rcu_dereference(mirror->replacement);
>
>  			if (mrdev != NULL &&
> -			    !test_bit(Faulty, &mrdev->flags) &&
> -			    !test_bit(In_sync, &mrdev->flags))
> -				need_recover = 1;
> +			    (test_bit(Faulty, &mrdev->flags) ||
> +			     test_bit(In_sync, &mrdev->flags)))
> +				mrdev = NULL;
>  			if (mreplace != NULL &&
>  			    test_bit(Faulty, &mreplace->flags))
>  				mreplace = NULL;
>
> -			if (!need_recover && !mreplace) {
> +			if (!mrdev && !mreplace) {
>  				rcu_read_unlock();
>  				continue;
>  			}
> @@ -3487,7 +3486,8 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
>  				rcu_read_unlock();
>  				continue;
>  			}
> -			atomic_inc(&mrdev->nr_pending);
> +			if (mrdev)
> +				atomic_inc(&mrdev->nr_pending);
>  			if (mreplace)
>  				atomic_inc(&mreplace->nr_pending);
>  			rcu_read_unlock();
> @@ -3574,7 +3574,7 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
>  				r10_bio->devs[1].devnum = i;
>  				r10_bio->devs[1].addr = to_addr;
>
> -				if (need_recover) {
> +				if (mrdev) {
>  					bio = r10_bio->devs[1].bio;
>  					bio->bi_next = biolist;
>  					biolist = bio;
> @@ -3619,7 +3619,7 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
>  					for (k = 0; k < conf->copies; k++)
>  						if (r10_bio->devs[k].devnum == i)
>  							break;
> -					if (!test_bit(In_sync,
> +					if (mrdev && !test_bit(In_sync,
>  						      &mrdev->flags)
>  					    && !rdev_set_badblocks(
>  						    mrdev,
> @@ -3645,12 +3645,14 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
>  				if (rb2)
>  					atomic_dec(&rb2->remaining);
>  				r10_bio = rb2;
> -				rdev_dec_pending(mrdev, mddev);
> +				if (mrdev)
> +					rdev_dec_pending(mrdev, mddev);
>  				if (mreplace)
>  					rdev_dec_pending(mreplace, mddev);
>  				break;
>  			}
> -			rdev_dec_pending(mrdev, mddev);
> +			if (mrdev)
> +				rdev_dec_pending(mrdev, mddev);
>  			if (mreplace)
>  				rdev_dec_pending(mreplace, mddev);
>  			if (r10_bio->devs[0].bio->bi_opf & MD_FAILFAST) {
>
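
For readers less familiar with raid10_sync_request(), the pattern the patch
applies can be shown in a small standalone sketch: instead of keeping a
boolean that merely restates "this device pointer is usable", the pointer
itself is cleared to NULL and every later step tests it, so the reference
count is only touched when there is real work to do. The names below
(struct dev, sync_mirror, nr_pending as a plain int) are simplified,
hypothetical stand-ins, not the md/raid10 code:

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins -- not the md/raid10 structures. */
struct dev {
	bool faulty;
	bool in_sync;
	int nr_pending;
};

void sync_mirror(struct dev *mrdev, struct dev *mreplace)
{
	/*
	 * Old form: a separate 'need_recover' flag tracked whether mrdev
	 * was usable, while mrdev->nr_pending was bumped unconditionally.
	 * New form: NULL mrdev when it does not need recovery, then let
	 * every later use site test the pointer directly.
	 */
	if (mrdev && (mrdev->faulty || mrdev->in_sync))
		mrdev = NULL;
	if (mreplace && mreplace->faulty)
		mreplace = NULL;

	if (!mrdev && !mreplace)
		return;			/* nothing to do for this mirror */

	if (mrdev)
		mrdev->nr_pending++;	/* take a reference only when needed */
	if (mreplace)
		mreplace->nr_pending++;

	/* ... recovery work would go here ... */

	if (mrdev)
		mrdev->nr_pending--;
	if (mreplace)
		mreplace->nr_pending--;
}

Folding the flag into the pointer keeps the "usable" test and the
reference-count handling from drifting apart, which is essentially what the
commit message calls out.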