Date: Fri, 23 Jun 2023 12:03:40 +0200
From: Paul Menzel
Subject: Re: [PATCH 1/3] md/raid10: optimize fix_read_error
To: linan666@huaweicloud.com
Cc: song@kernel.org, linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org, linan122@huawei.com, yukuai3@huawei.com, yi.zhang@huawei.com, houtao1@huawei.com, yangerkun@huawei.com
References: <20230623173236.2513554-1-linan666@huaweicloud.com> <20230623173236.2513554-2-linan666@huaweicloud.com>
In-Reply-To: <20230623173236.2513554-2-linan666@huaweicloud.com>

Dear Li,


Thank you for your patch.

On 23.06.23 at 19:32, linan666@huaweicloud.com wrote:
> From: Li Nan
>
> We dereference r10_bio->read_slot too many times in fix_read_error().
> Optimize it by using a variable to store read_slot.

I am always cautious reading about optimizations without any benchmarks or object code analysis. Although your explanation makes sense, did you check that performance didn’t decrease in some way? (Maybe the compiler even generates the same code.)


Kind regards,

Paul


> Signed-off-by: Li Nan
> ---
>  drivers/md/raid10.c | 20 ++++++++++----------
>  1 file changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> index 381c21f7fb06..94ae294c8a3c 100644
> --- a/drivers/md/raid10.c
> +++ b/drivers/md/raid10.c
> @@ -2725,10 +2725,10 @@ static int r10_sync_page_io(struct md_rdev *rdev, sector_t sector,
>  static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10bio *r10_bio)
>  {
>  	int sect = 0; /* Offset from r10_bio->sector */
> -	int sectors = r10_bio->sectors;
> +	int sectors = r10_bio->sectors, slot = r10_bio->read_slot;
>  	struct md_rdev *rdev;
>  	int max_read_errors = atomic_read(&mddev->max_corr_read_errors);
> -	int d = r10_bio->devs[r10_bio->read_slot].devnum;
> +	int d = r10_bio->devs[slot].devnum;
>
>  	/* still own a reference to this rdev, so it cannot
>  	 * have been cleared recently.
> @@ -2749,13 +2749,13 @@ static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10
>  		pr_notice("md/raid10:%s: %pg: Failing raid device\n",
>  			  mdname(mddev), rdev->bdev);
>  		md_error(mddev, rdev);
> -		r10_bio->devs[r10_bio->read_slot].bio = IO_BLOCKED;
> +		r10_bio->devs[slot].bio = IO_BLOCKED;
>  		return;
>  	}
>
>  	while(sectors) {
>  		int s = sectors;
> -		int sl = r10_bio->read_slot;
> +		int sl = slot;
>  		int success = 0;
>  		int start;
>
> @@ -2790,7 +2790,7 @@ static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10
>  			sl++;
>  			if (sl == conf->copies)
>  				sl = 0;
> -		} while (!success && sl != r10_bio->read_slot);
> +		} while (!success && sl != slot);
>  		rcu_read_unlock();
>
>  		if (!success) {
> @@ -2798,16 +2798,16 @@ static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10
>  			 * as bad on the first device to discourage future
>  			 * reads.
>  			 */
> -			int dn = r10_bio->devs[r10_bio->read_slot].devnum;
> +			int dn = r10_bio->devs[slot].devnum;
>  			rdev = conf->mirrors[dn].rdev;
>
>  			if (!rdev_set_badblocks(
>  				    rdev,
> -				    r10_bio->devs[r10_bio->read_slot].addr
> +				    r10_bio->devs[slot].addr
>  				    + sect,
>  				    s, 0)) {
>  				md_error(mddev, rdev);
> -				r10_bio->devs[r10_bio->read_slot].bio
> +				r10_bio->devs[slot].bio
>  					= IO_BLOCKED;
>  			}
>  			break;
> @@ -2816,7 +2816,7 @@ static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10
>  		start = sl;
>  		/* write it back and re-read */
>  		rcu_read_lock();
> -		while (sl != r10_bio->read_slot) {
> +		while (sl != slot) {
>  			if (sl==0)
>  				sl = conf->copies;
>  			sl--;
> @@ -2850,7 +2850,7 @@ static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10
>  			rcu_read_lock();
>  		}
>  		sl = start;
> -		while (sl != r10_bio->read_slot) {
> +		while (sl != slot) {
>  			if (sl==0)
>  				sl = conf->copies;
>  			sl--;