From: Yu Kuai
Subject: [PATCH 11/12] md: pass in max_sectors for pers->sync_request()
Date: Mon, 3 Jun 2024 20:58:14 +0800
Message-ID: <20240603125815.2199072-12-yukuai3@huawei.com>
In-Reply-To: <20240603125815.2199072-1-yukuai3@huawei.com>
References: <20240603125815.2199072-1-yukuai3@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org
For different sync_action, sync_thread will use a different max_sectors; see
md_sync_max_sectors() for details. Currently both md_do_sync() and
pers->sync_request() have to compute the same max_sectors in each iteration.
Hence pass max_sectors in to pers->sync_request() to remove the redundant
code.

Signed-off-by: Yu Kuai
---
 drivers/md/md.c     | 5 +++--
 drivers/md/md.h     | 3 ++-
 drivers/md/raid1.c  | 5 ++---
 drivers/md/raid10.c | 8 ++------
 drivers/md/raid5.c  | 3 +--
 5 files changed, 10 insertions(+), 14 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index a1f9d9b69911..e1555971ebe8 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -9191,7 +9191,8 @@ void md_do_sync(struct md_thread *thread)
 		if (test_bit(MD_RECOVERY_INTR, &mddev->recovery))
 			break;
 
-		sectors = mddev->pers->sync_request(mddev, j, &skipped);
+		sectors = mddev->pers->sync_request(mddev, j, max_sectors,
+						    &skipped);
 		if (sectors == 0) {
 			set_bit(MD_RECOVERY_INTR, &mddev->recovery);
 			break;
@@ -9281,7 +9282,7 @@ void md_do_sync(struct md_thread *thread)
 			mddev->curr_resync_completed = mddev->curr_resync;
 			sysfs_notify_dirent_safe(mddev->sysfs_completed);
 		}
-	mddev->pers->sync_request(mddev, max_sectors, &skipped);
+	mddev->pers->sync_request(mddev, max_sectors, max_sectors, &skipped);
 
 	if (!test_bit(MD_RECOVERY_CHECK, &mddev->recovery) &&
 	    mddev->curr_resync > MD_RESYNC_ACTIVE) {
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 296a78568fc4..3458ac5ce9c1 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -729,7 +729,8 @@ struct md_personality
 	int (*hot_add_disk) (struct mddev *mddev, struct md_rdev *rdev);
 	int (*hot_remove_disk) (struct mddev *mddev, struct md_rdev *rdev);
 	int (*spare_active) (struct mddev *mddev);
-	sector_t (*sync_request)(struct mddev *mddev, sector_t sector_nr, int *skipped);
+	sector_t (*sync_request)(struct mddev *mddev, sector_t sector_nr,
+				 sector_t max_sector, int *skipped);
 	int (*resize) (struct mddev *mddev, sector_t sectors);
 	sector_t (*size) (struct mddev *mddev, sector_t sectors, int raid_disks);
 	int (*check_reshape) (struct mddev *mddev);
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 7b8a71ca66dd..387f9a53e35f 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -2757,12 +2757,12 @@ static struct r1bio *raid1_alloc_init_r1buf(struct r1conf *conf)
  */
 
 static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
-				   int *skipped)
+				   sector_t max_sector, int *skipped)
 {
 	struct r1conf *conf = mddev->private;
 	struct r1bio *r1_bio;
 	struct bio *bio;
-	sector_t max_sector, nr_sectors;
+	sector_t nr_sectors;
 	int disk = -1;
 	int i;
 	int wonly = -1;
@@ -2778,7 +2778,6 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
 	if (init_resync(conf))
 		return 0;
 
-	max_sector = mddev->dev_sectors;
 	if (sector_nr >= max_sector) {
 		/* If we aborted, we need to abort the
 		 * sync on the 'current' bitmap chunk (there will
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index a4556d2e46bf..af54ac30a667 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -3140,12 +3140,12 @@ static void raid10_set_cluster_sync_high(struct r10conf *conf)
  */
 
 static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
-				    int *skipped)
+				    sector_t max_sector, int *skipped)
 {
 	struct r10conf *conf = mddev->private;
 	struct r10bio *r10_bio;
 	struct bio *biolist = NULL, *bio;
-	sector_t max_sector, nr_sectors;
+	sector_t nr_sectors;
 	int i;
 	int max_sync;
 	sector_t sync_blocks;
@@ -3175,10 +3175,6 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
 		return 0;
 
  skipped:
-	max_sector = mddev->dev_sectors;
-	if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery) ||
-	    test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery))
-		max_sector = mddev->resync_max_sectors;
 	if (sector_nr >= max_sector) {
 		conf->cluster_sync_low = 0;
 		conf->cluster_sync_high = 0;
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 2bd1ce9b3922..69a083ca41a3 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -6458,11 +6458,10 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr, int *sk
 }
 
 static inline sector_t raid5_sync_request(struct mddev *mddev, sector_t sector_nr,
-					  int *skipped)
+					  sector_t max_sector, int *skipped)
 {
 	struct r5conf *conf = mddev->private;
 	struct stripe_head *sh;
-	sector_t max_sector = mddev->dev_sectors;
 	sector_t sync_blocks;
 	int still_degraded = 0;
 	int i;
-- 
2.39.2
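For readers outside the md code, the calling convention this patch establishes can be sketched in plain, userspace C. This is a minimal illustration, not the kernel code: the struct, enum, and function names below (mddev_sketch, sync_max_sectors, sync_request_sketch, run_sync) are invented stand-ins for mddev, md_sync_max_sectors(), a personality's sync_request() hook, and md_do_sync() respectively.

```c
#include <stdint.h>

typedef uint64_t sector_t;

/* Illustrative stand-ins for the real md structures and helpers. */
enum sync_action { ACTION_RECOVER, ACTION_RESYNC };

struct mddev_sketch {
	sector_t dev_sectors;        /* size of each component device */
	sector_t resync_max_sectors; /* array size; may differ during reshape */
	enum sync_action action;
};

/* Plays the role of md_sync_max_sectors(): the end of the sync range
 * depends on which action the sync thread is performing. */
static sector_t sync_max_sectors(const struct mddev_sketch *mddev)
{
	return mddev->action == ACTION_RESYNC ? mddev->resync_max_sectors
					      : mddev->dev_sectors;
}

/* After the patch: the personality hook receives max_sector as an
 * argument instead of re-deriving it from mddev on every call. */
static sector_t sync_request_sketch(struct mddev_sketch *mddev,
				    sector_t sector_nr, sector_t max_sector,
				    int *skipped)
{
	(void)mddev;
	*skipped = 0;
	if (sector_nr >= max_sector)
		return 0;		   /* range exhausted, stop */
	return max_sector - sector_nr;	   /* pretend the rest was synced */
}

/* The caller (md_do_sync() in the real code) computes max_sectors once,
 * before the loop, and hands the same value to every hook invocation. */
static sector_t run_sync(struct mddev_sketch *mddev)
{
	sector_t max_sectors = sync_max_sectors(mddev);
	sector_t j = 0;
	int skipped;

	while (j < max_sectors) {
		sector_t sectors = sync_request_sketch(mddev, j, max_sectors,
						       &skipped);
		if (sectors == 0)
			break;
		j += sectors;
	}
	return j;
}
```

The point of the refactor is visible in run_sync(): before the patch, each raid1/raid10/raid5 sync_request() implementation recomputed this value from mddev in every iteration; now only the caller derives it.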