From: Ming Lin
To: linux-kernel@vger.kernel.org
Cc: Christoph Hellwig, Jens Axboe, Kent Overstreet, Dongsu Park,
    Ming Lin, Neil Brown, linux-raid@vger.kernel.org
Subject: [PATCH v3 07/11] md/raid5: split bio for chunk_aligned_read
Date: Wed, 6 May 2015 23:34:17 -0700
Message-Id: <1430980461-5235-8-git-send-email-mlin@kernel.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1430980461-5235-1-git-send-email-mlin@kernel.org>
References: <1430980461-5235-1-git-send-email-mlin@kernel.org>

If a read request fits entirely in a chunk, it is passed directly to the
underlying device (provided that device hasn't failed, of course). If it
doesn't fit, the slightly less efficient path through the stripe_cache
is taken instead. Requests that reach the stripe cache are always split
up completely as necessary.

So with RAID5, ripping out the merge_bvec_fn doesn't stop it from
working, but it could cause the less efficient path to be taken more
often.

All that is needed to manage this is for 'chunk_aligned_read' to do some
bio splitting, much like the RAID0 code does.

Cc: Neil Brown
Cc: linux-raid@vger.kernel.org
Signed-off-by: Ming Lin
---
 drivers/md/raid5.c | 42 +++++++++++++++++++++++++++++++++++++-----
 1 file changed, 37 insertions(+), 5 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 7f4a717..b18f548 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -4738,7 +4738,7 @@ static void raid5_align_endio(struct bio *bi, int error)
 	add_bio_to_retry(raid_bi, conf);
 }
 
-static int chunk_aligned_read(struct mddev *mddev, struct bio * raid_bio)
+static int raid5_read_one_chunk(struct mddev *mddev, struct bio *raid_bio)
 {
 	struct r5conf *conf = mddev->private;
 	int dd_idx;
@@ -4747,7 +4747,7 @@ static int chunk_aligned_read(struct mddev *mddev, struct bio * raid_bio)
 	sector_t end_sector;
 
 	if (!in_chunk_boundary(mddev, raid_bio)) {
-		pr_debug("chunk_aligned_read : non aligned\n");
+		pr_debug("%s: non aligned\n", __func__);
 		return 0;
 	}
 	/*
@@ -4822,6 +4822,36 @@ static int chunk_aligned_read(struct mddev *mddev, struct bio * raid_bio)
 	}
 }
 
+static struct bio *chunk_aligned_read(struct mddev *mddev, struct bio *raid_bio)
+{
+	struct bio *split;
+
+	do {
+		sector_t sector = raid_bio->bi_iter.bi_sector;
+		unsigned chunk_sects = mddev->chunk_sectors;
+		unsigned sectors;
+
+		if (likely(is_power_of_2(chunk_sects)))
+			sectors = chunk_sects - (sector & (chunk_sects-1));
+		else
+			sectors = chunk_sects - sector_div(sector, chunk_sects);
+
+		if (sectors < bio_sectors(raid_bio)) {
+			split = bio_split(raid_bio, sectors, GFP_NOIO, fs_bio_set);
+			bio_chain(split, raid_bio);
+		} else
+			split = raid_bio;
+
+		if (!raid5_read_one_chunk(mddev, split)) {
+			if (split != raid_bio)
+				generic_make_request(raid_bio);
+			return split;
+		}
+	} while (split != raid_bio);
+
+	return NULL;
+}
+
 /* __get_priority_stripe - get the next stripe to process
  *
  * Full stripe writes are allowed to pass preread active stripes up until
@@ -5099,9 +5129,11 @@ static void make_request(struct mddev *mddev, struct bio * bi)
 	 * data on failed drives.
 	 */
 	if (rw == READ && mddev->degraded == 0 &&
-	     mddev->reshape_position == MaxSector &&
-	     chunk_aligned_read(mddev,bi))
-		return;
+	     mddev->reshape_position == MaxSector) {
+		bi = chunk_aligned_read(mddev, bi);
+		if (!bi)
+			return;
+	}
 
 	if (unlikely(bi->bi_rw & REQ_DISCARD)) {
 		make_discard_request(mddev, bi);
-- 
1.9.1
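
For anyone who wants to experiment with the chunk-boundary arithmetic in
the new chunk_aligned_read() outside the kernel, below is a minimal
standalone sketch in userspace C. It is illustrative only, not part of
the patch: the sectors_to_chunk_end() helper and the main() driver are
invented for this example, and a plain modulo stands in for the kernel's
sector_div().

/*
 * Standalone userspace illustration (not kernel code) of the
 * chunk-boundary arithmetic used by chunk_aligned_read() above.
 */
#include <stdio.h>
#include <stdbool.h>

typedef unsigned long long sector_t;

static bool is_power_of_2(unsigned n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/* Sectors from 'sector' to the end of the chunk that contains it. */
static unsigned sectors_to_chunk_end(sector_t sector, unsigned chunk_sects)
{
	if (is_power_of_2(chunk_sects))
		/* Fast path: offset within the chunk via a mask. */
		return chunk_sects - (unsigned)(sector & (chunk_sects - 1));
	/* Generic path: plain modulo in place of sector_div(). */
	return chunk_sects - (unsigned)(sector % chunk_sects);
}

int main(void)
{
	/* 128-sector (64KiB) chunks; a read starting 10 sectors in. */
	unsigned chunk_sects = 128;
	sector_t sector = 3 * chunk_sects + 10;
	unsigned bio_sectors = 300;	/* the read spans several chunks */

	/* Mimic the splitting loop: peel off one chunk-sized piece
	 * at a time, exactly as bio_split() would. */
	while (bio_sectors) {
		unsigned n = sectors_to_chunk_end(sector, chunk_sects);

		if (n > bio_sectors)
			n = bio_sectors;	/* final fragment */
		printf("submit %u sectors at %llu\n", n, sector);
		sector += n;
		bio_sectors -= n;
	}
	return 0;
}

Running this prints three fragments (118, 128, and 54 sectors), showing
how a bio that crosses chunk boundaries is cut so that each piece fits
within a single chunk and can take the fast read path.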