From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Alexander Viro, Kent Overstreet
Cc: Huang Ying, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, "Theodore Ts'o",
    "Darrick J. Wong", Coly Li, Filipe Manana, Ming Lei
Subject: [PATCH V4 13/45] block: blk-merge: try to make front segments in full size
Date: Mon, 18 Dec 2017 20:22:15 +0800
Message-Id: <20171218122247.3488-14-ming.lei@redhat.com>
In-Reply-To: <20171218122247.3488-1-ming.lei@redhat.com>
References: <20171218122247.3488-1-ming.lei@redhat.com>

When merging one bvec into a segment, if the bvec is too big to merge,
the current policy is to move the whole bvec into another new segment.

This patch changes the policy to try to maximize the size of front
segments: in the situation above, part of the bvec is merged into the
current segment, and the remainder is put into the next segment.

This patch prepares for supporting multipage bvecs, where this case can
become quite common, so we should try to make front segments full size.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
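Not part of the patch itself: below is a minimal, compilable userspace
sketch of the splitting arithmetic the patch introduces, for readers who
want to play with the numbers. All identifiers in it (struct seg,
struct bv, MAX_SEG_SIZE, merge_front_full) are made up for illustration
and are not kernel names; MAX_SEG_SIZE stands in for
queue_max_segment_size(q).

#include <stdio.h>

#define MAX_SEG_SIZE 4096u	/* stand-in for queue_max_segment_size(q) */

struct seg { unsigned len; };			/* illustrative, not kernel's */
struct bv  { unsigned offset, len; };		/* illustrative, not kernel's */

/*
 * Try to merge @bv into @seg. Returns the number of bytes that must
 * still go into a new segment (0 if the whole bvec fit).
 */
static unsigned merge_front_full(struct seg *seg, struct bv *bv)
{
	if (seg->len + bv->len <= MAX_SEG_SIZE) {
		seg->len += bv->len;	/* whole bvec fits */
		return 0;
	}

	/* fill the front segment up to its maximum size ... */
	unsigned advance = MAX_SEG_SIZE - seg->len;

	seg->len += advance;
	bv->offset += advance;
	bv->len -= advance;

	/* ... and leave only the remainder for the next segment */
	return bv->len;
}

int main(void)
{
	struct seg seg = { .len = 1024 };		/* partially filled segment */
	struct bv bv = { .offset = 0, .len = 8192 };	/* big (multipage) bvec */

	unsigned rest = merge_front_full(&seg, &bv);
	printf("front segment: %u bytes, remainder for new segment: %u bytes\n",
	       seg.len, rest);	/* prints 4096 and 5120 with these numbers */
	return 0;
}

With these numbers the old policy would have left the 1024-byte front
segment as-is and pushed all 8192 bytes into a new segment; the new
policy fills the front segment to 4096 bytes first.
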
 block/blk-merge.c | 54 +++++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 49 insertions(+), 5 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index a476337a8ff4..42ceb89bc566 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -109,6 +109,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 	bool do_split = true;
 	struct bio *new = NULL;
 	const unsigned max_sectors = get_max_io_size(q, bio);
+	unsigned advance = 0;
 
 	bio_for_each_segment(bv, bio, iter) {
 		/*
@@ -134,12 +135,32 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		}
 
 		if (bvprvp && blk_queue_cluster(q)) {
-			if (seg_size + bv.bv_len > queue_max_segment_size(q))
-				goto new_segment;
 			if (!BIOVEC_PHYS_MERGEABLE(bvprvp, &bv))
 				goto new_segment;
 			if (!BIOVEC_SEG_BOUNDARY(q, bvprvp, &bv))
 				goto new_segment;
+			if (seg_size + bv.bv_len > queue_max_segment_size(q)) {
+				/*
+				 * One assumption is that the initial value of
+				 * @seg_size (equal to bv.bv_len) won't be
+				 * bigger than the max segment size, but this
+				 * becomes false once multipage bvecs arrive.
+				 */
+				advance = queue_max_segment_size(q) - seg_size;
+
+				if (advance > 0) {
+					seg_size += advance;
+					sectors += advance >> 9;
+					bv.bv_len -= advance;
+					bv.bv_offset += advance;
+				}
+
+				/*
+				 * Still need to put the remainder of the
+				 * current bvec into a new segment.
+				 */
+				goto new_segment;
+			}
 
 			seg_size += bv.bv_len;
 			bvprv = bv;
@@ -161,6 +182,12 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		seg_size = bv.bv_len;
 		sectors += bv.bv_len >> 9;
 
+		/* restore the bvec for the iterator */
+		if (advance) {
+			bv.bv_len += advance;
+			bv.bv_offset -= advance;
+			advance = 0;
+		}
 	}
 
 	do_split = false;
@@ -361,16 +388,29 @@ __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
 {
 
 	int nbytes = bvec->bv_len;
+	unsigned advance = 0;
 
 	if (*sg && *cluster) {
-		if ((*sg)->length + nbytes > queue_max_segment_size(q))
-			goto new_segment;
-
 		if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec))
 			goto new_segment;
 		if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bvec))
 			goto new_segment;
 
+		/*
+		 * Try our best to merge part of the bvec into the
+		 * previous segment, following the same policy as
+		 * blk_bio_segment_split().
+		 */
+		if ((*sg)->length + nbytes > queue_max_segment_size(q)) {
+			advance = queue_max_segment_size(q) - (*sg)->length;
+			if (advance) {
+				(*sg)->length += advance;
+				bvec->bv_offset += advance;
+				bvec->bv_len -= advance;
+			}
+			goto new_segment;
+		}
+
 		(*sg)->length += nbytes;
 	} else {
 new_segment:
@@ -393,6 +433,10 @@ __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
 
 		sg_set_page(*sg, bvec->bv_page, nbytes, bvec->bv_offset);
 		(*nsegs)++;
+
+		/* for making the iterator happy */
+		bvec->bv_offset -= advance;
+		bvec->bv_len += advance;
 	}
 	*bvprv = *bvec;
 }
-- 
2.9.5
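
A follow-up sketch under the same made-up names as above, also not part
of the patch: it shows the net effect of the policy when one large
multipage bvec is carved into segments, namely that every segment
except possibly the last comes out at full size.

#include <stdio.h>

#define MAX_SEG_SIZE 4096u	/* stand-in for queue_max_segment_size(q) */

int main(void)
{
	unsigned len = 10000;	/* one big multipage bvec */
	unsigned offset = 0;

	while (len) {
		unsigned seg = len > MAX_SEG_SIZE ? MAX_SEG_SIZE : len;

		printf("segment at offset %u: %u bytes\n", offset, seg);
		offset += seg;
		len -= seg;
	}
	/* prints 4096, 4096, 1808: only the tail segment is short */
	return 0;
}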