From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Alexander Viro, Kent Overstreet
Cc: David Sterba, Huang Ying, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, Theodore Ts'o, "Darrick J. Wong", Coly Li,
	Filipe Manana, Randy Dunlap, Ming Lei
Subject: [PATCH V6 07/30] block: use bio_for_each_chunk() to map sg
Date: Sat, 9 Jun 2018 20:29:51 +0800
Message-Id: <20180609123014.8861-8-ming.lei@redhat.com>
In-Reply-To: <20180609123014.8861-1-ming.lei@redhat.com>
References: <20180609123014.8861-1-ming.lei@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

It is more efficient to use bio_for_each_chunk() to map sg; at the same
time we have to consider splitting the multipage bvec, as is done in
blk_bio_segment_split().

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-merge.c | 72 +++++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 52 insertions(+), 20 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 2493fe027953..044db0fa2f89 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -424,6 +424,56 @@ static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio,
 	return 0;
 }
 
+static struct scatterlist *blk_next_sg(struct scatterlist **sg,
+		struct scatterlist *sglist)
+{
+	if (!*sg)
+		return sglist;
+	else {
+		/*
+		 * If the driver previously mapped a shorter
+		 * list, we could see a termination bit
+		 * prematurely unless it fully inits the sg
+		 * table on each mapping. We KNOW that there
+		 * must be more entries here or the driver
+		 * would be buggy, so force clear the
+		 * termination bit to avoid doing a full
+		 * sg_init_table() in drivers for each command.
+		 */
+		sg_unmark_end(*sg);
+		return sg_next(*sg);
+	}
+}
+
+static unsigned blk_bvec_map_sg(struct request_queue *q,
+		struct bio_vec *bvec, struct scatterlist *sglist,
+		struct scatterlist **sg)
+{
+	unsigned nbytes = bvec->bv_len;
+	unsigned nsegs = 0, total = 0;
+
+	while (nbytes > 0) {
+		unsigned seg_size;
+		struct page *pg;
+		unsigned offset, idx;
+
+		*sg = blk_next_sg(sg, sglist);
+
+		seg_size = min(nbytes, queue_max_segment_size(q));
+		offset = (total + bvec->bv_offset) % PAGE_SIZE;
+		idx = (total + bvec->bv_offset) / PAGE_SIZE;
+		pg = nth_page(bvec->bv_page, idx);
+
+		sg_set_page(*sg, pg, seg_size, offset);
+
+		total += seg_size;
+		nbytes -= seg_size;
+		nsegs++;
+	}
+
+	return nsegs;
+}
+
 static inline void
 __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
 		     struct scatterlist *sglist, struct bio_vec *bvprv,
@@ -444,25 +494,7 @@ __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
 		(*sg)->length += nbytes;
 	} else {
 new_segment:
-		if (!*sg)
-			*sg = sglist;
-		else {
-			/*
-			 * If the driver previously mapped a shorter
-			 * list, we could see a termination bit
-			 * prematurely unless it fully inits the sg
-			 * table on each mapping. We KNOW that there
-			 * must be more entries here or the driver
-			 * would be buggy, so force clear the
-			 * termination bit to avoid doing a full
-			 * sg_init_table() in drivers for each command.
-			 */
-			sg_unmark_end(*sg);
-			*sg = sg_next(*sg);
-		}
-
-		sg_set_page(*sg, bvec->bv_page, nbytes, bvec->bv_offset);
-		(*nsegs)++;
+		(*nsegs) += blk_bvec_map_sg(q, bvec, sglist, sg);
 	}
 	*bvprv = *bvec;
 }
@@ -484,7 +516,7 @@ static int __blk_bios_map_sg(struct request_queue *q, struct bio *bio,
 	int cluster = blk_queue_cluster(q), nsegs = 0;
 
 	for_each_bio(bio)
-		bio_for_each_segment(bvec, bio, iter)
+		bio_for_each_chunk(bvec, bio, iter)
 			__blk_segment_map_sg(q, &bvec, sglist, &bvprv, sg,
 					     &nsegs, &cluster);
-- 
2.9.5