From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Kent Overstreet
Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, Theodore Ts'o,
	"Darrick J. Wong", Coly Li, Filipe Manana, Randy Dunlap, Ming Lei
Subject: [PATCH V7 13/24] block: use bio_for_each_bvec() to map sg
Date: Wed, 27 Jun 2018 20:45:37 +0800
Message-Id: <20180627124548.3456-14-ming.lei@redhat.com>
In-Reply-To: <20180627124548.3456-1-ming.lei@redhat.com>
References: <20180627124548.3456-1-ming.lei@redhat.com>

It is more efficient to use bio_for_each_bvec() to map sg. Meanwhile, we
have to consider splitting the multipage bvec, as is done in
blk_bio_segment_split().

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-merge.c | 72 +++++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 52 insertions(+), 20 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index bf1dceb9656a..0f7769c5feb5 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -424,6 +424,56 @@ static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio,
 	return 0;
 }
 
+static struct scatterlist *blk_next_sg(struct scatterlist **sg,
+		struct scatterlist *sglist)
+{
+	if (!*sg)
+		return sglist;
+	else {
+		/*
+		 * If the driver previously mapped a shorter
+		 * list, we could see a termination bit
+		 * prematurely unless it fully inits the sg
+		 * table on each mapping. We KNOW that there
+		 * must be more entries here or the driver
+		 * would be buggy, so force clear the
+		 * termination bit to avoid doing a full
+		 * sg_init_table() in drivers for each command.
+		 */
+		sg_unmark_end(*sg);
+		return sg_next(*sg);
+	}
+}
+
+static unsigned blk_bvec_map_sg(struct request_queue *q,
+		struct bio_vec *bvec, struct scatterlist *sglist,
+		struct scatterlist **sg)
+{
+	unsigned nbytes = bvec->bv_len;
+	unsigned nsegs = 0, total = 0;
+
+	while (nbytes > 0) {
+		unsigned seg_size;
+		struct page *pg;
+		unsigned offset, idx;
+
+		*sg = blk_next_sg(sg, sglist);
+
+		seg_size = min(nbytes, queue_max_segment_size(q));
+		offset = (total + bvec->bv_offset) % PAGE_SIZE;
+		idx = (total + bvec->bv_offset) / PAGE_SIZE;
+		pg = nth_page(bvec->bv_page, idx);
+
+		sg_set_page(*sg, pg, seg_size, offset);
+
+		total += seg_size;
+		nbytes -= seg_size;
+		nsegs++;
+	}
+
+	return nsegs;
+}
+
 static inline void
 __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
 		     struct scatterlist *sglist, struct bio_vec *bvprv,
@@ -444,25 +494,7 @@ __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
 			(*sg)->length += nbytes;
 	} else {
 new_segment:
-		if (!*sg)
-			*sg = sglist;
-		else {
-			/*
-			 * If the driver previously mapped a shorter
-			 * list, we could see a termination bit
-			 * prematurely unless it fully inits the sg
-			 * table on each mapping. We KNOW that there
-			 * must be more entries here or the driver
-			 * would be buggy, so force clear the
-			 * termination bit to avoid doing a full
-			 * sg_init_table() in drivers for each command.
-			 */
-			sg_unmark_end(*sg);
-			*sg = sg_next(*sg);
-		}
-
-		sg_set_page(*sg, bvec->bv_page, nbytes, bvec->bv_offset);
-		(*nsegs)++;
+		(*nsegs) += blk_bvec_map_sg(q, bvec, sglist, sg);
 	}
 	*bvprv = *bvec;
 }
@@ -484,7 +516,7 @@ static int __blk_bios_map_sg(struct request_queue *q, struct bio *bio,
 	int cluster = blk_queue_cluster(q), nsegs = 0;
 
 	for_each_bio(bio)
-		bio_for_each_segment(bvec, bio, iter)
+		bio_for_each_bvec(bvec, bio, iter)
 			__blk_segment_map_sg(q, &bvec, sglist, &bvprv, sg,
 					     &nsegs, &cluster);
 
-- 
2.9.5
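
For illustration, here is a minimal, self-contained user-space sketch (not
part of the patch) that models the splitting arithmetic in blk_bvec_map_sg():
one multipage bvec is carved into chunks no larger than the queue's max
segment size, and each chunk's page index and in-page offset are derived from
bv_offset plus the bytes mapped so far. The PAGE_SIZE, bvec geometry and max
segment size below are made-up example values, and sg_set_page() plus the
scatterlist bookkeeping are replaced by a printf().

/*
 * Sketch only: mirrors the while loop in blk_bvec_map_sg() with invented
 * example values; prints the (page index, offset, length) triples that
 * would be handed to sg_set_page().
 */
#include <stdio.h>

#define PAGE_SIZE	4096u

static unsigned min_u(unsigned a, unsigned b)
{
	return a < b ? a : b;
}

int main(void)
{
	/* hypothetical multipage bvec: 3 pages + 512 bytes, starting at offset 256 */
	unsigned bv_offset = 256;
	unsigned bv_len = 3 * PAGE_SIZE + 512;
	/* stand-in for queue_max_segment_size(q) */
	unsigned max_seg_size = 2 * PAGE_SIZE;

	unsigned nbytes = bv_len;
	unsigned total = 0, nsegs = 0;

	while (nbytes > 0) {
		/* same arithmetic as the loop in blk_bvec_map_sg() */
		unsigned seg_size = min_u(nbytes, max_seg_size);
		unsigned offset = (total + bv_offset) % PAGE_SIZE;
		unsigned idx = (total + bv_offset) / PAGE_SIZE;

		printf("sg[%u]: page index %u, offset %u, length %u\n",
		       nsegs, idx, offset, seg_size);

		total += seg_size;
		nbytes -= seg_size;
		nsegs++;
	}

	printf("%u-byte bvec mapped into %u segment(s)\n", bv_len, nsegs);
	return 0;
}

With these example values the bvec maps to two sg entries, both starting at
in-page offset 256; each entry is allowed to cross page boundaries because a
multipage bvec is physically contiguous, which is why a single sg entry can
cover up to queue_max_segment_size(q) bytes rather than one page.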