From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Alexander Viro, Kent Overstreet
Cc: David Sterba, Huang Ying, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, Theodore Ts'o, "Darrick J. Wong", Coly Li,
	Filipe Manana, Ming Lei
Subject: [RESEND PATCH V5 07/33] block: use bio_for_each_segment() to map sg
Date: Fri, 25 May 2018 11:45:55 +0800
Message-Id: <20180525034621.31147-8-ming.lei@redhat.com>
In-Reply-To: <20180525034621.31147-1-ming.lei@redhat.com>
References: <20180525034621.31147-1-ming.lei@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

It is more efficient to use bio_for_each_segment() to map the sg list;
at the same time, we have to consider splitting a multipage bvec, as is
done in blk_bio_segment_split().

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-merge.c | 72 +++++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 52 insertions(+), 20 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index d157b752d965..9fc96c9f6061 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -414,6 +414,56 @@ static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio,
 	return 0;
 }
 
+static struct scatterlist *blk_next_sg(struct scatterlist **sg,
+		struct scatterlist *sglist)
+{
+	if (!*sg)
+		return sglist;
+	else {
+		/*
+		 * If the driver previously mapped a shorter
+		 * list, we could see a termination bit
+		 * prematurely unless it fully inits the sg
+		 * table on each mapping. We KNOW that there
+		 * must be more entries here or the driver
+		 * would be buggy, so force clear the
+		 * termination bit to avoid doing a full
+		 * sg_init_table() in drivers for each command.
+		 */
+		sg_unmark_end(*sg);
+		return sg_next(*sg);
+	}
+}
+
+static unsigned blk_bvec_map_sg(struct request_queue *q,
+		struct bio_vec *bvec, struct scatterlist *sglist,
+		struct scatterlist **sg)
+{
+	unsigned nbytes = bvec->bv_len;
+	unsigned nsegs = 0, total = 0;
+
+	while (nbytes > 0) {
+		unsigned seg_size;
+		struct page *pg;
+		unsigned offset, idx;
+
+		*sg = blk_next_sg(sg, sglist);
+
+		seg_size = min(nbytes, queue_max_segment_size(q));
+		offset = (total + bvec->bv_offset) % PAGE_SIZE;
+		idx = (total + bvec->bv_offset) / PAGE_SIZE;
+		pg = nth_page(bvec->bv_page, idx);
+
+		sg_set_page(*sg, pg, seg_size, offset);
+
+		total += seg_size;
+		nbytes -= seg_size;
+		nsegs++;
+	}
+
+	return nsegs;
+}
+
 static inline void
 __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
 		     struct scatterlist *sglist, struct bio_vec *bvprv,
@@ -434,25 +484,7 @@ __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
 			(*sg)->length += nbytes;
 	} else {
 new_segment:
-		if (!*sg)
-			*sg = sglist;
-		else {
-			/*
-			 * If the driver previously mapped a shorter
-			 * list, we could see a termination bit
-			 * prematurely unless it fully inits the sg
-			 * table on each mapping. We KNOW that there
-			 * must be more entries here or the driver
-			 * would be buggy, so force clear the
-			 * termination bit to avoid doing a full
-			 * sg_init_table() in drivers for each command.
-			 */
-			sg_unmark_end(*sg);
-			*sg = sg_next(*sg);
-		}
-
-		sg_set_page(*sg, bvec->bv_page, nbytes, bvec->bv_offset);
-		(*nsegs)++;
+		(*nsegs) += blk_bvec_map_sg(q, bvec, sglist, sg);
 	}
 	*bvprv = *bvec;
 }
@@ -474,7 +506,7 @@ static int __blk_bios_map_sg(struct request_queue *q, struct bio *bio,
 	int cluster = blk_queue_cluster(q), nsegs = 0;
 
 	for_each_bio(bio)
-		bio_for_each_page(bvec, bio, iter)
+		bio_for_each_segment(bvec, bio, iter)
 			__blk_segment_map_sg(q, &bvec, sglist, &bvprv, sg,
 					     &nsegs, &cluster);
 
-- 
2.9.5