Date: Mon, 26 Nov 2018 14:39:38 -0800
From: Omar Sandoval
To: Ming Lei
Cc: Jens Axboe, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, Theodore Ts'o, Omar Sandoval, Sagi Grimberg,
	Dave Chinner, Kent Overstreet, Mike Snitzer, dm-devel@redhat.com,
	Alexander Viro, linux-fsdevel@vger.kernel.org, Shaohua Li,
	linux-raid@vger.kernel.org, David Sterba, linux-btrfs@vger.kernel.org,
	"Darrick J. Wong", linux-xfs@vger.kernel.org, Gao Xiang,
	Christoph Hellwig, linux-ext4@vger.kernel.org, Coly Li,
	linux-bcache@vger.kernel.org, Boaz Harrosh, Bob Peterson,
	cluster-devel@redhat.com
Subject: Re: [PATCH V12 13/20] block: loop: pass multi-page bvec to iov_iter
Message-ID: <20181126223938.GJ30411@vader>
References: <20181126021720.19471-1-ming.lei@redhat.com>
	<20181126021720.19471-14-ming.lei@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20181126021720.19471-14-ming.lei@redhat.com>

On Mon, Nov 26, 2018 at 10:17:13AM +0800, Ming Lei wrote:
> iov_iter is implemented on top of the bvec iterator helpers, so it is
> safe to pass a multi-page bvec to it, and this way is much more
> efficient than passing one page in each bvec.
> 
> Reviewed-by: Christoph Hellwig

Reviewed-by: Omar Sandoval

> Signed-off-by: Ming Lei
> ---
>  drivers/block/loop.c | 20 ++++++++++----------
>  1 file changed, 10 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/block/loop.c b/drivers/block/loop.c
> index 176ab1f28eca..e3683211f12d 100644
> --- a/drivers/block/loop.c
> +++ b/drivers/block/loop.c
> @@ -510,21 +510,22 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
>  		     loff_t pos, bool rw)
>  {
>  	struct iov_iter iter;
> +	struct req_iterator rq_iter;
>  	struct bio_vec *bvec;
>  	struct request *rq = blk_mq_rq_from_pdu(cmd);
>  	struct bio *bio = rq->bio;
>  	struct file *file = lo->lo_backing_file;
> +	struct bio_vec tmp;
>  	unsigned int offset;
> -	int segments = 0;
> +	int nr_bvec = 0;
>  	int ret;
> 
> +	rq_for_each_bvec(tmp, rq, rq_iter)
> +		nr_bvec++;
> +
>  	if (rq->bio != rq->biotail) {
> -		struct req_iterator iter;
> -		struct bio_vec tmp;
> 
> -		__rq_for_each_bio(bio, rq)
> -			segments += bio_segments(bio);
> -		bvec = kmalloc_array(segments, sizeof(struct bio_vec),
> +		bvec = kmalloc_array(nr_bvec, sizeof(struct bio_vec),
>  				     GFP_NOIO);
>  		if (!bvec)
>  			return -EIO;
> @@ -533,10 +534,10 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
>  		/*
>  		 * The bios of the request may be started from the middle of
>  		 * the 'bvec' because of bio splitting, so we can't directly
> -		 * copy bio->bi_iov_vec to new bvec. The rq_for_each_segment
> +		 * copy bio->bi_iov_vec to new bvec. The rq_for_each_bvec
>  		 * API will take care of all details for us.
>  		 */
> -		rq_for_each_segment(tmp, rq, iter) {
> +		rq_for_each_bvec(tmp, rq, rq_iter) {
>  			*bvec = tmp;
>  			bvec++;
>  		}
> @@ -550,11 +551,10 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
>  		 */
>  		offset = bio->bi_iter.bi_bvec_done;
>  		bvec = __bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
> -		segments = bio_segments(bio);
>  	}
>  	atomic_set(&cmd->ref, 2);
> 
> -	iov_iter_bvec(&iter, rw, bvec, segments, blk_rq_bytes(rq));
> +	iov_iter_bvec(&iter, rw, bvec, nr_bvec, blk_rq_bytes(rq));
>  	iter.iov_offset = offset;
> 
>  	cmd->iocb.ki_pos = pos;
> -- 
> 2.9.5
> 
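For anyone following along, the efficiency claim in the commit message is
just that the iov_iter now sees one bio_vec per physically contiguous
range instead of one per page, so the kmalloc_array() and the iterator
walk both shrink. Here is a minimal userspace sketch -- hypothetical
structures, not kernel code -- of the counting difference between the
old per-page bio_segments() scheme and the new rq_for_each_bvec() one:

/*
 * Hypothetical userspace model -- NOT kernel code -- contrasting
 * per-page segment counting with multi-page bvec counting.
 */
#include <stdio.h>

#define PAGE_SIZE 4096u

/* Stand-in for a multi-page bvec: one physically contiguous range. */
struct mp_bvec {
	unsigned int offset;	/* offset into the first page */
	unsigned int len;	/* may span many pages */
};

/* Old scheme: one bio_vec per page touched by each range. */
static unsigned int nr_single_page_bvecs(const struct mp_bvec *v,
					 unsigned int n)
{
	unsigned int total = 0;

	for (unsigned int i = 0; i < n; i++)
		total += (v[i].offset + v[i].len + PAGE_SIZE - 1) / PAGE_SIZE;
	return total;
}

int main(void)
{
	/* A contiguous 64K extent plus one lone 4K page. */
	struct mp_bvec vecs[] = {
		{ .offset = 0, .len = 64 * 1024 },
		{ .offset = 0, .len = PAGE_SIZE },
	};
	unsigned int n = sizeof(vecs) / sizeof(vecs[0]);

	printf("multi-page bvecs:  %u\n", n);	/* 2 */
	printf("single-page bvecs: %u\n",
	       nr_single_page_bvecs(vecs, n));	/* 17 */
	return 0;
}

With that layout the old scheme allocates and fills 17 bio_vecs where
the new one needs only 2, which is why counting with rq_for_each_bvec()
up front is the right size for the array handed to iov_iter_bvec().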