From: Ming Lei
To: Jens Axboe, Christoph Hellwig, Alexander Viro, Kent Overstreet
Cc: Huang Ying, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	"Theodore Ts'o", "Darrick J. Wong", Coly Li, Filipe Manana, Ming Lei
Subject: [PATCH V4 43/45] block: bio: pass segments to bio if bio_add_page() is bypassed
Date: Mon, 18 Dec 2017 20:22:45 +0800
Message-Id: <20171218122247.3488-44-ming.lei@redhat.com>
In-Reply-To: <20171218122247.3488-1-ming.lei@redhat.com>
References: <20171218122247.3488-1-ming.lei@redhat.com>

In some situations, such as block direct I/O, we can't use bio_add_page()
to merge pages into a multipage bvec, so introduce a new function that
converts a page array into a segment array. These cases can then benefit
from multipage bvecs too.
Signed-off-by: Ming Lei
---
 block/bio.c | 54 ++++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 48 insertions(+), 6 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 34af328681a8..e808d8352067 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -882,6 +882,41 @@ int bio_add_page(struct bio *bio, struct page *page,
 }
 EXPORT_SYMBOL(bio_add_page);
 
+static unsigned convert_to_segs(struct bio *bio, struct page **pages,
+				unsigned char *page_cnt,
+				unsigned nr_pages)
+{
+
+	unsigned idx;
+	unsigned nr_seg = 0;
+	struct request_queue *q = NULL;
+
+	if (bio->bi_disk)
+		q = bio->bi_disk->queue;
+
+	if (!q || !blk_queue_cluster(q)) {
+		memset(page_cnt, 0, nr_pages);
+		return nr_pages;
+	}
+
+	page_cnt[nr_seg] = 0;
+	for (idx = 1; idx < nr_pages; idx++) {
+		struct page *pg_s = pages[nr_seg];
+		struct page *pg = pages[idx];
+
+		if (page_to_pfn(pg_s) + page_cnt[nr_seg] + 1 ==
+		    page_to_pfn(pg)) {
+			page_cnt[nr_seg]++;
+		} else {
+			page_cnt[++nr_seg] = 0;
+			if (nr_seg < idx)
+				pages[nr_seg] = pg;
+		}
+	}
+
+	return nr_seg + 1;
+}
+
 /**
  * bio_iov_iter_get_pages - pin user or kernel pages and add them to a bio
  * @bio: bio to add pages to
@@ -897,6 +932,8 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 	struct page **pages = (struct page **)bv;
 	size_t offset, diff;
 	ssize_t size;
+	unsigned short nr_segs;
+	unsigned char page_cnt[nr_pages];	/* at most 256 pages */
 
 	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
 	if (unlikely(size <= 0))
@@ -912,13 +949,18 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 	 * need to be reflected here as well.
	 */
	bio->bi_iter.bi_size += size;
-	bio->bi_vcnt += nr_pages;
-	diff = (nr_pages * PAGE_SIZE - offset) - size;
-	while (nr_pages--) {
-		bv[nr_pages].bv_page = pages[nr_pages];
-		bv[nr_pages].bv_len = PAGE_SIZE;
-		bv[nr_pages].bv_offset = 0;
+
+	/* convert into segments */
+	nr_segs = convert_to_segs(bio, pages, page_cnt, nr_pages);
+	bio->bi_vcnt += nr_segs;
+
+	while (nr_segs--) {
+		unsigned cnt = (unsigned)page_cnt[nr_segs] + 1;
+
+		bv[nr_segs].bv_page = pages[nr_segs];
+		bv[nr_segs].bv_len = PAGE_SIZE * cnt;
+		bv[nr_segs].bv_offset = 0;
 	}
 
 	bv[0].bv_offset += offset;
-- 
2.9.5