Date: Wed, 21 Nov 2018 18:19:38 +0800
From: Ming Lei
To: Sagi Grimberg
Cc: Christoph Hellwig, Jens Axboe, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, Dave Chinner,
	Kent Overstreet, Mike Snitzer, dm-devel@redhat.com, Alexander Viro,
	linux-fsdevel@vger.kernel.org, Shaohua Li, linux-raid@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, David Sterba, linux-btrfs@vger.kernel.org,
	"Darrick J. Wong", linux-xfs@vger.kernel.org, Gao Xiang,
	Theodore Ts'o, linux-ext4@vger.kernel.org, Coly Li,
	linux-bcache@vger.kernel.org, Boaz Harrosh, Bob Peterson,
	cluster-devel@redhat.com
Subject: Re: [PATCH V10 09/19] block: introduce bio_bvecs()
Message-ID: <20181121101937.GA16204@ming.t460p>
References: <002fe56b-25e4-573e-c09b-bb12c3e8d25a@grimberg.me>
	<20181120161651.GB2629@lst.de>
	<53526aae-fb9b-ee38-0a01-e5899e2d4e4d@grimberg.me>
	<20181121005902.GA31748@ming.t460p>
	<2d9bee7a-f010-dcf4-1184-094101058584@grimberg.me>
	<20181121034415.GA8408@ming.t460p>
	<2a47d336-c19b-6bf4-c247-d7382871eeea@grimberg.me>
	<7378bf49-5a7e-5622-d4d1-808ba37ce656@grimberg.me>
	<20181121050359.GA31915@ming.t460p>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
Sender: linux-ext4-owner@vger.kernel.org

On Tue, Nov 20, 2018 at 09:35:07PM -0800, Sagi Grimberg wrote:
>
> > > > Wait, I see that the bvec is still a single array per bio. When you said
> > > > a table I thought you meant a 2-dimensional array...
> > >
> > > I mean a new 1-d table A has to be created for multiple bios in one rq,
> > > and built in the following way
> > >
> > > 	rq_for_each_bvec(tmp, rq, rq_iter)
> > > 		*A++ = tmp;
> > >
> > > Then you can pass A to iov_iter_bvec() & send().
> > >
> > > Given it is over TCP, I guess it should be doable for you to preallocate one
> > > 256-bvec table in one page for each request, then set the max segment size to
> > > (unsigned int)-1 and the max segment number to 256; the preallocated table
> > > should work anytime.
>
> A 256-bvec table is really a lot to preallocate, especially when it's not
> needed; I can easily initialize the bvec_iter on the bio bvec. If this
> involves preallocating for the worst case, then I don't consider this to
> be an improvement.

If you don't provide one single bvec table, I understand you may not be
able to send this request via a single send().

The bvec_iter initialization is easy to do:

	bvec_iter = bio->bi_iter

when you move to a new bio; please refer to __bio_for_each_bvec() or
__bio_for_each_segment().

Thanks,
Ming
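
[Editor's sketch] A minimal, hypothetical C sketch of the preallocated-table
approach discussed above: walk every (possibly multi-page) bvec of the request,
copy the entries into one flat table, and hand that table to iov_iter_bvec() so
the whole request can be pushed out with a single send. The function name, the
256-entry bound, and the availability of rq_for_each_bvec() (from the
multi-page-bvec work this series belongs to) are assumptions for illustration,
not code from the patch series itself.

	/*
	 * Illustrative sketch only: flatten all bios of a request into one
	 * preallocated bvec table and build an iov_iter over it.  Assumes the
	 * rq_for_each_bvec() helper used in the quoted snippet above.
	 */
	#include <linux/blkdev.h>
	#include <linux/blk-mq.h>
	#include <linux/bvec.h>
	#include <linux/uio.h>

	#define MAX_RQ_BVECS	256	/* worst-case table size discussed above */

	static int rq_to_bvec_table(struct request *rq, struct bio_vec *table,
				    struct iov_iter *iter)
	{
		struct req_iterator rq_iter;
		struct bio_vec tmp;
		unsigned int nr = 0;

		/* one table entry per (multi-page) bvec of every bio in rq */
		rq_for_each_bvec(tmp, rq, rq_iter) {
			if (nr >= MAX_RQ_BVECS)
				return -E2BIG;
			table[nr++] = tmp;
		}

		/* the table is the data source for a subsequent sendmsg() */
		iov_iter_bvec(iter, WRITE, table, nr, blk_rq_bytes(rq));
		return 0;
	}

The alternative the reply points at avoids the table entirely: whenever the
send path moves on to the next bio of the request, it resets the iterator with
bvec_iter = bio->bi_iter, which is exactly how __bio_for_each_bvec() and
__bio_for_each_segment() begin their walk of a single bio.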