Subject: Re: [PATCH v3 07/49] bcache: comment on direct access to bvec table
From: Coly Li
To: Ming Lei, Jens Axboe, Christoph Hellwig, Huang Ying, Andrew Morton, Alexander Viro
Cc: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-bcache@vger.kernel.org
Date: Tue, 8 Aug 2017 20:36:21 +0800
Message-ID: <12a50c71-7b66-f9a4-6f9b-c10987426e30@coly.li>
In-Reply-To: <20170808084548.18963-8-ming.lei@redhat.com>
References: <20170808084548.18963-1-ming.lei@redhat.com> <20170808084548.18963-8-ming.lei@redhat.com>

On 2017/8/8 at 4:45 PM, Ming Lei wrote:
> Looks all are safe after multipage bvec is supported.
>
> Cc: linux-bcache@vger.kernel.org
> Signed-off-by: Ming Lei

Acked-by: Coly Li

Coly Li

> ---
>  drivers/md/bcache/btree.c | 1 +
>  drivers/md/bcache/super.c | 6 ++++++
>  drivers/md/bcache/util.c  | 7 +++++++
>  3 files changed, 14 insertions(+)
>
> diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
> index 866dcf78ff8e..3da595ae565b 100644
> --- a/drivers/md/bcache/btree.c
> +++ b/drivers/md/bcache/btree.c
> @@ -431,6 +431,7 @@ static void do_btree_node_write(struct btree *b)
>
>  		continue_at(cl, btree_node_write_done, NULL);
>  	} else {
> +		/* No harm for multipage bvec since the new bio is just allocated */
>  		b->bio->bi_vcnt = 0;
>  		bch_bio_map(b->bio, i);
>
> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index 8352fad765f6..6808f548cd13 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -208,6 +208,7 @@ static void write_bdev_super_endio(struct bio *bio)
>
>  static void __write_super(struct cache_sb *sb, struct bio *bio)
>  {
> +	/* single page bio, safe for multipage bvec */
>  	struct cache_sb *out = page_address(bio->bi_io_vec[0].bv_page);
>  	unsigned i;
>
> @@ -1154,6 +1155,8 @@ static void register_bdev(struct cache_sb *sb, struct page *sb_page,
>  	dc->bdev->bd_holder = dc;
>
>  	bio_init(&dc->sb_bio, dc->sb_bio.bi_inline_vecs, 1);
> +
> +	/* single page bio, safe for multipage bvec */
>  	dc->sb_bio.bi_io_vec[0].bv_page = sb_page;
>  	get_page(sb_page);
>
> @@ -1799,6 +1802,7 @@ void bch_cache_release(struct kobject *kobj)
>  	for (i = 0; i < RESERVE_NR; i++)
>  		free_fifo(&ca->free[i]);
>
> +	/* single page bio, safe for multipage bvec */
>  	if (ca->sb_bio.bi_inline_vecs[0].bv_page)
>  		put_page(ca->sb_bio.bi_io_vec[0].bv_page);
>
> @@ -1854,6 +1858,8 @@ static int register_cache(struct cache_sb *sb, struct page *sb_page,
>  	ca->bdev->bd_holder = ca;
>
>  	bio_init(&ca->sb_bio, ca->sb_bio.bi_inline_vecs, 1);
> +
> +	/* single page bio, safe for multipage bvec */
>  	ca->sb_bio.bi_io_vec[0].bv_page = sb_page;
>  	get_page(sb_page);
>
> diff --git a/drivers/md/bcache/util.c b/drivers/md/bcache/util.c
> index 8c3a938f4bf0..11b4230ea6ad 100644
> --- a/drivers/md/bcache/util.c
> +++ b/drivers/md/bcache/util.c
> @@ -223,6 +223,13 @@ uint64_t bch_next_delay(struct bch_ratelimit *d, uint64_t done)
>  		: 0;
>  }
>
> +/*
> + * Generally it isn't good to access .bi_io_vec and .bi_vcnt
> + * directly, the preferred way is bio_add_page, but in
> + * this case, bch_bio_map() supposes that the bvec table
> + * is empty, so it is safe to access .bi_vcnt & .bi_io_vec
> + * in this way even after multipage bvec is supported.
> + */
>  void bch_bio_map(struct bio *bio, void *base)
>  {
>  	size_t size = bio->bi_iter.bi_size;