Subject: Re: [PATCH v2 11/51] md: raid1: initialize bvec table via bio_add_page()
From: Guoqing Jiang
To: Ming Lei, Jens Axboe, Christoph Hellwig, Huang Ying, Andrew Morton, Alexander Viro
Cc: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, Shaohua Li, linux-raid@vger.kernel.org
Date: Tue, 27 Jun 2017 17:36:39 +0800
Message-ID: <59522727.7040700@suse.com>
In-Reply-To: <20170626121034.3051-12-ming.lei@redhat.com>

On 06/26/2017 08:09 PM, Ming Lei wrote:
> We will support multipage bvec soon, so initialize the bvec
> table using the standard way instead of writing the
> table directly. Otherwise it won't work any more once
> multipage bvec is enabled.
>
> Cc: Shaohua Li
> Cc: linux-raid@vger.kernel.org
> Signed-off-by: Ming Lei
> ---
>  drivers/md/raid1.c | 27 ++++++++++++++-------------
>  1 file changed, 14 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
> index 3febfc8391fb..835c42396861 100644
> --- a/drivers/md/raid1.c
> +++ b/drivers/md/raid1.c
> @@ -2086,10 +2086,8 @@ static void process_checks(struct r1bio *r1_bio)
>  	/* Fix variable parts of all bios */
>  	vcnt = (r1_bio->sectors + PAGE_SIZE / 512 - 1) >> (PAGE_SHIFT - 9);
>  	for (i = 0; i < conf->raid_disks * 2; i++) {
> -		int j;
>  		int size;
>  		blk_status_t status;
> -		struct bio_vec *bi;
>  		struct bio *b = r1_bio->bios[i];
>  		struct resync_pages *rp = get_resync_pages(b);
>  		if (b->bi_end_io != end_sync_read)
> @@ -2098,8 +2096,6 @@ static void process_checks(struct r1bio *r1_bio)
>  		status = b->bi_status;
>  		bio_reset(b);
>  		b->bi_status = status;
> -		b->bi_vcnt = vcnt;
> -		b->bi_iter.bi_size = r1_bio->sectors << 9;
>  		b->bi_iter.bi_sector = r1_bio->sector +
>  			conf->mirrors[i].rdev->data_offset;
>  		b->bi_bdev = conf->mirrors[i].rdev->bdev;
> @@ -2107,15 +2103,20 @@ static void process_checks(struct r1bio *r1_bio)
>  		rp->raid_bio = r1_bio;
>  		b->bi_private = rp;
>
> -		size = b->bi_iter.bi_size;
> -		bio_for_each_segment_all(bi, b, j) {
> -			bi->bv_offset = 0;
> -			if (size > PAGE_SIZE)
> -				bi->bv_len = PAGE_SIZE;
> -			else
> -				bi->bv_len = size;
> -			size -= PAGE_SIZE;
> -		}
> +		/* initialize bvec table again */
> +		rp->idx = 0;
> +		size = r1_bio->sectors << 9;
> +		do {
> +			struct page *page = resync_fetch_page(rp, rp->idx++);
> +			int len = min_t(int, size, PAGE_SIZE);
> +
> +			/*
> +			 * won't fail because the vec table is big
> +			 * enough to hold all these pages
> +			 */
> +			bio_add_page(b, page, len, 0);
> +			size -= len;
> +		} while (rp->idx < RESYNC_PAGES && size > 0);
>  	}

The above section seems similar to the reset_bvec_table helper introduced in the next patch; why is there a difference between raid1 and raid10?
Maybe it would be better to add reset_bvec_table into md.c and call it from both raid1 and raid10 (rough sketch below); just my 2 cents.

Thanks,
Guoqing
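P.S. Just to illustrate the idea, here is a rough, untested sketch of such a shared helper, lifted straight from the loop in this patch. The name md_reset_bvec_table and its placement in md.c are my assumption, not something in the series:

/*
 * Hypothetical shared helper for raid1/raid10: re-populate the bvec
 * table of a just-reset resync bio from its resync_pages, using
 * bio_add_page() instead of writing the table directly.
 */
static void md_reset_bvec_table(struct bio *bio, struct resync_pages *rp,
				int size)
{
	/* initialize bvec table again */
	rp->idx = 0;
	do {
		struct page *page = resync_fetch_page(rp, rp->idx++);
		int len = min_t(int, size, PAGE_SIZE);

		/*
		 * won't fail because the vec table is big
		 * enough to hold all these pages
		 */
		bio_add_page(bio, page, len, 0);
		size -= len;
	} while (rp->idx < RESYNC_PAGES && size > 0);
}

The raid1 hunk above would then collapse to md_reset_bvec_table(b, rp, r1_bio->sectors << 9), and raid10 could make the same call from its resync path.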