From: Minchan Kim <minchan@kernel.org>
To: Andrew Morton
Cc: Sergey Senozhatsky, Minchan Kim, Jens Axboe, Hannes Reinecke,
    Johannes Thumshirn
Subject: [PATCH 1/5] zram: handle multiple pages attached bio's bvec
Date: Mon, 3 Apr 2017 14:17:29 +0900
Message-ID: <1491196653-7388-2-git-send-email-minchan@kernel.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1491196653-7388-1-git-send-email-minchan@kernel.org>
References: <1491196653-7388-1-git-send-email-minchan@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Johannes Thumshirn reported a system panic when zram is used as the
backing device of an NVMe over Fabrics loopback target.  The reason is
that zram assumes each bvec in a bio covers at most a single page, but
nvme can attach bvecs spanning many pages, so zram's index arithmetic
goes wrong and the resulting out-of-bounds access panics the system.

The problem could be avoided by capping max_sectors at SECTORS_PER_PAGE
as [1] does, but that makes zram slow because every bio then has to be
split into single-page pieces.  This patch instead makes zram aware of
multi-page bvecs, handling them in page-sized chunks, so the bug is
fixed without any performance regression.

[1] 0bc315381fe9 ("zram: set physical queue limits to avoid array out
    of bounds accesses")

Cc: Jens Axboe
Cc: Hannes Reinecke
Reported-by: Johannes Thumshirn
Tested-by: Johannes Thumshirn
Reviewed-by: Johannes Thumshirn
Signed-off-by: Johannes Thumshirn
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
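For reviewers, a minimal userspace sketch of the index arithmetic
change (not kernel code: PAGE_SIZE is hard-coded to 4096, a bare length
stands in for struct bio_vec, and the update_position_old()/
update_position_new() names are invented for the comparison), showing
how the old update_position() under-advances the index for a
three-page bvec such as nvme may build:

#include <stdio.h>

#define PAGE_SIZE 4096u

/* Old behaviour: the index advances by at most one page per bvec. */
static void update_position_old(unsigned int *index, unsigned int *offset,
                                unsigned int bv_len)
{
        if (*offset + bv_len >= PAGE_SIZE)
                (*index)++;
        *offset = (*offset + bv_len) % PAGE_SIZE;
}

/* New behaviour: the index advances by every page the bvec covers. */
static void update_position_new(unsigned int *index, unsigned int *offset,
                                unsigned int bv_len)
{
        *index += (*offset + bv_len) / PAGE_SIZE;
        *offset = (*offset + bv_len) % PAGE_SIZE;
}

int main(void)
{
        unsigned int old_index = 0, old_offset = 0;
        unsigned int new_index = 0, new_offset = 0;

        /* One three-page bvec, as nvme may attach to a bio. */
        update_position_old(&old_index, &old_offset, 3 * PAGE_SIZE);
        update_position_new(&new_index, &new_offset, 3 * PAGE_SIZE);

        printf("old: index=%u offset=%u\n", old_index, old_offset);
        printf("new: index=%u offset=%u\n", new_index, new_offset);
        return 0;
}

Running it prints index=1 for the old arithmetic versus index=3 for
the new one: exactly the off-by-pages error behind the out-of-bounds
access.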
 drivers/block/zram/zram_drv.c | 39 ++++++++++-----------------------------
 1 file changed, 10 insertions(+), 29 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 01944419b1f3..28c2836f8c96 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -137,8 +137,7 @@ static inline bool valid_io_request(struct zram *zram,
 
 static void update_position(u32 *index, int *offset, struct bio_vec *bvec)
 {
-	if (*offset + bvec->bv_len >= PAGE_SIZE)
-		(*index)++;
+	*index += (*offset + bvec->bv_len) / PAGE_SIZE;
 	*offset = (*offset + bvec->bv_len) % PAGE_SIZE;
 }
 
@@ -838,34 +837,20 @@ static void __zram_make_request(struct zram *zram, struct bio *bio)
 	}
 
 	bio_for_each_segment(bvec, bio, iter) {
-		int max_transfer_size = PAGE_SIZE - offset;
-
-		if (bvec.bv_len > max_transfer_size) {
-			/*
-			 * zram_bvec_rw() can only make operation on a single
-			 * zram page. Split the bio vector.
-			 */
-			struct bio_vec bv;
-
-			bv.bv_page = bvec.bv_page;
-			bv.bv_len = max_transfer_size;
-			bv.bv_offset = bvec.bv_offset;
+		struct bio_vec bv = bvec;
+		unsigned int remained = bvec.bv_len;
 
+		do {
+			bv.bv_len = min_t(unsigned int, PAGE_SIZE - offset, remained);
 			if (zram_bvec_rw(zram, &bv, index, offset,
-					op_is_write(bio_op(bio))) < 0)
+					 op_is_write(bio_op(bio))) < 0)
 				goto out;
 
-			bv.bv_len = bvec.bv_len - max_transfer_size;
-			bv.bv_offset += max_transfer_size;
-			if (zram_bvec_rw(zram, &bv, index + 1, 0,
-					op_is_write(bio_op(bio))) < 0)
-				goto out;
-		} else
-			if (zram_bvec_rw(zram, &bvec, index, offset,
-					op_is_write(bio_op(bio))) < 0)
-				goto out;
+			bv.bv_offset += bv.bv_len;
+			remained -= bv.bv_len;
 
-		update_position(&index, &offset, &bvec);
+			update_position(&index, &offset, &bv);
+		} while (remained);
 	}
 
 	bio_endio(bio);
@@ -882,8 +867,6 @@ static blk_qc_t zram_make_request(struct request_queue *queue, struct bio *bio)
 {
 	struct zram *zram = queue->queuedata;
 
-	blk_queue_split(queue, &bio, queue->bio_split);
-
 	if (!valid_io_request(zram, bio->bi_iter.bi_sector,
 					bio->bi_iter.bi_size)) {
 		atomic64_inc(&zram->stats.invalid_io);
@@ -1191,8 +1174,6 @@ static int zram_add(void)
 	blk_queue_io_min(zram->disk->queue, PAGE_SIZE);
 	blk_queue_io_opt(zram->disk->queue, PAGE_SIZE);
 	zram->disk->queue->limits.discard_granularity = PAGE_SIZE;
-	zram->disk->queue->limits.max_sectors = SECTORS_PER_PAGE;
-	zram->disk->queue->limits.chunk_sectors = 0;
 	blk_queue_max_discard_sectors(zram->disk->queue, UINT_MAX);
 	/*
 	 * zram_bio_discard() will clear all logical blocks if logical block
-- 
2.7.4