Date: Tue, 7 Mar 2017 14:22:42 +0900
From: Minchan Kim <minchan@kernel.org>
To: Johannes Thumshirn
Cc: Jens Axboe, Nitin Gupta, Christoph Hellwig, Sergey Senozhatsky,
	Hannes Reinecke, yizhan@redhat.com,
	Linux Block Layer Mailinglist, Linux Kernel Mailinglist
Subject: Re: [PATCH] zram: set physical queue limits to avoid array out of bounds accesses
Message-ID: <20170307052242.GA29458@bbox>
References: <20170306102335.9180-1-jthumshirn@suse.de>
In-Reply-To: <20170306102335.9180-1-jthumshirn@suse.de>
X-Mailing-List: linux-kernel@vger.kernel.org

Hello Johannes,

On Mon, Mar 06, 2017 at 11:23:35AM +0100, Johannes Thumshirn wrote:
> zram can handle at most SECTORS_PER_PAGE sectors in a bio's bvec. When using
> the NVMe over Fabrics loopback target, which potentially sends a huge bulk of
> pages attached to the bio's bvec, this results in a kernel panic because of
> array out of bounds accesses in zram_decompress_page().

First of all, thanks for the report and the fix!

Unfortunately, I'm not familiar with that part of the block layer
interface. This looks like material for stable, so I want to understand
it clearly. Could you give me some more detail? In what scenario, and
how, does the problem occur? That would help me understand.

Thanks.
>
> Signed-off-by: Johannes Thumshirn
> ---
>  drivers/block/zram/zram_drv.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index e27d89a..dceb5ed 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -1189,6 +1189,8 @@ static int zram_add(void)
>  	blk_queue_io_min(zram->disk->queue, PAGE_SIZE);
>  	blk_queue_io_opt(zram->disk->queue, PAGE_SIZE);
>  	zram->disk->queue->limits.discard_granularity = PAGE_SIZE;
> +	zram->disk->queue->limits.max_sectors = SECTORS_PER_PAGE;
> +	zram->disk->queue->limits.chunk_sectors = 0;
>  	blk_queue_max_discard_sectors(zram->disk->queue, UINT_MAX);
>  	/*
>  	 * zram_bio_discard() will clear all logical blocks if logical block
> --
> 1.8.5.6
>