Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755371AbdLLNEP (ORCPT );
	Tue, 12 Dec 2017 08:04:15 -0500
Received: from mail.linuxfoundation.org ([140.211.169.12]:34906 "EHLO
	mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1754844AbdLLNBd (ORCPT );
	Tue, 12 Dec 2017 08:01:33 -0500
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org,
	Johannes Thumshirn, Hannes Reinecke,
	Sergey Senozhatsky, Jens Axboe, Sasha Levin
Subject: [PATCH 4.9 110/148] zram: set physical queue limits to avoid array out of bounds accesses
Date: Tue, 12 Dec 2017 13:45:20 +0100
Message-Id: <20171212124437.115284927@linuxfoundation.org>
X-Mailer: git-send-email 2.15.1
In-Reply-To: <20171212124431.207182779@linuxfoundation.org>
References: <20171212124431.207182779@linuxfoundation.org>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 1423
Lines: 36

4.9-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Johannes Thumshirn

[ Upstream commit 0bc315381fe9ed9fb91db8b0e82171b645ac008f ]

zram can handle at most SECTORS_PER_PAGE sectors in a bio's bvec. When
using the NVMe over Fabrics loopback target, which potentially sends a
huge bulk of pages attached to the bio's bvec, this results in a kernel
panic because of array out of bounds accesses in zram_decompress_page().

Signed-off-by: Johannes Thumshirn
Reviewed-by: Hannes Reinecke
Reviewed-by: Sergey Senozhatsky
Signed-off-by: Jens Axboe
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 drivers/block/zram/zram_drv.c |    2 ++
 1 file changed, 2 insertions(+)

--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1286,6 +1286,8 @@ static int zram_add(void)
 	blk_queue_io_min(zram->disk->queue, PAGE_SIZE);
 	blk_queue_io_opt(zram->disk->queue, PAGE_SIZE);
 	zram->disk->queue->limits.discard_granularity = PAGE_SIZE;
+	zram->disk->queue->limits.max_sectors = SECTORS_PER_PAGE;
+	zram->disk->queue->limits.chunk_sectors = 0;
	blk_queue_max_discard_sectors(zram->disk->queue, UINT_MAX);
 	/*
 	 * zram_bio_discard() will clear all logical blocks if logical block
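
For readers unfamiliar with the constant used as the new cap: SECTORS_PER_PAGE
is derived in zram's headers from PAGE_SHIFT and SECTOR_SHIFT. The standalone
C sketch below is an illustration, not part of the patch, and assumes the
usual values of 512-byte block-layer sectors and 4 KiB pages:

#include <stdio.h>

/* Assumed values: the block layer uses 512-byte sectors (SECTOR_SHIFT == 9)
 * and most architectures use 4 KiB pages (PAGE_SHIFT == 12). */
#define SECTOR_SHIFT		9
#define PAGE_SHIFT		12

/* Mirrors the zram definitions: how many sectors fit in one page. */
#define SECTORS_PER_PAGE_SHIFT	(PAGE_SHIFT - SECTOR_SHIFT)
#define SECTORS_PER_PAGE	(1 << SECTORS_PER_PAGE_SHIFT)

int main(void)
{
	/* 4096 / 512 = 8: with the patch applied, a bio may span at most
	 * 8 sectors, i.e. one page, so zram's per-page indexing into the
	 * bio's bvec in zram_decompress_page() stays in bounds. */
	printf("SECTORS_PER_PAGE = %d\n", SECTORS_PER_PAGE);
	return 0;
}

Clearing chunk_sectors in the same hunk disables chunk-based bio splitting,
leaving max_sectors as the effective per-bio cap.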