Date: Tue, 4 Apr 2017 08:13:15 +0900
From: Minchan Kim
To: Andrew Morton
Cc: Minchan Kim, Sergey Senozhatsky, Jens Axboe, Hannes Reinecke,
    Johannes Thumshirn
Subject: Re: [PATCH 1/5] zram: handle multiple pages attached bio's bvec
Message-ID: <20170403231315.GA8903@bbox>
In-Reply-To: <20170403154528.6470165dd791cf8a23ae57c8@linux-foundation.org>
References: <1491196653-7388-1-git-send-email-minchan@kernel.org>
 <1491196653-7388-2-git-send-email-minchan@kernel.org>
 <20170403154528.6470165dd791cf8a23ae57c8@linux-foundation.org>

Hi Andrew,

On Mon, Apr 03, 2017 at 03:45:28PM -0700, Andrew Morton wrote:
> On Mon, 3 Apr 2017 14:17:29 +0900 Minchan Kim wrote:
>
> > Johannes Thumshirn reported a kernel panic when using an NVMe over
> > Fabrics loopback target with zram.
> >
> > The reason is that zram expects each bvec in a bio to contain a
> > single page, but nvme can attach a huge bulk of pages to a bio's
> > bvec, so zram's index arithmetic goes wrong and the resulting
> > out-of-bounds access causes the panic.
> >
> > It could be solved by limiting max_sectors with SECTORS_PER_PAGE as
> > in [1], but that makes zram slow because every bio has to be split
> > per page, so this patch instead makes zram aware of multiple pages
> > in a bvec and solves the problem without any regression.
> >
> > [1] 0bc315381fe9, zram: set physical queue limits to avoid array out
> > of bounds accesses
>
> This isn't a cleanup - it fixes a panic (or is it a BUG or is it an
> oops, or...)

I should have written this more carefully. Johannes reported the
problem together with fix [1], and Jens has already sent [1] to
mainline. However, during the discussion we found a nicer way to solve
it, so this patch is a revert of [1] plus a fix that teaches zram to
handle multiple pages per bvec, with no need to split the bio. (A
rough, untested sketch of the idea is at the end of this mail.)

Thanks.

> How serious is this bug? Should the fix be backported into -stable
> kernels? etc.
>
> A better description of the bug's behaviour would be appropriate.
>
> > Cc: Jens Axboe
> > Cc: Hannes Reinecke
> > Reported-by: Johannes Thumshirn
> > Tested-by: Johannes Thumshirn
> > Reviewed-by: Johannes Thumshirn
> > Signed-off-by: Johannes Thumshirn
> > Signed-off-by: Minchan Kim
>
> This signoff trail is confusing. It somewhat implies that Johannes
> authored the patch which I don't think is the case?
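
For reference, the queue-limit approach of [1] boils down to capping
every request at a single page, which is why the block layer then has
to split large bios before zram sees them (paraphrased from the commit,
not the literal diff):

	/* 0bc315381fe9, roughly: never let a request exceed one page */
	blk_queue_max_hw_sectors(zram->disk->queue, SECTORS_PER_PAGE);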
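
The replacement idea, as an untested sketch: walk each bvec in at most
page-sized chunks so zram's per-page index arithmetic never sees a
chunk spanning a page boundary. Names here are illustrative;
zram_bvec_rw() is zram's existing per-page helper, and
update_position() is assumed to advance index/offset across page
boundaries.

static int zram_bio_rw(struct zram *zram, struct bio *bio, bool is_write)
{
	/* starting zram page index and byte offset within that page */
	u32 index = bio->bi_iter.bi_sector >> SECTORS_PER_PAGE_SHIFT;
	int offset = (bio->bi_iter.bi_sector &
		      (SECTORS_PER_PAGE - 1)) << SECTOR_SHIFT;
	struct bio_vec bvec;
	struct bvec_iter iter;

	bio_for_each_segment(bvec, bio, iter) {
		struct bio_vec bv = bvec;
		unsigned int unwritten = bvec.bv_len;

		do {
			/* clamp the chunk to the rest of the current page */
			bv.bv_len = min_t(unsigned int,
					  PAGE_SIZE - offset, unwritten);
			if (zram_bvec_rw(zram, &bv, index, offset,
					 is_write) < 0)
				return -EIO;

			bv.bv_offset += bv.bv_len;
			unwritten -= bv.bv_len;
			update_position(&index, &offset, &bv);
		} while (unwritten);
	}

	return 0;
}

With this, a bvec carrying several pages is consumed as a series of
page-sized chunks, so no bio split is needed and [1] can be reverted.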