From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Bryan Gurney, Mike Snitzer
Subject: [PATCH 5.2 136/162] dm dust: use dust block size for badblocklist index
Date: Tue, 27 Aug 2019 09:51:04 +0200
Message-Id: <20190827072743.332150234@linuxfoundation.org>
In-Reply-To: <20190827072738.093683223@linuxfoundation.org>
References: <20190827072738.093683223@linuxfoundation.org>

From: Bryan Gurney

commit 08c04c84a5cde3af9baac0645a7496d6dcd76822 upstream.

Change the "frontend" dust_remove_block, dust_add_block, and
dust_query_block functions to store the "dust block number", instead
of the sector number corresponding to the "dust block number".

For the "backend" functions dust_map_read and dust_map_write,
right-shift by sect_per_block_shift.

This fixes the inability to emulate failure beyond the first sector
of each "dust block" (for devices with a "dust block size" larger
than 512 bytes).

Fixes: e4f3fabd67480bf ("dm: add dust target")
Cc: stable@vger.kernel.org
Signed-off-by: Bryan Gurney
Signed-off-by: Mike Snitzer
Signed-off-by: Greg Kroah-Hartman

---
 drivers/md/dm-dust.c |   11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

--- a/drivers/md/dm-dust.c
+++ b/drivers/md/dm-dust.c
@@ -25,6 +25,7 @@ struct dust_device {
 	unsigned long long badblock_count;
 	spinlock_t dust_lock;
 	unsigned int blksz;
+	int sect_per_block_shift;
 	unsigned int sect_per_block;
 	sector_t start;
 	bool fail_read_on_bb:1;
@@ -79,7 +80,7 @@ static int dust_remove_block(struct dust
 	unsigned long flags;
 
 	spin_lock_irqsave(&dd->dust_lock, flags);
-	bblock = dust_rb_search(&dd->badblocklist, block * dd->sect_per_block);
+	bblock = dust_rb_search(&dd->badblocklist, block);
 
 	if (bblock == NULL) {
 		if (!dd->quiet_mode) {
@@ -113,7 +114,7 @@ static int dust_add_block(struct dust_de
 	}
 
 	spin_lock_irqsave(&dd->dust_lock, flags);
-	bblock->bb = block * dd->sect_per_block;
+	bblock->bb = block;
 	if (!dust_rb_insert(&dd->badblocklist, bblock)) {
 		if (!dd->quiet_mode) {
 			DMERR("%s: block %llu already in badblocklist",
@@ -138,7 +139,7 @@ static int dust_query_block(struct dust_
 	unsigned long flags;
 
 	spin_lock_irqsave(&dd->dust_lock, flags);
-	bblock = dust_rb_search(&dd->badblocklist, block * dd->sect_per_block);
+	bblock = dust_rb_search(&dd->badblocklist, block);
 	if (bblock != NULL)
 		DMINFO("%s: block %llu found in badblocklist", __func__, block);
 	else
@@ -165,6 +166,7 @@ static int dust_map_read(struct dust_dev
 	int ret = DM_MAPIO_REMAPPED;
 
 	if (fail_read_on_bb) {
+		thisblock >>= dd->sect_per_block_shift;
 		spin_lock_irqsave(&dd->dust_lock, flags);
 		ret = __dust_map_read(dd, thisblock);
 		spin_unlock_irqrestore(&dd->dust_lock, flags);
@@ -195,6 +197,7 @@ static int dust_map_write(struct dust_de
 	unsigned long flags;
 
 	if (fail_read_on_bb) {
+		thisblock >>= dd->sect_per_block_shift;
 		spin_lock_irqsave(&dd->dust_lock, flags);
 		__dust_map_write(dd, thisblock);
 		spin_unlock_irqrestore(&dd->dust_lock, flags);
@@ -331,6 +334,8 @@ static int dust_ctr(struct dm_target *ti
 	dd->blksz = blksz;
 	dd->start = tmp;
 
+	dd->sect_per_block_shift = __ffs(sect_per_block);
+
 	/*
 	 * Whether to fail a read on a "bad" block.
 	 * Defaults to false; enabled later by message.
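
[ Editor's note, not part of the upstream patch: a minimal standalone
  userspace sketch of the arithmetic the fix relies on. It assumes a
  hypothetical 4096-byte dust block size (8 sectors of 512 bytes, so
  __ffs(8) == 3, mirrored below with __builtin_ffs) and a hypothetical
  bad block number 5. It shows why keying the badblocklist by block
  number and right-shifting the incoming sector makes every sector of a
  bad block fail, whereas the old sector-based key matched only the
  first sector of the block. ]

#include <stdio.h>

int main(void)
{
        unsigned int sect_per_block = 8;                /* blksz 4096 / 512-byte sectors */
        int sect_per_block_shift = __builtin_ffs(sect_per_block) - 1;  /* __ffs(8) == 3 */
        unsigned long long bad_block = 5;               /* key stored after the fix */

        /* Block 5 covers sectors 40..47 with an 8-sector block size. */
        for (unsigned long long sector = 40; sector < 48; sector++) {
                unsigned long long thisblock = sector >> sect_per_block_shift;

                printf("sector %3llu -> block %llu : %s\n", sector, thisblock,
                       thisblock == bad_block ? "fails (bad block)" : "passes");
        }

        /*
         * Before the fix, the rb-tree key was bad_block * sect_per_block
         * (i.e. 40) and the lookup used the raw sector, so only I/O at
         * sector 40 matched; sectors 41..47 of the same block never failed.
         */
        return 0;
}

[ Compiled with any C compiler, this prints that all of sectors 40..47
  resolve to block 5 and would now be failed. ]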