From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Dmitry Fomichev, Damien Le Moal, Mike Snitzer
Subject: [PATCH 4.14 54/62] dm zoned: improve error handling in i/o map code
Date: Tue, 27 Aug 2019 09:50:59 +0200
Message-Id: <20190827072703.645002259@linuxfoundation.org>
In-Reply-To: <20190827072659.803647352@linuxfoundation.org>
References: <20190827072659.803647352@linuxfoundation.org>

From: Dmitry Fomichev

commit d7428c50118e739e672656c28d2b26b09375d4e0 upstream.

Some errors are ignored in the I/O path during queueing chunks for
processing by chunk works. Since at least these errors are transient in
nature, it should be possible to retry the failed incoming commands.

The fix -

Errors that can happen while queueing chunks are carried upwards to the
main mapping function and it now returns DM_MAPIO_REQUEUE for any
incoming requests that can not be properly queued. Error logging/debug
messages are added where needed.

Fixes: 3b1a94c88b79 ("dm zoned: drive-managed zoned block device target")
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Fomichev
Reviewed-by: Damien Le Moal
Signed-off-by: Mike Snitzer
Signed-off-by: Greg Kroah-Hartman

---
 drivers/md/dm-zoned-target.c |   22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

--- a/drivers/md/dm-zoned-target.c
+++ b/drivers/md/dm-zoned-target.c
@@ -513,22 +513,24 @@ static void dmz_flush_work(struct work_s
  * Get a chunk work and start it to process a new BIO.
  * If the BIO chunk has no work yet, create one.
  */
-static void dmz_queue_chunk_work(struct dmz_target *dmz, struct bio *bio)
+static int dmz_queue_chunk_work(struct dmz_target *dmz, struct bio *bio)
 {
 	unsigned int chunk = dmz_bio_chunk(dmz->dev, bio);
 	struct dm_chunk_work *cw;
+	int ret = 0;
 
 	mutex_lock(&dmz->chunk_lock);
 
 	/* Get the BIO chunk work. If one is not active yet, create one */
 	cw = radix_tree_lookup(&dmz->chunk_rxtree, chunk);
 	if (!cw) {
-		int ret;
 
 		/* Create a new chunk work */
 		cw = kmalloc(sizeof(struct dm_chunk_work), GFP_NOIO);
-		if (!cw)
+		if (unlikely(!cw)) {
+			ret = -ENOMEM;
 			goto out;
+		}
 
 		INIT_WORK(&cw->work, dmz_chunk_work);
 		atomic_set(&cw->refcount, 0);
@@ -539,7 +541,6 @@ static void dmz_queue_chunk_work(struct
 		ret = radix_tree_insert(&dmz->chunk_rxtree, chunk, cw);
 		if (unlikely(ret)) {
 			kfree(cw);
-			cw = NULL;
 			goto out;
 		}
 	}
@@ -547,10 +548,12 @@ static void dmz_queue_chunk_work(struct
 	bio_list_add(&cw->bio_list, bio);
 	dmz_get_chunk_work(cw);
 
+	dmz_reclaim_bio_acc(dmz->reclaim);
 	if (queue_work(dmz->chunk_wq, &cw->work))
 		dmz_get_chunk_work(cw);
 out:
 	mutex_unlock(&dmz->chunk_lock);
+	return ret;
 }
 
 /*
@@ -564,6 +567,7 @@ static int dmz_map(struct dm_target *ti,
 	sector_t sector = bio->bi_iter.bi_sector;
 	unsigned int nr_sectors = bio_sectors(bio);
 	sector_t chunk_sector;
+	int ret;
 
 	dmz_dev_debug(dev, "BIO op %d sector %llu + %u => chunk %llu, block %llu, %u blocks",
 		      bio_op(bio), (unsigned long long)sector, nr_sectors,
@@ -601,8 +605,14 @@ static int dmz_map(struct dm_target *ti,
 		dm_accept_partial_bio(bio, dev->zone_nr_sectors - chunk_sector);
 
 	/* Now ready to handle this BIO */
-	dmz_reclaim_bio_acc(dmz->reclaim);
-	dmz_queue_chunk_work(dmz, bio);
+	ret = dmz_queue_chunk_work(dmz, bio);
+	if (ret) {
+		dmz_dev_debug(dmz->dev,
+			      "BIO op %d, can't process chunk %llu, err %i\n",
+			      bio_op(bio), (u64)dmz_bio_chunk(dmz->dev, bio),
+			      ret);
+		return DM_MAPIO_REQUEUE;
+	}
 
 	return DM_MAPIO_SUBMITTED;
 }
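For readers less familiar with device-mapper map return codes, below is a
minimal userspace sketch of the pattern this patch adopts. It is not kernel
code: queue_chunk_work() and map_bio() are illustrative stand-ins for
dmz_queue_chunk_work() and dmz_map(), and the simulated -ENOMEM stands in for
a kmalloc(GFP_NOIO) failure. What it shows is only the shape of the change:
the queueing helper reports an errno-style result instead of failing silently,
and the mapping path turns any such failure into a "requeue" verdict so the
request can be retried rather than dropped.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

enum map_result { MAP_SUBMITTED, MAP_REQUEUE };

/* Illustrative stand-in for struct dm_chunk_work. */
struct chunk_work {
	int chunk;
};

/*
 * Stand-in for dmz_queue_chunk_work(): instead of swallowing failures,
 * it reports them to the caller as a negative errno value.
 */
static int queue_chunk_work(int chunk, int simulate_oom)
{
	struct chunk_work *cw;

	if (simulate_oom)	/* models kmalloc(GFP_NOIO) returning NULL */
		return -ENOMEM;

	cw = malloc(sizeof(*cw));
	if (!cw)
		return -ENOMEM;
	cw->chunk = chunk;
	printf("chunk %d queued\n", cw->chunk);
	free(cw);		/* the real target hands the work to a workqueue */
	return 0;
}

/*
 * Stand-in for dmz_map(): any queueing error becomes a "requeue" verdict
 * so the caller can retry the request, mirroring DM_MAPIO_REQUEUE.
 */
static enum map_result map_bio(int chunk, int simulate_oom)
{
	int ret = queue_chunk_work(chunk, simulate_oom);

	if (ret) {
		fprintf(stderr, "can't process chunk %d, err %d\n", chunk, ret);
		return MAP_REQUEUE;
	}
	return MAP_SUBMITTED;
}

int main(void)
{
	/* First attempt hits a transient failure and is requeued; retry succeeds. */
	if (map_bio(7, 1) == MAP_REQUEUE)
		map_bio(7, 0);
	return 0;
}

In the real target, returning DM_MAPIO_REQUEUE asks the device-mapper core to
resubmit the BIO later, which fits here because the failures being propagated
(memory allocation and radix-tree insertion) are transient.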