From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Dmitry Fomichev,
    Damien Le Moal, Mike Snitzer
Subject: [PATCH 5.2 143/162] dm zoned: improve error handling in i/o map code
Date: Tue, 27 Aug 2019 09:51:11 +0200
Message-Id: <20190827072743.655237170@linuxfoundation.org>
In-Reply-To: <20190827072738.093683223@linuxfoundation.org>
References: <20190827072738.093683223@linuxfoundation.org>

From: Dmitry Fomichev

commit d7428c50118e739e672656c28d2b26b09375d4e0 upstream.

Some errors are ignored in the I/O path during queueing chunks for
processing by chunk works. Since at least these errors are transient
in nature, it should be possible to retry the failed incoming commands.

The fix: errors that can happen while queueing chunks are carried
upwards to the main mapping function, which now returns
DM_MAPIO_REQUEUE for any incoming request that cannot be properly
queued. Error logging/debug messages are added where needed.

Fixes: 3b1a94c88b79 ("dm zoned: drive-managed zoned block device target")
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Fomichev
Reviewed-by: Damien Le Moal
Signed-off-by: Mike Snitzer
Signed-off-by: Greg Kroah-Hartman
---
 drivers/md/dm-zoned-target.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

--- a/drivers/md/dm-zoned-target.c
+++ b/drivers/md/dm-zoned-target.c
@@ -513,22 +513,24 @@ static void dmz_flush_work(struct work_s
  * Get a chunk work and start it to process a new BIO.
  * If the BIO chunk has no work yet, create one.
  */
-static void dmz_queue_chunk_work(struct dmz_target *dmz, struct bio *bio)
+static int dmz_queue_chunk_work(struct dmz_target *dmz, struct bio *bio)
 {
 	unsigned int chunk = dmz_bio_chunk(dmz->dev, bio);
 	struct dm_chunk_work *cw;
+	int ret = 0;
 
 	mutex_lock(&dmz->chunk_lock);
 
 	/* Get the BIO chunk work. If one is not active yet, create one */
 	cw = radix_tree_lookup(&dmz->chunk_rxtree, chunk);
 	if (!cw) {
-		int ret;
 
 		/* Create a new chunk work */
 		cw = kmalloc(sizeof(struct dm_chunk_work), GFP_NOIO);
-		if (!cw)
+		if (unlikely(!cw)) {
+			ret = -ENOMEM;
 			goto out;
+		}
 
 		INIT_WORK(&cw->work, dmz_chunk_work);
 		refcount_set(&cw->refcount, 0);
@@ -539,7 +541,6 @@ static void dmz_queue_chunk_work(struct
 		ret = radix_tree_insert(&dmz->chunk_rxtree, chunk, cw);
 		if (unlikely(ret)) {
 			kfree(cw);
-			cw = NULL;
 			goto out;
 		}
 	}
@@ -547,10 +548,12 @@ static void dmz_queue_chunk_work(struct
 	bio_list_add(&cw->bio_list, bio);
 	dmz_get_chunk_work(cw);
 
+	dmz_reclaim_bio_acc(dmz->reclaim);
 	if (queue_work(dmz->chunk_wq, &cw->work))
 		dmz_get_chunk_work(cw);
 out:
 	mutex_unlock(&dmz->chunk_lock);
+	return ret;
 }
 
 /*
@@ -564,6 +567,7 @@ static int dmz_map(struct dm_target *ti,
 	sector_t sector = bio->bi_iter.bi_sector;
 	unsigned int nr_sectors = bio_sectors(bio);
 	sector_t chunk_sector;
+	int ret;
 
 	dmz_dev_debug(dev, "BIO op %d sector %llu + %u => chunk %llu, block %llu, %u blocks",
 		      bio_op(bio), (unsigned long long)sector, nr_sectors,
@@ -601,8 +605,14 @@ static int dmz_map(struct dm_target *ti,
 		dm_accept_partial_bio(bio, dev->zone_nr_sectors - chunk_sector);
 
 	/* Now ready to handle this BIO */
-	dmz_reclaim_bio_acc(dmz->reclaim);
-	dmz_queue_chunk_work(dmz, bio);
+	ret = dmz_queue_chunk_work(dmz, bio);
+	if (ret) {
+		dmz_dev_debug(dmz->dev,
+			      "BIO op %d, can't process chunk %llu, err %i\n",
+			      bio_op(bio), (u64)dmz_bio_chunk(dmz->dev, bio),
+			      ret);
+		return DM_MAPIO_REQUEUE;
+	}
 
 	return DM_MAPIO_SUBMITTED;
 }
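
To make the behavioural change easier to follow outside the kernel tree, below
is a minimal, self-contained C sketch of the pattern the patch applies: the
queueing helper returns an error code (e.g. -ENOMEM) instead of swallowing it,
and the mapping function translates any such transient failure into a requeue
status so the request is retried rather than silently dropped. This is a
userspace illustration only: the helper names (queue_chunk, map_bio) and the
map_status enum are invented for the example and merely mirror the roles of
dmz_queue_chunk_work(), dmz_map(), and DM_MAPIO_SUBMITTED/DM_MAPIO_REQUEUE.

/*
 * Standalone sketch of the error-propagation pattern from the patch above.
 * Not kernel code; names are illustrative only.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

enum map_status {
	MAP_SUBMITTED,	/* request queued for processing */
	MAP_REQUEUE,	/* transient failure, caller should retry later */
};

struct chunk_work {
	unsigned int chunk;
};

/*
 * Before the fix, a failed allocation here was ignored and the request was
 * lost. After the fix, the error code travels back to the caller.
 */
static int queue_chunk(unsigned int chunk, int simulate_oom)
{
	struct chunk_work *cw;

	cw = simulate_oom ? NULL : malloc(sizeof(*cw));
	if (!cw)
		return -ENOMEM;	/* propagate instead of swallowing */

	cw->chunk = chunk;
	printf("chunk %u queued\n", cw->chunk);
	free(cw);
	return 0;
}

/* Any queueing error becomes a requeue, so the request is retried later. */
static enum map_status map_bio(unsigned int chunk, int simulate_oom)
{
	int ret = queue_chunk(chunk, simulate_oom);

	if (ret) {
		fprintf(stderr, "can't process chunk %u, err %d\n", chunk, ret);
		return MAP_REQUEUE;
	}
	return MAP_SUBMITTED;
}

int main(void)
{
	/* Normal path: the chunk is queued and the request is submitted. */
	printf("status=%d\n", map_bio(7, 0));
	/* Transient failure: the caller is told to requeue, nothing is lost. */
	printf("status=%d\n", map_bio(7, 1));
	return 0;
}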