From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, David Jeffery, Mike Snitzer
Subject: [PATCH 5.0 113/117] dm: disable DISCARD if the underlying storage no longer supports it
Date: Mon, 15 Apr 2019 21:01:23 +0200
Message-Id: <20190415183750.508911553@linuxfoundation.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To:
<20190415183744.887851196@linuxfoundation.org>
References: <20190415183744.887851196@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

From: Mike Snitzer

commit bcb44433bba5eaff293888ef22ffa07f1f0347d6 upstream.

Some storage devices report support for discard commands (such as WRITE SAME(16) with unmap) but then reject the discard commands actually sent to them. This is clearly a storage firmware bug, but it doesn't change the fact that, should a program cause discards to be sent to a multipath device layered on this buggy storage, all paths can end up failed at the same time by the discards, causing possible I/O loss.

The first discard to a path will fail with "Illegal Request, Invalid field in cdb", e.g.:

 kernel: sd 8:0:8:19: [sdfn] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
 kernel: sd 8:0:8:19: [sdfn] tag#0 Sense Key : Illegal Request [current]
 kernel: sd 8:0:8:19: [sdfn] tag#0 Add. Sense: Invalid field in cdb
 kernel: sd 8:0:8:19: [sdfn] tag#0 CDB: Write same(16) 93 08 00 00 00 00 00 a0 08 00 00 00 80 00 00 00
 kernel: blk_update_request: critical target error, dev sdfn, sector 10487808

The SCSI layer converts this to the BLK_STS_TARGET error number, the sd device disables its support for discard on this path, and because of the BLK_STS_TARGET error multipath fails the discard without failing any path or retrying down a different path.

But subsequent discards can cause path failures: any discard sent to a path which already failed a discard ends up failing with EIO from blk_cloned_rq_check_limits with an "over max size limit" error, since the discard limit was set to 0 by the sd driver for that path. As the error is EIO, this now fails the path, and multipath tries to send the discard down the next path.
This cycle continues as discards are sent, until all paths fail.

Fix this by training DM core to disable DISCARD if the underlying storage already did so.

Also, fix branching in dm_done() and clone_endio() to reflect the mutually exclusive nature of the IO operations in question.

Cc: stable@vger.kernel.org
Reported-by: David Jeffery
Signed-off-by: Mike Snitzer
Signed-off-by: Greg Kroah-Hartman

---
 drivers/md/dm-core.h |    1 +
 drivers/md/dm-rq.c   |   11 +++++++----
 drivers/md/dm.c      |   20 ++++++++++++++++----
 3 files changed, 24 insertions(+), 8 deletions(-)

--- a/drivers/md/dm-core.h
+++ b/drivers/md/dm-core.h
@@ -115,6 +115,7 @@ struct mapped_device {
 	struct srcu_struct io_barrier;
 };
 
+void disable_discard(struct mapped_device *md);
 void disable_write_same(struct mapped_device *md);
 void disable_write_zeroes(struct mapped_device *md);
 
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -206,11 +206,14 @@ static void dm_done(struct request *clon
 	}
 
 	if (unlikely(error == BLK_STS_TARGET)) {
-		if (req_op(clone) == REQ_OP_WRITE_SAME &&
-		    !clone->q->limits.max_write_same_sectors)
+		if (req_op(clone) == REQ_OP_DISCARD &&
+		    !clone->q->limits.max_discard_sectors)
+			disable_discard(tio->md);
+		else if (req_op(clone) == REQ_OP_WRITE_SAME &&
+			 !clone->q->limits.max_write_same_sectors)
 			disable_write_same(tio->md);
-		if (req_op(clone) == REQ_OP_WRITE_ZEROES &&
-		    !clone->q->limits.max_write_zeroes_sectors)
+		else if (req_op(clone) == REQ_OP_WRITE_ZEROES &&
+			 !clone->q->limits.max_write_zeroes_sectors)
 			disable_write_zeroes(tio->md);
 	}
 
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -963,6 +963,15 @@ static void dec_pending(struct dm_io *io
 	}
 }
 
+void disable_discard(struct mapped_device *md)
+{
+	struct queue_limits *limits = dm_get_queue_limits(md);
+
+	/* device doesn't really support DISCARD, disable it */
+	limits->max_discard_sectors = 0;
+	blk_queue_flag_clear(QUEUE_FLAG_DISCARD, md->queue);
+}
+
 void disable_write_same(struct mapped_device *md)
 {
 	struct queue_limits *limits = dm_get_queue_limits(md);
@@ -988,11 +997,14 @@ static void clone_endio(struct bio *bio)
 	dm_endio_fn endio = tio->ti->type->end_io;
 
 	if (unlikely(error == BLK_STS_TARGET) &&
	    md->type != DM_TYPE_NVME_BIO_BASED) {
-		if (bio_op(bio) == REQ_OP_WRITE_SAME &&
-		    !bio->bi_disk->queue->limits.max_write_same_sectors)
+		if (bio_op(bio) == REQ_OP_DISCARD &&
+		    !bio->bi_disk->queue->limits.max_discard_sectors)
+			disable_discard(md);
+		else if (bio_op(bio) == REQ_OP_WRITE_SAME &&
+			 !bio->bi_disk->queue->limits.max_write_same_sectors)
 			disable_write_same(md);
-		if (bio_op(bio) == REQ_OP_WRITE_ZEROES &&
-		    !bio->bi_disk->queue->limits.max_write_zeroes_sectors)
+		else if (bio_op(bio) == REQ_OP_WRITE_ZEROES &&
+			 !bio->bi_disk->queue->limits.max_write_zeroes_sectors)
 			disable_write_zeroes(md);
 	}