Date: Thu, 1 Jun 2017 22:56:10 +0800
From: Ming Lei <ming.lei@redhat.com>
To: Christoph Hellwig
Cc: Rakesh Pandit, Jens Axboe, Sagi Grimberg, linux-kernel@vger.kernel.org,
    linux-nvme@lists.infradead.org, Keith Busch, Andy Lutomirski
Subject: Re: [PATCH V2] nvme: fix nvme_remove going to uninterruptible sleep for ever
Message-ID: <20170601145605.GA29369@ming.t460p>
References: <20170530071610.GA2679@hercules.tuxera.com>
 <4da7c939-1f54-80e5-48fc-06e58e14f018@grimberg.me>
 <20170530142346.GA39428@dhcp-216.srv.tuxera.com>
 <20170601114338.GA24855@lst.de>
 <20170601122818.GA18830@dhcp-216.srv.tuxera.com>
 <20170601123650.GA18883@dhcp-216.srv.tuxera.com>
 <20170601124631.GA28652@lst.de>
In-Reply-To: <20170601124631.GA28652@lst.de>

On Thu, Jun 01, 2017 at 02:46:32PM +0200, Christoph Hellwig wrote:
> On Thu, Jun 01, 2017 at 03:36:50PM +0300, Rakesh Pandit wrote:
> > Also, Sagi pointed out that a user-space set_features ioctl, if fired
> > in a window after nvme removal, can also result in this issue, which
> > seems to be correct.  I would prefer to keep this as it is and
> > introduce a similar check higher up in nvme_ioctl instead, so that we
> > don't send sync commands if the queues are already killed.
> >
> > Would you prefer a patch?  Thanks,
>
> If we want to kill everyone we probably should do it in ->queue_rq.

It looks like ->queue_rq already does that by checking nvmeq->cq_vector
(a rough sketch of that check is appended at the end of this mail).

> Or is the block layer blocking you somewhere else?

blk-mq doesn't handle dying in the I/O path.

Maybe it is similar to commit 806f026f9b901eaf1a ("nvme: use
blk_mq_start_hw_queues() in nvme_kill_queues()"); it seems we need to do
the same for the admin_q too. Can the following change fix the issue?

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index e44326d5cf19..360758488124 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2438,6 +2438,7 @@ void nvme_kill_queues(struct nvme_ctrl *ctrl)
 	struct nvme_ns *ns;
 
 	mutex_lock(&ctrl->namespaces_mutex);
+	blk_mq_start_hw_queues(ctrl->admin_q);
 	list_for_each_entry(ns, &ctrl->namespaces, list) {
 		/*
 		 * Revalidating a dead namespace sets capacity to 0. This will

Thanks,
Ming
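
For reference, this is roughly the cq_vector check mentioned above. It is
a trimmed paraphrase of nvme_queue_rq() in drivers/nvme/host/pci.c around
that kernel version, not the verbatim source: locals and the command
setup are elided and the surrounding structure is approximate.

static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
			 const struct blk_mq_queue_data *bd)
{
	struct nvme_queue *nvmeq = hctx->driver_data;

	/* ... command setup and blk_mq_start_request() elided ... */

	spin_lock_irq(&nvmeq->q_lock);
	if (unlikely(nvmeq->cq_vector < 0)) {
		/* queue already torn down: fail the request immediately */
		spin_unlock_irq(&nvmeq->q_lock);
		return BLK_MQ_RQ_QUEUE_ERROR;
	}
	/* ... normal submission via __nvme_submit_cmd() elided ... */
	spin_unlock_irq(&nvmeq->q_lock);
	return BLK_MQ_RQ_QUEUE_OK;
}

Note that this check only runs once blk-mq actually calls ->queue_rq; if
the admin queue's hw queues stay stopped, requests never get that far and
the submitter can sleep forever, which is why the change above starts the
admin_q hw queues in nvme_kill_queues().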