Date: Thu, 1 Jun 2017 22:33:04 +0300
From: Rakesh Pandit
To: Ming Lei
Cc: Christoph Hellwig, Jens Axboe, Sagi Grimberg, Keith Busch, Andy Lutomirski
Subject: Re: [PATCH V2] nvme: fix nvme_remove going to uninterruptible sleep for ever
Message-ID: <20170601193304.GA19500@dhcp-216.srv.tuxera.com>
References: <20170530071610.GA2679@hercules.tuxera.com>
 <4da7c939-1f54-80e5-48fc-06e58e14f018@grimberg.me>
 <20170530142346.GA39428@dhcp-216.srv.tuxera.com>
 <20170601114338.GA24855@lst.de>
 <20170601122818.GA18830@dhcp-216.srv.tuxera.com>
 <20170601123650.GA18883@dhcp-216.srv.tuxera.com>
 <20170601124631.GA28652@lst.de>
 <20170601145605.GA29369@ming.t460p>
In-Reply-To: <20170601145605.GA29369@ming.t460p>

On Thu, Jun 01, 2017 at 10:56:10PM +0800, Ming Lei wrote:
> On Thu, Jun 01, 2017 at 02:46:32PM +0200, Christoph Hellwig wrote:
> > On Thu, Jun 01, 2017 at 03:36:50PM +0300, Rakesh Pandit wrote:
> > > Sagi also pointed out that a user-space set_features ioctl fired in
> > > the window after nvme removal can trigger the same issue, which
> > > seems correct.  I would prefer to keep this as it is and introduce a
> > > similar check higher up in nvme_ioctl instead, so that we don't send
> > > sync commands if the queues are already killed.
> > >
> > > Would you prefer a patch?  Thanks,
> >
> > If we want to kill everyone we probably should do it in ->queue_rq.
>
> It looks like ->queue_rq already does that by checking nvmeq->cq_vector.
>
> > Or is the block layer blocking you somewhere else?
>
> blk-mq doesn't handle dying in the I/O path.
>
> Maybe it is similar to 806f026f9b901eaf1a ("nvme: use
> blk_mq_start_hw_queues() in nvme_kill_queues()"); it seems we need to
> do the same for the admin_q too.
>
> Can the following change fix the issue?
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index e44326d5cf19..360758488124 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -2438,6 +2438,7 @@ void nvme_kill_queues(struct nvme_ctrl *ctrl)
>  	struct nvme_ns *ns;
>  
>  	mutex_lock(&ctrl->namespaces_mutex);
> +	blk_mq_start_hw_queues(ctrl->admin_q);
>  	list_for_each_entry(ns, &ctrl->namespaces, list) {
>  		/*
>  		 * Revalidating a dead namespace sets capacity to 0. This will
>

Yes, the change fixes the issue.
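
For completeness, the guard I was suggesting higher up in nvme_ioctl would
look roughly like the sketch below.  This is an untested illustration only,
not a patch: using ctrl->state (NVME_CTRL_DELETING/NVME_CTRL_DEAD) as the
condition is my assumption of how to detect killed queues at that level.

	static int nvme_ioctl(struct block_device *bdev, fmode_t mode,
			      unsigned int cmd, unsigned long arg)
	{
		struct nvme_ns *ns = bdev->bd_disk->private_data;

		/*
		 * Sketch only: refuse ioctls once the controller is being
		 * torn down, so we never issue sync commands (e.g. a
		 * user-space set_features) against queues that
		 * nvme_kill_queues() has already marked dying.
		 */
		if (ns->ctrl->state == NVME_CTRL_DELETING ||
		    ns->ctrl->state == NVME_CTRL_DEAD)
			return -ENODEV;

		/* ... existing switch on cmd would follow here ... */
	}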