Date: Tue, 19 Jun 2018 10:39:52 -0600
From: Keith Busch
To: Jianchao Wang
Cc: keith.busch@intel.com, axboe@fb.com, hch@lst.de, sagi@grimberg.me,
	linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org
Subject: Re: [PATCH] nvme-pci: not invoke nvme_remove_dead_ctrl when change state fails
Message-ID: <20180619163952.GE19922@localhost.localdomain>
References: <1529397050-7524-1-git-send-email-jianchao.w.wang@oracle.com>
In-Reply-To: <1529397050-7524-1-git-send-email-jianchao.w.wang@oracle.com>
User-Agent: Mutt/1.9.1 (2017-09-22)
On Tue, Jun 19, 2018 at 04:30:50PM +0800, Jianchao Wang wrote:
> There is a race between nvme_remove and nvme_reset_work that can
> lead to an io hang.
>
> nvme_remove                      nvme_reset_work
> -> change state to DELETING
>                                  -> fail to change state to LIVE
>                                  -> nvme_remove_dead_ctrl
>                                    -> nvme_dev_disable
>                                      -> quiesce request_queue
>                                    -> queue remove_work
> -> cancel_work_sync reset_work
> -> nvme_remove_namespaces
>   -> splice ctrl->namespaces
>                                  nvme_remove_dead_ctrl_work
>                                  -> nvme_kill_queues
>   -> nvme_ns_remove                 do nothing
>     -> blk_cleanup_queue
>       -> blk_freeze_queue
>
> In the end, the request_queue is still in the quiesced state when
> blk_freeze_queue waits for it to drain, so we get an io hang here.
>
> In fact, when the state change fails in nvme_reset_work, the only
> possible reason is that someone has already changed the state to
> DELETING. So it is not necessary to invoke nvme_remove_dead_ctrl in
> that case.
>
> Signed-off-by: Jianchao Wang

Good catch. I think the fix should either have nvme_dev_disable set
shutdown to true to indicate the controller isn't coming back online,
or move the nvme_kill_queues inside nvme_remove_dead_ctrl.
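
For the second option, a rough sketch against the 4.18-era layout of
drivers/nvme/host/pci.c might look like the below. This is only an
illustration, not a tested patch, and the surrounding function body is
paraphrased from memory so it may not match the tree exactly:

```diff
 static void nvme_remove_dead_ctrl(struct nvme_dev *dev, int status)
 {
 	dev_warn(dev->ctrl.device,
 		"Removing after probe failure status: %d\n", status);

 	nvme_get_ctrl(&dev->ctrl);
 	nvme_dev_disable(dev, false);
+	/* Unquiesce and fail all pending IO here, before nvme_remove can
+	 * splice ctrl->namespaces and reach blk_cleanup_queue, so the
+	 * freeze never waits on a still-quiesced queue.
+	 */
+	nvme_kill_queues(&dev->ctrl);
 	if (!queue_work(nvme_wq, &dev->remove_work))
 		nvme_put_ctrl(&dev->ctrl);
 }
```

The first option would instead be a one-liner, passing shutdown=true in
the nvme_dev_disable call above so the queues are unquiesced on teardown.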