Date: Tue, 13 Feb 2018 14:52:29 -0700
From: Keith Busch
To: Jianchao Wang
Cc: axboe@fb.com, hch@lst.de, sagi@grimberg.me,
	linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH RESENT] nvme-pci: suspend queues based on online_queues
Message-ID: <20180213215229.GF20962@localhost.localdomain>
References: <1518440713-1031-1-git-send-email-jianchao.w.wang@oracle.com>
In-Reply-To: <1518440713-1031-1-git-send-email-jianchao.w.wang@oracle.com>

On Mon, Feb 12, 2018 at 09:05:13PM +0800, Jianchao Wang wrote:
> @@ -1315,9 +1315,6 @@ static int nvme_suspend_queue(struct nvme_queue *nvmeq)
>  	nvmeq->cq_vector = -1;
>  	spin_unlock_irq(&nvmeq->q_lock);
>  
> -	if (!nvmeq->qid && nvmeq->dev->ctrl.admin_q)
> -		blk_mq_quiesce_queue(nvmeq->dev->ctrl.admin_q);
> -

This doesn't look related to this patch.

>  	pci_free_irq(to_pci_dev(nvmeq->dev->dev), vector, nvmeq);
>  
>  	return 0;
> @@ -1461,13 +1458,14 @@ static int nvme_create_queue(struct nvme_queue *nvmeq, int qid)
>  	nvme_init_queue(nvmeq, qid);
>  	result = queue_request_irq(nvmeq);
>  	if (result < 0)
> -		goto release_sq;
> +		goto offline;
>  
>  	return result;
>  
> - release_sq:
> +offline:
> +	dev->online_queues--;
>  	adapter_delete_sq(dev, qid);

In addition to the above, set nvmeq->cq_vector to -1 (see the sketch at
the end of this mail). Everything else that follows doesn't appear to
be necessary.

> @@ -2167,6 +2167,7 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
>  	int i;
>  	bool dead = true;
>  	struct pci_dev *pdev = to_pci_dev(dev->dev);
> +	int onlines;
>  
>  	mutex_lock(&dev->shutdown_lock);
>  	if (pci_is_enabled(pdev)) {
> @@ -2175,8 +2176,11 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
>  		if (dev->ctrl.state == NVME_CTRL_LIVE ||
>  		    dev->ctrl.state == NVME_CTRL_RESETTING)
>  			nvme_start_freeze(&dev->ctrl);
> -		dead = !!((csts & NVME_CSTS_CFS) || !(csts & NVME_CSTS_RDY) ||
> -			pdev->error_state != pci_channel_io_normal);
> +
> +		dead = !!((csts & NVME_CSTS_CFS) ||
> +			!(csts & NVME_CSTS_RDY) ||
> +			(pdev->error_state != pci_channel_io_normal) ||
> +			(dev->online_queues == 0));
>  	}
>  
>  	/*
> @@ -2200,9 +2204,14 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
>  		nvme_disable_io_queues(dev);
>  		nvme_disable_admin_queue(dev, shutdown);
>  	}
> -	for (i = dev->ctrl.queue_count - 1; i >= 0; i--)
> +
> +	onlines = dev->online_queues;
> +	for (i = onlines - 1; i >= 0; i--)
>  		nvme_suspend_queue(&dev->queues[i]);
>  
> +	if (dev->ctrl.admin_q)
> +		blk_mq_quiesce_queue(dev->ctrl.admin_q);
> +
>  	nvme_pci_disable(dev);
>  
>  	blk_mq_tagset_busy_iter(&dev->tagset, nvme_cancel_request, &dev->ctrl);
> @@ -2341,16 +2350,18 @@ static void nvme_reset_work(struct work_struct *work)
>  	if (result)
>  		goto out;
>  
> -	/*
> -	 * Keep the controller around but remove all namespaces if we don't have
> -	 * any working I/O queue.
> -	 */
> -	if (dev->online_queues < 2) {
> +
> +	/* In case of online_queues is zero, it has gone to out */
> +	if (dev->online_queues == 1) {
> +		/*
> +		 * Keep the controller around but remove all namespaces if we
> +		 * don't have any working I/O queue.
> +		 */
>  		dev_warn(dev->ctrl.device, "IO queues not created\n");
>  		nvme_kill_queues(&dev->ctrl);
>  		nvme_remove_namespaces(&dev->ctrl);
>  		new_state = NVME_CTRL_ADMIN_ONLY;
> -	} else {
> +	} else if (dev->online_queues > 1) {
>  		nvme_start_queues(&dev->ctrl);
>  		nvme_wait_freeze(&dev->ctrl);
>  		/* hit this only when allocate tagset fails */
> --
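
For reference, the nvme_create_queue() error path I'm suggesting would
look something like this (an untested sketch on top of your patch,
reusing your "offline" label):

	nvme_init_queue(nvmeq, qid);
	result = queue_request_irq(nvmeq);
	if (result < 0)
		goto offline;

	return result;

offline:
	/*
	 * The irq was never successfully requested for this queue, so
	 * undo the online accounting and clear cq_vector. A later
	 * nvme_suspend_queue() then skips this queue instead of trying
	 * to free an irq it doesn't own.
	 */
	dev->online_queues--;
	nvmeq->cq_vector = -1;
	adapter_delete_sq(dev, qid);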