Subject: Re: [PATCH RESENT] nvme-pci: suspend queues based on online_queues
To: Keith Busch
Cc: axboe@fb.com, hch@lst.de, sagi@grimberg.me, linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
References: <1518440713-1031-1-git-send-email-jianchao.w.wang@oracle.com> <20180213215229.GF20962@localhost.localdomain>
From: "jianchao.wang"
Date: Thu, 15 Feb 2018 08:33:35 +0800
In-Reply-To: <20180213215229.GF20962@localhost.localdomain>

Hi Keith

Thanks for your kind response and guidance. And happy Chinese New Year! Wishing you prosperity and great fortune!
On 02/14/2018 05:52 AM, Keith Busch wrote:
> On Mon, Feb 12, 2018 at 09:05:13PM +0800, Jianchao Wang wrote:
>> @@ -1315,9 +1315,6 @@ static int nvme_suspend_queue(struct nvme_queue *nvmeq)
>>  	nvmeq->cq_vector = -1;
>>  	spin_unlock_irq(&nvmeq->q_lock);
>>  
>> -	if (!nvmeq->qid && nvmeq->dev->ctrl.admin_q)
>> -		blk_mq_quiesce_queue(nvmeq->dev->ctrl.admin_q);
>> -
> 
> This doesn't look related to this patch.
> 
>>  	pci_free_irq(to_pci_dev(nvmeq->dev->dev), vector, nvmeq);
>>  
>>  	return 0;
>> @@ -1461,13 +1458,14 @@ static int nvme_create_queue(struct nvme_queue *nvmeq, int qid)
>>  	nvme_init_queue(nvmeq, qid);
>>  	result = queue_request_irq(nvmeq);
>>  	if (result < 0)
>> -		goto release_sq;
>> +		goto offline;
>>  
>>  	return result;
>>  
>> - release_sq:
>> +offline:
>> +	dev->online_queues--;
>>  	adapter_delete_sq(dev, qid);
> 
> In addition to the above, set nvmeq->cq_vector to -1.

Yes, absolutely.

> 
> Everything else that follows doesn't appear to be necessary.

Yes, nvme_suspend_queue will return early when it finds the cq_vector is -1.
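For what it's worth, the idempotence this relies on can be sketched as a toy model (the struct and function names below are simplified stand-ins for the driver's nvme_queue/nvme_suspend_queue, not the actual kernel code):

```c
#include <assert.h>

/* Toy stand-in for struct nvme_queue: only the field involved in the
 * early-return check is modeled. */
struct toy_queue {
	int cq_vector;	/* -1 means "already suspended / never brought up" */
};

/* Mirrors the guard at the top of nvme_suspend_queue(): once the error
 * path in nvme_create_queue() has set cq_vector to -1, a later suspend
 * of the same queue becomes a harmless no-op. */
static int toy_suspend_queue(struct toy_queue *q)
{
	if (q->cq_vector == -1)
		return 1;	/* nothing to tear down */
	q->cq_vector = -1;	/* mark suspended before freeing the IRQ */
	return 0;
}
```

So suspending the same queue twice does no harm; the second call simply observes cq_vector == -1 and bails out.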
Sincerely
Jianchao

> 
>> @@ -2167,6 +2167,7 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
>>  	int i;
>>  	bool dead = true;
>>  	struct pci_dev *pdev = to_pci_dev(dev->dev);
>> +	int onlines;
>>  
>>  	mutex_lock(&dev->shutdown_lock);
>>  	if (pci_is_enabled(pdev)) {
>> @@ -2175,8 +2176,11 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
>>  		if (dev->ctrl.state == NVME_CTRL_LIVE ||
>>  		    dev->ctrl.state == NVME_CTRL_RESETTING)
>>  			nvme_start_freeze(&dev->ctrl);
>> -		dead = !!((csts & NVME_CSTS_CFS) || !(csts & NVME_CSTS_RDY) ||
>> -			pdev->error_state != pci_channel_io_normal);
>> +
>> +		dead = !!((csts & NVME_CSTS_CFS) ||
>> +			!(csts & NVME_CSTS_RDY) ||
>> +			(pdev->error_state != pci_channel_io_normal) ||
>> +			(dev->online_queues == 0));
>>  	}
>>  
>>  	/*
>> @@ -2200,9 +2204,14 @@
>>  		nvme_disable_io_queues(dev);
>>  		nvme_disable_admin_queue(dev, shutdown);
>>  	}
>> -	for (i = dev->ctrl.queue_count - 1; i >= 0; i--)
>> +
>> +	onlines = dev->online_queues;
>> +	for (i = onlines - 1; i >= 0; i--)
>>  		nvme_suspend_queue(&dev->queues[i]);
>>  
>> +	if (dev->ctrl.admin_q)
>> +		blk_mq_quiesce_queue(dev->ctrl.admin_q);
>> +
>>  	nvme_pci_disable(dev);
>>  
>>  	blk_mq_tagset_busy_iter(&dev->tagset, nvme_cancel_request, &dev->ctrl);
>> @@ -2341,16 +2350,18 @@ static void nvme_reset_work(struct work_struct *work)
>>  	if (result)
>>  		goto out;
>>  
>> -	/*
>> -	 * Keep the controller around but remove all namespaces if we don't have
>> -	 * any working I/O queue.
>> -	 */
>> -	if (dev->online_queues < 2) {
>> +
>> +	/* In case of online_queues is zero, it has gone to out */
>> +	if (dev->online_queues == 1) {
>> +		/*
>> +		 * Keep the controller around but remove all namespaces if we
>> +		 * don't have any working I/O queue.
>> +		 */
>>  		dev_warn(dev->ctrl.device, "IO queues not created\n");
>>  		nvme_kill_queues(&dev->ctrl);
>>  		nvme_remove_namespaces(&dev->ctrl);
>>  		new_state = NVME_CTRL_ADMIN_ONLY;
>> -	} else {
>> +	} else if (dev->online_queues > 1) {
>>  		nvme_start_queues(&dev->ctrl);
>>  		nvme_wait_freeze(&dev->ctrl);
>>  		/* hit this only when allocate tagset fails */
>> --
>
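As an aside, the widened "dead" test in nvme_dev_disable() is just a four-way disjunction. Modeled as a standalone predicate (the CSTS bit positions follow the NVMe spec; the enum and all `toy_` names are simplified stand-ins for the kernel's pci_channel_state, not real driver code):

```c
#include <assert.h>
#include <stdbool.h>

/* CSTS register bits used in the check (per the NVMe spec:
 * RDY is bit 0, CFS is bit 1). */
#define TOY_CSTS_RDY 0x1
#define TOY_CSTS_CFS 0x2

/* Simplified stand-in for enum pci_channel_state. */
enum toy_channel_state { TOY_IO_NORMAL, TOY_IO_FROZEN, TOY_IO_PERM_FAILURE };

/* The controller is treated as dead when it reports a fatal status
 * (CFS set), is not ready (RDY clear), the PCI channel is offline,
 * or -- the condition this patch adds -- no queues are online. */
static bool toy_controller_dead(unsigned int csts,
				enum toy_channel_state state,
				int online_queues)
{
	return (csts & TOY_CSTS_CFS) ||
	       !(csts & TOY_CSTS_RDY) ||
	       state != TOY_IO_NORMAL ||
	       online_queues == 0;
}
```

A healthy, ready controller with online queues is the only combination that evaluates to false; any single failure indicator is enough to take the "dead" shutdown path.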