From: Jianchao Wang <jianchao.w.wang@oracle.com>
To: keith.busch@intel.com, axboe@fb.com, hch@lst.de, sagi@grimberg.me
Cc: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH V2 4/6] nvme-pci: suspend queues based on online_queues
Date: Mon, 5 Feb 2018 17:20:13 +0800
Message-Id: <1517822415-11710-5-git-send-email-jianchao.w.wang@oracle.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1517822415-11710-1-git-send-email-jianchao.w.wang@oracle.com>
References: <1517822415-11710-1-git-send-email-jianchao.w.wang@oracle.com>

The nvme cq irq is freed based on queue_count. When sq/cq creation
fails, the irq is never set up, so a later free_irq triggers the
"Trying to free already-free IRQ" warning.

To fix this, increase online_queues only once the adminq/sq/cq has
been created and its associated irq has been set up, then suspend
queues based on online_queues instead of queue_count.
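The accounting rule this enforces can be sketched as a standalone toy
program (illustration only, not driver code: dev_state, bring_up_queue()
and the chosen failure pattern are invented for the example):

#include <stdbool.h>
#include <stdio.h>

struct dev_state {
	int online_queues;	/* queues whose irq is actually set up */
};

/* Stand-ins for sq/cq creation and irq setup; either step may fail. */
static bool create_queue(int qid)      { return qid < 5; }
static bool request_queue_irq(int qid) { return qid != 3; /* pretend irq setup fails for queue 3 */ }

static bool bring_up_queue(struct dev_state *dev, int qid)
{
	if (!create_queue(qid))
		return false;
	if (!request_queue_irq(qid))
		return false;		/* no irq: do not count this queue */
	dev->online_queues++;		/* counted only after full success */
	return true;
}

static void suspend_queues(struct dev_state *dev)
{
	int i;

	/* Walk online_queues, not queue_count: every counted queue is
	 * guaranteed to have an irq, so nothing already-free is freed. */
	for (i = dev->online_queues - 1; i >= 0; i--)
		printf("freeing irq of queue %d\n", i);
	dev->online_queues = 0;
}

int main(void)
{
	struct dev_state dev = { 0 };
	int qid;

	/* Queue 3 is created but its irq setup fails; stop there, the
	 * way nvme_create_io_queues() stops at the first failure. */
	for (qid = 0; qid < 8; qid++)
		if (!bring_up_queue(&dev, qid))
			break;

	suspend_queues(&dev);		/* frees irqs 2, 1, 0 and no more */
	return 0;
}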
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
---
 drivers/nvme/host/pci.c | 30 +++++++++++++++++++-----------
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index a7fa397..117b837 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1315,9 +1315,6 @@ static int nvme_suspend_queue(struct nvme_queue *nvmeq)
 	nvmeq->cq_vector = -1;
 	spin_unlock_irq(&nvmeq->q_lock);
 
-	if (!nvmeq->qid && nvmeq->dev->ctrl.admin_q)
-		blk_mq_quiesce_queue(nvmeq->dev->ctrl.admin_q);
-
 	pci_free_irq(to_pci_dev(nvmeq->dev->dev), vector, nvmeq);
 
 	return 0;
@@ -1461,13 +1458,14 @@ static int nvme_create_queue(struct nvme_queue *nvmeq, int qid)
 	nvme_init_queue(nvmeq, qid);
 	result = queue_request_irq(nvmeq);
 	if (result < 0)
-		goto release_sq;
+		goto offline;
 
 	return result;
 
- release_sq:
+offline:
+	dev->online_queues--;
 	adapter_delete_sq(dev, qid);
- release_cq:
+release_cq:
 	adapter_delete_cq(dev, qid);
 	return result;
 }
@@ -1607,6 +1605,7 @@ static int nvme_pci_configure_admin_queue(struct nvme_dev *dev)
 	result = queue_request_irq(nvmeq);
 	if (result) {
 		nvmeq->cq_vector = -1;
+		dev->online_queues--;
 		return result;
 	}
 
@@ -1954,6 +1953,7 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
 	result = queue_request_irq(adminq);
 	if (result) {
 		adminq->cq_vector = -1;
+		dev->online_queues--;
 		return result;
 	}
 	return nvme_create_io_queues(dev);
@@ -2167,13 +2167,16 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
 	int i;
 	bool dead = true;
 	struct pci_dev *pdev = to_pci_dev(dev->dev);
+	int onlines;
 
 	mutex_lock(&dev->shutdown_lock);
 	if (pci_is_enabled(pdev)) {
 		u32 csts = readl(dev->bar + NVME_REG_CSTS);
 
-		dead = !!((csts & NVME_CSTS_CFS) || !(csts & NVME_CSTS_RDY) ||
-			pdev->error_state != pci_channel_io_normal);
+		dead = !!((csts & NVME_CSTS_CFS) ||
+			!(csts & NVME_CSTS_RDY) ||
+			(pdev->error_state != pci_channel_io_normal) ||
+			(dev->online_queues == 0));
 	}
 
 	/* Just freeze the queue for shutdown case */
@@ -2203,9 +2206,14 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
 		nvme_disable_io_queues(dev);
 		nvme_disable_admin_queue(dev, shutdown);
 	}
-	for (i = dev->ctrl.queue_count - 1; i >= 0; i--)
+
+	onlines = dev->online_queues;
+	for (i = onlines - 1; i >= 0; i--)
 		nvme_suspend_queue(&dev->queues[i]);
 
+	if (dev->ctrl.admin_q)
+		blk_mq_quiesce_queue(dev->ctrl.admin_q);
+
 	nvme_pci_disable(dev);
 
 	blk_mq_tagset_busy_iter(&dev->tagset, nvme_cancel_request, &dev->ctrl);
@@ -2348,12 +2356,12 @@ static void nvme_reset_work(struct work_struct *work)
 	 * Keep the controller around but remove all namespaces if we don't have
 	 * any working I/O queue.
 	 */
-	if (dev->online_queues < 2) {
+	if (dev->online_queues == 1) {
 		dev_warn(dev->ctrl.device, "IO queues not created\n");
 		nvme_kill_queues(&dev->ctrl);
 		nvme_remove_namespaces(&dev->ctrl);
 		new_state = NVME_CTRL_ADMIN_ONLY;
-	} else {
+	} else if (dev->online_queues > 1) {
 		/* hit this only when allocate tagset fails */
 		if (nvme_dev_add(dev))
 			new_state = NVME_CTRL_ADMIN_ONLY;
-- 
2.7.4