From: Jianchao Wang
To: keith.busch@intel.com, axboe@fb.com, hch@lst.de, sagi@grimberg.me
Cc: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 6/6] nvme-pci: suspend queues based on online_queues
Date: Fri, 2 Feb 2018 15:00:49 +0800
Message-Id: <1517554849-7802-7-git-send-email-jianchao.w.wang@oracle.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1517554849-7802-1-git-send-email-jianchao.w.wang@oracle.com>
References: <1517554849-7802-1-git-send-email-jianchao.w.wang@oracle.com>

The nvme cq irqs are freed based on queue_count, but when sq/cq creation
fails, the irq for that queue is never set up, so free_irq warns
"Trying to free already-free IRQ". To fix this, increase online_queues
only once the adminq/sq/cq has been created and its irq has been set up,
and then suspend queues based on online_queues.
Signed-off-by: Jianchao Wang
---
 drivers/nvme/host/pci.c | 31 ++++++++++++++++++++-----------
 1 file changed, 20 insertions(+), 11 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index a838713c..e37f209 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1349,9 +1349,6 @@ static int nvme_suspend_queue(struct nvme_queue *nvmeq)
 	nvmeq->cq_vector = -1;
 	spin_unlock_irq(&nvmeq->q_lock);
 
-	if (!nvmeq->qid && nvmeq->dev->ctrl.admin_q)
-		blk_mq_quiesce_queue(nvmeq->dev->ctrl.admin_q);
-
 	pci_free_irq(to_pci_dev(nvmeq->dev->dev), vector, nvmeq);
 
 	return 0;
@@ -1495,13 +1492,15 @@ static int nvme_create_queue(struct nvme_queue *nvmeq, int qid)
 	nvme_init_queue(nvmeq, qid);
 	result = queue_request_irq(nvmeq);
 	if (result < 0)
-		goto release_sq;
+		goto offline;
 
 	return result;
 
- release_sq:
+offline:
+	dev->online_queues--;
+release_sq:
 	adapter_delete_sq(dev, qid);
- release_cq:
+release_cq:
 	adapter_delete_cq(dev, qid);
 	return result;
 }
@@ -1641,6 +1640,7 @@ static int nvme_pci_configure_admin_queue(struct nvme_dev *dev)
 	result = queue_request_irq(nvmeq);
 	if (result) {
 		nvmeq->cq_vector = -1;
+		dev->online_queues--;
 		return result;
 	}
 
@@ -1988,6 +1988,7 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
 	result = queue_request_irq(adminq);
 	if (result) {
 		adminq->cq_vector = -1;
+		dev->online_queues--;
 		return result;
 	}
 	return nvme_create_io_queues(dev);
@@ -2257,13 +2258,16 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
 	int i;
 	bool dead = true;
 	struct pci_dev *pdev = to_pci_dev(dev->dev);
+	int onlines;
 
 	mutex_lock(&dev->shutdown_lock);
 	if (pci_is_enabled(pdev)) {
 		u32 csts = readl(dev->bar + NVME_REG_CSTS);
 
-		dead = !!((csts & NVME_CSTS_CFS) || !(csts & NVME_CSTS_RDY) ||
-			pdev->error_state != pci_channel_io_normal);
+		dead = !!((csts & NVME_CSTS_CFS) ||
+			!(csts & NVME_CSTS_RDY) ||
+			(pdev->error_state != pci_channel_io_normal) ||
+			(dev->online_queues == 0));
 	}
 
 	/* Just freeze the queue for shutdown case */
@@ -2297,9 +2301,14 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
 		nvme_disable_io_queues(dev);
 		nvme_disable_admin_queue(dev, shutdown);
 	}
-	for (i = dev->ctrl.queue_count - 1; i >= 0; i--)
+
+	onlines = dev->online_queues;
+	for (i = onlines - 1; i >= 0; i--)
 		nvme_suspend_queue(&dev->queues[i]);
 
+	if (dev->ctrl.admin_q)
+		blk_mq_quiesce_queue(dev->ctrl.admin_q);
+
 	nvme_pci_disable(dev);
 
 	blk_mq_tagset_busy_iter(&dev->tagset, nvme_pci_cancel_rq, &dev->ctrl);
@@ -2444,12 +2453,12 @@ static void nvme_reset_work(struct work_struct *work)
 	 * Keep the controller around but remove all namespaces if we don't have
	 * any working I/O queue.
	 */
-	if (dev->online_queues < 2) {
+	if (dev->online_queues == 1) {
 		dev_warn(dev->ctrl.device, "IO queues not created\n");
 		nvme_kill_queues(&dev->ctrl);
 		nvme_remove_namespaces(&dev->ctrl);
 		new_state = NVME_CTRL_ADMIN_ONLY;
-	} else {
+	} else if (dev->online_queues > 1) {
 		/* hit this only when allocate tagset fails */
 		if (nvme_dev_add(dev))
 			new_state = NVME_CTRL_ADMIN_ONLY;
-- 
2.7.4