From: Benjamin Herrenschmidt
To: linux-nvme@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, Jens Axboe, Keith Busch, Christoph Hellwig, Paul Pawlowski, Benjamin Herrenschmidt
Subject: [PATCH 2/3] nvme: Retrieve the required IO queue entry size from the controller
Date: Tue, 16 Jul 2019 10:46:48 +1000
Message-Id: <20190716004649.17799-2-benh@kernel.crashing.org>
In-Reply-To: <20190716004649.17799-1-benh@kernel.crashing.org>
References: <20190716004649.17799-1-benh@kernel.crashing.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On PCIe-based NVMe devices, retrieve the IO queue entry size from the
controller and use the "required" setting. By spec it should always be
6 (64 bytes), but some controllers, such as Apple's, do not implement
the spec properly and require the size to be 7 (128 bytes).
This provides the groundwork for the subsequent quirks for these
controllers.

Signed-off-by: Benjamin Herrenschmidt
---
 drivers/nvme/host/core.c | 25 +++++++++++++++++++++++++
 drivers/nvme/host/nvme.h |  1 +
 drivers/nvme/host/pci.c  |  9 ++++++---
 include/linux/nvme.h     |  1 +
 4 files changed, 33 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index cc09b81fc7f4..716ebe87a2b8 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1986,6 +1986,7 @@ int nvme_enable_ctrl(struct nvme_ctrl *ctrl, u64 cap)
 	ctrl->ctrl_config = NVME_CC_CSS_NVM;
 	ctrl->ctrl_config |= (page_shift - 12) << NVME_CC_MPS_SHIFT;
 	ctrl->ctrl_config |= NVME_CC_AMS_RR | NVME_CC_SHN_NONE;
+	/* Use default IOSQES. We'll update it later if needed */
 	ctrl->ctrl_config |= NVME_CC_IOSQES | NVME_CC_IOCQES;
 	ctrl->ctrl_config |= NVME_CC_ENABLE;
 
@@ -2698,6 +2699,30 @@ int nvme_init_identify(struct nvme_ctrl *ctrl)
 		ctrl->hmmin = le32_to_cpu(id->hmmin);
 		ctrl->hmminds = le32_to_cpu(id->hmminds);
 		ctrl->hmmaxd = le16_to_cpu(id->hmmaxd);
+
+		/* Grab required IO queue size */
+		ctrl->iosqes = id->sqes & 0xf;
+		if (ctrl->iosqes < NVME_NVM_IOSQES) {
+			dev_err(ctrl->device,
+				"unsupported required IO queue size %d\n", ctrl->iosqes);
+			ret = -EINVAL;
+			goto out_free;
+		}
+		/*
+		 * If our IO queue size isn't the default, update the setting
+		 * in CC:IOSQES.
+		 */
+		if (ctrl->iosqes != NVME_NVM_IOSQES) {
+			ctrl->ctrl_config &= ~(0xfu << NVME_CC_IOSQES_SHIFT);
+			ctrl->ctrl_config |= ctrl->iosqes << NVME_CC_IOSQES_SHIFT;
+			ret = ctrl->ops->reg_write32(ctrl, NVME_REG_CC,
+						     ctrl->ctrl_config);
+			if (ret) {
+				dev_err(ctrl->device,
+					"error updating CC register\n");
+				goto out_free;
+			}
+		}
 	}
 
 	ret = nvme_mpath_init(ctrl, id);

diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 716a876119c8..34ef35fcd8a5 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -244,6 +244,7 @@ struct nvme_ctrl {
 	u32 hmmin;
 	u32 hmminds;
 	u16 hmmaxd;
+	u8 iosqes;
 
 	/* Fabrics only */
 	u16 sqsize;

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 8f006638452b..54b35ea4af88 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -28,7 +28,7 @@
 #include "trace.h"
 #include "nvme.h"
 
-#define SQ_SIZE(q)	((q)->q_depth * sizeof(struct nvme_command))
+#define SQ_SIZE(q)	((q)->q_depth << (q)->sqes)
 #define CQ_SIZE(q)	((q)->q_depth * sizeof(struct nvme_completion))
 
 #define SGES_PER_PAGE	(PAGE_SIZE / sizeof(struct nvme_sgl_desc))
@@ -162,7 +162,7 @@ static inline struct nvme_dev *to_nvme_dev(struct nvme_ctrl *ctrl)
 struct nvme_queue {
 	struct nvme_dev *dev;
 	spinlock_t sq_lock;
-	struct nvme_command *sq_cmds;
+	void *sq_cmds;
 	 /* only used for poll queues: */
 	spinlock_t cq_poll_lock ____cacheline_aligned_in_smp;
 	volatile struct nvme_completion *cqes;
@@ -178,6 +178,7 @@ struct nvme_queue {
 	u16 last_cq_head;
 	u16 qid;
 	u8 cq_phase;
+	u8 sqes;
 	unsigned long flags;
 #define NVMEQ_ENABLED		0
 #define NVMEQ_SQ_CMB		1
@@ -488,7 +489,8 @@ static void nvme_submit_cmd(struct nvme_queue *nvmeq, struct nvme_command *cmd,
 			    bool write_sq)
 {
 	spin_lock(&nvmeq->sq_lock);
-	memcpy(&nvmeq->sq_cmds[nvmeq->sq_tail], cmd, sizeof(*cmd));
+	memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
+	       cmd, sizeof(*cmd));
 	if (++nvmeq->sq_tail == nvmeq->q_depth)
 		nvmeq->sq_tail = 0;
 	nvme_write_sq_db(nvmeq, write_sq);
@@ -1465,6 +1467,7 @@ static int nvme_alloc_queue(struct nvme_dev *dev, int qid, int depth)
 	if (dev->ctrl.queue_count > qid)
 		return 0;
 
+	nvmeq->sqes = qid ? dev->ctrl.iosqes : NVME_NVM_ADMSQES;
 	nvmeq->q_depth = depth;
 	nvmeq->cqes = dma_alloc_coherent(dev->dev, CQ_SIZE(nvmeq),
 					 &nvmeq->cq_dma_addr, GFP_KERNEL);

diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index 01aa6a6c241d..7af18965fb57 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -141,6 +141,7 @@ enum {
  * (In bytes and specified as a power of two (2^n)).
  */
 #define NVME_NVM_IOSQES		6
+#define NVME_NVM_ADMSQES	6
 #define NVME_NVM_IOCQES		4
 
 enum {
-- 
2.17.1