From: Abhishek Shah <abhishek.shah@broadcom.com>
To: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg,
	linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Cc: bcm-kernel-feedback-list@broadcom.com, Abhishek Shah,
	stable@vger.kernel.org
Subject: [PATCH] nvme-pci: Use PCI bus address for data/queues in CMB
Date: Fri, 29 Sep 2017 10:59:26 +0530
Message-Id: <1506662966-10865-1-git-send-email-abhishek.shah@broadcom.com>
X-Mailer: git-send-email 2.7.4

Currently, the NVMe PCI host driver programs the CMB DMA address as
the I/O SQ addresses. This results in failures on systems that do not
use a 1:1 outbound mapping (for example, Broadcom iProc SoCs), because
the CMB BAR is programmed with the PCI bus address while the NVMe PCI
endpoint tries to access the CMB using the DMA address.

To have the CMB work on systems without a 1:1 outbound mapping, program
the PCI bus address for the I/O SQs instead of the DMA address. This
approach works on systems both with and without a 1:1 outbound mapping.

The patch was tested on the Broadcom Stingray platform (arm64), which
does not have a 1:1 outbound mapping, as well as on an x86 platform,
which does.

Fixes: 8ffaadf7 ("NVMe: Use CMB for the IO SQes if available")
Cc: stable@vger.kernel.org

Signed-off-by: Abhishek Shah <abhishek.shah@broadcom.com>
Reviewed-by: Anup Patel
Reviewed-by: Ray Jui
Reviewed-by: Scott Branden
---
 drivers/nvme/host/pci.c | 30 +++++++++++++++++++++++++++++-
 1 file changed, 29 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 4a21213..29e3bd8 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -94,6 +94,7 @@ struct nvme_dev {
 	bool subsystem;
 	void __iomem *cmb;
 	dma_addr_t cmb_dma_addr;
+	pci_bus_addr_t cmb_bus_addr;
 	u64 cmb_size;
 	u32 cmbsz;
 	u32 cmbloc;
@@ -1220,7 +1221,7 @@ static int nvme_alloc_sq_cmds(struct nvme_dev *dev, struct nvme_queue *nvmeq,
 	if (qid && dev->cmb && use_cmb_sqes && NVME_CMB_SQS(dev->cmbsz)) {
 		unsigned offset = (qid - 1) * roundup(SQ_SIZE(depth),
 						      dev->ctrl.page_size);
-		nvmeq->sq_dma_addr = dev->cmb_dma_addr + offset;
+		nvmeq->sq_dma_addr = dev->cmb_bus_addr + offset;
 		nvmeq->sq_cmds_io = dev->cmb + offset;
 	} else {
 		nvmeq->sq_cmds = dma_alloc_coherent(dev->dev, SQ_SIZE(depth),
@@ -1514,8 +1515,28 @@ static ssize_t nvme_cmb_show(struct device *dev,
 }
 static DEVICE_ATTR(cmb, S_IRUGO, nvme_cmb_show, NULL);
 
+static int nvme_find_cmb_bus_addr(struct pci_dev *pdev,
+				  dma_addr_t dma_addr,
+				  u64 size,
+				  pci_bus_addr_t *bus_addr)
+{
+	struct resource *res;
+	struct pci_bus_region region;
+	struct resource tres = DEFINE_RES_MEM(dma_addr, size);
+
+	res = pci_find_resource(pdev, &tres);
+	if (!res)
+		return -EIO;
+
+	pcibios_resource_to_bus(pdev->bus, &region, res);
+	*bus_addr = region.start + (dma_addr - res->start);
+
+	return 0;
+}
+
 static void __iomem *nvme_map_cmb(struct nvme_dev *dev)
 {
+	int rc;
 	u64 szu, size, offset;
 	resource_size_t bar_size;
 	struct pci_dev *pdev = to_pci_dev(dev->dev);
@@ -1553,6 +1574,13 @@ static void __iomem *nvme_map_cmb(struct nvme_dev *dev)
 
 	dev->cmb_dma_addr = dma_addr;
 	dev->cmb_size = size;
+
+	rc = nvme_find_cmb_bus_addr(pdev, dma_addr, size, &dev->cmb_bus_addr);
+	if (rc) {
+		iounmap(cmb);
+		return NULL;
+	}
+
 	return cmb;
 }
-- 
2.7.4
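P.S. For readers new to the host-side vs. bus-side address distinction
this patch relies on, here is a minimal illustrative sketch. It is not
part of the patch: the function name cmb_addr_example() and the window
addresses in the comments are made up; only pcibios_resource_to_bus()
and dev_info() are real kernel APIs, used the same way as in
nvme_find_cmb_bus_addr() above.

#include <linux/pci.h>
#include <linux/ioport.h>

/*
 * Hypothetical example: show both views of a CMB BAR on a host whose
 * bridge does NOT identity-map outbound traffic.
 */
static void cmb_addr_example(struct pci_dev *pdev, struct resource *cmb_bar)
{
	struct pci_bus_region region;

	/*
	 * Suppose (made-up numbers) the host bridge maps CPU physical
	 * 0x40000000000 to PCI bus address 0x80000000. Then for a CMB
	 * BAR living in that window:
	 *
	 *   cmb_bar->start (CPU/DMA side)  == 0x40000000000
	 *   region.start   (PCI bus side)  == 0x80000000
	 *
	 * The controller fetches SQ entries from the bus side using the
	 * address written into the SQ base, so it must be handed
	 * region.start, not cmb_bar->start.
	 */
	pcibios_resource_to_bus(pdev->bus, &region, cmb_bar);
	dev_info(&pdev->dev, "CMB: cpu %pa -> bus %#llx\n",
		 &cmb_bar->start, (unsigned long long)region.start);
}

On hosts with a 1:1 outbound mapping (e.g. typical x86), region.start
equals cmb_bar->start, which is why the pre-patch code happened to work
there and only broke on platforms like Broadcom iProc.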