From: Abhishek Shah
Date: Sat, 30 Sep 2017 14:30:16 +0530
Subject: Re: [PATCH] nvme-pci: Use PCI bus address for data/queues in CMB
To: Keith Busch
Cc: Jens Axboe, Christoph Hellwig, Sagi Grimberg, linux-nvme@lists.infradead.org,
    open list, BCM Kernel Feedback, stable@vger.kernel.org
In-Reply-To: <20170929144242.GN8463@localhost.localdomain>
References: <1506662966-10865-1-git-send-email-abhishek.shah@broadcom.com>
    <20170929144242.GN8463@localhost.localdomain>

Hi Keith,

On Fri, Sep 29, 2017 at 8:12 PM, Keith Busch wrote:
>
> On Fri, Sep 29, 2017 at 10:59:26AM +0530, Abhishek Shah wrote:
> > Currently, the NVMe PCI host driver programs the CMB dma address as
> > the I/O SQ addresses. This results in failures on systems where 1:1
> > outbound mapping is not used (for example, Broadcom iProc SoCs),
> > because the CMB BAR will be programmed with the PCI bus address
> > while the NVMe PCI EP will try to access the CMB using the dma
> > address.
> >
> > To have the CMB working on systems without 1:1 outbound mapping, we
> > program the PCI bus address for the I/O SQs instead of the dma
> > address. This approach works on systems with or without 1:1
> > outbound mapping.
> >
> > The patch is tested on the Broadcom Stingray platform (arm64), which
> > does not have 1:1 outbound mapping, as well as on an x86 platform,
> > which has 1:1 outbound mapping.
> >
> > Fixes: 8ffaadf7 ("NVMe: Use CMB for the IO SQes if available")
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Abhishek Shah
> > Reviewed-by: Anup Patel
> > Reviewed-by: Ray Jui
> > Reviewed-by: Scott Branden
>
> Thanks for the patch.
>
> On a similar note, we also break CMB usage in virtualization with
> direct-assigned devices: the guest doesn't know the host physical bus
> address, so it sets the CMB queue address incorrectly there, too. I
> don't know of a way to fix that other than disabling CMB.

I don't have much idea on CMB usage in virtualization... will let
someone else comment on this.
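For context, the hunks quoted below call a new helper,
nvme_find_cmb_bus_addr(), whose body is not included in this mail. A
minimal sketch of what such a CPU-physical-to-PCI-bus translation could
look like follows, assuming pcibios_resource_to_bus() is used; the
helper actually posted in the patch may differ.

/* Illustrative sketch only, not a hunk from the actual patch. */
static int nvme_find_cmb_bus_addr(struct pci_dev *pdev,
				  resource_size_t phys_addr,
				  resource_size_t size,
				  resource_size_t *bus_addr)
{
	struct pci_bus_region region;
	struct resource res = {
		.start = phys_addr,
		.end   = phys_addr + size - 1,
		.flags = IORESOURCE_MEM,
	};

	/*
	 * Translate the CPU physical address range backing the CMB BAR
	 * into the range the device sees on the PCI bus.  On hosts with
	 * a 1:1 outbound mapping the two are identical; on e.g. Broadcom
	 * iProc they are not.
	 */
	pcibios_resource_to_bus(pdev->bus, &region, &res);
	if (!region.start)
		return -ENODEV;

	*bus_addr = region.start;
	return 0;
}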
> >
> >  static void __iomem *nvme_map_cmb(struct nvme_dev *dev)
> >  {
> > +	int rc;
> >  	u64 szu, size, offset;
> >  	resource_size_t bar_size;
> >  	struct pci_dev *pdev = to_pci_dev(dev->dev);
> > @@ -1553,6 +1574,13 @@ static void __iomem *nvme_map_cmb(struct nvme_dev *dev)
> >
> >  	dev->cmb_dma_addr = dma_addr;
> >  	dev->cmb_size = size;
> > +
> > +	rc = nvme_find_cmb_bus_addr(pdev, dma_addr, size, &dev->cmb_bus_addr);
> > +	if (rc) {
> > +		iounmap(cmb);
> > +		return NULL;
> > +	}
> > +
> >  	return cmb;
> >  }
>
> Minor suggestion: it's a little simpler if you find the bus address
> before ioremap:
>
> ---
> @@ -1554,6 +1554,10 @@ static void __iomem *nvme_map_cmb(struct nvme_dev *dev)
>  	size = bar_size - offset;
>
>  	dma_addr = pci_resource_start(pdev, NVME_CMB_BIR(dev->cmbloc)) + offset;
> +
> +	if (nvme_find_cmb_bus_addr(pdev, dma_addr, size, &dev->cmb_bus_addr))
> +		return NULL;
> +
>  	cmb = ioremap_wc(dma_addr, size);
>  	if (!cmb)
>  		return NULL;
> --

Thanks for the suggestion, will push patch with this change.

Regards,
Abhishek
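Put together, the two hunks above would leave the tail of
nvme_map_cmb() reading roughly as follows. This is a sketch of the
suggested ordering assembled from the quoted diffs, assuming the
nvme_find_cmb_bus_addr() helper from the patch; the actual respin may
differ.

	dma_addr = pci_resource_start(pdev, NVME_CMB_BIR(dev->cmbloc)) + offset;

	/* Resolve the PCI bus address first; nothing to undo on failure. */
	if (nvme_find_cmb_bus_addr(pdev, dma_addr, size, &dev->cmb_bus_addr))
		return NULL;

	cmb = ioremap_wc(dma_addr, size);
	if (!cmb)
		return NULL;

	dev->cmb_dma_addr = dma_addr;
	dev->cmb_size = size;
	return cmb;

Resolving the bus address before ioremap_wc() lets the error path simply
return NULL, avoiding the iounmap() cleanup needed in the first version.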