Date: Fri, 29 Sep 2017 08:42:42 -0600
From: Keith Busch
To: Abhishek Shah
Cc: Jens Axboe, Christoph Hellwig, Sagi Grimberg,
	linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org,
	bcm-kernel-feedback-list@broadcom.com, stable@vger.kernel.org
Subject: Re: [PATCH] nvme-pci: Use PCI bus address for data/queues in CMB
Message-ID: <20170929144242.GN8463@localhost.localdomain>
In-Reply-To: <1506662966-10865-1-git-send-email-abhishek.shah@broadcom.com>

On Fri, Sep 29, 2017 at 10:59:26AM +0530, Abhishek Shah wrote:
> Currently, the NVMe PCI host driver programs the CMB dma address as
> the I/O SQ addresses. This results in failures on systems where 1:1
> outbound mapping is not used (for example, Broadcom iProc SoCs),
> because the CMB BAR will be programmed with the PCI bus address, but
> the NVMe PCI EP will try to access the CMB using the dma address.
>
> To have the CMB working on systems without 1:1 outbound mapping, we
> program the PCI bus address for the I/O SQs instead of the dma
> address. This approach works on systems with or without 1:1 outbound
> mapping.
>
> The patch is tested on the Broadcom Stingray platform (arm64), which
> does not have 1:1 outbound mapping, as well as on an x86 platform,
> which has 1:1 outbound mapping.
> Fixes: 8ffaadf7 ("NVMe: Use CMB for the IO SQes if available")
> Cc: stable@vger.kernel.org
> Signed-off-by: Abhishek Shah
> Reviewed-by: Anup Patel
> Reviewed-by: Ray Jui
> Reviewed-by: Scott Branden

Thanks for the patch.

On a similar note, we also break CMB usage in virtualization with
direct-assigned devices: the guest doesn't know the host physical bus
address, so it sets the CMB queue address incorrectly there, too. I
don't know of a way to fix that other than disabling CMB.

>  static void __iomem *nvme_map_cmb(struct nvme_dev *dev)
>  {
> +	int rc;
>  	u64 szu, size, offset;
>  	resource_size_t bar_size;
>  	struct pci_dev *pdev = to_pci_dev(dev->dev);
> @@ -1553,6 +1574,13 @@ static void __iomem *nvme_map_cmb(struct nvme_dev *dev)
> 
>  	dev->cmb_dma_addr = dma_addr;
>  	dev->cmb_size = size;
> +
> +	rc = nvme_find_cmb_bus_addr(pdev, dma_addr, size, &dev->cmb_bus_addr);
> +	if (rc) {
> +		iounmap(cmb);
> +		return NULL;
> +	}
> +
>  	return cmb;
>  }

Minor suggestion: it's a little simpler if you find the bus address
before the ioremap:

---
@@ -1554,6 +1554,10 @@ static void __iomem *nvme_map_cmb(struct nvme_dev *dev)
 	size = bar_size - offset;
 	dma_addr = pci_resource_start(pdev, NVME_CMB_BIR(dev->cmbloc)) +
 			offset;
+
+	if (nvme_find_cmb_bus_addr(pdev, dma_addr, size, &dev->cmb_bus_addr))
+		return NULL;
+
 	cmb = ioremap_wc(dma_addr, size);
 	if (!cmb)
 		return NULL;
--