From: Ard Biesheuvel
To: linux-kernel@vger.kernel.org, nouveau@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: airlied@linux.ie, bskeggs@redhat.com, gnurou@gmail.com, Ard Biesheuvel
Subject: [PATCH v5 1/3] drm/nouveau: set streaming DMA mask early
Date: Thu, 6 Oct 2016 16:49:28 +0100
Message-Id: <1475768970-32512-2-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1475768970-32512-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1475768970-32512-1-git-send-email-ard.biesheuvel@linaro.org>

Some subdevices (e.g., fb/nv50.c and fb/gf100.c) map a scratch page using
dma_map_page() well before the TTM layer has had a chance to set the DMA
mask. This may prevent the driver from loading at all on platforms whose
system memory is not covered by the default 32-bit DMA mask (i.e., when
all RAM is above 4 GB).

So set a preliminary DMA mask right after constructing the PCI device,
and base it on the .dma_bits member of the MMU subdevice, which is what
the TTM layer will base the definitive DMA mask on as well.

Signed-off-by: Ard Biesheuvel
---
 drivers/gpu/drm/nouveau/nvkm/engine/device/pci.c | 37 +++++++++++++++++++++++++++++----------
 1 file changed, 27 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/pci.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/pci.c
index 62ad0300cfa5..0030cd9543b2 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/pci.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/pci.c
@@ -1665,14 +1665,31 @@ nvkm_device_pci_new(struct pci_dev *pci_dev, const char *cfg, const char *dbg,
 	*pdevice = &pdev->device;
 	pdev->pdev = pci_dev;
 
-	return nvkm_device_ctor(&nvkm_device_pci_func, quirk, &pci_dev->dev,
-				pci_is_pcie(pci_dev) ? NVKM_DEVICE_PCIE :
-				pci_find_capability(pci_dev, PCI_CAP_ID_AGP) ?
-				NVKM_DEVICE_AGP : NVKM_DEVICE_PCI,
-				(u64)pci_domain_nr(pci_dev->bus) << 32 |
-				     pci_dev->bus->number << 16 |
-				     PCI_SLOT(pci_dev->devfn) << 8 |
-				     PCI_FUNC(pci_dev->devfn), name,
-				cfg, dbg, detect, mmio, subdev_mask,
-				&pdev->device);
+	ret = nvkm_device_ctor(&nvkm_device_pci_func, quirk, &pci_dev->dev,
+			       pci_is_pcie(pci_dev) ? NVKM_DEVICE_PCIE :
+			       pci_find_capability(pci_dev, PCI_CAP_ID_AGP) ?
+			       NVKM_DEVICE_AGP : NVKM_DEVICE_PCI,
+			       (u64)pci_domain_nr(pci_dev->bus) << 32 |
+				    pci_dev->bus->number << 16 |
+				    PCI_SLOT(pci_dev->devfn) << 8 |
+				    PCI_FUNC(pci_dev->devfn), name,
+			       cfg, dbg, detect, mmio, subdev_mask,
+			       &pdev->device);
+
+	if (ret)
+		return ret;
+
+	/*
+	 * Set a preliminary DMA mask based on the .dma_bits member of the
+	 * MMU subdevice. This allows other subdevices to create DMA mappings
+	 * in their init() or oneinit() methods, which may be called before the
+	 * TTM layer sets the DMA mask definitively.
+	 * This is necessary for platforms where the default DMA mask of 32
+	 * bits does not cover any system memory, i.e., when all RAM is > 4 GB.
+	 */
+	if (subdev_mask & BIT(NVKM_SUBDEV_MMU))
+		dma_set_mask_and_coherent(&pci_dev->dev,
+				DMA_BIT_MASK(pdev->device.mmu->dma_bits));
+
+	return 0;
 }
-- 
2.7.4
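
For readers unfamiliar with the ordering problem the patch addresses, the sketch below
(not part of the patch) shows the general pattern in a hypothetical PCI driver: widen the
streaming/coherent DMA mask before creating any DMA mapping, so dma_map_page() is not
constrained by the default 32-bit mask. The names my_drv_probe and MY_DMA_BITS are
made up for illustration; only the dma-mapping and PCI calls are real kernel APIs.

/*
 * Minimal sketch, assuming a device that can address MY_DMA_BITS bits.
 * Mirrors the ordering the patch enforces for nouveau: set the DMA mask
 * first, map pages afterwards.
 */
#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/pci.h>

#define MY_DMA_BITS	40	/* hypothetical addressing capability */

static int my_drv_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct page *scratch;
	dma_addr_t dma_handle;
	int ret;

	ret = pci_enable_device(pdev);
	if (ret)
		return ret;

	/*
	 * Widen the DMA mask first; until this succeeds the device is
	 * limited to the default 32-bit mask, which on some platforms
	 * does not cover any system memory at all.
	 */
	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(MY_DMA_BITS));
	if (ret)
		return ret;

	/* Only now is it safe to map pages, e.g. a scratch page. */
	scratch = alloc_page(GFP_KERNEL | __GFP_ZERO);
	if (!scratch)
		return -ENOMEM;

	dma_handle = dma_map_page(&pdev->dev, scratch, 0, PAGE_SIZE,
				  DMA_BIDIRECTIONAL);
	if (dma_mapping_error(&pdev->dev, dma_handle)) {
		__free_page(scratch);
		return -ENOMEM;
	}

	/* ... remaining device setup would go here ... */
	return 0;
}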