From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-kernel@vger.kernel.org, nouveau@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org
Cc: airlied@linux.ie, bskeggs@redhat.com, gnurou@gmail.com,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v5 3/3] drm/nouveau/fb/nv50: defer DMA mapping of scratch page to oneinit() hook
Date: Thu, 6 Oct 2016 16:49:30 +0100
Message-Id: <1475768970-32512-4-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1475768970-32512-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1475768970-32512-1-git-send-email-ard.biesheuvel@linaro.org>

The 100c08 scratch page is mapped using dma_map_page() before the TTM
layer has had a chance to set the DMA mask. This means we are still
running with the default mask of 32 bits when this code executes, which
causes problems on platforms with no memory below 4 GB (such as AMD
Seattle).

So move the dma_map_page() call to the .oneinit hook, which executes
after the DMA mask has been set.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv50.c | 34 ++++++++++++++++++++++----------
 1 file changed, 24 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv50.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv50.c
index 1b5fb02eab2a..d9bc4d11f145 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv50.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv50.c
@@ -210,6 +210,29 @@ nv50_fb_intr(struct nvkm_fb *base)
 	nvkm_fifo_chan_put(fifo, flags, &chan);
 }
 
+static int
+nv50_fb_oneinit(struct nvkm_fb *base)
+{
+	struct nv50_fb *fb = nv50_fb(base);
+	struct nvkm_device *device = fb->base.subdev.device;
+
+	fb->r100c08_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+	if (!fb->r100c08_page) {
+		nvkm_error(&fb->base.subdev, "failed 100c08 page alloc\n");
+		return -ENOMEM;
+	}
+
+	fb->r100c08 = dma_map_page(device->dev, fb->r100c08_page, 0, PAGE_SIZE,
+				   DMA_BIDIRECTIONAL);
+	if (dma_mapping_error(device->dev, fb->r100c08)) {
+		nvkm_error(&fb->base.subdev, "failed to map 100c08 page\n");
+		__free_page(fb->r100c08_page);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+
 static void
 nv50_fb_init(struct nvkm_fb *base)
 {
@@ -245,6 +268,7 @@ nv50_fb_dtor(struct nvkm_fb *base)
 static const struct nvkm_fb_func
 nv50_fb_ = {
 	.dtor = nv50_fb_dtor,
+	.oneinit = nv50_fb_oneinit,
 	.init = nv50_fb_init,
 	.intr = nv50_fb_intr,
 	.ram_new = nv50_fb_ram_new,
@@ -263,16 +287,6 @@ nv50_fb_new_(const struct nv50_fb_func *func, struct nvkm_device *device,
 	fb->func = func;
 	*pfb = &fb->base;
 
-	fb->r100c08_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
-	if (fb->r100c08_page) {
-		fb->r100c08 = dma_map_page(device->dev, fb->r100c08_page, 0,
-					   PAGE_SIZE, DMA_BIDIRECTIONAL);
-		if (dma_mapping_error(device->dev, fb->r100c08))
-			return -EFAULT;
-	} else {
-		nvkm_warn(&fb->base.subdev, "failed 100c08 page alloc\n");
-	}
-
 	return 0;
 }
 
-- 
2.7.4
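
For readers following along: the ordering problem the commit message
describes can be sketched in a few lines. The sketch below is
illustrative only; the helper name (example_probe_order) and the 40-bit
mask are made up for this note, while the DMA API calls themselves
(dma_set_mask_and_coherent, dma_map_page, dma_mapping_error) are the
standard kernel interfaces the patch relies on.

/*
 * Hypothetical sketch, not nouveau code: long-lived DMA mappings
 * should only be created after the device's DMA mask has been raised.
 */
#include <linux/dma-mapping.h>
#include <linux/gfp.h>

static int example_probe_order(struct device *dev, struct page *page,
			       dma_addr_t *addr)
{
	/*
	 * Raise the mask first (for nouveau, the TTM layer does this).
	 * Under the default 32-bit mask, a machine with no RAM below
	 * 4 GB has nothing addressable to map or bounce through.
	 */
	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40)))
		return -EIO;

	/*
	 * ...and only then create mappings that live for the device's
	 * whole lifetime, such as the 100c08 scratch page.
	 */
	*addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, *addr))
		return -EFAULT;

	return 0;
}

Deferring the mapping to .oneinit achieves this ordering: per the
commit message, the hook executes after the DMA mask has been set, so
the scratch-page mapping lands on the correct side of the mask update.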