Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753151AbdHQM6v (ORCPT );
	Thu, 17 Aug 2017 08:58:51 -0400
Received: from 8bytes.org ([81.169.241.247]:43394 "EHLO theia.8bytes.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752729AbdHQM45 (ORCPT );
	Thu, 17 Aug 2017 08:56:57 -0400
From: Joerg Roedel 
To: iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	Suravee Suthikulpanit ,
	Joerg Roedel ,
	Ben Skeggs ,
	David Airlie ,
	dri-devel@lists.freedesktop.org,
	nouveau@lists.freedesktop.org
Subject: [PATCH 08/13] drm/nouveau/imem/gk20a: Use synchronized interface of the IOMMU-API
Date: Thu, 17 Aug 2017 14:56:31 +0200
Message-Id: <1502974596-23835-9-git-send-email-joro@8bytes.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1502974596-23835-1-git-send-email-joro@8bytes.org>
References: <1502974596-23835-1-git-send-email-joro@8bytes.org>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 2302
Lines: 60

From: Joerg Roedel 

The map and unmap functions of the IOMMU-API changed their semantics:
they no longer guarantee that the hardware TLBs are synchronized with
the page-table updates they made.

To make the conversion easier, new synchronized functions have been
introduced which give these guarantees again, until the code is
converted to use the new TLB-flush interface of the IOMMU-API, which
allows certain optimizations.

But for now, just convert this code to use the synchronized functions
so that it behaves as before.

Cc: Ben Skeggs 
Cc: David Airlie 
Cc: dri-devel@lists.freedesktop.org
Cc: nouveau@lists.freedesktop.org
Signed-off-by: Joerg Roedel 
---
 drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c b/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c
index cd5adbe..3f0de47 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c
@@ -322,8 +322,9 @@ gk20a_instobj_dtor_iommu(struct nvkm_memory *memory)
 
 	/* Unmap pages from GPU address space and free them */
 	for (i = 0; i < node->base.mem.size; i++) {
-		iommu_unmap(imem->domain,
-			    (r->offset + i) << imem->iommu_pgshift, PAGE_SIZE);
+		iommu_unmap_sync(imem->domain,
+				 (r->offset + i) << imem->iommu_pgshift,
+				 PAGE_SIZE);
 		dma_unmap_page(dev, node->dma_addrs[i], PAGE_SIZE,
 			       DMA_BIDIRECTIONAL);
 		__free_page(node->pages[i]);
@@ -458,14 +459,15 @@ gk20a_instobj_ctor_iommu(struct gk20a_instmem *imem, u32 npages, u32 align,
 	for (i = 0; i < npages; i++) {
 		u32 offset = (r->offset + i) << imem->iommu_pgshift;
 
-		ret = iommu_map(imem->domain, offset, node->dma_addrs[i],
-				PAGE_SIZE, IOMMU_READ | IOMMU_WRITE);
+		ret = iommu_map_sync(imem->domain, offset, node->dma_addrs[i],
+				     PAGE_SIZE, IOMMU_READ | IOMMU_WRITE);
		if (ret < 0) {
 			nvkm_error(subdev, "IOMMU mapping failure: %d\n", ret);
 
 			while (i-- > 0) {
 				offset -= PAGE_SIZE;
-				iommu_unmap(imem->domain, offset, PAGE_SIZE);
+				iommu_unmap_sync(imem->domain, offset,
+						 PAGE_SIZE);
 			}
 			goto release_area;
 		}
-- 
2.7.4
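
For illustration, a minimal sketch of what the later conversion of the
teardown loop to the explicit TLB-flush interface could look like,
assuming the flush entry point introduced elsewhere in this series is
iommu_flush_tlb_all(); the batching pattern and its placement here are
illustrative only and not part of this patch:

	/*
	 * Illustrative sketch, not part of this patch: batch the per-page
	 * unmaps and synchronize the hardware TLBs once for the whole
	 * range. The flush has to happen before dma_unmap_page() and
	 * __free_page() release the pages, so the device can no longer
	 * reach them through stale TLB entries.
	 */
	for (i = 0; i < node->base.mem.size; i++)
		iommu_unmap(imem->domain,
			    (r->offset + i) << imem->iommu_pgshift, PAGE_SIZE);

	/* Assumed explicit flush interface from this series. */
	iommu_flush_tlb_all(imem->domain);

Until such a conversion is done, iommu_unmap_sync() keeps the old
behaviour of flushing on every call, which is what this patch relies on.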