From: Joerg Roedel
To: iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org, Suravee Suthikulpanit,
	Joerg Roedel, Robin Murphy, Will Deacon, Nate Watterson,
	Eric Auger, Mitchel Humpherys
Subject: [PATCH 04/13] iommu/dma: Use synchronized interface of the IOMMU-API
Date: Thu, 17 Aug 2017 14:56:27 +0200
Message-Id: <1502974596-23835-5-git-send-email-joro@8bytes.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1502974596-23835-1-git-send-email-joro@8bytes.org>
References: <1502974596-23835-1-git-send-email-joro@8bytes.org>

From: Joerg Roedel

The map and unmap functions of the IOMMU-API have changed their
semantics: they no longer guarantee that the hardware TLBs are
synchronized with the page-table updates they made.

To make the conversion easier, new synchronized functions have been
introduced which restore these guarantees until the code is converted
to the new TLB-flush interface of the IOMMU-API, which allows certain
optimizations.

For now, just convert this code to use the synchronized functions so
that it behaves as before.

Cc: Robin Murphy
Cc: Will Deacon
Cc: Nate Watterson
Cc: Eric Auger
Cc: Mitchel Humpherys
Signed-off-by: Joerg Roedel
---
 drivers/iommu/dma-iommu.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 9d1cebe..38c41a2 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -417,7 +417,7 @@ static void __iommu_dma_unmap(struct iommu_domain *domain, dma_addr_t dma_addr,
 	dma_addr -= iova_off;
 	size = iova_align(iovad, size + iova_off);
 
-	WARN_ON(iommu_unmap(domain, dma_addr, size) != size);
+	WARN_ON(iommu_unmap_sync(domain, dma_addr, size) != size);
 	iommu_dma_free_iova(cookie, dma_addr, size);
 }
 
@@ -572,7 +572,7 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
 		sg_miter_stop(&miter);
 	}
 
-	if (iommu_map_sg(domain, iova, sgt.sgl, sgt.orig_nents, prot)
+	if (iommu_map_sg_sync(domain, iova, sgt.sgl, sgt.orig_nents, prot)
 			< size)
 		goto out_free_sg;
 
@@ -631,7 +631,7 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
 	if (!iova)
 		return IOMMU_MAPPING_ERROR;
 
-	if (iommu_map(domain, iova, phys - iova_off, size, prot)) {
+	if (iommu_map_sync(domain, iova, phys - iova_off, size, prot)) {
 		iommu_dma_free_iova(cookie, iova, size);
 		return IOMMU_MAPPING_ERROR;
 	}
@@ -791,7 +791,7 @@ int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	 * We'll leave any physical concatenation to the IOMMU driver's
 	 * implementation - it knows better than we do.
	 */
-	if (iommu_map_sg(domain, iova, sg, nents, prot) < iova_len)
+	if (iommu_map_sg_sync(domain, iova, sg, nents, prot) < iova_len)
 		goto out_free_iova;
 
 	return __finalise_sg(dev, sg, nents, iova);
-- 
2.7.4
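
For reference, the *_sync() variants used above are introduced earlier in
this series and are not part of this patch. Below is a minimal sketch of
how such wrappers could look, assuming they simply pair the existing
map/unmap calls with the iommu_tlb_sync() hook from the series' new
TLB-flush interface. The function names are taken from the hunks above,
but the bodies are an illustration under that assumption, not the series'
actual implementation.

#include <linux/iommu.h>

/* Sketch only: assumed shape of the synchronized map wrapper */
static inline int iommu_map_sync(struct iommu_domain *domain,
				 unsigned long iova, phys_addr_t paddr,
				 size_t size, int prot)
{
	int ret = iommu_map(domain, iova, paddr, size, prot);

	/* Assumption: flush so the device sees the new mapping */
	if (!ret)
		iommu_tlb_sync(domain);

	return ret;
}

/* Sketch only: assumed shape of the synchronized unmap wrapper */
static inline size_t iommu_unmap_sync(struct iommu_domain *domain,
				      unsigned long iova, size_t size)
{
	size_t unmapped = iommu_unmap(domain, iova, size);

	/*
	 * Assumption: invalidate stale IOTLB entries before the IOVA
	 * range is handed back to the allocator.
	 */
	iommu_tlb_sync(domain);

	return unmapped;
}

iommu_map_sg_sync() would follow the same pattern around iommu_map_sg().
With wrappers of this shape, unconverted callers keep the old synchronous
behaviour, while code converted to the new interface can batch the TLB
flush across multiple map/unmap operations.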