From: Christoph Hellwig
To: Robin Murphy
Cc: Joerg Roedel, Catalin Marinas, Will Deacon, Tom Murphy,
 iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH 07/24] iommu/dma: Move domain lookup into __iommu_dma_{map,unmap}
Date: Mon, 20 May 2019 09:29:31 +0200
Message-Id: <20190520072948.11412-8-hch@lst.de>
In-Reply-To: <20190520072948.11412-1-hch@lst.de>
References: <20190520072948.11412-1-hch@lst.de>

From: Robin Murphy

Most of the callers don't care, and the couple that do already have the
domain to hand for other reasons are in slow paths where the (trivial)
overhead of a repeated lookup will be utterly immaterial.
Signed-off-by: Robin Murphy
[hch: dropped the hunk touching iommu_dma_get_msi_page to avoid a
 conflict with another series]
Signed-off-by: Christoph Hellwig
---
 drivers/iommu/dma-iommu.c | 29 ++++++++++++++---------------
 1 file changed, 14 insertions(+), 15 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index c406abe3be01..6ece8f477fc8 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -448,9 +448,10 @@ static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
 			size >> iova_shift(iovad));
 }
 
-static void __iommu_dma_unmap(struct iommu_domain *domain, dma_addr_t dma_addr,
+static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
 		size_t size)
 {
+	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	struct iova_domain *iovad = &cookie->iovad;
 	size_t iova_off = iova_offset(iovad, dma_addr);
@@ -465,8 +466,9 @@ static void __iommu_dma_unmap(struct iommu_domain *domain, dma_addr_t dma_addr,
 }
 
 static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
-		size_t size, int prot, struct iommu_domain *domain)
+		size_t size, int prot)
 {
+	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	size_t iova_off = 0;
 	dma_addr_t iova;
@@ -565,7 +567,7 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
 static void __iommu_dma_free(struct device *dev, struct page **pages,
 		size_t size, dma_addr_t *handle)
 {
-	__iommu_dma_unmap(iommu_get_dma_domain(dev), *handle, size);
+	__iommu_dma_unmap(dev, *handle, size);
 	__iommu_dma_free_pages(pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
 	*handle = DMA_MAPPING_ERROR;
 }
@@ -718,14 +720,13 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
 static dma_addr_t __iommu_dma_map_page(struct device *dev, struct page *page,
 		unsigned long offset, size_t size, int prot)
 {
-	return __iommu_dma_map(dev, page_to_phys(page) + offset, size, prot,
-			iommu_get_dma_domain(dev));
+	return __iommu_dma_map(dev, page_to_phys(page) + offset, size, prot);
 }
 
 static void __iommu_dma_unmap_page(struct device *dev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	__iommu_dma_unmap(iommu_get_dma_domain(dev), handle, size);
+	__iommu_dma_unmap(dev, handle, size);
 }
 
 static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
@@ -734,11 +735,10 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 {
 	phys_addr_t phys = page_to_phys(page) + offset;
 	bool coherent = dev_is_dma_coherent(dev);
+	int prot = dma_info_to_prot(dir, coherent, attrs);
 	dma_addr_t dma_handle;
 
-	dma_handle =__iommu_dma_map(dev, phys, size,
-			dma_info_to_prot(dir, coherent, attrs),
-			iommu_get_dma_domain(dev));
+	dma_handle =__iommu_dma_map(dev, phys, size, prot);
 	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    dma_handle != DMA_MAPPING_ERROR)
 		arch_sync_dma_for_device(dev, phys, size, dir);
@@ -750,7 +750,7 @@ static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
 {
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		iommu_dma_sync_single_for_cpu(dev, dma_handle, size, dir);
-	__iommu_dma_unmap(iommu_get_dma_domain(dev), dma_handle, size);
+	__iommu_dma_unmap(dev, dma_handle, size);
 }
 
 /*
@@ -931,21 +931,20 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
 		sg = tmp;
 	}
 	end = sg_dma_address(sg) + sg_dma_len(sg);
-	__iommu_dma_unmap(iommu_get_dma_domain(dev), start, end - start);
+	__iommu_dma_unmap(dev, start, end - start);
 }
 
 static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
 	return __iommu_dma_map(dev, phys, size,
-			dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO,
-			iommu_get_dma_domain(dev));
+			dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO);
 }
 
 static void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	__iommu_dma_unmap(iommu_get_dma_domain(dev), handle, size);
+	__iommu_dma_unmap(dev, handle, size);
 }
 
 static void *iommu_dma_alloc(struct device *dev, size_t size,
@@ -1222,7 +1221,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
 	if (!msi_page)
 		return NULL;
 
-	iova = __iommu_dma_map(dev, msi_addr, size, prot, domain);
+	iova = __iommu_dma_map(dev, msi_addr, size, prot);
 	if (iova == DMA_MAPPING_ERROR)
 		goto out_free_page;
-- 
2.20.1