From: Christoph Hellwig <hch@lst.de>
To: iommu@lists.linux-foundation.org
Cc: Linus Torvalds, Jon Mason, Joerg Roedel, David Woodhouse,
	Marek Szyprowski, Robin Murphy, x86@kernel.org,
	linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-ia64@vger.kernel.org, linux-parisc@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-arch@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 04/23] powerpc/iommu: remove the mapping_error dma_map_ops method
Date: Fri, 30 Nov 2018 14:22:12 +0100
Message-Id: <20181130132231.16512-5-hch@lst.de>
In-Reply-To: <20181130132231.16512-1-hch@lst.de>
References: <20181130132231.16512-1-hch@lst.de>

The powerpc iommu code already returns (~(dma_addr_t)0x0) on mapping
failures, so we can switch over to returning DMA_MAPPING_ERROR and let
the core dma-mapping code handle the rest.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 arch/powerpc/include/asm/iommu.h     |  4 ----
 arch/powerpc/kernel/dma-iommu.c      |  6 ------
 arch/powerpc/kernel/iommu.c          | 28 ++++++++++++++--------------
 arch/powerpc/platforms/cell/iommu.c  |  1 -
 arch/powerpc/platforms/pseries/vio.c |  3 +--
 5 files changed, 15 insertions(+), 27 deletions(-)

diff --git a/arch/powerpc/include/asm/iommu.h b/arch/powerpc/include/asm/iommu.h
index 35db0cbc9222..55312990d1d2 100644
--- a/arch/powerpc/include/asm/iommu.h
+++ b/arch/powerpc/include/asm/iommu.h
@@ -143,8 +143,6 @@ struct scatterlist;
 
 #ifdef CONFIG_PPC64
 
-#define IOMMU_MAPPING_ERROR		(~(dma_addr_t)0x0)
-
 static inline void set_iommu_table_base(struct device *dev,
 					struct iommu_table *base)
 {
@@ -242,8 +240,6 @@ static inline int __init tce_iommu_bus_notifier_init(void)
 }
 #endif /* !CONFIG_IOMMU_API */
 
-int dma_iommu_mapping_error(struct device *dev, dma_addr_t dma_addr);
-
 #else
 
 static inline void *get_iommu_table_base(struct device *dev)
diff --git a/arch/powerpc/kernel/dma-iommu.c b/arch/powerpc/kernel/dma-iommu.c
index f9fe2080ceb9..5ebacf0fe41a 100644
--- a/arch/powerpc/kernel/dma-iommu.c
+++ b/arch/powerpc/kernel/dma-iommu.c
@@ -106,11 +106,6 @@ static u64 dma_iommu_get_required_mask(struct device *dev)
 	return mask;
 }
 
-int dma_iommu_mapping_error(struct device *dev, dma_addr_t dma_addr)
-{
-	return dma_addr == IOMMU_MAPPING_ERROR;
-}
-
 struct dma_map_ops dma_iommu_ops = {
 	.alloc			= dma_iommu_alloc_coherent,
 	.free			= dma_iommu_free_coherent,
@@ -121,6 +116,5 @@ struct dma_map_ops dma_iommu_ops = {
 	.map_page		= dma_iommu_map_page,
 	.unmap_page		= dma_iommu_unmap_page,
 	.get_required_mask	= dma_iommu_get_required_mask,
-	.mapping_error		= dma_iommu_mapping_error,
 };
 EXPORT_SYMBOL(dma_iommu_ops);
diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
index f0dc680e659a..ca7f73488c62 100644
--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -197,11 +197,11 @@ static unsigned long iommu_range_alloc(struct device *dev,
 	if (unlikely(npages == 0)) {
 		if (printk_ratelimit())
 			WARN_ON(1);
-		return IOMMU_MAPPING_ERROR;
+		return DMA_MAPPING_ERROR;
 	}
 
 	if (should_fail_iommu(dev))
-		return IOMMU_MAPPING_ERROR;
+		return DMA_MAPPING_ERROR;
 
 	/*
 	 * We don't need to disable preemption here because any CPU can
@@ -277,7 +277,7 @@ static unsigned long iommu_range_alloc(struct device *dev,
 		} else {
 			/* Give up */
 			spin_unlock_irqrestore(&(pool->lock), flags);
-			return IOMMU_MAPPING_ERROR;
+			return DMA_MAPPING_ERROR;
 		}
 	}
 
@@ -309,13 +309,13 @@ static dma_addr_t iommu_alloc(struct device *dev, struct iommu_table *tbl,
 		      unsigned long attrs)
 {
 	unsigned long entry;
-	dma_addr_t ret = IOMMU_MAPPING_ERROR;
+	dma_addr_t ret = DMA_MAPPING_ERROR;
 	int build_fail;
 
 	entry = iommu_range_alloc(dev, tbl, npages, NULL, mask, align_order);
 
-	if (unlikely(entry == IOMMU_MAPPING_ERROR))
-		return IOMMU_MAPPING_ERROR;
+	if (unlikely(entry == DMA_MAPPING_ERROR))
+		return DMA_MAPPING_ERROR;
 
 	entry += tbl->it_offset;	/* Offset into real TCE table */
 	ret = entry << tbl->it_page_shift;	/* Set the return dma address */
@@ -327,12 +327,12 @@ static dma_addr_t iommu_alloc(struct device *dev, struct iommu_table *tbl,
 
 	/* tbl->it_ops->set() only returns non-zero for transient errors.
 	 * Clean up the table bitmap in this case and return
-	 * IOMMU_MAPPING_ERROR. For all other errors the functionality is
+	 * DMA_MAPPING_ERROR. For all other errors the functionality is
 	 * not altered.
 	 */
 	if (unlikely(build_fail)) {
 		__iommu_free(tbl, ret, npages);
-		return IOMMU_MAPPING_ERROR;
+		return DMA_MAPPING_ERROR;
 	}
 
 	/* Flush/invalidate TLB caches if necessary */
@@ -477,7 +477,7 @@ int ppc_iommu_map_sg(struct device *dev, struct iommu_table *tbl,
 		DBG("  - vaddr: %lx, size: %lx\n", vaddr, slen);
 
 		/* Handle failure */
-		if (unlikely(entry == IOMMU_MAPPING_ERROR)) {
+		if (unlikely(entry == DMA_MAPPING_ERROR)) {
 			if (!(attrs & DMA_ATTR_NO_WARN) &&
 			    printk_ratelimit())
 				dev_info(dev, "iommu_alloc failed, tbl %p "
@@ -544,7 +544,7 @@ int ppc_iommu_map_sg(struct device *dev, struct iommu_table *tbl,
 	 */
 	if (outcount < incount) {
 		outs = sg_next(outs);
-		outs->dma_address = IOMMU_MAPPING_ERROR;
+		outs->dma_address = DMA_MAPPING_ERROR;
 		outs->dma_length = 0;
 	}
 
@@ -562,7 +562,7 @@ int ppc_iommu_map_sg(struct device *dev, struct iommu_table *tbl,
 			npages = iommu_num_pages(s->dma_address, s->dma_length,
 						 IOMMU_PAGE_SIZE(tbl));
 			__iommu_free(tbl, vaddr, npages);
-			s->dma_address = IOMMU_MAPPING_ERROR;
+			s->dma_address = DMA_MAPPING_ERROR;
 			s->dma_length = 0;
 		}
 		if (s == outs)
@@ -776,7 +776,7 @@ dma_addr_t iommu_map_page(struct device *dev, struct iommu_table *tbl,
 			  unsigned long mask, enum dma_data_direction direction,
 			  unsigned long attrs)
 {
-	dma_addr_t dma_handle = IOMMU_MAPPING_ERROR;
+	dma_addr_t dma_handle = DMA_MAPPING_ERROR;
 	void *vaddr;
 	unsigned long uaddr;
 	unsigned int npages, align;
@@ -796,7 +796,7 @@ dma_addr_t iommu_map_page(struct device *dev, struct iommu_table *tbl,
 		dma_handle = iommu_alloc(dev, tbl, vaddr, npages, direction,
 					 mask >> tbl->it_page_shift, align,
 					 attrs);
-		if (dma_handle == IOMMU_MAPPING_ERROR) {
+		if (dma_handle == DMA_MAPPING_ERROR) {
 			if (!(attrs & DMA_ATTR_NO_WARN) &&
 			    printk_ratelimit())  {
 				dev_info(dev, "iommu_alloc failed, tbl %p "
@@ -868,7 +868,7 @@ void *iommu_alloc_coherent(struct device *dev, struct iommu_table *tbl,
 	io_order = get_iommu_order(size, tbl);
 	mapping = iommu_alloc(dev, tbl, ret, nio_pages, DMA_BIDIRECTIONAL,
 			      mask >> tbl->it_page_shift, io_order, 0);
-	if (mapping == IOMMU_MAPPING_ERROR) {
+	if (mapping == DMA_MAPPING_ERROR) {
 		free_pages((unsigned long)ret, order);
 		return NULL;
 	}
diff --git a/arch/powerpc/platforms/cell/iommu.c b/arch/powerpc/platforms/cell/iommu.c
index 12352a58072a..af2a3c15e0ec 100644
--- a/arch/powerpc/platforms/cell/iommu.c
+++ b/arch/powerpc/platforms/cell/iommu.c
@@ -654,7 +654,6 @@ static const struct dma_map_ops dma_iommu_fixed_ops = {
 	.dma_supported = dma_suported_and_switch,
 	.map_page = dma_fixed_map_page,
 	.unmap_page = dma_fixed_unmap_page,
-	.mapping_error = dma_iommu_mapping_error,
 };
 
 static void cell_dma_dev_setup(struct device *dev)
diff --git a/arch/powerpc/platforms/pseries/vio.c b/arch/powerpc/platforms/pseries/vio.c
index 88f1ad1d6309..a29ad7db918a 100644
--- a/arch/powerpc/platforms/pseries/vio.c
+++ b/arch/powerpc/platforms/pseries/vio.c
@@ -519,7 +519,7 @@ static dma_addr_t vio_dma_iommu_map_page(struct device *dev, struct page *page,
 {
 	struct vio_dev *viodev = to_vio_dev(dev);
 	struct iommu_table *tbl;
-	dma_addr_t ret = IOMMU_MAPPING_ERROR;
+	dma_addr_t ret = DMA_MAPPING_ERROR;
 
 	tbl = get_iommu_table_base(dev);
 	if (vio_cmo_alloc(viodev, roundup(size, IOMMU_PAGE_SIZE(tbl)))) {
@@ -625,7 +625,6 @@ static const struct dma_map_ops vio_dma_mapping_ops = {
 	.unmap_page        = vio_dma_iommu_unmap_page,
 	.dma_supported     = vio_dma_iommu_dma_supported,
 	.get_required_mask = vio_dma_get_required_mask,
-	.mapping_error     = dma_iommu_mapping_error,
 };
 
 /**
-- 
2.19.1
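
For readers outside the series: "let the core dma-mapping code handle the
rest" refers to the generic dma_mapping_error() helper, which after this
series compares the returned address against DMA_MAPPING_ERROR in common
code instead of indirecting through the per-implementation ->mapping_error
method removed here. Below is a minimal caller-side sketch, not part of the
patch; example_map_buffer() is a hypothetical driver helper, while
dma_map_single() and dma_mapping_error() are the existing generic APIs.

#include <linux/dma-mapping.h>

/*
 * Hypothetical driver helper (illustration only): map a kernel buffer
 * for DMA to the device and report failure.  The generic check below
 * works unchanged on powerpc after this patch, because the iommu code
 * now signals failure with DMA_MAPPING_ERROR like every other
 * implementation.
 */
static int example_map_buffer(struct device *dev, void *buf, size_t len,
			      dma_addr_t *dma)
{
	dma_addr_t addr;

	addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, addr))	/* addr == DMA_MAPPING_ERROR */
		return -ENOMEM;

	*dma = addr;
	return 0;
}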