From: Christoph Hellwig <hch@lst.de>
To: iommu@lists.linux-foundation.org
Cc: Linus Torvalds, Jon Mason, Joerg Roedel, David Woodhouse,
	Marek Szyprowski, Robin Murphy, x86@kernel.org,
	linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-ia64@vger.kernel.org, linux-parisc@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-arch@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 15/23] x86/amd_gart: remove the mapping_error dma_map_ops method
Date: Fri, 30 Nov 2018 14:22:23 +0100
Message-Id: <20181130132231.16512-16-hch@lst.de>
X-Mailer: git-send-email 2.19.1
In-Reply-To: <20181130132231.16512-1-hch@lst.de>
References: <20181130132231.16512-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Return DMA_MAPPING_ERROR instead of the magic bad_dma_addr on a dma
mapping failure and let the core dma-mapping code handle the rest.

Remove the magic EMERGENCY_PAGES that the bad_dma_addr gets redirected
to.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
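Note for reviewers, not part of the patch itself: a rough sketch of the
consolidated check in the dma-mapping core that this conversion relies
on.  Once every dma_map_ops instance returns DMA_MAPPING_ERROR (the
all-ones dma_addr_t sentinel) on failure, the generic dma_mapping_error()
helper reduces to a plain comparison and per-driver ->mapping_error
callbacks such as gart_mapping_error() become redundant:

	/* sketch of the generic helper after this series */
	#define DMA_MAPPING_ERROR	(~(dma_addr_t)0)

	static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
	{
		/* every dma_map_ops instance returns this value on failure */
		if (dma_addr == DMA_MAPPING_ERROR)
			return -ENOMEM;
		return 0;
	}

Driver call sites are unchanged: they still test the address returned by
dma_map_page()/dma_map_single() with dma_mapping_error() before using it.
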
 arch/x86/kernel/amd_gart_64.c | 39 ++++++-----------------------------
 1 file changed, 6 insertions(+), 33 deletions(-)

diff --git a/arch/x86/kernel/amd_gart_64.c b/arch/x86/kernel/amd_gart_64.c
index 3f9d1b4019bb..4e733de93f41 100644
--- a/arch/x86/kernel/amd_gart_64.c
+++ b/arch/x86/kernel/amd_gart_64.c
@@ -50,8 +50,6 @@ static unsigned long iommu_pages; /* .. and in pages */
 
 static u32 *iommu_gatt_base; /* Remapping table */
 
-static dma_addr_t bad_dma_addr;
-
 /*
  * If this is disabled the IOMMU will use an optimized flushing strategy
  * of only flushing when an mapping is reused. With it true the GART is
@@ -74,8 +72,6 @@ static u32 gart_unmapped_entry;
 	(((x) & 0xfffff000) | (((x) >> 32) << 4) | GPTE_VALID | GPTE_COHERENT)
 #define GPTE_DECODE(x) (((x) & 0xfffff000) | (((u64)(x) & 0xff0) << 28))
 
-#define EMERGENCY_PAGES 32 /* = 128KB */
-
 #ifdef CONFIG_AGP
 #define AGPEXTERN extern
 #else
@@ -184,14 +180,6 @@ static void iommu_full(struct device *dev, size_t size, int dir)
 	 */
 
 	dev_err(dev, "PCI-DMA: Out of IOMMU space for %lu bytes\n", size);
-
-	if (size > PAGE_SIZE*EMERGENCY_PAGES) {
-		if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL)
-			panic("PCI-DMA: Memory would be corrupted\n");
-		if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
-			panic(KERN_ERR
-				"PCI-DMA: Random memory would be DMAed\n");
-	}
 #ifdef CONFIG_IOMMU_LEAK
 	dump_leak();
 #endif
@@ -220,7 +208,7 @@ static dma_addr_t dma_map_area(struct device *dev, dma_addr_t phys_mem,
 	int i;
 
 	if (unlikely(phys_mem + size > GART_MAX_PHYS_ADDR))
-		return bad_dma_addr;
+		return DMA_MAPPING_ERROR;
 
 	iommu_page = alloc_iommu(dev, npages, align_mask);
 	if (iommu_page == -1) {
@@ -229,7 +217,7 @@ static dma_addr_t dma_map_area(struct device *dev, dma_addr_t phys_mem,
 		if (panic_on_overflow)
 			panic("dma_map_area overflow %lu bytes\n", size);
 		iommu_full(dev, size, dir);
-		return bad_dma_addr;
+		return DMA_MAPPING_ERROR;
 	}
 
 	for (i = 0; i < npages; i++) {
@@ -271,7 +259,7 @@ static void gart_unmap_page(struct device *dev, dma_addr_t dma_addr,
 	int npages;
 	int i;
 
-	if (dma_addr < iommu_bus_base + EMERGENCY_PAGES*PAGE_SIZE ||
+	if (dma_addr == DMA_MAPPING_ERROR ||
 	    dma_addr >= iommu_bus_base + iommu_size)
 		return;
 
@@ -315,7 +303,7 @@ static int dma_map_sg_nonforce(struct device *dev, struct scatterlist *sg,
 		if (nonforced_iommu(dev, addr, s->length)) {
 			addr = dma_map_area(dev, addr, s->length, dir, 0);
 
-			if (addr == bad_dma_addr) {
+			if (addr == DMA_MAPPING_ERROR) {
 				if (i > 0)
 					gart_unmap_sg(dev, sg, i, dir, 0);
 				nents = 0;
@@ -471,7 +459,7 @@ static int gart_map_sg(struct device *dev, struct scatterlist *sg, int nents,
 
 	iommu_full(dev, pages << PAGE_SHIFT, dir);
 	for_each_sg(sg, s, nents, i)
-		s->dma_address = bad_dma_addr;
+		s->dma_address = DMA_MAPPING_ERROR;
 	return 0;
 }
 
@@ -490,7 +478,7 @@ gart_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_addr,
 	*dma_addr = dma_map_area(dev, virt_to_phys(vaddr), size,
 			DMA_BIDIRECTIONAL, (1UL << get_order(size)) - 1);
 	flush_gart();
-	if (unlikely(*dma_addr == bad_dma_addr))
+	if (unlikely(*dma_addr == DMA_MAPPING_ERROR))
 		goto out_free;
 	return vaddr;
 out_free:
@@ -507,11 +495,6 @@ gart_free_coherent(struct device *dev, size_t size, void *vaddr,
 	dma_direct_free_pages(dev, size, vaddr, dma_addr, attrs);
 }
 
-static int gart_mapping_error(struct device *dev, dma_addr_t dma_addr)
-{
-	return (dma_addr == bad_dma_addr);
-}
-
 static int no_agp;
 
 static __init unsigned long check_iommu_size(unsigned long aper, u64 aper_size)
@@ -695,7 +678,6 @@ static const struct dma_map_ops gart_dma_ops = {
 	.unmap_page = gart_unmap_page,
 	.alloc = gart_alloc_coherent,
 	.free = gart_free_coherent,
-	.mapping_error = gart_mapping_error,
 	.dma_supported = dma_direct_supported,
 };
 
@@ -784,19 +766,12 @@ int __init gart_iommu_init(void)
 	}
 #endif
 
-	/*
-	 * Out of IOMMU space handling.
-	 * Reserve some invalid pages at the beginning of the GART.
-	 */
-	bitmap_set(iommu_gart_bitmap, 0, EMERGENCY_PAGES);
-
 	pr_info("PCI-DMA: Reserving %luMB of IOMMU area in the AGP aperture\n",
 	       iommu_size >> 20);
 
 	agp_memory_reserved = iommu_size;
 	iommu_start = aper_size - iommu_size;
 	iommu_bus_base = info.aper_base + iommu_start;
-	bad_dma_addr = iommu_bus_base;
 	iommu_gatt_base = agp_gatt_table + (iommu_start>>PAGE_SHIFT);
 
 	/*
@@ -838,8 +813,6 @@ int __init gart_iommu_init(void)
 	if (!scratch)
 		panic("Cannot allocate iommu scratch page");
 	gart_unmapped_entry = GPTE_ENCODE(__pa(scratch));
-	for (i = EMERGENCY_PAGES; i < iommu_pages; i++)
-		iommu_gatt_base[i] = gart_unmapped_entry;
 
 	flush_gart();
 	dma_ops = &gart_dma_ops;
-- 
2.19.1