From: Tom Murphy <tmurphy@arista.com>
To: iommu@lists.linux-foundation.org
Cc: dima@arista.com, jamessewart@arista.com, murphyt7@tcd.ie,
	Tom Murphy, Joerg Roedel, Will Deacon, Robin Murphy,
	Marek Szyprowski, Kukjin Kim, Krzysztof Kozlowski,
	Matthias Brugger, Andy Gross, David Brown, Rob Clark,
	Heiko Stuebner, Marc Zyngier, Thomas Gleixner,
	linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org, linux-mediatek@lists.infradead.org,
	linux-arm-msm@vger.kernel.org, linux-rockchip@lists.infradead.org
Subject: [PATCH 7/9] iommu/amd: Use the dma-iommu api
Date: Thu, 11 Apr 2019 19:47:36 +0100
Message-Id: <20190411184741.27540-8-tmurphy@arista.com>
In-Reply-To: <20190411184741.27540-1-tmurphy@arista.com>
References: <20190411184741.27540-1-tmurphy@arista.com>

Convert the AMD iommu driver to use the dma-iommu api.
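
In outline, the patch deletes the driver's private dma_ops_domain IOVA
allocator and routes the dma_ops callbacks through the shared dma-iommu
helpers. A condensed before/after sketch of the page-mapping path
(illustrative pseudocode only, not compilable driver code; parameters and
error handling elided):

```
/* Before: driver-private address allocator and mapping helper */
static dma_addr_t map_page(struct device *dev, struct page *page, ...)
{
	struct dma_ops_domain *dma_dom = to_dma_ops_domain(get_domain(dev));

	/* __map_single() allocates an IOVA and walks the page table itself */
	return __map_single(dev, dma_dom, page_to_phys(page) + offset,
			    size, dir, *dev->dma_mask);
}

/* After: the shared dma-iommu layer owns IOVA allocation and mapping */
static dma_addr_t map_page(struct device *dev, struct page *page, ...)
{
	return iommu_dma_map_page(dev, page, offset, size, dir2prot(dir));
}
```

The same pattern repeats for unmap_page, map_sg/unmap_sg and the coherent
allocation paths in the diff that follows.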
Signed-off-by: Tom Murphy <tmurphy@arista.com>
---
 drivers/iommu/Kconfig     |   1 +
 drivers/iommu/amd_iommu.c | 217 +++++++++++++-------------------------
 2 files changed, 77 insertions(+), 141 deletions(-)

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 6f07f3b21816..cc728305524b 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -136,6 +136,7 @@ config AMD_IOMMU
 	select PCI_PASID
 	select IOMMU_API
 	select IOMMU_IOVA
+	select IOMMU_DMA
 	depends on X86_64 && PCI && ACPI
 	---help---
 	  With this option you can enable support for AMD IOMMU hardware in
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index b45e0e033adc..218faf3a6d9c 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -32,6 +32,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1845,21 +1846,21 @@ static void iova_domain_flush_tlb(struct iova_domain *iovad)
  * Free a domain, only used if something went wrong in the
  * allocation path and we need to free an already allocated page table
  */
-static void dma_ops_domain_free(struct dma_ops_domain *dom)
+static void dma_ops_domain_free(struct protection_domain *domain)
 {
-	if (!dom)
+	if (!domain)
 		return;
 
-	del_domain_from_list(&dom->domain);
+	del_domain_from_list(domain);
 
-	put_iova_domain(&dom->iovad);
+	iommu_put_dma_cookie(&domain->domain);
 
-	free_pagetable(&dom->domain);
+	free_pagetable(domain);
 
-	if (dom->domain.id)
-		domain_id_free(dom->domain.id);
+	if (domain->id)
+		domain_id_free(domain->id);
 
-	kfree(dom);
+	kfree(domain);
 }
 
 /*
@@ -1867,37 +1868,46 @@ static void dma_ops_domain_free(struct dma_ops_domain *dom)
  * It also initializes the page table and the address allocator data
  * structures required for the dma_ops interface
  */
-static struct dma_ops_domain *dma_ops_domain_alloc(void)
+static struct protection_domain *dma_ops_domain_alloc(void)
 {
-	struct dma_ops_domain *dma_dom;
+	struct protection_domain *domain;
+	u64 size;
 
-	dma_dom = kzalloc(sizeof(struct dma_ops_domain), GFP_KERNEL);
-	if (!dma_dom)
+	domain = kzalloc(sizeof(struct protection_domain), GFP_KERNEL);
+	if (!domain)
 		return NULL;
 
-	if (protection_domain_init(&dma_dom->domain))
-		goto free_dma_dom;
+	if (protection_domain_init(domain))
+		goto free_domain;
 
-	dma_dom->domain.mode = PAGE_MODE_3_LEVEL;
-	dma_dom->domain.pt_root = (void *)get_zeroed_page(GFP_KERNEL);
-	dma_dom->domain.flags = PD_DMA_OPS_MASK;
-	if (!dma_dom->domain.pt_root)
-		goto free_dma_dom;
+	domain->mode = PAGE_MODE_3_LEVEL;
+	domain->pt_root = (void *)get_zeroed_page(GFP_KERNEL);
+	domain->flags = PD_DMA_OPS_MASK;
+	if (!domain->pt_root)
+		goto free_domain;
 
-	init_iova_domain(&dma_dom->iovad, PAGE_SIZE, IOVA_START_PFN);
+	domain->domain.pgsize_bitmap = AMD_IOMMU_PGSIZES;
+	domain->domain.type = IOMMU_DOMAIN_DMA;
+	domain->domain.ops = &amd_iommu_ops;
+	if (iommu_get_dma_cookie(&domain->domain) == -ENOMEM)
+		goto free_domain;
 
-	if (init_iova_flush_queue(&dma_dom->iovad, iova_domain_flush_tlb, NULL))
-		goto free_dma_dom;
+	size = 0; /* Size is only required if force_aperture is set */
+	if (iommu_dma_init_domain(&domain->domain, IOVA_START_PFN << PAGE_SHIFT,
+				  size, NULL))
+		goto free_cookie;
 
 	/* Initialize reserved ranges */
-	copy_reserved_iova(&reserved_iova_ranges, &dma_dom->iovad);
+	iommu_dma_copy_reserved_iova(&reserved_iova_ranges, &domain->domain);
 
-	add_domain_to_list(&dma_dom->domain);
+	add_domain_to_list(domain);
 
-	return dma_dom;
+	return domain;
 
-free_dma_dom:
-	dma_ops_domain_free(dma_dom);
+free_cookie:
+	iommu_put_dma_cookie(&domain->domain);
+free_domain:
+	dma_ops_domain_free(domain);
 
 	return NULL;
 }
@@ -2328,6 +2338,26 @@ static struct iommu_group *amd_iommu_device_group(struct device *dev)
 	return acpihid_device_group(dev);
 }
 
+static int amd_iommu_domain_get_attr(struct iommu_domain *domain,
+				     enum iommu_attr attr, void *data)
+{
+	switch (domain->type) {
+	case IOMMU_DOMAIN_UNMANAGED:
+		return -ENODEV;
+	case IOMMU_DOMAIN_DMA:
+		switch (attr) {
+		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
+			*(int *)data = !amd_iommu_unmap_flush;
+			return 0;
+		default:
+			return -ENODEV;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+}
+
 /*****************************************************************************
  *
  * The next functions belong to the dma_ops mapping/unmapping code.
@@ -2509,21 +2539,15 @@ static dma_addr_t map_page(struct device *dev, struct page *page,
 			   enum dma_data_direction dir,
 			   unsigned long attrs)
 {
-	phys_addr_t paddr = page_to_phys(page) + offset;
-	struct protection_domain *domain;
-	struct dma_ops_domain *dma_dom;
-	u64 dma_mask;
+	int prot = dir2prot(dir);
+	struct protection_domain *domain = get_domain(dev);
 
-	domain = get_domain(dev);
 	if (PTR_ERR(domain) == -EINVAL)
-		return (dma_addr_t)paddr;
+		return (dma_addr_t)page_to_phys(page) + offset;
 	else if (IS_ERR(domain))
 		return DMA_MAPPING_ERROR;
 
-	dma_mask = *dev->dma_mask;
-	dma_dom = to_dma_ops_domain(domain);
-
-	return __map_single(dev, dma_dom, paddr, size, dir, dma_mask);
+	return iommu_dma_map_page(dev, page, offset, size, prot);
 }
 
 /*
@@ -2532,16 +2556,11 @@ static dma_addr_t map_page(struct device *dev, struct page *page,
 static void unmap_page(struct device *dev, dma_addr_t dma_addr, size_t size,
 		       enum dma_data_direction dir, unsigned long attrs)
 {
-	struct protection_domain *domain;
-	struct dma_ops_domain *dma_dom;
-
-	domain = get_domain(dev);
+	struct protection_domain *domain = get_domain(dev);
 
 	if (IS_ERR(domain))
 		return;
 
-	dma_dom = to_dma_ops_domain(domain);
-
-	__unmap_single(dma_dom, dma_addr, size, dir);
+	iommu_dma_unmap_page(dev, dma_addr, size, dir, attrs);
 }
 
 static int sg_num_pages(struct device *dev,
@@ -2578,77 +2597,10 @@ static int sg_num_pages(struct device *dev,
 static int map_sg(struct device *dev, struct scatterlist *sglist,
 		  int nelems, enum dma_data_direction direction,
 		  unsigned long attrs)
 {
-	int mapped_pages = 0, npages = 0, prot = 0, i;
-	struct protection_domain *domain;
-	struct dma_ops_domain *dma_dom;
-	struct scatterlist *s;
-	unsigned long address;
-	u64 dma_mask;
-	int ret;
-
-	domain = get_domain(dev);
+	struct protection_domain *domain = get_domain(dev);
+
 	if (IS_ERR(domain))
 		return 0;
-
-	dma_dom  = to_dma_ops_domain(domain);
-	dma_mask = *dev->dma_mask;
-
-	npages = sg_num_pages(dev, sglist, nelems);
-
-	address = dma_ops_alloc_iova(dev, dma_dom, npages, dma_mask);
-	if (address == DMA_MAPPING_ERROR)
-		goto out_err;
-
-	prot = dir2prot(direction);
-
-	/* Map all sg entries */
-	for_each_sg(sglist, s, nelems, i) {
-		int j, pages = iommu_num_pages(sg_phys(s), s->length, PAGE_SIZE);
-
-		for (j = 0; j < pages; ++j) {
-			unsigned long bus_addr, phys_addr;
-
-			bus_addr  = address + s->dma_address + (j << PAGE_SHIFT);
-			phys_addr = (sg_phys(s) & PAGE_MASK) + (j << PAGE_SHIFT);
-			ret = iommu_map_page(domain, bus_addr, phys_addr, PAGE_SIZE, prot, GFP_ATOMIC);
-			if (ret)
-				goto out_unmap;
-
-			mapped_pages += 1;
-		}
-	}
-
-	/* Everything is mapped - write the right values into s->dma_address */
-	for_each_sg(sglist, s, nelems, i) {
-		s->dma_address += address + s->offset;
-		s->dma_length   = s->length;
-	}
-
-	return nelems;
-
-out_unmap:
-	dev_err(dev, "IOMMU mapping error in map_sg (io-pages: %d reason: %d)\n",
-		npages, ret);
-
-	for_each_sg(sglist, s, nelems, i) {
-		int j, pages = iommu_num_pages(sg_phys(s), s->length, PAGE_SIZE);
-
-		for (j = 0; j < pages; ++j) {
-			unsigned long bus_addr;
-
-			bus_addr = address + s->dma_address + (j << PAGE_SHIFT);
-			iommu_unmap_page(domain, bus_addr, PAGE_SIZE);
-
-			if (--mapped_pages == 0)
-				goto out_free_iova;
-		}
-	}
-
-out_free_iova:
-	free_iova_fast(&dma_dom->iovad, address >> PAGE_SHIFT, npages);
-
-out_err:
-	return 0;
+	return iommu_dma_map_sg(dev, sglist, nelems, dir2prot(direction));
 }
 
 /*
@@ -2659,20 +2611,11 @@ static void unmap_sg(struct device *dev, struct scatterlist *sglist,
 static void unmap_sg(struct device *dev, struct scatterlist *sglist, int nelems,
 		     enum dma_data_direction dir, unsigned long attrs)
 {
-	struct protection_domain *domain;
-	struct dma_ops_domain *dma_dom;
-	unsigned long startaddr;
-	int npages = 2;
-
-	domain = get_domain(dev);
+	struct protection_domain *domain = get_domain(dev);
 
 	if (IS_ERR(domain))
 		return;
 
-	startaddr = sg_dma_address(sglist) & PAGE_MASK;
-	dma_dom   = to_dma_ops_domain(domain);
-	npages    = sg_num_pages(dev, sglist, nelems);
-
-	__unmap_single(dma_dom, startaddr, npages << PAGE_SHIFT, dir);
+	iommu_dma_unmap_sg(dev, sglist, nelems, dir, attrs);
 }
 
 /*
@@ -2684,7 +2627,6 @@ static void *alloc_coherent(struct device *dev, size_t size,
 {
 	u64 dma_mask = dev->coherent_dma_mask;
 	struct protection_domain *domain;
-	struct dma_ops_domain *dma_dom;
 	struct page *page;
 
 	domain = get_domain(dev);
@@ -2695,7 +2637,6 @@ static void *alloc_coherent(struct device *dev, size_t size,
 	} else if (IS_ERR(domain))
 		return NULL;
 
-	dma_dom  = to_dma_ops_domain(domain);
 	size     = PAGE_ALIGN(size);
 	dma_mask = dev->coherent_dma_mask;
 	flag    &= ~(__GFP_DMA | __GFP_HIGHMEM | __GFP_DMA32);
@@ -2715,9 +2656,8 @@ static void *alloc_coherent(struct device *dev, size_t size,
 	if (!dma_mask)
 		dma_mask = *dev->dma_mask;
 
-	*dma_addr = __map_single(dev, dma_dom, page_to_phys(page),
-				 size, DMA_BIDIRECTIONAL, dma_mask);
-
+	*dma_addr = iommu_dma_map_page_coherent(dev, page, 0, size,
+						dir2prot(DMA_BIDIRECTIONAL));
 	if (*dma_addr == DMA_MAPPING_ERROR)
 		goto out_free;
 
@@ -2739,7 +2679,6 @@ static void free_coherent(struct device *dev, size_t size,
 			  unsigned long attrs)
 {
 	struct protection_domain *domain;
-	struct dma_ops_domain *dma_dom;
 	struct page *page;
 
 	page = virt_to_page(virt_addr);
@@ -2749,9 +2688,8 @@ static void free_coherent(struct device *dev, size_t size,
 	if (IS_ERR(domain))
 		goto free_mem;
 
-	dma_dom = to_dma_ops_domain(domain);
-
-	__unmap_single(dma_dom, dma_addr, size, DMA_BIDIRECTIONAL);
+	iommu_dma_unmap_page(dev, dma_addr, size, DMA_BIDIRECTIONAL,
+			     attrs);
 
 free_mem:
 	if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
@@ -2948,7 +2886,6 @@ static struct protection_domain *protection_domain_alloc(void)
 static struct iommu_domain *amd_iommu_domain_alloc(unsigned type)
 {
 	struct protection_domain *pdomain;
-	struct dma_ops_domain *dma_domain;
 
 	switch (type) {
 	case IOMMU_DOMAIN_UNMANAGED:
@@ -2969,12 +2906,11 @@ static struct iommu_domain *amd_iommu_domain_alloc(unsigned type)
 		break;
 	case IOMMU_DOMAIN_DMA:
-		dma_domain = dma_ops_domain_alloc();
-		if (!dma_domain) {
+		pdomain = dma_ops_domain_alloc();
+		if (!pdomain) {
 			pr_err("Failed to allocate\n");
 			return NULL;
 		}
-		pdomain = &dma_domain->domain;
 		break;
 	case IOMMU_DOMAIN_IDENTITY:
 		pdomain = protection_domain_alloc();
@@ -2993,7 +2929,6 @@ static struct iommu_domain *amd_iommu_domain_alloc(unsigned type)
 static void amd_iommu_domain_free(struct iommu_domain *dom)
 {
 	struct protection_domain *domain;
-	struct dma_ops_domain *dma_dom;
 
 	domain = to_pdomain(dom);
 
@@ -3008,8 +2943,7 @@ static void amd_iommu_domain_free(struct iommu_domain *dom)
 	switch (dom->type) {
 	case IOMMU_DOMAIN_DMA:
 		/* Now release the domain */
-		dma_dom = to_dma_ops_domain(domain);
-		dma_ops_domain_free(dma_dom);
+		dma_ops_domain_free(domain);
 		break;
 	default:
 		if (domain->mode != PAGE_MODE_NONE)
@@ -3278,9 +3212,10 @@ const struct iommu_ops amd_iommu_ops = {
 	.add_device = amd_iommu_add_device,
 	.remove_device = amd_iommu_remove_device,
 	.device_group = amd_iommu_device_group,
+	.domain_get_attr = amd_iommu_domain_get_attr,
 	.get_resv_regions = amd_iommu_get_resv_regions,
 	.put_resv_regions = amd_iommu_put_resv_regions,
-	.apply_resv_region = amd_iommu_apply_resv_region,
+	.apply_resv_region = iommu_dma_apply_resv_region,
 	.is_attach_deferred = amd_iommu_is_attach_deferred,
 	.pgsize_bitmap = AMD_IOMMU_PGSIZES,
 	.flush_iotlb_all = amd_iommu_flush_iotlb_all,
-- 
2.17.1