Date: Fri, 19 Apr 2019 10:23:48 +0200
From: Christoph Hellwig
To: Robin Murphy
Cc: Christoph Hellwig, Joerg Roedel, Catalin Marinas, Will Deacon,
	Tom Lendacky, iommu@lists.linux-foundation.org,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 12/21] dma-iommu: factor atomic pool allocations into helpers
Message-ID: <20190419082348.GA22299@lst.de>
In-Reply-To: <228ee57a-d7b2-48e0-a34e-81d5fba0a090@arm.com>
References: <20190327080448.5500-1-hch@lst.de>
 <20190327080448.5500-13-hch@lst.de>
 <20190410061157.GA5278@lst.de>
 <20190417063358.GA24139@lst.de>
 <83615173-a8b4-e0eb-bac3-1a58d61ea4ef@arm.com>
 <20190418163512.GA25347@lst.de>
 <228ee57a-d7b2-48e0-a34e-81d5fba0a090@arm.com>

On Thu, Apr 18, 2019 at 07:15:00PM +0100, Robin Murphy wrote:
> Still, I've worked in the vm_map_pages() stuff pending in MM and given them
> the same treatment to finish the picture. Both x86_64_defconfig and
> i386_defconfig do indeed compile and link fine as I expected, so I really
> would like to understand the concern around #ifdefs better.

This looks generally fine to me.  One thing I'd like to do is to
generally make use of the fact that __iommu_dma_get_pages returns NULL
for the force-contiguous case, as that cleans up a few things.  Also,
for the !DMA_REMAP case we need to try the page allocator when
dma_alloc_from_contiguous does not return a page.  What do you think of
the following incremental diff?  If that is fine with you I can fold it
in, add back the remaining patches from my series that aren't obsoleted
by your patches, and resend.
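
Concretely, the blocking-path fallback described above amounts to
something like the sketch below.  The helper name is made up purely
for illustration, and dev, count, page_order and gfp are assumed to
mean the same as in the diff that follows:

static struct page *iommu_dma_alloc_cma_or_pages(struct device *dev,
		int count, unsigned int page_order, gfp_t gfp)
{
	struct page *page;

	/* Prefer the CMA area when it can satisfy the request ... */
	page = dma_alloc_from_contiguous(dev, count, page_order,
					 gfp & __GFP_NOWARN);
	/* ... and fall back to the plain page allocator otherwise. */
	if (!page)
		page = alloc_pages(gfp, page_order);
	return page;
}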

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 1bc8d1de1a1d..50b44e220de3 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -894,7 +894,7 @@ static void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle,
 
 static void __iommu_dma_free(struct device *dev, void *cpu_addr, size_t size)
 {
-	struct page *page, **pages;
+	struct page *page = NULL;
 	int count = size >> PAGE_SHIFT;
 
 	/* Non-coherent atomic allocation? Easy */
@@ -902,24 +902,26 @@ static void __iommu_dma_free(struct device *dev, void *cpu_addr, size_t size)
 	    dma_free_from_pool(cpu_addr, size))
 		return;
 
-	/* Lowmem means a coherent atomic or CMA allocation */
-	if (!IS_ENABLED(CONFIG_DMA_REMAP) || !is_vmalloc_addr(cpu_addr)) {
-		page = virt_to_page(cpu_addr);
-		if (!dma_release_from_contiguous(dev, page, count))
-			__free_pages(page, get_order(size));
-		return;
-	}
-
-	/*
-	 * If it's remapped, then it's either non-coherent or highmem CMA, or
-	 * an iommu_dma_alloc_remap() construction.
-	 */
-	page = vmalloc_to_page(cpu_addr);
-	if (!dma_release_from_contiguous(dev, page, count)) {
-		pages = __iommu_dma_get_pages(cpu_addr);
+	if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && is_vmalloc_addr(cpu_addr)) {
+		/*
+		 * If the address is remapped, then it's either non-coherent
+		 * or highmem CMA, or an iommu_dma_alloc_remap() construction.
+		 */
+		struct page **pages = __iommu_dma_get_pages(cpu_addr);
+
+		if (pages)
 			__iommu_dma_free_pages(pages, count);
+		else
+			page = vmalloc_to_page(cpu_addr);
+
+		dma_common_free_remap(cpu_addr, size, VM_USERMAP);
+	} else {
+		/* Lowmem means a coherent atomic or CMA allocation */
+		page = virt_to_page(cpu_addr);
 	}
-	dma_common_free_remap(cpu_addr, size, VM_USERMAP);
+
+	if (page && !dma_release_from_contiguous(dev, page, count))
+		__free_pages(page, get_order(size));
 }
 
 static void *iommu_dma_alloc(struct device *dev, size_t size,
@@ -935,25 +937,26 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
 
 	gfp |= __GFP_ZERO;
 
+	if (IS_ENABLED(CONFIG_DMA_REMAP) && gfpflags_allow_blocking(gfp) &&
+	    !(attrs & DMA_ATTR_FORCE_CONTIGUOUS))
+		return iommu_dma_alloc_remap(dev, size, handle, gfp, attrs);
+
 	if (!gfpflags_allow_blocking(gfp)) {
-		if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !coherent)
+		if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !coherent) {
 			cpu_addr = dma_alloc_from_pool(alloc_size, &page, gfp);
-		else
-			page = alloc_pages(gfp, page_order);
-	} else if (!IS_ENABLED(CONFIG_DMA_REMAP) ||
-		   (attrs & DMA_ATTR_FORCE_CONTIGUOUS)) {
+			if (!cpu_addr)
+				return NULL;
+			goto do_iommu_map;
+		}
+	} else {
 		page = dma_alloc_from_contiguous(dev, count, page_order,
 				gfp & __GFP_NOWARN);
-	} else {
-		return iommu_dma_alloc_remap(dev, size, handle, gfp, attrs);
 	}
-
+	if (!page)
+		page = alloc_pages(gfp, page_order);
 	if (!page)
 		return NULL;
 
-	if (cpu_addr)
-		goto do_iommu_map;
-
 	if (IS_ENABLED(CONFIG_DMA_REMAP) && (!coherent || PageHighMem(page))) {
 		pgprot_t prot = arch_dma_mmap_pgprot(dev, PAGE_KERNEL, attrs);
 
@@ -1007,16 +1010,14 @@ static int iommu_dma_mmap(struct device *dev, struct vm_area_struct *vma,
 	if (off >= nr_pages || vma_pages(vma) > nr_pages - off)
 		return -ENXIO;
 
-	if (!is_vmalloc_addr(cpu_addr)) {
-		pfn = page_to_pfn(virt_to_page(cpu_addr));
-	} else if (!IS_ENABLED(CONFIG_DMA_REMAP) ||
-			(attrs & DMA_ATTR_FORCE_CONTIGUOUS)) {
+	if (IS_ENABLED(CONFIG_DMA_REMAP) && is_vmalloc_addr(cpu_addr)) {
+		struct page **pages = __iommu_dma_get_pages(cpu_addr);
+
+		if (pages)
+			return vm_map_pages(vma, pages, nr_pages);
 		pfn = vmalloc_to_pfn(cpu_addr);
 	} else {
-		struct page **pages = __iommu_dma_get_pages(cpu_addr);
-		if (!pages)
-			return -ENXIO;
-		return vm_map_pages(vma, pages, nr_pages);
+		pfn = page_to_pfn(virt_to_page(cpu_addr));
 	}
 
 	return remap_pfn_range(vma, vma->vm_start, pfn + off,
@@ -1028,26 +1029,25 @@ static int iommu_dma_get_sgtable(struct device *dev, struct sg_table *sgt,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs)
 {
-	struct page *page = NULL, **pages = NULL;
-	int ret = -ENXIO;
+	struct page *page;
+	int ret;
 
-	if (!is_vmalloc_addr(cpu_addr))
-		page = virt_to_page(cpu_addr);
-	else if (!IS_ENABLED(CONFIG_DMA_REMAP) ||
-		 (attrs & DMA_ATTR_FORCE_CONTIGUOUS))
+	if (IS_ENABLED(CONFIG_DMA_REMAP) && is_vmalloc_addr(cpu_addr)) {
+		struct page **pages = __iommu_dma_get_pages(cpu_addr);
+
+		if (pages)
+			return sg_alloc_table_from_pages(sgt,
+					__iommu_dma_get_pages(cpu_addr),
+					PAGE_ALIGN(size) >> PAGE_SHIFT, 0, size,
+					GFP_KERNEL);
 		page = vmalloc_to_page(cpu_addr);
-	else
-		pages = __iommu_dma_get_pages(cpu_addr);
-
-	if (page) {
-		ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
-		if (!ret)
-			sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
-	} else if (pages) {
-		ret = sg_alloc_table_from_pages(sgt, pages,
-				PAGE_ALIGN(size) >> PAGE_SHIFT,
-				0, size, GFP_KERNEL);
+	} else {
+		page = virt_to_page(cpu_addr);
 	}
+
+	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
+	if (!ret)
+		sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
 	return ret;
 }
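
For reference, this is roughly what __iommu_dma_free() ends up looking
like with the two hunks above applied.  It is reassembled from the
hunks for readability rather than copied from a tree, and the
atomic-pool condition at the top (not visible in the hunk context) is
assumed:

static void __iommu_dma_free(struct device *dev, void *cpu_addr, size_t size)
{
	struct page *page = NULL;
	int count = size >> PAGE_SHIFT;

	/* Non-coherent atomic allocation? Easy */
	if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
	    dma_free_from_pool(cpu_addr, size))
		return;

	if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && is_vmalloc_addr(cpu_addr)) {
		/*
		 * If the address is remapped, then it's either non-coherent
		 * or highmem CMA, or an iommu_dma_alloc_remap() construction.
		 */
		struct page **pages = __iommu_dma_get_pages(cpu_addr);

		if (pages)
			__iommu_dma_free_pages(pages, count);
		else
			page = vmalloc_to_page(cpu_addr);

		dma_common_free_remap(cpu_addr, size, VM_USERMAP);
	} else {
		/* Lowmem means a coherent atomic or CMA allocation */
		page = virt_to_page(cpu_addr);
	}

	/* Contiguous allocations go back to CMA if they came from it. */
	if (page && !dma_release_from_contiguous(dev, page, count))
		__free_pages(page, get_order(size));
}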