Subject: Re: [PATCH 20/26] iommu/dma: Refactor iommu_dma_alloc, part 2
To: Christoph Hellwig
Cc: Joerg Roedel, Catalin Marinas, Will Deacon, Tom Lendacky,
    iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org
References: <20190422175942.18788-1-hch@lst.de> <20190422175942.18788-21-hch@lst.de>
From: Robin Murphy
Message-ID: <9412baed-0d13-dab7-0bdc-90cfdf8e92f0@arm.com>
Date: Mon, 29 Apr 2019 15:45:16 +0100
In-Reply-To: <20190422175942.18788-21-hch@lst.de>
On 22/04/2019 18:59, Christoph Hellwig wrote:
> From: Robin Murphy

Honestly, I don't think anything is left of my patch here...

> Apart from the iommu_dma_alloc_remap() case which remains sufficiently
> different that it's better off being self-contained, the rest of the
> logic can now be consolidated into a single flow which separates the
> logically-distinct steps of allocating pages, getting the CPU address,
> and finally getting the IOMMU address.

...and it certainly doesn't do that any more.

It's clear that we have fundamentally different ways of reading code, so I
don't think it's productive to keep arguing personal preference - I still
find the end result here a fair bit more tolerable than before, so if you
update the commit message to reflect the actual change (at which point
there's really nothing left of my authorship) I can live with it.

Robin.

> Signed-off-by: Robin Murphy
> [hch: split the page allocation into a new helper to simplify the flow]
> Signed-off-by: Christoph Hellwig
> ---
>  drivers/iommu/dma-iommu.c | 65 +++++++++++++++++++++------------------
>  1 file changed, 35 insertions(+), 30 deletions(-)
> 
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 9b269f0792f3..acdfe866cb29 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -955,35 +955,14 @@ static void iommu_dma_free(struct device *dev, size_t size, void *cpu_addr,
>  		__iommu_dma_free(dev, size, cpu_addr);
>  }
>  
> -static void *iommu_dma_alloc(struct device *dev, size_t size,
> -		dma_addr_t *handle, gfp_t gfp, unsigned long attrs)
> +static void *iommu_dma_alloc_pages(struct device *dev, size_t size,
> +		struct page **pagep, gfp_t gfp, unsigned long attrs)
>  {
>  	bool coherent = dev_is_dma_coherent(dev);
> -	int ioprot = dma_info_to_prot(DMA_BIDIRECTIONAL, coherent, attrs);
>  	size_t alloc_size = PAGE_ALIGN(size);
>  	struct page *page = NULL;
>  	void *cpu_addr;
>  
> -	gfp |= __GFP_ZERO;
> -
> -	if (gfpflags_allow_blocking(gfp) &&
> -	    !(attrs & DMA_ATTR_FORCE_CONTIGUOUS))
> -		return iommu_dma_alloc_remap(dev, size, handle, gfp, attrs);
> -
> -	if (!gfpflags_allow_blocking(gfp) && !coherent) {
> -		cpu_addr = dma_alloc_from_pool(alloc_size, &page, gfp);
> -		if (!cpu_addr)
> -			return NULL;
> -
> -		*handle = __iommu_dma_map(dev, page_to_phys(page), size,
> -				ioprot);
> -		if (*handle == DMA_MAPPING_ERROR) {
> -			dma_free_from_pool(cpu_addr, alloc_size);
> -			return NULL;
> -		}
> -		return cpu_addr;
> -	}
> -
>  	if (gfpflags_allow_blocking(gfp))
>  		page = dma_alloc_from_contiguous(dev, alloc_size >> PAGE_SHIFT,
>  						 get_order(alloc_size),
> @@ -993,33 +972,59 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
>  	if (!page)
>  		return NULL;
>  
> -	*handle = __iommu_dma_map(dev, page_to_phys(page), size, ioprot);
> -	if (*handle == DMA_MAPPING_ERROR)
> -		goto out_free_pages;
> -
>  	if (!coherent || PageHighMem(page)) {
>  		pgprot_t prot = arch_dma_mmap_pgprot(dev, PAGE_KERNEL, attrs);
>  
>  		cpu_addr = dma_common_contiguous_remap(page, alloc_size,
>  				VM_USERMAP, prot, __builtin_return_address(0));
>  		if (!cpu_addr)
> -			goto out_unmap;
> +			goto out_free_pages;
>  
>  		if (!coherent)
>  			arch_dma_prep_coherent(page, size);
>  	} else {
>  		cpu_addr = page_address(page);
>  	}
> +
> +	*pagep = page;
>  	memset(cpu_addr, 0, alloc_size);
>  	return cpu_addr;
> -out_unmap:
> -	__iommu_dma_unmap(dev, *handle, size);
>  out_free_pages:
>  	if (!dma_release_from_contiguous(dev, page, alloc_size >> PAGE_SHIFT))
>  		__free_pages(page, get_order(alloc_size));
>  	return NULL;
>  }
>  
> +static void *iommu_dma_alloc(struct device *dev, size_t size,
> +		dma_addr_t *handle, gfp_t gfp, unsigned long attrs)
> +{
> +	bool coherent = dev_is_dma_coherent(dev);
> +	int ioprot = dma_info_to_prot(DMA_BIDIRECTIONAL, coherent, attrs);
> +	struct page *page = NULL;
> +	void *cpu_addr;
> +
> +	gfp |= __GFP_ZERO;
> +
> +	if (gfpflags_allow_blocking(gfp) &&
> +	    !(attrs & DMA_ATTR_FORCE_CONTIGUOUS))
> +		return iommu_dma_alloc_remap(dev, size, handle, gfp, attrs);
> +
> +	if (!gfpflags_allow_blocking(gfp) && !coherent)
> +		cpu_addr = dma_alloc_from_pool(PAGE_ALIGN(size), &page, gfp);
> +	else
> +		cpu_addr = iommu_dma_alloc_pages(dev, size, &page, gfp, attrs);
> +	if (!cpu_addr)
> +		return NULL;
> +
> +	*handle = __iommu_dma_map(dev, page_to_phys(page), size, ioprot);
> +	if (*handle == DMA_MAPPING_ERROR) {
> +		__iommu_dma_free(dev, size, cpu_addr);
> +		return NULL;
> +	}
> +
> +	return cpu_addr;
> +}
> +
>  static int __iommu_dma_mmap_pfn(struct vm_area_struct *vma,
>  		unsigned long pfn, size_t size)
>  {
> 
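
For readers skimming the thread, the consolidated flow that the quoted hunk
introduces boils down to the sketch below. It is just the patch's new
iommu_dma_alloc() restated with comments marking the logically-distinct steps
named in the commit message (allocate pages, get the CPU address, get the
IOMMU address); it is not an alternative implementation.

/*
 * Condensed restatement of the new iommu_dma_alloc() from the hunk above,
 * with step comments added for illustration only.
 */
static void *iommu_dma_alloc(struct device *dev, size_t size,
		dma_addr_t *handle, gfp_t gfp, unsigned long attrs)
{
	bool coherent = dev_is_dma_coherent(dev);
	int ioprot = dma_info_to_prot(DMA_BIDIRECTIONAL, coherent, attrs);
	struct page *page = NULL;
	void *cpu_addr;

	gfp |= __GFP_ZERO;

	/* The fully-remapped case stays in its own self-contained path. */
	if (gfpflags_allow_blocking(gfp) &&
	    !(attrs & DMA_ATTR_FORCE_CONTIGUOUS))
		return iommu_dma_alloc_remap(dev, size, handle, gfp, attrs);

	/*
	 * Steps 1 and 2: allocate the pages and obtain the CPU address,
	 * either from the atomic pool (non-blocking, non-coherent) or via
	 * the new iommu_dma_alloc_pages() helper.
	 */
	if (!gfpflags_allow_blocking(gfp) && !coherent)
		cpu_addr = dma_alloc_from_pool(PAGE_ALIGN(size), &page, gfp);
	else
		cpu_addr = iommu_dma_alloc_pages(dev, size, &page, gfp, attrs);
	if (!cpu_addr)
		return NULL;

	/*
	 * Step 3: obtain the IOMMU address last, so a mapping failure only
	 * has to undo the allocation via __iommu_dma_free().
	 */
	*handle = __iommu_dma_map(dev, page_to_phys(page), size, ioprot);
	if (*handle == DMA_MAPPING_ERROR) {
		__iommu_dma_free(dev, size, cpu_addr);
		return NULL;
	}

	return cpu_addr;
}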