Subject: Re: [PATCH] iommu/dma-iommu: properly respect configured address space size
From: Robin Murphy
To: Marek Szyprowski, iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: Joerg Roedel
Date: Tue, 8 Nov 2016 11:37:23 +0000
Message-ID: <68e7a18b-739e-b73e-eacf-3cb6c1bd279a@arm.com>
In-Reply-To: <1478523973-8828-1-git-send-email-m.szyprowski@samsung.com>

Hi Marek,

On 07/11/16 13:06, Marek Szyprowski wrote:
> When one calls iommu_dma_init_domain() with a size smaller than the
> device's DMA mask, alloc_iova() will not respect it and will always
> assume that all IOVA addresses are allocated from the
> (base ... dev->dma_mask) range.

Is that actually a problem for anything?

> This patch fixes this issue by taking the configured address space size
> parameter into account (if it is smaller than the device's dma_mask).

TBH I've been pondering ripping the size stuff out of dma-iommu, as it
all stems from me originally failing to understand what dma_32bit_pfn
is actually for. The truth is that iova_domains just don't have a size
or upper limit; however, if devices with both large and small DMA masks
share a domain, then the top-down nature of the allocator means that
allocating for the less-capable devices would involve walking through
every out-of-range entry in the tree every time. Having cached32_node
based on dma_32bit_pfn just provides an optimised starting point for
searching within the smaller mask.

Would it hurt any of your use-cases to relax/rework the
reinitialisation checks in iommu_dma_init_domain()? Alternatively, if
we really do have a case for wanting a hard upper limit, it might make
more sense to add an end_pfn to the iova_domain and handle it in the
allocator itself.

Robin.

> Signed-off-by: Marek Szyprowski
> ---
>  drivers/iommu/dma-iommu.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index c5ab8667e6f2..8b4b72654359 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -212,11 +212,13 @@ static struct iova *__alloc_iova(struct iommu_domain *domain, size_t size,
>
>  	if (domain->geometry.force_aperture)
>  		dma_limit = min(dma_limit, domain->geometry.aperture_end);
> +
> +	dma_limit = min(dma_limit >> shift, (dma_addr_t)iovad->dma_32bit_pfn);
>  	/*
>  	 * Enforce size-alignment to be safe - there could perhaps be an
>  	 * attribute to control this per-device, or at least per-domain...
>  	 */
> -	return alloc_iova(iovad, length, dma_limit >> shift, true);
> +	return alloc_iova(iovad, length, dma_limit, true);
>  }
>
>  /* The IOVA allocator knows what we mapped, so just unmap whatever that was */
>
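
For illustration only, the end_pfn alternative mentioned above could look
roughly like the sketch below. This is not a real patch: the struct and
helper names are made up (the actual struct iova_domain lives in
include/linux/iova.h), it just shows where a hard ceiling could live and
how the allocator could clamp against it.

/*
 * Rough, self-contained sketch of the "end_pfn in the iova_domain" idea.
 * Field and helper names are hypothetical, not existing kernel API.
 */
struct iova_domain_sketch {
	unsigned long	start_pfn;	/* lowest allocatable pfn */
	unsigned long	dma_32bit_pfn;	/* optimised search start for small masks */
	unsigned long	end_pfn;	/* hypothetical hard upper bound of the domain */
};

/*
 * Clamp a caller-supplied limit against the domain's own ceiling, so an
 * allocation can never land above the configured address space size.
 */
static unsigned long clamp_limit_pfn(const struct iova_domain_sketch *iovad,
				     unsigned long limit_pfn)
{
	return limit_pfn < iovad->end_pfn ? limit_pfn : iovad->end_pfn;
}

Keeping the hard limit alongside the allocator state like this would mean
dma-iommu no longer has to funnel a size through and overload dma_32bit_pfn
for it.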