From: Christoph Hellwig
To: Mauro
Carvalho Chehab,
	Marek Szyprowski,
	Tomasz Figa,
	iommu@lists.linux-foundation.org
Cc: Robin Murphy,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-media@vger.kernel.org
Subject: [PATCH 8/8] WIP: add a dma_alloc_noncontiguous API
Date: Wed, 30 Sep 2020 18:09:17 +0200
Message-Id: <20200930160917.1234225-9-hch@lst.de>
In-Reply-To: <20200930160917.1234225-1-hch@lst.de>
References: <20200930160917.1234225-1-hch@lst.de>
X-Mailer: git-send-email 2.28.0
List-ID: linux-kernel@vger.kernel.org

Add a new API that returns a virtually non-contiguous array of pages
and a DMA address.  This API is only implemented for dma-iommu and will
not be implemented for non-IOMMU DMA API instances that have to
allocate contiguous memory.  It is up to the caller to check if the
API is available.

The intent is that media drivers can use this API if either:

 - no kernel mapping or only temporary kernel mappings are required.
   That is as a better replacement for DMA_ATTR_NO_KERNEL_MAPPING
 - a kernel mapping is required for cached and DMA mapped pages, but
   the driver also needs the pages to e.g. map them to userspace.
   In that sense it is a replacement for some aspects of the recently
   removed and never fully implemented DMA_ATTR_NON_CONSISTENT.

Signed-off-by: Christoph Hellwig
---
 drivers/iommu/dma-iommu.c   | 73 +++++++++++++++++++++++++------------
 include/linux/dma-mapping.h |  9 +++++
 kernel/dma/mapping.c        | 35 ++++++++++++++++++
 3 files changed, 93 insertions(+), 24 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 7922f545cd5eef..158026a856622c 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -565,23 +565,12 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
 	return pages;
 }
 
-/**
- * iommu_dma_alloc_remap - Allocate and map a buffer contiguous in IOVA space
- * @dev: Device to allocate memory for. Must be a real device
- *	 attached to an iommu_dma_domain
- * @size: Size of buffer in bytes
- * @dma_handle: Out argument for allocated DMA handle
- * @gfp: Allocation flags
- * @prot: pgprot_t to use for the remapped mapping
- * @attrs: DMA attributes for this allocation
- *
- * If @size is less than PAGE_SIZE, then a full CPU page will be allocated,
+/*
+ * If size is less than PAGE_SIZE, then a full CPU page will be allocated,
  * but an IOMMU which supports smaller pages might not map the whole thing.
- *
- * Return: Mapped virtual address, or NULL on failure.
  */
-static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
-		dma_addr_t *dma_handle, gfp_t gfp, pgprot_t prot,
+static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
+		size_t size, dma_addr_t *dma_handle, gfp_t gfp, pgprot_t prot,
 		unsigned long attrs)
 {
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
@@ -593,7 +582,6 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
 	struct page **pages;
 	struct sg_table sgt;
 	dma_addr_t iova;
-	void *vaddr;
 
 	*dma_handle = DMA_MAPPING_ERROR;
 
@@ -636,17 +624,10 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
 			< size)
 		goto out_free_sg;
 
-	vaddr = dma_common_pages_remap(pages, size, prot,
-			__builtin_return_address(0));
-	if (!vaddr)
-		goto out_unmap;
-
 	*dma_handle = iova;
 	sg_free_table(&sgt);
-	return vaddr;
+	return pages;
 
-out_unmap:
-	__iommu_dma_unmap(dev, iova, size);
 out_free_sg:
 	sg_free_table(&sgt);
 out_free_iova:
@@ -656,6 +637,46 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
 	return NULL;
 }
 
+static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, gfp_t gfp, pgprot_t prot,
+		unsigned long attrs)
+{
+	struct page **pages;
+	void *vaddr;
+
+	pages = __iommu_dma_alloc_noncontiguous(dev, size, dma_handle, gfp,
+			prot, attrs);
+	if (!pages)
+		return NULL;
+	vaddr = dma_common_pages_remap(pages, size, prot,
+			__builtin_return_address(0));
+	if (!vaddr)
+		goto out_unmap;
+	return vaddr;
+
+out_unmap:
+	__iommu_dma_unmap(dev, *dma_handle, size);
+	__iommu_dma_free_pages(pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
+	return NULL;
+}
+
+#ifdef CONFIG_DMA_REMAP
+static struct page **iommu_dma_alloc_noncontiguous(struct device *dev,
+		size_t size, dma_addr_t *dma_handle, gfp_t gfp,
+		unsigned long attrs)
+{
+	return __iommu_dma_alloc_noncontiguous(dev, size, dma_handle, gfp,
+			PAGE_KERNEL, attrs);
+}
+
+static void iommu_dma_free_noncontiguous(struct device *dev, size_t size,
+		struct page **pages,
		dma_addr_t dma_handle)
+{
+	__iommu_dma_unmap(dev, dma_handle, size);
+	__iommu_dma_free_pages(pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
+}
+#endif
+
 static void iommu_dma_sync_single_for_cpu(struct device *dev,
 		dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
 {
@@ -1110,6 +1131,10 @@ static const struct dma_map_ops iommu_dma_ops = {
 	.free			= iommu_dma_free,
 	.alloc_pages		= dma_common_alloc_pages,
 	.free_pages		= dma_common_free_pages,
+#ifdef CONFIG_DMA_REMAP
+	.alloc_noncontiguous	= iommu_dma_alloc_noncontiguous,
+	.free_noncontiguous	= iommu_dma_free_noncontiguous,
+#endif
 	.mmap			= iommu_dma_mmap,
 	.get_sgtable		= iommu_dma_get_sgtable,
 	.map_page		= iommu_dma_map_page,
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 4b9b1d64f5ec9e..51bbc32365bb8d 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -74,6 +74,10 @@ struct dma_map_ops {
 			gfp_t gfp);
 	void (*free_pages)(struct device *dev, size_t size, struct page *vaddr,
 			dma_addr_t dma_handle, enum dma_data_direction dir);
+	struct page **(*alloc_noncontiguous)(struct device *dev, size_t size,
+			dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs);
+	void (*free_noncontiguous)(struct device *dev, size_t size,
+			struct page **pages, dma_addr_t dma_handle);
 	int (*mmap)(struct device *, struct vm_area_struct *,
 			void *, dma_addr_t, size_t, unsigned long attrs);
 
@@ -384,6 +388,11 @@ void *dma_alloc_noncoherent(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp);
 void dma_free_noncoherent(struct device *dev, size_t size, void *vaddr,
 		dma_addr_t dma_handle, enum dma_data_direction dir);
+bool dma_can_alloc_noncontiguous(struct device *dev);
+struct page **dma_alloc_noncontiguous(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs);
+void dma_free_noncontiguous(struct device *dev, size_t size,
+		struct page **pages, dma_addr_t dma_handle);
 
 static inline dma_addr_t
 dma_map_single_attrs(struct device *dev, void *ptr, size_t size,
 		enum dma_data_direction dir, unsigned long attrs)
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 06115f59f4ffbf..6d975d1a20dd72 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -529,6 +529,41 @@ void dma_free_noncoherent(struct device *dev, size_t size, void *vaddr,
 }
 EXPORT_SYMBOL_GPL(dma_free_noncoherent);
 
+bool dma_can_alloc_noncontiguous(struct device *dev)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	return ops && ops->free_noncontiguous;
+}
+EXPORT_SYMBOL_GPL(dma_can_alloc_noncontiguous);
+
+struct page **dma_alloc_noncontiguous(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	if (WARN_ON_ONCE(!dma_can_alloc_noncontiguous(dev)))
+		return NULL;
+	if (attrs & ~DMA_ATTR_ALLOC_SINGLE_PAGES) {
+		dev_warn(dev, "invalid flags (0x%lx) for %s\n",
+			 attrs, __func__);
+		return NULL;
+	}
+	return ops->alloc_noncontiguous(dev, size, dma_handle, gfp, attrs);
+}
+EXPORT_SYMBOL_GPL(dma_alloc_noncontiguous);
+
+void dma_free_noncontiguous(struct device *dev, size_t size,
+		struct page **pages, dma_addr_t dma_handle)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	if (WARN_ON_ONCE(!dma_can_alloc_noncontiguous(dev)))
+		return;
+	ops->free_noncontiguous(dev, size, pages, dma_handle);
+}
+EXPORT_SYMBOL_GPL(dma_free_noncontiguous);
+
 int dma_supported(struct device *dev, u64 mask)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
-- 
2.28.0