2021-03-01 08:59:11

by Christoph Hellwig

Subject: add a new dma_alloc_noncontiguous API v3

Hi all,

this series adds the new noncontiguous DMA allocation API requested by
various media driver maintainers.

Changes since v2:
- rebased to Linux 5.12-rc1
- dropped one already merged patch
- pass an attrs argument to dma_alloc_noncontiguous
- clarify the dma_vmap_noncontiguous documentation a bit
- fix double assignments in uvcvideo

Changes since v1:
- document that flush_kernel_vmap_range and invalidate_kernel_vmap_range
must be called once an allocation is mapped into KVA
- add dma-debug support
- remove the separate dma_handle argument, and instead create fully formed
DMA mapped scatterlists
- use a directional allocation in uvcvideo
- call invalidate_kernel_vmap_range from uvcvideo


2021-03-01 09:01:43

by Christoph Hellwig

Subject: [PATCH 4/6] dma-iommu: refactor iommu_dma_alloc_remap

Split out a new helper that only allocates a sg_table worth of
memory without mapping it into contiguous kernel address space.

Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: Tomasz Figa <[email protected]>
Tested-by: Ricardo Ribalda <[email protected]>
---
drivers/iommu/dma-iommu.c | 67 ++++++++++++++++++++-------------------
1 file changed, 35 insertions(+), 32 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 9ab6ee22c11088..b4d7bfffb3a0d2 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -649,23 +649,12 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
return pages;
}

-/**
- * iommu_dma_alloc_remap - Allocate and map a buffer contiguous in IOVA space
- * @dev: Device to allocate memory for. Must be a real device
- * attached to an iommu_dma_domain
- * @size: Size of buffer in bytes
- * @dma_handle: Out argument for allocated DMA handle
- * @gfp: Allocation flags
- * @prot: pgprot_t to use for the remapped mapping
- * @attrs: DMA attributes for this allocation
- *
- * If @size is less than PAGE_SIZE, then a full CPU page will be allocated,
+/*
+ * If size is less than PAGE_SIZE, then a full CPU page will be allocated,
* but an IOMMU which supports smaller pages might not map the whole thing.
- *
- * Return: Mapped virtual address, or NULL on failure.
*/
-static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
- dma_addr_t *dma_handle, gfp_t gfp, pgprot_t prot,
+static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
+ size_t size, struct sg_table *sgt, gfp_t gfp, pgprot_t prot,
unsigned long attrs)
{
struct iommu_domain *domain = iommu_get_dma_domain(dev);
@@ -675,11 +664,7 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
int ioprot = dma_info_to_prot(DMA_BIDIRECTIONAL, coherent, attrs);
unsigned int count, min_size, alloc_sizes = domain->pgsize_bitmap;
struct page **pages;
- struct sg_table sgt;
dma_addr_t iova;
- void *vaddr;
-
- *dma_handle = DMA_MAPPING_ERROR;

if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
iommu_deferred_attach(dev, domain))
@@ -706,38 +691,56 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
if (!iova)
goto out_free_pages;

- if (sg_alloc_table_from_pages(&sgt, pages, count, 0, size, GFP_KERNEL))
+ if (sg_alloc_table_from_pages(sgt, pages, count, 0, size, GFP_KERNEL))
goto out_free_iova;

if (!(ioprot & IOMMU_CACHE)) {
struct scatterlist *sg;
int i;

- for_each_sg(sgt.sgl, sg, sgt.orig_nents, i)
+ for_each_sg(sgt->sgl, sg, sgt->orig_nents, i)
arch_dma_prep_coherent(sg_page(sg), sg->length);
}

- if (iommu_map_sg_atomic(domain, iova, sgt.sgl, sgt.orig_nents, ioprot)
+ if (iommu_map_sg_atomic(domain, iova, sgt->sgl, sgt->orig_nents, ioprot)
< size)
goto out_free_sg;

+ sgt->sgl->dma_address = iova;
+ return pages;
+
+out_free_sg:
+ sg_free_table(sgt);
+out_free_iova:
+ iommu_dma_free_iova(cookie, iova, size, NULL);
+out_free_pages:
+ __iommu_dma_free_pages(pages, count);
+ return NULL;
+}
+
+static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
+ dma_addr_t *dma_handle, gfp_t gfp, pgprot_t prot,
+ unsigned long attrs)
+{
+ struct page **pages;
+ struct sg_table sgt;
+ void *vaddr;
+
+ pages = __iommu_dma_alloc_noncontiguous(dev, size, &sgt, gfp, prot,
+ attrs);
+ if (!pages)
+ return NULL;
+ *dma_handle = sgt.sgl->dma_address;
+ sg_free_table(&sgt);
vaddr = dma_common_pages_remap(pages, size, prot,
__builtin_return_address(0));
if (!vaddr)
goto out_unmap;
-
- *dma_handle = iova;
- sg_free_table(&sgt);
return vaddr;

out_unmap:
- __iommu_dma_unmap(dev, iova, size);
-out_free_sg:
- sg_free_table(&sgt);
-out_free_iova:
- iommu_dma_free_iova(cookie, iova, size, NULL);
-out_free_pages:
- __iommu_dma_free_pages(pages, count);
+ __iommu_dma_unmap(dev, *dma_handle, size);
+ __iommu_dma_free_pages(pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
return NULL;
}

--
2.29.2

2021-03-01 09:01:50

by Christoph Hellwig

Subject: [PATCH 2/6] dma-mapping: refactor dma_{alloc,free}_pages

Factor out internal versions without the dma_debug calls in preparation
for callers that will need different dma_debug calls.

Note that this changes the dma_debug calls to use the non-page-aligned
size values, but as long as alloc and free agree on one variant we are
fine.

Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: Tomasz Figa <[email protected]>
Tested-by: Ricardo Ribalda <[email protected]>
---
kernel/dma/mapping.c | 29 +++++++++++++++++++----------
1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 9ce86c77651c6f..07f964ebcda15e 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -477,11 +477,10 @@ void dma_free_attrs(struct device *dev, size_t size, void *cpu_addr,
}
EXPORT_SYMBOL(dma_free_attrs);

-struct page *dma_alloc_pages(struct device *dev, size_t size,
+static struct page *__dma_alloc_pages(struct device *dev, size_t size,
dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
{
const struct dma_map_ops *ops = get_dma_ops(dev);
- struct page *page;

if (WARN_ON_ONCE(!dev->coherent_dma_mask))
return NULL;
@@ -490,31 +489,41 @@ struct page *dma_alloc_pages(struct device *dev, size_t size,

size = PAGE_ALIGN(size);
if (dma_alloc_direct(dev, ops))
- page = dma_direct_alloc_pages(dev, size, dma_handle, dir, gfp);
- else if (ops->alloc_pages)
- page = ops->alloc_pages(dev, size, dma_handle, dir, gfp);
- else
+ return dma_direct_alloc_pages(dev, size, dma_handle, dir, gfp);
+ if (!ops->alloc_pages)
return NULL;
+ return ops->alloc_pages(dev, size, dma_handle, dir, gfp);
+}

- debug_dma_map_page(dev, page, 0, size, dir, *dma_handle);
+struct page *dma_alloc_pages(struct device *dev, size_t size,
+ dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
+{
+ struct page *page = __dma_alloc_pages(dev, size, dma_handle, dir, gfp);

+ if (page)
+ debug_dma_map_page(dev, page, 0, size, dir, *dma_handle);
return page;
}
EXPORT_SYMBOL_GPL(dma_alloc_pages);

-void dma_free_pages(struct device *dev, size_t size, struct page *page,
+static void __dma_free_pages(struct device *dev, size_t size, struct page *page,
dma_addr_t dma_handle, enum dma_data_direction dir)
{
const struct dma_map_ops *ops = get_dma_ops(dev);

size = PAGE_ALIGN(size);
- debug_dma_unmap_page(dev, dma_handle, size, dir);
-
if (dma_alloc_direct(dev, ops))
dma_direct_free_pages(dev, size, page, dma_handle, dir);
else if (ops->free_pages)
ops->free_pages(dev, size, page, dma_handle, dir);
}
+
+void dma_free_pages(struct device *dev, size_t size, struct page *page,
+ dma_addr_t dma_handle, enum dma_data_direction dir)
+{
+ debug_dma_unmap_page(dev, dma_handle, size, dir);
+ __dma_free_pages(dev, size, page, dma_handle, dir);
+}
EXPORT_SYMBOL_GPL(dma_free_pages);

int dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
--
2.29.2

2021-03-01 09:02:20

by Christoph Hellwig

Subject: [PATCH 3/6] dma-mapping: add a dma_alloc_noncontiguous API

Add a new API that returns a potentially virtually non-contiguous sg_table
and a DMA address. This API is only properly implemented for dma-iommu
and will simply return a contiguous chunk as a fallback.

The intent is that drivers can use this API if either:

- no kernel mapping or only temporary kernel mappings are required.
That is as a better replacement for DMA_ATTR_NO_KERNEL_MAPPING
- a kernel mapping is required for cached and DMA mapped pages, but
the driver also needs the pages to e.g. map them to userspace.
In that sense it is a replacement for some aspects of the recently
removed and never fully implemented DMA_ATTR_NON_CONSISTENT

Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: Tomasz Figa <[email protected]>
Tested-by: Ricardo Ribalda <[email protected]>
---
Documentation/core-api/dma-api.rst | 78 +++++++++++++++++++++
include/linux/dma-map-ops.h | 19 ++++++
include/linux/dma-mapping.h | 32 +++++++++
kernel/dma/mapping.c | 106 +++++++++++++++++++++++++++++
4 files changed, 235 insertions(+)

diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index 157a474ae54416..00a1d4fa3f9e4e 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -594,6 +594,84 @@ dev, size, dma_handle and dir must all be the same as those passed into
dma_alloc_noncoherent(). cpu_addr must be the virtual address returned by
dma_alloc_noncoherent().

+::
+
+ struct sg_table *
+ dma_alloc_noncontiguous(struct device *dev, size_t size,
+ enum dma_data_direction dir, gfp_t gfp,
+ unsigned long attrs);
+
+This routine allocates <size> bytes of non-coherent and possibly non-contiguous
+memory. It returns a pointer to struct sg_table that describes the allocated
+and DMA mapped memory, or NULL if the allocation failed. The resulting memory
+can be used for everything a struct page mapped into a scatterlist is suitable for.
+
+The returned sg_table is guaranteed to have a single DMA mapped segment as
+indicated by sgt->nents, but it might have multiple CPU side segments as
+indicated by sgt->orig_nents.
+
+The dir parameter specifies if data is read and/or written by the device,
+see dma_map_single() for details.
+
+The gfp parameter allows the caller to specify the ``GFP_`` flags (see
+kmalloc()) for the allocation, but rejects flags used to specify a memory
+zone such as GFP_DMA or GFP_HIGHMEM.
+
+The attrs argument must be either 0 or DMA_ATTR_ALLOC_SINGLE_PAGES.
+
+Before giving the memory to the device, dma_sync_sgtable_for_device() needs
+to be called, and before reading memory written by the device,
+dma_sync_sgtable_for_cpu(), just like for streaming DMA mappings that are
+reused.
+
+::
+
+ void
+ dma_free_noncontiguous(struct device *dev, size_t size,
+ struct sg_table *sgt,
+ enum dma_data_direction dir)
+
+Free memory previously allocated using dma_alloc_noncontiguous(). dev, size,
+and dir must all be the same as those passed into dma_alloc_noncontiguous().
+sgt must be the pointer returned by dma_alloc_noncontiguous().
+
+::
+
+ void *
+ dma_vmap_noncontiguous(struct device *dev, size_t size,
+ struct sg_table *sgt)
+
+Return a contiguous kernel mapping for an allocation returned from
+dma_alloc_noncontiguous(). dev and size must be the same as those passed into
+dma_alloc_noncontiguous(). sgt must be the pointer returned by
+dma_alloc_noncontiguous().
+
+Once a non-contiguous allocation is mapped using this function, the
+flush_kernel_vmap_range() and invalidate_kernel_vmap_range() APIs must be used
+to manage the coherency between the kernel mapping, the device and user space
+mappings (if any).
+
+::
+
+ void
+ dma_vunmap_noncontiguous(struct device *dev, void *vaddr)
+
+Unmap a kernel mapping returned by dma_vmap_noncontiguous(). dev must be the
+same as the one passed into dma_alloc_noncontiguous(). vaddr must be the pointer
+returned by dma_vmap_noncontiguous().
+
+
+::
+
+ int
+ dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
+ size_t size, struct sg_table *sgt)
+
+Map an allocation returned from dma_alloc_noncontiguous() into a user address
+space. dev and size must be the same as those passed into
+dma_alloc_noncontiguous(). sgt must be the pointer returned by
+dma_alloc_noncontiguous().
+
::

int
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 51872e736e7b1d..0d53a96a3d641f 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -22,6 +22,11 @@ struct dma_map_ops {
gfp_t gfp);
void (*free_pages)(struct device *dev, size_t size, struct page *vaddr,
dma_addr_t dma_handle, enum dma_data_direction dir);
+ struct sg_table *(*alloc_noncontiguous)(struct device *dev, size_t size,
+ enum dma_data_direction dir, gfp_t gfp,
+ unsigned long attrs);
+ void (*free_noncontiguous)(struct device *dev, size_t size,
+ struct sg_table *sgt, enum dma_data_direction dir);
int (*mmap)(struct device *, struct vm_area_struct *,
void *, dma_addr_t, size_t, unsigned long attrs);

@@ -198,6 +203,20 @@ static inline int dma_mmap_from_global_coherent(struct vm_area_struct *vma,
}
#endif /* CONFIG_DMA_DECLARE_COHERENT */

+/*
+ * This is the actual return value from the ->alloc_noncontiguous method.
+ * The users of the DMA API should only care about the sg_table, but to make
+ * the DMA-API internal vmapping and freeing easier we stash away the page
+ * array as well (except for the fallback case). This can go away any time,
+ * e.g. when a vmap-variant that takes a scatterlist comes along.
+ */
+struct dma_sgt_handle {
+ struct sg_table sgt;
+ struct page **pages;
+};
+#define sgt_handle(sgt) \
+ container_of((sgt), struct dma_sgt_handle, sgt)
+
int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr, size_t size,
unsigned long attrs);
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 2b8dce756e1fa1..954847f9a3e0fa 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -144,6 +144,15 @@ u64 dma_get_required_mask(struct device *dev);
size_t dma_max_mapping_size(struct device *dev);
bool dma_need_sync(struct device *dev, dma_addr_t dma_addr);
unsigned long dma_get_merge_boundary(struct device *dev);
+struct sg_table *dma_alloc_noncontiguous(struct device *dev, size_t size,
+ enum dma_data_direction dir, gfp_t gfp, unsigned long attrs);
+void dma_free_noncontiguous(struct device *dev, size_t size,
+ struct sg_table *sgt, enum dma_data_direction dir);
+void *dma_vmap_noncontiguous(struct device *dev, size_t size,
+ struct sg_table *sgt);
+void dma_vunmap_noncontiguous(struct device *dev, void *vaddr);
+int dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
+ size_t size, struct sg_table *sgt);
#else /* CONFIG_HAS_DMA */
static inline dma_addr_t dma_map_page_attrs(struct device *dev,
struct page *page, size_t offset, size_t size,
@@ -257,6 +266,29 @@ static inline unsigned long dma_get_merge_boundary(struct device *dev)
{
return 0;
}
+static inline struct sg_table *dma_alloc_noncontiguous(struct device *dev,
+ size_t size, enum dma_data_direction dir, gfp_t gfp,
+ unsigned long attrs)
+{
+ return NULL;
+}
+static inline void dma_free_noncontiguous(struct device *dev, size_t size,
+ struct sg_table *sgt, enum dma_data_direction dir)
+{
+}
+static inline void *dma_vmap_noncontiguous(struct device *dev, size_t size,
+ struct sg_table *sgt)
+{
+ return NULL;
+}
+static inline void dma_vunmap_noncontiguous(struct device *dev, void *vaddr)
+{
+}
+static inline int dma_mmap_noncontiguous(struct device *dev,
+ struct vm_area_struct *vma, size_t size, struct sg_table *sgt)
+{
+ return -EINVAL;
+}
#endif /* CONFIG_HAS_DMA */

struct page *dma_alloc_pages(struct device *dev, size_t size,
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 07f964ebcda15e..2b06a809d0b9df 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -539,6 +539,112 @@ int dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
}
EXPORT_SYMBOL_GPL(dma_mmap_pages);

+static struct sg_table *alloc_single_sgt(struct device *dev, size_t size,
+ enum dma_data_direction dir, gfp_t gfp)
+{
+ struct sg_table *sgt;
+ struct page *page;
+
+ sgt = kmalloc(sizeof(*sgt), gfp);
+ if (!sgt)
+ return NULL;
+ if (sg_alloc_table(sgt, 1, gfp))
+ goto out_free_sgt;
+ page = __dma_alloc_pages(dev, size, &sgt->sgl->dma_address, dir, gfp);
+ if (!page)
+ goto out_free_table;
+ sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
+ sg_dma_len(sgt->sgl) = sgt->sgl->length;
+ return sgt;
+out_free_table:
+ sg_free_table(sgt);
+out_free_sgt:
+ kfree(sgt);
+ return NULL;
+}
+
+struct sg_table *dma_alloc_noncontiguous(struct device *dev, size_t size,
+ enum dma_data_direction dir, gfp_t gfp, unsigned long attrs)
+{
+ const struct dma_map_ops *ops = get_dma_ops(dev);
+ struct sg_table *sgt;
+
+ if (WARN_ON_ONCE(attrs & ~DMA_ATTR_ALLOC_SINGLE_PAGES))
+ return NULL;
+
+ if (ops && ops->alloc_noncontiguous)
+ sgt = ops->alloc_noncontiguous(dev, size, dir, gfp, attrs);
+ else
+ sgt = alloc_single_sgt(dev, size, dir, gfp);
+
+ if (sgt) {
+ sgt->nents = 1;
+ debug_dma_map_sg(dev, sgt->sgl, sgt->orig_nents, 1, dir);
+ }
+ return sgt;
+}
+EXPORT_SYMBOL_GPL(dma_alloc_noncontiguous);
+
+static void free_single_sgt(struct device *dev, size_t size,
+ struct sg_table *sgt, enum dma_data_direction dir)
+{
+ __dma_free_pages(dev, size, sg_page(sgt->sgl), sgt->sgl->dma_address,
+ dir);
+ sg_free_table(sgt);
+ kfree(sgt);
+}
+
+void dma_free_noncontiguous(struct device *dev, size_t size,
+ struct sg_table *sgt, enum dma_data_direction dir)
+{
+ const struct dma_map_ops *ops = get_dma_ops(dev);
+
+ debug_dma_unmap_sg(dev, sgt->sgl, sgt->orig_nents, dir);
+ if (ops && ops->free_noncontiguous)
+ ops->free_noncontiguous(dev, size, sgt, dir);
+ else
+ free_single_sgt(dev, size, sgt, dir);
+}
+EXPORT_SYMBOL_GPL(dma_free_noncontiguous);
+
+void *dma_vmap_noncontiguous(struct device *dev, size_t size,
+ struct sg_table *sgt)
+{
+ const struct dma_map_ops *ops = get_dma_ops(dev);
+ unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+
+ if (ops && ops->alloc_noncontiguous)
+ return vmap(sgt_handle(sgt)->pages, count, VM_MAP, PAGE_KERNEL);
+ return page_address(sg_page(sgt->sgl));
+}
+EXPORT_SYMBOL_GPL(dma_vmap_noncontiguous);
+
+void dma_vunmap_noncontiguous(struct device *dev, void *vaddr)
+{
+ const struct dma_map_ops *ops = get_dma_ops(dev);
+
+ if (ops && ops->alloc_noncontiguous)
+ vunmap(vaddr);
+}
+EXPORT_SYMBOL_GPL(dma_vunmap_noncontiguous);
+
+int dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
+ size_t size, struct sg_table *sgt)
+{
+ const struct dma_map_ops *ops = get_dma_ops(dev);
+
+ if (ops && ops->alloc_noncontiguous) {
+ unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+
+ if (vma->vm_pgoff >= count ||
+ vma_pages(vma) > count - vma->vm_pgoff)
+ return -ENXIO;
+ return vm_map_pages(vma, sgt_handle(sgt)->pages, count);
+ }
+ return dma_mmap_pages(dev, vma, size, sg_page(sgt->sgl));
+}
+EXPORT_SYMBOL_GPL(dma_mmap_noncontiguous);
+
int dma_supported(struct device *dev, u64 mask)
{
const struct dma_map_ops *ops = get_dma_ops(dev);
--
2.29.2
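
For illustration, the intended calling convention looks roughly like this
from a driver's point of view (a minimal sketch with a hypothetical device
and buffer size, error paths trimmed; this is not part of the patch, only
a condensation of the documentation it adds):

#include <linux/dma-mapping.h>
#include <linux/highmem.h>
#include <linux/sizes.h>

static int my_setup_buffer(struct device *dev)
{
        struct sg_table *sgt;
        void *vaddr;

        /* One DMA segment (sgt->nents == 1), backed by possibly
         * non-contiguous CPU pages (sgt->orig_nents may be > 1).
         */
        sgt = dma_alloc_noncontiguous(dev, SZ_64K, DMA_FROM_DEVICE,
                                      GFP_KERNEL, 0);
        if (!sgt)
                return -ENOMEM;

        /* Optional contiguous kernel mapping of the CPU-side pages. */
        vaddr = dma_vmap_noncontiguous(dev, SZ_64K, sgt);
        if (!vaddr) {
                dma_free_noncontiguous(dev, SZ_64K, sgt, DMA_FROM_DEVICE);
                return -ENOMEM;
        }

        /* Program the device using sgt->sgl->dma_address, then hand the
         * buffer to the device:
         */
        dma_sync_sgtable_for_device(dev, sgt, DMA_FROM_DEVICE);

        /* ... device writes into the buffer ... */

        /* Hand the buffer back to the CPU before reading it.  The vmap
         * alias additionally needs invalidate_kernel_vmap_range(), as
         * documented above.
         */
        dma_sync_sgtable_for_cpu(dev, sgt, DMA_FROM_DEVICE);
        invalidate_kernel_vmap_range(vaddr, SZ_64K);

        dma_vunmap_noncontiguous(dev, vaddr);
        dma_free_noncontiguous(dev, SZ_64K, sgt, DMA_FROM_DEVICE);
        return 0;
}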

2021-03-01 09:07:38

by Christoph Hellwig

Subject: [PATCH 5/6] dma-iommu: implement ->alloc_noncontiguous

Implement support for allocating a non-contiguous DMA region.

Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: Tomasz Figa <[email protected]>
Tested-by: Ricardo Ribalda <[email protected]>
---
drivers/iommu/dma-iommu.c | 36 ++++++++++++++++++++++++++++++++++++
1 file changed, 36 insertions(+)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index b4d7bfffb3a0d2..714fa930d7b576 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -707,6 +707,7 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
goto out_free_sg;

sgt->sgl->dma_address = iova;
+ sgt->sgl->dma_length = size;
return pages;

out_free_sg:
@@ -744,6 +745,37 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
return NULL;
}

+#ifdef CONFIG_DMA_REMAP
+static struct sg_table *iommu_dma_alloc_noncontiguous(struct device *dev,
+ size_t size, enum dma_data_direction dir, gfp_t gfp,
+ unsigned long attrs)
+{
+ struct dma_sgt_handle *sh;
+
+ sh = kmalloc(sizeof(*sh), gfp);
+ if (!sh)
+ return NULL;
+
+ sh->pages = __iommu_dma_alloc_noncontiguous(dev, size, &sh->sgt, gfp,
+ PAGE_KERNEL, attrs);
+ if (!sh->pages) {
+ kfree(sh);
+ return NULL;
+ }
+ return &sh->sgt;
+}
+
+static void iommu_dma_free_noncontiguous(struct device *dev, size_t size,
+ struct sg_table *sgt, enum dma_data_direction dir)
+{
+ struct dma_sgt_handle *sh = sgt_handle(sgt);
+
+ __iommu_dma_unmap(dev, sgt->sgl->dma_address, size);
+ __iommu_dma_free_pages(sh->pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
+ sg_free_table(&sh->sgt);
+}
+#endif /* CONFIG_DMA_REMAP */
+
static void iommu_dma_sync_single_for_cpu(struct device *dev,
dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
{
@@ -1260,6 +1292,10 @@ static const struct dma_map_ops iommu_dma_ops = {
.free = iommu_dma_free,
.alloc_pages = dma_common_alloc_pages,
.free_pages = dma_common_free_pages,
+#ifdef CONFIG_DMA_REMAP
+ .alloc_noncontiguous = iommu_dma_alloc_noncontiguous,
+ .free_noncontiguous = iommu_dma_free_noncontiguous,
+#endif
.mmap = iommu_dma_mmap,
.get_sgtable = iommu_dma_get_sgtable,
.map_page = iommu_dma_map_page,
--
2.29.2

2021-03-03 01:25:10

by Christoph Hellwig

Subject: [PATCH 6/6] media: uvcvideo: Use dma_alloc_noncontiguos API

From: Ricardo Ribalda <[email protected]>

On architectures where the is no coherent caching such as ARM use the
dma_alloc_noncontiguos API and handle manually the cache flushing using
dma_sync_sgtable().

With this patch on the affected architectures we can measure up to 20x
performance improvement in uvc_video_copy_data_work().

Eg: aarch64 with an external usb camera

NON_CONTIGUOUS
frames: 999
packets: 999
empty: 0 (0 %)
errors: 0
invalid: 0
pts: 0 early, 0 initial, 999 ok
scr: 0 count ok, 0 diff ok
sof: 2048 <= sof <= 0, freq 0.000 kHz
bytes 67034480 : duration 33303
FPS: 29.99
URB: 523446/4993 uS/qty: 104.836 avg 132.532 std 13.230 min 831.094 max (uS)
header: 76564/4993 uS/qty: 15.334 avg 15.229 std 3.438 min 186.875 max (uS)
latency: 468945/4992 uS/qty: 93.939 avg 132.577 std 9.531 min 824.010 max (uS)
decode: 54161/4993 uS/qty: 10.847 avg 6.313 std 1.614 min 111.458 max (uS)
raw decode speed: 9.931 Gbits/s
raw URB handling speed: 1.025 Gbits/s
throughput: 16.102 Mbits/s
URB decode CPU usage 0.162600 %

COHERENT
frames: 999
packets: 999
empty: 0 (0 %)
errors: 0
invalid: 0
pts: 0 early, 0 initial, 999 ok
scr: 0 count ok, 0 diff ok
sof: 2048 <= sof <= 0, freq 0.000 kHz
bytes 54683536 : duration 33302
FPS: 29.99
URB: 1478135/4000 uS/qty: 369.533 avg 390.357 std 22.968 min 3337.865 max (uS)
header: 79761/4000 uS/qty: 19.940 avg 18.495 std 1.875 min 336.719 max (uS)
latency: 281077/4000 uS/qty: 70.269 avg 83.102 std 5.104 min 735.000 max (uS)
decode: 1197057/4000 uS/qty: 299.264 avg 318.080 std 1.615 min 2806.667 max (uS)
raw decode speed: 365.470 Mbits/s
raw URB handling speed: 295.986 Mbits/s
throughput: 13.136 Mbits/s
URB decode CPU usage 3.594500 %

Signed-off-by: Ricardo Ribalda <[email protected]>
Reviewed-by: Tomasz Figa <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
---
drivers/media/usb/uvc/uvc_video.c | 79 ++++++++++++++++++++++---------
drivers/media/usb/uvc/uvcvideo.h | 4 +-
2 files changed, 60 insertions(+), 23 deletions(-)

diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
index f2f565281e63ff..d008c68fb6c806 100644
--- a/drivers/media/usb/uvc/uvc_video.c
+++ b/drivers/media/usb/uvc/uvc_video.c
@@ -6,11 +6,13 @@
* Laurent Pinchart ([email protected])
*/

+#include <linux/highmem.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/usb.h>
+#include <linux/usb/hcd.h>
#include <linux/videodev2.h>
#include <linux/vmalloc.h>
#include <linux/wait.h>
@@ -1096,6 +1098,26 @@ static int uvc_video_decode_start(struct uvc_streaming *stream,
return data[0];
}

+static inline struct device *stream_to_dmadev(struct uvc_streaming *stream)
+{
+ return bus_to_hcd(stream->dev->udev->bus)->self.sysdev;
+}
+
+static void uvc_urb_dma_sync(struct uvc_urb *uvc_urb, bool for_device)
+{
+ struct device *dma_dev = stream_to_dmadev(uvc_urb->stream);
+
+ if (for_device) {
+ dma_sync_sgtable_for_device(dma_dev, uvc_urb->sgt,
+ DMA_FROM_DEVICE);
+ } else {
+ dma_sync_sgtable_for_cpu(dma_dev, uvc_urb->sgt,
+ DMA_FROM_DEVICE);
+ invalidate_kernel_vmap_range(uvc_urb->buffer,
+ uvc_urb->stream->urb_size);
+ }
+}
+
/*
* uvc_video_decode_data_work: Asynchronous memcpy processing
*
@@ -1117,6 +1139,8 @@ static void uvc_video_copy_data_work(struct work_struct *work)
uvc_queue_buffer_release(op->buf);
}

+ uvc_urb_dma_sync(uvc_urb, true);
+
ret = usb_submit_urb(uvc_urb->urb, GFP_KERNEL);
if (ret < 0)
dev_err(&uvc_urb->stream->intf->dev,
@@ -1541,10 +1565,12 @@ static void uvc_video_complete(struct urb *urb)
* Process the URB headers, and optionally queue expensive memcpy tasks
* to be deferred to a work queue.
*/
+ uvc_urb_dma_sync(uvc_urb, false);
stream->decode(uvc_urb, buf, buf_meta);

/* If no async work is needed, resubmit the URB immediately. */
if (!uvc_urb->async_operations) {
+ uvc_urb_dma_sync(uvc_urb, true);
ret = usb_submit_urb(uvc_urb->urb, GFP_ATOMIC);
if (ret < 0)
dev_err(&stream->intf->dev,
@@ -1560,24 +1586,46 @@ static void uvc_video_complete(struct urb *urb)
*/
static void uvc_free_urb_buffers(struct uvc_streaming *stream)
{
+ struct device *dma_dev = stream_to_dmadev(stream);
struct uvc_urb *uvc_urb;

for_each_uvc_urb(uvc_urb, stream) {
if (!uvc_urb->buffer)
continue;

-#ifndef CONFIG_DMA_NONCOHERENT
- usb_free_coherent(stream->dev->udev, stream->urb_size,
- uvc_urb->buffer, uvc_urb->dma);
-#else
- kfree(uvc_urb->buffer);
-#endif
+ dma_vunmap_noncontiguous(dma_dev, uvc_urb->buffer);
+ dma_free_noncontiguous(dma_dev, stream->urb_size, uvc_urb->sgt,
+ DMA_FROM_DEVICE);
+
uvc_urb->buffer = NULL;
}

stream->urb_size = 0;
}

+static bool uvc_alloc_urb_buffer(struct uvc_streaming *stream,
+ struct uvc_urb *uvc_urb, gfp_t gfp_flags)
+{
+ struct device *dma_dev = stream_to_dmadev(stream);
+
+
+ uvc_urb->sgt = dma_alloc_noncontiguous(dma_dev, stream->urb_size,
+ DMA_FROM_DEVICE, gfp_flags, 0);
+ if (!uvc_urb->sgt)
+ return false;
+ uvc_urb->dma = uvc_urb->sgt->sgl->dma_address;
+
+ uvc_urb->buffer = dma_vmap_noncontiguous(dma_dev, stream->urb_size,
+ uvc_urb->sgt);
+ if (!uvc_urb->buffer) {
+ dma_free_noncontiguous(dma_dev, stream->urb_size,
+ uvc_urb->sgt, DMA_FROM_DEVICE);
+ return false;
+ }
+
+ return true;
+}
+
/*
* Allocate transfer buffers. This function can be called with buffers
* already allocated when resuming from suspend, in which case it will
@@ -1608,19 +1656,11 @@ static int uvc_alloc_urb_buffers(struct uvc_streaming *stream,

/* Retry allocations until one succeed. */
for (; npackets > 1; npackets /= 2) {
+ stream->urb_size = psize * npackets;
for (i = 0; i < UVC_URBS; ++i) {
struct uvc_urb *uvc_urb = &stream->uvc_urb[i];

- stream->urb_size = psize * npackets;
-#ifndef CONFIG_DMA_NONCOHERENT
- uvc_urb->buffer = usb_alloc_coherent(
- stream->dev->udev, stream->urb_size,
- gfp_flags | __GFP_NOWARN, &uvc_urb->dma);
-#else
- uvc_urb->buffer =
- kmalloc(stream->urb_size, gfp_flags | __GFP_NOWARN);
-#endif
- if (!uvc_urb->buffer) {
+ if (!uvc_alloc_urb_buffer(stream, uvc_urb, gfp_flags)) {
uvc_free_urb_buffers(stream);
break;
}
@@ -1730,12 +1770,8 @@ static int uvc_init_video_isoc(struct uvc_streaming *stream,
urb->context = uvc_urb;
urb->pipe = usb_rcvisocpipe(stream->dev->udev,
ep->desc.bEndpointAddress);
-#ifndef CONFIG_DMA_NONCOHERENT
urb->transfer_flags = URB_ISO_ASAP | URB_NO_TRANSFER_DMA_MAP;
urb->transfer_dma = uvc_urb->dma;
-#else
- urb->transfer_flags = URB_ISO_ASAP;
-#endif
urb->interval = ep->desc.bInterval;
urb->transfer_buffer = uvc_urb->buffer;
urb->complete = uvc_video_complete;
@@ -1795,10 +1831,8 @@ static int uvc_init_video_bulk(struct uvc_streaming *stream,

usb_fill_bulk_urb(urb, stream->dev->udev, pipe, uvc_urb->buffer,
size, uvc_video_complete, uvc_urb);
-#ifndef CONFIG_DMA_NONCOHERENT
urb->transfer_flags = URB_NO_TRANSFER_DMA_MAP;
urb->transfer_dma = uvc_urb->dma;
-#endif

uvc_urb->urb = urb;
}
@@ -1895,6 +1929,7 @@ static int uvc_video_start_transfer(struct uvc_streaming *stream,

/* Submit the URBs. */
for_each_uvc_urb(uvc_urb, stream) {
+ uvc_urb_dma_sync(uvc_urb, true);
ret = usb_submit_urb(uvc_urb->urb, gfp_flags);
if (ret < 0) {
dev_err(&stream->intf->dev,
diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
index 97df5ecd66c9a7..fec24f1eca1c96 100644
--- a/drivers/media/usb/uvc/uvcvideo.h
+++ b/drivers/media/usb/uvc/uvcvideo.h
@@ -545,7 +545,8 @@ struct uvc_copy_op {
* @urb: the URB described by this context structure
* @stream: UVC streaming context
* @buffer: memory storage for the URB
- * @dma: DMA coherent addressing for the urb_buffer
+ * @dma: Allocated DMA handle
+ * @sgt: sgt_table with the urb locations in memory
* @async_operations: counter to indicate the number of copy operations
* @copy_operations: work descriptors for asynchronous copy operations
* @work: work queue entry for asynchronous decode
@@ -556,6 +557,7 @@ struct uvc_urb {

char *buffer;
dma_addr_t dma;
+ struct sg_table *sgt;

unsigned int async_operations;
struct uvc_copy_op copy_operations[UVC_MAX_PACKETS];
--
2.29.2
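
For reference, the buffer hand-offs the patch enforces on the capture path
condense to the following sequence (names taken from the patch above; a
summary sketch, not additional driver code):

        /* 1. URB completes: device -> CPU hand-off before parsing. */
        dma_sync_sgtable_for_cpu(dma_dev, uvc_urb->sgt, DMA_FROM_DEVICE);
        invalidate_kernel_vmap_range(uvc_urb->buffer, stream->urb_size);

        /* 2. CPU parses headers and copies payload (maybe in a work item). */
        stream->decode(uvc_urb, buf, buf_meta);

        /* 3. CPU -> device hand-off before the URB is resubmitted. */
        dma_sync_sgtable_for_device(dma_dev, uvc_urb->sgt, DMA_FROM_DEVICE);
        usb_submit_urb(uvc_urb->urb, GFP_KERNEL);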

2021-03-11 16:56:49

by Christoph Hellwig

Subject: Re: add a new dma_alloc_noncontiguous API v3

Any comments? Especially on the uvcvideo conversion?

On Mon, Mar 01, 2021 at 09:52:30AM +0100, Christoph Hellwig wrote:
> Hi all,
>
> this series adds the new noncontiguous DMA allocation API requested by
> various media driver maintainers.
>
> Changes since v2:
> - rebased to Linux 5.12-rc1
> - dropped one already merged patch
> - pass an attrs argument to dma_alloc_noncontiguous
> - clarify the dma_vmap_noncontiguous documentation a bit
> - fix double assignments in uvcvideo
>
> Changes since v1:
> - document that flush_kernel_vmap_range and invalidate_kernel_vmap_range
> must be called once an allocation is mapped into KVA
> - add dma-debug support
> - remove the separate dma_handle argument, and instead create fully formed
> DMA mapped scatterlists
> - use a directional allocation in uvcvideo
> - call invalidate_kernel_vmap_range from uvcvideo
---end quoted text---

2021-03-11 17:00:28

by Ricardo Ribalda

Subject: Re: add a new dma_alloc_noncontiguous API v3

Hi Christoph

I tried to run it on an arm device and it worked fine.


On Thu, Mar 11, 2021 at 5:52 PM Christoph Hellwig <[email protected]> wrote:
>
> Any comments? Especially on the uvcvideo conversion?
>
> On Mon, Mar 01, 2021 at 09:52:30AM +0100, Christoph Hellwig wrote:
> > Hi all,
> >
> > this series adds the new noncontiguous DMA allocation API requested by
> > various media driver maintainers.
> >
> > Changes since v2:
> > - rebased to Linux 5.12-rc1
> > - dropped one already merged patch
> > - pass an attrs argument to dma_alloc_noncontiguous
> > - clarify the dma_vmap_noncontiguous documentation a bit
> > - fix double assignments in uvcvideo
> >
> > Changes since v1:
> > - document that flush_kernel_vmap_range and invalidate_kernel_vmap_range
> > must be called once an allocation is mapped into KVA
> > - add dma-debug support
> > - remove the separate dma_handle argument, and instead create fully formed
> > DMA mapped scatterlists
> > - use a directional allocation in uvcvideo
> > - call invalidate_kernel_vmap_range from uvcvideo
> ---end quoted text---



--
Ricardo Ribalda

2021-03-12 01:45:52

by Laurent Pinchart

Subject: Re: [PATCH 6/6] media: uvcvideo: Use dma_alloc_noncontiguos API

And I forgot to mention:

On Fri, Mar 12, 2021 at 03:42:14AM +0200, Laurent Pinchart wrote:
> Hi Christoph and Ricardo,
>
> Thank you for the patch.
>
> On Mon, Mar 01, 2021 at 09:52:36AM +0100, Christoph Hellwig wrote:
> > From: Ricardo Ribalda <[email protected]>
> >
> > On architectures where the is no coherent caching such as ARM use the
>
> s/the is/there is/
>
> > dma_alloc_noncontiguos API and handle manually the cache flushing using
>
> s/dma_alloc_noncontiguos/dma_alloc_noncontiguous/
>
> (and in the subject line too)
>
> > dma_sync_sgtable().
> >
> > With this patch on the affected architectures we can measure up to 20x
> > performance improvement in uvc_video_copy_data_work().

Wow, great work ! :-)

[...]

--
Regards,

Laurent Pinchart

2021-03-12 01:46:24

by Laurent Pinchart

Subject: Re: [PATCH 6/6] media: uvcvideo: Use dma_alloc_noncontiguos API

Hi Christoph and Ricardo,

Thank you for the patch.

On Mon, Mar 01, 2021 at 09:52:36AM +0100, Christoph Hellwig wrote:
> From: Ricardo Ribalda <[email protected]>
>
> On architectures where the is no coherent caching such as ARM use the

s/the is/there is/

> dma_alloc_noncontiguos API and handle manually the cache flushing using

s/dma_alloc_noncontiguos/dma_alloc_noncontiguous/

(and in the subject line too)

> dma_sync_sgtable().
>
> With this patch on the affected architectures we can measure up to 20x
> performance improvement in uvc_video_copy_data_work().
>
> Eg: aarch64 with an external usb camera
>
> NON_CONTIGUOUS
> frames: 999
> packets: 999
> empty: 0 (0 %)
> errors: 0
> invalid: 0
> pts: 0 early, 0 initial, 999 ok
> scr: 0 count ok, 0 diff ok
> sof: 2048 <= sof <= 0, freq 0.000 kHz
> bytes 67034480 : duration 33303
> FPS: 29.99
> URB: 523446/4993 uS/qty: 104.836 avg 132.532 std 13.230 min 831.094 max (uS)
> header: 76564/4993 uS/qty: 15.334 avg 15.229 std 3.438 min 186.875 max (uS)
> latency: 468945/4992 uS/qty: 93.939 avg 132.577 std 9.531 min 824.010 max (uS)
> decode: 54161/4993 uS/qty: 10.847 avg 6.313 std 1.614 min 111.458 max (uS)
> raw decode speed: 9.931 Gbits/s
> raw URB handling speed: 1.025 Gbits/s
> throughput: 16.102 Mbits/s
> URB decode CPU usage 0.162600 %
>
> COHERENT
> frames: 999
> packets: 999
> empty: 0 (0 %)
> errors: 0
> invalid: 0
> pts: 0 early, 0 initial, 999 ok
> scr: 0 count ok, 0 diff ok
> sof: 2048 <= sof <= 0, freq 0.000 kHz
> bytes 54683536 : duration 33302
> FPS: 29.99
> URB: 1478135/4000 uS/qty: 369.533 avg 390.357 std 22.968 min 3337.865 max (uS)
> header: 79761/4000 uS/qty: 19.940 avg 18.495 std 1.875 min 336.719 max (uS)
> latency: 281077/4000 uS/qty: 70.269 avg 83.102 std 5.104 min 735.000 max (uS)
> decode: 1197057/4000 uS/qty: 299.264 avg 318.080 std 1.615 min 2806.667 max (uS)
> raw decode speed: 365.470 Mbits/s
> raw URB handling speed: 295.986 Mbits/s
> throughput: 13.136 Mbits/s
> URB decode CPU usage 3.594500 %
>
> Signed-off-by: Ricardo Ribalda <[email protected]>
> Reviewed-by: Tomasz Figa <[email protected]>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> drivers/media/usb/uvc/uvc_video.c | 79 ++++++++++++++++++++++---------
> drivers/media/usb/uvc/uvcvideo.h | 4 +-
> 2 files changed, 60 insertions(+), 23 deletions(-)
>
> diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
> index f2f565281e63ff..d008c68fb6c806 100644
> --- a/drivers/media/usb/uvc/uvc_video.c
> +++ b/drivers/media/usb/uvc/uvc_video.c
> @@ -6,11 +6,13 @@
> * Laurent Pinchart ([email protected])
> */
>

Should we include <linux/dma-mapping.h> ?

> +#include <linux/highmem.h>
> #include <linux/kernel.h>
> #include <linux/list.h>
> #include <linux/module.h>
> #include <linux/slab.h>
> #include <linux/usb.h>
> +#include <linux/usb/hcd.h>
> #include <linux/videodev2.h>
> #include <linux/vmalloc.h>
> #include <linux/wait.h>
> @@ -1096,6 +1098,26 @@ static int uvc_video_decode_start(struct uvc_streaming *stream,
> return data[0];
> }
>
> +static inline struct device *stream_to_dmadev(struct uvc_streaming *stream)
> +{
> + return bus_to_hcd(stream->dev->udev->bus)->self.sysdev;
> +}
> +
> +static void uvc_urb_dma_sync(struct uvc_urb *uvc_urb, bool for_device)
> +{
> + struct device *dma_dev = stream_to_dmadev(uvc_urb->stream);
> +
> + if (for_device) {
> + dma_sync_sgtable_for_device(dma_dev, uvc_urb->sgt,
> + DMA_FROM_DEVICE);

The uvcvideo driver also supports video output devices (they are fairly
rare, but they exist). We thus need to handle DMA_TO_DEVICE too.

> + } else {
> + dma_sync_sgtable_for_cpu(dma_dev, uvc_urb->sgt,
> + DMA_FROM_DEVICE);
> + invalidate_kernel_vmap_range(uvc_urb->buffer,
> + uvc_urb->stream->urb_size);
> + }
> +}
> +
> /*
> * uvc_video_decode_data_work: Asynchronous memcpy processing
> *
> @@ -1117,6 +1139,8 @@ static void uvc_video_copy_data_work(struct work_struct *work)
> uvc_queue_buffer_release(op->buf);
> }
>
> + uvc_urb_dma_sync(uvc_urb, true);
> +
> ret = usb_submit_urb(uvc_urb->urb, GFP_KERNEL);
> if (ret < 0)
> dev_err(&uvc_urb->stream->intf->dev,
> @@ -1541,10 +1565,12 @@ static void uvc_video_complete(struct urb *urb)
> * Process the URB headers, and optionally queue expensive memcpy tasks
> * to be deferred to a work queue.
> */
> + uvc_urb_dma_sync(uvc_urb, false);
> stream->decode(uvc_urb, buf, buf_meta);
>
> /* If no async work is needed, resubmit the URB immediately. */
> if (!uvc_urb->async_operations) {
> + uvc_urb_dma_sync(uvc_urb, true);
> ret = usb_submit_urb(uvc_urb->urb, GFP_ATOMIC);
> if (ret < 0)
> dev_err(&stream->intf->dev,
> @@ -1560,24 +1586,46 @@ static void uvc_video_complete(struct urb *urb)
> */
> static void uvc_free_urb_buffers(struct uvc_streaming *stream)
> {
> + struct device *dma_dev = stream_to_dmadev(stream);
> struct uvc_urb *uvc_urb;
>
> for_each_uvc_urb(uvc_urb, stream) {
> if (!uvc_urb->buffer)
> continue;
>
> -#ifndef CONFIG_DMA_NONCOHERENT
> - usb_free_coherent(stream->dev->udev, stream->urb_size,
> - uvc_urb->buffer, uvc_urb->dma);
> -#else
> - kfree(uvc_urb->buffer);
> -#endif
> + dma_vunmap_noncontiguous(dma_dev, uvc_urb->buffer);
> + dma_free_noncontiguous(dma_dev, stream->urb_size, uvc_urb->sgt,
> + DMA_FROM_DEVICE);
> +
> uvc_urb->buffer = NULL;

Maybe also

uvc_urb->sgt = NULL;

? It's not strictly mandatory, but may make use-after-free bugs easier
to spot.

> }
>
> stream->urb_size = 0;
> }
>
> +static bool uvc_alloc_urb_buffer(struct uvc_streaming *stream,
> + struct uvc_urb *uvc_urb, gfp_t gfp_flags)
> +{
> + struct device *dma_dev = stream_to_dmadev(stream);
> +
> +

Extra blank line.

> + uvc_urb->sgt = dma_alloc_noncontiguous(dma_dev, stream->urb_size,
> + DMA_FROM_DEVICE, gfp_flags, 0);
> + if (!uvc_urb->sgt)
> + return false;
> + uvc_urb->dma = uvc_urb->sgt->sgl->dma_address;
> +
> + uvc_urb->buffer = dma_vmap_noncontiguous(dma_dev, stream->urb_size,
> + uvc_urb->sgt);
> + if (!uvc_urb->buffer) {
> + dma_free_noncontiguous(dma_dev, stream->urb_size,
> + uvc_urb->sgt, DMA_FROM_DEVICE);

Same here,

uvc_urb->sgt = NULL;

> + return false;
> + }
> +
> + return true;
> +}
> +
> /*
> * Allocate transfer buffers. This function can be called with buffers
> * already allocated when resuming from suspend, in which case it will
> @@ -1608,19 +1656,11 @@ static int uvc_alloc_urb_buffers(struct uvc_streaming *stream,
>
> /* Retry allocations until one succeed. */
> for (; npackets > 1; npackets /= 2) {
> + stream->urb_size = psize * npackets;

A blank line here would be nice.

> for (i = 0; i < UVC_URBS; ++i) {
> struct uvc_urb *uvc_urb = &stream->uvc_urb[i];
>
> - stream->urb_size = psize * npackets;
> -#ifndef CONFIG_DMA_NONCOHERENT
> - uvc_urb->buffer = usb_alloc_coherent(
> - stream->dev->udev, stream->urb_size,
> - gfp_flags | __GFP_NOWARN, &uvc_urb->dma);
> -#else
> - uvc_urb->buffer =
> - kmalloc(stream->urb_size, gfp_flags | __GFP_NOWARN);
> -#endif
> - if (!uvc_urb->buffer) {
> + if (!uvc_alloc_urb_buffer(stream, uvc_urb, gfp_flags)) {
> uvc_free_urb_buffers(stream);
> break;
> }
> @@ -1730,12 +1770,8 @@ static int uvc_init_video_isoc(struct uvc_streaming *stream,
> urb->context = uvc_urb;
> urb->pipe = usb_rcvisocpipe(stream->dev->udev,
> ep->desc.bEndpointAddress);
> -#ifndef CONFIG_DMA_NONCOHERENT
> urb->transfer_flags = URB_ISO_ASAP | URB_NO_TRANSFER_DMA_MAP;
> urb->transfer_dma = uvc_urb->dma;
> -#else
> - urb->transfer_flags = URB_ISO_ASAP;
> -#endif
> urb->interval = ep->desc.bInterval;
> urb->transfer_buffer = uvc_urb->buffer;
> urb->complete = uvc_video_complete;
> @@ -1795,10 +1831,8 @@ static int uvc_init_video_bulk(struct uvc_streaming *stream,
>
> usb_fill_bulk_urb(urb, stream->dev->udev, pipe, uvc_urb->buffer,
> size, uvc_video_complete, uvc_urb);
> -#ifndef CONFIG_DMA_NONCOHERENT
> urb->transfer_flags = URB_NO_TRANSFER_DMA_MAP;
> urb->transfer_dma = uvc_urb->dma;
> -#endif
>
> uvc_urb->urb = urb;
> }
> @@ -1895,6 +1929,7 @@ static int uvc_video_start_transfer(struct uvc_streaming *stream,
>
> /* Submit the URBs. */
> for_each_uvc_urb(uvc_urb, stream) {
> + uvc_urb_dma_sync(uvc_urb, true);
> ret = usb_submit_urb(uvc_urb->urb, gfp_flags);
> if (ret < 0) {
> dev_err(&stream->intf->dev,
> diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
> index 97df5ecd66c9a7..fec24f1eca1c96 100644
> --- a/drivers/media/usb/uvc/uvcvideo.h
> +++ b/drivers/media/usb/uvc/uvcvideo.h
> @@ -545,7 +545,8 @@ struct uvc_copy_op {
> * @urb: the URB described by this context structure
> * @stream: UVC streaming context
> * @buffer: memory storage for the URB
> - * @dma: DMA coherent addressing for the urb_buffer
> + * @dma: Allocated DMA handle
> + * @sgt: sgt_table with the urb locations in memory
> * @async_operations: counter to indicate the number of copy operations
> * @copy_operations: work descriptors for asynchronous copy operations
> * @work: work queue entry for asynchronous decode
> @@ -556,6 +557,7 @@ struct uvc_urb {
>
> char *buffer;
> dma_addr_t dma;
> + struct sg_table *sgt;

A forward declaration of struct sg_table with the other forward
declarations would be useful.

>
> unsigned int async_operations;
> struct uvc_copy_op copy_operations[UVC_MAX_PACKETS];

--
Regards,

Laurent Pinchart
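
One possible shape for the direction handling requested above would be to
derive it from the stream type (an untested sketch, not part of the posted
patch):

        static enum dma_data_direction uvc_stream_dir(struct uvc_streaming *stream)
        {
                if (stream->type == V4L2_BUF_TYPE_VIDEO_CAPTURE)
                        return DMA_FROM_DEVICE;
                return DMA_TO_DEVICE;
        }

The allocation, free and sync calls would then take uvc_stream_dir(stream)
instead of hardcoding DMA_FROM_DEVICE.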

2023-06-29 17:37:20

by Robin Murphy

Subject: Re: [PATCH 3/6] dma-mapping: add a dma_alloc_noncontiguous API

[Archaeology ensues...]

On 2021-03-01 08:52, Christoph Hellwig wrote:
[...]
> +static struct sg_table *alloc_single_sgt(struct device *dev, size_t size,
> + enum dma_data_direction dir, gfp_t gfp)
> +{
> + struct sg_table *sgt;
> + struct page *page;
> +
> + sgt = kmalloc(sizeof(*sgt), gfp);
> + if (!sgt)
> + return NULL;
> + if (sg_alloc_table(sgt, 1, gfp))
> + goto out_free_sgt;
> + page = __dma_alloc_pages(dev, size, &sgt->sgl->dma_address, dir, gfp);
> + if (!page)
> + goto out_free_table;
> + sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
> + sg_dma_len(sgt->sgl) = sgt->sgl->length;
> + return sgt;
> +out_free_table:
> + sg_free_table(sgt);
> +out_free_sgt:
> + kfree(sgt);
> + return NULL;
> +}
> +
> +struct sg_table *dma_alloc_noncontiguous(struct device *dev, size_t size,
> + enum dma_data_direction dir, gfp_t gfp, unsigned long attrs)
> +{
> + const struct dma_map_ops *ops = get_dma_ops(dev);
> + struct sg_table *sgt;
> +
> + if (WARN_ON_ONCE(attrs & ~DMA_ATTR_ALLOC_SINGLE_PAGES))
> + return NULL;
> +
> + if (ops && ops->alloc_noncontiguous)
> + sgt = ops->alloc_noncontiguous(dev, size, dir, gfp, attrs);
> + else
> + sgt = alloc_single_sgt(dev, size, dir, gfp);
> +
> + if (sgt) {
> + sgt->nents = 1;
> + debug_dma_map_sg(dev, sgt->sgl, sgt->orig_nents, 1, dir);

It turns out this is liable to trip up DMA_API_DEBUG_SG (potentially
even in the alloc_single_sgt() case), since we've filled in sgt without
paying attention to the device's segment boundary/size parameters.

Now, it would be entirely possible to make the allocators "properly"
partition the pages into multiple segments per those constraints, but
given that there's no actual dma_map_sg() operation involved, and AFAIR
the intent here is really only to describe a single DMA-contiguous
buffer as pages, rather than represent a true scatter-gather operation,
I'm now wondering whether it makes more sense to just make dma-debug a
bit cleverer instead. Any other opinions?

Thanks,
Robin.

> + }
> + return sgt;
> +}

2023-07-05 11:44:16

by Tomasz Figa

Subject: Re: [PATCH 3/6] dma-mapping: add a dma_alloc_noncontiguous API

On Fri, Jun 30, 2023 at 2:21 AM Robin Murphy <[email protected]> wrote:
>
> [Archaeology ensues...]
>
> On 2021-03-01 08:52, Christoph Hellwig wrote:
> [...]
> > +static struct sg_table *alloc_single_sgt(struct device *dev, size_t size,
> > + enum dma_data_direction dir, gfp_t gfp)
> > +{
> > + struct sg_table *sgt;
> > + struct page *page;
> > +
> > + sgt = kmalloc(sizeof(*sgt), gfp);
> > + if (!sgt)
> > + return NULL;
> > + if (sg_alloc_table(sgt, 1, gfp))
> > + goto out_free_sgt;
> > + page = __dma_alloc_pages(dev, size, &sgt->sgl->dma_address, dir, gfp);
> > + if (!page)
> > + goto out_free_table;
> > + sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
> > + sg_dma_len(sgt->sgl) = sgt->sgl->length;
> > + return sgt;
> > +out_free_table:
> > + sg_free_table(sgt);
> > +out_free_sgt:
> > + kfree(sgt);
> > + return NULL;
> > +}
> > +
> > +struct sg_table *dma_alloc_noncontiguous(struct device *dev, size_t size,
> > + enum dma_data_direction dir, gfp_t gfp, unsigned long attrs)
> > +{
> > + const struct dma_map_ops *ops = get_dma_ops(dev);
> > + struct sg_table *sgt;
> > +
> > + if (WARN_ON_ONCE(attrs & ~DMA_ATTR_ALLOC_SINGLE_PAGES))
> > + return NULL;
> > +
> > + if (ops && ops->alloc_noncontiguous)
> > + sgt = ops->alloc_noncontiguous(dev, size, dir, gfp, attrs);
> > + else
> > + sgt = alloc_single_sgt(dev, size, dir, gfp);
> > +
> > + if (sgt) {
> > + sgt->nents = 1;
> > + debug_dma_map_sg(dev, sgt->sgl, sgt->orig_nents, 1, dir);
>
> It turns out this is liable to trip up DMA_API_DEBUG_SG (potentially
> even in the alloc_single_sgt() case), since we've filled in sgt without
> paying attention to the device's segment boundary/size parameters.
>
> Now, it would be entirely possible to make the allocators "properly"
> partition the pages into multiple segments per those constraints, but
> given that there's no actual dma_map_sg() operation involved, and AFAIR
> the intent here is really only to describe a single DMA-contiguous
> buffer as pages, rather than represent a true scatter-gather operation,

Yeah, the name noncontiguous comes from potentially allocating
non-contiguous physical pages, which, based on feedback from a few people
I talked with, ended up being quite confusing, but I can't really think
of a better name either.

Do we know how common devices with segment boundary/size constraints
are and how likely they are to use this API?

> I'm now wondering whether it makes more sense to just make dma-debug a
> bit cleverer instead. Any other opinions?

If we could assume that drivers for those devices shouldn't use this
API, we could just fail if the segment boundary/size are set to
something other than unlimited.

Best regards,
Tomasz

>
> Thanks,
> Robin.
>
> > + }
> > + return sgt;
> > +}
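
A guard along the lines suggested above could sit at the top of
dma_alloc_noncontiguous(), next to the existing attrs check (a hypothetical
sketch; note the "unlimited" defaults are simplified here):

        /* Hypothetical: refuse the single-DMA-segment API for devices
         * that declare segment constraints, instead of handing dma-debug
         * an sgt that violates them.
         */
        if (WARN_ON_ONCE(dma_get_seg_boundary(dev) != ULONG_MAX ||
                         dma_get_max_seg_size(dev) < size))
                return NULL;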