2020-09-25 14:15:28

by Marek Szyprowski

Subject: [PATCH 0/8] IOMMU-DMA - support old allocation algorithm used on ARM

Hi,

This patchset is a continuation of the planned rework of the ARM
IOMMU/DMA-mapping code proposed by Robin Murphy in [1]. However, there
are drivers (for example S5P-MFC and Exynos4-IS) which depend on the way
the old ARM IOMMU/DMA-mapping glue code worked (it used the 'first-fit' IOVA
allocation algorithm), so before switching ARM to the generic code, such
drivers have to be updated.

This patchset provides the needed extensions to the generic IOMMU-DMA
framework to enable support for the drivers that relied on the old ARM
IOMMU/DMA-mapping behavior. It is based on the idea proposed by Robin
Murphy in [2] after the discussion of the workaround implemented directly
in the mentioned drivers [3].
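
For the affected drivers, the visible change is a single DMA attribute at
allocation time. A minimal usage sketch (assuming the DMA_ATTR_LOW_ADDRESS
attribute introduced in patch 1; variables are placeholders):

	vaddr = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL,
				DMA_ATTR_LOW_ADDRESS);
	if (!vaddr)
		return -ENOMEM;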

Here is a git branch with this patchset and [1] patches applied on top of
linux next-20200925:
https://github.com/mszyprow/linux/tree/v5.9-next-20200925-arm-dma-iommu-low-address

Best regards,
Marek Szyprowski


References:

[1] https://lore.kernel.org/lkml/[email protected]/
[2] https://lore.kernel.org/linux-iommu/[email protected]/
[3] https://lore.kernel.org/linux-samsung-soc/[email protected]/T/


Patch summary:

Marek Szyprowski (8):
dma-mapping: add DMA_ATTR_LOW_ADDRESS attribute
iommu: iova: properly handle 0 as a valid IOVA address
iommu: iova: add support for 'first-fit' algorithm
iommu: dma-iommu: refactor iommu_dma_alloc_iova()
iommu: dma-iommu: add support for DMA_ATTR_LOW_ADDRESS
media: platform: exynos4-is: remove all references to physical
addresses
media: platform: exynos4-is: use DMA_ATTR_LOW_ADDRESS
media: platform: s5p-mfc: use DMA_ATTR_LOW_ADDRESS

drivers/iommu/dma-iommu.c | 79 ++++++++++++-----
drivers/iommu/intel/iommu.c | 12 +--
drivers/iommu/iova.c | 88 ++++++++++++++++++-
.../media/platform/exynos4-is/fimc-capture.c | 6 +-
drivers/media/platform/exynos4-is/fimc-core.c | 28 +++---
drivers/media/platform/exynos4-is/fimc-core.h | 18 ++--
drivers/media/platform/exynos4-is/fimc-is.c | 23 ++---
drivers/media/platform/exynos4-is/fimc-is.h | 6 +-
.../media/platform/exynos4-is/fimc-lite-reg.c | 4 +-
drivers/media/platform/exynos4-is/fimc-lite.c | 2 +-
drivers/media/platform/exynos4-is/fimc-lite.h | 4 +-
drivers/media/platform/exynos4-is/fimc-m2m.c | 8 +-
drivers/media/platform/exynos4-is/fimc-reg.c | 18 ++--
drivers/media/platform/exynos4-is/fimc-reg.h | 4 +-
drivers/media/platform/s5p-mfc/s5p_mfc.c | 8 +-
include/linux/dma-mapping.h | 6 ++
include/linux/iova.h | 4 +
17 files changed, 221 insertions(+), 97 deletions(-)

--
2.17.1


2020-09-25 14:15:54

by Marek Szyprowski

Subject: [PATCH 2/8] iommu: iova: properly handle 0 as a valid IOVA address

Zero is a valid DMA and IOVA address on many architectures, so adjust the
IOVA management code to properly handle it. A new value IOVA_BAD_ADDR
(~0UL) is introduced as a generic value for the error case. Adjust all
callers of the alloc_iova_fast() function for the new return value.

Signed-off-by: Marek Szyprowski <[email protected]>
---
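
A minimal caller-side sketch of the new convention (placeholder variables,
not taken from any particular caller); note that 0 may now come back as a
perfectly valid pfn:

	pfn = alloc_iova_fast(iovad, nrpages, limit_pfn, true);
	if (pfn == IOVA_BAD_ADDR)	/* previously: if (!pfn) */
		goto out_error;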
drivers/iommu/dma-iommu.c | 18 ++++++++++--------
drivers/iommu/intel/iommu.c | 12 ++++++------
drivers/iommu/iova.c | 10 ++++++----
include/linux/iova.h | 2 ++
4 files changed, 24 insertions(+), 18 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index cd6e3c70ebb3..91dd8f46dae1 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -405,7 +405,7 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
{
struct iommu_dma_cookie *cookie = domain->iova_cookie;
struct iova_domain *iovad = &cookie->iovad;
- unsigned long shift, iova_len, iova = 0;
+ unsigned long shift, iova_len, iova = IOVA_BAD_ADDR;

if (cookie->type == IOMMU_DMA_MSI_COOKIE) {
cookie->msi_iova += size;
@@ -433,11 +433,13 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
iova = alloc_iova_fast(iovad, iova_len,
DMA_BIT_MASK(32) >> shift, false);

- if (!iova)
+ if (iova == IOVA_BAD_ADDR)
iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift,
true);

- return (dma_addr_t)iova << shift;
+ if (iova != IOVA_BAD_ADDR)
+ return (dma_addr_t)iova << shift;
+ return DMA_MAPPING_ERROR;
}

static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
@@ -493,8 +495,8 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
size = iova_align(iovad, size + iova_off);

iova = iommu_dma_alloc_iova(domain, size, dma_mask, dev);
- if (!iova)
- return DMA_MAPPING_ERROR;
+ if (iova == DMA_MAPPING_ERROR)
+ return iova;

if (iommu_map_atomic(domain, iova, phys - iova_off, size, prot)) {
iommu_dma_free_iova(cookie, iova, size);
@@ -617,7 +619,7 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,

size = iova_align(iovad, size);
iova = iommu_dma_alloc_iova(domain, size, dev->coherent_dma_mask, dev);
- if (!iova)
+ if (iova == DMA_MAPPING_ERROR)
goto out_free_pages;

if (sg_alloc_table_from_pages(&sgt, pages, count, 0, size, GFP_KERNEL))
@@ -887,7 +889,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
}

iova = iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev);
- if (!iova)
+ if (iova == DMA_MAPPING_ERROR)
goto out_restore_sg;

/*
@@ -1181,7 +1183,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
return NULL;

iova = iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev);
- if (!iova)
+ if (iova == DMA_MAPPING_ERROR)
goto out_free_page;

if (iommu_map(domain, iova, msi_addr, size, prot))
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 00963cedfd83..885d0dee39cc 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -3416,15 +3416,15 @@ static unsigned long intel_alloc_iova(struct device *dev,
*/
iova_pfn = alloc_iova_fast(&domain->iovad, nrpages,
IOVA_PFN(DMA_BIT_MASK(32)), false);
- if (iova_pfn)
+ if (iova_pfn != IOVA_BAD_ADDR)
return iova_pfn;
}
iova_pfn = alloc_iova_fast(&domain->iovad, nrpages,
IOVA_PFN(dma_mask), true);
- if (unlikely(!iova_pfn)) {
+ if (unlikely(iova_pfn == IOVA_BAD_ADDR)) {
dev_err_once(dev, "Allocating %ld-page iova failed\n",
nrpages);
- return 0;
+ return IOVA_BAD_ADDR;
}

return iova_pfn;
@@ -3454,7 +3454,7 @@ static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr,
size = aligned_nrpages(paddr, size);

iova_pfn = intel_alloc_iova(dev, domain, dma_to_mm_pfn(size), dma_mask);
- if (!iova_pfn)
+ if (iova_pfn == IOVA_BAD_ADDR)
goto error;

/*
@@ -3663,7 +3663,7 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nele

iova_pfn = intel_alloc_iova(dev, domain, dma_to_mm_pfn(size),
*dev->dma_mask);
- if (!iova_pfn) {
+ if (iova_pfn == IOVA_BAD_ADDR) {
sglist->dma_length = 0;
return 0;
}
@@ -3760,7 +3760,7 @@ bounce_map_single(struct device *dev, phys_addr_t paddr, size_t size,
nrpages = aligned_nrpages(0, size);
iova_pfn = intel_alloc_iova(dev, domain,
dma_to_mm_pfn(nrpages), dma_mask);
- if (!iova_pfn)
+ if (iova_pfn == IOVA_BAD_ADDR)
return DMA_MAPPING_ERROR;

/*
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 30d969a4c5fd..87555ed1737a 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -407,6 +407,8 @@ EXPORT_SYMBOL_GPL(free_iova);
* This function tries to satisfy an iova allocation from the rcache,
* and falls back to regular allocation on failure. If regular allocation
* fails too and the flush_rcache flag is set then the rcache will be flushed.
+ * Returns the pfn at which the allocated iova starts, or IOVA_BAD_ADDR in
+ * case of failure.
*/
unsigned long
alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
@@ -416,7 +418,7 @@ alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
struct iova *new_iova;

iova_pfn = iova_rcache_get(iovad, size, limit_pfn + 1);
- if (iova_pfn)
+ if (iova_pfn != IOVA_BAD_ADDR)
return iova_pfn;

retry:
@@ -425,7 +427,7 @@ alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
unsigned int cpu;

if (!flush_rcache)
- return 0;
+ return IOVA_BAD_ADDR;

/* Try replenishing IOVAs by flushing rcache. */
flush_rcache = false;
@@ -956,7 +958,7 @@ static unsigned long __iova_rcache_get(struct iova_rcache *rcache,
unsigned long limit_pfn)
{
struct iova_cpu_rcache *cpu_rcache;
- unsigned long iova_pfn = 0;
+ unsigned long iova_pfn = IOVA_BAD_ADDR;
bool has_pfn = false;
unsigned long flags;

@@ -998,7 +1000,7 @@ static unsigned long iova_rcache_get(struct iova_domain *iovad,
unsigned int log_size = order_base_2(size);

if (log_size >= IOVA_RANGE_CACHE_MAX_SIZE)
- return 0;
+ return IOVA_BAD_ADDR;

return __iova_rcache_get(&iovad->rcaches[log_size], limit_pfn - size);
}
diff --git a/include/linux/iova.h b/include/linux/iova.h
index a0637abffee8..69737e6bcef6 100644
--- a/include/linux/iova.h
+++ b/include/linux/iova.h
@@ -22,6 +22,8 @@ struct iova {
unsigned long pfn_lo; /* Lowest allocated pfn */
};

+#define IOVA_BAD_ADDR (~0UL)
+
struct iova_magazine;
struct iova_cpu_rcache;

--
2.17.1

2020-09-25 14:16:04

by Marek Szyprowski

Subject: [PATCH 4/8] iommu: dma-iommu: refactor iommu_dma_alloc_iova()

Change the parameters passed to iommu_dma_alloc_iova(): the dma_limit can
be easily derived from the passed struct device, so replace it with a flags
parameter, which can later hold more information about the way the IOVA
allocator should do its job. While touching the parameter list, move struct
device to the second position to better match the convention of the
DMA-mapping related functions.

Signed-off-by: Marek Szyprowski <[email protected]>
---
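
An illustrative before/after of the call sites (condensed from the hunks
below):

	/* before: DMA limit passed explicitly */
	iova = iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev);

	/* after: limit derived from 'dev', behaviour selected via flags */
	iova = iommu_dma_alloc_iova(domain, dev, size, 0);
	iova = iommu_dma_alloc_iova(domain, dev, size, DMA_ALLOC_IOVA_COHERENT);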
drivers/iommu/dma-iommu.c | 23 +++++++++++++----------
1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 91dd8f46dae1..0ea87023306f 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -400,12 +400,16 @@ static int dma_info_to_prot(enum dma_data_direction dir, bool coherent,
}
}

+#define DMA_ALLOC_IOVA_COHERENT BIT(0)
+
static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
- size_t size, u64 dma_limit, struct device *dev)
+ struct device *dev, size_t size, unsigned int flags)
{
struct iommu_dma_cookie *cookie = domain->iova_cookie;
struct iova_domain *iovad = &cookie->iovad;
unsigned long shift, iova_len, iova = IOVA_BAD_ADDR;
+ u64 dma_limit = (flags & DMA_ALLOC_IOVA_COHERENT) ?
+ dev->coherent_dma_mask : dma_get_mask(dev);

if (cookie->type == IOMMU_DMA_MSI_COOKIE) {
cookie->msi_iova += size;
@@ -481,7 +485,7 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
}

static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
- size_t size, int prot, u64 dma_mask)
+ size_t size, int prot, unsigned int flags)
{
struct iommu_domain *domain = iommu_get_dma_domain(dev);
struct iommu_dma_cookie *cookie = domain->iova_cookie;
@@ -494,7 +498,7 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,

size = iova_align(iovad, size + iova_off);

- iova = iommu_dma_alloc_iova(domain, size, dma_mask, dev);
+ iova = iommu_dma_alloc_iova(domain, dev, size, flags);
if (iova == DMA_MAPPING_ERROR)
return iova;

@@ -618,7 +622,7 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
return NULL;

size = iova_align(iovad, size);
- iova = iommu_dma_alloc_iova(domain, size, dev->coherent_dma_mask, dev);
+ iova = iommu_dma_alloc_iova(domain, dev, size, DMA_ALLOC_IOVA_COHERENT);
if (iova == DMA_MAPPING_ERROR)
goto out_free_pages;

@@ -733,7 +737,7 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
int prot = dma_info_to_prot(dir, coherent, attrs);
dma_addr_t dma_handle;

- dma_handle = __iommu_dma_map(dev, phys, size, prot, dma_get_mask(dev));
+ dma_handle = __iommu_dma_map(dev, phys, size, prot, 0);
if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
dma_handle != DMA_MAPPING_ERROR)
arch_sync_dma_for_device(phys, size, dir);
@@ -888,7 +892,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
prev = s;
}

- iova = iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev);
+ iova = iommu_dma_alloc_iova(domain, dev, iova_len, 0);
if (iova == DMA_MAPPING_ERROR)
goto out_restore_sg;

@@ -936,8 +940,7 @@ static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
size_t size, enum dma_data_direction dir, unsigned long attrs)
{
return __iommu_dma_map(dev, phys, size,
- dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO,
- dma_get_mask(dev));
+ dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO, 0);
}

static void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle,
@@ -1045,7 +1048,7 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
return NULL;

*handle = __iommu_dma_map(dev, page_to_phys(page), size, ioprot,
- dev->coherent_dma_mask);
+ DMA_ALLOC_IOVA_COHERENT);
if (*handle == DMA_MAPPING_ERROR) {
__iommu_dma_free(dev, size, cpu_addr);
return NULL;
@@ -1182,7 +1185,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
if (!msi_page)
return NULL;

- iova = iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev);
+ iova = iommu_dma_alloc_iova(domain, dev, size, 0);
if (iova == DMA_MAPPING_ERROR)
goto out_free_page;

--
2.17.1

2020-09-25 14:16:50

by Marek Szyprowski

Subject: [PATCH 5/8] iommu: dma-iommu: add support for DMA_ATTR_LOW_ADDRESS

Implement support for the DMA_ATTR_LOW_ADDRESS DMA attribute. If it has
been set, call alloc_iova_first_fit() instead of alloc_iova_fast() to
allocate the new IOVA from the beginning of the address space.

Signed-off-by: Marek Szyprowski <[email protected]>
---
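
Condensed sketch of the flow added by this patch (taken from the hunks
below, with unrelated lines omitted): the DMA attribute is translated into
an allocator flag, which then selects the IOVA allocator:

	unsigned int flags = dma_attrs_to_alloc_flags(attrs, false);
	...
	if (unlikely(flags & DMA_ALLOC_IOVA_FIRST_FIT))
		iova = alloc_iova_first_fit(iovad, iova_len, dma_limit >> shift);
	else
		iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift, true);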
drivers/iommu/dma-iommu.c | 50 +++++++++++++++++++++++++++++----------
1 file changed, 38 insertions(+), 12 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 0ea87023306f..ab39659c727a 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -401,6 +401,18 @@ static int dma_info_to_prot(enum dma_data_direction dir, bool coherent,
}

#define DMA_ALLOC_IOVA_COHERENT BIT(0)
+#define DMA_ALLOC_IOVA_FIRST_FIT BIT(1)
+
+static unsigned int dma_attrs_to_alloc_flags(unsigned long attrs, bool coherent)
+{
+ unsigned int flags = 0;
+
+ if (coherent)
+ flags |= DMA_ALLOC_IOVA_COHERENT;
+ if (attrs & DMA_ATTR_LOW_ADDRESS)
+ flags |= DMA_ALLOC_IOVA_FIRST_FIT;
+ return flags;
+}

static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
struct device *dev, size_t size, unsigned int flags)
@@ -433,13 +445,23 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
dma_limit = min(dma_limit, (u64)domain->geometry.aperture_end);

/* Try to get PCI devices a SAC address */
- if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
- iova = alloc_iova_fast(iovad, iova_len,
- DMA_BIT_MASK(32) >> shift, false);
+ if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev)) {
+ if (unlikely(flags & DMA_ALLOC_IOVA_FIRST_FIT))
+ iova = alloc_iova_first_fit(iovad, iova_len,
+ DMA_BIT_MASK(32) >> shift);
+ else
+ iova = alloc_iova_fast(iovad, iova_len,
+ DMA_BIT_MASK(32) >> shift, false);
+ }

- if (iova == IOVA_BAD_ADDR)
- iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift,
- true);
+ if (iova == IOVA_BAD_ADDR) {
+ if (unlikely(flags & DMA_ALLOC_IOVA_FIRST_FIT))
+ iova = alloc_iova_first_fit(iovad, iova_len,
+ dma_limit >> shift);
+ else
+ iova = alloc_iova_fast(iovad, iova_len,
+ dma_limit >> shift, true);
+ }

if (iova != IOVA_BAD_ADDR)
return (dma_addr_t)iova << shift;
@@ -593,6 +615,7 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
struct iova_domain *iovad = &cookie->iovad;
bool coherent = dev_is_dma_coherent(dev);
int ioprot = dma_info_to_prot(DMA_BIDIRECTIONAL, coherent, attrs);
+ unsigned int flags = dma_attrs_to_alloc_flags(attrs, true);
pgprot_t prot = dma_pgprot(dev, PAGE_KERNEL, attrs);
unsigned int count, min_size, alloc_sizes = domain->pgsize_bitmap;
struct page **pages;
@@ -622,7 +645,7 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
return NULL;

size = iova_align(iovad, size);
- iova = iommu_dma_alloc_iova(domain, dev, size, DMA_ALLOC_IOVA_COHERENT);
+ iova = iommu_dma_alloc_iova(domain, dev, size, flags);
if (iova == DMA_MAPPING_ERROR)
goto out_free_pages;

@@ -732,12 +755,13 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size, enum dma_data_direction dir,
unsigned long attrs)
{
+ unsigned int flags = dma_attrs_to_alloc_flags(attrs, false);
phys_addr_t phys = page_to_phys(page) + offset;
bool coherent = dev_is_dma_coherent(dev);
int prot = dma_info_to_prot(dir, coherent, attrs);
dma_addr_t dma_handle;

- dma_handle = __iommu_dma_map(dev, phys, size, prot, 0);
+ dma_handle = __iommu_dma_map(dev, phys, size, prot, flags);
if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
dma_handle != DMA_MAPPING_ERROR)
arch_sync_dma_for_device(phys, size, dir);
@@ -842,6 +866,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
struct iova_domain *iovad = &cookie->iovad;
struct scatterlist *s, *prev = NULL;
int prot = dma_info_to_prot(dir, dev_is_dma_coherent(dev), attrs);
+ unsigned int flags = dma_attrs_to_alloc_flags(attrs, false);
dma_addr_t iova;
size_t iova_len = 0;
unsigned long mask = dma_get_seg_boundary(dev);
@@ -892,7 +917,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
prev = s;
}

- iova = iommu_dma_alloc_iova(domain, dev, iova_len, 0);
+ iova = iommu_dma_alloc_iova(domain, dev, iova_len, flags);
if (iova == DMA_MAPPING_ERROR)
goto out_restore_sg;

@@ -940,7 +965,8 @@ static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
size_t size, enum dma_data_direction dir, unsigned long attrs)
{
return __iommu_dma_map(dev, phys, size,
- dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO, 0);
+ dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO,
+ dma_attrs_to_alloc_flags(attrs, false));
}

static void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle,
@@ -1027,6 +1053,7 @@ static void *iommu_dma_alloc_pages(struct device *dev, size_t size,
static void *iommu_dma_alloc(struct device *dev, size_t size,
dma_addr_t *handle, gfp_t gfp, unsigned long attrs)
{
+ unsigned int flags = dma_attrs_to_alloc_flags(attrs, true);
bool coherent = dev_is_dma_coherent(dev);
int ioprot = dma_info_to_prot(DMA_BIDIRECTIONAL, coherent, attrs);
struct page *page = NULL;
@@ -1047,8 +1074,7 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
if (!cpu_addr)
return NULL;

- *handle = __iommu_dma_map(dev, page_to_phys(page), size, ioprot,
- DMA_ALLOC_IOVA_COHERENT);
+ *handle = __iommu_dma_map(dev, page_to_phys(page), size, ioprot, flags);
if (*handle == DMA_MAPPING_ERROR) {
__iommu_dma_free(dev, size, cpu_addr);
return NULL;
--
2.17.1

2020-09-25 14:16:51

by Marek Szyprowski

Subject: [PATCH 7/8] media: platform: exynos4-is: use DMA_ATTR_LOW_ADDRESS

The Exynos4-IS driver relied on the way the ARM DMA-IOMMU glue code
worked: mainly on the fact that the allocator used a first-fit algorithm
and that the first allocated buffer ended up at DMA/IOVA address 0x0. This
is not true for the generic IOMMU-DMA glue code that will soon be used for
the ARM architecture, so add the needed DMA attribute to force such
behavior of the DMA-mapping code.

Signed-off-by: Marek Szyprowski <[email protected]>
---
drivers/media/platform/exynos4-is/fimc-is.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/media/platform/exynos4-is/fimc-is.c b/drivers/media/platform/exynos4-is/fimc-is.c
index 41b841a96338..9d3556eae5d3 100644
--- a/drivers/media/platform/exynos4-is/fimc-is.c
+++ b/drivers/media/platform/exynos4-is/fimc-is.c
@@ -335,8 +335,9 @@ static int fimc_is_alloc_cpu_memory(struct fimc_is *is)
{
struct device *dev = &is->pdev->dev;

- is->memory.vaddr = dma_alloc_coherent(dev, FIMC_IS_CPU_MEM_SIZE,
- &is->memory.addr, GFP_KERNEL);
+ is->memory.vaddr = dma_alloc_attrs(dev, FIMC_IS_CPU_MEM_SIZE,
+ &is->memory.addr, GFP_KERNEL,
+ DMA_ATTR_LOW_ADDRESS);
if (is->memory.vaddr == NULL)
return -ENOMEM;

--
2.17.1

2020-09-25 14:17:41

by Marek Szyprowski

Subject: [PATCH 1/8] dma-mapping: add DMA_ATTR_LOW_ADDRESS attribute

Some devices require a special buffer (usually for the firmware) to be
allocated right at the beginning of the address space, to ensure that all
further allocations can be expressed as a positive offset from that
special buffer. When an IOMMU is used for managing the DMA address space,
such a requirement can be easily fulfilled, simply by enforcing the
'first-fit' IOVA allocation algorithm.

This patch adds a DMA attribute for such a case.

Signed-off-by: Marek Szyprowski <[email protected]>
---
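
An illustrative sketch of the requirement described above (hypothetical
device, placeholder variables): every further buffer is handed to the
hardware as an offset from the special low-address buffer, so that offset
must never be negative:

	/* base buffer allocated at the lowest possible IOVA */
	base_vaddr = dma_alloc_attrs(dev, base_size, &base_addr, GFP_KERNEL,
				     DMA_ATTR_LOW_ADDRESS);

	/* later buffers are programmed as offsets from 'base_addr' */
	offset = buf_addr - base_addr;	/* must not underflow */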
include/linux/dma-mapping.h | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index bb138ac6f5e6..c8c568ba375b 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -66,6 +66,12 @@
* at least read-only at lesser-privileged levels).
*/
#define DMA_ATTR_PRIVILEGED (1UL << 9)
+/*
+ * DMA_ATTR_LOW_ADDRESS: used to indicate that the buffer should be allocated
+ * at the lowest possible DMA address, usually just at the beginning of the
+ * DMA/IOVA address space ('first-fit' allocation algorithm).
+ */
+#define DMA_ATTR_LOW_ADDRESS (1UL << 10)

/*
* A dma_addr_t can hold any valid DMA or bus address for the platform.
--
2.17.1

2020-09-25 16:23:54

by Christoph Hellwig

Subject: Re: [PATCH 1/8] dma-mapping: add DMA_ATTR_LOW_ADDRESS attribute

> #define DMA_ATTR_PRIVILEGED (1UL << 9)
> +/*
> + * DMA_ATTR_LOW_ADDRESS: used to indicate that the buffer should be allocated
> + * at the lowest possible DMA address, usually just at the beginning of the
> + * DMA/IOVA address space ('first-fit' allocation algorithm).
> + */
> +#define DMA_ATTR_LOW_ADDRESS (1UL << 10)

I think we need better comments explaining that this is best effort
and only applies to DMA API implementations that actually have an
allocatable IOVA space.