2024-02-05 19:03:09

by Will Deacon

Subject: [PATCH v3 3/3] swiotlb: Honour dma_alloc_coherent() alignment in swiotlb_alloc()

core-api/dma-api-howto.rst states the following properties of
dma_alloc_coherent():

| The CPU virtual address and the DMA address are both guaranteed to
| be aligned to the smallest PAGE_SIZE order which is greater than or
| equal to the requested size.

However, swiotlb_alloc() passes zero for the 'alloc_align_mask'
parameter of swiotlb_find_slots() and so this property is not upheld.
Instead, allocations larger than a page are aligned only to PAGE_SIZE.

Calculate the mask corresponding to the page order suitable for holding
the allocation and pass that to swiotlb_find_slots().
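To make the arithmetic concrete, here is a standalone userspace sketch of
the mask computation (an illustration only, not part of the patch: it
assumes 4 KiB pages and re-implements the kernel's get_order() so the
example compiles and runs outside the kernel):

#include <stdio.h>
#include <stddef.h>

#define PAGE_SHIFT	12			/* assume 4 KiB pages */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* Mirrors the kernel helper: smallest n such that (PAGE_SIZE << n) >= size */
static unsigned int get_order(size_t size)
{
	unsigned int order = 0;

	while ((PAGE_SIZE << order) < size)
		order++;
	return order;
}

int main(void)
{
	size_t sizes[] = { 4096, 8192, 20480, 65536 };
	size_t i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		/* Same expression as the patch below */
		unsigned int align = (1U << (get_order(sizes[i]) + PAGE_SHIFT)) - 1;

		printf("size %6zu -> alloc_align_mask %#x\n", sizes[i], align);
	}
	return 0;
}

A 20 KiB request, for instance, rounds up to order 3, giving a mask of
0x7fff, so swiotlb_find_slots() is asked for a 32 KiB-aligned buffer:
the smallest PAGE_SIZE order able to hold the allocation, as the
documentation requires.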

Cc: Christoph Hellwig <[email protected]>
Cc: Marek Szyprowski <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Petr Tesarik <[email protected]>
Cc: Dexuan Cui <[email protected]>
Fixes: e81e99bacc9f ("swiotlb: Support aligned swiotlb buffers")
Signed-off-by: Will Deacon <[email protected]>
---
kernel/dma/swiotlb.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index adbb3143238b..283eea33dd22 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -1633,12 +1633,14 @@ struct page *swiotlb_alloc(struct device *dev, size_t size)
struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
struct io_tlb_pool *pool;
phys_addr_t tlb_addr;
+ unsigned int align;
int index;

if (!mem)
return NULL;

- index = swiotlb_find_slots(dev, 0, size, 0, &pool);
+ align = (1 << (get_order(size) + PAGE_SHIFT)) - 1;
+ index = swiotlb_find_slots(dev, 0, size, align, &pool);
if (index == -1)
return NULL;

--
2.43.0.594.gd9cf4e227d-goog



2024-02-19 12:56:27

by Petr Tesařík

Subject: Re: [PATCH v3 3/3] swiotlb: Honour dma_alloc_coherent() alignment in swiotlb_alloc()

On Mon, 5 Feb 2024 19:01:27 +0000
Will Deacon <[email protected]> wrote:

> core-api/dma-api-howto.rst states the following properties of
> dma_alloc_coherent():
>
> | The CPU virtual address and the DMA address are both guaranteed to
> | be aligned to the smallest PAGE_SIZE order which is greater than or
> | equal to the requested size.
>
> However, swiotlb_alloc() passes zero for the 'alloc_align_mask'
> parameter of swiotlb_find_slots() and so this property is not upheld.
> Instead, allocations larger than a page are aligned only to PAGE_SIZE.
>
> Calculate the mask corresponding to the page order suitable for holding
> the allocation and pass that to swiotlb_find_slots().
>
> Cc: Christoph Hellwig <[email protected]>
> Cc: Marek Szyprowski <[email protected]>
> Cc: Robin Murphy <[email protected]>
> Cc: Petr Tesarik <[email protected]>
> Cc: Dexuan Cui <[email protected]>
> Fixes: e81e99bacc9f ("swiotlb: Support aligned swiotlb buffers")
> Signed-off-by: Will Deacon <[email protected]>

Reviewed-by: Petr Tesarik <[email protected]>

Petr T

> ---
> kernel/dma/swiotlb.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index adbb3143238b..283eea33dd22 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -1633,12 +1633,14 @@ struct page *swiotlb_alloc(struct device *dev, size_t size)
> struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
> struct io_tlb_pool *pool;
> phys_addr_t tlb_addr;
> + unsigned int align;
> int index;
>
> if (!mem)
> return NULL;
>
> - index = swiotlb_find_slots(dev, 0, size, 0, &pool);
> + align = (1 << (get_order(size) + PAGE_SHIFT)) - 1;
> + index = swiotlb_find_slots(dev, 0, size, align, &pool);
> if (index == -1)
> return NULL;
>