2022-04-12 20:10:23

by Chao Gao

Subject: [PATCH] dma-direct: avoid redundant memory sync for swiotlb

When we looked into FIO performance with swiotlb enabled in a VM, we found
that swiotlb_bounce() is always called one more time than expected for each
DMA read request.

It turns out that the bounce buffer is copied back to the original DMA
buffer twice after the completion of a DMA request (once in
dma_direct_sync_single_for_cpu(), and again in swiotlb_tbl_unmap_single()).
But the content of the bounce buffer doesn't change between the two
rounds of copy, so one of them is redundant.
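
For reference, the first copy comes from the sync path; a simplified
sketch of dma_direct_sync_single_for_cpu() in kernel/dma/direct.h
(paraphrased, not the exact source) looks like:

static inline void dma_direct_sync_single_for_cpu(struct device *dev,
		dma_addr_t addr, size_t size, enum dma_data_direction dir)
{
	phys_addr_t paddr = dma_to_phys(dev, addr);

	if (!dev_is_dma_coherent(dev)) {
		arch_sync_dma_for_cpu(paddr, size, dir);
		arch_sync_dma_for_cpu_all();
	}

	/* first copy back to the original buffer, via swiotlb_bounce() */
	if (unlikely(is_swiotlb_buffer(dev, paddr)))
		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);

	if (dir == DMA_FROM_DEVICE)
		arch_dma_mark_clean(paddr, size);
}

The second copy is then done by swiotlb_tbl_unmap_single() itself when
dma_direct_unmap_page() releases the bounce slot.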

Pass the DMA_ATTR_SKIP_CPU_SYNC flag to swiotlb_tbl_unmap_single() to
skip the redundant memory copy there.

This fix increases FIO 64KB sequential read throughput in a guest with
swiotlb=force by 5.6%.

Reported-by: Wang Zhaoyang1 <[email protected]>
Reported-by: Gao Liang <[email protected]>
Signed-off-by: Chao Gao <[email protected]>
Reviewed-by: Kevin Tian <[email protected]>
---
kernel/dma/direct.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 4632b0f4f72e..8a6cd53dbe8c 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -114,6 +114,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
 	dma_direct_sync_single_for_cpu(dev, addr, size, dir);
 
 	if (unlikely(is_swiotlb_buffer(dev, phys)))
-		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
+		swiotlb_tbl_unmap_single(dev, phys, size, dir,
+					 attrs | DMA_ATTR_SKIP_CPU_SYNC);
 }
 #endif /* _KERNEL_DMA_DIRECT_H */
--
2.25.1


2022-04-12 20:21:06

by Chao Gao

Subject: Re: [PATCH] dma-direct: avoid redundant memory sync for swiotlb

On Tue, Apr 12, 2022 at 07:38:05PM +0800, Chao Gao wrote:
>When we looked into FIO performance with swiotlb enabled in a VM, we found
>that swiotlb_bounce() is always called one more time than expected for each
>DMA read request.
>
>It turns out that the bounce buffer is copied back to the original DMA
>buffer twice after the completion of a DMA request (once in
>dma_direct_sync_single_for_cpu(), and again in swiotlb_tbl_unmap_single()).
>But the content of the bounce buffer doesn't change between the two
>rounds of copy, so one of them is redundant.
>
>Pass the DMA_ATTR_SKIP_CPU_SYNC flag to swiotlb_tbl_unmap_single() to
>skip the redundant memory copy there.
>
>This fix increases FIO 64KB sequential read throughput in a guest with
>swiotlb=force by 5.6%.
>

Sorry, a Fixes tag is missing:

Fixes: 55897af63091 ("dma-direct: merge swiotlb_dma_ops into the dma_direct code")

>Reported-by: Wang Zhaoyang1 <[email protected]>
>Reported-by: Gao Liang <[email protected]>
>Signed-off-by: Chao Gao <[email protected]>
>Reviewed-by: Kevin Tian <[email protected]>
>---
> kernel/dma/direct.h | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
>diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
>index 4632b0f4f72e..8a6cd53dbe8c 100644
>--- a/kernel/dma/direct.h
>+++ b/kernel/dma/direct.h
>@@ -114,6 +114,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
> dma_direct_sync_single_for_cpu(dev, addr, size, dir);
>
> if (unlikely(is_swiotlb_buffer(dev, phys)))
>- swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
>+ swiotlb_tbl_unmap_single(dev, phys, size, dir,
>+ attrs | DMA_ATTR_SKIP_CPU_SYNC);
> }
> #endif /* _KERNEL_DMA_DIRECT_H */
>--
>2.25.1
>

2022-04-13 00:00:17

by Robin Murphy

Subject: Re: [PATCH] dma-direct: avoid redundant memory sync for swiotlb

On 12/04/2022 12:38 pm, Chao Gao wrote:
> When we looked into FIO performance with swiotlb enabled in a VM, we found
> that swiotlb_bounce() is always called one more time than expected for each
> DMA read request.
>
> It turns out that the bounce buffer is copied back to the original DMA
> buffer twice after the completion of a DMA request (once in
> dma_direct_sync_single_for_cpu(), and again in swiotlb_tbl_unmap_single()).
> But the content of the bounce buffer doesn't change between the two
> rounds of copy, so one of them is redundant.
>
> Pass the DMA_ATTR_SKIP_CPU_SYNC flag to swiotlb_tbl_unmap_single() to
> skip the redundant memory copy there.

It's still a little suboptimal and non-obvious to call into SWIOTLB
twice though - even better might be for SWIOTLB to call
arch_sync_dma_for_cpu() at the appropriate place internally, then put
the dma_direct_sync in an else path here. I'm really not sure why we
have the current disparity between map and unmap in this regard... :/
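
In sketch form (illustrative only, untested; the arch_sync_dma_for_cpu()
call itself would live inside swiotlb_tbl_unmap_single()):

static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
		size_t size, enum dma_data_direction dir, unsigned long attrs)
{
	phys_addr_t phys = dma_to_phys(dev, addr);

	if (unlikely(is_swiotlb_buffer(dev, phys)))
		/* swiotlb does the copy-back and any arch sync internally */
		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
	else if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
		dma_direct_sync_single_for_cpu(dev, addr, size, dir);
}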

Robin.

> This fix increases FIO 64KB sequential read throughput in a guest with
> swiotlb=force by 5.6%.
>
> Reported-by: Wang Zhaoyang1 <[email protected]>
> Reported-by: Gao Liang <[email protected]>
> Signed-off-by: Chao Gao <[email protected]>
> Reviewed-by: Kevin Tian <[email protected]>
> ---
> kernel/dma/direct.h | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
> index 4632b0f4f72e..8a6cd53dbe8c 100644
> --- a/kernel/dma/direct.h
> +++ b/kernel/dma/direct.h
> @@ -114,6 +114,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
> dma_direct_sync_single_for_cpu(dev, addr, size, dir);
>
> if (unlikely(is_swiotlb_buffer(dev, phys)))
> - swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
> + swiotlb_tbl_unmap_single(dev, phys, size, dir,
> + attrs | DMA_ATTR_SKIP_CPU_SYNC);
> }
> #endif /* _KERNEL_DMA_DIRECT_H */

2022-04-13 06:30:05

by Christoph Hellwig

Subject: Re: [PATCH] dma-direct: avoid redundant memory sync for swiotlb

On Wed, Apr 13, 2022 at 09:02:02AM +0800, Chao Gao wrote:
> dma_direct_sync_single_for_cpu() also calls arch_sync_dma_for_cpu_all()
> and arch_dma_mark_clean() in some cases. If SWIOTLB does the sync internally,
> should these two functions be called by SWIOTLB as well?
>
> Personally, I think it would be better if swiotlb just focused on bounce
> buffer alloc/free. Adding more DMA coherence logic into swiotlb would make
> it a little complicated.
>
> How about an open-coded version of dma_direct_sync_single_for_cpu()
> in dma_direct_unmap_page(), with swiotlb_sync_single_for_cpu() replaced by
> swiotlb_tbl_unmap_single()?

I don't think the swiotlb and non-coherent case ever fully worked.
Before the merge of swiotlb into dma-direct they obviously were
mutually exclusive, and even now all the cache maintenance is done
on the physical address of the original data, not the swiotlb buffer.

If we want to fix that properly all the arch dma calls will need to
move into swiotlb, but that is a much bigger patch.

So for now I'd be happy with the one liner presented here, but
eventually the whole area could use an overhaul.

2022-04-13 08:01:33

by Christoph Hellwig

Subject: Re: [PATCH] dma-direct: avoid redundant memory sync for swiotlb

On Wed, Apr 13, 2022 at 01:46:06PM +0800, Chao Gao wrote:
> On Wed, Apr 13, 2022 at 06:59:58AM +0200, Christoph Hellwig wrote:
> >So for now I'd be happy with the one liner presented here, but
> >eventually the whole area could use an overhaul.
>
> Thanks. Do you want me to post a new version with the Fixes tag, or will
> you take care of it?

I can add the Fixes tag. I'll wait another day or two for more feedback,
though.

>
> Fixes: 55897af63091 ("dma-direct: merge swiotlb_dma_ops into the dma_direct code")
---end quoted text---

2022-04-13 08:38:54

by Chao Gao

Subject: Re: [PATCH] dma-direct: avoid redundant memory sync for swiotlb

On Tue, Apr 12, 2022 at 02:33:05PM +0100, Robin Murphy wrote:
>On 12/04/2022 12:38 pm, Chao Gao wrote:
>> When we looked into FIO performance with swiotlb enabled in a VM, we found
>> that swiotlb_bounce() is always called one more time than expected for each
>> DMA read request.
>>
>> It turns out that the bounce buffer is copied back to the original DMA
>> buffer twice after the completion of a DMA request (once in
>> dma_direct_sync_single_for_cpu(), and again in swiotlb_tbl_unmap_single()).
>> But the content of the bounce buffer doesn't change between the two
>> rounds of copy, so one of them is redundant.
>>
>> Pass the DMA_ATTR_SKIP_CPU_SYNC flag to swiotlb_tbl_unmap_single() to
>> skip the redundant memory copy there.
>
>It's still a little suboptimal and non-obvious to call into SWIOTLB twice
>though - even better might be for SWIOTLB to call arch_sync_dma_for_cpu() at
>the appropriate place internally,

Hi Robin,

dma_direct_sync_single_for_cpu() also calls arch_sync_dma_for_cpu_all()
and arch_dma_mark_clean() in some cases. If SWIOTLB does the sync internally,
should these two functions be called by SWIOTLB as well?

Personally, I think it would be better if swiotlb just focused on bounce
buffer alloc/free. Adding more DMA coherence logic into swiotlb would make
it a little complicated.

How about an open-coded version of dma_direct_sync_single_for_cpu()
in dma_direct_unmap_page(), with swiotlb_sync_single_for_cpu() replaced by
swiotlb_tbl_unmap_single()?
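
Something like this, i.e. a very rough sketch (untested; note the swiotlb
unmap has to stay outside the DMA_ATTR_SKIP_CPU_SYNC check so that the
bounce slot is still freed):

static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
		size_t size, enum dma_data_direction dir, unsigned long attrs)
{
	phys_addr_t phys = dma_to_phys(dev, addr);

	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
		/*
		 * Open-coded dma_direct_sync_single_for_cpu(), with its
		 * swiotlb_sync_single_for_cpu() call dropped.
		 */
		if (!dev_is_dma_coherent(dev)) {
			arch_sync_dma_for_cpu(phys, size, dir);
			arch_sync_dma_for_cpu_all();
		}

		if (dir == DMA_FROM_DEVICE)
			arch_dma_mark_clean(phys, size);
	}

	/* the copy back to the original buffer now happens only here */
	if (unlikely(is_swiotlb_buffer(dev, phys)))
		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
}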

2022-04-13 08:39:02

by Chao Gao

Subject: Re: [PATCH] dma-direct: avoid redundant memory sync for swiotlb

On Wed, Apr 13, 2022 at 06:59:58AM +0200, Christoph Hellwig wrote:
>So for now I'd be happy with the one liner presented here, but
>eventually the whole area could use an overhaul.

Thanks. Do you want me to post a new version with the Fixes tag, or will
you take care of it?

Fixes: 55897af63091 ("dma-direct: merge swiotlb_dma_ops into the dma_direct code")

2022-04-14 09:50:32

by Christoph Hellwig

Subject: Re: [PATCH] dma-direct: avoid redundant memory sync for swiotlb

On Wed, Apr 13, 2022 at 02:10:56PM +0100, Robin Murphy wrote:
> Are you sure? AFAICS swiotlb_map() does the right thing, and
> dma_direct_{sync,unmap} are working off the DMA address, which is that of
> the bounce slot when SWIOTLB is involved (not least, how would the
> is_swiotlb_buffer() checks work otherwise?)

Yeah, actually this should be fine.

2022-04-15 02:57:29

by Robin Murphy

Subject: Re: [PATCH] dma-direct: avoid redundant memory sync for swiotlb

On 2022-04-13 05:59, Christoph Hellwig wrote:
> On Wed, Apr 13, 2022 at 09:02:02AM +0800, Chao Gao wrote:
>> dma_direct_sync_single_for_cpu() also calls arch_sync_dma_for_cpu_all()
>> and arch_dma_mark_clean() in some cases. If SWIOTLB does the sync internally,
>> should these two functions be called by SWIOTLB as well?
>>
>> Personally, I think it would be better if swiotlb just focused on bounce
>> buffer alloc/free. Adding more DMA coherence logic into swiotlb would make
>> it a little complicated.
>>
>> How about an open-coded version of dma_direct_sync_single_for_cpu()
>> in dma_direct_unmap_page(), with swiotlb_sync_single_for_cpu() replaced by
>> swiotlb_tbl_unmap_single()?
>
> I don't think the swiotlb and non-coherent case ever fully worked.
> Before the merge of swiotlb into dma-direct they obviously were
> mutually exclusive, and even now all the cache maintenance is done
> on the physical address of the original data, not the swiotlb buffer.

Are you sure? AFAICS swiotlb_map() does the right thing, and
dma_direct_{sync,unmap} are working off the DMA address, which is that
of the bounce slot when SWIOTLB is involved (not least, how would the
is_swiotlb_buffer() checks work otherwise?)

> If we want to fix that properly all the arch dma calls will need to
> move into swiotlb, but that is a much bigger patch.
>
> So for now I'd be happy with the one liner presented here, but
> eventually the whole area could use an overhaul.

Sure, whoever gets round to tackling DMA_ATTR_NO_SNOOP first will need
to go through all the cache maintenance hooks anyway, so happy to kick
the can down the road until then.

Thanks,
Robin.