Currently, riscv defines ARCH_DMA_MINALIGN as L1_CACHE_BYTES, i.e.
64 bytes, if CONFIG_RISCV_DMA_NONCOHERENT=y. To support a unified kernel
image we usually have to enable CONFIG_RISCV_DMA_NONCOHERENT, which has
two bad effects on coherent platforms:
Firstly, it wastes memory: the kmalloc-96, kmalloc-32, kmalloc-16 and
kmalloc-8 slab caches no longer exist; they are replaced with either
kmalloc-128 or kmalloc-64.
Secondly, larger-than-necessary kmalloc() alignment results in
unnecessary cache/TLB pressure.
This issue also exists on arm64 platforms. Since last year, Catalin has
been solving it by decoupling ARCH_KMALLOC_MINALIGN from
ARCH_DMA_MINALIGN, limiting the kmalloc() minimum alignment to
dma_get_cache_alignment(), and replacing ARCH_KMALLOC_MINALIGN usage
in various drivers with ARCH_DMA_MINALIGN, etc.[1]
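For context, after Catalin's series the generic dma_get_cache_alignment()
helper in include/linux/dma-mapping.h looks roughly like this (a sketch
from memory, not a verbatim copy; an arch can override it, which is what
this series does for riscv):

	#ifndef dma_get_cache_alignment
	static inline int dma_get_cache_alignment(void)
	{
	#ifdef ARCH_HAS_DMA_MINALIGN
		/* noncoherent DMA possible: keep the conservative alignment */
		return ARCH_DMA_MINALIGN;
	#endif
		/* fully coherent: no DMA alignment constraint */
		return 1;
	}
	#endif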
One fact we can make use of on riscv: if the CPU supports neither
ZICBOM nor T-HEAD CMO, we know the platform is coherent. Based on
Catalin's work and the above fact, we can easily solve the kmalloc
alignment issue for riscv: override dma_get_cache_alignment() so that
it returns ARCH_DMA_MINALIGN at first, and returns 1 once we know the
underlying HW supports neither ZICBOM nor T-HEAD CMO.
So what if the CPU supports ZICBOM or T-HEAD CMO, but all the devices
are DMA coherent? Then we keep using ARCH_DMA_MINALIGN as the kmalloc
minimum alignment, so nothing changes in this case. It can be improved
in the future.
After patch 1, a simple test of booting to a small buildroot rootfs
on qemu shows:
kmalloc-96 5041 5041 96 ...
kmalloc-64 9606 9606 64 ...
kmalloc-32 5128 5128 32 ...
kmalloc-16 7682 7682 16 ...
kmalloc-8 10246 10246 8 ...
So we save about 1268KB of memory. The saving will be much larger in a
normal OS environment on real HW platforms.
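For reference, that figure follows from the counts above, assuming each
object previously came from the next cache up (kmalloc-96 objects from
kmalloc-128, the rest from kmalloc-64):

	5041*(128-96) + 5128*(64-32) + 7682*(64-16) + 10246*(64-8)
	  = 161312 + 164096 + 368736 + 573776
	  = 1267920 bytes, i.e. about 1268KB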
Patch 1 allows kmalloc() caches to be aligned to the smallest value.
Patch 2 enables DMA_BOUNCE_UNALIGNED_KMALLOC.
After this series:
On coherent platforms, i.e. !ZICBOM and !THEAD_CMO, the
kmalloc-{8,16,32,96} caches come back on both RV32 and RV64.
On noncoherent RV32 platforms, nothing changes.
On noncoherent RV64 platforms, i.e. either ZICBOM or THEAD_CMO, the
above kmalloc caches also come back if there is more than 4GB of memory,
or if users pass "swiotlb=mmnn,force" to force swiotlb creation with
4GB of memory or less. What mmnn should be depends on the specific
platform; it needs to be tried and tested against all possible use
cases on the specific hardware. For example, I can use the minimal
number of I/O TLB slabs on the Sipeed M1S Dock.
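As an illustration only (the slab count below is a made-up example, not
a recommendation; each I/O TLB slab is 2KB, i.e. 1 << IO_TLB_SHIFT
bytes), something like:

	swiotlb=1024,force

would force creation of a 1024-slab (2MB) bounce buffer instead of the
64MB default.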
[1] Link: https://lore.kernel.org/linux-arm-kernel/[email protected]/
Since v1
- remove preparation patches since they have been merged
- adjust Kconfig entry to keep entries sorted
- add new function riscv_set_dma_cache_alignment() to set the
dma_cache_alignment var.
Jisheng Zhang (2):
riscv: allow kmalloc() caches aligned to the smallest value
riscv: enable DMA_BOUNCE_UNALIGNED_KMALLOC for !dma_coherent
arch/riscv/Kconfig | 1 +
arch/riscv/include/asm/cache.h | 14 ++++++++++++++
arch/riscv/include/asm/cacheflush.h | 2 ++
arch/riscv/kernel/setup.c | 1 +
arch/riscv/mm/dma-noncoherent.c | 8 ++++++++
5 files changed, 26 insertions(+)
--
2.40.1
With the DMA bouncing of unaligned kmalloc() buffers now in place,
enable it for riscv when RISCV_DMA_NONCOHERENT=y to allow the
kmalloc-{8,16,32,96} caches. Since RV32 doesn't enable SWIOTLB yet and
I didn't see any DMA-noncoherent RV32 platforms in mainline, skip RV32
for now by only enabling DMA_BOUNCE_UNALIGNED_KMALLOC if SWIOTLB is
available. Once we see such a requirement on RV32, we can enable it
then.
NOTE: we don't force creation of the swiotlb buffer even when the end
of RAM is within the 32-bit physical address range. That is to say:
For RV64 with more than 4GB of memory, the feature is enabled.
For RV64 with 4GB of memory or less, the feature isn't enabled by
default. We rely on users to pass "swiotlb=mmnn,force", where mmnn is
the number of I/O TLB slabs; see kernel-parameters.txt for details.
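The generic logic this hooks into picks the kmalloc minimum alignment
at boot roughly as follows (a sketch of the idea, not the exact
mm/slab_common.c code):

	static unsigned int __kmalloc_minalign(void)
	{
		/* small caches are fine if swiotlb can bounce unaligned DMA */
		if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) &&
		    io_tlb_default_mem.nslabs)
			return ARCH_KMALLOC_MINALIGN;
		/* otherwise fall back to the DMA cache alignment */
		return dma_get_cache_alignment();
	}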
Tested on Sipeed Lichee Pi 4A with 8GB DDR and Sipeed M1S BL808 Dock
board.
Signed-off-by: Jisheng Zhang <[email protected]>
---
arch/riscv/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 4c07b9189c86..6681bd6ed2d7 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -267,6 +267,7 @@ config RISCV_DMA_NONCOHERENT
select ARCH_HAS_SETUP_DMA_OPS
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+ select DMA_BOUNCE_UNALIGNED_KMALLOC if SWIOTLB
select DMA_DIRECT_REMAP
config AS_HAS_INSN
--
2.40.1
Currently, riscv defines ARCH_DMA_MINALIGN as L1_CACHE_BYTES, i.e.
64 bytes, if CONFIG_RISCV_DMA_NONCOHERENT=y. To support a unified kernel
image we usually have to enable CONFIG_RISCV_DMA_NONCOHERENT, which has
two bad effects on coherent platforms:
Firstly, it wastes memory: the kmalloc-96, kmalloc-32, kmalloc-16 and
kmalloc-8 slab caches no longer exist; they are replaced with either
kmalloc-128 or kmalloc-64.
Secondly, larger-than-necessary kmalloc() alignment results in
unnecessary cache/TLB pressure.
This issue also exists on arm64 platforms. Since last year, Catalin has
been solving it by decoupling ARCH_KMALLOC_MINALIGN from
ARCH_DMA_MINALIGN, limiting the kmalloc() minimum alignment to
dma_get_cache_alignment(), and replacing ARCH_KMALLOC_MINALIGN usage
in various drivers with ARCH_DMA_MINALIGN, etc.[1]
One fact we can make use of on riscv: if the CPU supports neither
ZICBOM nor T-HEAD CMO, we know the platform is coherent. Based on
Catalin's work and the above fact, we can easily solve the kmalloc
alignment issue for riscv: override dma_get_cache_alignment() so that
it returns ARCH_DMA_MINALIGN at first, and returns 1 once we know the
underlying HW supports neither ZICBOM nor T-HEAD CMO.
So what if the CPU supports ZICBOM or T-HEAD CMO, but all the devices
are DMA coherent? Then we keep using ARCH_DMA_MINALIGN as the kmalloc
minimum alignment, so nothing changes in this case. It can be improved
in the future.
After this patch, a simple test of booting to a small buildroot rootfs
on qemu shows:
kmalloc-96 5041 5041 96 ...
kmalloc-64 9606 9606 64 ...
kmalloc-32 5128 5128 32 ...
kmalloc-16 7682 7682 16 ...
kmalloc-8 10246 10246 8 ...
So we save about 1268KB of memory. The saving will be much larger in a
normal OS environment on real HW platforms.
[1] Link: https://lore.kernel.org/linux-arm-kernel/[email protected]/
Signed-off-by: Jisheng Zhang <[email protected]>
Change-Id: Ica249d0f8058a02bd4bc6543b4ffc2946a4734a2
---
arch/riscv/include/asm/cache.h | 14 ++++++++++++++
arch/riscv/include/asm/cacheflush.h | 2 ++
arch/riscv/kernel/setup.c | 1 +
arch/riscv/mm/dma-noncoherent.c | 8 ++++++++
4 files changed, 25 insertions(+)
diff --git a/arch/riscv/include/asm/cache.h b/arch/riscv/include/asm/cache.h
index d3036df23ccb..2174fe7bac9a 100644
--- a/arch/riscv/include/asm/cache.h
+++ b/arch/riscv/include/asm/cache.h
@@ -13,6 +13,7 @@
#ifdef CONFIG_RISCV_DMA_NONCOHERENT
#define ARCH_DMA_MINALIGN L1_CACHE_BYTES
+#define ARCH_KMALLOC_MINALIGN (8)
#endif
/*
@@ -23,4 +24,17 @@
#define ARCH_SLAB_MINALIGN 16
#endif
+#ifndef __ASSEMBLY__
+
+#ifdef CONFIG_RISCV_DMA_NONCOHERENT
+extern int dma_cache_alignment;
+#define dma_get_cache_alignment dma_get_cache_alignment
+static inline int dma_get_cache_alignment(void)
+{
+ return dma_cache_alignment;
+}
+#endif
+
+#endif /* __ASSEMBLY__ */
+
#endif /* _ASM_RISCV_CACHE_H */
diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
index 8091b8bf4883..c640ab6f843b 100644
--- a/arch/riscv/include/asm/cacheflush.h
+++ b/arch/riscv/include/asm/cacheflush.h
@@ -55,8 +55,10 @@ void riscv_init_cbo_blocksizes(void);
#ifdef CONFIG_RISCV_DMA_NONCOHERENT
void riscv_noncoherent_supported(void);
+void __init riscv_set_dma_cache_alignment(void);
#else
static inline void riscv_noncoherent_supported(void) {}
+static inline void riscv_set_dma_cache_alignment(void) {}
#endif
/*
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
index 971fe776e2f8..027879b1557a 100644
--- a/arch/riscv/kernel/setup.c
+++ b/arch/riscv/kernel/setup.c
@@ -311,6 +311,7 @@ void __init setup_arch(char **cmdline_p)
if (IS_ENABLED(CONFIG_RISCV_ISA_ZICBOM) &&
riscv_isa_extension_available(NULL, ZICBOM))
riscv_noncoherent_supported();
+ riscv_set_dma_cache_alignment();
}
static int __init topology_init(void)
diff --git a/arch/riscv/mm/dma-noncoherent.c b/arch/riscv/mm/dma-noncoherent.c
index d51a75864e53..811227e54bbd 100644
--- a/arch/riscv/mm/dma-noncoherent.c
+++ b/arch/riscv/mm/dma-noncoherent.c
@@ -11,6 +11,8 @@
#include <asm/cacheflush.h>
static bool noncoherent_supported __ro_after_init;
+int dma_cache_alignment __ro_after_init = ARCH_DMA_MINALIGN;
+EXPORT_SYMBOL(dma_cache_alignment);
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
@@ -78,3 +80,9 @@ void riscv_noncoherent_supported(void)
"Non-coherent DMA support enabled without a block size\n");
noncoherent_supported = true;
}
+
+void __init riscv_set_dma_cache_alignment(void)
+{
+ if (!noncoherent_supported)
+ dma_cache_alignment = 1;
+}
--
2.40.1
On Mon, Jul 17, 2023 at 12:51:45AM +0800, Jisheng Zhang wrote:
> Since v1
> - remove preparation patches since they have been merged
> - adjust Kconfig entry to keep entries sorted
> - add new function riscv_set_dma_cache_alignment() to set the
> dma_cache_alignment var.
Yeah, looks a lot more straightforward now than in v1, thanks.
Hey Jisheng,
On Mon, Jul 17, 2023 at 12:51:46AM +0800, Jisheng Zhang wrote:
> Currently, riscv defines ARCH_DMA_MINALIGN as L1_CACHE_BYTES, i.e.
> 64 bytes, if CONFIG_RISCV_DMA_NONCOHERENT=y. To support a unified kernel
> image we usually have to enable CONFIG_RISCV_DMA_NONCOHERENT, which has
> two bad effects on coherent platforms:
>
> Firstly, it wastes memory: the kmalloc-96, kmalloc-32, kmalloc-16 and
> kmalloc-8 slab caches no longer exist; they are replaced with either
> kmalloc-128 or kmalloc-64.
>
> Secondly, larger-than-necessary kmalloc() alignment results in
> unnecessary cache/TLB pressure.
>
> This issue also exists on arm64 platforms. Since last year, Catalin has
> been solving it by decoupling ARCH_KMALLOC_MINALIGN from
> ARCH_DMA_MINALIGN, limiting the kmalloc() minimum alignment to
> dma_get_cache_alignment(), and replacing ARCH_KMALLOC_MINALIGN usage
> in various drivers with ARCH_DMA_MINALIGN, etc.[1]
>
> One fact we can make use of on riscv: if the CPU supports neither
> ZICBOM nor T-HEAD CMO, we know the platform is coherent. Based on
> Catalin's work and the above fact, we can easily solve the kmalloc
> alignment issue for riscv: override dma_get_cache_alignment() so that
> it returns ARCH_DMA_MINALIGN at first, and returns 1 once we know the
> underlying HW supports neither ZICBOM nor T-HEAD CMO.
>
> So what if the CPU supports ZICBOM or T-HEAD CMO, but all the devices
> are DMA coherent? Then we keep using ARCH_DMA_MINALIGN as the kmalloc
> minimum alignment, so nothing changes in this case. It can be improved
> in the future.
>
> After this patch, a simple test of booting to a small buildroot rootfs
> on qemu shows:
>
> kmalloc-96 5041 5041 96 ...
> kmalloc-64 9606 9606 64 ...
> kmalloc-32 5128 5128 32 ...
> kmalloc-16 7682 7682 16 ...
> kmalloc-8 10246 10246 8 ...
>
> So we save about 1268KB of memory. The saving will be much larger in a
> normal OS environment on real HW platforms.
>
> [1] Link: https://lore.kernel.org/linux-arm-kernel/[email protected]/
In the future,
Link: https://lore.kernel.org/linux-arm-kernel/[email protected]/ [1]
> Signed-off-by: Jisheng Zhang <[email protected]>
> Change-Id: Ica249d0f8058a02bd4bc6543b4ffc2946a4734a2
How come this has ended up with a Change-ID? Checkpatch says this is
something to do with Gerrit & needs to be removed.
> ---
> arch/riscv/include/asm/cache.h | 14 ++++++++++++++
> arch/riscv/include/asm/cacheflush.h | 2 ++
> arch/riscv/kernel/setup.c | 1 +
> arch/riscv/mm/dma-noncoherent.c | 8 ++++++++
> 4 files changed, 25 insertions(+)
>
> diff --git a/arch/riscv/include/asm/cache.h b/arch/riscv/include/asm/cache.h
> index d3036df23ccb..2174fe7bac9a 100644
> --- a/arch/riscv/include/asm/cache.h
> +++ b/arch/riscv/include/asm/cache.h
> @@ -13,6 +13,7 @@
>
> #ifdef CONFIG_RISCV_DMA_NONCOHERENT
> #define ARCH_DMA_MINALIGN L1_CACHE_BYTES
> +#define ARCH_KMALLOC_MINALIGN (8)
> #endif
>
> /*
> @@ -23,4 +24,17 @@
> #define ARCH_SLAB_MINALIGN 16
> #endif
>
> +#ifndef __ASSEMBLY__
> +
> +#ifdef CONFIG_RISCV_DMA_NONCOHERENT
> +extern int dma_cache_alignment;
> +#define dma_get_cache_alignment dma_get_cache_alignment
> +static inline int dma_get_cache_alignment(void)
> +{
> + return dma_cache_alignment;
> +}
> +#endif
> +
> +#endif /* __ASSEMBLY__ */
> +
> #endif /* _ASM_RISCV_CACHE_H */
> diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
> index 8091b8bf4883..c640ab6f843b 100644
> --- a/arch/riscv/include/asm/cacheflush.h
> +++ b/arch/riscv/include/asm/cacheflush.h
> @@ -55,8 +55,10 @@ void riscv_init_cbo_blocksizes(void);
>
> #ifdef CONFIG_RISCV_DMA_NONCOHERENT
> void riscv_noncoherent_supported(void);
> +void __init riscv_set_dma_cache_alignment(void);
> #else
> static inline void riscv_noncoherent_supported(void) {}
> +static inline void riscv_set_dma_cache_alignment(void) {}
> #endif
>
> /*
> diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
> index 971fe776e2f8..027879b1557a 100644
> --- a/arch/riscv/kernel/setup.c
> +++ b/arch/riscv/kernel/setup.c
> @@ -311,6 +311,7 @@ void __init setup_arch(char **cmdline_p)
> if (IS_ENABLED(CONFIG_RISCV_ISA_ZICBOM) &&
> riscv_isa_extension_available(NULL, ZICBOM))
> riscv_noncoherent_supported();
> + riscv_set_dma_cache_alignment();
> }
>
> static int __init topology_init(void)
> diff --git a/arch/riscv/mm/dma-noncoherent.c b/arch/riscv/mm/dma-noncoherent.c
> index d51a75864e53..811227e54bbd 100644
> --- a/arch/riscv/mm/dma-noncoherent.c
> +++ b/arch/riscv/mm/dma-noncoherent.c
> @@ -11,6 +11,8 @@
> #include <asm/cacheflush.h>
>
> static bool noncoherent_supported __ro_after_init;
> +int dma_cache_alignment __ro_after_init = ARCH_DMA_MINALIGN;
> +EXPORT_SYMBOL(dma_cache_alignment);
Why is this not EXPORT_SYMBOL_GPL()?
Otherwise, this generally looks good to me, thanks.
Conor.
>
> void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
> enum dma_data_direction dir)
> @@ -78,3 +80,9 @@ void riscv_noncoherent_supported(void)
> "Non-coherent DMA support enabled without a block size\n");
> noncoherent_supported = true;
> }
> +
> +void __init riscv_set_dma_cache_alignment(void)
> +{
> + if (!noncoherent_supported)
> + dma_cache_alignment = 1;
> +}
> --
> 2.40.1
>
On Mon, Jul 17, 2023 at 12:51:47AM +0800, Jisheng Zhang wrote:
> With the DMA bouncing of unaligned kmalloc() buffers now in place,
> enable it for riscv when RISCV_DMA_NONCOHERENT=y to allow the
> kmalloc-{8,16,32,96} caches. Since RV32 doesn't enable SWIOTLB yet and
> I didn't see any DMA-noncoherent RV32 platforms in mainline, skip RV32
> for now by only enabling DMA_BOUNCE_UNALIGNED_KMALLOC if SWIOTLB is
> available. Once we see such a requirement on RV32, we can enable it
> then.
>
> NOTE: we don't force creation of the swiotlb buffer even when the end
> of RAM is within the 32-bit physical address range. That is to say:
> For RV64 with more than 4GB of memory, the feature is enabled.
> For RV64 with 4GB of memory or less, the feature isn't enabled by
> default. We rely on users to pass "swiotlb=mmnn,force", where mmnn is
> the number of I/O TLB slabs; see kernel-parameters.txt for details.
>
> Tested on Sipeed Lichee Pi 4A with 8GB DDR and Sipeed M1S BL808 Dock
> board.
>
> Signed-off-by: Jisheng Zhang <[email protected]>
Reviewed-by: Conor Dooley <[email protected]>
Thanks,
Conor.
On Tue, Jul 18, 2023 at 11:23:50AM +0100, Conor Dooley wrote:
> Hey Jisheng,
>
> On Mon, Jul 17, 2023 at 12:51:46AM +0800, Jisheng Zhang wrote:
> > Currently, riscv defines ARCH_DMA_MINALIGN as L1_CACHE_BYTES, i.e.
> > 64 bytes, if CONFIG_RISCV_DMA_NONCOHERENT=y. To support a unified kernel
> > image we usually have to enable CONFIG_RISCV_DMA_NONCOHERENT, which has
> > two bad effects on coherent platforms:
> >
> > Firstly, it wastes memory: the kmalloc-96, kmalloc-32, kmalloc-16 and
> > kmalloc-8 slab caches no longer exist; they are replaced with either
> > kmalloc-128 or kmalloc-64.
> >
> > Secondly, larger-than-necessary kmalloc() alignment results in
> > unnecessary cache/TLB pressure.
> >
> > This issue also exists on arm64 platforms. Since last year, Catalin has
> > been solving it by decoupling ARCH_KMALLOC_MINALIGN from
> > ARCH_DMA_MINALIGN, limiting the kmalloc() minimum alignment to
> > dma_get_cache_alignment(), and replacing ARCH_KMALLOC_MINALIGN usage
> > in various drivers with ARCH_DMA_MINALIGN, etc.[1]
> >
> > One fact we can make use of on riscv: if the CPU supports neither
> > ZICBOM nor T-HEAD CMO, we know the platform is coherent. Based on
> > Catalin's work and the above fact, we can easily solve the kmalloc
> > alignment issue for riscv: override dma_get_cache_alignment() so that
> > it returns ARCH_DMA_MINALIGN at first, and returns 1 once we know the
> > underlying HW supports neither ZICBOM nor T-HEAD CMO.
> >
> > So what if the CPU supports ZICBOM or T-HEAD CMO, but all the devices
> > are DMA coherent? Then we keep using ARCH_DMA_MINALIGN as the kmalloc
> > minimum alignment, so nothing changes in this case. It can be improved
> > in the future.
> >
> > After this patch, a simple test of booting to a small buildroot rootfs
> > on qemu shows:
> >
> > kmalloc-96 5041 5041 96 ...
> > kmalloc-64 9606 9606 64 ...
> > kmalloc-32 5128 5128 32 ...
> > kmalloc-16 7682 7682 16 ...
> > kmalloc-8 10246 10246 8 ...
> >
> > So we save about 1268KB of memory. The saving will be much larger in a
> > normal OS environment on real HW platforms.
> >
> > [1] Link: https://lore.kernel.org/linux-arm-kernel/[email protected]/
>
> In the future,
>
> Link: https://lore.kernel.org/linux-arm-kernel/[email protected]/ [1]
>
> > Signed-off-by: Jisheng Zhang <[email protected]>
> > Change-Id: Ica249d0f8058a02bd4bc6543b4ffc2946a4734a2
>
> How come this has ended up with a Change-ID? Checkpatch says this is
> something to do with Gerrit & needs to be removed.
Oops, when amending I forgot to add "-n", and I assumed a commit that
was only being amended didn't need another checkpatch run. Will fix it
soon.
Thank you.
>
> > ---
> > arch/riscv/include/asm/cache.h | 14 ++++++++++++++
> > arch/riscv/include/asm/cacheflush.h | 2 ++
> > arch/riscv/kernel/setup.c | 1 +
> > arch/riscv/mm/dma-noncoherent.c | 8 ++++++++
> > 4 files changed, 25 insertions(+)
> >
> > diff --git a/arch/riscv/include/asm/cache.h b/arch/riscv/include/asm/cache.h
> > index d3036df23ccb..2174fe7bac9a 100644
> > --- a/arch/riscv/include/asm/cache.h
> > +++ b/arch/riscv/include/asm/cache.h
> > @@ -13,6 +13,7 @@
> >
> > #ifdef CONFIG_RISCV_DMA_NONCOHERENT
> > #define ARCH_DMA_MINALIGN L1_CACHE_BYTES
> > +#define ARCH_KMALLOC_MINALIGN (8)
> > #endif
> >
> > /*
> > @@ -23,4 +24,17 @@
> > #define ARCH_SLAB_MINALIGN 16
> > #endif
> >
> > +#ifndef __ASSEMBLY__
> > +
> > +#ifdef CONFIG_RISCV_DMA_NONCOHERENT
> > +extern int dma_cache_alignment;
> > +#define dma_get_cache_alignment dma_get_cache_alignment
> > +static inline int dma_get_cache_alignment(void)
> > +{
> > + return dma_cache_alignment;
> > +}
> > +#endif
> > +
> > +#endif /* __ASSEMBLY__ */
> > +
> > #endif /* _ASM_RISCV_CACHE_H */
> > diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
> > index 8091b8bf4883..c640ab6f843b 100644
> > --- a/arch/riscv/include/asm/cacheflush.h
> > +++ b/arch/riscv/include/asm/cacheflush.h
> > @@ -55,8 +55,10 @@ void riscv_init_cbo_blocksizes(void);
> >
> > #ifdef CONFIG_RISCV_DMA_NONCOHERENT
> > void riscv_noncoherent_supported(void);
> > +void __init riscv_set_dma_cache_alignment(void);
> > #else
> > static inline void riscv_noncoherent_supported(void) {}
> > +static inline void riscv_set_dma_cache_alignment(void) {}
> > #endif
> >
> > /*
> > diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
> > index 971fe776e2f8..027879b1557a 100644
> > --- a/arch/riscv/kernel/setup.c
> > +++ b/arch/riscv/kernel/setup.c
> > @@ -311,6 +311,7 @@ void __init setup_arch(char **cmdline_p)
> > if (IS_ENABLED(CONFIG_RISCV_ISA_ZICBOM) &&
> > riscv_isa_extension_available(NULL, ZICBOM))
> > riscv_noncoherent_supported();
> > + riscv_set_dma_cache_alignment();
> > }
> >
> > static int __init topology_init(void)
> > diff --git a/arch/riscv/mm/dma-noncoherent.c b/arch/riscv/mm/dma-noncoherent.c
> > index d51a75864e53..811227e54bbd 100644
> > --- a/arch/riscv/mm/dma-noncoherent.c
> > +++ b/arch/riscv/mm/dma-noncoherent.c
> > @@ -11,6 +11,8 @@
> > #include <asm/cacheflush.h>
> >
> > static bool noncoherent_supported __ro_after_init;
> > +int dma_cache_alignment __ro_after_init = ARCH_DMA_MINALIGN;
> > +EXPORT_SYMBOL(dma_cache_alignment);
>
> Why is this not EXPORT_SYMBOL_GPL()?
>
> Otherwise, this generally looks good to me, thanks.
>
> Conor.
>
> >
> > void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
> > enum dma_data_direction dir)
> > @@ -78,3 +80,9 @@ void riscv_noncoherent_supported(void)
> > "Non-coherent DMA support enabled without a block size\n");
> > noncoherent_supported = true;
> > }
> > +
> > +void __init riscv_set_dma_cache_alignment(void)
> > +{
> > + if (!noncoherent_supported)
> > + dma_cache_alignment = 1;
> > +}
> > --
> > 2.40.1
> >