From: Alastair D'Silva <[email protected]>
This series addresses a few issues discovered in how we flush caches:
1. Flushes were truncated at 4GB, so larger flushes were incorrect.
2. Flushing the dcache in arch_add_memory was unnecessary
This series also converts much of the cache assembler to C, with the
aim of making it easier to maintain.
Changelog:
V4:
- Split out VDSO patch
- Pass/cast the correct types in 'powerpc: Convert
flush_icache_range & friends to C'
V3:
- factor out chunking loop
- Replace __asm__ __volatile__ with asm volatile
- Replace flush_coherent_icache_or_return macro with
flush_coherent_icache function
- factor out invalidate_icache_range
- Replace code duplicating clean_dcache_range() in
__flush_dcache_icache() with a call to clean_dcache_range()
- Remove redundant #ifdef CONFIG_44x
- Fix preprocessor logic:
#if !defined(CONFIG_PPC_8xx) && !defined(CONFIG_PPC64)
- Added loop(1|2) to earlyclobbers in flush_dcache_icache_phys
- Drop "Remove extern" patch
- Replace 32 bit shifts in 64 bit VDSO with 64 bit ones
V2:
- Replace C implementation of flush_dcache_icache_phys() with
inline assembler authored by Christophe Leroy
- Add memory clobbers for iccci implementation
- Give __flush_dcache_icache a real implementation, it can't
just be a wrapper around flush_icache_range()
- Remove PPC64_CACHES from misc_64.S
- Replace code duplicating clean_dcache_range() in
flush_icache_range() with a call to clean_dcache_range()
- Replace #ifdef CONFIG_44x with IS_ENABLED(...) in
flush_icache_range()
- Use 1GB chunks instead of 16GB in arch_*_memory
Alastair D'Silva (6):
powerpc: Allow flush_icache_range to work across ranges >4GB
powerpc: Allow 64bit VDSO __kernel_sync_dicache to work across ranges
>4GB
powerpc: define helpers to get L1 icache sizes
powerpc: Convert flush_icache_range & friends to C
powerpc: Chunk calls to flush_dcache_range in arch_*_memory
powerpc: Don't flush caches when adding memory
arch/powerpc/include/asm/cache.h | 55 +++++---
arch/powerpc/include/asm/cacheflush.h | 36 +++--
arch/powerpc/kernel/misc_32.S | 117 ----------------
arch/powerpc/kernel/misc_64.S | 102 --------------
arch/powerpc/kernel/vdso64/cacheflush.S | 4 +-
arch/powerpc/mm/mem.c | 176 +++++++++++++++++++++++-
6 files changed, 228 insertions(+), 262 deletions(-)
--
2.21.0
From: Alastair D'Silva <[email protected]>
When calling __kernel_sync_dicache with a size >4GB, we were masking
off the upper 32 bits, so we would incorrectly flush a range smaller
than intended.
This patch replaces the 32 bit shifts with 64 bit ones, so that
the full size is accounted for.
Signed-off-by: Alastair D'Silva <[email protected]>
Cc: [email protected]
---
arch/powerpc/kernel/vdso64/cacheflush.S | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kernel/vdso64/cacheflush.S b/arch/powerpc/kernel/vdso64/cacheflush.S
index 3f92561a64c4..526f5ba2593e 100644
--- a/arch/powerpc/kernel/vdso64/cacheflush.S
+++ b/arch/powerpc/kernel/vdso64/cacheflush.S
@@ -35,7 +35,7 @@ V_FUNCTION_BEGIN(__kernel_sync_dicache)
subf r8,r6,r4 /* compute length */
add r8,r8,r5 /* ensure we get enough */
lwz r9,CFG_DCACHE_LOGBLOCKSZ(r10)
- srw. r8,r8,r9 /* compute line count */
+ srd. r8,r8,r9 /* compute line count */
crclr cr0*4+so
beqlr /* nothing to do? */
mtctr r8
@@ -52,7 +52,7 @@ V_FUNCTION_BEGIN(__kernel_sync_dicache)
subf r8,r6,r4 /* compute length */
add r8,r8,r5
lwz r9,CFG_ICACHE_LOGBLOCKSZ(r10)
- srw. r8,r8,r9 /* compute line count */
+ srd. r8,r8,r9 /* compute line count */
crclr cr0*4+so
beqlr /* nothing to do? */
mtctr r8
--
2.21.0