The previous patch removed the tlb_end_vma() implementation which
used flush_tlb_range(). This means:

 - csky did double invalidates: a range invalidate per vma and a full
   invalidate at the end;

 - csky actually has range invalidates, so the generic tlb_flush()
   implementation is more efficient for it.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
---
arch/csky/include/asm/tlb.h | 2 --
1 file changed, 2 deletions(-)
--- a/arch/csky/include/asm/tlb.h
+++ b/arch/csky/include/asm/tlb.h
@@ -4,8 +4,6 @@
#define __ASM_CSKY_TLB_H
#include <asm/cacheflush.h>
-#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
-
#include <asm-generic/tlb.h>
#endif /* __ASM_CSKY_TLB_H */
On Fri, Jul 08, 2022 at 09:18:04AM +0200, Peter Zijlstra wrote:
[...]
Looks right to me, and the generic code handles the fullmm case when
!CONFIG_MMU_GATHER_NO_RANGE, so:
Acked-by: Will Deacon <[email protected]>
Will