Most of the time, flush_tlb_range() is called on single pages.
At the moment, flush_tlb_range() unconditionally calls
flush_tlb_mm(), which flushes at least all pages of the PID, and on
older CPUs like the 4xx or 8xx it flushes the entire TLB.
This patch calls flush_tlb_page() instead of flush_tlb_mm() when
the range is a single page.
Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/mm/tlb_nohash.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/tlb_nohash.c b/arch/powerpc/mm/tlb_nohash.c
index bfc4a0869609..15fe5f0c8665 100644
--- a/arch/powerpc/mm/tlb_nohash.c
+++ b/arch/powerpc/mm/tlb_nohash.c
@@ -388,7 +388,10 @@ void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
unsigned long end)
{
- flush_tlb_mm(vma->vm_mm);
+ if (end - start == PAGE_SIZE && !(start & ~PAGE_MASK))
+ flush_tlb_page(vma, start);
+ else
+ flush_tlb_mm(vma->vm_mm);
}
EXPORT_SYMBOL(flush_tlb_range);
--
2.13.3
On Tue, 2018-01-23 at 13:22:50 UTC, Christophe Leroy wrote:
> Most of the time, flush_tlb_range() is called on single pages.
> At the moment, flush_tlb_range() unconditionally calls
> flush_tlb_mm(), which flushes at least all pages of the PID, and on
> older CPUs like the 4xx or 8xx it flushes the entire TLB.
>
> This patch calls flush_tlb_page() instead of flush_tlb_mm() when
> the range is a single page.
>
> Signed-off-by: Christophe Leroy <[email protected]>
Applied to powerpc next, thanks.
https://git.kernel.org/powerpc/c/5c8136fa1af7c0e9b4aec89cf2832f
cheers