A new mechanism, LUF (Lazy Unmap Flush), defers TLB flushes for folios
that have been unmapped and freed until those folios eventually get
allocated again.  This is safe for folios that had been mapped
read-only and were then unmapped, since the contents of such folios do
not change while they sit in the pcp or buddy allocator, so the data
can still be read correctly through the stale TLB entries.

This is a preparation for that mechanism, which needs to avoid
redundant TLB flushes by manipulating the TLB batch's arch data.  To
achieve that, separate the clearing of the TLB batch's arch data out of
arch_tlbbatch_flush() and do it explicitly, via arch_tlbbatch_clear(),
in try_to_unmap_flush().
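
For reference, a minimal sketch of what the separated helper could look
like on x86 follows.  The definition itself is not part of this diff;
the name arch_tlbbatch_clear() matches the call added in
try_to_unmap_flush(), the header location is an assumption, and the
body is assumed to be exactly the cpumask clearing removed from
arch_tlbbatch_flush() below:

	/*
	 * Hypothetical sketch, e.g. in arch/x86/include/asm/tlbflush.h:
	 * clear the batch's arch data (its CPU mask) once the deferred
	 * flush has been carried out or is no longer needed.
	 */
	static inline void arch_tlbbatch_clear(struct arch_tlbflush_unmap_batch *batch)
	{
		cpumask_clear(&batch->cpumask);
	}
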
Signed-off-by: Byungchul Park <[email protected]>
---
 arch/riscv/mm/tlbflush.c | 1 -
 arch/x86/mm/tlb.c        | 2 --
 mm/rmap.c                | 1 +
 3 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 07d743f87b3f..9cbd27148357 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -234,5 +234,4 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 {
 	__flush_tlb_range(&batch->cpumask, FLUSH_TLB_NO_ASID, 0,
 			  FLUSH_TLB_MAX_SIZE, PAGE_SIZE);
-	cpumask_clear(&batch->cpumask);
 }
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 44ac64f3a047..24bce69222cd 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1265,8 +1265,6 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 		local_irq_enable();
 	}
 
-	cpumask_clear(&batch->cpumask);
-
 	put_flush_tlb_info();
 	put_cpu();
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index 2608c40dffad..cf8a99a49aef 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -649,6 +649,7 @@ void try_to_unmap_flush(void)
 		return;
 
 	arch_tlbbatch_flush(&tlb_ubc->arch);
+	arch_tlbbatch_clear(&tlb_ubc->arch);
 	tlb_ubc->flush_required = false;
 	tlb_ubc->writable = false;
 }
--
2.17.1