2024-05-31 09:23:06

by Byungchul Park

Subject: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

A new mechanism, LUF (Lazy Unmap Flush), defers the TLB flush for folios
that have been unmapped and freed until they eventually get allocated
again. This is safe for folios that had been mapped read-only and were
then unmapped: as long as the contents of the folios don't change while
they stay in the pcp or buddy, reading the data through the stale TLB
entries is still correct.

A TLB flush can be deferred when folios get unmapped, as long as the
needed flush is guaranteed to be performed before the folios actually
become used again, and only if none of the corresponding PTEs have
write permission. Otherwise, the system would end up with corrupted
data.

To achieve that, for folios mapped only by non-writable TLB entries,
skip the TLB flush during unmapping and perform it just before the
folios actually become used again, i.e. when they leave the buddy or
pcp.

However, the flush left pending by LUF must be cancelled and the
deferred TLB flush performed right away (see the sketch after the list)
when:

1. a writable PTE is newly set through the fault handler,
2. a file is updated,
3. KASAN needs to poison pages on free, or
4. the kernel wants to init pages on free.
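
To make the rule concrete, here is a stand-alone toy model of the
decision above (user-space C; the structure and the names are made up
for illustration only and are not the kernel API added by this patch):

#include <stdbool.h>
#include <stdio.h>

/* One unmapped folio, as seen by the cancellation rule listed above. */
struct folio_state {
	bool all_mappings_readonly;  /* no writable PTE seen while unmapping    */
	bool writable_pte_set;       /* case 1: fault handler sets writable PTE */
	bool file_updated;           /* case 2: the backing file is written     */
	bool kasan_poison_on_free;   /* case 3: KASAN poisons pages on free     */
	bool init_on_free;           /* case 4: pages are initialized on free   */
};

/* true: flush right away; false: the flush may stay deferred by LUF. */
static bool must_flush_now(const struct folio_state *f)
{
	if (!f->all_mappings_readonly)
		return true;         /* LUF does not apply at all */

	return f->writable_pte_set || f->file_updated ||
	       f->kasan_poison_on_free || f->init_on_free;
}

int main(void)
{
	struct folio_state f = { .all_mappings_readonly = true };

	printf("defer: %d\n", !must_flush_now(&f));  /* 1: stays deferred */

	f.file_updated = true;                       /* e.g. a write() hits the file */
	printf("defer: %d\n", !must_flush_now(&f));  /* 0: flush must happen now */

	return 0;
}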

No matter what type of workload is used for the performance evaluation,
the result would be positive thanks to the unconditional reduction of
TLB flushes, TLB misses and shootdown interrupts. For the test, I
picked one of the most popular and heavy workloads, llama.cpp, an
LLM (Large Language Model) inference engine.

The size of the gain depends on memory latency and on how often reclaim
runs, which determine the TLB miss overhead and how many times
unmapping happens. On my system, the results show:

1. TLB shootdown interrupts are reduced by about 97%.
2. The test program runtime is reduced by about 4.5%.

The test environment and the results are as follows:

Machine: bare metal, x86_64, Intel(R) Xeon(R) Gold 6430
CPU: 1 socket, 64 cores, hyper-threading on
NUMA: 2 nodes (64 CPUs + 42 GB DRAM; no CPUs + 98 GB CXL expander)
Config: swap off, numa balancing tiering on, demotion enabled

The test set:

llama.cpp/main -m $(70G_model1) -p "who are you?" -s 1 -t 15 -n 20 &
llama.cpp/main -m $(70G_model2) -p "who are you?" -s 1 -t 15 -n 20 &
llama.cpp/main -m $(70G_model3) -p "who are you?" -s 1 -t 15 -n 20 &
wait

where -t: nr of threads, -s: seed used to make the runtime stable,
-n: nr of tokens that determines the runtime, -p: prompt to ask,
-m: LLM model to use.

The test set was run 5 times in succession, dropping caches before every
run via 'echo 3 > /proc/sys/vm/drop_caches'. Each inference prints its
total runtime when it finishes.

1. Runtime from the output of llama.cpp:

BEFORE
------
llama_print_timings: total time = 883450.54 ms / 24 tokens
llama_print_timings: total time = 861665.91 ms / 24 tokens
llama_print_timings: total time = 898079.02 ms / 24 tokens
llama_print_timings: total time = 879897.69 ms / 24 tokens
llama_print_timings: total time = 892360.75 ms / 24 tokens
llama_print_timings: total time = 884587.85 ms / 24 tokens
llama_print_timings: total time = 861023.19 ms / 24 tokens
llama_print_timings: total time = 900022.18 ms / 24 tokens
llama_print_timings: total time = 878771.88 ms / 24 tokens
llama_print_timings: total time = 889027.98 ms / 24 tokens
llama_print_timings: total time = 880783.90 ms / 24 tokens
llama_print_timings: total time = 856475.29 ms / 24 tokens
llama_print_timings: total time = 896842.21 ms / 24 tokens
llama_print_timings: total time = 878883.53 ms / 24 tokens
llama_print_timings: total time = 890122.10 ms / 24 tokens

AFTER
-----
llama_print_timings: total time = 871060.86 ms / 24 tokens
llama_print_timings: total time = 825609.53 ms / 24 tokens
llama_print_timings: total time = 836854.81 ms / 24 tokens
llama_print_timings: total time = 843147.99 ms / 24 tokens
llama_print_timings: total time = 831426.65 ms / 24 tokens
llama_print_timings: total time = 873939.23 ms / 24 tokens
llama_print_timings: total time = 826127.69 ms / 24 tokens
llama_print_timings: total time = 835489.26 ms / 24 tokens
llama_print_timings: total time = 842589.62 ms / 24 tokens
llama_print_timings: total time = 833700.66 ms / 24 tokens
llama_print_timings: total time = 875996.19 ms / 24 tokens
llama_print_timings: total time = 826401.73 ms / 24 tokens
llama_print_timings: total time = 839341.28 ms / 24 tokens
llama_print_timings: total time = 841075.10 ms / 24 tokens
llama_print_timings: total time = 835136.41 ms / 24 tokens

2. TLB shootdowns from 'cat /proc/interrupts':

BEFORE
------
TLB:
80911532 93691786 100296251 111062810 109769109 109862429
108968588 119175230 115779676 118377498 119325266 120300143
124514185 116697222 121068466 118031913 122660681 117494403
121819907 116960596 120936335 117217061 118630217 122322724
119595577 111693298 119232201 120030377 115334687 113179982
118808254 116353592 140987367 137095516 131724276 139742240
136501150 130428761 127585535 132483981 133430250 133756207
131786710 126365824 129812539 133850040 131742690 125142213
128572830 132234350 131945922 128417707 133355434 129972846
126331823 134050849 133991626 121129038 124637283 132830916
126875507 122322440 125776487 124340278 TLB shootdowns

AFTER
-----
TLB:
2121206 2615108 2983494 2911950 3055086 3092672
3204894 3346082 3286744 3307310 3357296 3315940
3428034 3112596 3143325 3185551 3186493 3322314
3330523 3339663 3156064 3272070 3296309 3198962
3332662 3315870 3234467 3353240 3281234 3300666
3345452 3173097 4009196 3932215 3898735 3726531
3717982 3671726 3728788 3724613 3799147 3691764
3620630 3684655 3666688 3393974 3448651 3487593
3446357 3618418 3671920 3712949 3575264 3715385
3641513 3630897 3691047 3630690 3504933 3662647
3629926 3443044 3832970 3548813 TLB shootdowns

Signed-off-by: Byungchul Park <[email protected]>
---
include/linux/fs.h | 6 +
include/linux/mm_types.h | 8 +
include/linux/sched.h | 9 ++
mm/compaction.c | 2 +-
mm/internal.h | 42 +++++-
mm/memory.c | 39 ++++-
mm/page_alloc.c | 17 ++-
mm/rmap.c | 315 ++++++++++++++++++++++++++++++++++++++-
8 files changed, 420 insertions(+), 18 deletions(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index 0283cf366c2a..03683bf66031 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -2872,6 +2872,12 @@ static inline void file_end_write(struct file *file)
if (!S_ISREG(file_inode(file)->i_mode))
return;
sb_end_write(file_inode(file)->i_sb);
+
+ /*
+ * XXX: If needed, can be optimized by avoiding luf_flush() if
+ * the address space of the file has never been involved by luf.
+ */
+ luf_flush();
}

/**
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 37eb3000267c..cd52c996e8aa 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1223,6 +1223,14 @@ static inline unsigned int mm_cid_size(void)
}
#endif /* CONFIG_SCHED_MM_CID */

+#if defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH)
+void check_luf_flush(unsigned short int ugen);
+void luf_flush(void);
+#else
+static inline void check_luf_flush(unsigned short int ugen) {}
+static inline void luf_flush(void) {}
+#endif
+
struct mmu_gather;
extern void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm);
extern void tlb_gather_mmu_fullmm(struct mmu_gather *tlb, struct mm_struct *mm);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index d9722c014157..613ed175e5f2 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1341,8 +1341,17 @@ struct task_struct {

struct tlbflush_unmap_batch tlb_ubc;
struct tlbflush_unmap_batch tlb_ubc_ro;
+ struct tlbflush_unmap_batch tlb_ubc_luf;
unsigned short int ugen;

+#if defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH)
+ /*
+ * whether all the mappings of a folio during unmap are read-only
+ * so that luf can work on the folio
+ */
+ bool can_luf;
+#endif
+
/* Cache last used pipe for splice(): */
struct pipe_inode_info *splice_pipe;

diff --git a/mm/compaction.c b/mm/compaction.c
index 13799fbb2a9a..4a75c56af0b0 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1925,7 +1925,7 @@ static void compaction_free(struct folio *dst, unsigned long data)
struct page *page = &dst->page;

if (folio_put_testzero(dst)) {
- free_pages_prepare(page, order);
+ free_pages_prepare(page, order, NULL);
list_add(&dst->lru, &cc->freepages[order]);
cc->nr_freepages += 1 << order;
}
diff --git a/mm/internal.h b/mm/internal.h
index ca6fb5b2a640..b3d7a5e5f7e3 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -657,7 +657,8 @@ extern void prep_compound_page(struct page *page, unsigned int order);

extern void post_alloc_hook(struct page *page, unsigned int order,
gfp_t gfp_flags);
-extern bool free_pages_prepare(struct page *page, unsigned int order);
+extern bool free_pages_prepare(struct page *page, unsigned int order,
+ unsigned short int *ugen);

extern int user_min_free_kbytes;

@@ -1541,6 +1542,36 @@ void workingset_update_node(struct xa_node *node);
extern struct list_lru shadow_nodes;

#if defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH)
+unsigned short int try_to_unmap_luf(void);
+
+/*
+ * Reset the indicator of whether a writable mapping has been found, at
+ * the beginning of every rmap traversal for unmap. luf can work only
+ * when all the mappings are read-only.
+ */
+static inline void can_luf_init(void)
+{
+ current->can_luf = true;
+}
+
+/*
+ * Mark the folio as not applicable to luf once a writable or dirty
+ * pte has been found during the rmap traversal for unmap.
+ */
+static inline void can_luf_fail(void)
+{
+ current->can_luf = false;
+}
+
+/*
+ * Check that all the mappings are read-only and that at least one
+ * read-only mapping exists.
+ */
+static inline bool can_luf_test(void)
+{
+ return current->can_luf && current->tlb_ubc_ro.flush_required;
+}
+
static inline unsigned short int ugen_latest(unsigned short int a, unsigned short int b)
{
if (!a || !b)
@@ -1570,10 +1601,7 @@ static inline unsigned short int hand_over_task_ugen(void)

static inline void check_flush_task_ugen(void)
{
- /*
- * XXX: luf mechanism will handle this. For now, do nothing but
- * reset current's ugen to finalize this turn.
- */
+ check_luf_flush(current->ugen);
current->ugen = 0;
}

@@ -1602,6 +1630,10 @@ static inline bool can_luf_folio(struct folio *f)
return can_luf;
}
#else /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
+static inline unsigned short int try_to_unmap_luf(void) { return 0; }
+static inline void can_luf_init(void) {}
+static inline void can_luf_fail(void) {}
+static inline bool can_luf_test(void) { return false; }
static inline unsigned short int ugen_latest(unsigned short int a, unsigned short int b) { return 0; }
static inline void update_task_ugen(unsigned short int ugen) {}
static inline unsigned short int hand_over_task_ugen(void) { return 0; }
diff --git a/mm/memory.c b/mm/memory.c
index 100f54fc9e6c..12c9e87e489d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3011,6 +3011,15 @@ static inline int pte_unmap_same(struct vm_fault *vmf)
return same;
}

+static bool need_luf_flush(struct vm_fault *vmf)
+{
+ if ((vmf->flags & FAULT_FLAG_ORIG_PTE_VALID) &&
+ pte_write(vmf->orig_pte))
+ return false;
+
+ return pte_write(ptep_get(vmf->pte));
+}
+
/*
* Return:
* 0: copied succeeded
@@ -3026,6 +3035,7 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
struct vm_area_struct *vma = vmf->vma;
struct mm_struct *mm = vma->vm_mm;
unsigned long addr = vmf->address;
+ bool luf = false;

if (likely(src)) {
if (copy_mc_user_highpage(dst, src, addr, vma)) {
@@ -3059,8 +3069,10 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
* Other thread has already handled the fault
* and update local tlb only
*/
- if (vmf->pte)
+ if (vmf->pte) {
update_mmu_tlb(vma, addr, vmf->pte);
+ luf = need_luf_flush(vmf);
+ }
ret = -EAGAIN;
goto pte_unlock;
}
@@ -3084,8 +3096,10 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
/* The PTE changed under us, update local tlb */
- if (vmf->pte)
+ if (vmf->pte) {
update_mmu_tlb(vma, addr, vmf->pte);
+ luf = need_luf_flush(vmf);
+ }
ret = -EAGAIN;
goto pte_unlock;
}
@@ -3112,6 +3126,8 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
pte_unmap_unlock(vmf->pte, vmf->ptl);
pagefault_enable();
kunmap_local(kaddr);
+ if (luf)
+ luf_flush();
flush_dcache_page(dst);

return ret;
@@ -3446,6 +3462,8 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
} else if (vmf->pte) {
update_mmu_tlb(vma, vmf->address, vmf->pte);
pte_unmap_unlock(vmf->pte, vmf->ptl);
+ if (need_luf_flush(vmf))
+ luf_flush();
}

mmu_notifier_invalidate_range_end(&range);
@@ -3501,6 +3519,8 @@ static vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf, struct folio *folio
if (!pte_same(ptep_get(vmf->pte), vmf->orig_pte)) {
update_mmu_tlb(vmf->vma, vmf->address, vmf->pte);
pte_unmap_unlock(vmf->pte, vmf->ptl);
+ if (need_luf_flush(vmf))
+ luf_flush();
return VM_FAULT_NOPAGE;
}
wp_page_reuse(vmf, folio);
@@ -4469,6 +4489,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
vm_fault_t ret = 0;
int nr_pages = 1;
pte_t entry;
+ bool luf = false;

/* File mapping without ->vm_ops ? */
if (vma->vm_flags & VM_SHARED)
@@ -4492,6 +4513,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
goto unlock;
if (vmf_pte_changed(vmf)) {
update_mmu_tlb(vma, vmf->address, vmf->pte);
+ luf = need_luf_flush(vmf);
goto unlock;
}
ret = check_stable_address_space(vma->vm_mm);
@@ -4536,9 +4558,11 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
goto release;
if (nr_pages == 1 && vmf_pte_changed(vmf)) {
update_mmu_tlb(vma, addr, vmf->pte);
+ luf = need_luf_flush(vmf);
goto release;
} else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
+ luf = need_luf_flush(vmf);
goto release;
}

@@ -4570,6 +4594,8 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
unlock:
if (vmf->pte)
pte_unmap_unlock(vmf->pte, vmf->ptl);
+ if (luf)
+ luf_flush();
return ret;
release:
folio_put(folio);
@@ -4796,6 +4822,7 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
vm_fault_t ret;
bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) &&
!(vma->vm_flags & VM_SHARED);
+ bool luf = false;

/* Did we COW the page? */
if (is_cow)
@@ -4841,10 +4868,14 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
ret = 0;
} else {
update_mmu_tlb(vma, vmf->address, vmf->pte);
+ luf = need_luf_flush(vmf);
ret = VM_FAULT_NOPAGE;
}

pte_unmap_unlock(vmf->pte, vmf->ptl);
+
+ if (luf)
+ luf_flush();
return ret;
}

@@ -5397,6 +5428,7 @@ static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud)
static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
{
pte_t entry;
+ bool luf = false;

if (unlikely(pmd_none(*vmf->pmd))) {
/*
@@ -5440,6 +5472,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
entry = vmf->orig_pte;
if (unlikely(!pte_same(ptep_get(vmf->pte), entry))) {
update_mmu_tlb(vmf->vma, vmf->address, vmf->pte);
+ luf = need_luf_flush(vmf);
goto unlock;
}
if (vmf->flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)) {
@@ -5469,6 +5502,8 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
}
unlock:
pte_unmap_unlock(vmf->pte, vmf->ptl);
+ if (luf)
+ luf_flush();
return 0;
}

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c9acb4da91e0..4007c9757c3f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1048,7 +1048,7 @@ void kernel_init_pages(struct page *page, int numpages)
}

__always_inline bool free_pages_prepare(struct page *page,
- unsigned int order)
+ unsigned int order, unsigned short int *ugen)
{
int bad = 0;
bool skip_kasan_poison = should_skip_kasan_poison(page);
@@ -1062,6 +1062,15 @@ __always_inline bool free_pages_prepare(struct page *page,
*/
set_page_private(page, 0);

+ /*
+ * The contents of the pages are about to be updated (poisoned or
+ * initialized on free), so luf must be given up and the flush done now.
+ */
+ if ((!skip_kasan_poison || init) && ugen && *ugen) {
+ check_luf_flush(*ugen);
+ *ugen = 0;
+ }
+
trace_mm_page_free(page, order);
kmsan_free_page(page, order);

@@ -1236,7 +1245,7 @@ static void __free_pages_ok(struct page *page, unsigned int order,
unsigned long pfn = page_to_pfn(page);
struct zone *zone = page_zone(page);

- if (!free_pages_prepare(page, order))
+ if (!free_pages_prepare(page, order, NULL))
return;

free_one_page(zone, page, pfn, order, fpi_flags, 0);
@@ -2664,7 +2673,7 @@ void free_unref_page(struct page *page, unsigned int order,
return;
}

- if (!free_pages_prepare(page, order))
+ if (!free_pages_prepare(page, order, &ugen))
return;

/*
@@ -2712,7 +2721,7 @@ void free_unref_folios(struct folio_batch *folios, unsigned short int ugen)
unsigned int order = folio_order(folio);

folio_undo_large_rmappable(folio);
- if (!free_pages_prepare(&folio->page, order))
+ if (!free_pages_prepare(&folio->page, order, &ugen))
continue;
/*
* Free orders not handled on the PCP directly to the
diff --git a/mm/rmap.c b/mm/rmap.c
index 1a246788e867..459d4d1631f0 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -634,6 +634,274 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
}

#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+static struct tlbflush_unmap_batch luf_ubc;
+static DEFINE_SPINLOCK(luf_lock);
+
+/*
+ * Never return zero, to distinguish a valid ugen from the invalid value 0.
+ */
+static unsigned short int ugen_next(unsigned short int a)
+{
+ return a + 1 ?: a + 2;
+}
+
+static bool ugen_before(unsigned short int a, unsigned short int b)
+{
+ return (short int)(a - b) < 0;
+}
+
+/*
+ * Need to synchronize between tlb flush and managing pending CPUs in
+ * luf_ubc. Take a look at the following scenario, where CPU0 is in
+ * try_to_unmap_flush() and CPU1 is in migrate_pages_batch():
+ *
+ * CPU0 CPU1
+ * ---- ----
+ * tlb flush
+ * unmap folios (needing tlb flush)
+ * add pending CPUs to luf_ubc
+ * <-- not performed tlb flush needed by
+ * the unmap above yet but the request
+ * will be cleared by CPU0 shortly. bug!
+ * clear the CPUs from luf_ubc
+ *
+ * The pending CPUs added by CPU1 should not be cleared from luf_ubc
+ * by CPU0, because the tlb flush for what CPU1 added has not been
+ * performed this turn. To avoid this, use the 'on_flushing' variable
+ * to prevent adding pending CPUs to luf_ubc and to give up the luf
+ * mechanism while someone is in the middle of a tlb flush, like:
+ *
+ * CPU0 CPU1
+ * ---- ----
+ * on_flushing++
+ * tlb flush
+ * unmap folios (needing tlb flush)
+ * if on_flushing == 0:
+ * add pending CPUs to luf_ubc
+ * else: <-- hit
+ * give up luf mechanism
+ * clear the CPUs from luf_ubc
+ * on_flushing--
+ *
+ * Only the following case would be allowed for luf mechanism to work:
+ *
+ * CPU0 CPU1
+ * ---- ----
+ * unmap folios (needing tlb flush)
+ * if on_flushing == 0: <-- hit
+ * add pending CPUs to luf_ubc
+ * else:
+ * give up luf mechanism
+ * on_flushing++
+ * tlb flush
+ * clear the CPUs from luf_ubc
+ * on_flushing--
+ */
+static int on_flushing;
+
+/*
+ * When more than one thread enters check_luf_flush() at the same
+ * time, each should wait for the request in progress to complete, to
+ * avoid the following scenario where both CPUs are in
+ * check_luf_flush():
+ *
+ * CPU0 CPU1
+ * ---- ----
+ * if !luf_ubc.flush_required:
+ * return
+ * luf_ubc.flush_required = false
+ * if !luf_ubc.flush_required: <-- hit
+ * return <-- not performed tlb flush
+ * needed yet but return. bug!
+ * luf_ubc.flush_required = false
+ * try_to_unmap_flush()
+ * finalize
+ * try_to_unmap_flush() <-- performs tlb flush needed
+ * finalize
+ *
+ * So it should be handled:
+ *
+ * CPU0 CPU1
+ * ---- ----
+ * atomically execute {
+ * if luf_on_flushing:
+ * wait for the completion
+ * return
+ * if !luf_ubc.flush_required:
+ * return
+ * luf_ubc.flush_required = false
+ * luf_on_flushing = true
+ * }
+ * atomically execute {
+ * if luf_on_flushing: <-- hit
+ * wait for the completion
+ * return <-- tlb flush needed is done
+ * if !luf_ubc.flush_required:
+ * return
+ * luf_ubc.flush_required = false
+ * luf_on_flushing = true
+ * }
+ *
+ * try_to_unmap_flush()
+ * luf_on_flushing = false
+ * finalize
+ * try_to_unmap_flush() <-- performs tlb flush needed
+ * luf_on_flushing = false
+ * finalize
+ */
+static bool luf_on_flushing;
+
+/*
+ * Generation number for the current request of deferred tlb flush.
+ */
+static unsigned short int luf_gen;
+
+/*
+ * Generation number for the next request.
+ */
+static unsigned short int luf_gen_next = 1;
+
+/*
+ * Generation number for the latest request handled.
+ */
+static unsigned short int luf_gen_done;
+
+unsigned short int try_to_unmap_luf(void)
+{
+ struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+ struct tlbflush_unmap_batch *tlb_ubc_luf = &current->tlb_ubc_luf;
+ unsigned long flags;
+ unsigned short int ugen;
+
+ if (!spin_trylock_irqsave(&luf_lock, flags)) {
+ /*
+ * Give up the luf mechanism. Just let the needed tlb flush
+ * be handled by try_to_unmap_flush() at the caller side.
+ */
+ fold_ubc(tlb_ubc, tlb_ubc_luf);
+ return 0;
+ }
+
+ if (on_flushing || luf_on_flushing) {
+ spin_unlock_irqrestore(&luf_lock, flags);
+
+ /*
+ * Give up the luf mechanism. Just let the needed tlb flush
+ * be handled by try_to_unmap_flush() at the caller side.
+ */
+ fold_ubc(tlb_ubc, tlb_ubc_luf);
+ return 0;
+ }
+
+ fold_ubc(&luf_ubc, tlb_ubc_luf);
+ ugen = luf_gen = luf_gen_next;
+ spin_unlock_irqrestore(&luf_lock, flags);
+
+ return ugen;
+}
+
+static bool rmap_flush_start(void)
+{
+ unsigned long flags;
+
+ if (!spin_trylock_irqsave(&luf_lock, flags))
+ return false;
+
+ on_flushing++;
+ spin_unlock_irqrestore(&luf_lock, flags);
+ return true;
+}
+
+static void rmap_flush_end(struct tlbflush_unmap_batch *batch)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&luf_lock, flags);
+ if (arch_tlbbatch_done(&luf_ubc.arch, &batch->arch)) {
+ luf_ubc.flush_required = false;
+ luf_ubc.writable = false;
+ }
+ on_flushing--;
+ spin_unlock_irqrestore(&luf_lock, flags);
+}
+
+/*
+ * It must be guaranteed that the requested tlb flush has completed on return.
+ */
+void check_luf_flush(unsigned short int ugen)
+{
+ struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+ unsigned long flags;
+
+ /*
+ * Nothing has been requested. We are done.
+ */
+ if (!ugen)
+ return;
+retry:
+ /*
+ * luf_gen_done may already be greater than or equal to ugen,
+ * which means the tlb flush we need has already been done.
+ */
+ if (!ugen_before(READ_ONCE(luf_gen_done), ugen))
+ return;
+
+ spin_lock_irqsave(&luf_lock, flags);
+
+ /*
+ * With luf_lock held, we might observe an updated luf_gen_done.
+ */
+ if (ugen_next(luf_gen_done) != ugen) {
+ spin_unlock_irqrestore(&luf_lock, flags);
+ return;
+ }
+
+ /*
+ * Others are already working for us.
+ */
+ if (luf_on_flushing) {
+ spin_unlock_irqrestore(&luf_lock, flags);
+ goto retry;
+ }
+
+ if (!luf_ubc.flush_required) {
+ spin_unlock_irqrestore(&luf_lock, flags);
+ return;
+ }
+
+ fold_ubc(tlb_ubc, &luf_ubc);
+ luf_gen_next = ugen_next(luf_gen);
+ luf_on_flushing = true;
+ spin_unlock_irqrestore(&luf_lock, flags);
+
+ try_to_unmap_flush();
+
+ spin_lock_irqsave(&luf_lock, flags);
+ luf_on_flushing = false;
+
+ /*
+ * luf_gen_done can be read by others without luf_lock
+ * held, so use WRITE_ONCE() to prevent tearing.
+ */
+ WRITE_ONCE(luf_gen_done, ugen);
+ spin_unlock_irqrestore(&luf_lock, flags);
+}
+
+void luf_flush(void)
+{
+ unsigned long flags;
+ unsigned short int ugen;
+
+ /*
+ * Obtain the latest ugen number.
+ */
+ spin_lock_irqsave(&luf_lock, flags);
+ ugen = luf_gen;
+ spin_unlock_irqrestore(&luf_lock, flags);
+
+ check_luf_flush(ugen);
+}
+EXPORT_SYMBOL(luf_flush);

void fold_ubc(struct tlbflush_unmap_batch *dst,
struct tlbflush_unmap_batch *src)
@@ -665,13 +933,18 @@ void fold_ubc(struct tlbflush_unmap_batch *dst,
void try_to_unmap_flush(void)
{
struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
- struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
+ struct tlbflush_unmap_batch *tlb_ubc_luf = &current->tlb_ubc_luf;
+ bool started;

- fold_ubc(tlb_ubc, tlb_ubc_ro);
+ fold_ubc(tlb_ubc, tlb_ubc_luf);
if (!tlb_ubc->flush_required)
return;

+ started = rmap_flush_start();
arch_tlbbatch_flush(&tlb_ubc->arch);
+ if (started)
+ rmap_flush_end(tlb_ubc);
+
arch_tlbbatch_clear(&tlb_ubc->arch);
tlb_ubc->flush_required = false;
tlb_ubc->writable = false;
@@ -681,9 +954,9 @@ void try_to_unmap_flush(void)
void try_to_unmap_flush_dirty(void)
{
struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
- struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
+ struct tlbflush_unmap_batch *tlb_ubc_luf = &current->tlb_ubc_luf;

- if (tlb_ubc->writable || tlb_ubc_ro->writable)
+ if (tlb_ubc->writable || tlb_ubc_luf->writable)
try_to_unmap_flush();
}

@@ -707,9 +980,15 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
if (!pte_accessible(mm, pteval))
return;

- if (pte_write(pteval))
+ if (pte_write(pteval)) {
tlb_ubc = &current->tlb_ubc;
- else
+
+ /*
+ * luf cannot work on the folio once a writable or
+ * dirty mapping has been found on it.
+ */
+ can_luf_fail();
+ } else
tlb_ubc = &current->tlb_ubc_ro;

arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr);
@@ -2004,11 +2283,23 @@ void try_to_unmap(struct folio *folio, enum ttu_flags flags)
.done = folio_not_mapped,
.anon_lock = folio_lock_anon_vma_read,
};
+ struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+ struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
+ struct tlbflush_unmap_batch *tlb_ubc_luf = &current->tlb_ubc_luf;
+ bool can_luf;
+
+ can_luf_init();

if (flags & TTU_RMAP_LOCKED)
rmap_walk_locked(folio, &rwc);
else
rmap_walk(folio, &rwc);
+
+ can_luf = can_luf_folio(folio) && can_luf_test();
+ if (can_luf)
+ fold_ubc(tlb_ubc_luf, tlb_ubc_ro);
+ else
+ fold_ubc(tlb_ubc, tlb_ubc_ro);
}

/*
@@ -2353,6 +2644,10 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
.done = folio_not_mapped,
.anon_lock = folio_lock_anon_vma_read,
};
+ struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+ struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
+ struct tlbflush_unmap_batch *tlb_ubc_luf = &current->tlb_ubc_luf;
+ bool can_luf;

/*
* Migration always ignores mlock and only supports TTU_RMAP_LOCKED and
@@ -2377,10 +2672,18 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
if (!folio_test_ksm(folio) && folio_test_anon(folio))
rwc.invalid_vma = invalid_migration_vma;

+ can_luf_init();
+
if (flags & TTU_RMAP_LOCKED)
rmap_walk_locked(folio, &rwc);
else
rmap_walk(folio, &rwc);
+
+ can_luf = can_luf_folio(folio) && can_luf_test();
+ if (can_luf)
+ fold_ubc(tlb_ubc_luf, tlb_ubc_ro);
+ else
+ fold_ubc(tlb_ubc, tlb_ubc_ro);
}

#ifdef CONFIG_DEVICE_PRIVATE
--
2.17.1



2024-05-31 16:14:12

by Dave Hansen

Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

On 5/31/24 02:19, Byungchul Park wrote:
..
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 0283cf366c2a..03683bf66031 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -2872,6 +2872,12 @@ static inline void file_end_write(struct file *file)
> if (!S_ISREG(file_inode(file)->i_mode))
> return;
> sb_end_write(file_inode(file)->i_sb);
> +
> + /*
> + * XXX: If needed, can be optimized by avoiding luf_flush() if
> + * the address space of the file has never been involved by luf.
> + */
> + luf_flush();
> }
..
> +void luf_flush(void)
> +{
> + unsigned long flags;
> + unsigned short int ugen;
> +
> + /*
> + * Obtain the latest ugen number.
> + */
> + spin_lock_irqsave(&luf_lock, flags);
> + ugen = luf_gen;
> + spin_unlock_irqrestore(&luf_lock, flags);
> +
> + check_luf_flush(ugen);
> +}

Am I reading this right? There's now an unconditional global spinlock
acquired in the sys_write() path? How can this possibly scale?

So, yeah, I think an optimization is absolutely needed. But, on a more
fundamental level, I just don't believe these patches are being tested.
Even a simple microbenchmark should show a pretty nasty regression on
any decently large system:

> https://github.com/antonblanchard/will-it-scale/blob/master/tests/write1.c

Second, I was just pointing out sys_write() as an example of how the
page cache could change. Couldn't a separate, read/write mmap() of the
file do the same thing and *not* go through sb_end_write()?

So:

fd = open("foo");
ptr1 = mmap(fd, PROT_READ);
ptr2 = mmap(fd, PROT_READ|PROT_WRITE);

foo = *ptr1; // populate the page cache
... page cache page is reclaimed and LUF'd
*ptr2 = bar; // new page cache page is allocated and written to

printk("*ptr1: %d\n", *ptr1);

Doesn't the printk() see stale data?
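
Spelled out as a stand-alone user-space program (purely illustrative: it
assumes a file "foo" that already exists and is at least one page long,
omits error handling, and the reclaim step in the middle cannot be
forced from user space, so it is only a comment), the scenario would
look something like:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* "foo" is hypothetical; it must exist and span at least one page. */
	int fd = open("foo", O_RDWR);
	char *ptr1 = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
	char *ptr2 = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	char old = ptr1[0];	/* populate the page cache */

	/* ... the page cache folio is reclaimed and LUF'd here ... */

	ptr2[0] = old + 1;	/* new page cache folio is allocated and written */

	/* Does this read go through a stale TLB entry to the old folio? */
	printf("*ptr1: %d\n", ptr1[0]);

	close(fd);
	return 0;
}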

I think tglx would call all of this "tinkering". The approach to this
series is to "fix" narrow, specific cases that reviewers point out, make
it compile, then send it out again, hoping someone will apply it.

So, for me, until the approach to this series changes: NAK, for x86.
Andrew, please don't take this series. Or, if you do, please drop the
patch enabling it on x86.

I also have the feeling our VFS friends won't take kindly to having
random luf_foo() hooks in their hot paths, optimized or not. I don't
see any of them on cc.

2024-05-31 18:05:11

by Byungchul Park

Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

Dave Hansen <[email protected]> wrote:
>
> On 5/31/24 02:19, Byungchul Park wrote:
> ..
> > diff --git a/include/linux/fs.h b/include/linux/fs.h
> > index 0283cf366c2a..03683bf66031 100644
> > --- a/include/linux/fs.h
> > +++ b/include/linux/fs.h
> > @@ -2872,6 +2872,12 @@ static inline void file_end_write(struct file *file)
> > if (!S_ISREG(file_inode(file)->i_mode))
> > return;
> > sb_end_write(file_inode(file)->i_sb);
> > +
> > + /*
> > + * XXX: If needed, can be optimized by avoiding luf_flush() if
> > + * the address space of the file has never been involved by luf.
> > + */
> > + luf_flush();
> > }
> ..
> > +void luf_flush(void)
> > +{
> > + unsigned long flags;
> > + unsigned short int ugen;
> > +
> > + /*
> > + * Obtain the latest ugen number.
> > + */
> > + spin_lock_irqsave(&luf_lock, flags);
> > + ugen = luf_gen;
> > + spin_unlock_irqrestore(&luf_lock, flags);
> > +
> > + check_luf_flush(ugen);
> > +}
>
> Am I reading this right? There's now an unconditional global spinlock

Splitting the lock into several locks, the way RCU does, looked like
*too much* until version 11. However, this code introduced in v11 does
look problematic.

> acquired in the sys_write() path? How can this possibly scale?

I should find a better way.

> So, yeah, I think an optimization is absolutely needed. But, on a more
> fundamental level, I just don't believe these patches are being tested.
> Even a simple microbenchmark should show a pretty nasty regression on
> any decently large system:
>
> > https://github.com/antonblanchard/will-it-scale/blob/master/tests/write1.c
>
> Second, I was just pointing out sys_write() as an example of how the
> page cache could change. Couldn't a separate, read/write mmap() of the
> file do the same thing and *not* go through sb_end_write()?
>
> So:
>
> fd = open("foo");
> ptr1 = mmap(fd, PROT_READ);
> ptr2 = mmap(fd, PROT_READ|PROT_WRITE);
>
> foo = *ptr1; // populate the page cache
> ... page cache page is reclaimed and LUF'd
> *ptr2 = bar; // new page cache page is allocated and written to

I think this part would work but I'm not convinced. I will check again.

> printk("*ptr1: %d\n", *ptr1);
>
> Doesn't the printk() see stale data?
>
> I think tglx would call all of this "tinkering". The approach to this
> series is to "fix" narrow, specific cases that reviewers point out, make
> it compile, then send it out again, hoping someone will apply it.

Sorry for the imperfect work and for bothering you, but you know what?
I can see what is happening in this community too. Of course, I bet
you would post better quality mm patches than me from the 1st version,
but maybe not in other subsystems.

> So, for me, until the approach to this series changes: NAK, for x86.

I understand why you got mad and I feel sorry, but I couldn't foresee
the regression you mentioned above. And I admit the patches have had
problems I couldn't find in advance until you, Hildenbrand and Ying
pointed them out. I will do better.

> Andrew, please don't take this series. Or, if you do, please drop the
> patch enabling it on x86.

I don't want to ask for this to be merged either, if there are still issues.

> I also have the feeling our VFS friends won't take kindly to having

That is also what I thought. What should I do then?
I don't believe you do not agree with the concept itself. Thing is
the current version is not good enough. I will do my best by doing
what I can do.

> random luf_foo() hooks in their hot paths, optimized or not. I don't
> see any of them on cc.

Yes. I should've cc'd them. I will.

Byungchul

2024-05-31 21:47:54

by Dave Hansen

Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

On 5/31/24 11:04, Byungchul Park wrote:
...
> I don't believe you do not agree with the concept itself. Thing is
> the current version is not good enough. I will do my best by doing
> what I can do.

More performance is good. I agree with that.

But it has to be weighed against the risk and the complexity. The more
I look at this approach, the more I think this is not a good trade off.
There's a lot of risk and a lot of complexity and we haven't seen the
full complexity picture. The gaps are being fixed by adding complexity
in new subsystems (the VFS in this case).

There are going to be winners and losers, and this version for example
makes file writes lose performance.

Just to be crystal clear: I disagree with the concept of leaving stale
TLB entries in place in an attempt to gain performance.


2024-05-31 22:10:15

by Matthew Wilcox

Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

On Fri, May 31, 2024 at 02:46:23PM -0700, Dave Hansen wrote:
> On 5/31/24 11:04, Byungchul Park wrote:
> ...
> > I don't believe you do not agree with the concept itself. Thing is
> > the current version is not good enough. I will do my best by doing
> > what I can do.
>
> More performance is good. I agree with that.
>
> But it has to be weighed against the risk and the complexity. The more
> I look at this approach, the more I think this is not a good trade off.
> There's a lot of risk and a lot of complexity and we haven't seen the
> full complexity picture. The gaps are being fixed by adding complexity
> in new subsystems (the VFS in this case).
>
> There are going to be winners and losers, and this version for example
> makes file writes lose performance.
>
> Just to be crystal clear: I disagree with the concept of leaving stale
> TLB entries in place in an attempt to gain performance.

FWIW, I agree with Dave. This feels insanely dangerous and I don't
think you're paranoid enough about things that can go wrong.


2024-06-01 02:20:30

by Byungchul Park

Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

Dave Hansen <[email protected]> wrote:
>
> On 5/31/24 11:04, Byungchul Park wrote:
> ...
> > I don't believe you do not agree with the concept itself. Thing is
> > the current version is not good enough. I will do my best by doing
> > what I can do.
>
> More performance is good. I agree with that.
>
> But it has to be weighed against the risk and the complexity. The more
> I look at this approach, the more I think this is not a good trade off.
> There's a lot of risk and a lot of complexity and we haven't seen the

All the complexity comes from the fact that I can't use new space in
struct page - with that space, the design could even be lockless.

I agree that keeping things simple is best, but I don't think all the
existing fields in struct page are the result of the simplicity you
love. Some of them are quite complicated.

I'd like to find a better way together instead of yelling "it's not
worth it because it's too complicated and there's too little space in
the mm world to accommodate new things".

However, I will think more about the issues already discussed before
the next spin.

Byungchul

> full complexity picture. The gaps are being fixed by adding complexity
> in new subsystems (the VFS in this case).
>
> There are going to be winners and losers, and this version for example
> makes file writes lose performance.
>
> Just to be crystal clear: I disagree with the concept of leaving stale
> TLB entries in place in an attempt to gain performance.
>

2024-06-01 07:22:32

by David Hildenbrand

Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

On 31.05.24 23:46, Dave Hansen wrote:
> On 5/31/24 11:04, Byungchul Park wrote:
> ...
>> I don't believe you do not agree with the concept itself. Thing is
>> the current version is not good enough. I will do my best by doing
>> what I can do.
>
> More performance is good. I agree with that.
>
> But it has to be weighed against the risk and the complexity. The more
> I look at this approach, the more I think this is not a good trade off.
> There's a lot of risk and a lot of complexity and we haven't seen the
> full complexity picture. The gaps are being fixed by adding complexity
> in new subsystems (the VFS in this case).
>
> There are going to be winners and losers, and this version for example
> makes file writes lose performance.
>
> Just to be crystal clear: I disagree with the concept of leaving stale
> TLB entries in place in an attempt to gain performance.

There is the inherent problem that a CPU reading from such (unmapped but
not flushed yet) memory will not get a page fault, which I think is the
most controversial part here (besides interaction with other deferred
TLB flushing, and how this glues into the buddy).

What we used to do so far was limiting the timeframe where that could
happen, under well-controlled circumstances. On the common unmap/zap
path, we perform the batched TLB flush before any page faults / VMA
changes would have been possible and before munmap() would have
returned with "success". Now that time frame could be significantly
longer.

So in current code, at the point in time where we would process a page
fault, mmap()/munmap()/... the TLB would have been flushed already.

To "mimic" the old behavior, we'd essentially have to force any page
faults/mmap/whatsoever to perform the deferred flush such that the CPU
will see the "reality" again. Not sure how that could be done in a
*consistent* way (check whenever we take the mmap/vma lock etc ...) and
if there would still be a performance win.

--
Cheers,

David / dhildenb


2024-06-03 09:42:36

by Byungchul Park

Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

On Sat, Jun 01, 2024 at 09:22:17AM +0200, David Hildenbrand wrote:
> On 31.05.24 23:46, Dave Hansen wrote:
> > On 5/31/24 11:04, Byungchul Park wrote:
> > ...
> > > I don't believe you do not agree with the concept itself. Thing is
> > > the current version is not good enough. I will do my best by doing
> > > what I can do.
> >
> > More performance is good. I agree with that.
> >
> > But it has to be weighed against the risk and the complexity. The more
> > I look at this approach, the more I think this is not a good trade off.
> > There's a lot of risk and a lot of complexity and we haven't seen the
> > full complexity picture. The gaps are being fixed by adding complexity
> > in new subsystems (the VFS in this case).
> >
> > There are going to be winners and losers, and this version for example
> > makes file writes lose performance.
> >
> > Just to be crystal clear: I disagree with the concept of leaving stale
> > TLB entries in place in an attempt to gain performance.
>
> There is the inherent problem that a CPU reading from such (unmapped but not
> flushed yet) memory will not get a page fault, which I think is the most
> controversial part here (besides interaction with other deferred TLB
> flushing, and how this glues into the buddy).
>
> What we used to do so far was limiting the timeframe where that could
> happen, under well-controlled circumstances. On the common unmap/zap path,
> we perform the batched TLB flush before any page faults / VMA changes would
> have been possible and munmap() would have returned with "success". Now that
> time frame could be significantly longer.
>
> So in current code, at the point in time where we would process a page
> fault, mmap()/munmap()/... the TLB would have been flushed already.
>
> To "mimic" the old behavior, we'd essentially have to force any page
> faults/mmap/whatsoever to perform the deferred flush such that the CPU will
> see the "reality" again. Not sure how that could be done in a *consistent*

From luf's point of view, the points where the deferred flush should be
performed are simply:

1. when changing the vma maps, that might be luf'ed.
2. when updating data of the pages, that might be luf'ed.

All we need to do is to identify the points:

1. when changing the vma maps, that might be luf'ed.

a) mmap and munmap, i.e. the fault handler or unmap_region().
b) permission change to writable, i.e. mprotect or the fault handler.
c) what I'm missing.

2. when updating data of the pages, that might be luf'ed.

a) updating files through vfs e.g. file_end_write().
b) updating files through writable maps, i.e. 1-a) or 1-b).
c) what I'm missing.

Some of them already perform the necessary tlb flush and the others do
not. luf has to handle the others, which is what I've been focusing on.
Of course, there might be something I'm missing, though.

Worth noting again: luf currently works only on *migration* and
*reclaim*. The question is when to stop the pending flush initiated by
luf from migration or reclaim.

Byungchul

> way (check whenever we take the mmap/vma lock etc ...) and if there would
> still be a performance win.
>
> --
> Cheers,
>
> David / dhildenb

2024-06-03 13:23:57

by Dave Hansen

Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

On 6/3/24 02:35, Byungchul Park wrote:
...> In luf's point of view, the points where the deferred flush should be
> performed are simply:
>
> 1. when changing the vma maps, that might be luf'ed.
> 2. when updating data of the pages, that might be luf'ed.

It's simple, but the devil is in the details as always.

> All we need to do is to indentify the points:
>
> 1. when changing the vma maps, that might be luf'ed.
>
> a) mmap and munmap e.i. fault handler or unmap_region().
> b) permission to writable e.i. mprotect or fault handler.
> c) what I'm missing.

I'd say it even more generally: anything that installs a PTE which is
inconsistent with the original PTE. That, of course, includes writes.
But it also includes crazy things that we do like uprobes. Take a look
at __replace_page().

I think the page_vma_mapped_walk() checks plus the ptl keep LUF at bay
there. But it needs some really thorough review.

But the bigger concern is that, if there was a problem, I can't think of
a systematic way to find it.

> 2. when updating data of the pages, that might be luf'ed.
>
> a) updating files through vfs e.g. file_end_write().
> b) updating files through writable maps e.i. 1-a) or 1-b).
> c) what I'm missing.

Filesystems or block devices that change content without a "write" from
the local system. Network filesystems and block devices come to mind.
I honestly don't know what all the rules are around these, but they
could certainly be troublesome.

There appear to be some interactions for NFS between file locking and
page cache flushing.

But, stepping back ...

I'd honestly be a lot more comfortable if there was even a debugging LUF
mode that enforced a rule that said:

1. A LUF'd PTE can't be rewritten until after a luf_flush() occurs
2. A LUF'd page's position in the page cache can't be replaced until
after a luf_flush()

or *some* other independent set of rules that can tell us when something
goes wrong. That uprobes code, for instance, seems like it will work.
But I can also imagine writing it ten other ways where it would break
when combined with LUF.

2024-06-03 16:07:27

by David Hildenbrand

Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

On 03.06.24 15:23, Dave Hansen wrote:
> On 6/3/24 02:35, Byungchul Park wrote:
> ...> In luf's point of view, the points where the deferred flush should be
>> performed are simply:
>>
>> 1. when changing the vma maps, that might be luf'ed.
>> 2. when updating data of the pages, that might be luf'ed.
>
> It's simple, but the devil is in the details as always.
>
>> All we need to do is to indentify the points:
>>
>> 1. when changing the vma maps, that might be luf'ed.
>>
>> a) mmap and munmap e.i. fault handler or unmap_region().
>> b) permission to writable e.i. mprotect or fault handler.
>> c) what I'm missing.
>
> I'd say it even more generally: anything that installs a PTE which is
> inconsistent with the original PTE. That, of course, includes writes.
> But it also includes crazy things that we do like uprobes. Take a look
> at __replace_page().
>
> I think the page_vma_mapped_walk() checks plus the ptl keep LUF at bay
> there. But it needs some really thorough review.
>
> But the bigger concern is that, if there was a problem, I can't think of
> a systematic way to find it.

Fully agreed!

>
>> 2. when updating data of the pages, that might be luf'ed.
>>
>> a) updating files through vfs e.g. file_end_write().
>> b) updating files through writable maps e.i. 1-a) or 1-b).
>> c) what I'm missing.
>
> Filesystems or block devices that change content without a "write" from
> the local system. Network filesystems and block devices come to mind.
> I honestly don't know what all the rules are around these, but they
> could certainly be troublesome.
>
> There appear to be some interactions for NFS between file locking and
> page cache flushing.
>
> But, stepping back ...
>
> I'd honestly be a lot more comfortable if there was even a debugging LUF
> mode that enforced a rule that said:
>
> 1. A LUF'd PTE can't be rewritten until after a luf_flush() occurs

I was playing with the idea of using a PTE marker. Then it's clear for
munmap/mremap/page faults that there is an outstanding flush required.
The alternative might be a VMA flag, but with that it's harder to
actually enforce an invariant.

> 2. A LUF'd page's position in the page cache can't be replaced until
> after a luf_flush()

That's the most tricky bit. I think these are the VFS concerns like

1) Page migration/reclaim ends up freeing the old page. TLB not flushed.
2) write() to the new page / write from other process to the new page
3) CPU reads stale content from old page

PTE markers can't handle that.

--
Cheers,

David / dhildenb


2024-06-03 16:37:59

by Dave Hansen

Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

On 6/3/24 09:05, David Hildenbrand wrote:
...
>>    2. A LUF'd page's position in the page cache can't be replaced until
>>       after a luf_flush()
>
> That's the most tricky bit. I think these are the VFS concerns like
>
> 1) Page migration/reclaim ends up freeing the old page. TLB not flushed.
> 2) write() to the new page / write from other process to the new page
> 3) CPU reads stale content from old page
>
> PTE markers can't handle that.

Yeah, we'd need some equivalent of a PTE marker, but for the page cache.
Presumably some xa_value() that means a reader has to go do a
luf_flush() before going any farther.

That would actually have a chance at fixing two issues: One where a new
page cache insertion is attempted. The other where someone goes to look
in the page cache and takes some action _because_ it is empty (I think
NFS is doing some of this for file locks).

LUF is also pretty fundamentally built on the idea that files can't
change without LUF being aware. That model seems to work decently for
normal old filesystems on normal old local block devices. I'm worried
about NFS, and I don't know how seriously folks take FUSE, but it
obviously can't work well for FUSE.

2024-06-03 17:01:49

by Matthew Wilcox

Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

On Mon, Jun 03, 2024 at 09:37:46AM -0700, Dave Hansen wrote:
> Yeah, we'd need some equivalent of a PTE marker, but for the page cache.
> Presumably some xa_value() that means a reader has to go do a
> luf_flush() before going any farther.

I can allocate one for that. We've got something like 1000 currently
unused values which can't be mistaken for anything else.

> That would actually have a chance at fixing two issues: One where a new
> page cache insertion is attempted. The other where someone goes to look
> in the page cache and takes some action _because_ it is empty (I think
> NFS is doing some of this for file locks).
>
> LUF is also pretty fundamentally built on the idea that files can't
> change without LUF being aware. That model seems to work decently for
> normal old filesystems on normal old local block devices. I'm worried
> about NFS, and I don't know how seriously folks take FUSE, but it
> obviously can't work well for FUSE.

I'm more concerned with:

- page goes back to buddy
- page is allocated to slab
- application reads through stale TLB entry and sees kernel memory

Or did that scenario get resolved?

2024-06-03 18:01:30

by David Hildenbrand

Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

On 03.06.24 19:01, Matthew Wilcox wrote:
> On Mon, Jun 03, 2024 at 09:37:46AM -0700, Dave Hansen wrote:
>> Yeah, we'd need some equivalent of a PTE marker, but for the page cache.
>> Presumably some xa_value() that means a reader has to go do a
>> luf_flush() before going any farther.
>
> I can allocate one for that. We've got something like 1000 currently
> unused values which can't be mistaken for anything else.

I'm curious when to set that, though.

While migrating/reclaiming, when unmapping the folio from the page
tables, the folio is still valid in the page cache. So at the point in
time of unmapping from one process, we cannot simply replace the folio
in the page cache by some other value -- I think.

Maybe it's all easier than I think.

--
Cheers,

David / dhildenb


2024-06-04 00:35:07

by Byungchul Park

Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

On Mon, Jun 03, 2024 at 06:01:05PM +0100, Matthew Wilcox wrote:
> On Mon, Jun 03, 2024 at 09:37:46AM -0700, Dave Hansen wrote:
> > Yeah, we'd need some equivalent of a PTE marker, but for the page cache.
> > Presumably some xa_value() that means a reader has to go do a
> > luf_flush() before going any farther.
>
> I can allocate one for that. We've got something like 1000 currently
> unused values which can't be mistaken for anything else.
>
> > That would actually have a chance at fixing two issues: One where a new
> > page cache insertion is attempted. The other where someone goes to look
> > in the page cache and takes some action _because_ it is empty (I think
> > NFS is doing some of this for file locks).
> >
> > LUF is also pretty fundamentally built on the idea that files can't
> > change without LUF being aware. That model seems to work decently for
> > normal old filesystems on normal old local block devices. I'm worried
> > about NFS, and I don't know how seriously folks take FUSE, but it
> > obviously can't work well for FUSE.
>
> I'm more concerned with:
>
> - page goes back to buddy
> - page is allocated to slab

At this point, the needed tlb flush will be performed in prep_new_page().

> - application reads through stale TLB entry and sees kernel memory

No worry for this case.

Byungchul
>
> Or did that scenario get resolved?

2024-06-04 01:56:36

by Byungchul Park

Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

On Mon, Jun 03, 2024 at 06:23:46AM -0700, Dave Hansen wrote:
> On 6/3/24 02:35, Byungchul Park wrote:
> ...> In luf's point of view, the points where the deferred flush should be
> > performed are simply:
> >
> > 1. when changing the vma maps, that might be luf'ed.
> > 2. when updating data of the pages, that might be luf'ed.
>
> It's simple, but the devil is in the details as always.

Agree with that.

> > All we need to do is to indentify the points:
> >
> > 1. when changing the vma maps, that might be luf'ed.
> >
> > a) mmap and munmap e.i. fault handler or unmap_region().
> > b) permission to writable e.i. mprotect or fault handler.
> > c) what I'm missing.
>
> I'd say it even more generally: anything that installs a PTE which is
> inconsistent with the original PTE. That, of course, includes writes.
> But it also includes crazy things that we do like uprobes. Take a look
> at __replace_page().
>
> I think the page_vma_mapped_walk() checks plus the ptl keep LUF at bay
> there. But it needs some really thorough review.
>
> But the bigger concern is that, if there was a problem, I can't think of
> a systematic way to find it.
>
> > 2. when updating data of the pages, that might be luf'ed.
> >
> > a) updating files through vfs e.g. file_end_write().
> > b) updating files through writable maps e.i. 1-a) or 1-b).
> > c) what I'm missing.
>
> Filesystems or block devices that change content without a "write" from
> the local system. Network filesystems and block devices come to mind.

AFAIK, every network filesystem eventually "updates" its connected
local filesystem. It could still be handled at the point where the
local filesystem gets updated.

> I honestly don't know what all the rules are around these, but they
> could certainly be troublesome.
>
> There appear to be some interactions for NFS between file locking and
> page cache flushing.
>
> But, stepping back ...
>
> I'd honestly be a lot more comfortable if there was even a debugging LUF

I'd better provide a method for better debugging. Lemme know whatever
it is we need.

> mode that enforced a rule that said:

Why "debugging mode"? The following rules should be enforced always.

> 1. A LUF'd PTE can't be rewritten until after a luf_flush() occurs

"luf_flush() should be followed when.." is more correct because
"luf_flush() -> another luf -> the pte gets rewritten" can happen. So
it should be "the pte gets rewritten -> another luf by any chance ->
luf_flush()", that is still safe.

> 2. A LUF'd page's position in the page cache can't be replaced until
> after a luf_flush()

"luf_flush() should be followed when.." is more correct too.

These two rules are exactly the same as what I described, just more
specific. I like your way of describing the rules.

Byungchul

> or *some* other independent set of rules that can tell us when something
> goes wrong. That uprobes code, for instance, seems like it will work.
> But I can also imagine writing it ten other ways where it would break
> when combined with LUF.

2024-06-04 04:44:07

by Byungchul Park

Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

On Tue, Jun 04, 2024 at 10:53:48AM +0900, Byungchul Park wrote:
> On Mon, Jun 03, 2024 at 06:23:46AM -0700, Dave Hansen wrote:
> > On 6/3/24 02:35, Byungchul Park wrote:
> > ...> In luf's point of view, the points where the deferred flush should be
> > > performed are simply:
> > >
> > > 1. when changing the vma maps, that might be luf'ed.
> > > 2. when updating data of the pages, that might be luf'ed.
> >
> > It's simple, but the devil is in the details as always.
>
> Agree with that.
>
> > > All we need to do is to indentify the points:
> > >
> > > 1. when changing the vma maps, that might be luf'ed.
> > >
> > > a) mmap and munmap e.i. fault handler or unmap_region().
> > > b) permission to writable e.i. mprotect or fault handler.
> > > c) what I'm missing.
> >
> > I'd say it even more generally: anything that installs a PTE which is
> > inconsistent with the original PTE. That, of course, includes writes.
> > But it also includes crazy things that we do like uprobes. Take a look
> > at __replace_page().
> >
> > I think the page_vma_mapped_walk() checks plus the ptl keep LUF at bay
> > there. But it needs some really thorough review.
> >
> > But the bigger concern is that, if there was a problem, I can't think of
> > a systematic way to find it.
> >
> > > 2. when updating data of the pages, that might be luf'ed.
> > >
> > > a) updating files through vfs e.g. file_end_write().
> > > b) updating files through writable maps e.i. 1-a) or 1-b).
> > > c) what I'm missing.
> >
> > Filesystems or block devices that change content without a "write" from
> > the local system. Network filesystems and block devices come to mind.
>
> AFAIK, every network filesystem eventully "updates" its connected local
> filesystem. It could be still handled at the point where updating the
> local file system.
>
> > I honestly don't know what all the rules are around these, but they
> > could certainly be troublesome.
> >
> > There appear to be some interactions for NFS between file locking and
> > page cache flushing.
> >
> > But, stepping back ...
> >
> > I'd honestly be a lot more comfortable if there was even a debugging LUF
>
> I'd better provide a method for better debugging. Lemme know whatever
> it is we need.
>
> > mode that enforced a rule that said:

Do you mean a debugging mode that can WARN about, or report, a
situation that we don't want? If so, sure. Now that I get this, I will
re-read the whole discussion.

Byungchul

> Why "debugging mode"? The following rules should be enforced always.
>
> > 1. A LUF'd PTE can't be rewritten until after a luf_flush() occurs
>
> "luf_flush() should be followed when.." is more correct because
> "luf_flush() -> another luf -> the pte gets rewritten" can happen. So
> it should be "the pte gets rewritten -> another luf by any chance ->
> luf_flush()", that is still safe.
>
> > 2. A LUF'd page's position in the page cache can't be replaced until
> > after a luf_flush()
>
> "luf_flush() should be followed when.." is more correct too.
>
> These two rules are exactly same as what I described but more specific.
> I like your way to describe the rules.
>
> Byungchul
>
> > or *some* other independent set of rules that can tell us when something
> > goes wrong. That uprobes code, for instance, seems like it will work.
> > But I can also imagine writing it ten other ways where it would break
> > when combined with LUF.

2024-06-04 08:18:53

by Huang, Ying

[permalink] [raw]
Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

David Hildenbrand <[email protected]> writes:

> On 03.06.24 19:01, Matthew Wilcox wrote:
>> On Mon, Jun 03, 2024 at 09:37:46AM -0700, Dave Hansen wrote:
>>> Yeah, we'd need some equivalent of a PTE marker, but for the page cache.
>>> Presumably some xa_value() that means a reader has to go do a
>>> luf_flush() before going any farther.
>> I can allocate one for that. We've got something like 1000 currently
>> unused values which can't be mistaken for anything else.
>
> I'm curious when to set that, though.
>
> While migrating/reclaiming, when unmapping the folio from the page
> tables, the folio is still valid in the page cache. So at the point in
> time of unmapping from one process, we cannot simply replace the folio
> in the page cache by some other value -- I think.
>
> Maybe it's all easier than I think.

IIUC, we need to hold the folio lock before replacing the folio in the
page cache. In page_cache_delete(), folio_test_locked() is checked.
And we will lock the folio before writing to it via the write syscall.
So, it's safe to defer TLB flushing until we unlock the folio.
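
For example, roughly - just a sketch, where luf_flush_folio() is a
hypothetical helper standing for whatever performs the TLB shootdown
deferred for the folio:

static void update_file_folio(struct folio *folio)
{
        folio_lock(folio);
        /*
         * Replacing the folio in the page cache requires the folio
         * lock (page_cache_delete() checks folio_test_locked()), so
         * while we hold it, the deferred flush can safely be done any
         * time before unlock.
         */
        luf_flush_folio(folio);         /* hypothetical */
        /* ... modify the folio contents ... */
        folio_unlock(folio);
}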

--
Best Regards,
Huang, Ying

2024-06-06 08:33:44

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

On 04.06.24 06:43, Byungchul Park wrote:
> On Tue, Jun 04, 2024 at 10:53:48AM +0900, Byungchul Park wrote:
>> On Mon, Jun 03, 2024 at 06:23:46AM -0700, Dave Hansen wrote:
>>> On 6/3/24 02:35, Byungchul Park wrote:
>>> ...> In luf's point of view, the points where the deferred flush should be
>>>> performed are simply:
>>>>
>>>> 1. when changing the vma maps, that might be luf'ed.
>>>> 2. when updating data of the pages, that might be luf'ed.
>>>
>>> It's simple, but the devil is in the details as always.
>>
>> Agree with that.
>>
>>>> All we need to do is to indentify the points:
>>>>
>>>> 1. when changing the vma maps, that might be luf'ed.
>>>>
>>>> a) mmap and munmap e.i. fault handler or unmap_region().
>>>> b) permission to writable e.i. mprotect or fault handler.
>>>> c) what I'm missing.
>>>
>>> I'd say it even more generally: anything that installs a PTE which is
>>> inconsistent with the original PTE. That, of course, includes writes.
>>> But it also includes crazy things that we do like uprobes. Take a look
>>> at __replace_page().
>>>
>>> I think the page_vma_mapped_walk() checks plus the ptl keep LUF at bay
>>> there. But it needs some really thorough review.
>>>
>>> But the bigger concern is that, if there was a problem, I can't think of
>>> a systematic way to find it.
>>>
>>>> 2. when updating data of the pages, that might be luf'ed.
>>>>
>>>> a) updating files through vfs e.g. file_end_write().
>>>> b) updating files through writable maps e.i. 1-a) or 1-b).
>>>> c) what I'm missing.
>>>
>>> Filesystems or block devices that change content without a "write" from
>>> the local system. Network filesystems and block devices come to mind.
>>
>> AFAIK, every network filesystem eventully "updates" its connected local
>> filesystem. It could be still handled at the point where updating the
>> local file system.
>>
>>> I honestly don't know what all the rules are around these, but they
>>> could certainly be troublesome.
>>>
>>> There appear to be some interactions for NFS between file locking and
>>> page cache flushing.
>>>
>>> But, stepping back ...
>>>
>>> I'd honestly be a lot more comfortable if there was even a debugging LUF
>>
>> I'd better provide a method for better debugging. Lemme know whatever
>> it is we need.
>>
>>> mode that enforced a rule that said:
>
> Do you means a debugging mode that can WARN or inform the situation that
> we don't want? If yes, sure. Now that I get this, I will re-read all
> you guys' talk.
>

In my opinion either for debugging, or for actually enforcing it at
runtime. Whatever the cost of that would be needs to be determined.

--
Cheers,

David / dhildenb


2024-06-10 13:24:34

by Michal Hocko

[permalink] [raw]
Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

On Tue 04-06-24 09:34:48, Byungchul Park wrote:
> On Mon, Jun 03, 2024 at 06:01:05PM +0100, Matthew Wilcox wrote:
> > On Mon, Jun 03, 2024 at 09:37:46AM -0700, Dave Hansen wrote:
> > > Yeah, we'd need some equivalent of a PTE marker, but for the page cache.
> > > Presumably some xa_value() that means a reader has to go do a
> > > luf_flush() before going any farther.
> >
> > I can allocate one for that. We've got something like 1000 currently
> > unused values which can't be mistaken for anything else.
> >
> > > That would actually have a chance at fixing two issues: One where a new
> > > page cache insertion is attempted. The other where someone goes to look
> > > in the page cache and takes some action _because_ it is empty (I think
> > > NFS is doing some of this for file locks).
> > >
> > > LUF is also pretty fundamentally built on the idea that files can't
> > > change without LUF being aware. That model seems to work decently for
> > > normal old filesystems on normal old local block devices. I'm worried
> > > about NFS, and I don't know how seriously folks take FUSE, but it
> > > obviously can't work well for FUSE.
> >
> > I'm more concerned with:
> >
> > - page goes back to buddy
> > - page is allocated to slab
>
> At this point, tlb flush needed will be performed in prep_new_page().

But that does mean that an unaware caller would get the additional
overhead of the flushing, right? I think it would be just a matter of
time before somebody turns that into a side channel attack, not to
mention the unexpected latencies introduced.
--
Michal Hocko
SUSE Labs

2024-06-11 01:10:49

by Byungchul Park

[permalink] [raw]
Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

On Mon, Jun 10, 2024 at 03:23:49PM +0200, Michal Hocko wrote:
> On Tue 04-06-24 09:34:48, Byungchul Park wrote:
> > On Mon, Jun 03, 2024 at 06:01:05PM +0100, Matthew Wilcox wrote:
> > > On Mon, Jun 03, 2024 at 09:37:46AM -0700, Dave Hansen wrote:
> > > > Yeah, we'd need some equivalent of a PTE marker, but for the page cache.
> > > > Presumably some xa_value() that means a reader has to go do a
> > > > luf_flush() before going any farther.
> > >
> > > I can allocate one for that. We've got something like 1000 currently
> > > unused values which can't be mistaken for anything else.
> > >
> > > > That would actually have a chance at fixing two issues: One where a new
> > > > page cache insertion is attempted. The other where someone goes to look
> > > > in the page cache and takes some action _because_ it is empty (I think
> > > > NFS is doing some of this for file locks).
> > > >
> > > > LUF is also pretty fundamentally built on the idea that files can't
> > > > change without LUF being aware. That model seems to work decently for
> > > > normal old filesystems on normal old local block devices. I'm worried
> > > > about NFS, and I don't know how seriously folks take FUSE, but it
> > > > obviously can't work well for FUSE.
> > >
> > > I'm more concerned with:
> > >
> > > - page goes back to buddy
> > > - page is allocated to slab
> >
> > At this point, tlb flush needed will be performed in prep_new_page().
>
> But that does mean that an unaware caller would get an additional
> overhead of the flushing, right? I think it would be just a matter of

The pcp, which exists for locality, is already a better source for a
side channel attack than this. FYI, the tlb flush is barely ever
performed, and only when a pending tlb flush exists.

> time before somebody can turn that into a side channel attack, not to
> mention unexpected latencies introduced.

Nope. The pending tlb flush performed in prep_new_page() is one that
would've been done already with the vanilla kernel. It's not an
additional tlb flush but a subset of all the skipped ones.
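
To illustrate - a conceptual sketch only, where pending_luf_flush() is
a hypothetical check for whether the page still carries a deferred
shootdown:

static void prep_new_page_luf(struct page *page)
{
        /*
         * Perform the shootdown that was skipped at unmap time, before
         * the page can be handed out e.g. to slab.  With the vanilla
         * kernel, this shootdown would have been performed
         * unconditionally at unmap time anyway.
         */
        if (pending_luf_flush(page))            /* hypothetical */
                try_to_unmap_flush();
}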

It's worth noting that all the existing mm reclaim mechanisms have
already introduced worse unexpected latencies.

Byungchul

> --
> Michal Hocko
> SUSE Labs

2024-06-11 09:12:35

by Byungchul Park

[permalink] [raw]
Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

On Mon, Jun 03, 2024 at 06:23:46AM -0700, Dave Hansen wrote:
> On 6/3/24 02:35, Byungchul Park wrote:
> ...> In luf's point of view, the points where the deferred flush should be
> > performed are simply:
> >
> > 1. when changing the vma maps, that might be luf'ed.
> > 2. when updating data of the pages, that might be luf'ed.
>
> It's simple, but the devil is in the details as always.
>
> > All we need to do is to indentify the points:
> >
> > 1. when changing the vma maps, that might be luf'ed.
> >
> > a) mmap and munmap e.i. fault handler or unmap_region().
> > b) permission to writable e.i. mprotect or fault handler.
> > c) what I'm missing.
>
> I'd say it even more generally: anything that installs a PTE which is
> inconsistent with the original PTE. That, of course, includes writes.
> But it also includes crazy things that we do like uprobes. Take a look
> at __replace_page().
>
> I think the page_vma_mapped_walk() checks plus the ptl keep LUF at bay
> there. But it needs some really thorough review.
>
> But the bigger concern is that, if there was a problem, I can't think of
> a systematic way to find it.
>
> > 2. when updating data of the pages, that might be luf'ed.
> >
> > a) updating files through vfs e.g. file_end_write().
> > b) updating files through writable maps e.i. 1-a) or 1-b).
> > c) what I'm missing.
>
> Filesystems or block devices that change content without a "write" from
> the local system. Network filesystems and block devices come to mind.
> I honestly don't know what all the rules are around these, but they
> could certainly be troublesome.
>
> There appear to be some interactions for NFS between file locking and
> page cache flushing.
>
> But, stepping back ...
>
> I'd honestly be a lot more comfortable if there was even a debugging LUF
> mode that enforced a rule that said:
>
> 1. A LUF'd PTE can't be rewritten until after a luf_flush() occurs
> 2. A LUF'd page's position in the page cache can't be replaced until
> after a luf_flush()

I'm thinking of a debug mode doing the following *pseudo* code - please
check the logic only, since the syntax might be wrong:

0-a) Introduce new fields in page_ext:

#ifdef LUF_DEBUG
        struct list_head __percpu *luf_node;
#endif

0-b) Introduce new fields in struct address_space:

#ifdef LUF_DEBUG
        struct list_head __percpu *luf_node;
#endif

0-c) Introduce new fields in struct task_struct:

#ifdef LUF_DEBUG
        cpumask_t luf_pending_cpus;
#endif

0-d) Define percpu list_heads to link luf'd folios and address_spaces:

#ifdef LUF_DEBUG
DEFINE_PER_CPU(struct list_head, luf_folios);
DEFINE_PER_CPU(struct list_head, luf_address_spaces);
#endif

1) When skipping tlb flush in reclaim or migration for a folio:

#ifdef LUF_DEBUG
        ext = get_page_ext_for_luf_debug(folio);
        as = folio_mapping(folio);

        /* Link the folio, and its mapping if any, on every cpu whose flush is skipped. */
        for_each_cpu(cpu, skip_cpus) {
                list_add(per_cpu_ptr(ext->luf_node, cpu),
                         per_cpu_ptr(&luf_folios, cpu));
                if (as)
                        list_add(per_cpu_ptr(as->luf_node, cpu),
                                 per_cpu_ptr(&luf_address_spaces, cpu));
        }
        put_page_ext(ext);
#endif

2) When performing tlb flush in try_to_unmap_flush():
   Remember that luf only works on unmapping during reclaim and migration.

#ifdef LUF_DEBUG
        for_each_cpu(cpu, now_flushing_cpus) {
                /* Every luf'd folio pending on this cpu is now being flushed. */
                for_each_node_safe(folio, per_cpu_ptr(&luf_folios, cpu)) {
                        ext = get_page_ext_for_luf_debug(folio);
                        list_del_init(per_cpu_ptr(ext->luf_node, cpu));
                        put_page_ext(ext);
                }

                for_each_node_safe(as, per_cpu_ptr(&luf_address_spaces, cpu))
                        list_del_init(per_cpu_ptr(as->luf_node, cpu));

                cpumask_clear_cpu(cpu, &current->luf_pending_cpus);
        }
#endif

3) In pte_mkwrite():

#ifdef LUF_DEBUG
        ext = get_page_ext_for_luf_debug(folio);

        /* Mark which cpus still hold a deferred flush for this folio. */
        for_each_online_cpu(cpu)
                if (!list_empty(per_cpu_ptr(ext->luf_node, cpu)))
                        cpumask_set_cpu(cpu, &current->luf_pending_cpus);
        put_page_ext(ext);
#endif

4) On returning to user:

#ifdef LUF_DEBUG
        /* No flush the current task depends on may still be pending. */
        WARN_ON(!cpumask_empty(&current->luf_pending_cpus));
#endif

5) Right after every a_ops->write_end() call:

#ifdef LUF_DEBUG
        as = get_address_space_to_write_to();

        /* Mark which cpus still hold a deferred flush for this mapping. */
        for_each_online_cpu(cpu)
                if (!list_empty(per_cpu_ptr(as->luf_node, cpu)))
                        cpumask_set_cpu(cpu, &current->luf_pending_cpus);
#endif

        luf_flush_or_its_optimized_version();

#ifdef LUF_DEBUG
        /* The flush above must have cleared every cpu marked pending. */
        WARN_ON(!cpumask_empty(&current->luf_pending_cpus));
#endif

I will implement the debug mode this way, with everything properly
serialized. Do you think it works for what we want?

Byungchul

> or *some* other independent set of rules that can tell us when something
> goes wrong. That uprobes code, for instance, seems like it will work.
> But I can also imagine writing it ten other ways where it would break
> when combined with LUF.

2024-06-11 12:23:49

by Michal Hocko

[permalink] [raw]
Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

On Tue 11-06-24 09:55:23, Byungchul Park wrote:
> On Mon, Jun 10, 2024 at 03:23:49PM +0200, Michal Hocko wrote:
> > On Tue 04-06-24 09:34:48, Byungchul Park wrote:
> > > On Mon, Jun 03, 2024 at 06:01:05PM +0100, Matthew Wilcox wrote:
> > > > On Mon, Jun 03, 2024 at 09:37:46AM -0700, Dave Hansen wrote:
> > > > > Yeah, we'd need some equivalent of a PTE marker, but for the page cache.
> > > > > Presumably some xa_value() that means a reader has to go do a
> > > > > luf_flush() before going any farther.
> > > >
> > > > I can allocate one for that. We've got something like 1000 currently
> > > > unused values which can't be mistaken for anything else.
> > > >
> > > > > That would actually have a chance at fixing two issues: One where a new
> > > > > page cache insertion is attempted. The other where someone goes to look
> > > > > in the page cache and takes some action _because_ it is empty (I think
> > > > > NFS is doing some of this for file locks).
> > > > >
> > > > > LUF is also pretty fundamentally built on the idea that files can't
> > > > > change without LUF being aware. That model seems to work decently for
> > > > > normal old filesystems on normal old local block devices. I'm worried
> > > > > about NFS, and I don't know how seriously folks take FUSE, but it
> > > > > obviously can't work well for FUSE.
> > > >
> > > > I'm more concerned with:
> > > >
> > > > - page goes back to buddy
> > > > - page is allocated to slab
> > >
> > > At this point, tlb flush needed will be performed in prep_new_page().
> >
> > But that does mean that an unaware caller would get an additional
> > overhead of the flushing, right? I think it would be just a matter of
>
> pcp for locality is already a better source of side channel attack. FYI,
> tlb flush gets barely performed only if pending tlb flush exists.

Right, but rare and hard-to-predict latencies are much worse than
consistent ones.

> > time before somebody can turn that into a side channel attack, not to
> > mention unexpected latencies introduced.
>
> Nope. The pending tlb flush performed in prep_new_page() is the one
> that would've done already with the vanilla kernel. It's not additional
> tlb flushes but it's subset of all the skipped ones.

But those skipped ones could have happened in a completely different
context (e.g. a different process or even a different security domain),
right?

> It's worth noting all the existing mm reclaim mechaisms have already
> introduced worse unexpected latencies.

Right, but reclaim, especially direct reclaim, is expected to be slow.
It is quite different to see latency spikes on a system with a lot of
memory.
--
Michal Hocko
SUSE Labs

2024-06-14 01:58:11

by Byungchul Park

[permalink] [raw]
Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

On Tue, Jun 04, 2024 at 10:53:48AM +0900, Byungchul Park wrote:
> On Mon, Jun 03, 2024 at 06:23:46AM -0700, Dave Hansen wrote:
> > On 6/3/24 02:35, Byungchul Park wrote:
> > ...> In luf's point of view, the points where the deferred flush should be
> > > performed are simply:
> > >
> > > 1. when changing the vma maps, that might be luf'ed.
> > > 2. when updating data of the pages, that might be luf'ed.
> >
> > It's simple, but the devil is in the details as always.
>
> Agree with that.
>
> > > All we need to do is to indentify the points:
> > >
> > > 1. when changing the vma maps, that might be luf'ed.
> > >
> > > a) mmap and munmap e.i. fault handler or unmap_region().
> > > b) permission to writable e.i. mprotect or fault handler.
> > > c) what I'm missing.
> >
> > I'd say it even more generally: anything that installs a PTE which is
> > inconsistent with the original PTE. That, of course, includes writes.
> > But it also includes crazy things that we do like uprobes. Take a look
> > at __replace_page().
> >
> > I think the page_vma_mapped_walk() checks plus the ptl keep LUF at bay
> > there. But it needs some really thorough review.
> >
> > But the bigger concern is that, if there was a problem, I can't think of
> > a systematic way to find it.
> >
> > > 2. when updating data of the pages, that might be luf'ed.
> > >
> > > a) updating files through vfs e.g. file_end_write().
> > > b) updating files through writable maps e.i. 1-a) or 1-b).
> > > c) what I'm missing.
> >
> > Filesystems or block devices that change content without a "write" from
> > the local system. Network filesystems and block devices come to mind.
>
> AFAIK, every network filesystem eventully "updates" its connected local
> filesystem. It could be still handled at the point where updating the
> local file system.

To cover clients of network file systems, and anything else that uses
the page cache, struct address_space_operations's write_end() call
sites seem to be the best place to handle that. At the same time, of
course, I should limit the target of luf, for file pages, to folios
with 'folio_mapping(folio) != NULL'.
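
For example, something like the following - just a sketch, where
mapping_luf_pending() and luf_flush_mapping() are hypothetical helpers,
to be called from the ->write_end() call sites such as
generic_perform_write():

static inline void luf_flush_after_write_end(struct address_space *mapping)
{
        /*
         * The file contents have just been updated through the vfs, so
         * any stale read-only tlb entries that luf left behind for
         * folios of this mapping must be flushed before anyone keeps
         * reading the old data through them.
         */
        if (mapping_luf_pending(mapping))       /* hypothetical */
                luf_flush_mapping(mapping);     /* hypothetical */
}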

Byungchul

> > I honestly don't know what all the rules are around these, but they
> > could certainly be troublesome.
> >
> > There appear to be some interactions for NFS between file locking and
> > page cache flushing.
> >
> > But, stepping back ...
> >
> > I'd honestly be a lot more comfortable if there was even a debugging LUF
>
> I'd better provide a method for better debugging. Lemme know whatever
> it is we need.
>
> > mode that enforced a rule that said:
>
> Why "debugging mode"? The following rules should be enforced always.
>
> > 1. A LUF'd PTE can't be rewritten until after a luf_flush() occurs
>
> "luf_flush() should be followed when.." is more correct because
> "luf_flush() -> another luf -> the pte gets rewritten" can happen. So
> it should be "the pte gets rewritten -> another luf by any chance ->
> luf_flush()", that is still safe.
>
> > 2. A LUF'd page's position in the page cache can't be replaced until
> > after a luf_flush()
>
> "luf_flush() should be followed when.." is more correct too.
>
> These two rules are exactly same as what I described but more specific.
> I like your way to describe the rules.
>
> Byungchul
>
> > or *some* other independent set of rules that can tell us when something
> > goes wrong. That uprobes code, for instance, seems like it will work.
> > But I can also imagine writing it ten other ways where it would break
> > when combined with LUF.

2024-06-14 02:45:39

by Byungchul Park

[permalink] [raw]
Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped

On Tue, Jun 11, 2024 at 01:55:05PM +0200, Michal Hocko wrote:
> On Tue 11-06-24 09:55:23, Byungchul Park wrote:
> > On Mon, Jun 10, 2024 at 03:23:49PM +0200, Michal Hocko wrote:
> > > On Tue 04-06-24 09:34:48, Byungchul Park wrote:
> > > > On Mon, Jun 03, 2024 at 06:01:05PM +0100, Matthew Wilcox wrote:
> > > > > On Mon, Jun 03, 2024 at 09:37:46AM -0700, Dave Hansen wrote:
> > > > > > Yeah, we'd need some equivalent of a PTE marker, but for the page cache.
> > > > > > Presumably some xa_value() that means a reader has to go do a
> > > > > > luf_flush() before going any farther.
> > > > >
> > > > > I can allocate one for that. We've got something like 1000 currently
> > > > > unused values which can't be mistaken for anything else.
> > > > >
> > > > > > That would actually have a chance at fixing two issues: One where a new
> > > > > > page cache insertion is attempted. The other where someone goes to look
> > > > > > in the page cache and takes some action _because_ it is empty (I think
> > > > > > NFS is doing some of this for file locks).
> > > > > >
> > > > > > LUF is also pretty fundamentally built on the idea that files can't
> > > > > > change without LUF being aware. That model seems to work decently for
> > > > > > normal old filesystems on normal old local block devices. I'm worried
> > > > > > about NFS, and I don't know how seriously folks take FUSE, but it
> > > > > > obviously can't work well for FUSE.
> > > > >
> > > > > I'm more concerned with:
> > > > >
> > > > > - page goes back to buddy
> > > > > - page is allocated to slab
> > > >
> > > > At this point, tlb flush needed will be performed in prep_new_page().
> > >
> > > But that does mean that an unaware caller would get an additional
> > > overhead of the flushing, right? I think it would be just a matter of
> >
> > pcp for locality is already a better source of side channel attack. FYI,
> > tlb flush gets barely performed only if pending tlb flush exists.
>
> Right but rare and hard to predict latencies are much worse than
> consistent once.

No doubt it'd be best if we keep things as consistent as possible.
What matters is how consistent *we require* it to be. Let me know the
criteria for that, if any, and I will check it.

> > > time before somebody can turn that into a side channel attack, not to
> > > mention unexpected latencies introduced.
> >
> > Nope. The pending tlb flush performed in prep_new_page() is the one
> > that would've done already with the vanilla kernel. It's not additional
> > tlb flushes but it's subset of all the skipped ones.
>
> But those skipped once could have happened in a completely different
> context (e.g. a different process or even a diffrent security domain),
> right?

Right.

> > It's worth noting all the existing mm reclaim mechaisms have already
> > introduced worse unexpected latencies.
>
> Right, but a reclaim, especially direct reclaim, are expected to be
> slow. It is much different to see spike latencies on system with a lot
> of memory.

Are you talking about an rt system? On an rt system, the system should
prevent its memory from being reclaimed in the first place, IMHO, since
reclaim will add unexpected latencies.

Reclaim and migration already introduce unexpected latencies themselves.
Why do only the latencies added by luf matter? I'm asking to understand
what you mean, so I can fix luf if needed.

vanilla
-------
alloc_page() {
        ...
        preempted by kswapd or direct reclaim {
                ...
                reclaim
                unmap file pages
                tlb shootdown
                ...
                migration
                unmap pages
                tlb shootdown
                ...
        }
        ...
        interrupted by tlb shootdown from other CPUs {
                ...
        }
        ...
        prep_new_page() {
                ...
        }
}

with luf
--------
alloc_page() {
        ...
        preempted by kswapd or direct reclaim {
                ...
                reclaim
                unmap file pages
                (skip tlb shootdown)
                ...
                migration
                unmap pages
                (skip tlb shootdown)
                ...
        }
        ...
        interrupted by tlb shootdown from other CPUs {
                ...
        }
        ...
        prep_new_page() {
                ...
                /*
                 * This can be a tlb shootdown skipped in this context
                 * or in other contexts.
                 */
                tlb shootdown with a much smaller cpumask
                ...
        }
}

I really want to understand why only the latencies introduced by luf
matter. Why don't the latencies already present in the vanilla kernel
matter?

Byungchul

> --
> Michal Hocko
> SUSE Labs