2024-05-31 09:26:02

by Byungchul Park

Subject: [PATCH v11 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%

Hi everyone,

While working with a tiered memory system, e.g. CXL memory, I have
been facing migration overhead, especially tlb shootdown on promotion
or demotion between different tiers. Most tlb shootdowns on migration
through hinting fault can already be avoided thanks to Huang Ying's
work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
is inaccessible").

However, that covers only migration through hinting fault. It would be
much better to have a general mechanism that reduces the number of tlb
flushes and that can be applied to any unmap code where we normally
assume a tlb flush must follow.

I'm suggesting a new mechanism, LUF (Lazy Unmap Flush), that defers
the tlb flush for folios that have been unmapped and freed until they
eventually get allocated again. This is safe for folios that had been
mapped read-only and were then unmapped: as long as the contents of
the folios don't change while they stay in pcp or buddy, we can still
read the correct data through the stale tlb entries.

The tlb flush can be deferred when folios get unmapped, as long as the
required flush is guaranteed to be performed before the folios actually
become used again, and only if none of the corresponding ptes has write
permission. Otherwise, the system would get corrupted.

To achieve that, for folios that are mapped only by non-writable tlb
entries, skip the tlb flush during unmapping and perform it just before
the folios actually become used, on their way out of buddy or pcp.
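
To illustrate the flow only, here is a minimal userspace sketch of the
idea; every name in it is made up for illustration and none of it is
the API introduced by this series:

#include <stdbool.h>
#include <stdio.h>

/*
 * Userspace model of the LUF idea, not kernel code: a freed "page"
 * remembers the tlb flush it still owes, and the allocator settles
 * that debt just before the page is handed out again.
 */
struct fake_page {
	unsigned short ugen;	/* generation of the flush still owed */
};

static unsigned short luf_gen;	/* global unmap generation counter */

static void tlb_flush(unsigned short gen)
{
	printf("tlb flush for generation %d\n", gen);
}

/* Unmap: only read-only mappings may defer the flush. */
static void unmap_page(struct fake_page *p, bool writable)
{
	if (writable) {
		tlb_flush(++luf_gen);	/* flush immediately as usual */
		p->ugen = 0;
	} else {
		p->ugen = ++luf_gen;	/* defer: record the pending flush */
	}
}

/* Allocate: perform the deferred flush before the page gets reused. */
static void alloc_page_checked(struct fake_page *p)
{
	if (p->ugen) {
		tlb_flush(p->ugen);
		p->ugen = 0;
	}
}

int main(void)
{
	struct fake_page pg = { 0 };

	unmap_page(&pg, false);		/* read-only: no flush yet */
	alloc_page_checked(&pg);	/* flush happens here, before reuse */
	return 0;
}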

However, we should cancel the deferral by LUF and perform the deferred
tlb flush right away when:

1. a writable pte is newly set through fault handler
2. a file is updated
3. kasan needs poisoning on free
4. the kernel wants to init pages on free

No matter what type of workload is used for performance evaluation,
the result should be positive thanks to the unconditional reduction of
tlb flushes, tlb misses and interrupts. For the test, I picked one of
the most popular and heaviest workloads, llama.cpp, an LLM (Large
Language Model) inference engine.

The result depends on memory latency and on how often reclaim runs,
which determine the tlb miss overhead and how many times unmapping
happens. On my system, the result shows:

1. tlb shootdown interrupts are reduced by about 97%.
2. The test program runtime is reduced by about 4.5%.

The test environment and the results are as follows:

Machine: bare metal, x86_64, Intel(R) Xeon(R) Gold 6430
CPU: 1 socket, 64 cores, with hyper-threading on
Numa: 2 nodes (64 CPUs with 42GB DRAM, CPU-less CXL expander with 98GB)
Config: swap off, numa balancing tiering on, demotion enabled

The test set:

llama.cpp/main -m $(70G_model1) -p "who are you?" -s 1 -t 15 -n 20 &
llama.cpp/main -m $(70G_model2) -p "who are you?" -s 1 -t 15 -n 20 &
llama.cpp/main -m $(70G_model3) -p "who are you?" -s 1 -t 15 -n 20 &
wait

where -t: nr of threads, -s: seed used to make the runtime stable,
-n: nr of tokens that determines the runtime, -p: prompt to ask,
-m: LLM model to use.

The test set was run 5 times in succession, with caches dropped before
every run via 'echo 3 > /proc/sys/vm/drop_caches'. Each inference
prints its runtime when it finishes.

1. Runtime from the output of llama.cpp:

BEFORE
------
llama_print_timings: total time = 883450.54 ms / 24 tokens
llama_print_timings: total time = 861665.91 ms / 24 tokens
llama_print_timings: total time = 898079.02 ms / 24 tokens
llama_print_timings: total time = 879897.69 ms / 24 tokens
llama_print_timings: total time = 892360.75 ms / 24 tokens
llama_print_timings: total time = 884587.85 ms / 24 tokens
llama_print_timings: total time = 861023.19 ms / 24 tokens
llama_print_timings: total time = 900022.18 ms / 24 tokens
llama_print_timings: total time = 878771.88 ms / 24 tokens
llama_print_timings: total time = 889027.98 ms / 24 tokens
llama_print_timings: total time = 880783.90 ms / 24 tokens
llama_print_timings: total time = 856475.29 ms / 24 tokens
llama_print_timings: total time = 896842.21 ms / 24 tokens
llama_print_timings: total time = 878883.53 ms / 24 tokens
llama_print_timings: total time = 890122.10 ms / 24 tokens

AFTER
-----
llama_print_timings: total time = 871060.86 ms / 24 tokens
llama_print_timings: total time = 825609.53 ms / 24 tokens
llama_print_timings: total time = 836854.81 ms / 24 tokens
llama_print_timings: total time = 843147.99 ms / 24 tokens
llama_print_timings: total time = 831426.65 ms / 24 tokens
llama_print_timings: total time = 873939.23 ms / 24 tokens
llama_print_timings: total time = 826127.69 ms / 24 tokens
llama_print_timings: total time = 835489.26 ms / 24 tokens
llama_print_timings: total time = 842589.62 ms / 24 tokens
llama_print_timings: total time = 833700.66 ms / 24 tokens
llama_print_timings: total time = 875996.19 ms / 24 tokens
llama_print_timings: total time = 826401.73 ms / 24 tokens
llama_print_timings: total time = 839341.28 ms / 24 tokens
llama_print_timings: total time = 841075.10 ms / 24 tokens
llama_print_timings: total time = 835136.41 ms / 24 tokens

2. tlb shootdowns from 'cat /proc/interrupts':

BEFORE
------
TLB:
80911532 93691786 100296251 111062810 109769109 109862429
108968588 119175230 115779676 118377498 119325266 120300143
124514185 116697222 121068466 118031913 122660681 117494403
121819907 116960596 120936335 117217061 118630217 122322724
119595577 111693298 119232201 120030377 115334687 113179982
118808254 116353592 140987367 137095516 131724276 139742240
136501150 130428761 127585535 132483981 133430250 133756207
131786710 126365824 129812539 133850040 131742690 125142213
128572830 132234350 131945922 128417707 133355434 129972846
126331823 134050849 133991626 121129038 124637283 132830916
126875507 122322440 125776487 124340278 TLB shootdowns

AFTER
-----
TLB:
2121206 2615108 2983494 2911950 3055086 3092672
3204894 3346082 3286744 3307310 3357296 3315940
3428034 3112596 3143325 3185551 3186493 3322314
3330523 3339663 3156064 3272070 3296309 3198962
3332662 3315870 3234467 3353240 3281234 3300666
3345452 3173097 4009196 3932215 3898735 3726531
3717982 3671726 3728788 3724613 3799147 3691764
3620630 3684655 3666688 3393974 3448651 3487593
3446357 3618418 3671920 3712949 3575264 3715385
3641513 3630897 3691047 3630690 3504933 3662647
3629926 3443044 3832970 3548813 TLB shootdowns

---

Changes from v10:

1. Rebase on akpm/mm.git mm-unstable as of May 28, 2024.
2. Cancel LUF on file_end_write() when updating a file.
(feedback from Dave Hansen)
3. Cancel LUF after every update_mmu_tlb*() in the fault handler if
it's going to set the pte to writable.
4. Cancel LUF on freeing pages if kasan needs poisoning.
(feedback from David Hildenbrand)
5. Cancel LUF on freeing pages if want_init_on_free().
(feedback from David Hildenbrand)
6. Change the test iterations from 10 to 5.
7. Don't include perf results. (I will add them if needed.)
8. Trivial optimization.

Changes from v9:

1. Expand the candidates to which this mechanism applies:
BEFORE - The source folios of any type of migration.
AFTER - Any folios that have been unmapped and freed.
2. Change the workload for the test:
BEFORE - XSBench
AFTER - llama.cpp (one of the most popular real workloads)
3. Change the test environment:
BEFORE - qemu machine, too small DRAM(1GB), large remote mem
AFTER - bare metal, real CXL memory, practical memory size
4. Rename the mechanism from MIGRC(Migration Read Copy) to
LUF(Lazy Unmap Flush) to reflect that the current version of the
mechanism can be applied not only to unmap during migration but
to any unmap code, e.g. unmap in shrink_folio_list().
5. Fix a build error for riscv. (reported by the kernel test bot)
6. Supplement commit messages to describe what this mechanism is
for, especially in the patches for arch code. (feedback from
Thomas Gleixner)
7. Clean up some trivial things.

Changes from v8:

1. Rebase on akpm/mm.git mm-unstable as of April 18, 2024.
2. Supplement comments and commit message.
3. Change the candidates to which the migrc mechanism applies:
BEFORE - The source folios at demotion and promotion.
AFTER - The source folios of any type of migration.
4. Change how the migrc mechanism works:
BEFORE - Reduce tlb flushes by deferring folio_free() for
source folios during demotion and promotion.
AFTER - Reduce tlb flushes by deferring the tlb flush until the
folios actually become used, on exit from pcp or buddy.
The current version of migrc does *not* defer calling
folio_free() but lets it proceed as in the vanilla
kernel, with the folios marked as 'needing a tlb
flush'. The flush is then handled when the page exits
from pcp or buddy, so as not to change vm stats,
e.g. free pages.

Changes from v7:

1. Rewrite the cover letter to explain what the 'migrc' mechanism is.
(feedback from Andrew Morton)
2. Supplement the commit message of a patch 'mm: Add APIs to
free a folio directly to the buddy bypassing pcp'.
(feedback from Andrew Morton)

Changes from v6:

1. Fix build errors in case of
CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH disabled by moving
migrc_flush_{start,end}() calls from arch code to
try_to_unmap_flush() in mm/rmap.c.

Changes from v5:

1. Fix build errors in case of CONFIG_MIGRATION disabled or
CONFIG_HWPOISON_INJECT built as a module. (feedback from the
kernel test bot and Raymond Jay Golo)
2. Organize the migrc code with two kconfigs, CONFIG_MIGRATION and
CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH.

Changes from v4:

1. Rebase on v6.7.
2. Fix build errors on arm64, which does nothing for the batched
tlb flush but has CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH.
(reported by kernel test robot)
3. Don't use any page flag, so the system gives up on the migrc
mechanism more often, but that's okay; the final improvement is
good enough.
4. Instead, optimize the full tlb flush (arch_tlbbatch_flush()) by
excluding CPUs for which the tlb flush would be redundant.

Changes from v3:

1. Don't use the kconfig, CONFIG_MIGRC, and remove the sysctl knob,
migrc_enable. (feedback from Nadav)
2. Remove the optimization that skips CPUs that have already
performed the needed tlb flushes for any reason when migrc
performs tlb flushes, because I couldn't tell the performance
difference with and without the optimization.
(feedback from Nadav)
3. Minimize arch-specific code. While at it, move all the migrc
declarations and inline functions from include/linux/mm.h to
mm/internal.h. (feedback from Dave Hansen, Nadav)
4. Separate the part that pauses migrc when the system is under
high memory pressure into another patch. (feedback from Nadav)
5. Rename:
a. arch_tlbbatch_clean() to arch_tlbbatch_clear(),
b. tlb_ubc_nowr to tlb_ubc_ro,
c. migrc_try_flush_free_folios() to migrc_flush_free_folios(),
d. migrc_stop to migrc_pause.
(feedback from Nadav)
6. Use the ->lru list_head instead of introducing a new llist_head.
(feedback from Nadav)
7. Use non-atomic page-flag operations when it's safe.
(feedback from Nadav)
8. Use the stack instead of keeping a pointer to 'struct migrc_req'
in struct task, since it is only manipulated locally.
(feedback from Nadav)
9. Replace a lot of simple functions with inline functions placed
in a header, mm/internal.h. (feedback from Nadav)
10. Add sufficient additional comments. (feedback from Nadav)
11. Remove a lot of wrapper functions. (feedback from Nadav)

Changes from RFC v2:

1. Remove the additional field in struct page. To do that, union
migrc's list with the lru field and add a page flag. I know a
page flag is something we don't like to add, but there is no
choice because migrc has to distinguish folios under migrc's
control from others. To mitigate the concern, migrc is used
only on 64-bit systems.
2. Remove the meaningless internal object allocator that I had
introduced to minimize the impact on the system; a ton of
tests showed it made no difference.
3. Stop migrc from working when the system is under high memory
pressure, e.g. about to perform direct reclaim. Under
conditions where the swap mechanism is heavily used, I found
the system suffered a regression without this control.
4. Exclude folios with pte_dirty() == true from migrc's interest
so that migrc can work more simply.
5. Combine several tightly coupled patches into one.
6. Add sufficient comments for better review.
7. Manage migrc's requests per node (previously globally).
8. Add the tlb miss improvement to the commit message.
9. Test with more CPUs (4 -> 16) to see a bigger improvement.

Changes from RFC:

1. Fix a bug triggered when a destination folio of a previous
migration becomes a source folio of the next migration before
the folio has been handled properly, so that the folio can
take part in another migration. There was an inconsistency in
the folio's state; it's fixed now.
2. Split the patch set into more pieces for easier review.
(Feedback from Nadav Amit)
3. Fix a wrong usage of a barrier, e.g. smp_mb__after_atomic().
(Feedback from Nadav Amit)
4. Tried to add sufficient comments to explain the patch set
better. (Feedback from Nadav Amit)

Byungchul Park (12):
x86/tlb: add APIs manipulating tlb batch's arch data
arm64: tlbflush: add APIs manipulating tlb batch's arch data
riscv, tlb: add APIs manipulating tlb batch's arch data
x86/tlb, riscv/tlb, mm/rmap: separate arch_tlbbatch_clear() out of
arch_tlbbatch_flush()
mm: buddy: make room for a new variable, ugen, in struct page
mm: add folio_put_ugen() to deliver unmap generation number to pcp or
buddy
mm: add a parameter, unmap generation number, to free_unref_folios()
mm/rmap: recognize read-only tlb entries during batched tlb flush
mm: implement LUF(Lazy Unmap Flush) deferring tlb flush when folios get
unmapped
mm: separate move/undo parts from migrate_pages_batch()
mm, migrate: apply luf mechanism to unmapping during migration
mm, vmscan: apply luf mechanism to unmapping during folio reclaim

arch/arm64/include/asm/tlbflush.h | 18 ++
arch/riscv/include/asm/tlbflush.h | 21 ++
arch/riscv/mm/tlbflush.c | 1 -
arch/x86/include/asm/tlbflush.h | 18 ++
arch/x86/mm/tlb.c | 2 -
include/linux/fs.h | 6 +
include/linux/mm.h | 22 ++
include/linux/mm_types.h | 48 +++-
include/linux/rmap.h | 7 +-
include/linux/sched.h | 11 +
mm/compaction.c | 12 +-
mm/internal.h | 115 +++++++++-
mm/memory.c | 39 +++-
mm/migrate.c | 184 ++++++++++------
mm/page_alloc.c | 174 ++++++++++++---
mm/page_isolation.c | 6 +
mm/page_reporting.c | 10 +
mm/rmap.c | 352 +++++++++++++++++++++++++++++-
mm/swap.c | 18 +-
mm/vmscan.c | 29 ++-
20 files changed, 959 insertions(+), 134 deletions(-)


base-commit: b610f75d19a34b488021b9a4d2e3bd1cf34fc200
--
2.17.1



2024-05-31 09:29:18

by Byungchul Park

Subject: [PATCH v11 08/12] mm/rmap: recognize read-only tlb entries during batched tlb flush

No functional change. This is a preparation for the luf mechanism,
which requires recognizing read-only tlb entries and handling them in
a different way. The newly introduced API in this patch, fold_ubc(),
will be used by the luf mechanism.
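
For reference, here is a minimal userspace sketch of the intended
interaction between the two batches; the types and values are
illustrative stand-ins, not the kernel structures. Read-only unmaps
are collected separately and folded into the main batch only when a
flush is actually about to happen:

#include <stdbool.h>
#include <stdio.h>

/* Userspace stand-in for struct tlbflush_unmap_batch. */
struct fake_ubc {
	unsigned long cpus;	/* bitmask instead of the arch cpumask */
	bool flush_required;
	bool writable;
};

/* Fold src into dst and reset src, mirroring fold_ubc() below. */
static void fold_fake_ubc(struct fake_ubc *dst, struct fake_ubc *src)
{
	if (!src->flush_required)
		return;

	dst->cpus |= src->cpus;
	dst->writable = dst->writable || src->writable;
	dst->flush_required = true;

	src->cpus = 0;
	src->flush_required = false;
	src->writable = false;
}

int main(void)
{
	struct fake_ubc tlb_ubc = { 0, false, false };
	struct fake_ubc tlb_ubc_ro = { 0x2, true, false };	/* CPU 1 */

	/* Just before flushing, the read-only batch joins the main one. */
	fold_fake_ubc(&tlb_ubc, &tlb_ubc_ro);
	printf("flush required: %d, cpus: %#lx\n",
	       tlb_ubc.flush_required, tlb_ubc.cpus);
	return 0;
}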

Signed-off-by: Byungchul Park <[email protected]>
---
include/linux/sched.h | 1 +
mm/internal.h | 4 ++++
mm/rmap.c | 34 ++++++++++++++++++++++++++++++++--
3 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index ab5a2ed79b88..d9722c014157 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1340,6 +1340,7 @@ struct task_struct {
#endif

struct tlbflush_unmap_batch tlb_ubc;
+ struct tlbflush_unmap_batch tlb_ubc_ro;
unsigned short int ugen;

/* Cache last used pipe for splice(): */
diff --git a/mm/internal.h b/mm/internal.h
index dba6d0eb7b6d..ca6fb5b2a640 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1124,6 +1124,7 @@ extern struct workqueue_struct *mm_percpu_wq;
void try_to_unmap_flush(void);
void try_to_unmap_flush_dirty(void);
void flush_tlb_batched_pending(struct mm_struct *mm);
+void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src);
#else
static inline void try_to_unmap_flush(void)
{
@@ -1134,6 +1135,9 @@ static inline void try_to_unmap_flush_dirty(void)
static inline void flush_tlb_batched_pending(struct mm_struct *mm)
{
}
+static inline void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src)
+{
+}
#endif /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */

extern const struct trace_print_flags pageflag_names[];
diff --git a/mm/rmap.c b/mm/rmap.c
index a65a94aada8d..1a246788e867 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -634,6 +634,28 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
}

#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+
+void fold_ubc(struct tlbflush_unmap_batch *dst,
+ struct tlbflush_unmap_batch *src)
+{
+ if (!src->flush_required)
+ return;
+
+ /*
+ * Fold src to dst.
+ */
+ arch_tlbbatch_fold(&dst->arch, &src->arch);
+ dst->writable = dst->writable || src->writable;
+ dst->flush_required = true;
+
+ /*
+ * Reset src.
+ */
+ arch_tlbbatch_clear(&src->arch);
+ src->flush_required = false;
+ src->writable = false;
+}
+
/*
* Flush TLB entries for recently unmapped pages from remote CPUs. It is
* important if a PTE was dirty when it was unmapped that it's flushed
@@ -643,7 +665,9 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
void try_to_unmap_flush(void)
{
struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+ struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;

+ fold_ubc(tlb_ubc, tlb_ubc_ro);
if (!tlb_ubc->flush_required)
return;

@@ -657,8 +681,9 @@ void try_to_unmap_flush(void)
void try_to_unmap_flush_dirty(void)
{
struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+ struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;

- if (tlb_ubc->writable)
+ if (tlb_ubc->writable || tlb_ubc_ro->writable)
try_to_unmap_flush();
}

@@ -675,13 +700,18 @@ void try_to_unmap_flush_dirty(void)
static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
unsigned long uaddr)
{
- struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+ struct tlbflush_unmap_batch *tlb_ubc;
int batch;
bool writable = pte_dirty(pteval);

if (!pte_accessible(mm, pteval))
return;

+ if (pte_write(pteval))
+ tlb_ubc = &current->tlb_ubc;
+ else
+ tlb_ubc = &current->tlb_ubc_ro;
+
arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr);
tlb_ubc->flush_required = true;

--
2.17.1


2024-05-31 09:29:18

by Byungchul Park

Subject: [PATCH v11 06/12] mm: add folio_put_ugen() to deliver unmap generation number to pcp or buddy

Introduce a new API, folio_put_ugen(), to deliver an unmap generation
number to pcp or buddy, which will be used by the luf mechanism to
track the need for a tlb flush for each page residing in pcp or buddy.

For now, the delivery works for the following call path, which is the
one that releases source folios during migration:

folio_put_ugen()
__folio_put_ugen()
free_unref_page()
free_unref_page_commit()
free_one_page()
__free_one_page()

The generation number should be handed over properly when pages travel
between pcp and buddy, and the necessary handling must be done when a
page exits from pcp or buddy. This patch doesn't include the actual
body of the tlb flush on exit, which will be filled in by the main
patch of the luf mechanism.
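
The generation number is a 16-bit counter that wraps around. For
reference, here is a minimal userspace sketch of the wrap-around-safe
"which generation is newer" comparison, modeled on the ugen_latest()
helper added below; the values in main() are only examples:

#include <stdio.h>

/*
 * Wrap-around-safe "which generation is newer" comparison, modeled on
 * the ugen_latest() helper in this patch.  0 is reserved to mean "no
 * generation recorded".
 */
static unsigned short ugen_latest(unsigned short a, unsigned short b)
{
	if (!a || !b)
		return a + b;	/* one of them is "none": take the other */

	/* The signed difference handles wrap-around of the counter. */
	if ((short)(a - b) < 0)
		return b;
	return a;
}

int main(void)
{
	printf("%d\n", ugen_latest(1, 2));	/* 2: plainly newer */
	printf("%d\n", ugen_latest(65535, 1));	/* 1: newer, counter wrapped */
	printf("%d\n", ugen_latest(0, 7));	/* 7: 0 means "none" */
	return 0;
}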

Signed-off-by: Byungchul Park <[email protected]>
---
include/linux/mm.h | 22 +++++++
include/linux/sched.h | 1 +
mm/compaction.c | 10 +++
mm/internal.h | 71 ++++++++++++++++++++-
mm/page_alloc.c | 144 ++++++++++++++++++++++++++++++++++--------
mm/page_isolation.c | 6 ++
mm/page_reporting.c | 10 +++
mm/swap.c | 12 +++-
8 files changed, 248 insertions(+), 28 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3aa1b6889bcc..54cb6316a76d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1316,6 +1316,7 @@ static inline struct folio *virt_to_folio(const void *x)
}

void __folio_put(struct folio *folio);
+void __folio_put_ugen(struct folio *folio, unsigned short int ugen);

void put_pages_list(struct list_head *pages);

@@ -1508,6 +1509,27 @@ static inline void folio_put(struct folio *folio)
__folio_put(folio);
}

+/**
+ * folio_put_ugen - Decrement the last reference count on a folio.
+ * @folio: The folio.
+ * @ugen: The unmap generation # of TLB flush that the folio requires.
+ *
+ * The folio's reference count should be one since the only user, folio
+ * migration code, calls folio_put_ugen() only when the folio has no
+ * other reference. The memory will be released back to the page
+ * allocator and may be used by another allocation immediately. Do not
+ * access the memory or the struct folio after calling folio_put_ugen().
+ *
+ * Context: May be called in process or interrupt context, but not in NMI
+ * context. May be called while holding a spinlock.
+ */
+static inline void folio_put_ugen(struct folio *folio, unsigned short int ugen)
+{
+ if (WARN_ON(!folio_put_testzero(folio)))
+ return;
+ __folio_put_ugen(folio, ugen);
+}
+
/**
* folio_put_refs - Reduce the reference count on a folio.
* @folio: The folio.
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 61591ac6eab6..ab5a2ed79b88 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1340,6 +1340,7 @@ struct task_struct {
#endif

struct tlbflush_unmap_batch tlb_ubc;
+ unsigned short int ugen;

/* Cache last used pipe for splice(): */
struct pipe_inode_info *splice_pipe;
diff --git a/mm/compaction.c b/mm/compaction.c
index e731d45befc7..13799fbb2a9a 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -701,6 +701,11 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
if (locked)
spin_unlock_irqrestore(&cc->zone->lock, flags);

+ /*
+ * Check and flush before using the isolated pages.
+ */
+ check_flush_task_ugen();
+
/*
* Be careful to not go outside of the pageblock.
*/
@@ -1673,6 +1678,11 @@ static void fast_isolate_freepages(struct compact_control *cc)

spin_unlock_irqrestore(&cc->zone->lock, flags);

+ /*
+ * Check and flush before using the isolated pages.
+ */
+ check_flush_task_ugen();
+
/* Skip fast search if enough freepages isolated */
if (cc->nr_freepages >= cc->nr_migratepages)
break;
diff --git a/mm/internal.h b/mm/internal.h
index 552e1061d36d..380ae980e4f9 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -661,7 +661,7 @@ extern bool free_pages_prepare(struct page *page, unsigned int order);

extern int user_min_free_kbytes;

-void free_unref_page(struct page *page, unsigned int order);
+void free_unref_page(struct page *page, unsigned int order, unsigned short int ugen);
void free_unref_folios(struct folio_batch *fbatch);

extern void zone_pcp_reset(struct zone *zone);
@@ -1536,6 +1536,75 @@ static inline void shrinker_debugfs_remove(struct dentry *debugfs_entry,
void workingset_update_node(struct xa_node *node);
extern struct list_lru shadow_nodes;

+#if defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH)
+static inline unsigned short int ugen_latest(unsigned short int a, unsigned short int b)
+{
+ if (!a || !b)
+ return a + b;
+
+ /*
+ * The ugen is wrapped around so let's use this trick.
+ */
+ if ((short int)(a - b) < 0)
+ return b;
+ else
+ return a;
+}
+
+static inline void update_task_ugen(unsigned short int ugen)
+{
+ current->ugen = ugen_latest(current->ugen, ugen);
+}
+
+static inline unsigned short int hand_over_task_ugen(void)
+{
+ unsigned short int ret = current->ugen;
+
+ current->ugen = 0;
+ return ret;
+}
+
+static inline void check_flush_task_ugen(void)
+{
+ /*
+ * XXX: luf mechanism will handle this. For now, do nothing but
+ * reset current's ugen to finalize this turn.
+ */
+ current->ugen = 0;
+}
+
+/*
+ * Check the constraints of what luf currently supports.
+ */
+static inline bool can_luf_folio(struct folio *f)
+{
+ bool can_luf = true;
+
+ /*
+ * XXX: Remove the constraint once luf handles zone device folio.
+ */
+ can_luf = can_luf && likely(!folio_is_zone_device(f));
+
+ /*
+ * XXX: Remove the constraint once luf handles hugetlb folio.
+ */
+ can_luf = can_luf && likely(!folio_test_hugetlb(f));
+
+ /*
+ * XXX: Remove the constraint once luf handles large folio.
+ */
+ can_luf = can_luf && likely(!folio_test_large(f));
+
+ return can_luf;
+}
+#else /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
+static inline unsigned short int ugen_latest(unsigned short int a, unsigned short int b) { return 0; }
+static inline void update_task_ugen(unsigned short int ugen) {}
+static inline unsigned short int hand_over_task_ugen(void) { return 0; }
+static inline void check_flush_task_ugen(void) {}
+static inline bool can_luf_folio(struct folio *f) { return false; }
+#endif
+
struct unlink_vma_file_batch {
int count;
struct vm_area_struct *vmas[8];
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ae57dd8718fe..6fbbe45be5ae 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -688,6 +688,7 @@ static inline void __del_page_from_free_list(struct page *page, struct zone *zon
if (page_reported(page))
__ClearPageReported(page);

+ update_task_ugen(page_buddy_ugen(page));
list_del(&page->buddy_list);
__ClearPageBuddy(page);
set_page_private(page, 0);
@@ -760,7 +761,7 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
static inline void __free_one_page(struct page *page,
unsigned long pfn,
struct zone *zone, unsigned int order,
- int migratetype, fpi_t fpi_flags)
+ int migratetype, fpi_t fpi_flags, unsigned short int ugen)
{
struct capture_control *capc = task_capc(zone);
unsigned long buddy_pfn = 0;
@@ -775,12 +776,22 @@ static inline void __free_one_page(struct page *page,
VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
VM_BUG_ON_PAGE(bad_range(zone, page), page);

+ /*
+ * Ensure private is zero before using it inside buddy.
+ */
+ set_page_private(page, 0);
+
account_freepages(zone, 1 << order, migratetype);

while (order < MAX_PAGE_ORDER) {
int buddy_mt = migratetype;

if (compaction_capture(capc, page, order, migratetype)) {
+ /*
+ * Capturer will check_flush_task_ugen() through
+ * prep_new_page().
+ */
+ update_task_ugen(ugen);
account_freepages(zone, -(1 << order), migratetype);
return;
}
@@ -811,6 +822,11 @@ static inline void __free_one_page(struct page *page,
if (page_is_guard(buddy))
clear_page_guard(zone, buddy, order);
else
+ /*
+ * __del_page_from_free_list() updates current's
+ * ugen that pairs with hand_over_task_ugen() below
+ * in this function.
+ */
__del_page_from_free_list(buddy, zone, order, buddy_mt);

if (unlikely(buddy_mt != migratetype)) {
@@ -829,7 +845,8 @@ static inline void __free_one_page(struct page *page,
}

done_merging:
- set_buddy_order_ugen(page, order, 0);
+ ugen = ugen_latest(ugen, hand_over_task_ugen());
+ set_buddy_order_ugen(page, order, ugen);

if (fpi_flags & FPI_TO_TAIL)
to_tail = true;
@@ -1040,6 +1057,11 @@ __always_inline bool free_pages_prepare(struct page *page,

VM_BUG_ON_PAGE(PageTail(page), page);

+ /*
+ * Ensure private is zero before using it inside pcp.
+ */
+ set_page_private(page, 0);
+
trace_mm_page_free(page, order);
kmsan_free_page(page, order);

@@ -1171,17 +1193,23 @@ static void free_pcppages_bulk(struct zone *zone, int count,
do {
unsigned long pfn;
int mt;
+ unsigned short int ugen;

page = list_last_entry(list, struct page, pcp_list);
pfn = page_to_pfn(page);
mt = get_pfnblock_migratetype(page, pfn);

+ /*
+ * pcp uses private to store ugen.
+ */
+ ugen = page_private(page);
+
/* must delete to avoid corrupting pcp list */
list_del(&page->pcp_list);
count -= nr_pages;
pcp->count -= nr_pages;

- __free_one_page(page, pfn, zone, order, mt, FPI_NONE);
+ __free_one_page(page, pfn, zone, order, mt, FPI_NONE, ugen);
trace_mm_page_pcpu_drain(page, order, mt);
} while (count > 0 && !list_empty(list));
}
@@ -1191,14 +1219,14 @@ static void free_pcppages_bulk(struct zone *zone, int count,

static void free_one_page(struct zone *zone, struct page *page,
unsigned long pfn, unsigned int order,
- fpi_t fpi_flags)
+ fpi_t fpi_flags, unsigned short int ugen)
{
unsigned long flags;
int migratetype;

spin_lock_irqsave(&zone->lock, flags);
migratetype = get_pfnblock_migratetype(page, pfn);
- __free_one_page(page, pfn, zone, order, migratetype, fpi_flags);
+ __free_one_page(page, pfn, zone, order, migratetype, fpi_flags, ugen);
spin_unlock_irqrestore(&zone->lock, flags);
}

@@ -1211,7 +1239,7 @@ static void __free_pages_ok(struct page *page, unsigned int order,
if (!free_pages_prepare(page, order))
return;

- free_one_page(zone, page, pfn, order, fpi_flags);
+ free_one_page(zone, page, pfn, order, fpi_flags, 0);

__count_vm_events(PGFREE, 1 << order);
}
@@ -1476,6 +1504,10 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
unsigned int alloc_flags)
{
+ /*
+ * Check and flush before using the pages.
+ */
+ check_flush_task_ugen();
post_alloc_hook(page, order, gfp_flags);

if (order && (gfp_flags & __GFP_COMP))
@@ -1511,6 +1543,10 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
page = get_page_from_free_area(area, migratetype);
if (!page)
continue;
+ /*
+ * del_page_from_free_list() updates current's ugen that
+ * pairs with check_flush_task_ugen() in prep_new_page().
+ */
del_page_from_free_list(page, zone, current_order, migratetype);
expand(zone, page, order, current_order, migratetype);
trace_mm_page_alloc_zone_locked(page, order, migratetype,
@@ -1673,7 +1709,8 @@ static unsigned long find_large_buddy(unsigned long start_pfn)

/* Split a multi-block free page into its individual pageblocks */
static void split_large_buddy(struct zone *zone, struct page *page,
- unsigned long pfn, int order)
+ unsigned long pfn, int order,
+ unsigned short int ugen)
{
unsigned long end_pfn = pfn + (1 << order);

@@ -1686,7 +1723,7 @@ static void split_large_buddy(struct zone *zone, struct page *page,
while (pfn != end_pfn) {
int mt = get_pfnblock_migratetype(page, pfn);

- __free_one_page(page, pfn, zone, pageblock_order, mt, FPI_NONE);
+ __free_one_page(page, pfn, zone, pageblock_order, mt, FPI_NONE, ugen);
pfn += pageblock_nr_pages;
page = pfn_to_page(pfn);
}
@@ -1728,22 +1765,34 @@ bool move_freepages_block_isolate(struct zone *zone, struct page *page,
if (pfn != start_pfn) {
struct page *buddy = pfn_to_page(pfn);
int order = buddy_order(buddy);
+ unsigned short int ugen;

+ /*
+ * del_page_from_free_list() updates current's ugen that
+ * pairs with the following hand_over_task_ugen().
+ */
del_page_from_free_list(buddy, zone, order,
get_pfnblock_migratetype(buddy, pfn));
+ ugen = hand_over_task_ugen();
set_pageblock_migratetype(page, migratetype);
- split_large_buddy(zone, buddy, pfn, order);
+ split_large_buddy(zone, buddy, pfn, order, ugen);
return true;
}

/* We're the starting block of a larger buddy */
if (PageBuddy(page) && buddy_order(page) > pageblock_order) {
int order = buddy_order(page);
+ unsigned short int ugen;

+ /*
+ * del_page_from_free_list() updates current's ugen that
+ * pairs with the following hand_over_task_ugen().
+ */
del_page_from_free_list(page, zone, order,
get_pfnblock_migratetype(page, pfn));
+ ugen = hand_over_task_ugen();
set_pageblock_migratetype(page, migratetype);
- split_large_buddy(zone, page, pfn, order);
+ split_large_buddy(zone, page, pfn, order, ugen);
return true;
}
move:
@@ -1863,6 +1912,10 @@ steal_suitable_fallback(struct zone *zone, struct page *page,

/* Take ownership for orders >= pageblock_order */
if (current_order >= pageblock_order) {
+ /*
+ * del_page_from_free_list() updates current's ugen that
+ * pairs with check_flush_task_ugen() in prep_new_page().
+ */
del_page_from_free_list(page, zone, current_order, block_type);
change_pageblock_range(page, current_order, start_type);
expand(zone, page, order, current_order, start_type);
@@ -1918,6 +1971,10 @@ steal_suitable_fallback(struct zone *zone, struct page *page,
}

single_page:
+ /*
+ * del_page_from_free_list() updates current's ugen that pairs
+ * with check_flush_task_ugen() in prep_new_page().
+ */
del_page_from_free_list(page, zone, current_order, block_type);
expand(zone, page, order, current_order, block_type);
return page;
@@ -2539,7 +2596,7 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,

static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
struct page *page, int migratetype,
- unsigned int order)
+ unsigned int order, unsigned short int ugen)
{
int high, batch;
int pindex;
@@ -2553,6 +2610,11 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
pcp->alloc_factor >>= 1;
__count_vm_events(PGFREE, 1 << order);
pindex = order_to_pindex(migratetype, order);
+
+ /*
+ * pcp uses private to store ugen.
+ */
+ set_page_private(page, ugen);
list_add(&page->pcp_list, &pcp->lists[pindex]);
pcp->count += 1 << order;

@@ -2588,7 +2650,8 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
/*
* Free a pcp page
*/
-void free_unref_page(struct page *page, unsigned int order)
+void free_unref_page(struct page *page, unsigned int order,
+ unsigned short int ugen)
{
unsigned long __maybe_unused UP_flags;
struct per_cpu_pages *pcp;
@@ -2614,7 +2677,7 @@ void free_unref_page(struct page *page, unsigned int order)
migratetype = get_pfnblock_migratetype(page, pfn);
if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
if (unlikely(is_migrate_isolate(migratetype))) {
- free_one_page(page_zone(page), page, pfn, order, FPI_NONE);
+ free_one_page(page_zone(page), page, pfn, order, FPI_NONE, ugen);
return;
}
migratetype = MIGRATE_MOVABLE;
@@ -2624,10 +2687,10 @@ void free_unref_page(struct page *page, unsigned int order)
pcp_trylock_prepare(UP_flags);
pcp = pcp_spin_trylock(zone->per_cpu_pageset);
if (pcp) {
- free_unref_page_commit(zone, pcp, page, migratetype, order);
+ free_unref_page_commit(zone, pcp, page, migratetype, order, ugen);
pcp_spin_unlock(pcp);
} else {
- free_one_page(zone, page, pfn, order, FPI_NONE);
+ free_one_page(zone, page, pfn, order, FPI_NONE, ugen);
}
pcp_trylock_finish(UP_flags);
}
@@ -2657,7 +2720,7 @@ void free_unref_folios(struct folio_batch *folios)
*/
if (!pcp_allowed_order(order)) {
free_one_page(folio_zone(folio), &folio->page,
- pfn, order, FPI_NONE);
+ pfn, order, FPI_NONE, 0);
continue;
}
folio->private = (void *)(unsigned long)order;
@@ -2693,7 +2756,7 @@ void free_unref_folios(struct folio_batch *folios)
*/
if (is_migrate_isolate(migratetype)) {
free_one_page(zone, &folio->page, pfn,
- order, FPI_NONE);
+ order, FPI_NONE, 0);
continue;
}

@@ -2706,7 +2769,7 @@ void free_unref_folios(struct folio_batch *folios)
if (unlikely(!pcp)) {
pcp_trylock_finish(UP_flags);
free_one_page(zone, &folio->page, pfn,
- order, FPI_NONE);
+ order, FPI_NONE, 0);
continue;
}
locked_zone = zone;
@@ -2721,7 +2784,7 @@ void free_unref_folios(struct folio_batch *folios)

trace_mm_page_free_batched(&folio->page);
free_unref_page_commit(zone, pcp, &folio->page, migratetype,
- order);
+ order, 0);
}

if (pcp) {
@@ -2772,6 +2835,11 @@ int __isolate_free_page(struct page *page, unsigned int order)
return 0;
}

+ /*
+ * del_page_from_free_list() updates current's ugen. The user of
+ * the isolated page should check_flush_task_ugen() before using
+ * it.
+ */
del_page_from_free_list(page, zone, order, mt);

/*
@@ -2813,7 +2881,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)

/* Return isolated page to tail of freelist. */
__free_one_page(page, page_to_pfn(page), zone, order, mt,
- FPI_SKIP_REPORT_NOTIFY | FPI_TO_TAIL);
+ FPI_SKIP_REPORT_NOTIFY | FPI_TO_TAIL, 0);
}

/*
@@ -2956,6 +3024,11 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
}

page = list_first_entry(list, struct page, pcp_list);
+
+ /*
+ * Pairs with check_flush_task_ugen() in prep_new_page().
+ */
+ update_task_ugen(page_private(page));
list_del(&page->pcp_list);
pcp->count -= 1 << order;
} while (check_new_pages(page, order));
@@ -4782,11 +4855,11 @@ void __free_pages(struct page *page, unsigned int order)
struct alloc_tag *tag = pgalloc_tag_get(page);

if (put_page_testzero(page))
- free_unref_page(page, order);
+ free_unref_page(page, order, 0);
else if (!head) {
pgalloc_tag_sub_pages(tag, (1 << order) - 1);
while (order-- > 0)
- free_unref_page(page + (1 << order), order);
+ free_unref_page(page + (1 << order), order, 0);
}
}
EXPORT_SYMBOL(__free_pages);
@@ -4848,7 +4921,7 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);

if (page_ref_sub_and_test(page, count))
- free_unref_page(page, compound_order(page));
+ free_unref_page(page, compound_order(page), 0);
}
EXPORT_SYMBOL(__page_frag_cache_drain);

@@ -4889,7 +4962,7 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
goto refill;

if (unlikely(nc->pfmemalloc)) {
- free_unref_page(page, compound_order(page));
+ free_unref_page(page, compound_order(page), 0);
goto refill;
}

@@ -4933,7 +5006,7 @@ void page_frag_free(void *addr)
struct page *page = virt_to_head_page(addr);

if (unlikely(put_page_testzero(page)))
- free_unref_page(page, compound_order(page));
+ free_unref_page(page, compound_order(page), 0);
}
EXPORT_SYMBOL(page_frag_free);

@@ -6742,10 +6815,19 @@ void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
BUG_ON(!PageBuddy(page));
VM_WARN_ON(get_pageblock_migratetype(page) != MIGRATE_ISOLATE);
order = buddy_order(page);
+ /*
+ * del_page_from_free_list() updates current's ugen that
+ * pairs with check_flush_task_ugen() below in this function.
+ */
del_page_from_free_list(page, zone, order, MIGRATE_ISOLATE);
pfn += (1 << order);
}
spin_unlock_irqrestore(&zone->lock, flags);
+
+ /*
+ * Check and flush before using it.
+ */
+ check_flush_task_ugen();
}
#endif

@@ -6829,6 +6911,11 @@ bool take_page_off_buddy(struct page *page)
int migratetype = get_pfnblock_migratetype(page_head,
pfn_head);

+ /*
+ * del_page_from_free_list() updates current's
+ * ugen that pairs with check_flush_task_ugen() below
+ * in this function.
+ */
del_page_from_free_list(page_head, zone, page_order,
migratetype);
break_down_buddy_pages(zone, page_head, page, 0,
@@ -6841,6 +6928,11 @@ bool take_page_off_buddy(struct page *page)
break;
}
spin_unlock_irqrestore(&zone->lock, flags);
+
+ /*
+ * Check and flush before using it.
+ */
+ check_flush_task_ugen();
return ret;
}

@@ -6859,7 +6951,7 @@ bool put_page_back_buddy(struct page *page)
int migratetype = get_pfnblock_migratetype(page, pfn);

ClearPageHWPoisonTakenOff(page);
- __free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE);
+ __free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE, 0);
if (TestClearPageHWPoison(page)) {
ret = true;
}
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 042937d5abe4..5823da60a621 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -260,6 +260,12 @@ static void unset_migratetype_isolate(struct page *page, int migratetype)
zone->nr_isolate_pageblock--;
out:
spin_unlock_irqrestore(&zone->lock, flags);
+
+ /*
+ * Check and flush for the pages that have been isolated.
+ */
+ if (isolated_page)
+ check_flush_task_ugen();
}

static inline struct page *
diff --git a/mm/page_reporting.c b/mm/page_reporting.c
index e4c428e61d8c..4f94a3ea1b22 100644
--- a/mm/page_reporting.c
+++ b/mm/page_reporting.c
@@ -221,6 +221,11 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,
/* release lock before waiting on report processing */
spin_unlock_irq(&zone->lock);

+ /*
+ * Check and flush before using the isolated pages.
+ */
+ check_flush_task_ugen();
+
/* begin processing pages in local list */
err = prdev->report(prdev, sgl, PAGE_REPORTING_CAPACITY);

@@ -253,6 +258,11 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,

spin_unlock_irq(&zone->lock);

+ /*
+ * Check and flush before using the isolated pages.
+ */
+ check_flush_task_ugen();
+
return err;
}

diff --git a/mm/swap.c b/mm/swap.c
index dc205bdfbbd4..dae169b19ab9 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -125,10 +125,20 @@ void __folio_put(struct folio *folio)
page_cache_release(folio);
folio_undo_large_rmappable(folio);
mem_cgroup_uncharge(folio);
- free_unref_page(&folio->page, folio_order(folio));
+ free_unref_page(&folio->page, folio_order(folio), 0);
}
EXPORT_SYMBOL(__folio_put);

+void __folio_put_ugen(struct folio *folio, unsigned short int ugen)
+{
+ if (WARN_ON(!can_luf_folio(folio)))
+ return;
+
+ page_cache_release(folio);
+ mem_cgroup_uncharge(folio);
+ free_unref_page(&folio->page, 0, ugen);
+}
+
/**
* put_pages_list() - release a list of pages
* @pages: list of pages threaded on page->lru
--
2.17.1


2024-05-31 09:31:26

by Byungchul Park

Subject: [PATCH v11 12/12] mm, vmscan: apply luf mechanism to unmapping during folio reclaim

A new mechanism, LUF (Lazy Unmap Flush), defers the tlb flush for
folios that have been unmapped and freed until they eventually get
allocated again. It's safe for folios that had been mapped read-only
and were then unmapped, since the contents of the folios don't change
while they stay in pcp or buddy, so we can still read the data through
the stale tlb entries.

Apply the mechanism to unmapping during folio reclaim.

Signed-off-by: Byungchul Park <[email protected]>
---
include/linux/rmap.h | 5 +++--
mm/rmap.c | 5 ++++-
mm/vmscan.c | 21 ++++++++++++++++++++-
3 files changed, 27 insertions(+), 4 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 6aca569e342b..9f3e66239f0a 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -661,7 +661,7 @@ int folio_referenced(struct folio *, int is_locked,
struct mem_cgroup *memcg, unsigned long *vm_flags);

bool try_to_migrate(struct folio *folio, enum ttu_flags flags);
-void try_to_unmap(struct folio *, enum ttu_flags flags);
+bool try_to_unmap(struct folio *, enum ttu_flags flags);

int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
unsigned long end, struct page **pages,
@@ -770,8 +770,9 @@ static inline int folio_referenced(struct folio *folio, int is_locked,
return 0;
}

-static inline void try_to_unmap(struct folio *folio, enum ttu_flags flags)
+static inline bool try_to_unmap(struct folio *folio, enum ttu_flags flags)
{
+ return false;
}

static inline int folio_mkclean(struct folio *folio)
diff --git a/mm/rmap.c b/mm/rmap.c
index b8b977278a1b..6f90c2adc4ae 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2272,10 +2272,11 @@ static int folio_not_mapped(struct folio *folio)
* Tries to remove all the page table entries which are mapping this
* folio. It is the caller's responsibility to check if the folio is
* still mapped if needed (use TTU_SYNC to prevent accounting races).
+ * Return true if all the mappings are read-only, otherwise false.
*
* Context: Caller must hold the folio lock.
*/
-void try_to_unmap(struct folio *folio, enum ttu_flags flags)
+bool try_to_unmap(struct folio *folio, enum ttu_flags flags)
{
struct rmap_walk_control rwc = {
.rmap_one = try_to_unmap_one,
@@ -2300,6 +2301,8 @@ void try_to_unmap(struct folio *folio, enum ttu_flags flags)
fold_ubc(tlb_ubc_luf, tlb_ubc_ro);
else
fold_ubc(tlb_ubc, tlb_ubc_ro);
+
+ return can_luf;
}

/*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 15efe6f0edce..d52a6e605183 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1034,14 +1034,17 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
struct reclaim_stat *stat, bool ignore_references)
{
struct folio_batch free_folios;
+ struct folio_batch free_folios_luf;
LIST_HEAD(ret_folios);
LIST_HEAD(demote_folios);
unsigned int nr_reclaimed = 0;
unsigned int pgactivate = 0;
bool do_demote_pass;
struct swap_iocb *plug = NULL;
+ unsigned short int ugen;

folio_batch_init(&free_folios);
+ folio_batch_init(&free_folios_luf);
memset(stat, 0, sizeof(*stat));
cond_resched();
do_demote_pass = can_demote(pgdat->node_id, sc);
@@ -1053,6 +1056,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
enum folio_references references = FOLIOREF_RECLAIM;
bool dirty, writeback;
unsigned int nr_pages;
+ bool can_luf = false;

cond_resched();

@@ -1295,7 +1299,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
if (folio_test_large(folio) && list_empty(&folio->_deferred_list))
flags |= TTU_SYNC;

- try_to_unmap(folio, flags);
+ can_luf = try_to_unmap(folio, flags);
if (folio_mapped(folio)) {
stat->nr_unmap_fail += nr_pages;
if (!was_swapbacked &&
@@ -1458,6 +1462,18 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
nr_reclaimed += nr_pages;

folio_undo_large_rmappable(folio);
+
+ if (can_luf) {
+ if (folio_batch_add(&free_folios_luf, folio) == 0) {
+ mem_cgroup_uncharge_folios(&free_folios_luf);
+ ugen = try_to_unmap_luf();
+ if (!ugen)
+ try_to_unmap_flush();
+ free_unref_folios(&free_folios_luf, ugen);
+ }
+ continue;
+ }
+
if (folio_batch_add(&free_folios, folio) == 0) {
mem_cgroup_uncharge_folios(&free_folios);
try_to_unmap_flush();
@@ -1527,8 +1543,11 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
pgactivate = stat->nr_activate[0] + stat->nr_activate[1];

mem_cgroup_uncharge_folios(&free_folios);
+ mem_cgroup_uncharge_folios(&free_folios_luf);
+ ugen = try_to_unmap_luf();
try_to_unmap_flush();
free_unref_folios(&free_folios, 0);
+ free_unref_folios(&free_folios_luf, ugen);

list_splice(&ret_folios, folio_list);
count_vm_events(PGACTIVATE, pgactivate);
--
2.17.1


2024-05-31 09:37:49

by Byungchul Park

Subject: [PATCH v11 07/12] mm: add a parameter, unmap generation number, to free_unref_folios()

The unmap generation number is used by the luf mechanism to track the
need for a tlb flush for each page residing in pcp or buddy.

The number should be delivered to pcp or buddy via free_unref_folios(),
which releases folios that have been unmapped during reclaim in
shrink_folio_list().

Signed-off-by: Byungchul Park <[email protected]>
---
mm/internal.h | 2 +-
mm/page_alloc.c | 10 +++++-----
mm/swap.c | 6 +++---
mm/vmscan.c | 8 ++++----
4 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 380ae980e4f9..dba6d0eb7b6d 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -662,7 +662,7 @@ extern bool free_pages_prepare(struct page *page, unsigned int order);
extern int user_min_free_kbytes;

void free_unref_page(struct page *page, unsigned int order, unsigned short int ugen);
-void free_unref_folios(struct folio_batch *fbatch);
+void free_unref_folios(struct folio_batch *fbatch, unsigned short int ugen);

extern void zone_pcp_reset(struct zone *zone);
extern void zone_pcp_disable(struct zone *zone);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6fbbe45be5ae..c9acb4da91e0 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2698,7 +2698,7 @@ void free_unref_page(struct page *page, unsigned int order,
/*
* Free a batch of folios
*/
-void free_unref_folios(struct folio_batch *folios)
+void free_unref_folios(struct folio_batch *folios, unsigned short int ugen)
{
unsigned long __maybe_unused UP_flags;
struct per_cpu_pages *pcp = NULL;
@@ -2720,7 +2720,7 @@ void free_unref_folios(struct folio_batch *folios)
*/
if (!pcp_allowed_order(order)) {
free_one_page(folio_zone(folio), &folio->page,
- pfn, order, FPI_NONE, 0);
+ pfn, order, FPI_NONE, ugen);
continue;
}
folio->private = (void *)(unsigned long)order;
@@ -2756,7 +2756,7 @@ void free_unref_folios(struct folio_batch *folios)
*/
if (is_migrate_isolate(migratetype)) {
free_one_page(zone, &folio->page, pfn,
- order, FPI_NONE, 0);
+ order, FPI_NONE, ugen);
continue;
}

@@ -2769,7 +2769,7 @@ void free_unref_folios(struct folio_batch *folios)
if (unlikely(!pcp)) {
pcp_trylock_finish(UP_flags);
free_one_page(zone, &folio->page, pfn,
- order, FPI_NONE, 0);
+ order, FPI_NONE, ugen);
continue;
}
locked_zone = zone;
@@ -2784,7 +2784,7 @@ void free_unref_folios(struct folio_batch *folios)

trace_mm_page_free_batched(&folio->page);
free_unref_page_commit(zone, pcp, &folio->page, migratetype,
- order, 0);
+ order, ugen);
}

if (pcp) {
diff --git a/mm/swap.c b/mm/swap.c
index dae169b19ab9..67605bbfc95c 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -161,11 +161,11 @@ void put_pages_list(struct list_head *pages)
/* LRU flag must be clear because it's passed using the lru */
if (folio_batch_add(&fbatch, folio) > 0)
continue;
- free_unref_folios(&fbatch);
+ free_unref_folios(&fbatch, 0);
}

if (fbatch.nr)
- free_unref_folios(&fbatch);
+ free_unref_folios(&fbatch, 0);
INIT_LIST_HEAD(pages);
}
EXPORT_SYMBOL(put_pages_list);
@@ -1027,7 +1027,7 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)

folios->nr = j;
mem_cgroup_uncharge_folios(folios);
- free_unref_folios(folios);
+ free_unref_folios(folios, 0);
}
EXPORT_SYMBOL(folios_put_refs);

diff --git a/mm/vmscan.c b/mm/vmscan.c
index b9170f767353..15efe6f0edce 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1461,7 +1461,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
if (folio_batch_add(&free_folios, folio) == 0) {
mem_cgroup_uncharge_folios(&free_folios);
try_to_unmap_flush();
- free_unref_folios(&free_folios);
+ free_unref_folios(&free_folios, 0);
}
continue;

@@ -1528,7 +1528,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,

mem_cgroup_uncharge_folios(&free_folios);
try_to_unmap_flush();
- free_unref_folios(&free_folios);
+ free_unref_folios(&free_folios, 0);

list_splice(&ret_folios, folio_list);
count_vm_events(PGACTIVATE, pgactivate);
@@ -1868,7 +1868,7 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec,
if (folio_batch_add(&free_folios, folio) == 0) {
spin_unlock_irq(&lruvec->lru_lock);
mem_cgroup_uncharge_folios(&free_folios);
- free_unref_folios(&free_folios);
+ free_unref_folios(&free_folios, 0);
spin_lock_irq(&lruvec->lru_lock);
}

@@ -1890,7 +1890,7 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec,
if (free_folios.nr) {
spin_unlock_irq(&lruvec->lru_lock);
mem_cgroup_uncharge_folios(&free_folios);
- free_unref_folios(&free_folios);
+ free_unref_folios(&free_folios, 0);
spin_lock_irq(&lruvec->lru_lock);
}

--
2.17.1


2024-05-31 09:38:03

by Byungchul Park

Subject: [PATCH v11 01/12] x86/tlb: add APIs manipulating tlb batch's arch data

A new mechanism, LUF (Lazy Unmap Flush), defers the tlb flush for
folios that have been unmapped and freed until they eventually get
allocated again. It's safe for folios that had been mapped read-only
and were then unmapped, since the contents of the folios don't change
while they stay in pcp or buddy, so we can still read the data through
the stale tlb entries.

This is a preparation for the mechanism, which needs to recognize
read-only tlb entries by separating the tlb batch's arch data into
two, one for read-only entries and the other for writable ones, and
merging the two when needed.

It also optimizes tlb shootdown by skipping CPUs that have already
performed the tlb flush needed in the meantime. To support that, add
APIs manipulating the arch data for x86.
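
For reference, here is a minimal userspace sketch of how these helpers
are meant to compose, with a plain bitmask standing in for struct
cpumask; it is illustrative only, not the x86 implementation itself:

#include <stdbool.h>
#include <stdio.h>

/* Userspace stand-in: one bit per CPU instead of a struct cpumask. */
struct fake_batch {
	unsigned long cpumask;
};

static void batch_clear(struct fake_batch *b)
{
	b->cpumask = 0;
}

static void batch_fold(struct fake_batch *dst, struct fake_batch *src)
{
	dst->cpumask |= src->cpumask;
}

/*
 * Drop from bdst the CPUs that bsrc has already flushed; return true
 * if nothing remains, i.e. no shootdown IPI is needed anymore.
 */
static bool batch_done(struct fake_batch *bdst, struct fake_batch *bsrc)
{
	bdst->cpumask &= ~bsrc->cpumask;
	return !bdst->cpumask;
}

int main(void)
{
	struct fake_batch pending = { 0x6 };	/* CPUs 1 and 2 pending */
	struct fake_batch flushed = { 0x7 };	/* CPUs 0-2 already flushed */

	printf("done: %d\n", batch_done(&pending, &flushed));	/* 1 */

	batch_fold(&pending, &flushed);		/* merge two batches */
	batch_clear(&flushed);			/* reset after a flush */
	return 0;
}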

Signed-off-by: Byungchul Park <[email protected]>
---
arch/x86/include/asm/tlbflush.h | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 25726893c6f4..a14f77c5cdde 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -5,6 +5,7 @@
#include <linux/mm_types.h>
#include <linux/mmu_notifier.h>
#include <linux/sched.h>
+#include <linux/cpumask.h>

#include <asm/processor.h>
#include <asm/cpufeature.h>
@@ -293,6 +294,23 @@ static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)

extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);

+static inline void arch_tlbbatch_clear(struct arch_tlbflush_unmap_batch *batch)
+{
+ cpumask_clear(&batch->cpumask);
+}
+
+static inline void arch_tlbbatch_fold(struct arch_tlbflush_unmap_batch *bdst,
+ struct arch_tlbflush_unmap_batch *bsrc)
+{
+ cpumask_or(&bdst->cpumask, &bdst->cpumask, &bsrc->cpumask);
+}
+
+static inline bool arch_tlbbatch_done(struct arch_tlbflush_unmap_batch *bdst,
+ struct arch_tlbflush_unmap_batch *bsrc)
+{
+ return !cpumask_andnot(&bdst->cpumask, &bdst->cpumask, &bsrc->cpumask);
+}
+
static inline bool pte_flags_need_flush(unsigned long oldflags,
unsigned long newflags,
bool ignore_access)
--
2.17.1


2024-05-31 09:38:24

by Byungchul Park

Subject: [PATCH v11 04/12] x86/tlb, riscv/tlb, mm/rmap: separate arch_tlbbatch_clear() out of arch_tlbbatch_flush()

A new mechanism, LUF (Lazy Unmap Flush), defers the tlb flush for
folios that have been unmapped and freed until they eventually get
allocated again. It's safe for folios that had been mapped read-only
and were then unmapped, since the contents of the folios don't change
while they stay in pcp or buddy, so we can still read the data through
the stale tlb entries.

This is a preparation for the mechanism, which requires avoiding
redundant tlb flushes by manipulating the tlb batch's arch data. To
achieve that, we need to separate out the part that clears the tlb
batch's arch data from arch_tlbbatch_flush().

Signed-off-by: Byungchul Park <[email protected]>
---
arch/riscv/mm/tlbflush.c | 1 -
arch/x86/mm/tlb.c | 2 --
mm/rmap.c | 1 +
3 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 9b6e86ce3867..36f996af6256 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -201,5 +201,4 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
{
__flush_tlb_range(&batch->cpumask, FLUSH_TLB_NO_ASID, 0,
FLUSH_TLB_MAX_SIZE, PAGE_SIZE);
- cpumask_clear(&batch->cpumask);
}
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 44ac64f3a047..24bce69222cd 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1265,8 +1265,6 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
local_irq_enable();
}

- cpumask_clear(&batch->cpumask);
-
put_flush_tlb_info();
put_cpu();
}
diff --git a/mm/rmap.c b/mm/rmap.c
index 52357d79917c..a65a94aada8d 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -648,6 +648,7 @@ void try_to_unmap_flush(void)
return;

arch_tlbbatch_flush(&tlb_ubc->arch);
+ arch_tlbbatch_clear(&tlb_ubc->arch);
tlb_ubc->flush_required = false;
tlb_ubc->writable = false;
}
--
2.17.1