2024-05-20 02:18:18

by Byungchul Park

Subject: [RESEND PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%

Hi everyone,

While I was working with a tiered memory system, e.g. CXL memory, I
was facing migration overhead, especially tlb shootdown on promotion
or demotion between different tiers. Most tlb shootdowns on migration
through hinting faults can already be avoided thanks to Huang Ying's
work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
is inaccessible"). See the following link for more information:

https://lore.kernel.org/lkml/[email protected]/

However, that covers only migration through hinting faults. It would
be much better to have a general mechanism that reduces the number of
tlb flushes and that can be applied to any unmap code we normally
expect to be followed by a tlb flush.

I'm suggesting a new mechanism, LUF (Lazy Unmap Flush), that defers
the tlb flush until folios that have been unmapped and freed
eventually get allocated again. It's safe for folios that had been
mapped read-only and were then unmapped, since the contents of the
folios don't change while they stay in pcp or buddy, so the data can
still be read through the stale tlb entries.

The tlb flush can be deferred when folios get unmapped, as long as the
required flush is guaranteed to be performed before the folios
actually become used again, and of course only if none of the
corresponding ptes have write permission. Otherwise, the system would
get corrupted. A minimal standalone sketch of the idea follows the
rule list below.

To achieve that:

1. For folios that map only to non-writable tlb entries, skip the
tlb flush during unmapping and perform it just before the folios
actually become used, on their way out of buddy or pcp.

2. When any non-writable pte changes to writable, e.g. through the
fault handler, give up the luf mechanism and perform the required
tlb flush right away.

3. When a writable mapping is created, e.g. through mmap(), give up
the luf mechanism and perform the required tlb flush right away.
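
The following is a minimal, standalone sketch that only illustrates
the three rules above. All names here (tlb_flush_deferred[],
luf_unmap_readonly() and friends) are hypothetical stand-ins; the
actual series works on struct page, pcp/buddy and the
arch_tlbbatch_*() batching instead.

#include <stdbool.h>
#include <stdio.h>

#define NR_FAKE_PAGES 8

/* Stand-in for the per-page "a tlb flush is still owed" state. */
static bool tlb_flush_deferred[NR_FAKE_PAGES];

/* Rule 1, first half: unmapping a read-only mapping skips the flush. */
static void luf_unmap_readonly(int page)
{
	tlb_flush_deferred[page] = true;
}

/* Rules 2 and 3: once write permission appears, flush right away. */
static void luf_make_writable(int page)
{
	if (tlb_flush_deferred[page]) {
		printf("page %d: flush before granting write access\n", page);
		tlb_flush_deferred[page] = false;
	}
}

/* Rule 1, second half: flush just before the freed page gets reused. */
static void luf_prep_allocation(int page)
{
	if (tlb_flush_deferred[page]) {
		printf("page %d: flush on exit from pcp/buddy\n", page);
		tlb_flush_deferred[page] = false;
	}
}

int main(void)
{
	luf_unmap_readonly(3);	/* unmapped and freed, no flush yet */
	luf_prep_allocation(3);	/* the deferred flush happens here */
	luf_make_writable(3);	/* nothing left to flush by now */
	return 0;
}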

No matter what type of workload is used for performance evaluation,
the result should be positive thanks to the unconditional reduction of
tlb flushes, tlb misses and interrupts. For the test, I picked one of
the most popular and heavy workloads, llama.cpp, an LLM (Large
Language Model) inference engine.

The result depends on the memory latency and on how often reclaim
runs, which determine the tlb miss overhead and how many times
unmapping happens. On my system, the result shows:

1. tlb flushes are reduced by about 95%.
2. tlb misses (itlb) are reduced by about 80%.
3. tlb misses (dtlb store) are reduced by about 57%.
4. tlb misses (dtlb load) are reduced by about 24%.
5. tlb shootdown interrupts are reduced by about 95%.
6. The test program runtime is reduced by about 5%.

The test environment and results are as follows:

Machine: bare metal, x86_64, Intel(R) Xeon(R) Gold 6430
CPU: 1 socket, 64 cores, with hyperthreading on
Numa: 2 nodes (64 CPUs DRAM 42GB, no CPUs CXL expander 98GB)
Config: swap off, numa balancing tiering on, demotion enabled

The test set:

llama.cpp/main -m $(70G_model1) -p "who are you?" -s 1 -t 15 -n 20 &
llama.cpp/main -m $(70G_model2) -p "who are you?" -s 1 -t 15 -n 20 &
llama.cpp/main -m $(70G_model3) -p "who are you?" -s 1 -t 15 -n 20 &
wait

where -t: nr of threads, -s: seed used to make the runtime stable,
-n: nr of tokens that determines the runtime, -p: prompt to ask,
-m: LLM model to use.

The test set is run 10 times in succession with caches dropped before
every run via 'echo 3 > /proc/sys/vm/drop_caches'. Each inference
prints its total runtime at the end.

1. Runtime from the output of llama.cpp:

BEFORE
------
llama_print_timings: total time = 1002461.95 ms / 24 tokens
llama_print_timings: total time = 1044978.38 ms / 24 tokens
llama_print_timings: total time = 1000653.09 ms / 24 tokens
llama_print_timings: total time = 1047104.80 ms / 24 tokens
llama_print_timings: total time = 1069430.36 ms / 24 tokens
llama_print_timings: total time = 1068201.16 ms / 24 tokens
llama_print_timings: total time = 1078092.59 ms / 24 tokens
llama_print_timings: total time = 1073200.45 ms / 24 tokens
llama_print_timings: total time = 1067136.00 ms / 24 tokens
llama_print_timings: total time = 1076442.56 ms / 24 tokens
llama_print_timings: total time = 1004142.64 ms / 24 tokens
llama_print_timings: total time = 1042942.65 ms / 24 tokens
llama_print_timings: total time = 999933.76 ms / 24 tokens
llama_print_timings: total time = 1046548.83 ms / 24 tokens
llama_print_timings: total time = 1068671.48 ms / 24 tokens
llama_print_timings: total time = 1068285.76 ms / 24 tokens
llama_print_timings: total time = 1077789.63 ms / 24 tokens
llama_print_timings: total time = 1071558.93 ms / 24 tokens
llama_print_timings: total time = 1066181.55 ms / 24 tokens
llama_print_timings: total time = 1076767.53 ms / 24 tokens
llama_print_timings: total time = 1004065.63 ms / 24 tokens
llama_print_timings: total time = 1044522.13 ms / 24 tokens
llama_print_timings: total time = 999725.33 ms / 24 tokens
llama_print_timings: total time = 1047510.77 ms / 24 tokens
llama_print_timings: total time = 1068010.27 ms / 24 tokens
llama_print_timings: total time = 1068999.31 ms / 24 tokens
llama_print_timings: total time = 1077648.05 ms / 24 tokens
llama_print_timings: total time = 1071378.96 ms / 24 tokens
llama_print_timings: total time = 1066326.32 ms / 24 tokens
llama_print_timings: total time = 1077088.92 ms / 24 tokens

AFTER
-----
llama_print_timings: total time = 988522.03 ms / 24 tokens
llama_print_timings: total time = 997204.52 ms / 24 tokens
llama_print_timings: total time = 996605.86 ms / 24 tokens
llama_print_timings: total time = 991985.50 ms / 24 tokens
llama_print_timings: total time = 1035143.31 ms / 24 tokens
llama_print_timings: total time = 993660.18 ms / 24 tokens
llama_print_timings: total time = 983082.14 ms / 24 tokens
llama_print_timings: total time = 990431.36 ms / 24 tokens
llama_print_timings: total time = 992707.09 ms / 24 tokens
llama_print_timings: total time = 992673.27 ms / 24 tokens
llama_print_timings: total time = 989285.43 ms / 24 tokens
llama_print_timings: total time = 996710.06 ms / 24 tokens
llama_print_timings: total time = 996534.64 ms / 24 tokens
llama_print_timings: total time = 991344.17 ms / 24 tokens
llama_print_timings: total time = 1035210.84 ms / 24 tokens
llama_print_timings: total time = 994714.13 ms / 24 tokens
llama_print_timings: total time = 984184.15 ms / 24 tokens
llama_print_timings: total time = 990909.45 ms / 24 tokens
llama_print_timings: total time = 991881.48 ms / 24 tokens
llama_print_timings: total time = 993918.03 ms / 24 tokens
llama_print_timings: total time = 990061.34 ms / 24 tokens
llama_print_timings: total time = 998076.69 ms / 24 tokens
llama_print_timings: total time = 997082.59 ms / 24 tokens
llama_print_timings: total time = 990677.58 ms / 24 tokens
llama_print_timings: total time = 1036054.94 ms / 24 tokens
llama_print_timings: total time = 994125.93 ms / 24 tokens
llama_print_timings: total time = 982467.01 ms / 24 tokens
llama_print_timings: total time = 990191.60 ms / 24 tokens
llama_print_timings: total time = 993319.24 ms / 24 tokens
llama_print_timings: total time = 992540.57 ms / 24 tokens

2. tlb shootdowns from 'cat /proc/interrupts':

BEFORE
------
TLB:
125553646 141418810 161932620 176853972 186655697 190399283
192143823 196414038 192872439 193313658 193395617 192521416
190788161 195067598 198016061 193607347 194293972 190786732
191545637 194856822 191801931 189634535 190399803 196365922
195268398 190115840 188050050 193194908 195317617 190820190
190164820 185556071 226797214 229592631 216112464 209909495
205575979 205950252 204948111 197999795 198892232 205287952
199344631 195015158 195869844 198858745 195692876 200961904
203463252 205921722 199850838 206145986 199613202 199961345
200129577 203020521 207873649 203697671 197093386 204243803
205993323 200934664 204193128 194435376 TLB shootdowns

AFTER
-----
TLB:
5648092 6610142 7032849 7882308 8088518 8352310
8656536 8705136 8647426 8905583 8985408 8704522
8884344 9026261 8929974 8869066 8877575 8810096
8770984 8754503 8801694 8865925 8787524 8656432
8755912 8682034 8773935 8832925 8797997 8515777
8481240 8891258 10595243 10285973 9756935 9573681
9398968 9069244 9242984 8899009 9310690 9029095
9069758 9105825 9092703 9270202 9460287 9258546
9180415 9232723 9270611 9175020 9490420 9360316
9420818 9057663 9525631 9310152 9152242 8654483
9181804 9050847 8919916 8883856 TLB shootdowns

3. tlb numbers from 'perf stat' per test set:

BEFORE
------
3163679332 dTLB-load-misses
2017751856 dTLB-store-misses
327092903 iTLB-load-misses
1357543886 tlb:tlb_flush

AFTER
-----
2394694609 dTLB-load-misses
861144167 dTLB-store-misses
64055579 iTLB-load-misses
69175002 tlb:tlb_flush

---

Changes from v9:

1. Expand the candidates this mechanism applies to:

BEFORE - The source folios at any type of migration.
AFTER - Any folios that have been unmapped and freed.

2. Change the workload used for the test:

BEFORE - XSBench
AFTER - llama.cpp (one of the most popular real workloads)

3. Change the test environment:

BEFORE - qemu machine, too small DRAM(1GB), large remote mem
AFTER - bare metal, real CXL memory, practical memory size

4. Rename the mechanism from MIGRC (Migration Read Copy) to
LUF (Lazy Unmap Flush) to reflect that the current version of
the mechanism can be applied not only to unmapping during
migration but to any unmap code, e.g. unmapping in
shrink_folio_list().

5. Fix a build error for riscv. (feedback from the kernel test robot)

6. Supplement the commit messages to describe what this mechanism
is for, especially in the patches for arch code. (feedback from
Thomas Gleixner)

7. Clean up some trivial things.

Changes from v8:

1. Rebase on akpm/mm.git mm-unstable as of April 18, 2024.
2. Supplement comments and commit message.
3. Change the candidates the migrc mechanism applies to:

BEFORE - The source folios at demotion and promotion.
AFTER - The source folios at any type of migration.

4. Change how the migrc mechanism works:

BEFORE - Reduce tlb flushes by deferring folio_free() for
source folios during demotion and promotion.
AFTER - Reduce tlb flushes by deferring the tlb flush until the
folios actually become used, on exit from pcp or buddy.
The current version of migrc does *not* defer calling
folio_free() but lets it proceed just as the vanilla
kernel does, with the folios marked as 'needing a tlb
flush'. The flush is then handled when the page exits
from pcp or buddy, so as not to change vm stats, e.g.
free pages.

Changes from v7:

1. Rewrite the cover letter to explain what the 'migrc' mechanism
is. (feedback from Andrew Morton)
2. Supplement the commit message of the patch 'mm: Add APIs to
free a folio directly to the buddy bypassing pcp'.
(feedback from Andrew Morton)

Changes from v6:

1. Fix build errors in the case where
CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH is disabled, by moving
migrc_flush_{start,end}() calls from arch code to
try_to_unmap_flush() in mm/rmap.c.

Changes from v5:

1. Fix build errors in the cases where CONFIG_MIGRATION is disabled
or CONFIG_HWPOISON_INJECT is built as a module. (feedback from
the kernel test robot and Raymond Jay Golo)
2. Organize the migrc code with two kconfigs, CONFIG_MIGRATION and
CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH.

Changes from v4:

1. Rebase on v6.7.
2. Fix build errors on arm64, which does nothing for the tlb flush
but has CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH. (reported
by the kernel test robot)
3. Don't use any page flag. So the system gives up on the migrc
mechanism more often, but that's okay; the final improvement is
still good enough.
4. Instead, optimize the full tlb flush (arch_tlbbatch_flush()) by
excluding redundant CPUs from the tlb flush.

Changes from v3:

1. Don't use the kconfig, CONFIG_MIGRC, and remove the sysctl knob,
migrc_enable. (feedback from Nadav)
2. Remove the optimization that skips CPUs which have already
performed the needed tlb flushes for any reason when migrc
performs tlb flushes, because I can't tell the performance
difference between with and without the optimization.
(feedback from Nadav)
3. Minimize arch-specific code. While at it, move all the migrc
declarations and inline functions from include/linux/mm.h to
mm/internal.h. (feedback from Dave Hansen, Nadav)
4. Separate the part that pauses migrc when the system is under
high memory pressure into another patch. (feedback from Nadav)
5. Rename:
a. arch_tlbbatch_clean() to arch_tlbbatch_clear(),
b. tlb_ubc_nowr to tlb_ubc_ro,
c. migrc_try_flush_free_folios() to migrc_flush_free_folios(),
d. migrc_stop to migrc_pause.
(feedback from Nadav)
6. Use the ->lru list_head instead of introducing a new llist_head.
(feedback from Nadav)
7. Use non-atomic page-flag operations when it's safe.
(feedback from Nadav)
8. Use the stack instead of keeping a pointer to 'struct migrc_req'
in struct task_struct, which is only manipulated locally.
(feedback from Nadav)
9. Convert a lot of simple functions to inline functions placed
in a header, mm/internal.h. (feedback from Nadav)
10. Add sufficient additional comments. (feedback from Nadav)
11. Remove a lot of wrapper functions. (feedback from Nadav)

Changes from RFC v2:

1. Remove the additional field in struct page. To do that, union
migrc's list with the lru field and add a page flag. I know a
page flag is something we don't like to add, but there is no
choice because migrc has to distinguish folios under migrc's
control from others. To mitigate the cost, migrc is used only
on 64-bit systems.
2. Remove the meaningless internal object allocator that I had
introduced to minimize the impact on the system. A ton of
tests showed there was no difference.
3. Stop migrc from working when the system is under high memory
pressure, e.g. about to perform direct reclaim. In conditions
where the swap mechanism is heavily used, I found the system
suffered from regression without this control.
4. Exclude folios with pte_dirty() == true from migrc's interest
so that migrc can work more simply.
5. Combine several tightly coupled patches into one.
6. Add sufficient comments for better review.
7. Manage migrc's requests per node (previously globally).
8. Add the tlb miss improvement to the commit message.
9. Test with more CPUs (4 -> 16) to see a bigger improvement.

Changes from RFC:

1. Fix a bug triggered when a destination folio of the previous
migration becomes a source folio of the next migration before
the folio has been handled properly so that it can take part
in another migration. There was an inconsistency in the
folio's state. Fixed it.
2. Split the patch set into more pieces so that folks can review
it better. (Feedback from Nadav Amit)
3. Fix a wrong usage of a barrier, e.g. smp_mb__after_atomic().
(Feedback from Nadav Amit)
4. Try to add sufficient comments to explain the patch set
better. (Feedback from Nadav Amit)

Byungchul Park (12):
x86/tlb: add APIs manipulating tlb batch's arch data
arm64: tlbflush: add APIs manipulating tlb batch's arch data
riscv, tlb: add APIs manipulating tlb batch's arch data
x86/tlb, riscv/tlb, mm/rmap: separate arch_tlbbatch_clear() out of
arch_tlbbatch_flush()
mm: buddy: make room for a new variable, ugen, in struct page
mm: add folio_put_ugen() to deliver unmap generation number to pcp or
buddy
mm: add a parameter, unmap generation number, to free_unref_folios()
mm/rmap: recognize read-only tlb entries during batched tlb flush
mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get
unmapped
mm: separate move/undo parts from migrate_pages_batch()
mm, migrate: apply luf mechanism to unmapping during migration
mm, vmscan: apply luf mechanism to unmapping during folio reclaim

arch/arm64/include/asm/tlbflush.h | 18 ++
arch/riscv/include/asm/tlbflush.h | 21 ++
arch/riscv/mm/tlbflush.c | 1 -
arch/x86/include/asm/tlbflush.h | 18 ++
arch/x86/mm/tlb.c | 2 -
include/linux/mm.h | 22 ++
include/linux/mm_types.h | 40 +++-
include/linux/rmap.h | 7 +-
include/linux/sched.h | 11 +
mm/compaction.c | 10 +
mm/internal.h | 115 +++++++++-
mm/memory.c | 8 +
mm/migrate.c | 184 ++++++++++------
mm/mmap.c | 8 +
mm/page_alloc.c | 157 +++++++++++---
mm/page_isolation.c | 6 +
mm/page_reporting.c | 10 +
mm/rmap.c | 345 +++++++++++++++++++++++++++++-
mm/swap.c | 18 +-
mm/vmscan.c | 29 ++-
20 files changed, 904 insertions(+), 126 deletions(-)


base-commit: f52bcd4a9f6058704a6f6b6b50418f579defd4fe
--
2.17.1



2024-05-20 02:18:47

by Byungchul Park

Subject: [RESEND PATCH v10 04/12] x86/tlb, riscv/tlb, mm/rmap: separate arch_tlbbatch_clear() out of arch_tlbbatch_flush()

A new mechanism, LUF (Lazy Unmap Flush), defers the tlb flush until
folios that have been unmapped and freed eventually get allocated
again. It's safe for folios that had been mapped read-only and were
then unmapped, since the contents of the folios don't change while
they stay in pcp or buddy, so the data can still be read through the
stale tlb entries.

This is a preparation for the mechanism, which requires avoiding
redundant tlb flushes by manipulating the tlb batch's arch data. To
achieve that, the part that clears the tlb batch's arch data needs to
be separated out of arch_tlbbatch_flush().

Signed-off-by: Byungchul Park <[email protected]>
---
arch/riscv/mm/tlbflush.c | 1 -
arch/x86/mm/tlb.c | 2 --
mm/rmap.c | 1 +
3 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 07d743f87b3f..9cbd27148357 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -234,5 +234,4 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
{
__flush_tlb_range(&batch->cpumask, FLUSH_TLB_NO_ASID, 0,
FLUSH_TLB_MAX_SIZE, PAGE_SIZE);
- cpumask_clear(&batch->cpumask);
}
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 44ac64f3a047..24bce69222cd 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1265,8 +1265,6 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
local_irq_enable();
}

- cpumask_clear(&batch->cpumask);
-
put_flush_tlb_info();
put_cpu();
}
diff --git a/mm/rmap.c b/mm/rmap.c
index 2608c40dffad..cf8a99a49aef 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -649,6 +649,7 @@ void try_to_unmap_flush(void)
return;

arch_tlbbatch_flush(&tlb_ubc->arch);
+ arch_tlbbatch_clear(&tlb_ubc->arch);
tlb_ubc->flush_required = false;
tlb_ubc->writable = false;
}
--
2.17.1


2024-05-20 02:18:59

by Byungchul Park

Subject: [RESEND PATCH v10 07/12] mm: add a parameter, unmap generation number, to free_unref_folios()

The unmap generation number is used by the luf mechanism to track the
need for a tlb flush for each page residing in pcp or buddy.

The number should be delivered to pcp or buddy via free_unref_folios(),
which releases folios that have been unmapped during reclaim in
shrink_folio_list().

Signed-off-by: Byungchul Park <[email protected]>
---
mm/internal.h | 2 +-
mm/page_alloc.c | 10 +++++-----
mm/swap.c | 6 +++---
mm/vmscan.c | 8 ++++----
4 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 332662047c17..0d4c74e76de6 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -639,7 +639,7 @@ extern bool free_pages_prepare(struct page *page, unsigned int order);
extern int user_min_free_kbytes;

void free_unref_page(struct page *page, unsigned int order, unsigned short int ugen);
-void free_unref_folios(struct folio_batch *fbatch);
+void free_unref_folios(struct folio_batch *fbatch, unsigned short int ugen);

extern void zone_pcp_reset(struct zone *zone);
extern void zone_pcp_disable(struct zone *zone);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2cd278c207d1..63f14305f4de 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2706,7 +2706,7 @@ void free_unref_page(struct page *page, unsigned int order,
/*
* Free a batch of folios
*/
-void free_unref_folios(struct folio_batch *folios)
+void free_unref_folios(struct folio_batch *folios, unsigned short int ugen)
{
unsigned long __maybe_unused UP_flags;
struct per_cpu_pages *pcp = NULL;
@@ -2729,7 +2729,7 @@ void free_unref_folios(struct folio_batch *folios)
*/
if (!pcp_allowed_order(order)) {
free_one_page(folio_zone(folio), &folio->page,
- pfn, order, FPI_NONE, 0);
+ pfn, order, FPI_NONE, ugen);
continue;
}
folio->private = (void *)(unsigned long)order;
@@ -2765,7 +2765,7 @@ void free_unref_folios(struct folio_batch *folios)
*/
if (is_migrate_isolate(migratetype)) {
free_one_page(zone, &folio->page, pfn,
- order, FPI_NONE, 0);
+ order, FPI_NONE, ugen);
continue;
}

@@ -2778,7 +2778,7 @@ void free_unref_folios(struct folio_batch *folios)
if (unlikely(!pcp)) {
pcp_trylock_finish(UP_flags);
free_one_page(zone, &folio->page, pfn,
- order, FPI_NONE, 0);
+ order, FPI_NONE, ugen);
continue;
}
locked_zone = zone;
@@ -2793,7 +2793,7 @@ void free_unref_folios(struct folio_batch *folios)

trace_mm_page_free_batched(&folio->page);
free_unref_page_commit(zone, pcp, &folio->page, migratetype,
- order, 0);
+ order, ugen);
}

if (pcp) {
diff --git a/mm/swap.c b/mm/swap.c
index 0fc5a5e8457f..1937ac937b8f 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -163,11 +163,11 @@ void put_pages_list(struct list_head *pages)
/* LRU flag must be clear because it's passed using the lru */
if (folio_batch_add(&fbatch, folio) > 0)
continue;
- free_unref_folios(&fbatch);
+ free_unref_folios(&fbatch, 0);
}

if (fbatch.nr)
- free_unref_folios(&fbatch);
+ free_unref_folios(&fbatch, 0);
INIT_LIST_HEAD(pages);
}
EXPORT_SYMBOL(put_pages_list);
@@ -1029,7 +1029,7 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)

folios->nr = j;
mem_cgroup_uncharge_folios(folios);
- free_unref_folios(folios);
+ free_unref_folios(folios, 0);
}
EXPORT_SYMBOL(folios_put_refs);

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 49bd94423961..bb0ff11f9ec9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1460,7 +1460,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
if (folio_batch_add(&free_folios, folio) == 0) {
mem_cgroup_uncharge_folios(&free_folios);
try_to_unmap_flush();
- free_unref_folios(&free_folios);
+ free_unref_folios(&free_folios, 0);
}
continue;

@@ -1527,7 +1527,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,

mem_cgroup_uncharge_folios(&free_folios);
try_to_unmap_flush();
- free_unref_folios(&free_folios);
+ free_unref_folios(&free_folios, 0);

list_splice(&ret_folios, folio_list);
count_vm_events(PGACTIVATE, pgactivate);
@@ -1869,7 +1869,7 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec,
if (folio_batch_add(&free_folios, folio) == 0) {
spin_unlock_irq(&lruvec->lru_lock);
mem_cgroup_uncharge_folios(&free_folios);
- free_unref_folios(&free_folios);
+ free_unref_folios(&free_folios, 0);
spin_lock_irq(&lruvec->lru_lock);
}

@@ -1891,7 +1891,7 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec,
if (free_folios.nr) {
spin_unlock_irq(&lruvec->lru_lock);
mem_cgroup_uncharge_folios(&free_folios);
- free_unref_folios(&free_folios);
+ free_unref_folios(&free_folios, 0);
spin_lock_irq(&lruvec->lru_lock);
}

--
2.17.1


2024-05-20 02:19:19

by Byungchul Park

Subject: [RESEND PATCH v10 08/12] mm/rmap: recognize read-only tlb entries during batched tlb flush

Functionally, no change. This is a preparation for the luf mechanism,
which requires recognizing read-only tlb entries and handling them in
a different way. The API newly introduced in this patch, fold_ubc(),
will be used by the luf mechanism.
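
As a standalone illustration of the fold semantics only: struct
fake_ubc and its plain unsigned long cpumask are hypothetical
stand-ins for struct tlbflush_unmap_batch and the arch_tlbbatch_*()
helpers used in the hunks below.

#include <stdbool.h>
#include <stdio.h>

struct fake_ubc {
	unsigned long cpumask;		/* stand-in for the arch cpumask */
	bool flush_required;
	bool writable;
};

/* Mirror of the fold_ubc() below: move src's pending state into dst. */
static void fold_ubc(struct fake_ubc *dst, struct fake_ubc *src)
{
	if (!src->flush_required)
		return;

	dst->cpumask |= src->cpumask;	/* arch_tlbbatch_fold() stand-in */
	dst->writable = dst->writable || src->writable;
	dst->flush_required = true;

	src->cpumask = 0;		/* arch_tlbbatch_clear() stand-in */
	src->flush_required = false;
	src->writable = false;
}

int main(void)
{
	struct fake_ubc tlb_ubc    = { .cpumask = 0x3, .flush_required = true };
	struct fake_ubc tlb_ubc_ro = { .cpumask = 0xc, .flush_required = true };

	fold_ubc(&tlb_ubc, &tlb_ubc_ro);	/* as try_to_unmap_flush() does */
	printf("cpumask=%#lx flush_required=%d\n",
	       tlb_ubc.cpumask, tlb_ubc.flush_required);
	return 0;
}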

Signed-off-by: Byungchul Park <[email protected]>
---
include/linux/sched.h | 1 +
mm/internal.h | 4 ++++
mm/rmap.c | 34 ++++++++++++++++++++++++++++++++--
3 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 2aa48adad226..0915390b1b5e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1339,6 +1339,7 @@ struct task_struct {
#endif

struct tlbflush_unmap_batch tlb_ubc;
+ struct tlbflush_unmap_batch tlb_ubc_ro;
unsigned short int ugen;

/* Cache last used pipe for splice(): */
diff --git a/mm/internal.h b/mm/internal.h
index 0d4c74e76de6..805f0e6ecab4 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1100,6 +1100,7 @@ extern struct workqueue_struct *mm_percpu_wq;
void try_to_unmap_flush(void);
void try_to_unmap_flush_dirty(void);
void flush_tlb_batched_pending(struct mm_struct *mm);
+void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src);
#else
static inline void try_to_unmap_flush(void)
{
@@ -1110,6 +1111,9 @@ static inline void try_to_unmap_flush_dirty(void)
static inline void flush_tlb_batched_pending(struct mm_struct *mm)
{
}
+static inline void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src)
+{
+}
#endif /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */

extern const struct trace_print_flags pageflag_names[];
diff --git a/mm/rmap.c b/mm/rmap.c
index cf8a99a49aef..328b5e2217e6 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -635,6 +635,28 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
}

#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+
+void fold_ubc(struct tlbflush_unmap_batch *dst,
+ struct tlbflush_unmap_batch *src)
+{
+ if (!src->flush_required)
+ return;
+
+ /*
+ * Fold src to dst.
+ */
+ arch_tlbbatch_fold(&dst->arch, &src->arch);
+ dst->writable = dst->writable || src->writable;
+ dst->flush_required = true;
+
+ /*
+ * Reset src.
+ */
+ arch_tlbbatch_clear(&src->arch);
+ src->flush_required = false;
+ src->writable = false;
+}
+
/*
* Flush TLB entries for recently unmapped pages from remote CPUs. It is
* important if a PTE was dirty when it was unmapped that it's flushed
@@ -644,7 +666,9 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
void try_to_unmap_flush(void)
{
struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+ struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;

+ fold_ubc(tlb_ubc, tlb_ubc_ro);
if (!tlb_ubc->flush_required)
return;

@@ -658,8 +682,9 @@ void try_to_unmap_flush(void)
void try_to_unmap_flush_dirty(void)
{
struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+ struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;

- if (tlb_ubc->writable)
+ if (tlb_ubc->writable || tlb_ubc_ro->writable)
try_to_unmap_flush();
}

@@ -676,13 +701,18 @@ void try_to_unmap_flush_dirty(void)
static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
unsigned long uaddr)
{
- struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+ struct tlbflush_unmap_batch *tlb_ubc;
int batch;
bool writable = pte_dirty(pteval);

if (!pte_accessible(mm, pteval))
return;

+ if (pte_write(pteval))
+ tlb_ubc = &current->tlb_ubc;
+ else
+ tlb_ubc = &current->tlb_ubc_ro;
+
arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr);
tlb_ubc->flush_required = true;

--
2.17.1


2024-05-20 02:19:56

by Byungchul Park

Subject: [RESEND PATCH v10 12/12] mm, vmscan: apply luf mechanism to unmapping during folio reclaim

A new mechanism, LUF (Lazy Unmap Flush), defers the tlb flush until
folios that have been unmapped and freed eventually get allocated
again. It's safe for folios that had been mapped read-only and were
then unmapped, since the contents of the folios don't change while
they stay in pcp or buddy, so the data can still be read through the
stale tlb entries.

Apply the mechanism to unmapping during folio reclaim.

Signed-off-by: Byungchul Park <[email protected]>
---
include/linux/rmap.h | 5 +++--
mm/rmap.c | 5 ++++-
mm/vmscan.c | 21 ++++++++++++++++++++-
3 files changed, 27 insertions(+), 4 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 1898a2c1c087..9ca752f8de97 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -658,7 +658,7 @@ int folio_referenced(struct folio *, int is_locked,
struct mem_cgroup *memcg, unsigned long *vm_flags);

bool try_to_migrate(struct folio *folio, enum ttu_flags flags);
-void try_to_unmap(struct folio *, enum ttu_flags flags);
+bool try_to_unmap(struct folio *, enum ttu_flags flags);

int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
unsigned long end, struct page **pages,
@@ -777,8 +777,9 @@ static inline int folio_referenced(struct folio *folio, int is_locked,
return 0;
}

-static inline void try_to_unmap(struct folio *folio, enum ttu_flags flags)
+static inline bool try_to_unmap(struct folio *folio, enum ttu_flags flags)
{
+ return false;
}

static inline int folio_mkclean(struct folio *folio)
diff --git a/mm/rmap.c b/mm/rmap.c
index d25ae20a47b5..571e337af448 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2237,10 +2237,11 @@ static int folio_not_mapped(struct folio *folio)
* Tries to remove all the page table entries which are mapping this
* folio. It is the caller's responsibility to check if the folio is
* still mapped if needed (use TTU_SYNC to prevent accounting races).
+ * Return true if all the mappings are read-only, otherwise false.
*
* Context: Caller must hold the folio lock.
*/
-void try_to_unmap(struct folio *folio, enum ttu_flags flags)
+bool try_to_unmap(struct folio *folio, enum ttu_flags flags)
{
struct rmap_walk_control rwc = {
.rmap_one = try_to_unmap_one,
@@ -2265,6 +2266,8 @@ void try_to_unmap(struct folio *folio, enum ttu_flags flags)
fold_ubc(tlb_ubc_luf, tlb_ubc_ro);
else
fold_ubc(tlb_ubc, tlb_ubc_ro);
+
+ return can_luf;
}

/*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index bb0ff11f9ec9..4e2e9d07cd96 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1031,14 +1031,17 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
struct reclaim_stat *stat, bool ignore_references)
{
struct folio_batch free_folios;
+ struct folio_batch free_folios_luf;
LIST_HEAD(ret_folios);
LIST_HEAD(demote_folios);
unsigned int nr_reclaimed = 0;
unsigned int pgactivate = 0;
bool do_demote_pass;
struct swap_iocb *plug = NULL;
+ unsigned short int ugen;

folio_batch_init(&free_folios);
+ folio_batch_init(&free_folios_luf);
memset(stat, 0, sizeof(*stat));
cond_resched();
do_demote_pass = can_demote(pgdat->node_id, sc);
@@ -1050,6 +1053,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
enum folio_references references = FOLIOREF_RECLAIM;
bool dirty, writeback;
unsigned int nr_pages;
+ bool can_luf = false;

cond_resched();

@@ -1292,7 +1296,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
if (folio_test_large(folio) && list_empty(&folio->_deferred_list))
flags |= TTU_SYNC;

- try_to_unmap(folio, flags);
+ can_luf = try_to_unmap(folio, flags);
if (folio_mapped(folio)) {
stat->nr_unmap_fail += nr_pages;
if (!was_swapbacked &&
@@ -1457,6 +1461,18 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
if (folio_test_large(folio) &&
folio_test_large_rmappable(folio))
folio_undo_large_rmappable(folio);
+
+ if (can_luf) {
+ if (folio_batch_add(&free_folios_luf, folio) == 0) {
+ mem_cgroup_uncharge_folios(&free_folios_luf);
+ ugen = try_to_unmap_luf();
+ if (!ugen)
+ try_to_unmap_flush();
+ free_unref_folios(&free_folios_luf, ugen);
+ }
+ continue;
+ }
+
if (folio_batch_add(&free_folios, folio) == 0) {
mem_cgroup_uncharge_folios(&free_folios);
try_to_unmap_flush();
@@ -1526,8 +1542,11 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
pgactivate = stat->nr_activate[0] + stat->nr_activate[1];

mem_cgroup_uncharge_folios(&free_folios);
+ mem_cgroup_uncharge_folios(&free_folios_luf);
+ ugen = try_to_unmap_luf();
try_to_unmap_flush();
free_unref_folios(&free_folios, 0);
+ free_unref_folios(&free_folios_luf, ugen);

list_splice(&ret_folios, folio_list);
count_vm_events(PGACTIVATE, pgactivate);
--
2.17.1


2024-05-20 02:26:16

by Byungchul Park

Subject: [RESEND PATCH v10 05/12] mm: buddy: make room for a new variable, ugen, in struct page

Functionally, no change. This is a preparation for the luf mechanism,
which tracks the need for a tlb flush for each page residing in buddy,
using a generation number in struct page.

Fortunately, the private field in struct page is used in buddy only to
store the page order, ranging from 0 to MAX_PAGE_ORDER, which fits in
an unsigned short int. So split it into two smaller fields, order and
ugen, so that both can be used in buddy at the same time.
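
As a standalone sanity check of that size reasoning only (struct
buddy_priv is a hypothetical stand-in, not the struct page layout in
the hunk below):

#include <assert.h>

struct buddy_priv {
	unsigned short int order;	/* 0 .. MAX_PAGE_ORDER fits in 16 bits */
	unsigned short int ugen;	/* luf's unmap generation number */
};

/* On 64-bit, the two 16-bit fields fit where the unsigned long
 * 'private' used to live, so buddy can carry both at the same time. */
static_assert(sizeof(struct buddy_priv) <= sizeof(unsigned long),
	      "order + ugen must fit in the old 'private' space");

int main(void)
{
	return 0;
}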

Signed-off-by: Byungchul Park <[email protected]>
---
include/linux/mm_types.h | 40 +++++++++++++++++++++++++++++++++-------
mm/internal.h | 4 ++--
mm/page_alloc.c | 13 ++++++++-----
3 files changed, 43 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index db0adf5721cc..cd4ec0d10ffb 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -108,13 +108,25 @@ struct page {
pgoff_t index; /* Our offset within mapping. */
unsigned long share; /* share count for fsdax */
};
- /**
- * @private: Mapping-private opaque data.
- * Usually used for buffer_heads if PagePrivate.
- * Used for swp_entry_t if PageSwapCache.
- * Indicates order in the buddy system if PageBuddy.
- */
- unsigned long private;
+ union {
+ /**
+ * @private: Mapping-private opaque data.
+ * Usually used for buffer_heads if PagePrivate.
+ * Used for swp_entry_t if PageSwapCache.
+ */
+ unsigned long private;
+ struct {
+ /*
+ * Indicates order in the buddy system if PageBuddy.
+ */
+ unsigned short int order;
+ /*
+ * Tracks need of tlb flush used by luf,
+ * which stands for lazy unmap flush.
+ */
+ unsigned short int ugen;
+ };
+ };
};
struct { /* page_pool used by netstack */
/**
@@ -521,6 +533,20 @@ static inline void set_page_private(struct page *page, unsigned long private)
page->private = private;
}

+#define page_buddy_order(page) ((page)->order)
+
+static inline void set_page_buddy_order(struct page *page, unsigned int order)
+{
+ page->order = (unsigned short int)order;
+}
+
+#define page_buddy_ugen(page) ((page)->ugen)
+
+static inline void set_page_buddy_ugen(struct page *page, unsigned short int ugen)
+{
+ page->ugen = ugen;
+}
+
static inline void *folio_get_private(struct folio *folio)
{
return folio->private;
diff --git a/mm/internal.h b/mm/internal.h
index c6483f73ec13..eb9c7d8650fc 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -453,7 +453,7 @@ struct alloc_context {
static inline unsigned int buddy_order(struct page *page)
{
/* PageBuddy() must be checked by the caller */
- return page_private(page);
+ return page_buddy_order(page);
}

/*
@@ -467,7 +467,7 @@ static inline unsigned int buddy_order(struct page *page)
* times, potentially observing different values in the tests and the actual
* use of the result.
*/
-#define buddy_order_unsafe(page) READ_ONCE(page_private(page))
+#define buddy_order_unsafe(page) READ_ONCE(page_buddy_order(page))

/*
* This function checks whether a page is free && is the buddy
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 33d4a1be927b..917b22b429d1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -565,9 +565,12 @@ void prep_compound_page(struct page *page, unsigned int order)
prep_compound_head(page, order);
}

-static inline void set_buddy_order(struct page *page, unsigned int order)
+static inline void set_buddy_order_ugen(struct page *page,
+ unsigned int order,
+ unsigned short int ugen)
{
- set_page_private(page, order);
+ set_page_buddy_order(page, order);
+ set_page_buddy_ugen(page, ugen);
__SetPageBuddy(page);
}

@@ -834,7 +837,7 @@ static inline void __free_one_page(struct page *page,
}

done_merging:
- set_buddy_order(page, order);
+ set_buddy_order_ugen(page, order, 0);

if (fpi_flags & FPI_TO_TAIL)
to_tail = true;
@@ -1344,7 +1347,7 @@ static inline void expand(struct zone *zone, struct page *page,
continue;

__add_to_free_list(&page[size], zone, high, migratetype, false);
- set_buddy_order(&page[size], high);
+ set_buddy_order_ugen(&page[size], high, 0);
nr_added += size;
}
account_freepages(zone, nr_added, migratetype);
@@ -6802,7 +6805,7 @@ static void break_down_buddy_pages(struct zone *zone, struct page *page,
continue;

add_to_free_list(current_buddy, zone, high, migratetype, false);
- set_buddy_order(current_buddy, high);
+ set_buddy_order_ugen(current_buddy, high, 0);
}
}

--
2.17.1


2024-05-20 02:26:22

by Byungchul Park

Subject: [RESEND PATCH v10 10/12] mm: separate move/undo parts from migrate_pages_batch()

Functionally, no change. This is a preparation for the luf mechanism,
which requires using separate folio lists for its own handling during
migration. Refactor migrate_pages_batch() so as to separate the
move/undo parts out of it.

Signed-off-by: Byungchul Park <[email protected]>
---
mm/migrate.c | 134 +++++++++++++++++++++++++++++++--------------------
1 file changed, 83 insertions(+), 51 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index c7692f303fa7..f9ed7a2b8720 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1609,6 +1609,81 @@ static int migrate_hugetlbs(struct list_head *from, new_folio_t get_new_folio,
return nr_failed;
}

+static void migrate_folios_move(struct list_head *src_folios,
+ struct list_head *dst_folios,
+ free_folio_t put_new_folio, unsigned long private,
+ enum migrate_mode mode, int reason,
+ struct list_head *ret_folios,
+ struct migrate_pages_stats *stats,
+ int *retry, int *thp_retry, int *nr_failed,
+ int *nr_retry_pages)
+{
+ struct folio *folio, *folio2, *dst, *dst2;
+ bool is_thp;
+ int nr_pages;
+ int rc;
+
+ dst = list_first_entry(dst_folios, struct folio, lru);
+ dst2 = list_next_entry(dst, lru);
+ list_for_each_entry_safe(folio, folio2, src_folios, lru) {
+ is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
+ nr_pages = folio_nr_pages(folio);
+
+ cond_resched();
+
+ rc = migrate_folio_move(put_new_folio, private,
+ folio, dst, mode,
+ reason, ret_folios);
+ /*
+ * The rules are:
+ * Success: folio will be freed
+ * -EAGAIN: stay on the unmap_folios list
+ * Other errno: put on ret_folios list
+ */
+ switch(rc) {
+ case -EAGAIN:
+ *retry += 1;
+ *thp_retry += is_thp;
+ *nr_retry_pages += nr_pages;
+ break;
+ case MIGRATEPAGE_SUCCESS:
+ stats->nr_succeeded += nr_pages;
+ stats->nr_thp_succeeded += is_thp;
+ break;
+ default:
+ *nr_failed += 1;
+ stats->nr_thp_failed += is_thp;
+ stats->nr_failed_pages += nr_pages;
+ break;
+ }
+ dst = dst2;
+ dst2 = list_next_entry(dst, lru);
+ }
+}
+
+static void migrate_folios_undo(struct list_head *src_folios,
+ struct list_head *dst_folios,
+ free_folio_t put_new_folio, unsigned long private,
+ struct list_head *ret_folios)
+{
+ struct folio *folio, *folio2, *dst, *dst2;
+
+ dst = list_first_entry(dst_folios, struct folio, lru);
+ dst2 = list_next_entry(dst, lru);
+ list_for_each_entry_safe(folio, folio2, src_folios, lru) {
+ int old_page_state = 0;
+ struct anon_vma *anon_vma = NULL;
+
+ __migrate_folio_extract(dst, &old_page_state, &anon_vma);
+ migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
+ anon_vma, true, ret_folios);
+ list_del(&dst->lru);
+ migrate_folio_undo_dst(dst, true, put_new_folio, private);
+ dst = dst2;
+ dst2 = list_next_entry(dst, lru);
+ }
+}
+
/*
* migrate_pages_batch() first unmaps folios in the from list as many as
* possible, then move the unmapped folios.
@@ -1631,7 +1706,7 @@ static int migrate_pages_batch(struct list_head *from,
int pass = 0;
bool is_thp = false;
bool is_large = false;
- struct folio *folio, *folio2, *dst = NULL, *dst2;
+ struct folio *folio, *folio2, *dst = NULL;
int rc, rc_saved = 0, nr_pages;
LIST_HEAD(unmap_folios);
LIST_HEAD(dst_folios);
@@ -1790,42 +1865,11 @@ static int migrate_pages_batch(struct list_head *from,
thp_retry = 0;
nr_retry_pages = 0;

- dst = list_first_entry(&dst_folios, struct folio, lru);
- dst2 = list_next_entry(dst, lru);
- list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
- is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
- nr_pages = folio_nr_pages(folio);
-
- cond_resched();
-
- rc = migrate_folio_move(put_new_folio, private,
- folio, dst, mode,
- reason, ret_folios);
- /*
- * The rules are:
- * Success: folio will be freed
- * -EAGAIN: stay on the unmap_folios list
- * Other errno: put on ret_folios list
- */
- switch(rc) {
- case -EAGAIN:
- retry++;
- thp_retry += is_thp;
- nr_retry_pages += nr_pages;
- break;
- case MIGRATEPAGE_SUCCESS:
- stats->nr_succeeded += nr_pages;
- stats->nr_thp_succeeded += is_thp;
- break;
- default:
- nr_failed++;
- stats->nr_thp_failed += is_thp;
- stats->nr_failed_pages += nr_pages;
- break;
- }
- dst = dst2;
- dst2 = list_next_entry(dst, lru);
- }
+ /* Move the unmapped folios */
+ migrate_folios_move(&unmap_folios, &dst_folios,
+ put_new_folio, private, mode, reason,
+ ret_folios, stats, &retry, &thp_retry,
+ &nr_failed, &nr_retry_pages);
}
nr_failed += retry;
stats->nr_thp_failed += thp_retry;
@@ -1834,20 +1878,8 @@ static int migrate_pages_batch(struct list_head *from,
rc = rc_saved ? : nr_failed;
out:
/* Cleanup remaining folios */
- dst = list_first_entry(&dst_folios, struct folio, lru);
- dst2 = list_next_entry(dst, lru);
- list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
- int old_page_state = 0;
- struct anon_vma *anon_vma = NULL;
-
- __migrate_folio_extract(dst, &old_page_state, &anon_vma);
- migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
- anon_vma, true, ret_folios);
- list_del(&dst->lru);
- migrate_folio_undo_dst(dst, true, put_new_folio, private);
- dst = dst2;
- dst2 = list_next_entry(dst, lru);
- }
+ migrate_folios_undo(&unmap_folios, &dst_folios,
+ put_new_folio, private, ret_folios);

return rc;
}
--
2.17.1


2024-05-20 02:26:33

by Byungchul Park

Subject: [RESEND PATCH v10 11/12] mm, migrate: apply luf mechanism to unmapping during migration

A new mechanism, LUF (Lazy Unmap Flush), defers the tlb flush until
folios that have been unmapped and freed eventually get allocated
again. It's safe for folios that had been mapped read-only and were
then unmapped, since the contents of the folios don't change while
they stay in pcp or buddy, so the data can still be read through the
stale tlb entries.

Apply the mechanism to unmapping during migration.

Signed-off-by: Byungchul Park <[email protected]>
---
include/linux/rmap.h | 2 +-
mm/migrate.c | 56 ++++++++++++++++++++++++++++++++------------
mm/rmap.c | 9 ++++---
3 files changed, 48 insertions(+), 19 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 0f906dc6d280..1898a2c1c087 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -657,7 +657,7 @@ static inline int folio_try_share_anon_rmap_pmd(struct folio *folio,
int folio_referenced(struct folio *, int is_locked,
struct mem_cgroup *memcg, unsigned long *vm_flags);

-void try_to_migrate(struct folio *folio, enum ttu_flags flags);
+bool try_to_migrate(struct folio *folio, enum ttu_flags flags);
void try_to_unmap(struct folio *, enum ttu_flags flags);

int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
diff --git a/mm/migrate.c b/mm/migrate.c
index f9ed7a2b8720..c8b0e5203e9a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1090,7 +1090,8 @@ static void migrate_folio_undo_dst(struct folio *dst, bool locked,

/* Cleanup src folio upon migration success */
static void migrate_folio_done(struct folio *src,
- enum migrate_reason reason)
+ enum migrate_reason reason,
+ unsigned short int ugen)
{
/*
* Compaction can migrate also non-LRU pages which are
@@ -1101,8 +1102,12 @@ static void migrate_folio_done(struct folio *src,
mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON +
folio_is_file_lru(src), -folio_nr_pages(src));

- if (reason != MR_MEMORY_FAILURE)
- /* We release the page in page_handle_poison. */
+ /* We release the page in page_handle_poison. */
+ if (reason == MR_MEMORY_FAILURE)
+ check_luf_flush(ugen);
+ else if (ugen)
+ folio_put_ugen(src, ugen);
+ else
folio_put(src);
}

@@ -1110,7 +1115,8 @@ static void migrate_folio_done(struct folio *src,
static int migrate_folio_unmap(new_folio_t get_new_folio,
free_folio_t put_new_folio, unsigned long private,
struct folio *src, struct folio **dstp, enum migrate_mode mode,
- enum migrate_reason reason, struct list_head *ret)
+ enum migrate_reason reason, struct list_head *ret,
+ bool *can_luf)
{
struct folio *dst;
int rc = -EAGAIN;
@@ -1126,7 +1132,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
folio_clear_unevictable(src);
/* free_pages_prepare() will clear PG_isolated. */
list_del(&src->lru);
- migrate_folio_done(src, reason);
+ migrate_folio_done(src, reason, 0);
return MIGRATEPAGE_SUCCESS;
}

@@ -1244,7 +1250,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
/* Establish migration ptes */
VM_BUG_ON_FOLIO(folio_test_anon(src) &&
!folio_test_ksm(src) && !anon_vma, src);
- try_to_migrate(src, mode == MIGRATE_ASYNC ? TTU_BATCH_FLUSH : 0);
+ *can_luf = try_to_migrate(src, mode == MIGRATE_ASYNC ? TTU_BATCH_FLUSH : 0);
old_page_state |= PAGE_WAS_MAPPED;
}

@@ -1272,7 +1278,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
struct folio *src, struct folio *dst,
enum migrate_mode mode, enum migrate_reason reason,
- struct list_head *ret)
+ struct list_head *ret, unsigned short int ugen)
{
int rc;
int old_page_state = 0;
@@ -1326,7 +1332,7 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
if (anon_vma)
put_anon_vma(anon_vma);
folio_unlock(src);
- migrate_folio_done(src, reason);
+ migrate_folio_done(src, reason, ugen);

return rc;
out:
@@ -1616,7 +1622,7 @@ static void migrate_folios_move(struct list_head *src_folios,
struct list_head *ret_folios,
struct migrate_pages_stats *stats,
int *retry, int *thp_retry, int *nr_failed,
- int *nr_retry_pages)
+ int *nr_retry_pages, unsigned short int ugen)
{
struct folio *folio, *folio2, *dst, *dst2;
bool is_thp;
@@ -1633,7 +1639,7 @@ static void migrate_folios_move(struct list_head *src_folios,

rc = migrate_folio_move(put_new_folio, private,
folio, dst, mode,
- reason, ret_folios);
+ reason, ret_folios, ugen);
/*
* The rules are:
* Success: folio will be freed
@@ -1710,7 +1716,11 @@ static int migrate_pages_batch(struct list_head *from,
int rc, rc_saved = 0, nr_pages;
LIST_HEAD(unmap_folios);
LIST_HEAD(dst_folios);
+ LIST_HEAD(unmap_folios_luf);
+ LIST_HEAD(dst_folios_luf);
bool nosplit = (reason == MR_NUMA_MISPLACED);
+ unsigned short int ugen;
+ bool can_luf;

VM_WARN_ON_ONCE(mode != MIGRATE_ASYNC &&
!list_empty(from) && !list_is_singular(from));
@@ -1773,9 +1783,11 @@ static int migrate_pages_batch(struct list_head *from,
continue;
}

+ can_luf = false;
rc = migrate_folio_unmap(get_new_folio, put_new_folio,
private, folio, &dst, mode, reason,
- ret_folios);
+ ret_folios, &can_luf);
+
/*
* The rules are:
* Success: folio will be freed
@@ -1821,7 +1833,8 @@ static int migrate_pages_batch(struct list_head *from,
/* nr_failed isn't updated for not used */
stats->nr_thp_failed += thp_retry;
rc_saved = rc;
- if (list_empty(&unmap_folios))
+ if (list_empty(&unmap_folios) &&
+ list_empty(&unmap_folios_luf))
goto out;
else
goto move;
@@ -1835,8 +1848,13 @@ static int migrate_pages_batch(struct list_head *from,
stats->nr_thp_succeeded += is_thp;
break;
case MIGRATEPAGE_UNMAP:
- list_move_tail(&folio->lru, &unmap_folios);
- list_add_tail(&dst->lru, &dst_folios);
+ if (can_luf) {
+ list_move_tail(&folio->lru, &unmap_folios_luf);
+ list_add_tail(&dst->lru, &dst_folios_luf);
+ } else {
+ list_move_tail(&folio->lru, &unmap_folios);
+ list_add_tail(&dst->lru, &dst_folios);
+ }
break;
default:
/*
@@ -1856,6 +1874,8 @@ static int migrate_pages_batch(struct list_head *from,
stats->nr_thp_failed += thp_retry;
stats->nr_failed_pages += nr_retry_pages;
move:
+ /* Should be before try_to_unmap_flush() */
+ ugen = try_to_unmap_luf();
/* Flush TLBs for all unmapped folios */
try_to_unmap_flush();

@@ -1869,7 +1889,11 @@ static int migrate_pages_batch(struct list_head *from,
migrate_folios_move(&unmap_folios, &dst_folios,
put_new_folio, private, mode, reason,
ret_folios, stats, &retry, &thp_retry,
- &nr_failed, &nr_retry_pages);
+ &nr_failed, &nr_retry_pages, 0);
+ migrate_folios_move(&unmap_folios_luf, &dst_folios_luf,
+ put_new_folio, private, mode, reason,
+ ret_folios, stats, &retry, &thp_retry,
+ &nr_failed, &nr_retry_pages, ugen);
}
nr_failed += retry;
stats->nr_thp_failed += thp_retry;
@@ -1880,6 +1904,8 @@ static int migrate_pages_batch(struct list_head *from,
/* Cleanup remaining folios */
migrate_folios_undo(&unmap_folios, &dst_folios,
put_new_folio, private, ret_folios);
+ migrate_folios_undo(&unmap_folios_luf, &dst_folios_luf,
+ put_new_folio, private, ret_folios);

return rc;
}
diff --git a/mm/rmap.c b/mm/rmap.c
index e42783c02114..d25ae20a47b5 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2600,8 +2600,9 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
*
* Tries to remove all the page table entries which are mapping this folio and
* replace them with special swap entries. Caller must hold the folio lock.
+ * Return true if all the mappings are read-only, otherwise false.
*/
-void try_to_migrate(struct folio *folio, enum ttu_flags flags)
+bool try_to_migrate(struct folio *folio, enum ttu_flags flags)
{
struct rmap_walk_control rwc = {
.rmap_one = try_to_migrate_one,
@@ -2620,11 +2621,11 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
*/
if (WARN_ON_ONCE(flags & ~(TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
TTU_SYNC | TTU_BATCH_FLUSH)))
- return;
+ return false;

if (folio_is_zone_device(folio) &&
(!folio_is_device_private(folio) && !folio_is_device_coherent(folio)))
- return;
+ return false;

/*
* During exec, a temporary VMA is setup and later moved.
@@ -2649,6 +2650,8 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
fold_ubc(tlb_ubc_luf, tlb_ubc_ro);
else
fold_ubc(tlb_ubc, tlb_ubc_ro);
+
+ return can_luf;
}

#ifdef CONFIG_DEVICE_PRIVATE
--
2.17.1


2024-05-20 02:26:42

by Byungchul Park

Subject: [RESEND PATCH v10 06/12] mm: add folio_put_ugen() to deliver unmap generation number to pcp or buddy

Introduce a new API, folio_put_ugen(), to deliver the unmap generation
number to pcp or buddy; it will be used by the luf mechanism to track
the need for a tlb flush for each page residing in pcp or buddy.

For now, the delivery works for the following call path, which
releases source folios during migration:

folio_put_ugen()
__folio_put_ugen()
free_unref_page()
free_unref_page_commit()
free_one_page()
__free_one_page()

The generation number should be handed over properly when pages travel
between pcp and buddy, and the necessary handling must be done on exit
from pcp or buddy. This patch doesn't include the actual body of the
tlb flush on the exit, which will be filled in by the main patch of
the luf mechanism.
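
The ordering of generation numbers has to survive the counter wrapping
around, which the ugen_latest() helper added to mm/internal.h below
takes care of. Here is a standalone sketch of the same comparison, for
illustration only; the exact behavior around the wrap point is an
assumption of the sketch, not something this patch spells out.

#include <stdio.h>

/* 0 means "no ugen"; otherwise the later generation wins, decided by
 * signed 16-bit distance so that ordering survives the wrap-around. */
static unsigned short int ugen_latest(unsigned short int a, unsigned short int b)
{
	if (!a || !b)
		return a + b;

	return ((short int)(a - b) < 0) ? b : a;
}

int main(void)
{
	printf("%u\n", ugen_latest(10, 20));	/* 20: plainly newer         */
	printf("%u\n", ugen_latest(65500, 5));	/* 5: newer despite the wrap */
	printf("%u\n", ugen_latest(0, 7));	/* 7: 0 means no ugen        */
	return 0;
}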

Signed-off-by: Byungchul Park <[email protected]>
---
include/linux/mm.h | 22 +++++++
include/linux/sched.h | 1 +
mm/compaction.c | 10 +++
mm/internal.h | 70 +++++++++++++++++++-
mm/page_alloc.c | 144 ++++++++++++++++++++++++++++++++++--------
mm/page_isolation.c | 6 ++
mm/page_reporting.c | 10 +++
mm/swap.c | 12 +++-
8 files changed, 247 insertions(+), 28 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index dc33f8269fb5..2369ebedb8bd 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1312,6 +1312,7 @@ static inline struct folio *virt_to_folio(const void *x)
}

void __folio_put(struct folio *folio);
+void __folio_put_ugen(struct folio *folio, unsigned short int ugen);

void put_pages_list(struct list_head *pages);

@@ -1509,6 +1510,27 @@ static inline void folio_put(struct folio *folio)
__folio_put(folio);
}

+/**
+ * folio_put_ugen - Decrement the last reference count on a folio.
+ * @folio: The folio.
+ * @ugen: The unmap generation # of TLB flush that the folio requires.
+ *
+ * The folio's reference count should be one since the only user, folio
+ * migration code, calls folio_put_ugen() only when the folio has no
+ * other reference. The memory will be released back to the page
+ * allocator and may be used by another allocation immediately. Do not
+ * access the memory or the struct folio after calling folio_put_ugen().
+ *
+ * Context: May be called in process or interrupt context, but not in NMI
+ * context. May be called while holding a spinlock.
+ */
+static inline void folio_put_ugen(struct folio *folio, unsigned short int ugen)
+{
+ if (WARN_ON(!folio_put_testzero(folio)))
+ return;
+ __folio_put_ugen(folio, ugen);
+}
+
/**
* folio_put_refs - Reduce the reference count on a folio.
* @folio: The folio.
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 4118b3f959c3..2aa48adad226 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1339,6 +1339,7 @@ struct task_struct {
#endif

struct tlbflush_unmap_batch tlb_ubc;
+ unsigned short int ugen;

/* Cache last used pipe for splice(): */
struct pipe_inode_info *splice_pipe;
diff --git a/mm/compaction.c b/mm/compaction.c
index e731d45befc7..13799fbb2a9a 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -701,6 +701,11 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
if (locked)
spin_unlock_irqrestore(&cc->zone->lock, flags);

+ /*
+ * Check and flush before using the isolated pages.
+ */
+ check_flush_task_ugen();
+
/*
* Be careful to not go outside of the pageblock.
*/
@@ -1673,6 +1678,11 @@ static void fast_isolate_freepages(struct compact_control *cc)

spin_unlock_irqrestore(&cc->zone->lock, flags);

+ /*
+ * Check and flush before using the isolated pages.
+ */
+ check_flush_task_ugen();
+
/* Skip fast search if enough freepages isolated */
if (cc->nr_freepages >= cc->nr_migratepages)
break;
diff --git a/mm/internal.h b/mm/internal.h
index eb9c7d8650fc..332662047c17 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -638,7 +638,7 @@ extern bool free_pages_prepare(struct page *page, unsigned int order);

extern int user_min_free_kbytes;

-void free_unref_page(struct page *page, unsigned int order);
+void free_unref_page(struct page *page, unsigned int order, unsigned short int ugen);
void free_unref_folios(struct folio_batch *fbatch);

extern void zone_pcp_reset(struct zone *zone);
@@ -1512,4 +1512,72 @@ static inline void shrinker_debugfs_remove(struct dentry *debugfs_entry,
void workingset_update_node(struct xa_node *node);
extern struct list_lru shadow_nodes;

+#if defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH)
+static inline unsigned short int ugen_latest(unsigned short int a, unsigned short int b)
+{
+ if (!a || !b)
+ return a + b;
+
+ /*
+ * The ugen is wrapped around so let's use this trick.
+ */
+ if ((short int)(a - b) < 0)
+ return b;
+ else
+ return a;
+}
+
+static inline void update_task_ugen(unsigned short int ugen)
+{
+ current->ugen = ugen_latest(current->ugen, ugen);
+}
+
+static inline unsigned short int hand_over_task_ugen(void)
+{
+ unsigned short int ret = current->ugen;
+
+ current->ugen = 0;
+ return ret;
+}
+
+static inline void check_flush_task_ugen(void)
+{
+ /*
+ * XXX: luf mechanism will handle this. For now, do nothing but
+ * reset current's ugen to finalize this turn.
+ */
+ current->ugen = 0;
+}
+
+/*
+ * Check the constraints of what luf currently supports.
+ */
+static inline bool can_luf_folio(struct folio *f)
+{
+ bool can_luf = true;
+
+ /*
+ * XXX: Remove the constraint once luf handles zone device folio.
+ */
+ can_luf = can_luf && likely(!folio_is_zone_device(f));
+
+ /*
+ * XXX: Remove the constraint once luf handles hugetlb folio.
+ */
+ can_luf = can_luf && likely(!folio_test_hugetlb(f));
+
+ /*
+ * XXX: Remove the constraint once luf handles large folio.
+ */
+ can_luf = can_luf && likely(!folio_test_large(f));
+
+ return can_luf;
+}
+#else /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
+static inline unsigned short int ugen_latest(unsigned short int a, unsigned short int b) { return 0; }
+static inline void update_task_ugen(unsigned short int ugen) {}
+static inline unsigned short int hand_over_task_ugen(void) { return 0; }
+static inline void check_flush_task_ugen(void) {}
+static inline bool can_luf_folio(struct folio *f) { return false; }
+#endif
#endif /* __MM_INTERNAL_H */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 917b22b429d1..2cd278c207d1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -696,6 +696,7 @@ static inline void __del_page_from_free_list(struct page *page, struct zone *zon
if (page_reported(page))
__ClearPageReported(page);

+ update_task_ugen(page_buddy_ugen(page));
list_del(&page->buddy_list);
__ClearPageBuddy(page);
set_page_private(page, 0);
@@ -768,7 +769,7 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
static inline void __free_one_page(struct page *page,
unsigned long pfn,
struct zone *zone, unsigned int order,
- int migratetype, fpi_t fpi_flags)
+ int migratetype, fpi_t fpi_flags, unsigned short int ugen)
{
struct capture_control *capc = task_capc(zone);
unsigned long buddy_pfn = 0;
@@ -783,12 +784,22 @@ static inline void __free_one_page(struct page *page,
VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
VM_BUG_ON_PAGE(bad_range(zone, page), page);

+ /*
+ * Ensure private is zero before using it inside buddy.
+ */
+ set_page_private(page, 0);
+
account_freepages(zone, 1 << order, migratetype);

while (order < MAX_PAGE_ORDER) {
int buddy_mt = migratetype;

if (compaction_capture(capc, page, order, migratetype)) {
+ /*
+ * The capturer will call check_flush_task_ugen() via
+ * prep_new_page().
+ */
+ update_task_ugen(ugen);
account_freepages(zone, -(1 << order), migratetype);
return;
}
@@ -819,6 +830,11 @@ static inline void __free_one_page(struct page *page,
if (page_is_guard(buddy))
clear_page_guard(zone, buddy, order);
else
+ /*
+ * __del_page_from_free_list() updates current's
+ * ugen that pairs with hand_over_task_ugen() below
+ * in this function.
+ */
__del_page_from_free_list(buddy, zone, order, buddy_mt);

if (unlikely(buddy_mt != migratetype)) {
@@ -837,7 +853,8 @@ static inline void __free_one_page(struct page *page,
}

done_merging:
- set_buddy_order_ugen(page, order, 0);
+ ugen = ugen_latest(ugen, hand_over_task_ugen());
+ set_buddy_order_ugen(page, order, ugen);

if (fpi_flags & FPI_TO_TAIL)
to_tail = true;
@@ -1048,6 +1065,11 @@ __always_inline bool free_pages_prepare(struct page *page,

VM_BUG_ON_PAGE(PageTail(page), page);

+ /*
+ * Ensure private is zero before using it inside pcp.
+ */
+ set_page_private(page, 0);
+
trace_mm_page_free(page, order);
kmsan_free_page(page, order);

@@ -1179,17 +1201,23 @@ static void free_pcppages_bulk(struct zone *zone, int count,
do {
unsigned long pfn;
int mt;
+ unsigned short int ugen;

page = list_last_entry(list, struct page, pcp_list);
pfn = page_to_pfn(page);
mt = get_pfnblock_migratetype(page, pfn);

+ /*
+ * pcp uses private to store ugen.
+ */
+ ugen = page_private(page);
+
/* must delete to avoid corrupting pcp list */
list_del(&page->pcp_list);
count -= nr_pages;
pcp->count -= nr_pages;

- __free_one_page(page, pfn, zone, order, mt, FPI_NONE);
+ __free_one_page(page, pfn, zone, order, mt, FPI_NONE, ugen);
trace_mm_page_pcpu_drain(page, order, mt);
} while (count > 0 && !list_empty(list));
}
@@ -1199,14 +1227,14 @@ static void free_pcppages_bulk(struct zone *zone, int count,

static void free_one_page(struct zone *zone, struct page *page,
unsigned long pfn, unsigned int order,
- fpi_t fpi_flags)
+ fpi_t fpi_flags, unsigned short int ugen)
{
unsigned long flags;
int migratetype;

spin_lock_irqsave(&zone->lock, flags);
migratetype = get_pfnblock_migratetype(page, pfn);
- __free_one_page(page, pfn, zone, order, migratetype, fpi_flags);
+ __free_one_page(page, pfn, zone, order, migratetype, fpi_flags, ugen);
spin_unlock_irqrestore(&zone->lock, flags);
}

@@ -1219,7 +1247,7 @@ static void __free_pages_ok(struct page *page, unsigned int order,
if (!free_pages_prepare(page, order))
return;

- free_one_page(zone, page, pfn, order, fpi_flags);
+ free_one_page(zone, page, pfn, order, fpi_flags, 0);

__count_vm_events(PGFREE, 1 << order);
}
@@ -1484,6 +1512,10 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
unsigned int alloc_flags)
{
+ /*
+ * Check and flush before using the pages.
+ */
+ check_flush_task_ugen();
post_alloc_hook(page, order, gfp_flags);

if (order && (gfp_flags & __GFP_COMP))
@@ -1519,6 +1551,10 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
page = get_page_from_free_area(area, migratetype);
if (!page)
continue;
+ /*
+ * del_page_from_free_list() updates current's ugen that
+ * pairs with check_flush_task_ugen() in prep_new_page().
+ */
del_page_from_free_list(page, zone, current_order, migratetype);
expand(zone, page, order, current_order, migratetype);
trace_mm_page_alloc_zone_locked(page, order, migratetype,
@@ -1681,7 +1717,8 @@ static unsigned long find_large_buddy(unsigned long start_pfn)

/* Split a multi-block free page into its individual pageblocks */
static void split_large_buddy(struct zone *zone, struct page *page,
- unsigned long pfn, int order)
+ unsigned long pfn, int order,
+ unsigned short int ugen)
{
unsigned long end_pfn = pfn + (1 << order);

@@ -1694,7 +1731,7 @@ static void split_large_buddy(struct zone *zone, struct page *page,
while (pfn != end_pfn) {
int mt = get_pfnblock_migratetype(page, pfn);

- __free_one_page(page, pfn, zone, pageblock_order, mt, FPI_NONE);
+ __free_one_page(page, pfn, zone, pageblock_order, mt, FPI_NONE, ugen);
pfn += pageblock_nr_pages;
page = pfn_to_page(pfn);
}
@@ -1736,22 +1773,34 @@ bool move_freepages_block_isolate(struct zone *zone, struct page *page,
if (pfn != start_pfn) {
struct page *buddy = pfn_to_page(pfn);
int order = buddy_order(buddy);
+ unsigned short int ugen;

+ /*
+ * del_page_from_free_list() updates current's ugen that
+ * pairs with the following hand_over_task_ugen().
+ */
del_page_from_free_list(buddy, zone, order,
get_pfnblock_migratetype(buddy, pfn));
+ ugen = hand_over_task_ugen();
set_pageblock_migratetype(page, migratetype);
- split_large_buddy(zone, buddy, pfn, order);
+ split_large_buddy(zone, buddy, pfn, order, ugen);
return true;
}

/* We're the starting block of a larger buddy */
if (PageBuddy(page) && buddy_order(page) > pageblock_order) {
int order = buddy_order(page);
+ unsigned short int ugen;

+ /*
+ * del_page_from_free_list() updates current's ugen that
+ * pairs with the following hand_over_task_ugen().
+ */
del_page_from_free_list(page, zone, order,
get_pfnblock_migratetype(page, pfn));
+ ugen = hand_over_task_ugen();
set_pageblock_migratetype(page, migratetype);
- split_large_buddy(zone, page, pfn, order);
+ split_large_buddy(zone, page, pfn, order, ugen);
return true;
}
move:
@@ -1871,6 +1920,10 @@ steal_suitable_fallback(struct zone *zone, struct page *page,

/* Take ownership for orders >= pageblock_order */
if (current_order >= pageblock_order) {
+ /*
+ * del_page_from_free_list() updates current's ugen that
+ * pairs with check_flush_task_ugen() in prep_new_page().
+ */
del_page_from_free_list(page, zone, current_order, block_type);
change_pageblock_range(page, current_order, start_type);
expand(zone, page, order, current_order, start_type);
@@ -1926,6 +1979,10 @@ steal_suitable_fallback(struct zone *zone, struct page *page,
}

single_page:
+ /*
+ * del_page_from_free_list() updates current's ugen that pairs
+ * with check_flush_task_ugen() in prep_new_page().
+ */
del_page_from_free_list(page, zone, current_order, block_type);
expand(zone, page, order, current_order, block_type);
return page;
@@ -2547,7 +2604,7 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,

static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
struct page *page, int migratetype,
- unsigned int order)
+ unsigned int order, unsigned short int ugen)
{
int high, batch;
int pindex;
@@ -2561,6 +2618,11 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
pcp->alloc_factor >>= 1;
__count_vm_events(PGFREE, 1 << order);
pindex = order_to_pindex(migratetype, order);
+
+ /*
+ * pcp uses private to store ugen.
+ */
+ set_page_private(page, ugen);
list_add(&page->pcp_list, &pcp->lists[pindex]);
pcp->count += 1 << order;

@@ -2596,7 +2658,8 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
/*
* Free a pcp page
*/
-void free_unref_page(struct page *page, unsigned int order)
+void free_unref_page(struct page *page, unsigned int order,
+ unsigned short int ugen)
{
unsigned long __maybe_unused UP_flags;
struct per_cpu_pages *pcp;
@@ -2622,7 +2685,7 @@ void free_unref_page(struct page *page, unsigned int order)
migratetype = get_pfnblock_migratetype(page, pfn);
if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
if (unlikely(is_migrate_isolate(migratetype))) {
- free_one_page(page_zone(page), page, pfn, order, FPI_NONE);
+ free_one_page(page_zone(page), page, pfn, order, FPI_NONE, ugen);
return;
}
migratetype = MIGRATE_MOVABLE;
@@ -2632,10 +2695,10 @@ void free_unref_page(struct page *page, unsigned int order)
pcp_trylock_prepare(UP_flags);
pcp = pcp_spin_trylock(zone->per_cpu_pageset);
if (pcp) {
- free_unref_page_commit(zone, pcp, page, migratetype, order);
+ free_unref_page_commit(zone, pcp, page, migratetype, order, ugen);
pcp_spin_unlock(pcp);
} else {
- free_one_page(zone, page, pfn, order, FPI_NONE);
+ free_one_page(zone, page, pfn, order, FPI_NONE, ugen);
}
pcp_trylock_finish(UP_flags);
}
@@ -2666,7 +2729,7 @@ void free_unref_folios(struct folio_batch *folios)
*/
if (!pcp_allowed_order(order)) {
free_one_page(folio_zone(folio), &folio->page,
- pfn, order, FPI_NONE);
+ pfn, order, FPI_NONE, 0);
continue;
}
folio->private = (void *)(unsigned long)order;
@@ -2702,7 +2765,7 @@ void free_unref_folios(struct folio_batch *folios)
*/
if (is_migrate_isolate(migratetype)) {
free_one_page(zone, &folio->page, pfn,
- order, FPI_NONE);
+ order, FPI_NONE, 0);
continue;
}

@@ -2715,7 +2778,7 @@ void free_unref_folios(struct folio_batch *folios)
if (unlikely(!pcp)) {
pcp_trylock_finish(UP_flags);
free_one_page(zone, &folio->page, pfn,
- order, FPI_NONE);
+ order, FPI_NONE, 0);
continue;
}
locked_zone = zone;
@@ -2730,7 +2793,7 @@ void free_unref_folios(struct folio_batch *folios)

trace_mm_page_free_batched(&folio->page);
free_unref_page_commit(zone, pcp, &folio->page, migratetype,
- order);
+ order, 0);
}

if (pcp) {
@@ -2781,6 +2844,11 @@ int __isolate_free_page(struct page *page, unsigned int order)
return 0;
}

+ /*
+ * del_page_from_free_list() updates current's ugen. The user of
+ * the isolated page should check_flush_task_ugen() before using
+ * it.
+ */
del_page_from_free_list(page, zone, order, mt);

/*
@@ -2822,7 +2890,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)

/* Return isolated page to tail of freelist. */
__free_one_page(page, page_to_pfn(page), zone, order, mt,
- FPI_SKIP_REPORT_NOTIFY | FPI_TO_TAIL);
+ FPI_SKIP_REPORT_NOTIFY | FPI_TO_TAIL, 0);
}

/*
@@ -2965,6 +3033,11 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
}

page = list_first_entry(list, struct page, pcp_list);
+
+ /*
+ * Pairs with check_flush_task_ugen() in prep_new_page().
+ */
+ update_task_ugen(page_private(page));
list_del(&page->pcp_list);
pcp->count -= 1 << order;
} while (check_new_pages(page, order));
@@ -4791,11 +4864,11 @@ void __free_pages(struct page *page, unsigned int order)
struct alloc_tag *tag = pgalloc_tag_get(page);

if (put_page_testzero(page))
- free_unref_page(page, order);
+ free_unref_page(page, order, 0);
else if (!head) {
pgalloc_tag_sub_pages(tag, (1 << order) - 1);
while (order-- > 0)
- free_unref_page(page + (1 << order), order);
+ free_unref_page(page + (1 << order), order, 0);
}
}
EXPORT_SYMBOL(__free_pages);
@@ -4857,7 +4930,7 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);

if (page_ref_sub_and_test(page, count))
- free_unref_page(page, compound_order(page));
+ free_unref_page(page, compound_order(page), 0);
}
EXPORT_SYMBOL(__page_frag_cache_drain);

@@ -4898,7 +4971,7 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
goto refill;

if (unlikely(nc->pfmemalloc)) {
- free_unref_page(page, compound_order(page));
+ free_unref_page(page, compound_order(page), 0);
goto refill;
}

@@ -4942,7 +5015,7 @@ void page_frag_free(void *addr)
struct page *page = virt_to_head_page(addr);

if (unlikely(put_page_testzero(page)))
- free_unref_page(page, compound_order(page));
+ free_unref_page(page, compound_order(page), 0);
}
EXPORT_SYMBOL(page_frag_free);

@@ -6751,10 +6824,19 @@ void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
BUG_ON(!PageBuddy(page));
VM_WARN_ON(get_pageblock_migratetype(page) != MIGRATE_ISOLATE);
order = buddy_order(page);
+ /*
+ * del_page_from_free_list() updates current's ugen that
+ * pairs with check_flush_task_ugen() below in this function.
+ */
del_page_from_free_list(page, zone, order, MIGRATE_ISOLATE);
pfn += (1 << order);
}
spin_unlock_irqrestore(&zone->lock, flags);
+
+ /*
+ * Check and flush before using it.
+ */
+ check_flush_task_ugen();
}
#endif

@@ -6830,6 +6912,11 @@ bool take_page_off_buddy(struct page *page)
int migratetype = get_pfnblock_migratetype(page_head,
pfn_head);

+ /*
+ * del_page_from_free_list() updates current's
+ * ugen that pairs with check_flush_task_ugen() below
+ * in this function.
+ */
del_page_from_free_list(page_head, zone, page_order,
migratetype);
break_down_buddy_pages(zone, page_head, page, 0,
@@ -6842,6 +6929,11 @@ bool take_page_off_buddy(struct page *page)
break;
}
spin_unlock_irqrestore(&zone->lock, flags);
+
+ /*
+ * Check and flush before using it.
+ */
+ check_flush_task_ugen();
return ret;
}

@@ -6860,7 +6952,7 @@ bool put_page_back_buddy(struct page *page)
int migratetype = get_pfnblock_migratetype(page, pfn);

ClearPageHWPoisonTakenOff(page);
- __free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE);
+ __free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE, 0);
if (TestClearPageHWPoison(page)) {
ret = true;
}
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 042937d5abe4..5823da60a621 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -260,6 +260,12 @@ static void unset_migratetype_isolate(struct page *page, int migratetype)
zone->nr_isolate_pageblock--;
out:
spin_unlock_irqrestore(&zone->lock, flags);
+
+ /*
+ * Check and flush for the pages that have been isolated.
+ */
+ if (isolated_page)
+ check_flush_task_ugen();
}

static inline struct page *
diff --git a/mm/page_reporting.c b/mm/page_reporting.c
index e4c428e61d8c..4f94a3ea1b22 100644
--- a/mm/page_reporting.c
+++ b/mm/page_reporting.c
@@ -221,6 +221,11 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,
/* release lock before waiting on report processing */
spin_unlock_irq(&zone->lock);

+ /*
+ * Check and flush before using the isolated pages.
+ */
+ check_flush_task_ugen();
+
/* begin processing pages in local list */
err = prdev->report(prdev, sgl, PAGE_REPORTING_CAPACITY);

@@ -253,6 +258,11 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,

spin_unlock_irq(&zone->lock);

+ /*
+ * Check and flush before using the isolated pages.
+ */
+ check_flush_task_ugen();
+
return err;
}

diff --git a/mm/swap.c b/mm/swap.c
index f0d478eee292..0fc5a5e8457f 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -126,10 +126,20 @@ void __folio_put(struct folio *folio)
if (folio_test_large(folio) && folio_test_large_rmappable(folio))
folio_undo_large_rmappable(folio);
mem_cgroup_uncharge(folio);
- free_unref_page(&folio->page, folio_order(folio));
+ free_unref_page(&folio->page, folio_order(folio), 0);
}
EXPORT_SYMBOL(__folio_put);

+void __folio_put_ugen(struct folio *folio, unsigned short int ugen)
+{
+ if (WARN_ON(!can_luf_folio(folio)))
+ return;
+
+ page_cache_release(folio);
+ mem_cgroup_uncharge(folio);
+ free_unref_page(&folio->page, 0, ugen);
+}
+
/**
* put_pages_list() - release a list of pages
* @pages: list of pages threaded on page->lru
--
2.17.1


2024-05-20 02:26:48

by Byungchul Park

[permalink] [raw]
Subject: [RESEND PATCH v10 09/12] mm: implement LUF(Lazy Unmap Flush) deferring tlb flush when folios get unmapped

A new mechanism, LUF(Lazy Unmap Flush), defers tlb flush until folios
that have been unmapped and freed, eventually get allocated again. It's
safe for folios that had been mapped read-only and were unmapped, since
the contents of the folios don't change while staying in pcp or buddy
so we can still read the data through the stale tlb entries.

tlb flush can be deferred when folios get unmapped as long as it
guarantees to perform tlb flush needed, before the folios actually
become used, of course, only if all the corresponding ptes don't have
write permission. Otherwise, the system will get messed up.

To achieve that:

1. For the folios that map only to non-writable tlb entries, prevent
tlb flush during unmapping but perform it just before the folios
actually become used, out of buddy or pcp.

2. When any non-writable ptes change to writable e.g. through fault
handler, give up luf mechanism and perform tlb flush required
right away.

3. When a writable mapping is created e.g. through mmap(), give up
luf mechanism and perform tlb flush required right away.

No matter what type of workload is used for performance evaluation, the
result would be positive thanks to the unconditional reduction of tlb
flushes, tlb misses and interrupts. For the test, I picked up one of
the most popular and heavy workload, llama.cpp that is a
LLM(Large Language Model) inference engine.

The result would depend on memory latency and how often reclaim runs,
which implies tlb miss overhead and how many times unmapping happens.
In my system, the result shows:

1. tlb flushes are reduced about 95%.
2. tlb misses(itlb) are reduced about 80%.
3. tlb misses(dtlb store) are reduced about 57%.
4. tlb misses(dtlb load) are reduced about 24%.
5. tlb shootdown interrupts are reduced about 95%.
6. The test program runtime is reduced about 5%.

The test environment and the result is like:

Machine: bare metal, x86_64, Intel(R) Xeon(R) Gold 6430
CPU: 1 socket 64 core with hyper thread on
Numa: 2 nodes (64 CPUs DRAM 42GB, no CPUs CXL expander 98GB)
Config: swap off, numa balancing tiering on, demotion enabled

The test set:

llama.cpp/main -m $(70G_model1) -p "who are you?" -s 1 -t 15 -n 20 &
llama.cpp/main -m $(70G_model2) -p "who are you?" -s 1 -t 15 -n 20 &
llama.cpp/main -m $(70G_model3) -p "who are you?" -s 1 -t 15 -n 20 &
wait

where -t: nr of threads, -s: seed used to make the runtime stable,
-n: nr of tokens that determines the runtime, -p: prompt to ask,
-m: LLM model to use.

Run the test set 10 times successively with caches dropped every run
via 'echo 3 > /proc/sys/vm/drop_caches'. Each inference prints its
runtime at the end of each.

1. Runtime from the output of llama.cpp:

BEFORE
------
llama_print_timings: total time = 1002461.95 ms / 24 tokens
llama_print_timings: total time = 1044978.38 ms / 24 tokens
llama_print_timings: total time = 1000653.09 ms / 24 tokens
llama_print_timings: total time = 1047104.80 ms / 24 tokens
llama_print_timings: total time = 1069430.36 ms / 24 tokens
llama_print_timings: total time = 1068201.16 ms / 24 tokens
llama_print_timings: total time = 1078092.59 ms / 24 tokens
llama_print_timings: total time = 1073200.45 ms / 24 tokens
llama_print_timings: total time = 1067136.00 ms / 24 tokens
llama_print_timings: total time = 1076442.56 ms / 24 tokens
llama_print_timings: total time = 1004142.64 ms / 24 tokens
llama_print_timings: total time = 1042942.65 ms / 24 tokens
llama_print_timings: total time = 999933.76 ms / 24 tokens
llama_print_timings: total time = 1046548.83 ms / 24 tokens
llama_print_timings: total time = 1068671.48 ms / 24 tokens
llama_print_timings: total time = 1068285.76 ms / 24 tokens
llama_print_timings: total time = 1077789.63 ms / 24 tokens
llama_print_timings: total time = 1071558.93 ms / 24 tokens
llama_print_timings: total time = 1066181.55 ms / 24 tokens
llama_print_timings: total time = 1076767.53 ms / 24 tokens
llama_print_timings: total time = 1004065.63 ms / 24 tokens
llama_print_timings: total time = 1044522.13 ms / 24 tokens
llama_print_timings: total time = 999725.33 ms / 24 tokens
llama_print_timings: total time = 1047510.77 ms / 24 tokens
llama_print_timings: total time = 1068010.27 ms / 24 tokens
llama_print_timings: total time = 1068999.31 ms / 24 tokens
llama_print_timings: total time = 1077648.05 ms / 24 tokens
llama_print_timings: total time = 1071378.96 ms / 24 tokens
llama_print_timings: total time = 1066326.32 ms / 24 tokens
llama_print_timings: total time = 1077088.92 ms / 24 tokens

AFTER
-----
llama_print_timings: total time = 988522.03 ms / 24 tokens
llama_print_timings: total time = 997204.52 ms / 24 tokens
llama_print_timings: total time = 996605.86 ms / 24 tokens
llama_print_timings: total time = 991985.50 ms / 24 tokens
llama_print_timings: total time = 1035143.31 ms / 24 tokens
llama_print_timings: total time = 993660.18 ms / 24 tokens
llama_print_timings: total time = 983082.14 ms / 24 tokens
llama_print_timings: total time = 990431.36 ms / 24 tokens
llama_print_timings: total time = 992707.09 ms / 24 tokens
llama_print_timings: total time = 992673.27 ms / 24 tokens
llama_print_timings: total time = 989285.43 ms / 24 tokens
llama_print_timings: total time = 996710.06 ms / 24 tokens
llama_print_timings: total time = 996534.64 ms / 24 tokens
llama_print_timings: total time = 991344.17 ms / 24 tokens
llama_print_timings: total time = 1035210.84 ms / 24 tokens
llama_print_timings: total time = 994714.13 ms / 24 tokens
llama_print_timings: total time = 984184.15 ms / 24 tokens
llama_print_timings: total time = 990909.45 ms / 24 tokens
llama_print_timings: total time = 991881.48 ms / 24 tokens
llama_print_timings: total time = 993918.03 ms / 24 tokens
llama_print_timings: total time = 990061.34 ms / 24 tokens
llama_print_timings: total time = 998076.69 ms / 24 tokens
llama_print_timings: total time = 997082.59 ms / 24 tokens
llama_print_timings: total time = 990677.58 ms / 24 tokens
llama_print_timings: total time = 1036054.94 ms / 24 tokens
llama_print_timings: total time = 994125.93 ms / 24 tokens
llama_print_timings: total time = 982467.01 ms / 24 tokens
llama_print_timings: total time = 990191.60 ms / 24 tokens
llama_print_timings: total time = 993319.24 ms / 24 tokens
llama_print_timings: total time = 992540.57 ms / 24 tokens

2. tlb shootdowns from 'cat /proc/interrupts':

BEFORE
------
TLB:
125553646 141418810 161932620 176853972 186655697 190399283
192143823 196414038 192872439 193313658 193395617 192521416
190788161 195067598 198016061 193607347 194293972 190786732
191545637 194856822 191801931 189634535 190399803 196365922
195268398 190115840 188050050 193194908 195317617 190820190
190164820 185556071 226797214 229592631 216112464 209909495
205575979 205950252 204948111 197999795 198892232 205287952
199344631 195015158 195869844 198858745 195692876 200961904
203463252 205921722 199850838 206145986 199613202 199961345
200129577 203020521 207873649 203697671 197093386 204243803
205993323 200934664 204193128 194435376 TLB shootdowns

AFTER
-----
TLB:
5648092 6610142 7032849 7882308 8088518 8352310
8656536 8705136 8647426 8905583 8985408 8704522
8884344 9026261 8929974 8869066 8877575 8810096
8770984 8754503 8801694 8865925 8787524 8656432
8755912 8682034 8773935 8832925 8797997 8515777
8481240 8891258 10595243 10285973 9756935 9573681
9398968 9069244 9242984 8899009 9310690 9029095
9069758 9105825 9092703 9270202 9460287 9258546
9180415 9232723 9270611 9175020 9490420 9360316
9420818 9057663 9525631 9310152 9152242 8654483
9181804 9050847 8919916 8883856 TLB shootdowns

3. tlb numbers from 'perf stat' per test set:

BEFORE
------
3163679332 dTLB-load-misses
2017751856 dTLB-store-misses
327092903 iTLB-load-misses
1357543886 tlb:tlb_flush

AFTER
-----
2394694609 dTLB-load-misses
861144167 dTLB-store-misses
64055579 iTLB-load-misses
69175002 tlb:tlb_flush

Signed-off-by: Byungchul Park <[email protected]>
---
include/linux/sched.h | 9 ++
mm/internal.h | 43 +++++-
mm/memory.c | 8 ++
mm/mmap.c | 8 ++
mm/rmap.c | 308 +++++++++++++++++++++++++++++++++++++++++-
5 files changed, 366 insertions(+), 10 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 0915390b1b5e..6f83703ec284 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1340,8 +1340,17 @@ struct task_struct {

struct tlbflush_unmap_batch tlb_ubc;
struct tlbflush_unmap_batch tlb_ubc_ro;
+ struct tlbflush_unmap_batch tlb_ubc_luf;
unsigned short int ugen;

+#if defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH)
+ /*
+ * whether all the mappings of a folio during unmap are read-only
+ * so that luf can work on the folio
+ */
+ bool can_luf;
+#endif
+
/* Cache last used pipe for splice(): */
struct pipe_inode_info *splice_pipe;

diff --git a/mm/internal.h b/mm/internal.h
index 805f0e6ecab4..2a44194f5d39 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1517,6 +1517,38 @@ void workingset_update_node(struct xa_node *node);
extern struct list_lru shadow_nodes;

#if defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH)
+unsigned short int try_to_unmap_luf(void);
+void check_luf_flush(unsigned short int ugen);
+void luf_flush(void);
+
+/*
+ * Reset, at the beginning of every rmap traverse for unmap, the
+ * indicator that tracks whether any writable mapping has been seen.
+ * luf can work only when all the mappings are read-only.
+ */
+static inline void can_luf_init(void)
+{
+ current->can_luf = true;
+}
+
+/*
+ * Mark the folio as not applicable to luf once a writable or
+ * dirty pte is found during the rmap traverse for unmap.
+ */
+static inline void can_luf_fail(void)
+{
+ current->can_luf = false;
+}
+
+/*
+ * Check that all the mappings are read-only and that at least one
+ * read-only mapping exists.
+ */
+static inline bool can_luf_test(void)
+{
+ return current->can_luf && current->tlb_ubc_ro.flush_required;
+}
+
static inline unsigned short int ugen_latest(unsigned short int a, unsigned short int b)
{
if (!a || !b)
@@ -1546,10 +1578,7 @@ static inline unsigned short int hand_over_task_ugen(void)

static inline void check_flush_task_ugen(void)
{
- /*
- * XXX: luf mechanism will handle this. For now, do nothing but
- * reset current's ugen to finalize this turn.
- */
+ check_luf_flush(current->ugen);
current->ugen = 0;
}

@@ -1578,6 +1607,12 @@ static inline bool can_luf_folio(struct folio *f)
return can_luf;
}
#else /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
+static inline unsigned short int try_to_unmap_luf(void) { return 0; }
+static inline void check_luf_flush(unsigned short int ugen) {}
+static inline void luf_flush(void) {}
+static inline void can_luf_init(void) {}
+static inline void can_luf_fail(void) {}
+static inline bool can_luf_test(void) { return false; }
static inline unsigned short int ugen_latest(unsigned short int a, unsigned short int b) { return 0; }
static inline void update_task_ugen(unsigned short int ugen) {}
static inline unsigned short int hand_over_task_ugen(void) { return 0; }
diff --git a/mm/memory.c b/mm/memory.c
index 33d87b64d15d..f218c275d307 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3617,6 +3617,14 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
if (vmf->page)
folio = page_folio(vmf->page);

+ /*
+ * The folio may or may not be one that is under luf's control
+ * and might be about to change its permission to writable.
+ * Conservatively give up deferring tlb flush just in case.
+ */
+ if (folio)
+ luf_flush();
+
/*
* Shared mapping: we are guaranteed to have VM_WRITE and
* FAULT_FLAG_WRITE set at this point.
diff --git a/mm/mmap.c b/mm/mmap.c
index 47363e7f7ea2..3b3bece4b079 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1271,6 +1271,14 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
pkey = 0;
}

+ /*
+ * This mmap may or may not map folios that are under luf's
+ * control. Conservatively give up deferring tlb flush just in
+ * case.
+ */
+ if (prot & PROT_WRITE)
+ luf_flush();
+
/* Do simple checking here so the lower-level routines won't have
* to. we assume access permissions have been handled by the open
* of the memory object, so we don't do any here.
diff --git a/mm/rmap.c b/mm/rmap.c
index 328b5e2217e6..e42783c02114 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -635,6 +635,270 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
}

#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+static struct tlbflush_unmap_batch luf_ubc;
+static DEFINE_SPINLOCK(luf_lock);
+
+/*
+ * Never return zero, so a valid ugen can be distinguished from the
+ * invalid ugen, 0.
+ */
+static unsigned short int ugen_next(unsigned short int a)
+{
+ return a + 1 ?: a + 2;
+}
+
+static bool ugen_before(unsigned short int a, unsigned short int b)
+{
+ return (short int)(a - b) < 0;
+}
+
+/*
+ * Need to synchronize between tlb flush and managing pending CPUs in
+ * luf_ubc. Take a look at the following scenario, where CPU0 is in
+ * try_to_unmap_flush() and CPU1 is in migrate_pages_batch():
+ *
+ * CPU0 CPU1
+ * ---- ----
+ * tlb flush
+ * unmap folios (needing tlb flush)
+ * add pending CPUs to luf_ubc
+ * <-- not performed tlb flush needed by
+ * the unmap above yet but the request
+ * will be cleared by CPU0 shortly. bug!
+ * clear the CPUs from luf_ubc
+ *
+ * The pending CPUs added in CPU1 should not be cleared from luf_ubc
+ * in CPU0 because the tlb flush for luf_ubc added in CPU1 has not
+ * been performed this turn. To avoid this, use the 'on_flushing'
+ * variable to prevent adding pending CPUs to luf_ubc and to give up
+ * the luf mechanism if someone is in the middle of a tlb flush, like:
+ *
+ * CPU0 CPU1
+ * ---- ----
+ * on_flushing++
+ * tlb flush
+ * unmap folios (needing tlb flush)
+ * if on_flushing == 0:
+ * add pending CPUs to luf_ubc
+ * else: <-- hit
+ * give up luf mechanism
+ * clear the CPUs from luf_ubc
+ * on_flushing--
+ *
+ * Only the following case would be allowed for luf mechanism to work:
+ *
+ * CPU0 CPU1
+ * ---- ----
+ * unmap folios (needing tlb flush)
+ * if on_flushing == 0: <-- hit
+ * add pending CPUs to luf_ubc
+ * else:
+ * give up luf mechanism
+ * on_flushing++
+ * tlb flush
+ * clear the CPUs from luf_ubc
+ * on_flushing--
+ */
+static int on_flushing;
+
+/*
+ * When more than one thread enters check_luf_flush() at the same
+ * time, each should wait for the request in progress to be done to
+ * avoid the following scenario, where both CPUs are in
+ * check_luf_flush():
+ *
+ * CPU0 CPU1
+ * ---- ----
+ * if !luf_ubc.flush_required:
+ * return
+ * luf_ubc.flush_required = false
+ * if !luf_ubc.flush_required: <-- hit
+ * return <-- not performed tlb flush
+ * needed yet but return. bug!
+ * luf_ubc.flush_required = false
+ * try_to_unmap_flush()
+ * finalize
+ * try_to_unmap_flush() <-- performs tlb flush needed
+ * finalize
+ *
+ * So it should be handled:
+ *
+ * CPU0 CPU1
+ * ---- ----
+ * atomically execute {
+ * if luf_on_flushing:
+ * wait for the completion
+ * return
+ * if !luf_ubc.flush_required:
+ * return
+ * luf_ubc.flush_required = false
+ * luf_on_flushing = true
+ * }
+ * atomically execute {
+ * if luf_on_flushing: <-- hit
+ * wait for the completion
+ * return <-- tlb flush needed is done
+ * if !luf_ubc.flush_required:
+ * return
+ * luf_ubc.flush_required = false
+ * luf_on_flushing = true
+ * }
+ *
+ * try_to_unmap_flush()
+ * luf_on_flushing = false
+ * finalize
+ * try_to_unmap_flush() <-- performs tlb flush needed
+ * luf_on_flushing = false
+ * finalize
+ */
+static bool luf_on_flushing;
+
+/*
+ * Generation number for the current request of deferred tlb flush.
+ */
+static unsigned short int luf_gen;
+
+/*
+ * Generation number for the next request.
+ */
+static unsigned short int luf_gen_next = 1;
+
+/*
+ * Generation number for the latest request handled.
+ */
+static unsigned short int luf_gen_done;
+
+unsigned short int try_to_unmap_luf(void)
+{
+ struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+ struct tlbflush_unmap_batch *tlb_ubc_luf = &current->tlb_ubc_luf;
+ unsigned long flags;
+ unsigned short int ugen;
+
+ if (!spin_trylock_irqsave(&luf_lock, flags)) {
+ /*
+ * Give up the luf mechanism. Just let the tlb flush needed
+ * be handled by try_to_unmap_flush() at the caller side.
+ */
+ fold_ubc(tlb_ubc, tlb_ubc_luf);
+ return 0;
+ }
+
+ if (on_flushing || luf_on_flushing) {
+ spin_unlock_irqrestore(&luf_lock, flags);
+
+ /*
+ * Give up the luf mechanism. Just let the tlb flush needed
+ * be handled by try_to_unmap_flush() at the caller side.
+ */
+ fold_ubc(tlb_ubc, tlb_ubc_luf);
+ return 0;
+ }
+
+ fold_ubc(&luf_ubc, tlb_ubc_luf);
+ ugen = luf_gen = luf_gen_next;
+ spin_unlock_irqrestore(&luf_lock, flags);
+
+ return ugen;
+}
+
+static void rmap_flush_start(void)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&luf_lock, flags);
+ on_flushing++;
+ spin_unlock_irqrestore(&luf_lock, flags);
+}
+
+static void rmap_flush_end(struct tlbflush_unmap_batch *batch)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&luf_lock, flags);
+ if (arch_tlbbatch_done(&luf_ubc.arch, &batch->arch)) {
+ luf_ubc.flush_required = false;
+ luf_ubc.writable = false;
+ }
+ on_flushing--;
+ spin_unlock_irqrestore(&luf_lock, flags);
+}
+
+/*
+ * It must be guaranteed to have completed tlb flush requested on return.
+ */
+void check_luf_flush(unsigned short int ugen)
+{
+ struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+ unsigned long flags;
+
+ /*
+ * Nothing has been requested. We are done.
+ */
+ if (!ugen)
+ return;
+retry:
+ /*
+ * If luf_gen_done has already caught up with or passed ugen,
+ * the tlb flush we need has been done.
+ */
+ if (!ugen_before(READ_ONCE(luf_gen_done), ugen))
+ return;
+
+ spin_lock_irqsave(&luf_lock, flags);
+
+ /*
+ * With luf_lock held, we might observe an updated luf_gen_done.
+ */
+ if (ugen_next(luf_gen_done) != ugen) {
+ spin_unlock_irqrestore(&luf_lock, flags);
+ return;
+ }
+
+ /*
+ * Others are already working for us.
+ */
+ if (luf_on_flushing) {
+ spin_unlock_irqrestore(&luf_lock, flags);
+ goto retry;
+ }
+
+ if (!luf_ubc.flush_required) {
+ spin_unlock_irqrestore(&luf_lock, flags);
+ return;
+ }
+
+ fold_ubc(tlb_ubc, &luf_ubc);
+ luf_gen_next = ugen_next(luf_gen);
+ luf_on_flushing = true;
+ spin_unlock_irqrestore(&luf_lock, flags);
+
+ try_to_unmap_flush();
+
+ spin_lock_irqsave(&luf_lock, flags);
+ luf_on_flushing = false;
+
+ /*
+ * luf_gen_done can be read by others without luf_lock
+ * held, so use WRITE_ONCE() to prevent tearing.
+ */
+ WRITE_ONCE(luf_gen_done, ugen);
+ spin_unlock_irqrestore(&luf_lock, flags);
+}
+
+void luf_flush(void)
+{
+ unsigned long flags;
+ unsigned short int ugen;
+
+ /*
+ * Obtain the latest ugen number.
+ */
+ spin_lock_irqsave(&luf_lock, flags);
+ ugen = luf_gen;
+ spin_unlock_irqrestore(&luf_lock, flags);
+
+ check_luf_flush(ugen);
+}

void fold_ubc(struct tlbflush_unmap_batch *dst,
struct tlbflush_unmap_batch *src)
@@ -666,13 +930,15 @@ void fold_ubc(struct tlbflush_unmap_batch *dst,
void try_to_unmap_flush(void)
{
struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
- struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
+ struct tlbflush_unmap_batch *tlb_ubc_luf = &current->tlb_ubc_luf;

- fold_ubc(tlb_ubc, tlb_ubc_ro);
+ fold_ubc(tlb_ubc, tlb_ubc_luf);
if (!tlb_ubc->flush_required)
return;

+ rmap_flush_start();
arch_tlbbatch_flush(&tlb_ubc->arch);
+ rmap_flush_end(tlb_ubc);
arch_tlbbatch_clear(&tlb_ubc->arch);
tlb_ubc->flush_required = false;
tlb_ubc->writable = false;
@@ -682,9 +948,9 @@ void try_to_unmap_flush(void)
void try_to_unmap_flush_dirty(void)
{
struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
- struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
+ struct tlbflush_unmap_batch *tlb_ubc_luf = &current->tlb_ubc_luf;

- if (tlb_ubc->writable || tlb_ubc_ro->writable)
+ if (tlb_ubc->writable || tlb_ubc_luf->writable)
try_to_unmap_flush();
}

@@ -708,9 +974,15 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
if (!pte_accessible(mm, pteval))
return;

- if (pte_write(pteval))
+ if (pte_write(pteval)) {
tlb_ubc = &current->tlb_ubc;
- else
+
+ /*
+ * luf cannot work with the folio once a writable or
+ * dirty mapping has been found on it.
+ */
+ can_luf_fail();
+ } else
tlb_ubc = &current->tlb_ubc_ro;

arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr);
@@ -1976,11 +2248,23 @@ void try_to_unmap(struct folio *folio, enum ttu_flags flags)
.done = folio_not_mapped,
.anon_lock = folio_lock_anon_vma_read,
};
+ struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+ struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
+ struct tlbflush_unmap_batch *tlb_ubc_luf = &current->tlb_ubc_luf;
+ bool can_luf;
+
+ can_luf_init();

if (flags & TTU_RMAP_LOCKED)
rmap_walk_locked(folio, &rwc);
else
rmap_walk(folio, &rwc);
+
+ can_luf = can_luf_folio(folio) && can_luf_test();
+ if (can_luf)
+ fold_ubc(tlb_ubc_luf, tlb_ubc_ro);
+ else
+ fold_ubc(tlb_ubc, tlb_ubc_ro);
}

/*
@@ -2325,6 +2609,10 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
.done = folio_not_mapped,
.anon_lock = folio_lock_anon_vma_read,
};
+ struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+ struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
+ struct tlbflush_unmap_batch *tlb_ubc_luf = &current->tlb_ubc_luf;
+ bool can_luf;

/*
* Migration always ignores mlock and only supports TTU_RMAP_LOCKED and
@@ -2349,10 +2637,18 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
if (!folio_test_ksm(folio) && folio_test_anon(folio))
rwc.invalid_vma = invalid_migration_vma;

+ can_luf_init();
+
if (flags & TTU_RMAP_LOCKED)
rmap_walk_locked(folio, &rwc);
else
rmap_walk(folio, &rwc);
+
+ can_luf = can_luf_folio(folio) && can_luf_test();
+ if (can_luf)
+ fold_ubc(tlb_ubc_luf, tlb_ubc_ro);
+ else
+ fold_ubc(tlb_ubc, tlb_ubc_ro);
}

#ifdef CONFIG_DEVICE_PRIVATE
--
2.17.1


2024-05-20 02:27:32

by Byungchul Park

[permalink] [raw]
Subject: [RESEND PATCH v10 03/12] riscv, tlb: add APIs manipulating tlb batch's arch data

A new mechanism, LUF(Lazy Unmap Flush), defers tlb flush until folios
that have been unmapped and freed, eventually get allocated again. It's
safe for folios that had been mapped read only and were unmapped, since
the contents of the folios don't change while staying in pcp or buddy
so we can still read the data through the stale tlb entries.

This is a preparation for the mechanism that needs to recognize
read-only tlb entries. The tlb batch arch data is separated into two,
one for read-only entries and the other for writable ones, and the two
are merged when needed.

It also optimizes tlb shootdown by skipping CPUs that have already
performed the tlb flush needed since then. To support this, add the
APIs that manipulate the arch data for riscv.
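
For reference, here is a minimal sketch of how the generic batching
layer is expected to combine two batches with the helpers added here.
It is illustrative only: fold_batch() is a hypothetical name modeled
on the fold_ubc() used by the later LUF patches, not part of this
riscv patch.

    static void fold_batch(struct tlbflush_unmap_batch *dst,
                           struct tlbflush_unmap_batch *src)
    {
            if (!src->flush_required)
                    return;

            /* Accumulate the CPUs that still need a tlb flush. */
            arch_tlbbatch_fold(&dst->arch, &src->arch);
            dst->flush_required = true;
            dst->writable = dst->writable || src->writable;

            /* The source batch has been handed over; reset it. */
            arch_tlbbatch_clear(&src->arch);
            src->flush_required = false;
            src->writable = false;
    }

arch_tlbbatch_done() works the other way around: after a flush, it
removes the CPUs just flushed from a still-pending batch and reports
whether that batch has been fully covered.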

Signed-off-by: Byungchul Park <[email protected]>
---
arch/riscv/include/asm/tlbflush.h | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)

diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index 4112cc8d1d69..480c082ccde3 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -8,6 +8,7 @@
#define _ASM_RISCV_TLBFLUSH_H

#include <linux/mm_types.h>
+#include <linux/cpumask.h>
#include <asm/smp.h>
#include <asm/errata_list.h>

@@ -55,6 +56,26 @@ void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
void arch_flush_tlb_batched_pending(struct mm_struct *mm);
void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);

+static inline void arch_tlbbatch_clear(struct arch_tlbflush_unmap_batch *batch)
+{
+ cpumask_clear(&batch->cpumask);
+
+}
+
+static inline void arch_tlbbatch_fold(struct arch_tlbflush_unmap_batch *bdst,
+ struct arch_tlbflush_unmap_batch *bsrc)
+{
+ cpumask_or(&bdst->cpumask, &bdst->cpumask, &bsrc->cpumask);
+
+}
+
+static inline bool arch_tlbbatch_done(struct arch_tlbflush_unmap_batch *bdst,
+ struct arch_tlbflush_unmap_batch *bsrc)
+{
+ return !cpumask_andnot(&bdst->cpumask, &bdst->cpumask, &bsrc->cpumask);
+
+}
+
#else /* CONFIG_SMP && CONFIG_MMU */

#define flush_tlb_all() local_flush_tlb_all()
--
2.17.1


2024-05-22 22:56:49

by Andrew Morton

[permalink] [raw]
Subject: Re: [RESEND PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%

On Mon, 20 May 2024 11:17:22 +0900 Byungchul Park <[email protected]> wrote:

> While I'm working with a tiered memory system e.g. CXL memory, I have
> been facing migration overhead esp. tlb shootdown on promotion or
> demotion between different tiers. Yeah.. most tlb shootdowns on
> migration through hinting fault can be avoided thanks to Huang Ying's
> work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
> is inaccessible"). See the following link for more information:
>
> https://lore.kernel.org/lkml/[email protected]/
>
> However, it's only for migration through hinting fault. I thought it'd
> be much better if we have a general mechanism to reduce all the tlb
> numbers that we can apply to any unmap code, that we normally believe
> tlb flush should be followed.
>
> I'm suggesting a new mechanism, LUF(Lazy Unmap Flush), defers tlb flush
> until folios that have been unmapped and freed, eventually get allocated
> again. It's safe for folios that had been mapped read-only and were
> unmapped, since the contents of the folios don't change while staying in
> pcp or buddy so we can still read the data through the stale tlb entries.

Version 10 and no reviewed-by's or acked-by's. Reviewing the review
history isn't helped by the change in the naming of the patch series.

Seems that you're measuring a ~5% overall speedup in a realistic
workload? That's nice.

I'll defer this for a week or so to see what reviewers have to say. If
"nothing", please poke me and I guess I'll merge it up to see what
happens ;)


2024-05-23 02:05:58

by Byungchul Park

[permalink] [raw]
Subject: Re: [RESEND PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%

On Wed, May 22, 2024 at 03:56:41PM -0700, Andrew Morton wrote:
> On Mon, 20 May 2024 11:17:22 +0900 Byungchul Park <[email protected]> wrote:
>
> > While I'm working with a tiered memory system e.g. CXL memory, I have
> > been facing migration overhead esp. tlb shootdown on promotion or
> > demotion between different tiers. Yeah.. most tlb shootdowns on
> > migration through hinting fault can be avoided thanks to Huang Ying's
> > work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
> > is inaccessible"). See the following link for more information:
> >
> > https://lore.kernel.org/lkml/[email protected]/
> >
> > However, it's only for migration through hinting fault. I thought it'd
> > be much better if we have a general mechanism to reduce all the tlb
> > numbers that we can apply to any unmap code, that we normally believe
> > tlb flush should be followed.
> >
> > I'm suggesting a new mechanism, LUF(Lazy Unmap Flush), defers tlb flush
> > until folios that have been unmapped and freed, eventually get allocated
> > again. It's safe for folios that had been mapped read-only and were
> > unmapped, since the contents of the folios don't change while staying in
> > pcp or buddy so we can still read the data through the stale tlb entries.
>
> Version 10 and no reviewed-by's or acked-by's. Reviewing the review
> history isn't helped by the change in the naming of the patch series.
>
> Seems that you're measuring a ~5% overall speedup in a realistic
> workload? That's nice.
>
> I'll defer this for a week or so to see what reviewers have to say. If
> "nothing", please poke me and I guess I'll merge it up to see what

I will poke you and will be ready for that ;)

Byungchul

> happens ;)