2024-04-08 18:40:14

by Ryan Roberts

Subject: [PATCH v7 0/7] Swap-out mTHP without splitting

Hi All,

This series adds support for swapping out multi-size THP (mTHP) without needing
to first split the large folio via split_huge_page_to_list_to_order(). It
closely follows the approach already used to swap-out PMD-sized THP.

There are a few reasons for swapping out mTHP without splitting:

- Performance: It is expensive to split a large folio, and under extreme memory
pressure some workloads regressed when using 64K mTHP vs 4K small folios
because of this extra cost in the swap-out path. This series not only
eliminates that regression but makes it faster to swap out 64K mTHP than
4K small folios.

- Memory fragmentation avoidance: If we can avoid splitting a large folio,
memory is less likely to become fragmented, making it easier to re-allocate
a large folio in the future.

- Performance: Enables a separate series [7] to swap-in whole mTHPs, which
means we won't lose the TLB-efficiency benefits of mTHP once the memory has
been through a swap cycle.

I've done what I thought was the smallest change possible, and as a result, this
approach is only employed when the swap is backed by a non-rotating block device
(just as PMD-sized THP is supported today). Discussion against the RFC concluded
that this is sufficient.


Performance Testing
===================

I've run some swap performance tests on an Ampere Altra VM (arm64) with 8 CPUs.
The VM is set up with a 35G RAM block device as the swap device and the test is
run from inside a memcg limited to 40G memory. I've then run `usemem` from
vm-scalability with 70 processes, each allocating and writing 1G of memory. I've
repeated everything 6 times and taken the mean performance improvement relative
to the 4K page baseline:

| alloc size | baseline                | + this series |
|            | mm-unstable (~v6.9-rc1) |               |
|:-----------|------------------------:|--------------:|
| 4K Page    |                    0.0% |          1.3% |
| 64K THP    |                  -13.6% |         46.3% |
| 2M THP     |                   91.4% |         89.6% |

So with this change, the 64K swap performance goes from a 14% regression to a
46% improvement. While 2M THP shows a small regression, I'm confident that this
is just noise.

---
The series applies against mm-unstable (as of 2024-04-08) after dropping v6 of
this series from it. The performance numbers are from v5. Since the delta is
very small, I don't anticipate any performance changes. I'm optimistically hoping
this is the final version.


Changes since v6 [6]
====================

- patch #1
- swap_page_trans_huge_swapped() takes order instead of nr_pages (per Chris)
- patch #2
- Fix bug in swap_pte_batch() to consider swp pte bits (per David)
- Improved docs for clear_not_present_full_ptes() (per David)
- Improved docs for free_swap_and_cache_nr() (per David)
- patch #5
- Split out change to get_swap_pages() interface into own patch (per David)
- patch #6 (was patch #5)
- Improved readability of shrink_folio_list() with longer lines (per David)


Changes since v5 [5]
====================

- patch #2
- Don't bother trying to reclaim swap if none of the entries' refs have gone
to 0 in free_swap_and_cache_nr() (per Huang, Ying)
- patch #5
- Only update THP_SWPOUT_FALLBACK counters for pmd-mappable folios (per
Barry Song)
- patch #6
- Fix bug in madvise_cold_or_pageout_pte_range(): don't continue without ptl
(reported by Barry [8], syzbot [9])


Changes since v4 [4]
====================

- patch #3:
- Added R-B from Huang, Ying - thanks!
- patch #4:
- get_swap_pages() now takes order instead of nr_pages (per Huang, Ying)
- Removed WARN_ON_ONCE() from get_swap_pages()
- Reworded comment for scan_swap_map_try_ssd_cluster() (per Huang, Ying)
- Unified VM_WARN_ON()s in scan_swap_map_slots() to scan: (per Huang, Ying)
- Removed redundant "order == 0" check (per Huang, Ying)
- patch #5:
- Marked list_empty() check with data_race() (per David)
- Added R-B from Barry and David - thanks!
- patch #6:
- Implemented mkold_ptes() generic helper (per David)
- Enhanced folio_pte_batch() to report any_young (per David)
- madvise_cold_or_pageout_pte_range() sets old in batch (per David)
- Added R-B from Barry - thanks!


Changes since v3 [3]
====================

- Renamed SWAP_NEXT_NULL -> SWAP_NEXT_INVALID (per Huang, Ying)
- Simplified max offset calculation (per Huang, Ying)
- Reinstated struct percpu_cluster to contain per-cluster, per-order `next`
offset (per Huang, Ying)
- Removed swap_alloc_large() and merged its functionality into
scan_swap_map_slots() (per Huang, Ying)
- Avoid extra cost of folio ref and lock due to removal of CLUSTER_FLAG_HUGE
by freeing swap entries in batches (see patch 2) (per DavidH)
- vmscan splits the folio if it's partially mapped (per Barry Song, DavidH)
- Avoid splitting in MADV_PAGEOUT path (per Barry Song)
- Dropped "mm: swap: Simplify ssd behavior when scanner steals entry" patch
since it's not actually a problem for THP as I first thought.


Changes since v2 [2]
====================

- Reuse scan_swap_map_try_ssd_cluster() between order-0 and order > 0
allocation. This required some refactoring to make everything work nicely
(new patches 2 and 3).
- Fix bug where nr_swap_pages would say there are pages available but the
scanner would not be able to allocate them because they were reserved for the
per-cpu allocator. We now allow stealing of order-0 entries from the high
order per-cpu clusters (in addition to the existing stealing from order-0
per-cpu clusters).


Changes since v1 [1]
====================

- patch 1:
- Use cluster_set_count() instead of cluster_set_count_flag() in
swap_alloc_cluster() since we no longer have any flag to set. I was unable
to kill cluster_set_count_flag() as proposed against v1 as other call
sites depend on explicitly setting flags to 0.
- patch 2:
- Moved large_next[] array into percpu_cluster to make it per-cpu
(recommended by Huang, Ying).
- large_next[] array is dynamically allocated because PMD_ORDER is not
a compile-time constant for powerpc (fixes build error).


[1] https://lore.kernel.org/linux-mm/[email protected]/
[2] https://lore.kernel.org/linux-mm/[email protected]/
[3] https://lore.kernel.org/linux-mm/[email protected]/
[4] https://lore.kernel.org/linux-mm/[email protected]/
[5] https://lore.kernel.org/linux-mm/[email protected]/
[6] https://lore.kernel.org/linux-mm/[email protected]/
[7] https://lore.kernel.org/linux-mm/[email protected]/
[8] https://lore.kernel.org/linux-mm/CAGsJ_4yMOow27WDvN2q=E4HAtDd2PJ=OQ5Pj9DG+6FLWwNuXUw@mail.gmail.com/
[9] https://lore.kernel.org/linux-mm/[email protected]/

Thanks,
Ryan

Ryan Roberts (7):
mm: swap: Remove CLUSTER_FLAG_HUGE from swap_cluster_info:flags
mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()
mm: swap: Simplify struct percpu_cluster
mm: swap: Update get_swap_pages() to take folio order
mm: swap: Allow storage of all mTHP orders
mm: vmscan: Avoid split during shrink_folio_list()
mm: madvise: Avoid split during MADV_PAGEOUT and MADV_COLD

include/linux/pgtable.h | 59 ++++++++
include/linux/swap.h | 35 +++--
mm/huge_memory.c | 3 -
mm/internal.h | 75 +++++++++-
mm/madvise.c | 99 +++++++-----
mm/memory.c | 17 ++-
mm/swap_slots.c | 6 +-
mm/swapfile.c | 325 +++++++++++++++++++++++-----------------
mm/vmscan.c | 20 +--
9 files changed, 422 insertions(+), 217 deletions(-)

--
2.25.1



2024-04-08 18:40:41

by Ryan Roberts

Subject: [PATCH v7 1/7] mm: swap: Remove CLUSTER_FLAG_HUGE from swap_cluster_info:flags

As preparation for supporting multi-size THP (mTHP) in the swap-out path
without first needing to split to order-0, remove CLUSTER_FLAG_HUGE. When
present, this flag always implies PMD-sized THP, which is the same size as
the cluster.

The only use of the flag was to determine whether a swap entry refers to
a single page or a PMD-sized THP in swap_page_trans_huge_swapped().
Instead of relying on the flag, we now pass in order, which originates
from the folio's order. This allows the logic to work for folios of any
order.
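
To make the arithmetic concrete, here is a standalone user-space sketch (not
kernel code; round_down_pow2() is a stand-in for the kernel's round_down()) of
how the order determines which swap map entries get inspected:

#include <stdio.h>

/*
 * Illustration only: the swap offset is rounded down to a natural 1 << order
 * boundary and the following 1 << order entries are checked.
 */
static unsigned long round_down_pow2(unsigned long x, unsigned long align)
{
	return x & ~(align - 1);
}

int main(void)
{
	unsigned long roffset = 74565;	/* arbitrary swap offset */
	int order;

	for (order = 0; order <= 9; order += 3) {
		unsigned long nr = 1UL << order;
		unsigned long start = round_down_pow2(roffset, nr);

		printf("order %d: check entries [%lu, %lu)\n",
		       order, start, start + nr);
	}
	return 0;
}

For order-0 the range collapses to a single entry, so the same logic covers
folios of any order.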

The one snag is that one of the swap_page_trans_huge_swapped() call
sites does not have the folio. But it was only being called there to
shortcut a call to __try_to_reclaim_swap() in some cases.
__try_to_reclaim_swap() gets the folio and (via some other functions)
calls swap_page_trans_huge_swapped(). So I've removed the problematic
call site and believe the new logic should be functionally equivalent.

That said, removing the fast path means that we will take a reference
and trylock a large folio much more often, which we would like to avoid.
The next patch will solve this.

Removing CLUSTER_FLAG_HUGE also means we can remove split_swap_cluster()
which used to be called during folio splitting, since
split_swap_cluster()'s only job was to remove the flag.

Reviewed-by: "Huang, Ying" <[email protected]>
Acked-by: Chris Li <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Signed-off-by: Ryan Roberts <[email protected]>
---
include/linux/swap.h | 10 ----------
mm/huge_memory.c | 3 ---
mm/swapfile.c | 47 ++++++++------------------------------------
3 files changed, 8 insertions(+), 52 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index a211a0383425..f6f78198f000 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -259,7 +259,6 @@ struct swap_cluster_info {
};
#define CLUSTER_FLAG_FREE 1 /* This cluster is free */
#define CLUSTER_FLAG_NEXT_NULL 2 /* This cluster has no next cluster */
-#define CLUSTER_FLAG_HUGE 4 /* This cluster is backing a transparent huge page */

/*
* We assign a cluster to each CPU, so each CPU can allocate swap entry from
@@ -590,15 +589,6 @@ static inline int add_swap_extent(struct swap_info_struct *sis,
}
#endif /* CONFIG_SWAP */

-#ifdef CONFIG_THP_SWAP
-extern int split_swap_cluster(swp_entry_t entry);
-#else
-static inline int split_swap_cluster(swp_entry_t entry)
-{
- return 0;
-}
-#endif
-
#ifdef CONFIG_MEMCG
static inline int mem_cgroup_swappiness(struct mem_cgroup *memcg)
{
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b106baec7260..5b875f0fc923 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2892,9 +2892,6 @@ static void __split_huge_page(struct page *page, struct list_head *list,
shmem_uncharge(folio->mapping->host, nr_dropped);
remap_page(folio, nr);

- if (folio_test_swapcache(folio))
- split_swap_cluster(folio->swap);
-
/*
* set page to its compound_head when split to non order-0 pages, so
* we can skip unlocking it below, since PG_locked is transferred to
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 5e6d2304a2a4..1ded6d1dcab4 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -343,18 +343,6 @@ static inline void cluster_set_null(struct swap_cluster_info *info)
info->data = 0;
}

-static inline bool cluster_is_huge(struct swap_cluster_info *info)
-{
- if (IS_ENABLED(CONFIG_THP_SWAP))
- return info->flags & CLUSTER_FLAG_HUGE;
- return false;
-}
-
-static inline void cluster_clear_huge(struct swap_cluster_info *info)
-{
- info->flags &= ~CLUSTER_FLAG_HUGE;
-}
-
static inline struct swap_cluster_info *lock_cluster(struct swap_info_struct *si,
unsigned long offset)
{
@@ -1027,7 +1015,7 @@ static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
offset = idx * SWAPFILE_CLUSTER;
ci = lock_cluster(si, offset);
alloc_cluster(si, idx);
- cluster_set_count_flag(ci, SWAPFILE_CLUSTER, CLUSTER_FLAG_HUGE);
+ cluster_set_count(ci, SWAPFILE_CLUSTER);

memset(si->swap_map + offset, SWAP_HAS_CACHE, SWAPFILE_CLUSTER);
unlock_cluster(ci);
@@ -1365,7 +1353,6 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry)

ci = lock_cluster_or_swap_info(si, offset);
if (size == SWAPFILE_CLUSTER) {
- VM_BUG_ON(!cluster_is_huge(ci));
map = si->swap_map + offset;
for (i = 0; i < SWAPFILE_CLUSTER; i++) {
val = map[i];
@@ -1373,7 +1360,6 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry)
if (val == SWAP_HAS_CACHE)
free_entries++;
}
- cluster_clear_huge(ci);
if (free_entries == SWAPFILE_CLUSTER) {
unlock_cluster_or_swap_info(si, ci);
spin_lock(&si->lock);
@@ -1395,23 +1381,6 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry)
unlock_cluster_or_swap_info(si, ci);
}

-#ifdef CONFIG_THP_SWAP
-int split_swap_cluster(swp_entry_t entry)
-{
- struct swap_info_struct *si;
- struct swap_cluster_info *ci;
- unsigned long offset = swp_offset(entry);
-
- si = _swap_info_get(entry);
- if (!si)
- return -EBUSY;
- ci = lock_cluster(si, offset);
- cluster_clear_huge(ci);
- unlock_cluster(ci);
- return 0;
-}
-#endif
-
static int swp_entry_cmp(const void *ent1, const void *ent2)
{
const swp_entry_t *e1 = ent1, *e2 = ent2;
@@ -1519,22 +1488,23 @@ int swp_swapcount(swp_entry_t entry)
}

static bool swap_page_trans_huge_swapped(struct swap_info_struct *si,
- swp_entry_t entry)
+ swp_entry_t entry, int order)
{
struct swap_cluster_info *ci;
unsigned char *map = si->swap_map;
+ unsigned int nr_pages = 1 << order;
unsigned long roffset = swp_offset(entry);
- unsigned long offset = round_down(roffset, SWAPFILE_CLUSTER);
+ unsigned long offset = round_down(roffset, nr_pages);
int i;
bool ret = false;

ci = lock_cluster_or_swap_info(si, offset);
- if (!ci || !cluster_is_huge(ci)) {
+ if (!ci || nr_pages == 1) {
if (swap_count(map[roffset]))
ret = true;
goto unlock_out;
}
- for (i = 0; i < SWAPFILE_CLUSTER; i++) {
+ for (i = 0; i < nr_pages; i++) {
if (swap_count(map[offset + i])) {
ret = true;
break;
@@ -1556,7 +1526,7 @@ static bool folio_swapped(struct folio *folio)
if (!IS_ENABLED(CONFIG_THP_SWAP) || likely(!folio_test_large(folio)))
return swap_swapcount(si, entry) != 0;

- return swap_page_trans_huge_swapped(si, entry);
+ return swap_page_trans_huge_swapped(si, entry, folio_order(folio));
}

/**
@@ -1622,8 +1592,7 @@ int free_swap_and_cache(swp_entry_t entry)
}

count = __swap_entry_free(p, entry);
- if (count == SWAP_HAS_CACHE &&
- !swap_page_trans_huge_swapped(p, entry))
+ if (count == SWAP_HAS_CACHE)
__try_to_reclaim_swap(p, swp_offset(entry),
TTRS_UNMAPPED | TTRS_FULL);
put_swap_device(p);
--
2.25.1


2024-04-08 18:40:50

by Ryan Roberts

Subject: [PATCH v7 2/7] mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()

Now that we no longer have a convenient flag in the cluster to determine
if a folio is large, free_swap_and_cache() will take a reference and
lock a large folio much more often, which could lead to contention and,
for example, failure to split large folios.

Let's solve that problem by batch freeing swap and cache with a new
function, free_swap_and_cache_nr(), to free a contiguous range of swap
entries together. This allows us to first drop a reference to each swap
slot before we try to release the cache folio. This means we only try to
release the folio once, only taking the reference and lock once - much
better than the previous 512 times for the 2M THP case.

Contiguous swap entries are gathered in zap_pte_range() and
madvise_free_pte_range() in a similar way to how present ptes are
already gathered in zap_pte_range().
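
For intuition, here is a standalone user-space sketch of the gathering step
(hypothetical types and helper names; the real detection is done by
swap_pte_batch() on actual swap ptes in the hunks below):

#include <stdio.h>

/* Hypothetical stand-in for a non-present pte that holds a swap entry. */
struct fake_swp_pte {
	unsigned int type;
	unsigned long offset;
};

/*
 * Count how many consecutive entries, starting at e[0], share the swap type
 * and have strictly consecutive offsets. Illustration only.
 */
static int batch_len(const struct fake_swp_pte *e, int max_nr)
{
	int nr = 1;

	while (nr < max_nr &&
	       e[nr].type == e[0].type &&
	       e[nr].offset == e[0].offset + nr)
		nr++;
	return nr;
}

int main(void)
{
	struct fake_swp_pte ptes[] = {
		{ 0, 100 }, { 0, 101 }, { 0, 102 }, { 0, 200 },
	};

	printf("batch length: %d\n", batch_len(ptes, 4));	/* prints 3 */
	return 0;
}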

While we are at it, let's simplify by converting the return type of both
functions to void. The return value was used only by zap_pte_range() to
print a bad pte, and was ignored by everyone else, so the extra
reporting wasn't exactly guaranteed. We will still get the warning with
most of the information from get_swap_device(). With the batch version,
we wouldn't know which pte was bad anyway, so we could print the wrong one.

Signed-off-by: Ryan Roberts <[email protected]>
---
include/linux/pgtable.h | 29 ++++++++++++
include/linux/swap.h | 12 +++--
mm/internal.h | 63 ++++++++++++++++++++++++++
mm/madvise.c | 12 +++--
mm/memory.c | 13 +++---
mm/swapfile.c | 97 +++++++++++++++++++++++++++++++++--------
6 files changed, 195 insertions(+), 31 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index a3fc8150b047..75096025fe52 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -708,6 +708,35 @@ static inline void pte_clear_not_present_full(struct mm_struct *mm,
}
#endif

+#ifndef clear_not_present_full_ptes
+/**
+ * clear_not_present_full_ptes - Clear multiple not present PTEs which are
+ * consecutive in the pgtable.
+ * @mm: Address space the ptes represent.
+ * @addr: Address of the first pte.
+ * @ptep: Page table pointer for the first entry.
+ * @nr: Number of entries to clear.
+ * @full: Whether we are clearing a full mm.
+ *
+ * May be overridden by the architecture; otherwise, implemented as a simple
+ * loop over pte_clear_not_present_full().
+ *
+ * Context: The caller holds the page table lock. The PTEs are all not present.
+ * The PTEs are all in the same PMD.
+ */
+static inline void clear_not_present_full_ptes(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep, unsigned int nr, int full)
+{
+ for (;;) {
+ pte_clear_not_present_full(mm, addr, ptep, full);
+ if (--nr == 0)
+ break;
+ ptep++;
+ addr += PAGE_SIZE;
+ }
+}
+#endif
+
#ifndef __HAVE_ARCH_PTEP_CLEAR_FLUSH
extern pte_t ptep_clear_flush(struct vm_area_struct *vma,
unsigned long address,
diff --git a/include/linux/swap.h b/include/linux/swap.h
index f6f78198f000..5737236dc3ce 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -471,7 +471,7 @@ extern int swap_duplicate(swp_entry_t);
extern int swapcache_prepare(swp_entry_t);
extern void swap_free(swp_entry_t);
extern void swapcache_free_entries(swp_entry_t *entries, int n);
-extern int free_swap_and_cache(swp_entry_t);
+extern void free_swap_and_cache_nr(swp_entry_t entry, int nr);
int swap_type_of(dev_t device, sector_t offset);
int find_first_swap(dev_t *device);
extern unsigned int count_swap_pages(int, int);
@@ -520,8 +520,9 @@ static inline void put_swap_device(struct swap_info_struct *si)
#define free_pages_and_swap_cache(pages, nr) \
release_pages((pages), (nr));

-/* used to sanity check ptes in zap_pte_range when CONFIG_SWAP=0 */
-#define free_swap_and_cache(e) is_pfn_swap_entry(e)
+static inline void free_swap_and_cache_nr(swp_entry_t entry, int nr)
+{
+}

static inline void free_swap_cache(struct folio *folio)
{
@@ -589,6 +590,11 @@ static inline int add_swap_extent(struct swap_info_struct *sis,
}
#endif /* CONFIG_SWAP */

+static inline void free_swap_and_cache(swp_entry_t entry)
+{
+ free_swap_and_cache_nr(entry, 1);
+}
+
#ifdef CONFIG_MEMCG
static inline int mem_cgroup_swappiness(struct mem_cgroup *memcg)
{
diff --git a/mm/internal.h b/mm/internal.h
index 3bdc8693b54f..de68705624b0 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -11,6 +11,8 @@
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/rmap.h>
+#include <linux/swap.h>
+#include <linux/swapops.h>
#include <linux/tracepoint-defs.h>

struct folio_batch;
@@ -189,6 +191,67 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,

return min(ptep - start_ptep, max_nr);
}
+
+/**
+ * pte_next_swp_offset - Increment the swap entry offset field of a swap pte.
+ * @pte: The initial pte state; is_swap_pte(pte) must be true.
+ *
+ * Increments the swap offset, while maintaining all other fields, including
+ * swap type, and any swp pte bits. The resulting pte is returned.
+ */
+static inline pte_t pte_next_swp_offset(pte_t pte)
+{
+ swp_entry_t entry = pte_to_swp_entry(pte);
+ pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
+ swp_offset(entry) + 1));
+
+ if (pte_swp_soft_dirty(pte))
+ new = pte_swp_mksoft_dirty(new);
+ if (pte_swp_exclusive(pte))
+ new = pte_swp_mkexclusive(new);
+ if (pte_swp_uffd_wp(pte))
+ new = pte_swp_mkuffd_wp(new);
+
+ return new;
+}
+
+/**
+ * swap_pte_batch - detect a PTE batch for a set of contiguous swap entries
+ * @start_ptep: Page table pointer for the first entry.
+ * @max_nr: The maximum number of table entries to consider.
+ * @pte: Page table entry for the first entry.
+ *
+ * Detect a batch of contiguous swap entries: consecutive (non-present) PTEs
+ * containing swap entries all with consecutive offsets and targeting the same
+ * swap type, all with matching swp pte bits.
+ *
+ * max_nr must be at least one and must be limited by the caller so scanning
+ * cannot exceed a single page table.
+ *
+ * Return: the number of table entries in the batch.
+ */
+static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte)
+{
+ pte_t expected_pte = pte_next_swp_offset(pte);
+ const pte_t *end_ptep = start_ptep + max_nr;
+ pte_t *ptep = start_ptep + 1;
+
+ VM_WARN_ON(max_nr < 1);
+ VM_WARN_ON(!is_swap_pte(pte));
+ VM_WARN_ON(non_swap_entry(pte_to_swp_entry(pte)));
+
+ while (ptep < end_ptep) {
+ pte = ptep_get(ptep);
+
+ if (!pte_same(pte, expected_pte))
+ break;
+
+ expected_pte = pte_next_swp_offset(expected_pte);
+ ptep++;
+ }
+
+ return ptep - start_ptep;
+}
#endif /* CONFIG_MMU */

void __acct_reclaim_writeback(pg_data_t *pgdat, struct folio *folio,
diff --git a/mm/madvise.c b/mm/madvise.c
index 1f77a51baaac..5011ecb24344 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -628,6 +628,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
struct folio *folio;
int nr_swap = 0;
unsigned long next;
+ int nr, max_nr;

next = pmd_addr_end(addr, end);
if (pmd_trans_huge(*pmd))
@@ -640,7 +641,8 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
return 0;
flush_tlb_batched_pending(mm);
arch_enter_lazy_mmu_mode();
- for (; addr != end; pte++, addr += PAGE_SIZE) {
+ for (; addr != end; pte += nr, addr += PAGE_SIZE * nr) {
+ nr = 1;
ptent = ptep_get(pte);

if (pte_none(ptent))
@@ -655,9 +657,11 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,

entry = pte_to_swp_entry(ptent);
if (!non_swap_entry(entry)) {
- nr_swap--;
- free_swap_and_cache(entry);
- pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
+ max_nr = (end - addr) / PAGE_SIZE;
+ nr = swap_pte_batch(pte, max_nr, ptent);
+ nr_swap -= nr;
+ free_swap_and_cache_nr(entry, nr);
+ clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
} else if (is_hwpoison_entry(entry) ||
is_poisoned_swp_entry(entry)) {
pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
diff --git a/mm/memory.c b/mm/memory.c
index b98e4d907a14..0db2aa066a5a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1637,12 +1637,13 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
folio_remove_rmap_pte(folio, page, vma);
folio_put(folio);
} else if (!non_swap_entry(entry)) {
- /* Genuine swap entry, hence a private anon page */
+ max_nr = (end - addr) / PAGE_SIZE;
+ nr = swap_pte_batch(pte, max_nr, ptent);
+ /* Genuine swap entries, hence private anon pages */
if (!should_zap_cows(details))
continue;
- rss[MM_SWAPENTS]--;
- if (unlikely(!free_swap_and_cache(entry)))
- print_bad_pte(vma, addr, ptent, NULL);
+ rss[MM_SWAPENTS] -= nr;
+ free_swap_and_cache_nr(entry, nr);
} else if (is_migration_entry(entry)) {
folio = pfn_swap_entry_folio(entry);
if (!should_zap_folio(details, folio))
@@ -1665,8 +1666,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
pr_alert("unrecognized swap entry 0x%lx\n", entry.val);
WARN_ON_ONCE(1);
}
- pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
- zap_install_uffd_wp_if_needed(vma, addr, pte, 1, details, ptent);
+ clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
+ zap_install_uffd_wp_if_needed(vma, addr, pte, nr, details, ptent);
} while (pte += nr, addr += PAGE_SIZE * nr, addr != end);

add_mm_rss_vec(mm, rss);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 1ded6d1dcab4..20c45757f2b2 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -130,7 +130,11 @@ static inline unsigned char swap_count(unsigned char ent)
/* Reclaim the swap entry if swap is getting full*/
#define TTRS_FULL 0x4

-/* returns 1 if swap entry is freed */
+/*
+ * returns number of pages in the folio that backs the swap entry. If positive,
+ * the folio was reclaimed. If negative, the folio was not reclaimed. If 0, no
+ * folio was associated with the swap entry.
+ */
static int __try_to_reclaim_swap(struct swap_info_struct *si,
unsigned long offset, unsigned long flags)
{
@@ -155,6 +159,7 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si,
ret = folio_free_swap(folio);
folio_unlock(folio);
}
+ ret = ret ? folio_nr_pages(folio) : -folio_nr_pages(folio);
folio_put(folio);
return ret;
}
@@ -895,7 +900,7 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
swap_was_freed = __try_to_reclaim_swap(si, offset, TTRS_ANYWAY);
spin_lock(&si->lock);
/* entry was freed successfully, try to use this again */
- if (swap_was_freed)
+ if (swap_was_freed > 0)
goto checks;
goto scan; /* check next one */
}
@@ -1572,32 +1577,88 @@ bool folio_free_swap(struct folio *folio)
return true;
}

-/*
- * Free the swap entry like above, but also try to
- * free the page cache entry if it is the last user.
+/**
+ * free_swap_and_cache_nr() - Release reference on range of swap entries and
+ * reclaim their cache if no more references remain.
+ * @entry: First entry of range.
+ * @nr: Number of entries in range.
+ *
+ * For each swap entry in the contiguous range, release a reference. If any swap
+ * entries become free, try to reclaim their underlying folios, if present. The
+ * offset range is defined by [entry.offset, entry.offset + nr).
*/
-int free_swap_and_cache(swp_entry_t entry)
+void free_swap_and_cache_nr(swp_entry_t entry, int nr)
{
- struct swap_info_struct *p;
+ const unsigned long start_offset = swp_offset(entry);
+ const unsigned long end_offset = start_offset + nr;
+ unsigned int type = swp_type(entry);
+ struct swap_info_struct *si;
+ bool any_only_cache = false;
+ unsigned long offset;
unsigned char count;

if (non_swap_entry(entry))
- return 1;
+ return;

- p = get_swap_device(entry);
- if (p) {
- if (WARN_ON(data_race(!p->swap_map[swp_offset(entry)]))) {
- put_swap_device(p);
- return 0;
+ si = get_swap_device(entry);
+ if (!si)
+ return;
+
+ if (WARN_ON(end_offset > si->max))
+ goto out;
+
+ /*
+ * First free all entries in the range.
+ */
+ for (offset = start_offset; offset < end_offset; offset++) {
+ if (data_race(si->swap_map[offset])) {
+ count = __swap_entry_free(si, swp_entry(type, offset));
+ if (count == SWAP_HAS_CACHE)
+ any_only_cache = true;
+ } else {
+ WARN_ON_ONCE(1);
}
+ }
+
+ /*
+ * Short-circuit the below loop if none of the entries had their
+ * reference drop to zero.
+ */
+ if (!any_only_cache)
+ goto out;

- count = __swap_entry_free(p, entry);
- if (count == SWAP_HAS_CACHE)
- __try_to_reclaim_swap(p, swp_offset(entry),
+ /*
+ * Now go back over the range trying to reclaim the swap cache. This is
+ * more efficient for large folios because we will only try to reclaim
+ * the swap once per folio in the common case. If we do
+ * __swap_entry_free() and __try_to_reclaim_swap() in the same loop, the
+ * latter will get a reference and lock the folio for every individual
+ * page but will only succeed once the swap slot for every subpage is
+ * zero.
+ */
+ for (offset = start_offset; offset < end_offset; offset += nr) {
+ nr = 1;
+ if (READ_ONCE(si->swap_map[offset]) == SWAP_HAS_CACHE) {
+ /*
+ * Folios are always naturally aligned in swap so
+ * advance forward to the next boundary. Zero means no
+ * folio was found for the swap entry, so advance by 1
+ * in this case. Negative value means folio was found
+ * but could not be reclaimed. Here we can still advance
+ * to the next boundary.
+ */
+ nr = __try_to_reclaim_swap(si, offset,
TTRS_UNMAPPED | TTRS_FULL);
- put_swap_device(p);
+ if (nr == 0)
+ nr = 1;
+ else if (nr < 0)
+ nr = -nr;
+ nr = ALIGN(offset + 1, nr) - offset;
+ }
}
- return p != NULL;
+
+out:
+ put_swap_device(si);
}

#ifdef CONFIG_HIBERNATION
--
2.25.1


2024-04-08 18:41:09

by Ryan Roberts

Subject: [PATCH v7 4/7] mm: swap: Update get_swap_pages() to take folio order

We are about to allow swap storage of any mTHP size. To prepare for
that, let's change get_swap_pages() to take a folio order parameter
instead of nr_pages. This makes the interface self-documenting; a
power-of-2 number of pages must be provided. We will also need the order
internally, so this simplifies accessing it.
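
As a quick worked example of what an order implies (standalone sketch, assuming
a 4K base page size):

#include <stdio.h>

/* Illustration only: an order always implies a power-of-2 page count. */
int main(void)
{
	int order;

	for (order = 0; order <= 9; order++)
		printf("order %d -> %d pages (%d KiB with 4K pages)\n",
		       order, 1 << order, (1 << order) * 4);
	return 0;
}

On such a configuration, order 0 is a single 4K page and order 9 is the 2M
PMD size.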

Reviewed-by: "Huang, Ying" <[email protected]>
Signed-off-by: Ryan Roberts <[email protected]>
---
include/linux/swap.h | 2 +-
mm/swap_slots.c | 6 +++---
mm/swapfile.c | 13 +++++++------
3 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 5e1e4f5bf0cb..b888e1080a94 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -471,7 +471,7 @@ swp_entry_t folio_alloc_swap(struct folio *folio);
bool folio_free_swap(struct folio *folio);
void put_swap_folio(struct folio *folio, swp_entry_t entry);
extern swp_entry_t get_swap_page_of_type(int);
-extern int get_swap_pages(int n, swp_entry_t swp_entries[], int entry_size);
+extern int get_swap_pages(int n, swp_entry_t swp_entries[], int order);
extern int add_swap_count_continuation(swp_entry_t, gfp_t);
extern void swap_shmem_alloc(swp_entry_t);
extern int swap_duplicate(swp_entry_t);
diff --git a/mm/swap_slots.c b/mm/swap_slots.c
index 53abeaf1371d..13ab3b771409 100644
--- a/mm/swap_slots.c
+++ b/mm/swap_slots.c
@@ -264,7 +264,7 @@ static int refill_swap_slots_cache(struct swap_slots_cache *cache)
cache->cur = 0;
if (swap_slot_cache_active)
cache->nr = get_swap_pages(SWAP_SLOTS_CACHE_SIZE,
- cache->slots, 1);
+ cache->slots, 0);

return cache->nr;
}
@@ -311,7 +311,7 @@ swp_entry_t folio_alloc_swap(struct folio *folio)

if (folio_test_large(folio)) {
if (IS_ENABLED(CONFIG_THP_SWAP))
- get_swap_pages(1, &entry, folio_nr_pages(folio));
+ get_swap_pages(1, &entry, folio_order(folio));
goto out;
}

@@ -343,7 +343,7 @@ swp_entry_t folio_alloc_swap(struct folio *folio)
goto out;
}

- get_swap_pages(1, &entry, 1);
+ get_swap_pages(1, &entry, 0);
out:
if (mem_cgroup_try_charge_swap(folio, entry)) {
put_swap_folio(folio, entry);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index e3f855475278..d2e3d3cd439f 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -278,15 +278,15 @@ static void discard_swap_cluster(struct swap_info_struct *si,
#ifdef CONFIG_THP_SWAP
#define SWAPFILE_CLUSTER HPAGE_PMD_NR

-#define swap_entry_size(size) (size)
+#define swap_entry_order(order) (order)
#else
#define SWAPFILE_CLUSTER 256

/*
- * Define swap_entry_size() as constant to let compiler to optimize
+ * Define swap_entry_order() as constant to let compiler to optimize
* out some code if !CONFIG_THP_SWAP
*/
-#define swap_entry_size(size) 1
+#define swap_entry_order(order) 0
#endif
#define LATENCY_LIMIT 256

@@ -1042,9 +1042,10 @@ static void swap_free_cluster(struct swap_info_struct *si, unsigned long idx)
swap_range_free(si, offset, SWAPFILE_CLUSTER);
}

-int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size)
+int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_order)
{
- unsigned long size = swap_entry_size(entry_size);
+ int order = swap_entry_order(entry_order);
+ unsigned long size = 1 << order;
struct swap_info_struct *si, *next;
long avail_pgs;
int n_ret = 0;
@@ -1349,7 +1350,7 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry)
unsigned char *map;
unsigned int i, free_entries = 0;
unsigned char val;
- int size = swap_entry_size(folio_nr_pages(folio));
+ int size = 1 << swap_entry_order(folio_order(folio));

si = _swap_info_get(entry);
if (!si)
--
2.25.1


2024-04-08 18:41:25

by Ryan Roberts

Subject: [PATCH v7 5/7] mm: swap: Allow storage of all mTHP orders

Multi-size THP enables performance improvements by allocating large,
pte-mapped folios for anonymous memory. However I've observed that on an
arm64 system running a parallel workload (e.g. kernel compilation)
across many cores, under high memory pressure, the speed regresses. This
is due to bottlenecking on the increased number of TLBIs added due to
all the extra folio splitting when the large folios are swapped out.

Therefore, solve this regression by adding support for swapping out mTHP
without needing to split the folio, just as is already done for
PMD-sized THP. This change only applies when CONFIG_THP_SWAP is enabled,
and when the swap backing store is a non-rotating block device. These
are the same constraints as for the existing PMD-sized THP swap-out
support.

Note that no attempt is made to swap-in (m)THP here - this is still done
page-by-page, like for PMD-sized THP. But swapping-out mTHP is a
prerequisite for swapping-in mTHP.

The main change here is to improve the swap entry allocator so that it
can allocate any power-of-2 number of contiguous entries in the range
[1, 1 << PMD_ORDER]. This is done by allocating a cluster for each distinct
order and allocating sequentially from it until the cluster is full.
This ensures that we don't need to search the map and we get no
fragmentation due to alignment padding for different orders in the
cluster. If there is no current cluster for a given order, we attempt to
allocate a free cluster from the list. If there are no free clusters, we
fail the allocation and the caller can fall back to splitting the folio
and allocating individual entries (as per the existing PMD-sized THP
fallback).
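
The in-cluster scan can be pictured with this standalone user-space sketch
(range_empty() and find_in_cluster() are hypothetical names, and the 512-entry
cluster is an assumption for 4K pages; the real logic lives in
scan_swap_map_try_ssd_cluster() below). Candidates advance in naturally aligned
steps of 1 << order, so no alignment padding is ever wasted:

#include <stdbool.h>
#include <stdio.h>

#define SWAPFILE_CLUSTER 512	/* assumption: PMD order 9 with 4K pages */

static bool range_empty(const unsigned char *map, unsigned int start,
			unsigned int nr)
{
	unsigned int i;

	for (i = 0; i < nr; i++)
		if (map[start + i])
			return false;
	return true;
}

/* First naturally aligned free block of 1 << order entries, or -1 if full. */
static int find_in_cluster(const unsigned char *map, int order)
{
	unsigned int nr = 1u << order;
	unsigned int off;

	for (off = 0; off < SWAPFILE_CLUSTER; off += nr)
		if (range_empty(map, off, nr))
			return (int)off;
	return -1;
}

int main(void)
{
	unsigned char map[SWAPFILE_CLUSTER] = { 0 };

	map[0] = 1;	/* pretend the first slot is already in use */
	printf("order-0 slot at %d, order-4 slot at %d\n",
	       find_in_cluster(map, 0), find_in_cluster(map, 4));
	return 0;
}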

The per-order current clusters are maintained per-cpu using the existing
infrastructure. This is done to avoid interleaving pages from different
tasks, which would prevent IO being batched. This is already done for
the order-0 allocations so we follow the same pattern.

As is done for order-0 per-cpu clusters, the scanner now can steal
order-0 entries from any per-cpu-per-order reserved cluster. This
ensures that when the swap file is getting full, space doesn't get tied
up in the per-cpu reserves.

This change only modifies swap to be able to accept any order mTHP. It
doesn't change the callers to elide doing the actual split. That will be
done in separate changes.

Reviewed-by: "Huang, Ying" <[email protected]>
Signed-off-by: Ryan Roberts <[email protected]>
---
include/linux/swap.h | 8 ++-
mm/swapfile.c | 162 ++++++++++++++++++++++++-------------------
2 files changed, 98 insertions(+), 72 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index b888e1080a94..11c53692f65f 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -268,13 +268,19 @@ struct swap_cluster_info {
*/
#define SWAP_NEXT_INVALID 0

+#ifdef CONFIG_THP_SWAP
+#define SWAP_NR_ORDERS (PMD_ORDER + 1)
+#else
+#define SWAP_NR_ORDERS 1
+#endif
+
/*
* We assign a cluster to each CPU, so each CPU can allocate swap entry from
* its own cluster and swapout sequentially. The purpose is to optimize swapout
* throughput.
*/
struct percpu_cluster {
- unsigned int next; /* Likely next allocation offset */
+ unsigned int next[SWAP_NR_ORDERS]; /* Likely next allocation offset */
};

struct swap_cluster_list {
diff --git a/mm/swapfile.c b/mm/swapfile.c
index d2e3d3cd439f..148ef08f19dd 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -551,10 +551,12 @@ static void free_cluster(struct swap_info_struct *si, unsigned long idx)

/*
* The cluster corresponding to page_nr will be used. The cluster will be
- * removed from free cluster list and its usage counter will be increased.
+ * removed from free cluster list and its usage counter will be increased by
+ * count.
*/
-static void inc_cluster_info_page(struct swap_info_struct *p,
- struct swap_cluster_info *cluster_info, unsigned long page_nr)
+static void add_cluster_info_page(struct swap_info_struct *p,
+ struct swap_cluster_info *cluster_info, unsigned long page_nr,
+ unsigned long count)
{
unsigned long idx = page_nr / SWAPFILE_CLUSTER;

@@ -563,9 +565,19 @@ static void inc_cluster_info_page(struct swap_info_struct *p,
if (cluster_is_free(&cluster_info[idx]))
alloc_cluster(p, idx);

- VM_BUG_ON(cluster_count(&cluster_info[idx]) >= SWAPFILE_CLUSTER);
+ VM_BUG_ON(cluster_count(&cluster_info[idx]) + count > SWAPFILE_CLUSTER);
cluster_set_count(&cluster_info[idx],
- cluster_count(&cluster_info[idx]) + 1);
+ cluster_count(&cluster_info[idx]) + count);
+}
+
+/*
+ * The cluster corresponding to page_nr will be used. The cluster will be
+ * removed from free cluster list and its usage counter will be increased by 1.
+ */
+static void inc_cluster_info_page(struct swap_info_struct *p,
+ struct swap_cluster_info *cluster_info, unsigned long page_nr)
+{
+ add_cluster_info_page(p, cluster_info, page_nr, 1);
}

/*
@@ -595,7 +607,7 @@ static void dec_cluster_info_page(struct swap_info_struct *p,
*/
static bool
scan_swap_map_ssd_cluster_conflict(struct swap_info_struct *si,
- unsigned long offset)
+ unsigned long offset, int order)
{
struct percpu_cluster *percpu_cluster;
bool conflict;
@@ -609,24 +621,39 @@ scan_swap_map_ssd_cluster_conflict(struct swap_info_struct *si,
return false;

percpu_cluster = this_cpu_ptr(si->percpu_cluster);
- percpu_cluster->next = SWAP_NEXT_INVALID;
+ percpu_cluster->next[order] = SWAP_NEXT_INVALID;
+ return true;
+}
+
+static inline bool swap_range_empty(char *swap_map, unsigned int start,
+ unsigned int nr_pages)
+{
+ unsigned int i;
+
+ for (i = 0; i < nr_pages; i++) {
+ if (swap_map[start + i])
+ return false;
+ }
+
return true;
}

/*
- * Try to get a swap entry from current cpu's swap entry pool (a cluster). This
- * might involve allocating a new cluster for current CPU too.
+ * Try to get swap entries with specified order from current cpu's swap entry
+ * pool (a cluster). This might involve allocating a new cluster for current CPU
+ * too.
*/
static bool scan_swap_map_try_ssd_cluster(struct swap_info_struct *si,
- unsigned long *offset, unsigned long *scan_base)
+ unsigned long *offset, unsigned long *scan_base, int order)
{
+ unsigned int nr_pages = 1 << order;
struct percpu_cluster *cluster;
struct swap_cluster_info *ci;
unsigned int tmp, max;

new_cluster:
cluster = this_cpu_ptr(si->percpu_cluster);
- tmp = cluster->next;
+ tmp = cluster->next[order];
if (tmp == SWAP_NEXT_INVALID) {
if (!cluster_list_empty(&si->free_clusters)) {
tmp = cluster_next(&si->free_clusters.head) *
@@ -647,26 +674,27 @@ static bool scan_swap_map_try_ssd_cluster(struct swap_info_struct *si,

/*
* Other CPUs can use our cluster if they can't find a free cluster,
- * check if there is still free entry in the cluster
+ * check if there is still free entry in the cluster, maintaining
+ * natural alignment.
*/
max = min_t(unsigned long, si->max, ALIGN(tmp + 1, SWAPFILE_CLUSTER));
if (tmp < max) {
ci = lock_cluster(si, tmp);
while (tmp < max) {
- if (!si->swap_map[tmp])
+ if (swap_range_empty(si->swap_map, tmp, nr_pages))
break;
- tmp++;
+ tmp += nr_pages;
}
unlock_cluster(ci);
}
if (tmp >= max) {
- cluster->next = SWAP_NEXT_INVALID;
+ cluster->next[order] = SWAP_NEXT_INVALID;
goto new_cluster;
}
*offset = tmp;
*scan_base = tmp;
- tmp += 1;
- cluster->next = tmp < max ? tmp : SWAP_NEXT_INVALID;
+ tmp += nr_pages;
+ cluster->next[order] = tmp < max ? tmp : SWAP_NEXT_INVALID;
return true;
}

@@ -796,13 +824,14 @@ static bool swap_offset_available_and_locked(struct swap_info_struct *si,

static int scan_swap_map_slots(struct swap_info_struct *si,
unsigned char usage, int nr,
- swp_entry_t slots[])
+ swp_entry_t slots[], int order)
{
struct swap_cluster_info *ci;
unsigned long offset;
unsigned long scan_base;
unsigned long last_in_cluster = 0;
int latency_ration = LATENCY_LIMIT;
+ unsigned int nr_pages = 1 << order;
int n_ret = 0;
bool scanned_many = false;

@@ -817,6 +846,25 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
* And we let swap pages go all over an SSD partition. Hugh
*/

+ if (order > 0) {
+ /*
+ * Should not even be attempting large allocations when huge
+ * page swap is disabled. Warn and fail the allocation.
+ */
+ if (!IS_ENABLED(CONFIG_THP_SWAP) ||
+ nr_pages > SWAPFILE_CLUSTER) {
+ VM_WARN_ON_ONCE(1);
+ return 0;
+ }
+
+ /*
+ * Swapfile is not block device or not using clusters so unable
+ * to allocate large entries.
+ */
+ if (!(si->flags & SWP_BLKDEV) || !si->cluster_info)
+ return 0;
+ }
+
si->flags += SWP_SCANNING;
/*
* Use percpu scan base for SSD to reduce lock contention on
@@ -831,8 +879,11 @@ static int scan_swap_map_slots(struct swap_info_struct *si,

/* SSD algorithm */
if (si->cluster_info) {
- if (!scan_swap_map_try_ssd_cluster(si, &offset, &scan_base))
+ if (!scan_swap_map_try_ssd_cluster(si, &offset, &scan_base, order)) {
+ if (order > 0)
+ goto no_page;
goto scan;
+ }
} else if (unlikely(!si->cluster_nr--)) {
if (si->pages - si->inuse_pages < SWAPFILE_CLUSTER) {
si->cluster_nr = SWAPFILE_CLUSTER - 1;
@@ -874,13 +925,16 @@ static int scan_swap_map_slots(struct swap_info_struct *si,

checks:
if (si->cluster_info) {
- while (scan_swap_map_ssd_cluster_conflict(si, offset)) {
+ while (scan_swap_map_ssd_cluster_conflict(si, offset, order)) {
/* take a break if we already got some slots */
if (n_ret)
goto done;
if (!scan_swap_map_try_ssd_cluster(si, &offset,
- &scan_base))
+ &scan_base, order)) {
+ if (order > 0)
+ goto no_page;
goto scan;
+ }
}
}
if (!(si->flags & SWP_WRITEOK))
@@ -911,11 +965,11 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
else
goto done;
}
- WRITE_ONCE(si->swap_map[offset], usage);
- inc_cluster_info_page(si, si->cluster_info, offset);
+ memset(si->swap_map + offset, usage, nr_pages);
+ add_cluster_info_page(si, si->cluster_info, offset, nr_pages);
unlock_cluster(ci);

- swap_range_alloc(si, offset, 1);
+ swap_range_alloc(si, offset, nr_pages);
slots[n_ret++] = swp_entry(si->type, offset);

/* got enough slots or reach max slots? */
@@ -936,8 +990,10 @@ static int scan_swap_map_slots(struct swap_info_struct *si,

/* try to get more slots in cluster */
if (si->cluster_info) {
- if (scan_swap_map_try_ssd_cluster(si, &offset, &scan_base))
+ if (scan_swap_map_try_ssd_cluster(si, &offset, &scan_base, order))
goto checks;
+ if (order > 0)
+ goto done;
} else if (si->cluster_nr && !si->swap_map[++offset]) {
/* non-ssd case, still more slots in cluster? */
--si->cluster_nr;
@@ -964,11 +1020,13 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
}

done:
- set_cluster_next(si, offset + 1);
+ if (order == 0)
+ set_cluster_next(si, offset + 1);
si->flags -= SWP_SCANNING;
return n_ret;

scan:
+ VM_WARN_ON(order > 0);
spin_unlock(&si->lock);
while (++offset <= READ_ONCE(si->highest_bit)) {
if (unlikely(--latency_ration < 0)) {
@@ -997,38 +1055,6 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
return n_ret;
}

-static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
-{
- unsigned long idx;
- struct swap_cluster_info *ci;
- unsigned long offset;
-
- /*
- * Should not even be attempting cluster allocations when huge
- * page swap is disabled. Warn and fail the allocation.
- */
- if (!IS_ENABLED(CONFIG_THP_SWAP)) {
- VM_WARN_ON_ONCE(1);
- return 0;
- }
-
- if (cluster_list_empty(&si->free_clusters))
- return 0;
-
- idx = cluster_list_first(&si->free_clusters);
- offset = idx * SWAPFILE_CLUSTER;
- ci = lock_cluster(si, offset);
- alloc_cluster(si, idx);
- cluster_set_count(ci, SWAPFILE_CLUSTER);
-
- memset(si->swap_map + offset, SWAP_HAS_CACHE, SWAPFILE_CLUSTER);
- unlock_cluster(ci);
- swap_range_alloc(si, offset, SWAPFILE_CLUSTER);
- *slot = swp_entry(si->type, offset);
-
- return 1;
-}
-
static void swap_free_cluster(struct swap_info_struct *si, unsigned long idx)
{
unsigned long offset = idx * SWAPFILE_CLUSTER;
@@ -1051,9 +1077,6 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_order)
int n_ret = 0;
int node;

- /* Only single cluster request supported */
- WARN_ON_ONCE(n_goal > 1 && size == SWAPFILE_CLUSTER);
-
spin_lock(&swap_avail_lock);

avail_pgs = atomic_long_read(&nr_swap_pages) / size;
@@ -1089,14 +1112,10 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_order)
spin_unlock(&si->lock);
goto nextsi;
}
- if (size == SWAPFILE_CLUSTER) {
- if (si->flags & SWP_BLKDEV)
- n_ret = swap_alloc_cluster(si, swp_entries);
- } else
- n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
- n_goal, swp_entries);
+ n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
+ n_goal, swp_entries, order);
spin_unlock(&si->lock);
- if (n_ret || size == SWAPFILE_CLUSTER)
+ if (n_ret || size > 1)
goto check_out;
cond_resched();

@@ -1673,7 +1692,7 @@ swp_entry_t get_swap_page_of_type(int type)

/* This is called for allocating swap entry, not cache */
spin_lock(&si->lock);
- if ((si->flags & SWP_WRITEOK) && scan_swap_map_slots(si, 1, 1, &entry))
+ if ((si->flags & SWP_WRITEOK) && scan_swap_map_slots(si, 1, 1, &entry, 0))
atomic_long_dec(&nr_swap_pages);
spin_unlock(&si->lock);
fail:
@@ -3127,7 +3146,7 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
p->flags |= SWP_SYNCHRONOUS_IO;

if (p->bdev && bdev_nonrot(p->bdev)) {
- int cpu;
+ int cpu, i;
unsigned long ci, nr_cluster;

p->flags |= SWP_SOLIDSTATE;
@@ -3165,7 +3184,8 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
struct percpu_cluster *cluster;

cluster = per_cpu_ptr(p->percpu_cluster, cpu);
- cluster->next = SWAP_NEXT_INVALID;
+ for (i = 0; i < SWAP_NR_ORDERS; i++)
+ cluster->next[i] = SWAP_NEXT_INVALID;
}
} else {
atomic_inc(&nr_rotate_swap);
--
2.25.1


2024-04-08 18:41:33

by Ryan Roberts

Subject: [PATCH v7 6/7] mm: vmscan: Avoid split during shrink_folio_list()

Now that swap supports storing all mTHP sizes, avoid splitting large
folios before swap-out. This benefits performance of the swap-out path
by eliding split_folio_to_list(), which is expensive, and also sets us
up for swapping in large folios in a future series.

If the folio is partially mapped, we continue to split it since we want
to avoid the extra IO overhead and storage of writing out pages
unnecessarily.

THP_SWPOUT and THP_SWPOUT_FALLBACK counters should continue to count
events only for PMD-mappable folios to avoid user confusion. THP_SWPOUT
already has the appropriate guard. Add a guard for THP_SWPOUT_FALLBACK.
It may be appropriate to add per-size counters in future.

Reviewed-by: David Hildenbrand <[email protected]>
Reviewed-by: Barry Song <[email protected]>
Signed-off-by: Ryan Roberts <[email protected]>
---
mm/vmscan.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 00adaf1cb2c3..bca2d9981c95 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1223,25 +1223,25 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
if (!can_split_folio(folio, NULL))
goto activate_locked;
/*
- * Split folios without a PMD map right
- * away. Chances are some or all of the
- * tail pages can be freed without IO.
+ * Split partially mapped folios right away.
+ * We can free the unmapped pages without IO.
*/
- if (!folio_entire_mapcount(folio) &&
- split_folio_to_list(folio,
- folio_list))
+ if (data_race(!list_empty(&folio->_deferred_list)) &&
+ split_folio_to_list(folio, folio_list))
goto activate_locked;
}
if (!add_to_swap(folio)) {
if (!folio_test_large(folio))
goto activate_locked_split;
/* Fallback to swap normal pages */
- if (split_folio_to_list(folio,
- folio_list))
+ if (split_folio_to_list(folio, folio_list))
goto activate_locked;
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
- count_memcg_folio_events(folio, THP_SWPOUT_FALLBACK, 1);
- count_vm_event(THP_SWPOUT_FALLBACK);
+ if (nr_pages >= HPAGE_PMD_NR) {
+ count_memcg_folio_events(folio,
+ THP_SWPOUT_FALLBACK, 1);
+ count_vm_event(THP_SWPOUT_FALLBACK);
+ }
#endif
if (!add_to_swap(folio))
goto activate_locked_split;
--
2.25.1


2024-04-08 18:41:55

by Ryan Roberts

Subject: [PATCH v7 7/7] mm: madvise: Avoid split during MADV_PAGEOUT and MADV_COLD

Rework madvise_cold_or_pageout_pte_range() to avoid splitting any large
folio that is fully and contiguously mapped in the pageout/cold vm
range. This change means that large folios will be maintained all the
way to swap storage. This both improves performance during swap-out, by
eliding the cost of splitting the folio, and sets us up nicely for
maintaining the large folio when it is swapped back in (to be covered in
a separate series).

Folios that are not fully mapped in the target range are still split,
but note that behavior is changed so that if the split fails for any
reason (folio locked, shared, etc) we now leave it as is and move to the
next pte in the range and continue work on the subsequent folios.
Previously any failure of this sort would cause the entire operation to
give up and no folios mapped at higher addresses were paged out or made
cold. Given large folios are becoming more common, this old behavior
would likely have led to wasted opportunities.

While we are at it, change the code that clears young from the ptes to
use ptep_test_and_clear_young(), via the new mkold_ptes() batch helper
function. This is more efficient than get_and_clear/modify/set,
especially for contpte mappings on arm64, where the old approach would
require unfolding/refolding and the new approach can be done in place.

Reviewed-by: Barry Song <[email protected]>
Signed-off-by: Ryan Roberts <[email protected]>
---
include/linux/pgtable.h | 30 ++++++++++++++
mm/internal.h | 12 +++++-
mm/madvise.c | 87 +++++++++++++++++++++++------------------
mm/memory.c | 4 +-
4 files changed, 92 insertions(+), 41 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 75096025fe52..e2f45e22a6d1 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -361,6 +361,36 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
}
#endif

+#ifndef mkold_ptes
+/**
+ * mkold_ptes - Mark PTEs that map consecutive pages of the same folio as old.
+ * @vma: VMA the pages are mapped into.
+ * @addr: Address the first page is mapped at.
+ * @ptep: Page table pointer for the first entry.
+ * @nr: Number of entries to mark old.
+ *
+ * May be overridden by the architecture; otherwise, implemented as a simple
+ * loop over ptep_test_and_clear_young().
+ *
+ * Note that PTE bits in the PTE range besides the PFN can differ. For example,
+ * some PTEs might be write-protected.
+ *
+ * Context: The caller holds the page table lock. The PTEs map consecutive
+ * pages that belong to the same folio. The PTEs are all in the same PMD.
+ */
+static inline void mkold_ptes(struct vm_area_struct *vma, unsigned long addr,
+ pte_t *ptep, unsigned int nr)
+{
+ for (;;) {
+ ptep_test_and_clear_young(vma, addr, ptep);
+ if (--nr == 0)
+ break;
+ ptep++;
+ addr += PAGE_SIZE;
+ }
+}
+#endif
+
#ifndef __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
diff --git a/mm/internal.h b/mm/internal.h
index de68705624b0..9d3250b4a08a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -130,6 +130,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
* @flags: Flags to modify the PTE batch semantics.
* @any_writable: Optional pointer to indicate whether any entry except the
* first one is writable.
+ * @any_young: Optional pointer to indicate whether any entry except the
+ * first one is young.
*
* Detect a PTE batch: consecutive (present) PTEs that map consecutive
* pages of the same large folio.
@@ -145,16 +147,18 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
*/
static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
- bool *any_writable)
+ bool *any_writable, bool *any_young)
{
unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
const pte_t *end_ptep = start_ptep + max_nr;
pte_t expected_pte, *ptep;
- bool writable;
+ bool writable, young;
int nr;

if (any_writable)
*any_writable = false;
+ if (any_young)
+ *any_young = false;

VM_WARN_ON_FOLIO(!pte_present(pte), folio);
VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
@@ -168,6 +172,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
pte = ptep_get(ptep);
if (any_writable)
writable = !!pte_write(pte);
+ if (any_young)
+ young = !!pte_young(pte);
pte = __pte_batch_clear_ignored(pte, flags);

if (!pte_same(pte, expected_pte))
@@ -183,6 +189,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,

if (any_writable)
*any_writable |= writable;
+ if (any_young)
+ *any_young |= young;

nr = pte_batch_hint(ptep, pte);
expected_pte = pte_advance_pfn(expected_pte, nr);
diff --git a/mm/madvise.c b/mm/madvise.c
index 5011ecb24344..f59169888b8e 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -336,6 +336,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
LIST_HEAD(folio_list);
bool pageout_anon_only_filter;
unsigned int batch_count = 0;
+ int nr;

if (fatal_signal_pending(current))
return -EINTR;
@@ -423,7 +424,8 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
return 0;
flush_tlb_batched_pending(mm);
arch_enter_lazy_mmu_mode();
- for (; addr < end; pte++, addr += PAGE_SIZE) {
+ for (; addr < end; pte += nr, addr += nr * PAGE_SIZE) {
+ nr = 1;
ptent = ptep_get(pte);

if (++batch_count == SWAP_CLUSTER_MAX) {
@@ -447,55 +449,66 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
continue;

/*
- * Creating a THP page is expensive so split it only if we
- * are sure it's worth. Split it if we are only owner.
+ * If we encounter a large folio, only split it if it is not
+ * fully mapped within the range we are operating on. Otherwise
+ * leave it as is so that it can be swapped out whole. If we
+ * fail to split a folio, leave it in place and advance to the
+ * next pte in the range.
*/
if (folio_test_large(folio)) {
- int err;
-
- if (folio_likely_mapped_shared(folio))
- break;
- if (pageout_anon_only_filter && !folio_test_anon(folio))
- break;
- if (!folio_trylock(folio))
- break;
- folio_get(folio);
- arch_leave_lazy_mmu_mode();
- pte_unmap_unlock(start_pte, ptl);
- start_pte = NULL;
- err = split_folio(folio);
- folio_unlock(folio);
- folio_put(folio);
- if (err)
- break;
- start_pte = pte =
- pte_offset_map_lock(mm, pmd, addr, &ptl);
- if (!start_pte)
- break;
- arch_enter_lazy_mmu_mode();
- pte--;
- addr -= PAGE_SIZE;
- continue;
+ const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
+ FPB_IGNORE_SOFT_DIRTY;
+ int max_nr = (end - addr) / PAGE_SIZE;
+ bool any_young;
+
+ nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
+ fpb_flags, NULL, &any_young);
+ if (any_young)
+ ptent = pte_mkyoung(ptent);
+
+ if (nr < folio_nr_pages(folio)) {
+ int err;
+
+ if (folio_likely_mapped_shared(folio))
+ continue;
+ if (pageout_anon_only_filter && !folio_test_anon(folio))
+ continue;
+ if (!folio_trylock(folio))
+ continue;
+ folio_get(folio);
+ arch_leave_lazy_mmu_mode();
+ pte_unmap_unlock(start_pte, ptl);
+ start_pte = NULL;
+ err = split_folio(folio);
+ folio_unlock(folio);
+ folio_put(folio);
+ start_pte = pte =
+ pte_offset_map_lock(mm, pmd, addr, &ptl);
+ if (!start_pte)
+ break;
+ arch_enter_lazy_mmu_mode();
+ if (!err)
+ nr = 0;
+ continue;
+ }
}

/*
* Do not interfere with other mappings of this folio and
- * non-LRU folio.
+ * non-LRU folio. If we have a large folio at this point, we
+ * know it is fully mapped so if its mapcount is the same as its
+ * number of pages, it must be exclusive.
*/
- if (!folio_test_lru(folio) || folio_mapcount(folio) != 1)
+ if (!folio_test_lru(folio) ||
+ folio_mapcount(folio) != folio_nr_pages(folio))
continue;

if (pageout_anon_only_filter && !folio_test_anon(folio))
continue;

- VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
-
if (!pageout && pte_young(ptent)) {
- ptent = ptep_get_and_clear_full(mm, addr, pte,
- tlb->fullmm);
- ptent = pte_mkold(ptent);
- set_pte_at(mm, addr, pte, ptent);
- tlb_remove_tlb_entry(tlb, pte, addr);
+ mkold_ptes(vma, addr, pte, nr);
+ tlb_remove_tlb_entries(tlb, pte, nr, addr);
}

/*
diff --git a/mm/memory.c b/mm/memory.c
index 0db2aa066a5a..78422d1c7381 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -989,7 +989,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
flags |= FPB_IGNORE_SOFT_DIRTY;

nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags,
- &any_writable);
+ &any_writable, NULL);
folio_ref_add(folio, nr);
if (folio_test_anon(folio)) {
if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
@@ -1559,7 +1559,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
*/
if (unlikely(folio_test_large(folio) && max_nr != 1)) {
nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, fpb_flags,
- NULL);
+ NULL, NULL);

zap_present_folio_ptes(tlb, vma, folio, page, pte, ptent, nr,
addr, details, rss, force_flush,
--
2.25.1


2024-04-08 19:04:35

by Ryan Roberts

Subject: [PATCH v7 3/7] mm: swap: Simplify struct percpu_cluster

struct percpu_cluster stores the index of the cpu's current cluster and the
offset of the next entry that will be allocated for that cpu. These two
pieces of information are redundant because the cluster index is just
(offset / SWAPFILE_CLUSTER). The only reason for explicitly keeping the
cluster index is that the structure used for it also has a flag to
indicate "no cluster". However, this data structure also contains a spin
lock, which is never used in this context; as a side effect the code
copies the spinlock_t structure, which is questionable coding practice
in my view.

So let's clean this up and store only the next offset, and use a
sentinel value (SWAP_NEXT_INVALID) to indicate "no cluster".
SWAP_NEXT_INVALID is chosen to be 0, because 0 will never be seen
legitimately; the first page in the swap file is the swap header, which
is always marked bad to prevent it from being allocated as an entry.
This also prevents the cluster to which it belongs from being marked
free, so it will never appear on the free list.

This change saves 16 bytes per cpu. And given we are shortly going to
extend this mechanism to be per-cpu-AND-per-order, we will end up saving
16 * 9 = 144 bytes per cpu, which adds up if you have 256 cpus in the
system.
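
To make the redundancy concrete, here is a minimal sketch (not part of the
patch; percpu_cluster_valid() and percpu_cluster_index() are hypothetical
helper names used only for illustration) of how the cluster index can be
recovered from the stored offset and how the sentinel is checked:

static inline bool percpu_cluster_valid(struct percpu_cluster *cluster)
{
	/* 0 (SWAP_NEXT_INVALID) can never be a legitimate next offset. */
	return cluster->next != SWAP_NEXT_INVALID;
}

static inline unsigned int percpu_cluster_index(struct percpu_cluster *cluster)
{
	/* Only meaningful when percpu_cluster_valid() returns true. */
	return cluster->next / SWAPFILE_CLUSTER;
}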

Reviewed-by: "Huang, Ying" <[email protected]>
Signed-off-by: Ryan Roberts <[email protected]>
---
include/linux/swap.h | 9 ++++++++-
mm/swapfile.c | 22 +++++++++++-----------
2 files changed, 19 insertions(+), 12 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 5737236dc3ce..5e1e4f5bf0cb 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -260,13 +260,20 @@ struct swap_cluster_info {
#define CLUSTER_FLAG_FREE 1 /* This cluster is free */
#define CLUSTER_FLAG_NEXT_NULL 2 /* This cluster has no next cluster */

+/*
+ * The first page in the swap file is the swap header, which is always marked
+ * bad to prevent it from being allocated as an entry. This also prevents the
+ * cluster to which it belongs being marked free. Therefore 0 is safe to use as
+ * a sentinel to indicate next is not valid in percpu_cluster.
+ */
+#define SWAP_NEXT_INVALID 0
+
/*
* We assign a cluster to each CPU, so each CPU can allocate swap entry from
* its own cluster and swapout sequentially. The purpose is to optimize swapout
* throughput.
*/
struct percpu_cluster {
- struct swap_cluster_info index; /* Current cluster index */
unsigned int next; /* Likely next allocation offset */
};

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 20c45757f2b2..e3f855475278 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -609,7 +609,7 @@ scan_swap_map_ssd_cluster_conflict(struct swap_info_struct *si,
return false;

percpu_cluster = this_cpu_ptr(si->percpu_cluster);
- cluster_set_null(&percpu_cluster->index);
+ percpu_cluster->next = SWAP_NEXT_INVALID;
return true;
}

@@ -622,14 +622,14 @@ static bool scan_swap_map_try_ssd_cluster(struct swap_info_struct *si,
{
struct percpu_cluster *cluster;
struct swap_cluster_info *ci;
- unsigned long tmp, max;
+ unsigned int tmp, max;

new_cluster:
cluster = this_cpu_ptr(si->percpu_cluster);
- if (cluster_is_null(&cluster->index)) {
+ tmp = cluster->next;
+ if (tmp == SWAP_NEXT_INVALID) {
if (!cluster_list_empty(&si->free_clusters)) {
- cluster->index = si->free_clusters.head;
- cluster->next = cluster_next(&cluster->index) *
+ tmp = cluster_next(&si->free_clusters.head) *
SWAPFILE_CLUSTER;
} else if (!cluster_list_empty(&si->discard_clusters)) {
/*
@@ -649,9 +649,7 @@ static bool scan_swap_map_try_ssd_cluster(struct swap_info_struct *si,
* Other CPUs can use our cluster if they can't find a free cluster,
* check if there is still free entry in the cluster
*/
- tmp = cluster->next;
- max = min_t(unsigned long, si->max,
- (cluster_next(&cluster->index) + 1) * SWAPFILE_CLUSTER);
+ max = min_t(unsigned long, si->max, ALIGN(tmp + 1, SWAPFILE_CLUSTER));
if (tmp < max) {
ci = lock_cluster(si, tmp);
while (tmp < max) {
@@ -662,12 +660,13 @@ static bool scan_swap_map_try_ssd_cluster(struct swap_info_struct *si,
unlock_cluster(ci);
}
if (tmp >= max) {
- cluster_set_null(&cluster->index);
+ cluster->next = SWAP_NEXT_INVALID;
goto new_cluster;
}
- cluster->next = tmp + 1;
*offset = tmp;
*scan_base = tmp;
+ tmp += 1;
+ cluster->next = tmp < max ? tmp : SWAP_NEXT_INVALID;
return true;
}

@@ -3163,8 +3162,9 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
}
for_each_possible_cpu(cpu) {
struct percpu_cluster *cluster;
+
cluster = per_cpu_ptr(p->percpu_cluster, cpu);
- cluster_set_null(&cluster->index);
+ cluster->next = SWAP_NEXT_INVALID;
}
} else {
atomic_inc(&nr_rotate_swap);
--
2.25.1


2024-04-09 07:35:15

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v7 2/7] mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()

On 08.04.24 20:39, Ryan Roberts wrote:
> Now that we no longer have a convenient flag in the cluster to determine
> if a folio is large, free_swap_and_cache() will take a reference and
> lock a large folio much more often, which could lead to contention and
> (e.g.) failure to split large folios, etc.
>
> Let's solve that problem by batch freeing swap and cache with a new
> function, free_swap_and_cache_nr(), to free a contiguous range of swap
> entries together. This allows us to first drop a reference to each swap
> slot before we try to release the cache folio. This means we only try to
> release the folio once, only taking the reference and lock once - much
> better than the previous 512 times for the 2M THP case.
>
> Contiguous swap entries are gathered in zap_pte_range() and
> madvise_free_pte_range() in a similar way to how present ptes are
> already gathered in zap_pte_range().
>
> While we are at it, let's simplify by converting the return type of both
> functions to void. The return value was used only by zap_pte_range() to
> print a bad pte, and was ignored by everyone else, so the extra
> reporting wasn't exactly guaranteed. We will still get the warning with
> most of the information from get_swap_device(). With the batch version,
> we wouldn't know which pte was bad anyway so could print the wrong one.
>
> Signed-off-by: Ryan Roberts <[email protected]>
> ---
> include/linux/pgtable.h | 29 ++++++++++++
> include/linux/swap.h | 12 +++--
> mm/internal.h | 63 ++++++++++++++++++++++++++
> mm/madvise.c | 12 +++--
> mm/memory.c | 13 +++---
> mm/swapfile.c | 97 +++++++++++++++++++++++++++++++++--------
> 6 files changed, 195 insertions(+), 31 deletions(-)
>
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index a3fc8150b047..75096025fe52 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -708,6 +708,35 @@ static inline void pte_clear_not_present_full(struct mm_struct *mm,
> }
> #endif
>
> +#ifndef clear_not_present_full_ptes
> +/**
> + * clear_not_present_full_ptes - Clear multiple not present PTEs which are
> + * consecutive in the pgtable.
> + * @mm: Address space the ptes represent.
> + * @addr: Address of the first pte.
> + * @ptep: Page table pointer for the first entry.
> + * @nr: Number of entries to clear.
> + * @full: Whether we are clearing a full mm.
> + *
> + * May be overridden by the architecture; otherwise, implemented as a simple
> + * loop over pte_clear_not_present_full().
> + *
> + * Context: The caller holds the page table lock. The PTEs are all not present.
> + * The PTEs are all in the same PMD.
> + */
> +static inline void clear_not_present_full_ptes(struct mm_struct *mm,
> + unsigned long addr, pte_t *ptep, unsigned int nr, int full)
> +{
> + for (;;) {
> + pte_clear_not_present_full(mm, addr, ptep, full);
> + if (--nr == 0)
> + break;
> + ptep++;
> + addr += PAGE_SIZE;
> + }
> +}
> +#endif
> +
> #ifndef __HAVE_ARCH_PTEP_CLEAR_FLUSH
> extern pte_t ptep_clear_flush(struct vm_area_struct *vma,
> unsigned long address,
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index f6f78198f000..5737236dc3ce 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -471,7 +471,7 @@ extern int swap_duplicate(swp_entry_t);
> extern int swapcache_prepare(swp_entry_t);
> extern void swap_free(swp_entry_t);
> extern void swapcache_free_entries(swp_entry_t *entries, int n);
> -extern int free_swap_and_cache(swp_entry_t);
> +extern void free_swap_and_cache_nr(swp_entry_t entry, int nr);
> int swap_type_of(dev_t device, sector_t offset);
> int find_first_swap(dev_t *device);
> extern unsigned int count_swap_pages(int, int);
> @@ -520,8 +520,9 @@ static inline void put_swap_device(struct swap_info_struct *si)
> #define free_pages_and_swap_cache(pages, nr) \
> release_pages((pages), (nr));
>
> -/* used to sanity check ptes in zap_pte_range when CONFIG_SWAP=0 */
> -#define free_swap_and_cache(e) is_pfn_swap_entry(e)
> +static inline void free_swap_and_cache_nr(swp_entry_t entry, int nr)
> +{
> +}
>
> static inline void free_swap_cache(struct folio *folio)
> {
> @@ -589,6 +590,11 @@ static inline int add_swap_extent(struct swap_info_struct *sis,
> }
> #endif /* CONFIG_SWAP */
>
> +static inline void free_swap_and_cache(swp_entry_t entry)
> +{
> + free_swap_and_cache_nr(entry, 1);
> +}
> +
> #ifdef CONFIG_MEMCG
> static inline int mem_cgroup_swappiness(struct mem_cgroup *memcg)
> {
> diff --git a/mm/internal.h b/mm/internal.h
> index 3bdc8693b54f..de68705624b0 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -11,6 +11,8 @@
> #include <linux/mm.h>
> #include <linux/pagemap.h>
> #include <linux/rmap.h>
> +#include <linux/swap.h>
> +#include <linux/swapops.h>
> #include <linux/tracepoint-defs.h>
>
> struct folio_batch;
> @@ -189,6 +191,67 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>
> return min(ptep - start_ptep, max_nr);
> }
> +
> +/**
> + * pte_next_swp_offset - Increment the swap entry offset field of a swap pte.
> + * @pte: The initial pte state; is_swap_pte(pte) must be true.

Likely we also want non_swap_entry() to be false.

> + *
> + * Increments the swap offset, while maintaining all other fields, including
> + * swap type, and any swp pte bits. The resulting pte is returned.
> + */
> +static inline pte_t pte_next_swp_offset(pte_t pte)
> +{
> + swp_entry_t entry = pte_to_swp_entry(pte);
> + pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
> + swp_offset(entry) + 1));
> +
> + if (pte_swp_soft_dirty(pte))
> + new = pte_swp_mksoft_dirty(new);
> + if (pte_swp_exclusive(pte))
> + new = pte_swp_mkexclusive(new);
> + if (pte_swp_uffd_wp(pte))
> + new = pte_swp_mkuffd_wp(new);
> +
> + return new;
> +}


Acked-by: David Hildenbrand <[email protected]>

--
Cheers,

David / dhildenb


2024-04-09 07:43:47

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v7 4/7] mm: swap: Update get_swap_pages() to take folio order

On 08.04.24 20:39, Ryan Roberts wrote:
> We are about to allow swap storage of any mTHP size. To prepare for
> that, let's change get_swap_pages() to take a folio order parameter
> instead of nr_pages. This makes the interface self-documenting; a
> power-of-2 number of pages must be provided. We will also need the order
> internally so this simplifies accessing it.
>
> Reviewed-by: "Huang, Ying" <[email protected]>
> Signed-off-by: Ryan Roberts <[email protected]>
> ---

Reviewed-by: David Hildenbrand <[email protected]>

--
Cheers,

David / dhildenb


2024-04-09 07:52:05

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v7 7/7] mm: madvise: Avoid split during MADV_PAGEOUT and MADV_COLD

On 08.04.24 20:39, Ryan Roberts wrote:
> Rework madvise_cold_or_pageout_pte_range() to avoid splitting any large
> folio that is fully and contiguously mapped in the pageout/cold vm
> range. This change means that large folios will be maintained all the
> way to swap storage. This both improves performance during swap-out, by
> eliding the cost of splitting the folio, and sets us up nicely for
> maintaining the large folio when it is swapped back in (to be covered in
> a separate series).
>
> Folios that are not fully mapped in the target range are still split,
> but note that behavior is changed so that if the split fails for any
> reason (folio locked, shared, etc) we now leave it as is and move to the
> next pte in the range and continue work on the remaining folios.
> Previously any failure of this sort would cause the entire operation to
> give up and no folios mapped at higher addresses were paged out or made
> cold. Given large folios are becoming more common, this old behavior
> would likely have led to wasted opportunities.
>
> While we are at it, change the code that clears young from the ptes to
> use ptep_test_and_clear_young(), via the new mkold_ptes() batch helper
> function. This is more efficient than get_and_clear/modify/set,
> especially for contpte mappings on arm64, where the old approach would
> require unfolding/refolding and the new approach can be done in place.
>
> Reviewed-by: Barry Song <[email protected]>
> Signed-off-by: Ryan Roberts <[email protected]>
> ---

Acked-by: David Hildenbrand <[email protected]>

--
Cheers,

David / dhildenb


2024-04-09 08:52:01

by Barry Song

[permalink] [raw]
Subject: Re: [PATCH v7 2/7] mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()

On Tue, Apr 9, 2024 at 6:40 AM Ryan Roberts <[email protected]> wrote:
>
> Now that we no longer have a convenient flag in the cluster to determine
> if a folio is large, free_swap_and_cache() will take a reference and
> lock a large folio much more often, which could lead to contention and
> (e.g.) failure to split large folios, etc.
>
> Let's solve that problem by batch freeing swap and cache with a new
> function, free_swap_and_cache_nr(), to free a contiguous range of swap
> entries together. This allows us to first drop a reference to each swap
> slot before we try to release the cache folio. This means we only try to
> release the folio once, only taking the reference and lock once - much
> better than the previous 512 times for the 2M THP case.
>
> Contiguous swap entries are gathered in zap_pte_range() and
> madvise_free_pte_range() in a similar way to how present ptes are
> already gathered in zap_pte_range().
>
> While we are at it, let's simplify by converting the return type of both
> functions to void. The return value was used only by zap_pte_range() to
> print a bad pte, and was ignored by everyone else, so the extra
> reporting wasn't exactly guaranteed. We will still get the warning with
> most of the information from get_swap_device(). With the batch version,
> we wouldn't know which pte was bad anyway so could print the wrong one.
>
> Signed-off-by: Ryan Roberts <[email protected]>
> ---
> include/linux/pgtable.h | 29 ++++++++++++
> include/linux/swap.h | 12 +++--
> mm/internal.h | 63 ++++++++++++++++++++++++++
> mm/madvise.c | 12 +++--
> mm/memory.c | 13 +++---
> mm/swapfile.c | 97 +++++++++++++++++++++++++++++++++--------
> 6 files changed, 195 insertions(+), 31 deletions(-)
>
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index a3fc8150b047..75096025fe52 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -708,6 +708,35 @@ static inline void pte_clear_not_present_full(struct mm_struct *mm,
> }
> #endif
>
> +#ifndef clear_not_present_full_ptes
> +/**
> + * clear_not_present_full_ptes - Clear multiple not present PTEs which are
> + * consecutive in the pgtable.
> + * @mm: Address space the ptes represent.
> + * @addr: Address of the first pte.
> + * @ptep: Page table pointer for the first entry.
> + * @nr: Number of entries to clear.
> + * @full: Whether we are clearing a full mm.
> + *
> + * May be overridden by the architecture; otherwise, implemented as a simple
> + * loop over pte_clear_not_present_full().
> + *
> + * Context: The caller holds the page table lock. The PTEs are all not present.
> + * The PTEs are all in the same PMD.
> + */
> +static inline void clear_not_present_full_ptes(struct mm_struct *mm,
> + unsigned long addr, pte_t *ptep, unsigned int nr, int full)
> +{
> + for (;;) {
> + pte_clear_not_present_full(mm, addr, ptep, full);
> + if (--nr == 0)
> + break;
> + ptep++;
> + addr += PAGE_SIZE;
> + }
> +}
> +#endif
> +
> #ifndef __HAVE_ARCH_PTEP_CLEAR_FLUSH
> extern pte_t ptep_clear_flush(struct vm_area_struct *vma,
> unsigned long address,
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index f6f78198f000..5737236dc3ce 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -471,7 +471,7 @@ extern int swap_duplicate(swp_entry_t);
> extern int swapcache_prepare(swp_entry_t);
> extern void swap_free(swp_entry_t);
> extern void swapcache_free_entries(swp_entry_t *entries, int n);
> -extern int free_swap_and_cache(swp_entry_t);
> +extern void free_swap_and_cache_nr(swp_entry_t entry, int nr);
> int swap_type_of(dev_t device, sector_t offset);
> int find_first_swap(dev_t *device);
> extern unsigned int count_swap_pages(int, int);
> @@ -520,8 +520,9 @@ static inline void put_swap_device(struct swap_info_struct *si)
> #define free_pages_and_swap_cache(pages, nr) \
> release_pages((pages), (nr));
>
> -/* used to sanity check ptes in zap_pte_range when CONFIG_SWAP=0 */
> -#define free_swap_and_cache(e) is_pfn_swap_entry(e)
> +static inline void free_swap_and_cache_nr(swp_entry_t entry, int nr)
> +{
> +}
>
> static inline void free_swap_cache(struct folio *folio)
> {
> @@ -589,6 +590,11 @@ static inline int add_swap_extent(struct swap_info_struct *sis,
> }
> #endif /* CONFIG_SWAP */
>
> +static inline void free_swap_and_cache(swp_entry_t entry)
> +{
> + free_swap_and_cache_nr(entry, 1);
> +}
> +
> #ifdef CONFIG_MEMCG
> static inline int mem_cgroup_swappiness(struct mem_cgroup *memcg)
> {
> diff --git a/mm/internal.h b/mm/internal.h
> index 3bdc8693b54f..de68705624b0 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -11,6 +11,8 @@
> #include <linux/mm.h>
> #include <linux/pagemap.h>
> #include <linux/rmap.h>
> +#include <linux/swap.h>
> +#include <linux/swapops.h>
> #include <linux/tracepoint-defs.h>
>
> struct folio_batch;
> @@ -189,6 +191,67 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>
> return min(ptep - start_ptep, max_nr);
> }
> +
> +/**
> + * pte_next_swp_offset - Increment the swap entry offset field of a swap pte.
> + * @pte: The initial pte state; is_swap_pte(pte) must be true.
> + *
> + * Increments the swap offset, while maintaining all other fields, including
> + * swap type, and any swp pte bits. The resulting pte is returned.
> + */
> +static inline pte_t pte_next_swp_offset(pte_t pte)
> +{
> + swp_entry_t entry = pte_to_swp_entry(pte);
> + pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
> + swp_offset(entry) + 1));
> +
> + if (pte_swp_soft_dirty(pte))
> + new = pte_swp_mksoft_dirty(new);
> + if (pte_swp_exclusive(pte))
> + new = pte_swp_mkexclusive(new);
> + if (pte_swp_uffd_wp(pte))
> + new = pte_swp_mkuffd_wp(new);

I don't quite understand this. If this page table entry is exclusive,
will its subsequent page table entry also be exclusive without
question?
in try_to_unmap_one, exclusive is per-subpage but not per-folio:

anon_exclusive = folio_test_anon(folio) &&
PageAnonExclusive(subpage);

same questions also for dirty, wp etc.

> +
> + return new;
> +}
> +
> +/**
> + * swap_pte_batch - detect a PTE batch for a set of contiguous swap entries
> + * @start_ptep: Page table pointer for the first entry.
> + * @max_nr: The maximum number of table entries to consider.
> + * @pte: Page table entry for the first entry.
> + *
> + * Detect a batch of contiguous swap entries: consecutive (non-present) PTEs
> + * containing swap entries all with consecutive offsets and targeting the same
> + * swap type, all with matching swp pte bits.
> + *
> + * max_nr must be at least one and must be limited by the caller so scanning
> + * cannot exceed a single page table.
> + *
> + * Return: the number of table entries in the batch.
> + */
> +static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte)
> +{
> + pte_t expected_pte = pte_next_swp_offset(pte);
> + const pte_t *end_ptep = start_ptep + max_nr;
> + pte_t *ptep = start_ptep + 1;
> +
> + VM_WARN_ON(max_nr < 1);
> + VM_WARN_ON(!is_swap_pte(pte));
> + VM_WARN_ON(non_swap_entry(pte_to_swp_entry(pte)));
> +
> + while (ptep < end_ptep) {
> + pte = ptep_get(ptep);
> +
> + if (!pte_same(pte, expected_pte))
> + break;
> +
> + expected_pte = pte_next_swp_offset(expected_pte);
> + ptep++;
> + }
> +
> + return ptep - start_ptep;
> +}
> #endif /* CONFIG_MMU */
>
> void __acct_reclaim_writeback(pg_data_t *pgdat, struct folio *folio,
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 1f77a51baaac..5011ecb24344 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -628,6 +628,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> struct folio *folio;
> int nr_swap = 0;
> unsigned long next;
> + int nr, max_nr;
>
> next = pmd_addr_end(addr, end);
> if (pmd_trans_huge(*pmd))
> @@ -640,7 +641,8 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> return 0;
> flush_tlb_batched_pending(mm);
> arch_enter_lazy_mmu_mode();
> - for (; addr != end; pte++, addr += PAGE_SIZE) {
> + for (; addr != end; pte += nr, addr += PAGE_SIZE * nr) {
> + nr = 1;
> ptent = ptep_get(pte);
>
> if (pte_none(ptent))
> @@ -655,9 +657,11 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>
> entry = pte_to_swp_entry(ptent);
> if (!non_swap_entry(entry)) {
> - nr_swap--;
> - free_swap_and_cache(entry);
> - pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
> + max_nr = (end - addr) / PAGE_SIZE;
> + nr = swap_pte_batch(pte, max_nr, ptent);
> + nr_swap -= nr;
> + free_swap_and_cache_nr(entry, nr);
> + clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
> } else if (is_hwpoison_entry(entry) ||
> is_poisoned_swp_entry(entry)) {
> pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
> diff --git a/mm/memory.c b/mm/memory.c
> index b98e4d907a14..0db2aa066a5a 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1637,12 +1637,13 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
> folio_remove_rmap_pte(folio, page, vma);
> folio_put(folio);
> } else if (!non_swap_entry(entry)) {
> - /* Genuine swap entry, hence a private anon page */
> + max_nr = (end - addr) / PAGE_SIZE;
> + nr = swap_pte_batch(pte, max_nr, ptent);
> + /* Genuine swap entries, hence a private anon pages */
> if (!should_zap_cows(details))
> continue;
> - rss[MM_SWAPENTS]--;
> - if (unlikely(!free_swap_and_cache(entry)))
> - print_bad_pte(vma, addr, ptent, NULL);
> + rss[MM_SWAPENTS] -= nr;
> + free_swap_and_cache_nr(entry, nr);
> } else if (is_migration_entry(entry)) {
> folio = pfn_swap_entry_folio(entry);
> if (!should_zap_folio(details, folio))
> @@ -1665,8 +1666,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
> pr_alert("unrecognized swap entry 0x%lx\n", entry.val);
> WARN_ON_ONCE(1);
> }
> - pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
> - zap_install_uffd_wp_if_needed(vma, addr, pte, 1, details, ptent);
> + clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
> + zap_install_uffd_wp_if_needed(vma, addr, pte, nr, details, ptent);
> } while (pte += nr, addr += PAGE_SIZE * nr, addr != end);
>
> add_mm_rss_vec(mm, rss);
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 1ded6d1dcab4..20c45757f2b2 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -130,7 +130,11 @@ static inline unsigned char swap_count(unsigned char ent)
> /* Reclaim the swap entry if swap is getting full*/
> #define TTRS_FULL 0x4
>
> -/* returns 1 if swap entry is freed */
> +/*
> + * returns number of pages in the folio that backs the swap entry. If positive,
> + * the folio was reclaimed. If negative, the folio was not reclaimed. If 0, no
> + * folio was associated with the swap entry.
> + */
> static int __try_to_reclaim_swap(struct swap_info_struct *si,
> unsigned long offset, unsigned long flags)
> {
> @@ -155,6 +159,7 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si,
> ret = folio_free_swap(folio);
> folio_unlock(folio);
> }
> + ret = ret ? folio_nr_pages(folio) : -folio_nr_pages(folio);
> folio_put(folio);
> return ret;
> }
> @@ -895,7 +900,7 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
> swap_was_freed = __try_to_reclaim_swap(si, offset, TTRS_ANYWAY);
> spin_lock(&si->lock);
> /* entry was freed successfully, try to use this again */
> - if (swap_was_freed)
> + if (swap_was_freed > 0)
> goto checks;
> goto scan; /* check next one */
> }
> @@ -1572,32 +1577,88 @@ bool folio_free_swap(struct folio *folio)
> return true;
> }
>
> -/*
> - * Free the swap entry like above, but also try to
> - * free the page cache entry if it is the last user.
> +/**
> + * free_swap_and_cache_nr() - Release reference on range of swap entries and
> + * reclaim their cache if no more references remain.
> + * @entry: First entry of range.
> + * @nr: Number of entries in range.
> + *
> + * For each swap entry in the contiguous range, release a reference. If any swap
> + * entries become free, try to reclaim their underlying folios, if present. The
> + * offset range is defined by [entry.offset, entry.offset + nr).
> */
> -int free_swap_and_cache(swp_entry_t entry)
> +void free_swap_and_cache_nr(swp_entry_t entry, int nr)
> {
> - struct swap_info_struct *p;
> + const unsigned long start_offset = swp_offset(entry);
> + const unsigned long end_offset = start_offset + nr;
> + unsigned int type = swp_type(entry);
> + struct swap_info_struct *si;
> + bool any_only_cache = false;
> + unsigned long offset;
> unsigned char count;
>
> if (non_swap_entry(entry))
> - return 1;
> + return;
>
> - p = get_swap_device(entry);
> - if (p) {
> - if (WARN_ON(data_race(!p->swap_map[swp_offset(entry)]))) {
> - put_swap_device(p);
> - return 0;
> + si = get_swap_device(entry);
> + if (!si)
> + return;
> +
> + if (WARN_ON(end_offset > si->max))
> + goto out;
> +
> + /*
> + * First free all entries in the range.
> + */
> + for (offset = start_offset; offset < end_offset; offset++) {
> + if (data_race(si->swap_map[offset])) {
> + count = __swap_entry_free(si, swp_entry(type, offset));
> + if (count == SWAP_HAS_CACHE)
> + any_only_cache = true;
> + } else {
> + WARN_ON_ONCE(1);
> }
> + }
> +
> + /*
> + * Short-circuit the below loop if none of the entries had their
> + * reference drop to zero.
> + */
> + if (!any_only_cache)
> + goto out;
>
> - count = __swap_entry_free(p, entry);
> - if (count == SWAP_HAS_CACHE)
> - __try_to_reclaim_swap(p, swp_offset(entry),
> + /*
> + * Now go back over the range trying to reclaim the swap cache. This is
> + * more efficient for large folios because we will only try to reclaim
> + * the swap once per folio in the common case. If we do
> + * __swap_entry_free() and __try_to_reclaim_swap() in the same loop, the
> + * latter will get a reference and lock the folio for every individual
> + * page but will only succeed once the swap slot for every subpage is
> + * zero.
> + */
> + for (offset = start_offset; offset < end_offset; offset += nr) {
> + nr = 1;
> + if (READ_ONCE(si->swap_map[offset]) == SWAP_HAS_CACHE) {
> + /*
> + * Folios are always naturally aligned in swap so
> + * advance forward to the next boundary. Zero means no
> + * folio was found for the swap entry, so advance by 1
> + * in this case. Negative value means folio was found
> + * but could not be reclaimed. Here we can still advance
> + * to the next boundary.
> + */
> + nr = __try_to_reclaim_swap(si, offset,
> TTRS_UNMAPPED | TTRS_FULL);
> - put_swap_device(p);
> + if (nr == 0)
> + nr = 1;
> + else if (nr < 0)
> + nr = -nr;
> + nr = ALIGN(offset + 1, nr) - offset;
> + }
> }
> - return p != NULL;
> +
> +out:
> + put_swap_device(si);
> }
>
> #ifdef CONFIG_HIBERNATION
> --
> 2.25.1
>

Thanks
Barry

2024-04-09 09:23:48

by Barry Song

[permalink] [raw]
Subject: Re: [PATCH v7 2/7] mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()

On Tue, Apr 9, 2024 at 8:51 PM Barry Song <[email protected]> wrote:
>
> On Tue, Apr 9, 2024 at 6:40 AM Ryan Roberts <[email protected]> wrote:
> >
> > Now that we no longer have a convenient flag in the cluster to determine
> > if a folio is large, free_swap_and_cache() will take a reference and
> > lock a large folio much more often, which could lead to contention and
> > (e.g.) failure to split large folios, etc.
> >
> > Let's solve that problem by batch freeing swap and cache with a new
> > function, free_swap_and_cache_nr(), to free a contiguous range of swap
> > entries together. This allows us to first drop a reference to each swap
> > slot before we try to release the cache folio. This means we only try to
> > release the folio once, only taking the reference and lock once - much
> > better than the previous 512 times for the 2M THP case.
> >
> > Contiguous swap entries are gathered in zap_pte_range() and
> > madvise_free_pte_range() in a similar way to how present ptes are
> > already gathered in zap_pte_range().
> >
> > While we are at it, let's simplify by converting the return type of both
> > functions to void. The return value was used only by zap_pte_range() to
> > print a bad pte, and was ignored by everyone else, so the extra
> > reporting wasn't exactly guaranteed. We will still get the warning with
> > most of the information from get_swap_device(). With the batch version,
> > we wouldn't know which pte was bad anyway so could print the wrong one.
> >
> > Signed-off-by: Ryan Roberts <[email protected]>
> > ---
> > include/linux/pgtable.h | 29 ++++++++++++
> > include/linux/swap.h | 12 +++--
> > mm/internal.h | 63 ++++++++++++++++++++++++++
> > mm/madvise.c | 12 +++--
> > mm/memory.c | 13 +++---
> > mm/swapfile.c | 97 +++++++++++++++++++++++++++++++++--------
> > 6 files changed, 195 insertions(+), 31 deletions(-)
> >
> > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > index a3fc8150b047..75096025fe52 100644
> > --- a/include/linux/pgtable.h
> > +++ b/include/linux/pgtable.h
> > @@ -708,6 +708,35 @@ static inline void pte_clear_not_present_full(struct mm_struct *mm,
> > }
> > #endif
> >
> > +#ifndef clear_not_present_full_ptes
> > +/**
> > + * clear_not_present_full_ptes - Clear multiple not present PTEs which are
> > + * consecutive in the pgtable.
> > + * @mm: Address space the ptes represent.
> > + * @addr: Address of the first pte.
> > + * @ptep: Page table pointer for the first entry.
> > + * @nr: Number of entries to clear.
> > + * @full: Whether we are clearing a full mm.
> > + *
> > + * May be overridden by the architecture; otherwise, implemented as a simple
> > + * loop over pte_clear_not_present_full().
> > + *
> > + * Context: The caller holds the page table lock. The PTEs are all not present.
> > + * The PTEs are all in the same PMD.
> > + */
> > +static inline void clear_not_present_full_ptes(struct mm_struct *mm,
> > + unsigned long addr, pte_t *ptep, unsigned int nr, int full)
> > +{
> > + for (;;) {
> > + pte_clear_not_present_full(mm, addr, ptep, full);
> > + if (--nr == 0)
> > + break;
> > + ptep++;
> > + addr += PAGE_SIZE;
> > + }
> > +}
> > +#endif
> > +
> > #ifndef __HAVE_ARCH_PTEP_CLEAR_FLUSH
> > extern pte_t ptep_clear_flush(struct vm_area_struct *vma,
> > unsigned long address,
> > diff --git a/include/linux/swap.h b/include/linux/swap.h
> > index f6f78198f000..5737236dc3ce 100644
> > --- a/include/linux/swap.h
> > +++ b/include/linux/swap.h
> > @@ -471,7 +471,7 @@ extern int swap_duplicate(swp_entry_t);
> > extern int swapcache_prepare(swp_entry_t);
> > extern void swap_free(swp_entry_t);
> > extern void swapcache_free_entries(swp_entry_t *entries, int n);
> > -extern int free_swap_and_cache(swp_entry_t);
> > +extern void free_swap_and_cache_nr(swp_entry_t entry, int nr);
> > int swap_type_of(dev_t device, sector_t offset);
> > int find_first_swap(dev_t *device);
> > extern unsigned int count_swap_pages(int, int);
> > @@ -520,8 +520,9 @@ static inline void put_swap_device(struct swap_info_struct *si)
> > #define free_pages_and_swap_cache(pages, nr) \
> > release_pages((pages), (nr));
> >
> > -/* used to sanity check ptes in zap_pte_range when CONFIG_SWAP=0 */
> > -#define free_swap_and_cache(e) is_pfn_swap_entry(e)
> > +static inline void free_swap_and_cache_nr(swp_entry_t entry, int nr)
> > +{
> > +}
> >
> > static inline void free_swap_cache(struct folio *folio)
> > {
> > @@ -589,6 +590,11 @@ static inline int add_swap_extent(struct swap_info_struct *sis,
> > }
> > #endif /* CONFIG_SWAP */
> >
> > +static inline void free_swap_and_cache(swp_entry_t entry)
> > +{
> > + free_swap_and_cache_nr(entry, 1);
> > +}
> > +
> > #ifdef CONFIG_MEMCG
> > static inline int mem_cgroup_swappiness(struct mem_cgroup *memcg)
> > {
> > diff --git a/mm/internal.h b/mm/internal.h
> > index 3bdc8693b54f..de68705624b0 100644
> > --- a/mm/internal.h
> > +++ b/mm/internal.h
> > @@ -11,6 +11,8 @@
> > #include <linux/mm.h>
> > #include <linux/pagemap.h>
> > #include <linux/rmap.h>
> > +#include <linux/swap.h>
> > +#include <linux/swapops.h>
> > #include <linux/tracepoint-defs.h>
> >
> > struct folio_batch;
> > @@ -189,6 +191,67 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> >
> > return min(ptep - start_ptep, max_nr);
> > }
> > +
> > +/**
> > + * pte_next_swp_offset - Increment the swap entry offset field of a swap pte.
> > + * @pte: The initial pte state; is_swap_pte(pte) must be true.
> > + *
> > + * Increments the swap offset, while maintaining all other fields, including
> > + * swap type, and any swp pte bits. The resulting pte is returned.
> > + */
> > +static inline pte_t pte_next_swp_offset(pte_t pte)
> > +{
> > + swp_entry_t entry = pte_to_swp_entry(pte);
> > + pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
> > + swp_offset(entry) + 1));
> > +
> > + if (pte_swp_soft_dirty(pte))
> > + new = pte_swp_mksoft_dirty(new);
> > + if (pte_swp_exclusive(pte))
> > + new = pte_swp_mkexclusive(new);
> > + if (pte_swp_uffd_wp(pte))
> > + new = pte_swp_mkuffd_wp(new);
>
> I don't quite understand this. If this page table entry is exclusive,
> will its subsequent page table entry also be exclusive without
> question?
> in try_to_unmap_one, exclusive is per-subpage but not per-folio:
>
> anon_exclusive = folio_test_anon(folio) &&
> PageAnonExclusive(subpage);
>
> same questions also for dirty, wp etc.

Sorry for the noise. you are right. based on your new version, I think I should
entirely drop:

[PATCH v2 3/5] mm: swap_pte_batch: add an output argument to reture if
all swap entries are exclusive

https://lore.kernel.org/linux-mm/[email protected]/

>
> > +
> > + return new;
> > +}
> > +
> > +/**
> > + * swap_pte_batch - detect a PTE batch for a set of contiguous swap entries
> > + * @start_ptep: Page table pointer for the first entry.
> > + * @max_nr: The maximum number of table entries to consider.
> > + * @pte: Page table entry for the first entry.
> > + *
> > + * Detect a batch of contiguous swap entries: consecutive (non-present) PTEs
> > + * containing swap entries all with consecutive offsets and targeting the same
> > + * swap type, all with matching swp pte bits.
> > + *
> > + * max_nr must be at least one and must be limited by the caller so scanning
> > + * cannot exceed a single page table.
> > + *
> > + * Return: the number of table entries in the batch.
> > + */
> > +static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte)
> > +{
> > + pte_t expected_pte = pte_next_swp_offset(pte);
> > + const pte_t *end_ptep = start_ptep + max_nr;
> > + pte_t *ptep = start_ptep + 1;
> > +
> > + VM_WARN_ON(max_nr < 1);
> > + VM_WARN_ON(!is_swap_pte(pte));
> > + VM_WARN_ON(non_swap_entry(pte_to_swp_entry(pte)));
> > +
> > + while (ptep < end_ptep) {
> > + pte = ptep_get(ptep);
> > +
> > + if (!pte_same(pte, expected_pte))
> > + break;
> > +
> > + expected_pte = pte_next_swp_offset(expected_pte);
> > + ptep++;
> > + }
> > +
> > + return ptep - start_ptep;
> > +}
> > #endif /* CONFIG_MMU */
> >
> > void __acct_reclaim_writeback(pg_data_t *pgdat, struct folio *folio,
> > diff --git a/mm/madvise.c b/mm/madvise.c
> > index 1f77a51baaac..5011ecb24344 100644
> > --- a/mm/madvise.c
> > +++ b/mm/madvise.c
> > @@ -628,6 +628,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> > struct folio *folio;
> > int nr_swap = 0;
> > unsigned long next;
> > + int nr, max_nr;
> >
> > next = pmd_addr_end(addr, end);
> > if (pmd_trans_huge(*pmd))
> > @@ -640,7 +641,8 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> > return 0;
> > flush_tlb_batched_pending(mm);
> > arch_enter_lazy_mmu_mode();
> > - for (; addr != end; pte++, addr += PAGE_SIZE) {
> > + for (; addr != end; pte += nr, addr += PAGE_SIZE * nr) {
> > + nr = 1;
> > ptent = ptep_get(pte);
> >
> > if (pte_none(ptent))
> > @@ -655,9 +657,11 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> >
> > entry = pte_to_swp_entry(ptent);
> > if (!non_swap_entry(entry)) {
> > - nr_swap--;
> > - free_swap_and_cache(entry);
> > - pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
> > + max_nr = (end - addr) / PAGE_SIZE;
> > + nr = swap_pte_batch(pte, max_nr, ptent);
> > + nr_swap -= nr;
> > + free_swap_and_cache_nr(entry, nr);
> > + clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
> > } else if (is_hwpoison_entry(entry) ||
> > is_poisoned_swp_entry(entry)) {
> > pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
> > diff --git a/mm/memory.c b/mm/memory.c
> > index b98e4d907a14..0db2aa066a5a 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -1637,12 +1637,13 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
> > folio_remove_rmap_pte(folio, page, vma);
> > folio_put(folio);
> > } else if (!non_swap_entry(entry)) {
> > - /* Genuine swap entry, hence a private anon page */
> > + max_nr = (end - addr) / PAGE_SIZE;
> > + nr = swap_pte_batch(pte, max_nr, ptent);
> > + /* Genuine swap entries, hence a private anon pages */
> > if (!should_zap_cows(details))
> > continue;
> > - rss[MM_SWAPENTS]--;
> > - if (unlikely(!free_swap_and_cache(entry)))
> > - print_bad_pte(vma, addr, ptent, NULL);
> > + rss[MM_SWAPENTS] -= nr;
> > + free_swap_and_cache_nr(entry, nr);
> > } else if (is_migration_entry(entry)) {
> > folio = pfn_swap_entry_folio(entry);
> > if (!should_zap_folio(details, folio))
> > @@ -1665,8 +1666,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
> > pr_alert("unrecognized swap entry 0x%lx\n", entry.val);
> > WARN_ON_ONCE(1);
> > }
> > - pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
> > - zap_install_uffd_wp_if_needed(vma, addr, pte, 1, details, ptent);
> > + clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
> > + zap_install_uffd_wp_if_needed(vma, addr, pte, nr, details, ptent);
> > } while (pte += nr, addr += PAGE_SIZE * nr, addr != end);
> >
> > add_mm_rss_vec(mm, rss);
> > diff --git a/mm/swapfile.c b/mm/swapfile.c
> > index 1ded6d1dcab4..20c45757f2b2 100644
> > --- a/mm/swapfile.c
> > +++ b/mm/swapfile.c
> > @@ -130,7 +130,11 @@ static inline unsigned char swap_count(unsigned char ent)
> > /* Reclaim the swap entry if swap is getting full*/
> > #define TTRS_FULL 0x4
> >
> > -/* returns 1 if swap entry is freed */
> > +/*
> > + * returns number of pages in the folio that backs the swap entry. If positive,
> > + * the folio was reclaimed. If negative, the folio was not reclaimed. If 0, no
> > + * folio was associated with the swap entry.
> > + */
> > static int __try_to_reclaim_swap(struct swap_info_struct *si,
> > unsigned long offset, unsigned long flags)
> > {
> > @@ -155,6 +159,7 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si,
> > ret = folio_free_swap(folio);
> > folio_unlock(folio);
> > }
> > + ret = ret ? folio_nr_pages(folio) : -folio_nr_pages(folio);
> > folio_put(folio);
> > return ret;
> > }
> > @@ -895,7 +900,7 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
> > swap_was_freed = __try_to_reclaim_swap(si, offset, TTRS_ANYWAY);
> > spin_lock(&si->lock);
> > /* entry was freed successfully, try to use this again */
> > - if (swap_was_freed)
> > + if (swap_was_freed > 0)
> > goto checks;
> > goto scan; /* check next one */
> > }
> > @@ -1572,32 +1577,88 @@ bool folio_free_swap(struct folio *folio)
> > return true;
> > }
> >
> > -/*
> > - * Free the swap entry like above, but also try to
> > - * free the page cache entry if it is the last user.
> > +/**
> > + * free_swap_and_cache_nr() - Release reference on range of swap entries and
> > + * reclaim their cache if no more references remain.
> > + * @entry: First entry of range.
> > + * @nr: Number of entries in range.
> > + *
> > + * For each swap entry in the contiguous range, release a reference. If any swap
> > + * entries become free, try to reclaim their underlying folios, if present. The
> > + * offset range is defined by [entry.offset, entry.offset + nr).
> > */
> > -int free_swap_and_cache(swp_entry_t entry)
> > +void free_swap_and_cache_nr(swp_entry_t entry, int nr)
> > {
> > - struct swap_info_struct *p;
> > + const unsigned long start_offset = swp_offset(entry);
> > + const unsigned long end_offset = start_offset + nr;
> > + unsigned int type = swp_type(entry);
> > + struct swap_info_struct *si;
> > + bool any_only_cache = false;
> > + unsigned long offset;
> > unsigned char count;
> >
> > if (non_swap_entry(entry))
> > - return 1;
> > + return;
> >
> > - p = get_swap_device(entry);
> > - if (p) {
> > - if (WARN_ON(data_race(!p->swap_map[swp_offset(entry)]))) {
> > - put_swap_device(p);
> > - return 0;
> > + si = get_swap_device(entry);
> > + if (!si)
> > + return;
> > +
> > + if (WARN_ON(end_offset > si->max))
> > + goto out;
> > +
> > + /*
> > + * First free all entries in the range.
> > + */
> > + for (offset = start_offset; offset < end_offset; offset++) {
> > + if (data_race(si->swap_map[offset])) {
> > + count = __swap_entry_free(si, swp_entry(type, offset));
> > + if (count == SWAP_HAS_CACHE)
> > + any_only_cache = true;
> > + } else {
> > + WARN_ON_ONCE(1);
> > }
> > + }
> > +
> > + /*
> > + * Short-circuit the below loop if none of the entries had their
> > + * reference drop to zero.
> > + */
> > + if (!any_only_cache)
> > + goto out;
> >
> > - count = __swap_entry_free(p, entry);
> > - if (count == SWAP_HAS_CACHE)
> > - __try_to_reclaim_swap(p, swp_offset(entry),
> > + /*
> > + * Now go back over the range trying to reclaim the swap cache. This is
> > + * more efficient for large folios because we will only try to reclaim
> > + * the swap once per folio in the common case. If we do
> > + * __swap_entry_free() and __try_to_reclaim_swap() in the same loop, the
> > + * latter will get a reference and lock the folio for every individual
> > + * page but will only succeed once the swap slot for every subpage is
> > + * zero.
> > + */
> > + for (offset = start_offset; offset < end_offset; offset += nr) {
> > + nr = 1;
> > + if (READ_ONCE(si->swap_map[offset]) == SWAP_HAS_CACHE) {
> > + /*
> > + * Folios are always naturally aligned in swap so
> > + * advance forward to the next boundary. Zero means no
> > + * folio was found for the swap entry, so advance by 1
> > + * in this case. Negative value means folio was found
> > + * but could not be reclaimed. Here we can still advance
> > + * to the next boundary.
> > + */
> > + nr = __try_to_reclaim_swap(si, offset,
> > TTRS_UNMAPPED | TTRS_FULL);
> > - put_swap_device(p);
> > + if (nr == 0)
> > + nr = 1;
> > + else if (nr < 0)
> > + nr = -nr;
> > + nr = ALIGN(offset + 1, nr) - offset;
> > + }
> > }
> > - return p != NULL;
> > +
> > +out:
> > + put_swap_device(si);
> > }
> >
> > #ifdef CONFIG_HIBERNATION
> > --
> > 2.25.1
> >
>
> Thanks
> Barry

2024-04-09 09:25:17

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v7 2/7] mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()

On 09.04.24 11:22, Barry Song wrote:
> On Tue, Apr 9, 2024 at 8:51 PM Barry Song <[email protected]> wrote:
>>
>> On Tue, Apr 9, 2024 at 6:40 AM Ryan Roberts <[email protected]> wrote:
>>>
>>> Now that we no longer have a convenient flag in the cluster to determine
>>> if a folio is large, free_swap_and_cache() will take a reference and
>>> lock a large folio much more often, which could lead to contention and
>>> (e.g.) failure to split large folios, etc.
>>>
>>> Let's solve that problem by batch freeing swap and cache with a new
>>> function, free_swap_and_cache_nr(), to free a contiguous range of swap
>>> entries together. This allows us to first drop a reference to each swap
>>> slot before we try to release the cache folio. This means we only try to
>>> release the folio once, only taking the reference and lock once - much
>>> better than the previous 512 times for the 2M THP case.
>>>
>>> Contiguous swap entries are gathered in zap_pte_range() and
>>> madvise_free_pte_range() in a similar way to how present ptes are
>>> already gathered in zap_pte_range().
>>>
>>> While we are at it, let's simplify by converting the return type of both
>>> functions to void. The return value was used only by zap_pte_range() to
>>> print a bad pte, and was ignored by everyone else, so the extra
>>> reporting wasn't exactly guaranteed. We will still get the warning with
>>> most of the information from get_swap_device(). With the batch version,
>>> we wouldn't know which pte was bad anyway so could print the wrong one.
>>>
>>> Signed-off-by: Ryan Roberts <[email protected]>
>>> ---
>>> include/linux/pgtable.h | 29 ++++++++++++
>>> include/linux/swap.h | 12 +++--
>>> mm/internal.h | 63 ++++++++++++++++++++++++++
>>> mm/madvise.c | 12 +++--
>>> mm/memory.c | 13 +++---
>>> mm/swapfile.c | 97 +++++++++++++++++++++++++++++++++--------
>>> 6 files changed, 195 insertions(+), 31 deletions(-)
>>>
>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>> index a3fc8150b047..75096025fe52 100644
>>> --- a/include/linux/pgtable.h
>>> +++ b/include/linux/pgtable.h
>>> @@ -708,6 +708,35 @@ static inline void pte_clear_not_present_full(struct mm_struct *mm,
>>> }
>>> #endif
>>>
>>> +#ifndef clear_not_present_full_ptes
>>> +/**
>>> + * clear_not_present_full_ptes - Clear multiple not present PTEs which are
>>> + * consecutive in the pgtable.
>>> + * @mm: Address space the ptes represent.
>>> + * @addr: Address of the first pte.
>>> + * @ptep: Page table pointer for the first entry.
>>> + * @nr: Number of entries to clear.
>>> + * @full: Whether we are clearing a full mm.
>>> + *
>>> + * May be overridden by the architecture; otherwise, implemented as a simple
>>> + * loop over pte_clear_not_present_full().
>>> + *
>>> + * Context: The caller holds the page table lock. The PTEs are all not present.
>>> + * The PTEs are all in the same PMD.
>>> + */
>>> +static inline void clear_not_present_full_ptes(struct mm_struct *mm,
>>> + unsigned long addr, pte_t *ptep, unsigned int nr, int full)
>>> +{
>>> + for (;;) {
>>> + pte_clear_not_present_full(mm, addr, ptep, full);
>>> + if (--nr == 0)
>>> + break;
>>> + ptep++;
>>> + addr += PAGE_SIZE;
>>> + }
>>> +}
>>> +#endif
>>> +
>>> #ifndef __HAVE_ARCH_PTEP_CLEAR_FLUSH
>>> extern pte_t ptep_clear_flush(struct vm_area_struct *vma,
>>> unsigned long address,
>>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>>> index f6f78198f000..5737236dc3ce 100644
>>> --- a/include/linux/swap.h
>>> +++ b/include/linux/swap.h
>>> @@ -471,7 +471,7 @@ extern int swap_duplicate(swp_entry_t);
>>> extern int swapcache_prepare(swp_entry_t);
>>> extern void swap_free(swp_entry_t);
>>> extern void swapcache_free_entries(swp_entry_t *entries, int n);
>>> -extern int free_swap_and_cache(swp_entry_t);
>>> +extern void free_swap_and_cache_nr(swp_entry_t entry, int nr);
>>> int swap_type_of(dev_t device, sector_t offset);
>>> int find_first_swap(dev_t *device);
>>> extern unsigned int count_swap_pages(int, int);
>>> @@ -520,8 +520,9 @@ static inline void put_swap_device(struct swap_info_struct *si)
>>> #define free_pages_and_swap_cache(pages, nr) \
>>> release_pages((pages), (nr));
>>>
>>> -/* used to sanity check ptes in zap_pte_range when CONFIG_SWAP=0 */
>>> -#define free_swap_and_cache(e) is_pfn_swap_entry(e)
>>> +static inline void free_swap_and_cache_nr(swp_entry_t entry, int nr)
>>> +{
>>> +}
>>>
>>> static inline void free_swap_cache(struct folio *folio)
>>> {
>>> @@ -589,6 +590,11 @@ static inline int add_swap_extent(struct swap_info_struct *sis,
>>> }
>>> #endif /* CONFIG_SWAP */
>>>
>>> +static inline void free_swap_and_cache(swp_entry_t entry)
>>> +{
>>> + free_swap_and_cache_nr(entry, 1);
>>> +}
>>> +
>>> #ifdef CONFIG_MEMCG
>>> static inline int mem_cgroup_swappiness(struct mem_cgroup *memcg)
>>> {
>>> diff --git a/mm/internal.h b/mm/internal.h
>>> index 3bdc8693b54f..de68705624b0 100644
>>> --- a/mm/internal.h
>>> +++ b/mm/internal.h
>>> @@ -11,6 +11,8 @@
>>> #include <linux/mm.h>
>>> #include <linux/pagemap.h>
>>> #include <linux/rmap.h>
>>> +#include <linux/swap.h>
>>> +#include <linux/swapops.h>
>>> #include <linux/tracepoint-defs.h>
>>>
>>> struct folio_batch;
>>> @@ -189,6 +191,67 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>>>
>>> return min(ptep - start_ptep, max_nr);
>>> }
>>> +
>>> +/**
>>> + * pte_next_swp_offset - Increment the swap entry offset field of a swap pte.
>>> + * @pte: The initial pte state; is_swap_pte(pte) must be true.
>>> + *
>>> + * Increments the swap offset, while maintaining all other fields, including
>>> + * swap type, and any swp pte bits. The resulting pte is returned.
>>> + */
>>> +static inline pte_t pte_next_swp_offset(pte_t pte)
>>> +{
>>> + swp_entry_t entry = pte_to_swp_entry(pte);
>>> + pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
>>> + swp_offset(entry) + 1));
>>> +
>>> + if (pte_swp_soft_dirty(pte))
>>> + new = pte_swp_mksoft_dirty(new);
>>> + if (pte_swp_exclusive(pte))
>>> + new = pte_swp_mkexclusive(new);
>>> + if (pte_swp_uffd_wp(pte))
>>> + new = pte_swp_mkuffd_wp(new);
>>
>> I don't quite understand this. If this page table entry is exclusive,
>> will its subsequent page table entry also be exclusive without
>> question?
>> in try_to_unmap_one, exclusive is per-subpage but not per-folio:
>>
>> anon_exclusive = folio_test_anon(folio) &&
>> PageAnonExclusive(subpage);
>>
>> same questions also for dirty, wp etc.
>
> Sorry for the noise. you are right. based on your new version, I think I should
> entirely drop:
>
> [PATCH v2 3/5] mm: swap_pte_batch: add an output argument to reture if
> all swap entries are exclusive

Yes. If we ever want to ignore some bits, we should likely add flags to
change the behavior, like for folio_pte_batch().

For swapin, you really want the exclusive bits to match, though.
softdirty and uffd-wp as well at least initially for simplicity.
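
As a rough sketch of that idea (hypothetical names only: SPB_IGNORE_EXCLUSIVE
and spb_normalize() do not exist, they are simply modelled on
folio_pte_batch()'s FPB_* flags), the comparison in swap_pte_batch() could
mask off bits the caller chooses to ignore:

#define SPB_IGNORE_EXCLUSIVE	0x1

static inline pte_t spb_normalize(pte_t pte, unsigned int flags)
{
	/* Drop the swp exclusive bit if the caller asked to ignore it. */
	if (flags & SPB_IGNORE_EXCLUSIVE)
		pte = pte_swp_clear_exclusive(pte);
	return pte;
}

/*
 * Inside the batch loop, the check would then become:
 *
 *	if (!pte_same(spb_normalize(pte, flags),
 *		      spb_normalize(expected_pte, flags)))
 *		break;
 */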

--
Cheers,

David / dhildenb


2024-04-09 09:41:59

by Barry Song

[permalink] [raw]
Subject: Re: [PATCH v7 2/7] mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()

On Tue, Apr 9, 2024 at 9:24 PM David Hildenbrand <[email protected]> wrote:
>
> On 09.04.24 11:22, Barry Song wrote:
> > On Tue, Apr 9, 2024 at 8:51 PM Barry Song <[email protected]> wrote:
> >>
> >> On Tue, Apr 9, 2024 at 6:40 AM Ryan Roberts <[email protected]> wrote:
> >>>
> >>> Now that we no longer have a convenient flag in the cluster to determine
> >>> if a folio is large, free_swap_and_cache() will take a reference and
> >>> lock a large folio much more often, which could lead to contention and
> >>> (e.g.) failure to split large folios, etc.
> >>>
> >>> Let's solve that problem by batch freeing swap and cache with a new
> >>> function, free_swap_and_cache_nr(), to free a contiguous range of swap
> >>> entries together. This allows us to first drop a reference to each swap
> >>> slot before we try to release the cache folio. This means we only try to
> >>> release the folio once, only taking the reference and lock once - much
> >>> better than the previous 512 times for the 2M THP case.
> >>>
> >>> Contiguous swap entries are gathered in zap_pte_range() and
> >>> madvise_free_pte_range() in a similar way to how present ptes are
> >>> already gathered in zap_pte_range().
> >>>
> >>> While we are at it, let's simplify by converting the return type of both
> >>> functions to void. The return value was used only by zap_pte_range() to
> >>> print a bad pte, and was ignored by everyone else, so the extra
> >>> reporting wasn't exactly guaranteed. We will still get the warning with
> >>> most of the information from get_swap_device(). With the batch version,
> >>> we wouldn't know which pte was bad anyway so could print the wrong one.
> >>>
> >>> Signed-off-by: Ryan Roberts <[email protected]>
> >>> ---
> >>> include/linux/pgtable.h | 29 ++++++++++++
> >>> include/linux/swap.h | 12 +++--
> >>> mm/internal.h | 63 ++++++++++++++++++++++++++
> >>> mm/madvise.c | 12 +++--
> >>> mm/memory.c | 13 +++---
> >>> mm/swapfile.c | 97 +++++++++++++++++++++++++++++++++--------
> >>> 6 files changed, 195 insertions(+), 31 deletions(-)
> >>>
> >>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> >>> index a3fc8150b047..75096025fe52 100644
> >>> --- a/include/linux/pgtable.h
> >>> +++ b/include/linux/pgtable.h
> >>> @@ -708,6 +708,35 @@ static inline void pte_clear_not_present_full(struct mm_struct *mm,
> >>> }
> >>> #endif
> >>>
> >>> +#ifndef clear_not_present_full_ptes
> >>> +/**
> >>> + * clear_not_present_full_ptes - Clear multiple not present PTEs which are
> >>> + * consecutive in the pgtable.
> >>> + * @mm: Address space the ptes represent.
> >>> + * @addr: Address of the first pte.
> >>> + * @ptep: Page table pointer for the first entry.
> >>> + * @nr: Number of entries to clear.
> >>> + * @full: Whether we are clearing a full mm.
> >>> + *
> >>> + * May be overridden by the architecture; otherwise, implemented as a simple
> >>> + * loop over pte_clear_not_present_full().
> >>> + *
> >>> + * Context: The caller holds the page table lock. The PTEs are all not present.
> >>> + * The PTEs are all in the same PMD.
> >>> + */
> >>> +static inline void clear_not_present_full_ptes(struct mm_struct *mm,
> >>> + unsigned long addr, pte_t *ptep, unsigned int nr, int full)
> >>> +{
> >>> + for (;;) {
> >>> + pte_clear_not_present_full(mm, addr, ptep, full);
> >>> + if (--nr == 0)
> >>> + break;
> >>> + ptep++;
> >>> + addr += PAGE_SIZE;
> >>> + }
> >>> +}
> >>> +#endif
> >>> +
> >>> #ifndef __HAVE_ARCH_PTEP_CLEAR_FLUSH
> >>> extern pte_t ptep_clear_flush(struct vm_area_struct *vma,
> >>> unsigned long address,
> >>> diff --git a/include/linux/swap.h b/include/linux/swap.h
> >>> index f6f78198f000..5737236dc3ce 100644
> >>> --- a/include/linux/swap.h
> >>> +++ b/include/linux/swap.h
> >>> @@ -471,7 +471,7 @@ extern int swap_duplicate(swp_entry_t);
> >>> extern int swapcache_prepare(swp_entry_t);
> >>> extern void swap_free(swp_entry_t);
> >>> extern void swapcache_free_entries(swp_entry_t *entries, int n);
> >>> -extern int free_swap_and_cache(swp_entry_t);
> >>> +extern void free_swap_and_cache_nr(swp_entry_t entry, int nr);
> >>> int swap_type_of(dev_t device, sector_t offset);
> >>> int find_first_swap(dev_t *device);
> >>> extern unsigned int count_swap_pages(int, int);
> >>> @@ -520,8 +520,9 @@ static inline void put_swap_device(struct swap_info_struct *si)
> >>> #define free_pages_and_swap_cache(pages, nr) \
> >>> release_pages((pages), (nr));
> >>>
> >>> -/* used to sanity check ptes in zap_pte_range when CONFIG_SWAP=0 */
> >>> -#define free_swap_and_cache(e) is_pfn_swap_entry(e)
> >>> +static inline void free_swap_and_cache_nr(swp_entry_t entry, int nr)
> >>> +{
> >>> +}
> >>>
> >>> static inline void free_swap_cache(struct folio *folio)
> >>> {
> >>> @@ -589,6 +590,11 @@ static inline int add_swap_extent(struct swap_info_struct *sis,
> >>> }
> >>> #endif /* CONFIG_SWAP */
> >>>
> >>> +static inline void free_swap_and_cache(swp_entry_t entry)
> >>> +{
> >>> + free_swap_and_cache_nr(entry, 1);
> >>> +}
> >>> +
> >>> #ifdef CONFIG_MEMCG
> >>> static inline int mem_cgroup_swappiness(struct mem_cgroup *memcg)
> >>> {
> >>> diff --git a/mm/internal.h b/mm/internal.h
> >>> index 3bdc8693b54f..de68705624b0 100644
> >>> --- a/mm/internal.h
> >>> +++ b/mm/internal.h
> >>> @@ -11,6 +11,8 @@
> >>> #include <linux/mm.h>
> >>> #include <linux/pagemap.h>
> >>> #include <linux/rmap.h>
> >>> +#include <linux/swap.h>
> >>> +#include <linux/swapops.h>
> >>> #include <linux/tracepoint-defs.h>
> >>>
> >>> struct folio_batch;
> >>> @@ -189,6 +191,67 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> >>>
> >>> return min(ptep - start_ptep, max_nr);
> >>> }
> >>> +
> >>> +/**
> >>> + * pte_next_swp_offset - Increment the swap entry offset field of a swap pte.
> >>> + * @pte: The initial pte state; is_swap_pte(pte) must be true.
> >>> + *
> >>> + * Increments the swap offset, while maintaining all other fields, including
> >>> + * swap type, and any swp pte bits. The resulting pte is returned.
> >>> + */
> >>> +static inline pte_t pte_next_swp_offset(pte_t pte)
> >>> +{
> >>> + swp_entry_t entry = pte_to_swp_entry(pte);
> >>> + pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
> >>> + swp_offset(entry) + 1));
> >>> +
> >>> + if (pte_swp_soft_dirty(pte))
> >>> + new = pte_swp_mksoft_dirty(new);
> >>> + if (pte_swp_exclusive(pte))
> >>> + new = pte_swp_mkexclusive(new);
> >>> + if (pte_swp_uffd_wp(pte))
> >>> + new = pte_swp_mkuffd_wp(new);
> >>
> >> I don't quite understand this. If this page table entry is exclusive,
> >> will its subsequent page table entry also be exclusive without
> >> question?
> >> in try_to_unmap_one, exclusive is per-subpage but not per-folio:
> >>
> >> anon_exclusive = folio_test_anon(folio) &&
> >> PageAnonExclusive(subpage);
> >>
> >> same questions also for diry, wp etc.
> >
> > Sorry for the noise. you are right. based on your new version, I think I should
> > entirely drop:
> >
> > [PATCH v2 3/5] mm: swap_pte_batch: add an output argument to reture if
> > all swap entries are exclusive
>
> Yes. If we ever want to ignore some bits, we should likely add flags to
> change the behavior, like for folio_pte_batch().
>
> For swapin, you really want the exclusive bits to match, though.

I am not quite sure I definitely need exclusive bits to match. I can either
drop my 3/5 or ignore the exclusive bit as below (if any entry is shared,
swpin won't reuse the large folio, but it can still map it entirely read-only):

diff --git a/mm/internal.h b/mm/internal.h
index cae39c372bfc..5726e729c9ee 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -253,10 +253,22 @@ static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte,
 *any_shared |= !pte_swp_exclusive(pte);

 while (ptep < end_ptep) {
+ pte_t ignore_exclusive_pte;
+ pte_t ignore_exclusive_expected_pte;
 pte = ptep_get(ptep);

- if (!pte_same(pte, expected_pte))
- break;
+ if (any_shared) {
+ ignore_exclusive_pte = pte;
+ ignore_exclusive_expected_pte = expected_pte;
+ ignore_exclusive_pte = pte_swp_clear_exclusive(ignore_exclusive_pte);
+ ignore_exclusive_expected_pte = pte_swp_clear_exclusive(expected_pte);
+
+ if (!pte_same(ignore_exclusive_pte, ignore_exclusive_expected_pte))
+ break;
+ } else {
+ if (!pte_same(pte, expected_pte))
+ break;
+ }

if (any_shared)
*any_shared |= !pte_swp_exclusive(pte);

> softdirty and uffd-wp as well at least initially for simplicity.

yes for this.

By the way, I wonder if you and Ryan have a moment to review swpin
refault patchset
v2 :-)

[PATCH v2 0/5] large folios swap-in: handle refault cases first
https://lore.kernel.org/linux-mm/[email protected]/


>
> --
> Cheers,
>
> David / dhildenb
>

Thanks
Barry

2024-04-09 09:55:34

by Ryan Roberts

[permalink] [raw]
Subject: Re: [PATCH v7 2/7] mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()

On 09/04/2024 10:41, Barry Song wrote:
> On Tue, Apr 9, 2024 at 9:24 PM David Hildenbrand <[email protected]> wrote:
>>
>> On 09.04.24 11:22, Barry Song wrote:
>>> On Tue, Apr 9, 2024 at 8:51 PM Barry Song <[email protected]> wrote:
>>>>
>>>> On Tue, Apr 9, 2024 at 6:40 AM Ryan Roberts <[email protected]> wrote:
>>>>>
>>>>> Now that we no longer have a convenient flag in the cluster to determine
>>>>> if a folio is large, free_swap_and_cache() will take a reference and
>>>>> lock a large folio much more often, which could lead to contention and
>>>>> (e.g.) failure to split large folios, etc.
>>>>>
>>>>> Let's solve that problem by batch freeing swap and cache with a new
>>>>> function, free_swap_and_cache_nr(), to free a contiguous range of swap
>>>>> entries together. This allows us to first drop a reference to each swap
>>>>> slot before we try to release the cache folio. This means we only try to
>>>>> release the folio once, only taking the reference and lock once - much
>>>>> better than the previous 512 times for the 2M THP case.
>>>>>
>>>>> Contiguous swap entries are gathered in zap_pte_range() and
>>>>> madvise_free_pte_range() in a similar way to how present ptes are
>>>>> already gathered in zap_pte_range().
>>>>>
>>>>> While we are at it, let's simplify by converting the return type of both
>>>>> functions to void. The return value was used only by zap_pte_range() to
>>>>> print a bad pte, and was ignored by everyone else, so the extra
>>>>> reporting wasn't exactly guaranteed. We will still get the warning with
>>>>> most of the information from get_swap_device(). With the batch version,
>>>>> we wouldn't know which pte was bad anyway so could print the wrong one.
>>>>>
>>>>> Signed-off-by: Ryan Roberts <[email protected]>
>>>>> ---
>>>>> include/linux/pgtable.h | 29 ++++++++++++
>>>>> include/linux/swap.h | 12 +++--
>>>>> mm/internal.h | 63 ++++++++++++++++++++++++++
>>>>> mm/madvise.c | 12 +++--
>>>>> mm/memory.c | 13 +++---
>>>>> mm/swapfile.c | 97 +++++++++++++++++++++++++++++++++--------
>>>>> 6 files changed, 195 insertions(+), 31 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>>> index a3fc8150b047..75096025fe52 100644
>>>>> --- a/include/linux/pgtable.h
>>>>> +++ b/include/linux/pgtable.h
>>>>> @@ -708,6 +708,35 @@ static inline void pte_clear_not_present_full(struct mm_struct *mm,
>>>>> }
>>>>> #endif
>>>>>
>>>>> +#ifndef clear_not_present_full_ptes
>>>>> +/**
>>>>> + * clear_not_present_full_ptes - Clear multiple not present PTEs which are
>>>>> + * consecutive in the pgtable.
>>>>> + * @mm: Address space the ptes represent.
>>>>> + * @addr: Address of the first pte.
>>>>> + * @ptep: Page table pointer for the first entry.
>>>>> + * @nr: Number of entries to clear.
>>>>> + * @full: Whether we are clearing a full mm.
>>>>> + *
>>>>> + * May be overridden by the architecture; otherwise, implemented as a simple
>>>>> + * loop over pte_clear_not_present_full().
>>>>> + *
>>>>> + * Context: The caller holds the page table lock. The PTEs are all not present.
>>>>> + * The PTEs are all in the same PMD.
>>>>> + */
>>>>> +static inline void clear_not_present_full_ptes(struct mm_struct *mm,
>>>>> + unsigned long addr, pte_t *ptep, unsigned int nr, int full)
>>>>> +{
>>>>> + for (;;) {
>>>>> + pte_clear_not_present_full(mm, addr, ptep, full);
>>>>> + if (--nr == 0)
>>>>> + break;
>>>>> + ptep++;
>>>>> + addr += PAGE_SIZE;
>>>>> + }
>>>>> +}
>>>>> +#endif
>>>>> +
>>>>> #ifndef __HAVE_ARCH_PTEP_CLEAR_FLUSH
>>>>> extern pte_t ptep_clear_flush(struct vm_area_struct *vma,
>>>>> unsigned long address,
>>>>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>>>>> index f6f78198f000..5737236dc3ce 100644
>>>>> --- a/include/linux/swap.h
>>>>> +++ b/include/linux/swap.h
>>>>> @@ -471,7 +471,7 @@ extern int swap_duplicate(swp_entry_t);
>>>>> extern int swapcache_prepare(swp_entry_t);
>>>>> extern void swap_free(swp_entry_t);
>>>>> extern void swapcache_free_entries(swp_entry_t *entries, int n);
>>>>> -extern int free_swap_and_cache(swp_entry_t);
>>>>> +extern void free_swap_and_cache_nr(swp_entry_t entry, int nr);
>>>>> int swap_type_of(dev_t device, sector_t offset);
>>>>> int find_first_swap(dev_t *device);
>>>>> extern unsigned int count_swap_pages(int, int);
>>>>> @@ -520,8 +520,9 @@ static inline void put_swap_device(struct swap_info_struct *si)
>>>>> #define free_pages_and_swap_cache(pages, nr) \
>>>>> release_pages((pages), (nr));
>>>>>
>>>>> -/* used to sanity check ptes in zap_pte_range when CONFIG_SWAP=0 */
>>>>> -#define free_swap_and_cache(e) is_pfn_swap_entry(e)
>>>>> +static inline void free_swap_and_cache_nr(swp_entry_t entry, int nr)
>>>>> +{
>>>>> +}
>>>>>
>>>>> static inline void free_swap_cache(struct folio *folio)
>>>>> {
>>>>> @@ -589,6 +590,11 @@ static inline int add_swap_extent(struct swap_info_struct *sis,
>>>>> }
>>>>> #endif /* CONFIG_SWAP */
>>>>>
>>>>> +static inline void free_swap_and_cache(swp_entry_t entry)
>>>>> +{
>>>>> + free_swap_and_cache_nr(entry, 1);
>>>>> +}
>>>>> +
>>>>> #ifdef CONFIG_MEMCG
>>>>> static inline int mem_cgroup_swappiness(struct mem_cgroup *memcg)
>>>>> {
>>>>> diff --git a/mm/internal.h b/mm/internal.h
>>>>> index 3bdc8693b54f..de68705624b0 100644
>>>>> --- a/mm/internal.h
>>>>> +++ b/mm/internal.h
>>>>> @@ -11,6 +11,8 @@
>>>>> #include <linux/mm.h>
>>>>> #include <linux/pagemap.h>
>>>>> #include <linux/rmap.h>
>>>>> +#include <linux/swap.h>
>>>>> +#include <linux/swapops.h>
>>>>> #include <linux/tracepoint-defs.h>
>>>>>
>>>>> struct folio_batch;
>>>>> @@ -189,6 +191,67 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>>>>>
>>>>> return min(ptep - start_ptep, max_nr);
>>>>> }
>>>>> +
>>>>> +/**
>>>>> + * pte_next_swp_offset - Increment the swap entry offset field of a swap pte.
>>>>> + * @pte: The initial pte state; is_swap_pte(pte) must be true.
>>>>> + *
>>>>> + * Increments the swap offset, while maintaining all other fields, including
>>>>> + * swap type, and any swp pte bits. The resulting pte is returned.
>>>>> + */
>>>>> +static inline pte_t pte_next_swp_offset(pte_t pte)
>>>>> +{
>>>>> + swp_entry_t entry = pte_to_swp_entry(pte);
>>>>> + pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
>>>>> + swp_offset(entry) + 1));
>>>>> +
>>>>> + if (pte_swp_soft_dirty(pte))
>>>>> + new = pte_swp_mksoft_dirty(new);
>>>>> + if (pte_swp_exclusive(pte))
>>>>> + new = pte_swp_mkexclusive(new);
>>>>> + if (pte_swp_uffd_wp(pte))
>>>>> + new = pte_swp_mkuffd_wp(new);
>>>>
>>>> I don't quite understand this. If this page table entry is exclusive,
>>>> will its subsequent page table entry also be exclusive without
>>>> question?
>>>> in try_to_unmap_one, exclusive is per-subpage but not per-folio:
>>>>
>>>> anon_exclusive = folio_test_anon(folio) &&
>>>> PageAnonExclusive(subpage);
>>>>
>>>> same questions also for diry, wp etc.
>>>
>>> Sorry for the noise. you are right. based on your new version, I think I should
>>> entirely drop:
>>>
>>> [PATCH v2 3/5] mm: swap_pte_batch: add an output argument to reture if
>>> all swap entries are exclusive
>>
>> Yes. If we ever want to ignore some bits, we should likely add flags to
>> change the behavior, like for folio_pte_batch().
>>
>> For swapin, you really want the exclusive bits to match, though.
>
> I am not quite sure I definitely need exclusive bits to match. i can either
> drop my 3/5 or ignore the exclusive bit as below (if anyone is not shared,
> swpin won't reuse the large folio, but it can still entirely map it read-only):
>
> diff --git a/mm/internal.h b/mm/internal.h
> index cae39c372bfc..5726e729c9ee 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -253,10 +253,22 @@ static inline int swap_pte_batch(pte_t
> *start_ptep, int max_nr, pte_t pte,
> *any_shared |= !pte_swp_exclusive(pte);
>
> while (ptep < end_ptep) {
> + pte_t ignore_exclusive_pte;
> + pte_t ignore_exclusive_expected_pte;
> pte = ptep_get(ptep);
>
> - if (!pte_same(pte, expected_pte))
> - break;
> + if (any_shared) {
> + ignore_exclusive_pte = pte;
> + ignore_exclusive_expected_pte = expected_pte;
> + ignore_exclusive_pte =
> pte_swp_clear_exclusive(ignore_exclusive_pte);
> + ignore_exclusive_expected_pte =
> pte_swp_clear_exclusive(expected_pte);
> +
> + if (!pte_same(ignore_exclusive_pte,
> ignore_exclusive_expected_pte))
> + break;
> + } else {
> + if (!pte_same(pte, expected_pte))
> + break;
> + }
>
> if (any_shared)
> *any_shared |= !pte_swp_exclusive(pte);

I'll leave David to comment on this proposal; I'm not sure I understand all the
details. The code change does look a bit "busy" though - sometimes that can be
an indicator :)
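
For reference, the check in question comes from the batch walk this series
adds. A paraphrased sketch (not a verbatim copy of the patch) looks like
this; because expected_pte is derived with pte_next_swp_offset(), every swp
pte bit, including the exclusive bit, currently has to match for the batch
to continue:

static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte)
{
	pte_t expected_pte = pte_next_swp_offset(pte);
	const pte_t *end_ptep = start_ptep + max_nr;
	pte_t *ptep = start_ptep + 1;

	while (ptep < end_ptep) {
		pte = ptep_get(ptep);

		if (!pte_same(pte, expected_pte))
			break;

		expected_pte = pte_next_swp_offset(expected_pte);
		ptep++;
	}

	return ptep - start_ptep;
}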

>
>> softdirty and uffd-wp as well at least initially for simplicity.
>
> yes for this.
>
> By the way, I wonder if you and Ryan have a moment to review swpin
> refault patchset
> v2 :-)

It's on my todo list! I'm very keen to get as much large swap-out and swap-in
support into v6.10 as we can. Hoping to get to it in the next couple of days.

>
> [PATCH v2 0/5] large folios swap-in: handle refault cases first
> https://lore.kernel.org/linux-mm/[email protected]/
>
>
>>
>> --
>> Cheers,
>>
>> David / dhildenb
>>
>
> Thanks
> Barry


2024-04-09 10:35:34

by Barry Song

[permalink] [raw]
Subject: Re: [PATCH v7 2/7] mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()

On Tue, Apr 9, 2024 at 9:55 PM Ryan Roberts <[email protected]> wrote:
>
> On 09/04/2024 10:41, Barry Song wrote:
> > On Tue, Apr 9, 2024 at 9:24 PM David Hildenbrand <[email protected]> wrote:
> >>
> >> On 09.04.24 11:22, Barry Song wrote:
> >>> On Tue, Apr 9, 2024 at 8:51 PM Barry Song <[email protected]> wrote:
> >>>>
> >>>> On Tue, Apr 9, 2024 at 6:40 AM Ryan Roberts <[email protected]> wrote:
> >>>>>
> >>>>> Now that we no longer have a convenient flag in the cluster to determine
> >>>>> if a folio is large, free_swap_and_cache() will take a reference and
> >>>>> lock a large folio much more often, which could lead to contention and
> >>>>> (e.g.) failure to split large folios, etc.
> >>>>>
> >>>>> Let's solve that problem by batch freeing swap and cache with a new
> >>>>> function, free_swap_and_cache_nr(), to free a contiguous range of swap
> >>>>> entries together. This allows us to first drop a reference to each swap
> >>>>> slot before we try to release the cache folio. This means we only try to
> >>>>> release the folio once, only taking the reference and lock once - much
> >>>>> better than the previous 512 times for the 2M THP case.
> >>>>>
> >>>>> Contiguous swap entries are gathered in zap_pte_range() and
> >>>>> madvise_free_pte_range() in a similar way to how present ptes are
> >>>>> already gathered in zap_pte_range().
> >>>>>
> >>>>> While we are at it, let's simplify by converting the return type of both
> >>>>> functions to void. The return value was used only by zap_pte_range() to
> >>>>> print a bad pte, and was ignored by everyone else, so the extra
> >>>>> reporting wasn't exactly guaranteed. We will still get the warning with
> >>>>> most of the information from get_swap_device(). With the batch version,
> >>>>> we wouldn't know which pte was bad anyway so could print the wrong one.
> >>>>>
> >>>>> Signed-off-by: Ryan Roberts <[email protected]>
> >>>>> ---
> >>>>> include/linux/pgtable.h | 29 ++++++++++++
> >>>>> include/linux/swap.h | 12 +++--
> >>>>> mm/internal.h | 63 ++++++++++++++++++++++++++
> >>>>> mm/madvise.c | 12 +++--
> >>>>> mm/memory.c | 13 +++---
> >>>>> mm/swapfile.c | 97 +++++++++++++++++++++++++++++++++--------
> >>>>> 6 files changed, 195 insertions(+), 31 deletions(-)
> >>>>>
> >>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> >>>>> index a3fc8150b047..75096025fe52 100644
> >>>>> --- a/include/linux/pgtable.h
> >>>>> +++ b/include/linux/pgtable.h
> >>>>> @@ -708,6 +708,35 @@ static inline void pte_clear_not_present_full(struct mm_struct *mm,
> >>>>> }
> >>>>> #endif
> >>>>>
> >>>>> +#ifndef clear_not_present_full_ptes
> >>>>> +/**
> >>>>> + * clear_not_present_full_ptes - Clear multiple not present PTEs which are
> >>>>> + * consecutive in the pgtable.
> >>>>> + * @mm: Address space the ptes represent.
> >>>>> + * @addr: Address of the first pte.
> >>>>> + * @ptep: Page table pointer for the first entry.
> >>>>> + * @nr: Number of entries to clear.
> >>>>> + * @full: Whether we are clearing a full mm.
> >>>>> + *
> >>>>> + * May be overridden by the architecture; otherwise, implemented as a simple
> >>>>> + * loop over pte_clear_not_present_full().
> >>>>> + *
> >>>>> + * Context: The caller holds the page table lock. The PTEs are all not present.
> >>>>> + * The PTEs are all in the same PMD.
> >>>>> + */
> >>>>> +static inline void clear_not_present_full_ptes(struct mm_struct *mm,
> >>>>> + unsigned long addr, pte_t *ptep, unsigned int nr, int full)
> >>>>> +{
> >>>>> + for (;;) {
> >>>>> + pte_clear_not_present_full(mm, addr, ptep, full);
> >>>>> + if (--nr == 0)
> >>>>> + break;
> >>>>> + ptep++;
> >>>>> + addr += PAGE_SIZE;
> >>>>> + }
> >>>>> +}
> >>>>> +#endif
> >>>>> +
> >>>>> #ifndef __HAVE_ARCH_PTEP_CLEAR_FLUSH
> >>>>> extern pte_t ptep_clear_flush(struct vm_area_struct *vma,
> >>>>> unsigned long address,
> >>>>> diff --git a/include/linux/swap.h b/include/linux/swap.h
> >>>>> index f6f78198f000..5737236dc3ce 100644
> >>>>> --- a/include/linux/swap.h
> >>>>> +++ b/include/linux/swap.h
> >>>>> @@ -471,7 +471,7 @@ extern int swap_duplicate(swp_entry_t);
> >>>>> extern int swapcache_prepare(swp_entry_t);
> >>>>> extern void swap_free(swp_entry_t);
> >>>>> extern void swapcache_free_entries(swp_entry_t *entries, int n);
> >>>>> -extern int free_swap_and_cache(swp_entry_t);
> >>>>> +extern void free_swap_and_cache_nr(swp_entry_t entry, int nr);
> >>>>> int swap_type_of(dev_t device, sector_t offset);
> >>>>> int find_first_swap(dev_t *device);
> >>>>> extern unsigned int count_swap_pages(int, int);
> >>>>> @@ -520,8 +520,9 @@ static inline void put_swap_device(struct swap_info_struct *si)
> >>>>> #define free_pages_and_swap_cache(pages, nr) \
> >>>>> release_pages((pages), (nr));
> >>>>>
> >>>>> -/* used to sanity check ptes in zap_pte_range when CONFIG_SWAP=0 */
> >>>>> -#define free_swap_and_cache(e) is_pfn_swap_entry(e)
> >>>>> +static inline void free_swap_and_cache_nr(swp_entry_t entry, int nr)
> >>>>> +{
> >>>>> +}
> >>>>>
> >>>>> static inline void free_swap_cache(struct folio *folio)
> >>>>> {
> >>>>> @@ -589,6 +590,11 @@ static inline int add_swap_extent(struct swap_info_struct *sis,
> >>>>> }
> >>>>> #endif /* CONFIG_SWAP */
> >>>>>
> >>>>> +static inline void free_swap_and_cache(swp_entry_t entry)
> >>>>> +{
> >>>>> + free_swap_and_cache_nr(entry, 1);
> >>>>> +}
> >>>>> +
> >>>>> #ifdef CONFIG_MEMCG
> >>>>> static inline int mem_cgroup_swappiness(struct mem_cgroup *memcg)
> >>>>> {
> >>>>> diff --git a/mm/internal.h b/mm/internal.h
> >>>>> index 3bdc8693b54f..de68705624b0 100644
> >>>>> --- a/mm/internal.h
> >>>>> +++ b/mm/internal.h
> >>>>> @@ -11,6 +11,8 @@
> >>>>> #include <linux/mm.h>
> >>>>> #include <linux/pagemap.h>
> >>>>> #include <linux/rmap.h>
> >>>>> +#include <linux/swap.h>
> >>>>> +#include <linux/swapops.h>
> >>>>> #include <linux/tracepoint-defs.h>
> >>>>>
> >>>>> struct folio_batch;
> >>>>> @@ -189,6 +191,67 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> >>>>>
> >>>>> return min(ptep - start_ptep, max_nr);
> >>>>> }
> >>>>> +
> >>>>> +/**
> >>>>> + * pte_next_swp_offset - Increment the swap entry offset field of a swap pte.
> >>>>> + * @pte: The initial pte state; is_swap_pte(pte) must be true.
> >>>>> + *
> >>>>> + * Increments the swap offset, while maintaining all other fields, including
> >>>>> + * swap type, and any swp pte bits. The resulting pte is returned.
> >>>>> + */
> >>>>> +static inline pte_t pte_next_swp_offset(pte_t pte)
> >>>>> +{
> >>>>> + swp_entry_t entry = pte_to_swp_entry(pte);
> >>>>> + pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
> >>>>> + swp_offset(entry) + 1));
> >>>>> +
> >>>>> + if (pte_swp_soft_dirty(pte))
> >>>>> + new = pte_swp_mksoft_dirty(new);
> >>>>> + if (pte_swp_exclusive(pte))
> >>>>> + new = pte_swp_mkexclusive(new);
> >>>>> + if (pte_swp_uffd_wp(pte))
> >>>>> + new = pte_swp_mkuffd_wp(new);
> >>>>
> >>>> I don't quite understand this. If this page table entry is exclusive,
> >>>> will its subsequent page table entry also be exclusive without
> >>>> question?
> >>>> in try_to_unmap_one, exclusive is per-subpage but not per-folio:
> >>>>
> >>>> anon_exclusive = folio_test_anon(folio) &&
> >>>> PageAnonExclusive(subpage);
> >>>>
> >>>> same questions also for diry, wp etc.
> >>>
> >>> Sorry for the noise. you are right. based on your new version, I think I should
> >>> entirely drop:
> >>>
> >>> [PATCH v2 3/5] mm: swap_pte_batch: add an output argument to reture if
> >>> all swap entries are exclusive
> >>
> >> Yes. If we ever want to ignore some bits, we should likely add flags to
> >> change the behavior, like for folio_pte_batch().
> >>
> >> For swapin, you really want the exclusive bits to match, though.
> >
> > I am not quite sure I definitely need exclusive bits to match. i can either
> > drop my 3/5 or ignore the exclusive bit as below (if anyone is not shared,
> > swpin won't reuse the large folio, but it can still entirely map it read-only):
> >
> > diff --git a/mm/internal.h b/mm/internal.h
> > index cae39c372bfc..5726e729c9ee 100644
> > --- a/mm/internal.h
> > +++ b/mm/internal.h
> > @@ -253,10 +253,22 @@ static inline int swap_pte_batch(pte_t
> > *start_ptep, int max_nr, pte_t pte,
> > *any_shared |= !pte_swp_exclusive(pte);
> >
> > while (ptep < end_ptep) {
> > + pte_t ignore_exclusive_pte;
> > + pte_t ignore_exclusive_expected_pte;
> > pte = ptep_get(ptep);
> >
> > - if (!pte_same(pte, expected_pte))
> > - break;
> > + if (any_shared) {
> > + ignore_exclusive_pte = pte;
> > + ignore_exclusive_expected_pte = expected_pte;
> > + ignore_exclusive_pte =
> > pte_swp_clear_exclusive(ignore_exclusive_pte);
> > + ignore_exclusive_expected_pte =
> > pte_swp_clear_exclusive(expected_pte);
> > +
> > + if (!pte_same(ignore_exclusive_pte,
> > ignore_exclusive_expected_pte))
> > + break;
> > + } else {
> > + if (!pte_same(pte, expected_pte))
> > + break;
> > + }
> >
> > if (any_shared)
> > *any_shared |= !pte_swp_exclusive(pte);
>
> I'll leave David to comment on this proposal; I'm not sure I understand all the
> details. The code change does look a bit "busy" though - sometimes that can be
> an indicator :)

Indeed. I wrote it in one minute.

I'm confident that the code can be written in a manner similar to
__pte_batch_clear_ignored. I was only proposing the approach,
not selling the code :-)
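
Something along these lines, for example (hypothetical helper name,
illustrative only; any_shared being non-NULL means the caller tolerates a
mix of exclusive and shared entries):

/*
 * Hypothetical helper in the spirit of __pte_batch_clear_ignored():
 * normalise away the bit we have chosen to ignore before comparing.
 */
static inline pte_t __swap_pte_batch_clear_ignored(pte_t pte,
						   bool ignore_exclusive)
{
	if (ignore_exclusive)
		pte = pte_swp_clear_exclusive(pte);
	return pte;
}

so that the loop body collapses back to a single check:

		if (!pte_same(__swap_pte_batch_clear_ignored(pte, any_shared),
			      __swap_pte_batch_clear_ignored(expected_pte,
							     any_shared)))
			break;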

>
> >
> >> softdirty and uffd-wp as well at least initially for simplicity.
> >
> > yes for this.
> >
> > By the way, I wonder if you and Ryan have a moment to review swpin
> > refault patchset
> > v2 :-)
>
> It's on my todo list! I'm very keen to get as much large swap-out and swap-in
> support into v6.10 as we can. Hoping to get to it inthe next couple of days.
>
> >
> > [PATCH v2 0/5] large folios swap-in: handle refault cases first
> > https://lore.kernel.org/linux-mm/[email protected]/
> >
> >
> >>
> >> --
> >> Cheers,
> >>
> >> David / dhildenb
> >>
> >

Thanks
Barry
>

2024-04-09 10:42:53

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v7 2/7] mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()

On 09.04.24 11:55, Ryan Roberts wrote:
> On 09/04/2024 10:41, Barry Song wrote:
>> On Tue, Apr 9, 2024 at 9:24 PM David Hildenbrand <[email protected]> wrote:
>>>
>>> On 09.04.24 11:22, Barry Song wrote:
>>>> On Tue, Apr 9, 2024 at 8:51 PM Barry Song <[email protected]> wrote:
>>>>>
>>>>> On Tue, Apr 9, 2024 at 6:40 AM Ryan Roberts <[email protected]> wrote:
>>>>>>
>>>>>> Now that we no longer have a convenient flag in the cluster to determine
>>>>>> if a folio is large, free_swap_and_cache() will take a reference and
>>>>>> lock a large folio much more often, which could lead to contention and
>>>>>> (e.g.) failure to split large folios, etc.
>>>>>>
>>>>>> Let's solve that problem by batch freeing swap and cache with a new
>>>>>> function, free_swap_and_cache_nr(), to free a contiguous range of swap
>>>>>> entries together. This allows us to first drop a reference to each swap
>>>>>> slot before we try to release the cache folio. This means we only try to
>>>>>> release the folio once, only taking the reference and lock once - much
>>>>>> better than the previous 512 times for the 2M THP case.
>>>>>>
>>>>>> Contiguous swap entries are gathered in zap_pte_range() and
>>>>>> madvise_free_pte_range() in a similar way to how present ptes are
>>>>>> already gathered in zap_pte_range().
>>>>>>
>>>>>> While we are at it, let's simplify by converting the return type of both
>>>>>> functions to void. The return value was used only by zap_pte_range() to
>>>>>> print a bad pte, and was ignored by everyone else, so the extra
>>>>>> reporting wasn't exactly guaranteed. We will still get the warning with
>>>>>> most of the information from get_swap_device(). With the batch version,
>>>>>> we wouldn't know which pte was bad anyway so could print the wrong one.
>>>>>>
>>>>>> Signed-off-by: Ryan Roberts <[email protected]>
>>>>>> ---
>>>>>> include/linux/pgtable.h | 29 ++++++++++++
>>>>>> include/linux/swap.h | 12 +++--
>>>>>> mm/internal.h | 63 ++++++++++++++++++++++++++
>>>>>> mm/madvise.c | 12 +++--
>>>>>> mm/memory.c | 13 +++---
>>>>>> mm/swapfile.c | 97 +++++++++++++++++++++++++++++++++--------
>>>>>> 6 files changed, 195 insertions(+), 31 deletions(-)
>>>>>>
>>>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>>>> index a3fc8150b047..75096025fe52 100644
>>>>>> --- a/include/linux/pgtable.h
>>>>>> +++ b/include/linux/pgtable.h
>>>>>> @@ -708,6 +708,35 @@ static inline void pte_clear_not_present_full(struct mm_struct *mm,
>>>>>> }
>>>>>> #endif
>>>>>>
>>>>>> +#ifndef clear_not_present_full_ptes
>>>>>> +/**
>>>>>> + * clear_not_present_full_ptes - Clear multiple not present PTEs which are
>>>>>> + * consecutive in the pgtable.
>>>>>> + * @mm: Address space the ptes represent.
>>>>>> + * @addr: Address of the first pte.
>>>>>> + * @ptep: Page table pointer for the first entry.
>>>>>> + * @nr: Number of entries to clear.
>>>>>> + * @full: Whether we are clearing a full mm.
>>>>>> + *
>>>>>> + * May be overridden by the architecture; otherwise, implemented as a simple
>>>>>> + * loop over pte_clear_not_present_full().
>>>>>> + *
>>>>>> + * Context: The caller holds the page table lock. The PTEs are all not present.
>>>>>> + * The PTEs are all in the same PMD.
>>>>>> + */
>>>>>> +static inline void clear_not_present_full_ptes(struct mm_struct *mm,
>>>>>> + unsigned long addr, pte_t *ptep, unsigned int nr, int full)
>>>>>> +{
>>>>>> + for (;;) {
>>>>>> + pte_clear_not_present_full(mm, addr, ptep, full);
>>>>>> + if (--nr == 0)
>>>>>> + break;
>>>>>> + ptep++;
>>>>>> + addr += PAGE_SIZE;
>>>>>> + }
>>>>>> +}
>>>>>> +#endif
>>>>>> +
>>>>>> #ifndef __HAVE_ARCH_PTEP_CLEAR_FLUSH
>>>>>> extern pte_t ptep_clear_flush(struct vm_area_struct *vma,
>>>>>> unsigned long address,
>>>>>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>>>>>> index f6f78198f000..5737236dc3ce 100644
>>>>>> --- a/include/linux/swap.h
>>>>>> +++ b/include/linux/swap.h
>>>>>> @@ -471,7 +471,7 @@ extern int swap_duplicate(swp_entry_t);
>>>>>> extern int swapcache_prepare(swp_entry_t);
>>>>>> extern void swap_free(swp_entry_t);
>>>>>> extern void swapcache_free_entries(swp_entry_t *entries, int n);
>>>>>> -extern int free_swap_and_cache(swp_entry_t);
>>>>>> +extern void free_swap_and_cache_nr(swp_entry_t entry, int nr);
>>>>>> int swap_type_of(dev_t device, sector_t offset);
>>>>>> int find_first_swap(dev_t *device);
>>>>>> extern unsigned int count_swap_pages(int, int);
>>>>>> @@ -520,8 +520,9 @@ static inline void put_swap_device(struct swap_info_struct *si)
>>>>>> #define free_pages_and_swap_cache(pages, nr) \
>>>>>> release_pages((pages), (nr));
>>>>>>
>>>>>> -/* used to sanity check ptes in zap_pte_range when CONFIG_SWAP=0 */
>>>>>> -#define free_swap_and_cache(e) is_pfn_swap_entry(e)
>>>>>> +static inline void free_swap_and_cache_nr(swp_entry_t entry, int nr)
>>>>>> +{
>>>>>> +}
>>>>>>
>>>>>> static inline void free_swap_cache(struct folio *folio)
>>>>>> {
>>>>>> @@ -589,6 +590,11 @@ static inline int add_swap_extent(struct swap_info_struct *sis,
>>>>>> }
>>>>>> #endif /* CONFIG_SWAP */
>>>>>>
>>>>>> +static inline void free_swap_and_cache(swp_entry_t entry)
>>>>>> +{
>>>>>> + free_swap_and_cache_nr(entry, 1);
>>>>>> +}
>>>>>> +
>>>>>> #ifdef CONFIG_MEMCG
>>>>>> static inline int mem_cgroup_swappiness(struct mem_cgroup *memcg)
>>>>>> {
>>>>>> diff --git a/mm/internal.h b/mm/internal.h
>>>>>> index 3bdc8693b54f..de68705624b0 100644
>>>>>> --- a/mm/internal.h
>>>>>> +++ b/mm/internal.h
>>>>>> @@ -11,6 +11,8 @@
>>>>>> #include <linux/mm.h>
>>>>>> #include <linux/pagemap.h>
>>>>>> #include <linux/rmap.h>
>>>>>> +#include <linux/swap.h>
>>>>>> +#include <linux/swapops.h>
>>>>>> #include <linux/tracepoint-defs.h>
>>>>>>
>>>>>> struct folio_batch;
>>>>>> @@ -189,6 +191,67 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>>>>>>
>>>>>> return min(ptep - start_ptep, max_nr);
>>>>>> }
>>>>>> +
>>>>>> +/**
>>>>>> + * pte_next_swp_offset - Increment the swap entry offset field of a swap pte.
>>>>>> + * @pte: The initial pte state; is_swap_pte(pte) must be true.
>>>>>> + *
>>>>>> + * Increments the swap offset, while maintaining all other fields, including
>>>>>> + * swap type, and any swp pte bits. The resulting pte is returned.
>>>>>> + */
>>>>>> +static inline pte_t pte_next_swp_offset(pte_t pte)
>>>>>> +{
>>>>>> + swp_entry_t entry = pte_to_swp_entry(pte);
>>>>>> + pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
>>>>>> + swp_offset(entry) + 1));
>>>>>> +
>>>>>> + if (pte_swp_soft_dirty(pte))
>>>>>> + new = pte_swp_mksoft_dirty(new);
>>>>>> + if (pte_swp_exclusive(pte))
>>>>>> + new = pte_swp_mkexclusive(new);
>>>>>> + if (pte_swp_uffd_wp(pte))
>>>>>> + new = pte_swp_mkuffd_wp(new);
>>>>>
>>>>> I don't quite understand this. If this page table entry is exclusive,
>>>>> will its subsequent page table entry also be exclusive without
>>>>> question?
>>>>> in try_to_unmap_one, exclusive is per-subpage but not per-folio:
>>>>>
>>>>> anon_exclusive = folio_test_anon(folio) &&
>>>>> PageAnonExclusive(subpage);
>>>>>
>>>>> same questions also for diry, wp etc.
>>>>
>>>> Sorry for the noise. you are right. based on your new version, I think I should
>>>> entirely drop:
>>>>
>>>> [PATCH v2 3/5] mm: swap_pte_batch: add an output argument to reture if
>>>> all swap entries are exclusive
>>>
>>> Yes. If we ever want to ignore some bits, we should likely add flags to
>>> change the behavior, like for folio_pte_batch().
>>>
>>> For swapin, you really want the exclusive bits to match, though.
>>
>> I am not quite sure I definitely need exclusive bits to match. i can either
>> drop my 3/5 or ignore the exclusive bit as below (if anyone is not shared,
>> swpin won't reuse the large folio, but it can still entirely map it read-only):
>>
>> diff --git a/mm/internal.h b/mm/internal.h
>> index cae39c372bfc..5726e729c9ee 100644
>> --- a/mm/internal.h
>> +++ b/mm/internal.h
>> @@ -253,10 +253,22 @@ static inline int swap_pte_batch(pte_t
>> *start_ptep, int max_nr, pte_t pte,
>> *any_shared |= !pte_swp_exclusive(pte);
>>
>> while (ptep < end_ptep) {
>> + pte_t ignore_exclusive_pte;
>> + pte_t ignore_exclusive_expected_pte;
>> pte = ptep_get(ptep);
>>
>> - if (!pte_same(pte, expected_pte))
>> - break;
>> + if (any_shared) {
>> + ignore_exclusive_pte = pte;
>> + ignore_exclusive_expected_pte = expected_pte;
>> + ignore_exclusive_pte =
>> pte_swp_clear_exclusive(ignore_exclusive_pte);
>> + ignore_exclusive_expected_pte =
>> pte_swp_clear_exclusive(expected_pte);
>> +
>> + if (!pte_same(ignore_exclusive_pte,
>> ignore_exclusive_expected_pte))
>> + break;
>> + } else {
>> + if (!pte_same(pte, expected_pte))
>> + break;
>> + }
>>
>> if (any_shared)
>> *any_shared |= !pte_swp_exclusive(pte);
>
> I'll leave David to comment on this proposal; I'm not sure I understand all the
> details. The code change does look a bit "busy" though - sometimes that can be
> an indicator :)

I'd prefer to keep it simple initially. Devil is in the detail for the
refault case: in the past, dropping an exclusive flag could have been
problematic with some O_DIRECT workloads that were not using FOLL_PIN.
IIUC, some still remain.

More details can be had from
https://lore.kernel.org/all/[email protected]/


It might all change a bit if I manage to get folio_test_anon_exclusive()
flying. The current plan is that all individual swp PTEs would still
have pte_swp_exclusive() set, and we would *not* clear the
folio_test_anon_exclusive() flag while the folio is still in the
swapcache [which we do today to make fork() handling easier].

That will make refault significantly easier to handle regarding
exclusivity handling with large folios.

--
Cheers,

David / dhildenb


2024-04-09 11:19:00

by Ryan Roberts

[permalink] [raw]
Subject: [PATCH] FIXUP: mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()

Hi Andrew,

Could you please squash this into commit "mm: swap:
free_swap_and_cache_nr() as batched free_swap_and_cache()", which is
already in mm-unstable?

It fixes a build warning on parisc [1] due to their implementation of
__swp_entry_to_pte() not correctly putting the macro args in
parentheses. But it turns out that a bunch of other arches are also
faulty in this regard.
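
To illustrate the class of problem with a made-up macro (this is not
parisc's actual definition):

/* Hypothetical arch macro that uses its parameter without parentheses: */
#define EXAMPLE_SWP_OFFSET_FIELD(offset)  (offset << EXAMPLE_SWP_OFFSET_SHIFT)

/*
 * EXAMPLE_SWP_OFFSET_FIELD(swp_offset(entry) + 1) expands to
 *
 *	(swp_offset(entry) + 1 << EXAMPLE_SWP_OFFSET_SHIFT)
 *
 * which gcc flags with -Wparentheses ("suggest parentheses around '+'
 * inside '<<'"), and with other operators an unparenthesised argument can
 * silently change the value. Parenthesising the expression at the call
 * site avoids both issues regardless of how each arch writes its macros.
 */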

I'm also adding an extra statement to the documentation for
pte_next_swp_offset() as suggested by David.

[1] https://lore.kernel.org/all/[email protected]/

Thanks,
Ryan

Signed-off-by: Ryan Roberts <[email protected]>
---
mm/internal.h | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 9d3250b4a08a..22152e0c8494 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -202,7 +202,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,

/**
* pte_next_swp_offset - Increment the swap entry offset field of a swap pte.
- * @pte: The initial pte state; is_swap_pte(pte) must be true.
+ * @pte: The initial pte state; is_swap_pte(pte) must be true and
+ * non_swap_entry() must be false.
*
* Increments the swap offset, while maintaining all other fields, including
* swap type, and any swp pte bits. The resulting pte is returned.
@@ -211,7 +212,7 @@ static inline pte_t pte_next_swp_offset(pte_t pte)
{
swp_entry_t entry = pte_to_swp_entry(pte);
pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
- swp_offset(entry) + 1));
+ (swp_offset(entry) + 1)));

if (pte_swp_soft_dirty(pte))
new = pte_swp_mksoft_dirty(new);
--
2.25.1


2024-05-13 07:30:22

by Barry Song

[permalink] [raw]
Subject: Re: [PATCH v7 5/7] mm: swap: Allow storage of all mTHP orders

On Tue, Apr 9, 2024 at 6:40 AM Ryan Roberts <[email protected]> wrote:
>
> Multi-size THP enables performance improvements by allocating large,
> pte-mapped folios for anonymous memory. However I've observed that on an
> arm64 system running a parallel workload (e.g. kernel compilation)
> across many cores, under high memory pressure, the speed regresses. This
> is due to bottlenecking on the increased number of TLBIs added due to
> all the extra folio splitting when the large folios are swapped out.
>
> Therefore, solve this regression by adding support for swapping out mTHP
> without needing to split the folio, just like is already done for
> PMD-sized THP. This change only applies when CONFIG_THP_SWAP is enabled,
> and when the swap backing store is a non-rotating block device. These
> are the same constraints as for the existing PMD-sized THP swap-out
> support.
>
> Note that no attempt is made to swap-in (m)THP here - this is still done
> page-by-page, like for PMD-sized THP. But swapping-out mTHP is a
> prerequisite for swapping-in mTHP.
>
> The main change here is to improve the swap entry allocator so that it
> can allocate any power-of-2 number of contiguous entries between [1, (1
> << PMD_ORDER)]. This is done by allocating a cluster for each distinct
> order and allocating sequentially from it until the cluster is full.
> This ensures that we don't need to search the map and we get no
> fragmentation due to alignment padding for different orders in the
> cluster. If there is no current cluster for a given order, we attempt to
> allocate a free cluster from the list. If there are no free clusters, we
> fail the allocation and the caller can fall back to splitting the folio
> and allocates individual entries (as per existing PMD-sized THP
> fallback).
>
> The per-order current clusters are maintained per-cpu using the existing
> infrastructure. This is done to avoid interleving pages from different
> tasks, which would prevent IO being batched. This is already done for
> the order-0 allocations so we follow the same pattern.
>
> As is done for order-0 per-cpu clusters, the scanner now can steal
> order-0 entries from any per-cpu-per-order reserved cluster. This
> ensures that when the swap file is getting full, space doesn't get tied
> up in the per-cpu reserves.
>
> This change only modifies swap to be able to accept any order mTHP. It
> doesn't change the callers to elide doing the actual split. That will be
> done in separate changes.
>
> Reviewed-by: "Huang, Ying" <[email protected]>
> Signed-off-by: Ryan Roberts <[email protected]>
> ---
> include/linux/swap.h | 8 ++-
> mm/swapfile.c | 162 ++++++++++++++++++++++++-------------------
> 2 files changed, 98 insertions(+), 72 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index b888e1080a94..11c53692f65f 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -268,13 +268,19 @@ struct swap_cluster_info {
> */
> #define SWAP_NEXT_INVALID 0
>
> +#ifdef CONFIG_THP_SWAP
> +#define SWAP_NR_ORDERS (PMD_ORDER + 1)
> +#else
> +#define SWAP_NR_ORDERS 1
> +#endif
> +
> /*
> * We assign a cluster to each CPU, so each CPU can allocate swap entry from
> * its own cluster and swapout sequentially. The purpose is to optimize swapout
> * throughput.
> */
> struct percpu_cluster {
> - unsigned int next; /* Likely next allocation offset */
> + unsigned int next[SWAP_NR_ORDERS]; /* Likely next allocation offset */
> };
>
> struct swap_cluster_list {
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index d2e3d3cd439f..148ef08f19dd 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -551,10 +551,12 @@ static void free_cluster(struct swap_info_struct *si, unsigned long idx)
>
> /*
> * The cluster corresponding to page_nr will be used. The cluster will be
> - * removed from free cluster list and its usage counter will be increased.
> + * removed from free cluster list and its usage counter will be increased by
> + * count.
> */
> -static void inc_cluster_info_page(struct swap_info_struct *p,
> - struct swap_cluster_info *cluster_info, unsigned long page_nr)
> +static void add_cluster_info_page(struct swap_info_struct *p,
> + struct swap_cluster_info *cluster_info, unsigned long page_nr,
> + unsigned long count)
> {
> unsigned long idx = page_nr / SWAPFILE_CLUSTER;
>
> @@ -563,9 +565,19 @@ static void inc_cluster_info_page(struct swap_info_struct *p,
> if (cluster_is_free(&cluster_info[idx]))
> alloc_cluster(p, idx);
>
> - VM_BUG_ON(cluster_count(&cluster_info[idx]) >= SWAPFILE_CLUSTER);
> + VM_BUG_ON(cluster_count(&cluster_info[idx]) + count > SWAPFILE_CLUSTER);
> cluster_set_count(&cluster_info[idx],
> - cluster_count(&cluster_info[idx]) + 1);
> + cluster_count(&cluster_info[idx]) + count);
> +}
> +
> +/*
> + * The cluster corresponding to page_nr will be used. The cluster will be
> + * removed from free cluster list and its usage counter will be increased by 1.
> + */
> +static void inc_cluster_info_page(struct swap_info_struct *p,
> + struct swap_cluster_info *cluster_info, unsigned long page_nr)
> +{
> + add_cluster_info_page(p, cluster_info, page_nr, 1);
> }
>
> /*
> @@ -595,7 +607,7 @@ static void dec_cluster_info_page(struct swap_info_struct *p,
> */
> static bool
> scan_swap_map_ssd_cluster_conflict(struct swap_info_struct *si,
> - unsigned long offset)
> + unsigned long offset, int order)
> {
> struct percpu_cluster *percpu_cluster;
> bool conflict;
> @@ -609,24 +621,39 @@ scan_swap_map_ssd_cluster_conflict(struct swap_info_struct *si,
> return false;
>
> percpu_cluster = this_cpu_ptr(si->percpu_cluster);
> - percpu_cluster->next = SWAP_NEXT_INVALID;
> + percpu_cluster->next[order] = SWAP_NEXT_INVALID;
> + return true;
> +}
> +
> +static inline bool swap_range_empty(char *swap_map, unsigned int start,
> + unsigned int nr_pages)
> +{
> + unsigned int i;
> +
> + for (i = 0; i < nr_pages; i++) {
> + if (swap_map[start + i])
> + return false;
> + }
> +
> return true;
> }
>
> /*
> - * Try to get a swap entry from current cpu's swap entry pool (a cluster). This
> - * might involve allocating a new cluster for current CPU too.
> + * Try to get swap entries with specified order from current cpu's swap entry
> + * pool (a cluster). This might involve allocating a new cluster for current CPU
> + * too.
> */
> static bool scan_swap_map_try_ssd_cluster(struct swap_info_struct *si,
> - unsigned long *offset, unsigned long *scan_base)
> + unsigned long *offset, unsigned long *scan_base, int order)
> {
> + unsigned int nr_pages = 1 << order;
> struct percpu_cluster *cluster;
> struct swap_cluster_info *ci;
> unsigned int tmp, max;
>
> new_cluster:
> cluster = this_cpu_ptr(si->percpu_cluster);
> - tmp = cluster->next;
> + tmp = cluster->next[order];
> if (tmp == SWAP_NEXT_INVALID) {
> if (!cluster_list_empty(&si->free_clusters)) {
> tmp = cluster_next(&si->free_clusters.head) *
> @@ -647,26 +674,27 @@ static bool scan_swap_map_try_ssd_cluster(struct swap_info_struct *si,
>
> /*
> * Other CPUs can use our cluster if they can't find a free cluster,
> - * check if there is still free entry in the cluster
> + * check if there is still free entry in the cluster, maintaining
> + * natural alignment.
> */
> max = min_t(unsigned long, si->max, ALIGN(tmp + 1, SWAPFILE_CLUSTER));
> if (tmp < max) {
> ci = lock_cluster(si, tmp);
> while (tmp < max) {
> - if (!si->swap_map[tmp])
> + if (swap_range_empty(si->swap_map, tmp, nr_pages))
> break;
> - tmp++;
> + tmp += nr_pages;
> }
> unlock_cluster(ci);
> }
> if (tmp >= max) {
> - cluster->next = SWAP_NEXT_INVALID;
> + cluster->next[order] = SWAP_NEXT_INVALID;
> goto new_cluster;
> }
> *offset = tmp;
> *scan_base = tmp;
> - tmp += 1;
> - cluster->next = tmp < max ? tmp : SWAP_NEXT_INVALID;
> + tmp += nr_pages;
> + cluster->next[order] = tmp < max ? tmp : SWAP_NEXT_INVALID;
> return true;
> }
>
> @@ -796,13 +824,14 @@ static bool swap_offset_available_and_locked(struct swap_info_struct *si,
>
> static int scan_swap_map_slots(struct swap_info_struct *si,
> unsigned char usage, int nr,
> - swp_entry_t slots[])
> + swp_entry_t slots[], int order)
> {
> struct swap_cluster_info *ci;
> unsigned long offset;
> unsigned long scan_base;
> unsigned long last_in_cluster = 0;
> int latency_ration = LATENCY_LIMIT;
> + unsigned int nr_pages = 1 << order;
> int n_ret = 0;
> bool scanned_many = false;
>
> @@ -817,6 +846,25 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
> * And we let swap pages go all over an SSD partition. Hugh
> */
>
> + if (order > 0) {
> + /*
> + * Should not even be attempting large allocations when huge
> + * page swap is disabled. Warn and fail the allocation.
> + */
> + if (!IS_ENABLED(CONFIG_THP_SWAP) ||
> + nr_pages > SWAPFILE_CLUSTER) {
> + VM_WARN_ON_ONCE(1);
> + return 0;
> + }
> +
> + /*
> + * Swapfile is not block device or not using clusters so unable
> + * to allocate large entries.
> + */
> + if (!(si->flags & SWP_BLKDEV) || !si->cluster_info)
> + return 0;
> + }
> +
> si->flags += SWP_SCANNING;
> /*
> * Use percpu scan base for SSD to reduce lock contention on
> @@ -831,8 +879,11 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
>
> /* SSD algorithm */
> if (si->cluster_info) {
> - if (!scan_swap_map_try_ssd_cluster(si, &offset, &scan_base))
> + if (!scan_swap_map_try_ssd_cluster(si, &offset, &scan_base, order)) {

Hi Ryan,

Sorry for bringing up an old thread.

During the initial hour of using an Android phone with 64KiB mTHP, we
noticed that the anon_swpout_fallback rate was less than 10%. However,
after several hours of phone usage, we observed a significant increase
in the anon_swpout_fallback rate, reaching 100%.

As I checked the code of scan_swap_map_try_ssd_cluster(),

static bool scan_swap_map_try_ssd_cluster(struct swap_info_struct *si,
unsigned long *offset, unsigned long *scan_base, int order)
{
unsigned int nr_pages = 1 << order;
struct percpu_cluster *cluster;
struct swap_cluster_info *ci;
unsigned int tmp, max;

new_cluster:
cluster = this_cpu_ptr(si->percpu_cluster);
tmp = cluster->next[order];
if (tmp == SWAP_NEXT_INVALID) {
if (!cluster_list_empty(&si->free_clusters)) {
tmp = cluster_next(&si->free_clusters.head) *
SWAPFILE_CLUSTER;
} else if (!cluster_list_empty(&si->discard_clusters)) {
/*
* we don't have free cluster but have some clusters in
* discarding, do discard now and reclaim them, then
* reread cluster_next_cpu since we dropped si->lock
*/
swap_do_scheduled_discard(si);
*scan_base = this_cpu_read(*si->cluster_next_cpu);
*offset = *scan_base;
goto new_cluster;
} else
return false;
}
..

}

Considering the cluster_list_empty() checks, is it necessary to have a
free cluster to ensure a contiguous allocation of swap slots for large
folio swap-out? For instance, if numerous clusters still possess ample
free swap slots, could we potentially miss out on them because the slow
scan is never executed?

I'm not saying your patchset has problems, just that I have some questions.

Thanks
Barry

2024-05-13 08:44:03

by Ryan Roberts

[permalink] [raw]
Subject: Re: [PATCH v7 5/7] mm: swap: Allow storage of all mTHP orders

On 13/05/2024 08:30, Barry Song wrote:
> On Tue, Apr 9, 2024 at 6:40 AM Ryan Roberts <[email protected]> wrote:
>>
>> Multi-size THP enables performance improvements by allocating large,
>> pte-mapped folios for anonymous memory. However I've observed that on an
>> arm64 system running a parallel workload (e.g. kernel compilation)
>> across many cores, under high memory pressure, the speed regresses. This
>> is due to bottlenecking on the increased number of TLBIs added due to
>> all the extra folio splitting when the large folios are swapped out.
>>
>> Therefore, solve this regression by adding support for swapping out mTHP
>> without needing to split the folio, just like is already done for
>> PMD-sized THP. This change only applies when CONFIG_THP_SWAP is enabled,
>> and when the swap backing store is a non-rotating block device. These
>> are the same constraints as for the existing PMD-sized THP swap-out
>> support.
>>
>> Note that no attempt is made to swap-in (m)THP here - this is still done
>> page-by-page, like for PMD-sized THP. But swapping-out mTHP is a
>> prerequisite for swapping-in mTHP.
>>
>> The main change here is to improve the swap entry allocator so that it
>> can allocate any power-of-2 number of contiguous entries between [1, (1
>> << PMD_ORDER)]. This is done by allocating a cluster for each distinct
>> order and allocating sequentially from it until the cluster is full.
>> This ensures that we don't need to search the map and we get no
>> fragmentation due to alignment padding for different orders in the
>> cluster. If there is no current cluster for a given order, we attempt to
>> allocate a free cluster from the list. If there are no free clusters, we
>> fail the allocation and the caller can fall back to splitting the folio
>> and allocates individual entries (as per existing PMD-sized THP
>> fallback).
>>
>> The per-order current clusters are maintained per-cpu using the existing
>> infrastructure. This is done to avoid interleving pages from different
>> tasks, which would prevent IO being batched. This is already done for
>> the order-0 allocations so we follow the same pattern.
>>
>> As is done for order-0 per-cpu clusters, the scanner now can steal
>> order-0 entries from any per-cpu-per-order reserved cluster. This
>> ensures that when the swap file is getting full, space doesn't get tied
>> up in the per-cpu reserves.
>>
>> This change only modifies swap to be able to accept any order mTHP. It
>> doesn't change the callers to elide doing the actual split. That will be
>> done in separate changes.

[...]

>
> Hi Ryan,
>
> Sorry for bringing up an old thread.

No problem - thanks for the report!

>
> During the initial hour of utilizing an Android phone with 64KiB mTHP,
> we noticed that the
> anon_swpout_fallback rate was less than 10%. However, after several
> hours of phone
> usage, we observed a significant increase in the anon_swpout_fallback
> rate, reaching
> 100%.

I suspect this is due to fragmentation of the clusters; if there is just one
page left in a cluster then the cluster can't be freed, and once the cluster
free list is empty a new cluster allocation will fail and this will cause
fallback to order-0.

>
> As I checked the code of scan_swap_map_try_ssd_cluster(),
>
> static bool scan_swap_map_try_ssd_cluster(struct swap_info_struct *si,
> unsigned long *offset, unsigned long *scan_base, int order)
> {
> unsigned int nr_pages = 1 << order;
> struct percpu_cluster *cluster;
> struct swap_cluster_info *ci;
> unsigned int tmp, max;
>
> new_cluster:
> cluster = this_cpu_ptr(si->percpu_cluster);
> tmp = cluster->next[order];
> if (tmp == SWAP_NEXT_INVALID) {
> if (!cluster_list_empty(&si->free_clusters)) {
> tmp = cluster_next(&si->free_clusters.head) *
> SWAPFILE_CLUSTER;
> } else if (!cluster_list_empty(&si->discard_clusters)) {
> /*
> * we don't have free cluster but have some clusters in
> * discarding, do discard now and reclaim them, then
> * reread cluster_next_cpu since we dropped si->lock
> */
> swap_do_scheduled_discard(si);
> *scan_base = this_cpu_read(*si->cluster_next_cpu);
> *offset = *scan_base;
> goto new_cluster;
> } else
> return false;
> }
> ...
>
> }
>
> Considering the cluster_list_empty() checks, is it necessary to have
> free_cluster to
> ensure a continuous allocation of swap slots for large folio swap out?

Yes, currently that is done by design; if we can't allocate a free cluster then
we only scan for free space in an already allocated cluster for order-0
allocations. I did this for a few reasons:

1: Simplicity.

2: Keep behavior the same as PMD-order allocations, which are never scanned
(although the cluster is the same size as the PMD so scanning would be pointless
there - so perhaps this is not a good argument for not scanning smaller high
orders).

3: If scanning for a high order fails then we would fall back to order-0 and
scan again, so I was trying to avoid the potential for 2 scans (although once
you split the page, you'll end up scanning per-page, so perhaps it's not a real
argument either).

> For instance,
> if numerous clusters still possess ample free swap slots, could we
> potentially miss
> out on them due to a lack of execution of a slow scan?

I think it would definitely be possible to add support for scanning high orders
and from memory, I don't think it would be too difficult. Based on your
experience, it sounds like this would be valuable.
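
As a rough illustration of what that could look like (sketch only, not part
of this series; locking against si->lock and the cluster lock is ignored,
and names are made up), one could reuse the swap_range_empty() helper that
this patch adds to look for a naturally aligned run of free slots inside an
already-allocated cluster:

static bool scan_cluster_for_order(struct swap_info_struct *si,
				   unsigned int cluster_start, int order,
				   unsigned int *found)
{
	unsigned int nr_pages = 1 << order;
	unsigned int offset = cluster_start;
	unsigned int end = cluster_start + SWAPFILE_CLUSTER;

	/* Step by nr_pages so any hit stays naturally aligned. */
	for (; offset + nr_pages <= end; offset += nr_pages) {
		if (swap_range_empty(si->swap_map, offset, nr_pages)) {
			*found = offset;
			return true;
		}
	}
	return false;
}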

I'm going to be out on paternity leave for 3 weeks from the end of today, so I won't
personally be able to do this until I get back. I might find some time to review
if you were to post something though :)

>
> I'm not saying your patchset has problems, just that I have some questions.

Let's call it "opportunity for further improvement" rather than problems. :)

I suspect swap-in of large folios may help reduce the fragmentation a bit since
we are less likely to keep parts of a previously swapped-out mTHP in swap.

Also, I understand that Chris Li has been doing some thinking around an
indirection layer which would remove the requirement for pages of a large folio
to be stored contiguously in the swap file. I think he is planning to talk about
that at LSFMM? (which I sadly won't be attending).

Thanks,
Ryan

>
> Thanks
> Barry


2024-05-13 09:24:37

by Barry Song

[permalink] [raw]
Subject: Re: [PATCH v7 5/7] mm: swap: Allow storage of all mTHP orders

On Mon, May 13, 2024 at 8:43 PM Ryan Roberts <[email protected]> wrote:
>
> On 13/05/2024 08:30, Barry Song wrote:
> > On Tue, Apr 9, 2024 at 6:40 AM Ryan Roberts <[email protected]> wrote:
> >>
> >> Multi-size THP enables performance improvements by allocating large,
> >> pte-mapped folios for anonymous memory. However I've observed that on an
> >> arm64 system running a parallel workload (e.g. kernel compilation)
> >> across many cores, under high memory pressure, the speed regresses. This
> >> is due to bottlenecking on the increased number of TLBIs added due to
> >> all the extra folio splitting when the large folios are swapped out.
> >>
> >> Therefore, solve this regression by adding support for swapping out mTHP
> >> without needing to split the folio, just like is already done for
> >> PMD-sized THP. This change only applies when CONFIG_THP_SWAP is enabled,
> >> and when the swap backing store is a non-rotating block device. These
> >> are the same constraints as for the existing PMD-sized THP swap-out
> >> support.
> >>
> >> Note that no attempt is made to swap-in (m)THP here - this is still done
> >> page-by-page, like for PMD-sized THP. But swapping-out mTHP is a
> >> prerequisite for swapping-in mTHP.
> >>
> >> The main change here is to improve the swap entry allocator so that it
> >> can allocate any power-of-2 number of contiguous entries between [1, (1
> >> << PMD_ORDER)]. This is done by allocating a cluster for each distinct
> >> order and allocating sequentially from it until the cluster is full.
> >> This ensures that we don't need to search the map and we get no
> >> fragmentation due to alignment padding for different orders in the
> >> cluster. If there is no current cluster for a given order, we attempt to
> >> allocate a free cluster from the list. If there are no free clusters, we
> >> fail the allocation and the caller can fall back to splitting the folio
> >> and allocates individual entries (as per existing PMD-sized THP
> >> fallback).
> >>
> >> The per-order current clusters are maintained per-cpu using the existing
> >> infrastructure. This is done to avoid interleving pages from different
> >> tasks, which would prevent IO being batched. This is already done for
> >> the order-0 allocations so we follow the same pattern.
> >>
> >> As is done for order-0 per-cpu clusters, the scanner now can steal
> >> order-0 entries from any per-cpu-per-order reserved cluster. This
> >> ensures that when the swap file is getting full, space doesn't get tied
> >> up in the per-cpu reserves.
> >>
> >> This change only modifies swap to be able to accept any order mTHP. It
> >> doesn't change the callers to elide doing the actual split. That will be
> >> done in separate changes.
>
> [...]
>
> >
> > Hi Ryan,
> >
> > Sorry for bringing up an old thread.
>
> No problem - thanks for the report!
>
> >
> > During the initial hour of utilizing an Android phone with 64KiB mTHP,
> > we noticed that the
> > anon_swpout_fallback rate was less than 10%. However, after several
> > hours of phone
> > usage, we observed a significant increase in the anon_swpout_fallback
> > rate, reaching
> > 100%.
>
> I suspect this is due to fragmentation of the clusters; If there is just one
> page left in a cluster then the cluster can't be freed and once the cluster free
> list is empty a new cluster allcoation will fail and this will cause fallback to
> order-0.
>
> >
> > As I checked the code of scan_swap_map_try_ssd_cluster(),
> >
> > static bool scan_swap_map_try_ssd_cluster(struct swap_info_struct *si,
> > unsigned long *offset, unsigned long *scan_base, int order)
> > {
> > unsigned int nr_pages = 1 << order;
> > struct percpu_cluster *cluster;
> > struct swap_cluster_info *ci;
> > unsigned int tmp, max;
> >
> > new_cluster:
> > cluster = this_cpu_ptr(si->percpu_cluster);
> > tmp = cluster->next[order];
> > if (tmp == SWAP_NEXT_INVALID) {
> > if (!cluster_list_empty(&si->free_clusters)) {
> > tmp = cluster_next(&si->free_clusters.head) *
> > SWAPFILE_CLUSTER;
> > } else if (!cluster_list_empty(&si->discard_clusters)) {
> > /*
> > * we don't have free cluster but have some clusters in
> > * discarding, do discard now and reclaim them, then
> > * reread cluster_next_cpu since we dropped si->lock
> > */
> > swap_do_scheduled_discard(si);
> > *scan_base = this_cpu_read(*si->cluster_next_cpu);
> > *offset = *scan_base;
> > goto new_cluster;
> > } else
> > return false;
> > }
> > ...
> >
> > }
> >
> > Considering the cluster_list_empty() checks, is it necessary to have
> > free_cluster to
> > ensure a continuous allocation of swap slots for large folio swap out?
>
> Yes, currently that is done by design; if we can't allocate a free cluster then
> we only scan for free space in an already allocated cluster for order-0
> allocations. I did this for a couple of reasons;
>
> 1: Simplicity.
>
> 2: Keep behavior the same as PMD-order allocations, which are never scanned
> (although the cluster is the same size as the PMD so scanning would be pointless
> there - so perhaps this is not a good argument for not scanning smaller high
> orders).
>
> 3: If scanning for a high order fails then we would fall back to order-0 and
> scan again, so I was trying to avoid the potential for 2 scans (although once
> you split the page, you'll end up scanning per-page, so perhaps its not a real
> argument either).
>
> > For instance,
> > if numerous clusters still possess ample free swap slots, could we
> > potentially miss
> > out on them due to a lack of execution of a slow scan?
>
> I think it would definitely be possible to add support for scanning high orders
> and from memory, I don't think it would be too difficult. Based on your
> experience, it sounds like this would be valuable.
>
> I'm going to be out on paternity leave for 3 weeks from the end of today, so I
> won't personally be able to do this until I get back. I might find some time
> to review if you were to post something though :)

Congratulations on the arrival of your precious little one! Forget
about the swap and
mTHP, enjoy your time with the family :-)

>
> >
> > I'm not saying your patchset has problems, just that I have some questions.
>
> Let's call it "opportunity for further improvement" rather than problems. :)
>
> I suspect swap-in of large folios may help reduce the fragmentation a bit since
> we are less likely to keep parts of a previously swapped-out mTHP in swap.
>
> Also, I understand that Chris Li has been doing some thinking around an
> indirection layer which would remove the requirement for pages of a large folio
> to be stored contiguously in the swap file. I think he is planning to talk about
> that at LSFMM? (which I sadly won't be attending).
>
> Thanks,
> Ryan
>
> >

Thanks
Barry

2024-06-03 21:19:14

by Yosry Ahmed

[permalink] [raw]
Subject: Re: [PATCH v7 0/7] Swap-out mTHP without splitting

On Mon, Apr 8, 2024 at 11:40 AM Ryan Roberts <[email protected]> wrote:
>
> Hi All,
>
> [...]
>
> Ryan Roberts (7):
> mm: swap: Remove CLUSTER_FLAG_HUGE from swap_cluster_info:flags
> mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()
> mm: swap: Simplify struct percpu_cluster
> mm: swap: Update get_swap_pages() to take folio order
> mm: swap: Allow storage of all mTHP orders
> mm: vmscan: Avoid split during shrink_folio_list()
> mm: madvise: Avoid split during MADV_PAGEOUT and MADV_COLD

+Zi Yan

While looking at the page splitting code, I noticed that
split_huge_page_to_list_to_order() will refuse to split a folio in the
swapcache to any order higher than 0. It has the following check:

if (new_order) {
        /* Only swapping a whole PMD-mapped folio is supported */
        if (folio_test_swapcache(folio))
                return -EINVAL;
        ...
}

I am guessing with this series this may no longer be applicable?


2024-06-03 22:16:00

by Zi Yan

[permalink] [raw]
Subject: Re: [PATCH v7 0/7] Swap-out mTHP without splitting

On 3 Jun 2024, at 14:18, Yosry Ahmed wrote:

> On Mon, Apr 8, 2024 at 11:40 AM Ryan Roberts <[email protected]> wrote:
>>
>> Hi All,
>>
>> [...]
>
> +Zi Yan
>
> While looking at the page splitting code, I noticed that
> split_huge_page_to_list_to_order() will refuse to split a folio in the
> swapcache to any order higher than 0. It has the following check:
>
> if (new_order) {
>         /* Only swapping a whole PMD-mapped folio is supported */
>         if (folio_test_swapcache(folio))
>                 return -EINVAL;
>         ...
> }
>
> I am guessing with this series this may no longer be applicable?

Yes, you can remove it, but please make sure the swapcache code below that check is still right [1].

[1] https://elixir.bootlin.com/linux/v6.10-rc2/source/mm/huge_memory.c#L2868
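
For anyone picking this up: the property that has to keep holding there is,
roughly, that each after-split folio inherits the swap entry at its own offset
and is re-indexed in the swap cache at that offset. Illustrative pseudo-code
only (folio_at() is a made-up helper, not a kernel function):

        /* splitting a swapcache folio of order `old` into order-`new` pieces */
        for (i = 0; i < (1 << old); i += (1 << new)) {
                struct folio *piece = folio_at(folio, i);

                piece->swap.val = folio->swap.val + i;
                /* the swap cache xarray slot at swp_offset(folio->swap) + i
                 * must now point at `piece` */
        }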

Best Regards,
Yan, Zi

