2024-04-13 00:22:51

by Lance Yang

Subject: [PATCH v6 0/2] mm/madvise: enhance lazyfreeing with mTHP in madvise_free

Hi All,

This patchset adds support for lazyfreeing multi-size THP (mTHP) without
needing to first split the large folio via split_folio(). However, we
still need to split a large folio that is not fully mapped within the
target range.

If a large folio is locked or shared, or if we fail to split it, we just
leave it in place and advance to the next PTE in the range. But note that
the behavior has changed; previously, any failure of this sort would cause
the entire operation to give up. As large folios become more common,
sticking to the old way could result in wasted opportunities.

Performance Testing
===================

On an Intel i5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
the same size results in the following runtimes for madvise(MADV_FREE)
in seconds (shorter is better):

Folio Size |   Old    |   New    | Change
------------------------------------------
      4KiB | 0.590251 | 0.590259 |    0%
     16KiB | 2.990447 | 0.185655 |  -94%
     32KiB | 2.547831 | 0.104870 |  -95%
     64KiB | 2.457796 | 0.052812 |  -97%
    128KiB | 2.281034 | 0.032777 |  -99%
    256KiB | 2.230387 | 0.017496 |  -99%
    512KiB | 2.189106 | 0.010781 |  -99%
   1024KiB | 2.183949 | 0.007753 |  -99%
   2048KiB | 0.002799 | 0.002804 |    0%

---
This patchset applies against mm-unstable (37a4ecbf36cb).

The performance numbers are from v2. I did a quick benchmark run of v6 and
nothing significantly changed.
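
For reference, the measurement is essentially the following (a
reconstructed sketch, not the exact harness used for the numbers above;
it assumes the tested mTHP size was enabled up front via
/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled):

	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <time.h>

	#define SIZE (1UL << 30)	/* 1GiB VMA */

	int main(void)
	{
		struct timespec t0, t1;
		char *buf = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (buf == MAP_FAILED)
			return 1;

		/* Fault in the range so it is backed by large folios. */
		memset(buf, 1, SIZE);

		clock_gettime(CLOCK_MONOTONIC, &t0);
		madvise(buf, SIZE, MADV_FREE);
		clock_gettime(CLOCK_MONOTONIC, &t1);

		printf("%f\n", (t1.tv_sec - t0.tv_sec) +
			       (t1.tv_nsec - t0.tv_nsec) / 1e9);
		return 0;
	}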

Changes since v5 [5]
====================
- Convert mkold_ptes() to clear_young_dirty_ptes() (per Ryan Roberts)
- Use the __bitwise flags as the input for clear_young_dirty_ptes()
(per David Hildenbrand)
- Follow the pattern already established by the original code
(per Ryan Roberts)

Changes since v4 [4]
====================
- The first patch implements the MADV_FREE change and introduces
mkold_clean_ptes() with a generic implementation. The second patch
specializes mkold_clean_ptes() for arm64, providing a performance boost
specific to arm64 (per Ryan Roberts)
- Drop the full parameter and call ptep_get_and_clear() in mkold_clean_ptes()
(per Ryan Roberts)
- Keep the previous behavior that avoids locking the folio if it wasn't in the
swapcache or if it wasn't dirty (per Ryan Roberts)

Changes since v3 [3]
====================
- Rename refresh_full_ptes -> mkold_clean_ptes (per Ryan Roberts)
- Override mkold_clean_ptes() for arm64 to make it faster (per Ryan Roberts)
- Update the changelog

Changes since v2 [2]
====================
- Only skip all the PTEs for nr_pages when the number of batched PTEs matches
nr_pages (per Barry Song)
- Change folio_pte_batch() to consume an optional *any_dirty and *any_young
function (per David Hildenbrand)
- Move the ptep_get_and_clear_full() loop into refresh_full_ptes() (per
David Hildenbrand)
- Follow a similar pattern for madvise_free_pte_range() (per Ryan Roberts)

Changes since v1 [1]
====================
- Update the performance numbers
- Update the changelog (per Ryan Roberts)
- Check the COW folio (per Yin Fengwei)
- Check if we are mapping all subpages (per Barry Song, David Hildenbrand,
Ryan Roberts)

[1] https://lore.kernel.org/linux-mm/[email protected]
[2] https://lore.kernel.org/linux-mm/[email protected]
[3] https://lore.kernel.org/linux-mm/[email protected]
[4] https://lore.kernel.org/linux-mm/[email protected]
[5] https://lore.kernel.org/linux-mm/[email protected]

Thanks,
Lance

Lance Yang (2):
mm/arm64: override clear_young_dirty_ptes() batch helper
mm/madvise: optimize lazyfreeing with mTHP in madvise_free

arch/arm64/include/asm/pgtable.h | 37 ++++++++++++++++++++++
arch/arm64/mm/contpte.c | 28 +++++++++++++++++
include/linux/mm_types.h | 9 ++++++
include/linux/pgtable.h | 42 +++++++++++++++++++++++++
mm/internal.h | 12 +++++--
mm/madvise.c | 147 +++++++++++++++++++++++++++++----------
mm/memory.c | 4 +--
7 files changed, 212 insertions(+), 67 deletions(-)

--
2.33.1



2024-04-13 00:23:05

by Lance Yang

Subject: [PATCH v6 1/2] mm/madvise: optimize lazyfreeing with mTHP in madvise_free

This patch optimizes lazyfreeing with PTE-mapped mTHP[1]
(Inspired by David Hildenbrand[2]). We aim to avoid unnecessary folio
splitting if the large folio is fully mapped within the target range.

If a large folio is locked or shared, or if we fail to split it, we just
leave it in place and advance to the next PTE in the range. But note that
the behavior has changed; previously, any failure of this sort would cause
the entire operation to give up. As large folios become more common,
sticking to the old way could result in wasted opportunities.

On an Intel i5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
the same size results in the following runtimes for madvise(MADV_FREE) in
seconds (shorter is better):

Folio Size |   Old    |   New    | Change
------------------------------------------
      4KiB | 0.590251 | 0.590259 |    0%
     16KiB | 2.990447 | 0.185655 |  -94%
     32KiB | 2.547831 | 0.104870 |  -95%
     64KiB | 2.457796 | 0.052812 |  -97%
    128KiB | 2.281034 | 0.032777 |  -99%
    256KiB | 2.230387 | 0.017496 |  -99%
    512KiB | 2.189106 | 0.010781 |  -99%
   1024KiB | 2.183949 | 0.007753 |  -99%
   2048KiB | 0.002799 | 0.002804 |    0%

[1] https://lkml.kernel.org/r/[email protected]
[2] https://lore.kernel.org/linux-mm/[email protected]

Signed-off-by: Lance Yang <[email protected]>
---
include/linux/mm_types.h | 9 +++
include/linux/pgtable.h | 42 +++++++++++
mm/internal.h | 12 +++-
mm/madvise.c | 147 ++++++++++++++++++++++-----------------
mm/memory.c | 4 +-
5 files changed, 147 insertions(+), 67 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index c432add95913..3c224e25f473 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1367,6 +1367,15 @@ enum fault_flag {

typedef unsigned int __bitwise zap_flags_t;

+/* Flags for clear_young_dirty_ptes(). */
+typedef int __bitwise cydp_t;
+
+/* Make the PTEs old, as pte_mkold() would */
+#define CYDP_CLEAR_YOUNG ((__force cydp_t)BIT(0))
+
+/* Make the PTEs clean, as pte_mkclean() would */
+#define CYDP_CLEAR_DIRTY ((__force cydp_t)BIT(1))
+
/*
* FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
* other. Here is what they mean, and how to use them:
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index e2f45e22a6d1..d7958243f099 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -489,6 +489,48 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
}
#endif

+#ifndef clear_young_dirty_ptes
+/**
+ * clear_young_dirty_ptes - Mark PTEs that map consecutive pages of the
+ * same folio as old/clean.
+ * @mm: Address space the pages are mapped into.
+ * @addr: Address the first page is mapped at.
+ * @ptep: Page table pointer for the first entry.
+ * @nr: Number of entries to mark old/clean.
+ * @flags: Flags to modify the PTE batch semantics.
+ *
+ * May be overridden by the architecture; otherwise, implemented by
+ * get_and_clear/modify/set for each pte in the range.
+ *
+ * Note that PTE bits in the PTE range besides the PFN can differ. For example,
+ * some PTEs might be write-protected.
+ *
+ * Context: The caller holds the page table lock. The PTEs map consecutive
+ * pages that belong to the same folio. The PTEs are all in the same PMD.
+ */
+static inline void clear_young_dirty_ptes(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep,
+ unsigned int nr, cydp_t flags)
+{
+ pte_t pte;
+
+ for (;;) {
+ pte = ptep_get_and_clear(mm, addr, ptep);
+
+ if (flags | CYDP_CLEAR_YOUNG)
+ pte = pte_mkold(pte);
+ if (flags | CYDP_CLEAR_DIRTY)
+ pte = pte_mkclean(pte);
+
+ set_pte_at(mm, addr, ptep, pte);
+ if (--nr == 0)
+ break;
+ ptep++;
+ addr += PAGE_SIZE;
+ }
+}
+#endif
+
static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
pte_t *ptep)
{
diff --git a/mm/internal.h b/mm/internal.h
index 3c0f3e3f9d99..ab8fcdeaf6eb 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -134,6 +134,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
* first one is writable.
* @any_young: Optional pointer to indicate whether any entry except the
* first one is young.
+ * @any_dirty: Optional pointer to indicate whether any entry except the
+ * first one is dirty.
*
* Detect a PTE batch: consecutive (present) PTEs that map consecutive
* pages of the same large folio.
@@ -149,18 +151,20 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
*/
static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
- bool *any_writable, bool *any_young)
+ bool *any_writable, bool *any_young, bool *any_dirty)
{
unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
const pte_t *end_ptep = start_ptep + max_nr;
pte_t expected_pte, *ptep;
- bool writable, young;
+ bool writable, young, dirty;
int nr;

if (any_writable)
*any_writable = false;
if (any_young)
*any_young = false;
+ if (any_dirty)
+ *any_dirty = false;

VM_WARN_ON_FOLIO(!pte_present(pte), folio);
VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
@@ -176,6 +180,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
writable = !!pte_write(pte);
if (any_young)
young = !!pte_young(pte);
+ if (any_dirty)
+ dirty = !!pte_dirty(pte);
pte = __pte_batch_clear_ignored(pte, flags);

if (!pte_same(pte, expected_pte))
@@ -193,6 +199,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
*any_writable |= writable;
if (any_young)
*any_young |= young;
+ if (any_dirty)
+ *any_dirty |= dirty;

nr = pte_batch_hint(ptep, pte);
expected_pte = pte_advance_pfn(expected_pte, nr);
diff --git a/mm/madvise.c b/mm/madvise.c
index d34ca6983227..b4103e2df346 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -321,6 +321,39 @@ static inline bool can_do_file_pageout(struct vm_area_struct *vma)
file_permission(vma->vm_file, MAY_WRITE) == 0;
}

+static inline int madvise_folio_pte_batch(unsigned long addr, unsigned long end,
+ struct folio *folio, pte_t *ptep,
+ pte_t pte, bool *any_young,
+ bool *any_dirty)
+{
+ int max_nr = (end - addr) / PAGE_SIZE;
+ const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
+
+ return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL,
+ any_young, any_dirty);
+}
+
+static inline bool madvise_pte_split_folio(struct mm_struct *mm, pmd_t *pmd,
+ unsigned long addr,
+ struct folio *folio, pte_t **pte,
+ spinlock_t **ptl)
+{
+ int err;
+
+ if (!folio_trylock(folio))
+ return false;
+
+ folio_get(folio);
+ pte_unmap_unlock(*pte, *ptl);
+ err = split_folio(folio);
+ folio_unlock(folio);
+ folio_put(folio);
+
+ *pte = pte_offset_map_lock(mm, pmd, addr, ptl);
+
+ return err == 0;
+}
+
static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
unsigned long addr, unsigned long end,
struct mm_walk *walk)
@@ -456,41 +489,30 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
* next pte in the range.
*/
if (folio_test_large(folio)) {
- const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
- FPB_IGNORE_SOFT_DIRTY;
- int max_nr = (end - addr) / PAGE_SIZE;
bool any_young;

- nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
- fpb_flags, NULL, &any_young);
- if (any_young)
- ptent = pte_mkyoung(ptent);
+ nr = madvise_folio_pte_batch(addr, end, folio, pte,
+ ptent, &any_young, NULL);

if (nr < folio_nr_pages(folio)) {
- int err;
-
if (folio_likely_mapped_shared(folio))
continue;
if (pageout_anon_only_filter && !folio_test_anon(folio))
continue;
- if (!folio_trylock(folio))
- continue;
- folio_get(folio);
+
arch_leave_lazy_mmu_mode();
- pte_unmap_unlock(start_pte, ptl);
- start_pte = NULL;
- err = split_folio(folio);
- folio_unlock(folio);
- folio_put(folio);
- start_pte = pte =
- pte_offset_map_lock(mm, pmd, addr, &ptl);
+ if (madvise_pte_split_folio(mm, pmd, addr,
+ folio, &start_pte, &ptl))
+ nr = 0;
if (!start_pte)
break;
+ pte = start_pte;
arch_enter_lazy_mmu_mode();
- if (!err)
- nr = 0;
continue;
}
+
+ if (any_young)
+ ptent = pte_mkyoung(ptent);
}

/*
@@ -507,7 +529,8 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
continue;

if (!pageout && pte_young(ptent)) {
- mkold_ptes(vma, addr, pte, nr);
+ clear_young_dirty_ptes(mm, addr, pte, nr,
+ CYDP_CLEAR_YOUNG);
tlb_remove_tlb_entries(tlb, pte, nr, addr);
}

@@ -687,44 +710,51 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
continue;

/*
- * If pmd isn't transhuge but the folio is large and
- * is owned by only this process, split it and
- * deactivate all pages.
+ * If we encounter a large folio, only split it if it is not
+ * fully mapped within the range we are operating on. Otherwise
+ * leave it as is so that it can be marked as lazyfree. If we
+ * fail to split a folio, leave it in place and advance to the
+ * next pte in the range.
*/
if (folio_test_large(folio)) {
- int err;
+ bool any_young, any_dirty;

- if (folio_likely_mapped_shared(folio))
- break;
- if (!folio_trylock(folio))
- break;
- folio_get(folio);
- arch_leave_lazy_mmu_mode();
- pte_unmap_unlock(start_pte, ptl);
- start_pte = NULL;
- err = split_folio(folio);
- folio_unlock(folio);
- folio_put(folio);
- if (err)
- break;
- start_pte = pte =
- pte_offset_map_lock(mm, pmd, addr, &ptl);
- if (!start_pte)
- break;
- arch_enter_lazy_mmu_mode();
- pte--;
- addr -= PAGE_SIZE;
- continue;
+ nr = madvise_folio_pte_batch(addr, end, folio, pte,
+ ptent, &any_young, &any_dirty);
+
+ if (nr < folio_nr_pages(folio)) {
+ if (folio_likely_mapped_shared(folio))
+ continue;
+
+ arch_leave_lazy_mmu_mode();
+ if (madvise_pte_split_folio(mm, pmd, addr,
+ folio, &start_pte, &ptl))
+ nr = 0;
+ if (!start_pte)
+ break;
+ pte = start_pte;
+ arch_enter_lazy_mmu_mode();
+ continue;
+ }
+
+ if (any_young)
+ ptent = pte_mkyoung(ptent);
+ if (any_dirty)
+ ptent = pte_mkdirty(ptent);
}

+ if (folio_mapcount(folio) != folio_nr_pages(folio))
+ continue;
+
if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
if (!folio_trylock(folio))
continue;
/*
- * If folio is shared with others, we mustn't clear
- * the folio's dirty flag.
+ * If we have a large folio at this point, we know it is
+ * fully mapped so if its mapcount is the same as its
+ * number of pages, it must be exclusive.
*/
- if (folio_mapcount(folio) != 1) {
+ if (folio_mapcount(folio) != folio_nr_pages(folio)) {
folio_unlock(folio);
continue;
}
@@ -740,19 +770,10 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
}

if (pte_young(ptent) || pte_dirty(ptent)) {
- /*
- * Some of architecture(ex, PPC) don't update TLB
- * with set_pte_at and tlb_remove_tlb_entry so for
- * the portability, remap the pte with old|clean
- * after pte clearing.
- */
- ptent = ptep_get_and_clear_full(mm, addr, pte,
- tlb->fullmm);
-
- ptent = pte_mkold(ptent);
- ptent = pte_mkclean(ptent);
- set_pte_at(mm, addr, pte, ptent);
- tlb_remove_tlb_entry(tlb, pte, addr);
+ clear_young_dirty_ptes(mm, addr, pte, nr,
+ CYDP_CLEAR_YOUNG |
+ CYDP_CLEAR_DIRTY);
+ tlb_remove_tlb_entries(tlb, pte, nr, addr);
}
folio_mark_lazyfree(folio);
}
diff --git a/mm/memory.c b/mm/memory.c
index 76157b32faa8..b6fa5146b260 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -989,7 +989,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
flags |= FPB_IGNORE_SOFT_DIRTY;

nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags,
- &any_writable, NULL);
+ &any_writable, NULL, NULL);
folio_ref_add(folio, nr);
if (folio_test_anon(folio)) {
if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
@@ -1558,7 +1558,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
*/
if (unlikely(folio_test_large(folio) && max_nr != 1)) {
nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, fpb_flags,
- NULL, NULL);
+ NULL, NULL, NULL);

zap_present_folio_ptes(tlb, vma, folio, page, pte, ptent, nr,
addr, details, rss, force_flush,
--
2.33.1


2024-04-13 00:23:19

by Lance Yang

Subject: [PATCH v6 2/2] mm/arm64: override clear_young_dirty_ptes() batch helper

The per-pte get_and_clear/modify/set approach would result in
unfolding/refolding for contpte mappings on arm64. So we need
to override clear_young_dirty_ptes() for arm64 to avoid it.

Suggested-by: David Hildenbrand <[email protected]>
Suggested-by: Barry Song <[email protected]>
Signed-off-by: Ryan Roberts <[email protected]>
Signed-off-by: Lance Yang <[email protected]>
---
arch/arm64/include/asm/pgtable.h | 37 ++++++++++++++++++++++++++++++++
arch/arm64/mm/contpte.c | 28 ++++++++++++++++++++++++
2 files changed, 65 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 9fd8613b2db2..f951774dd2d6 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1223,6 +1223,28 @@ static inline void __wrprotect_ptes(struct mm_struct *mm, unsigned long address,
__ptep_set_wrprotect(mm, address, ptep);
}

+static inline void __clear_young_dirty_ptes(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep,
+ unsigned int nr, cydp_t flags)
+{
+ pte_t pte;
+
+ for (;;) {
+ pte = __ptep_get(ptep);
+
+ if (flags | CYDP_CLEAR_YOUNG)
+ pte = pte_mkold(pte);
+ if (flags | CYDP_CLEAR_DIRTY)
+ pte = pte_mkclean(pte);
+
+ __set_pte(ptep, pte);
+ if (--nr == 0)
+ break;
+ ptep++;
+ addr += PAGE_SIZE;
+ }
+}
+
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
#define __HAVE_ARCH_PMDP_SET_WRPROTECT
static inline void pmdp_set_wrprotect(struct mm_struct *mm,
@@ -1379,6 +1401,9 @@ extern void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep,
pte_t entry, int dirty);
+extern void contpte_clear_young_dirty_ptes(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep,
+ unsigned int nr, cydp_t flags);

static __always_inline void contpte_try_fold(struct mm_struct *mm,
unsigned long addr, pte_t *ptep, pte_t pte)
@@ -1603,6 +1628,17 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
return contpte_ptep_set_access_flags(vma, addr, ptep, entry, dirty);
}

+#define clear_young_dirty_ptes clear_young_dirty_ptes
+static inline void clear_young_dirty_ptes(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep,
+ unsigned int nr, cydp_t flags)
+{
+ if (likely(nr == 1 && !pte_cont(__ptep_get(ptep))))
+ __clear_young_dirty_ptes(mm, addr, ptep, nr, flags);
+ else
+ contpte_clear_young_dirty_ptes(mm, addr, ptep, nr, flags);
+}
+
#else /* CONFIG_ARM64_CONTPTE */

#define ptep_get __ptep_get
@@ -1622,6 +1658,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
#define wrprotect_ptes __wrprotect_ptes
#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
#define ptep_set_access_flags __ptep_set_access_flags
+#define clear_young_dirty_ptes __clear_young_dirty_ptes

#endif /* CONFIG_ARM64_CONTPTE */

diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 1b64b4c3f8bf..bf3b089d9641 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -361,6 +361,34 @@ void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
}
EXPORT_SYMBOL_GPL(contpte_wrprotect_ptes);

+void contpte_clear_young_dirty_ptes(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, unsigned int nr, cydp_t flags)
+{
+ /*
+ * We can safely clear access/dirty without needing to unfold from
+ * the architectures perspective, even when contpte is set. If the
+ * range starts or ends midway through a contpte block, we can just
+ * expand to include the full contpte block. While this is not
+ * exactly what the core-mm asked for, it tracks access/dirty per
+ * folio, not per page. And since we only create a contpte block
+ * when it is covered by a single folio, we can get away with
+ * clearing access/dirty for the whole block.
+ */
+ unsigned int start = addr;
+ unsigned int end = start + nr;
+
+ if (pte_cont(__ptep_get(ptep + nr - 1)))
+ end = ALIGN(end, CONT_PTE_SIZE);
+
+ if (pte_cont(__ptep_get(ptep))) {
+ start = ALIGN_DOWN(start, CONT_PTE_SIZE);
+ ptep = contpte_align_down(ptep);
+ }
+
+ __clear_young_dirty_ptes(mm, start, ptep, end - start, flags);
+}
+EXPORT_SYMBOL_GPL(contpte_clear_young_dirty_ptes);
+
int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep,
pte_t entry, int dirty)
--
2.33.1


2024-04-15 08:47:36

by Ryan Roberts

Subject: Re: [PATCH v6 1/2] mm/madvise: optimize lazyfreeing with mTHP in madvise_free

On 13/04/2024 01:22, Lance Yang wrote:
> This patch optimizes lazyfreeing with PTE-mapped mTHP[1]
> (Inspired by David Hildenbrand[2]). We aim to avoid unnecessary folio
> splitting if the large folio is fully mapped within the target range.
>
> If a large folio is locked or shared, or if we fail to split it, we just
> leave it in place and advance to the next PTE in the range. But note that
> the behavior has changed; previously, any failure of this sort would cause
> the entire operation to give up. As large folios become more common,
> sticking to the old way could result in wasted opportunities.
>
> On an Intel i5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
> the same size results in the following runtimes for madvise(MADV_FREE) in
> seconds (shorter is better):
>
> Folio Size |   Old    |   New    | Change
> ------------------------------------------
>       4KiB | 0.590251 | 0.590259 |    0%
>      16KiB | 2.990447 | 0.185655 |  -94%
>      32KiB | 2.547831 | 0.104870 |  -95%
>      64KiB | 2.457796 | 0.052812 |  -97%
>     128KiB | 2.281034 | 0.032777 |  -99%
>     256KiB | 2.230387 | 0.017496 |  -99%
>     512KiB | 2.189106 | 0.010781 |  -99%
>    1024KiB | 2.183949 | 0.007753 |  -99%
>    2048KiB | 0.002799 | 0.002804 |    0%
>
> [1] https://lkml.kernel.org/r/[email protected]
> [2] https://lore.kernel.org/linux-mm/[email protected]
>
> Signed-off-by: Lance Yang <[email protected]>

This is looking close IMHO. Just one bug and suggestion below.


> ---
> include/linux/mm_types.h | 9 +++
> include/linux/pgtable.h | 42 +++++++++++
> mm/internal.h | 12 +++-
> mm/madvise.c | 147 ++++++++++++++++++++++-----------------
> mm/memory.c | 4 +-
> 5 files changed, 147 insertions(+), 67 deletions(-)
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index c432add95913..3c224e25f473 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -1367,6 +1367,15 @@ enum fault_flag {
>
> typedef unsigned int __bitwise zap_flags_t;
>
> +/* Flags for clear_young_dirty_ptes(). */
> +typedef int __bitwise cydp_t;
> +
> +/* Make the PTEs old, as pte_mkold() would */
> +#define CYDP_CLEAR_YOUNG ((__force cydp_t)BIT(0))
> +
> +/* Make the PTEs clean, as pte_mkclean() would */
> +#define CYDP_CLEAR_DIRTY ((__force cydp_t)BIT(1))
> +
> /*
> * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
> * other. Here is what they mean, and how to use them:
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index e2f45e22a6d1..d7958243f099 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -489,6 +489,48 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
> }
> #endif
>
> +#ifndef clear_young_dirty_ptes
> +/**
> + * clear_young_dirty_ptes - Mark PTEs that map consecutive pages of the
> + * same folio as old/clean.
> + * @mm: Address space the pages are mapped into.
> + * @addr: Address the first page is mapped at.
> + * @ptep: Page table pointer for the first entry.
> + * @nr: Number of entries to mark old/clean.
> + * @flags: Flags to modify the PTE batch semantics.
> + *
> + * May be overridden by the architecture; otherwise, implemented by
> + * get_and_clear/modify/set for each pte in the range.
> + *
> + * Note that PTE bits in the PTE range besides the PFN can differ. For example,
> + * some PTEs might be write-protected.
> + *
> + * Context: The caller holds the page table lock. The PTEs map consecutive
> + * pages that belong to the same folio. The PTEs are all in the same PMD.
> + */
> +static inline void clear_young_dirty_ptes(struct mm_struct *mm,

My suggestion to introduce clear_young_dirty_ptes() was so that we could
*remove* mkold_ptes(). So I think it would be good to split that into a separate
preparatory patch, where you change all the callers to call the new function and
remove mkold_ptes().

Additionally since many arches already override ptep_test_and_clear_young() (and
that's what the default mkold_ptes() did), you might want to call that in the
below loop if *only* CYDP_CLEAR_YOUNG is set to avoid the possibility of any
regression. I know I only made this change for the swap-out series so what you
have done may well be ok in practice - and certainly cleaner. It would be good
to hear others' opinions.

Note the existing mkold_ptes() takes a vma instead of mm (because that's what
ptep_test_and_clear_young() takes). So suggest passing vma. You can get mm from
vma->mm.

> + unsigned long addr, pte_t *ptep,
> + unsigned int nr, cydp_t flags)
> +{
> + pte_t pte;
> +
> + for (;;) {

Suggestion:
	if (flags == CYDP_CLEAR_YOUNG) {
		ptep_test_and_clear_young(vma, addr, ptep);
	} else {

> + pte = ptep_get_and_clear(mm, addr, ptep);
> +
> + if (flags | CYDP_CLEAR_YOUNG)

bug: this needs to be bitwise and (&). Currently it will always evaluate to
true. Same for next one.

> + pte = pte_mkold(pte);
> + if (flags | CYDP_CLEAR_DIRTY)
> + pte = pte_mkclean(pte);
> +
> + set_pte_at(mm, addr, ptep, pte);

}

> + if (--nr == 0)
> + break;
> + ptep++;
> + addr += PAGE_SIZE;
> + }
> +}
> +#endif
> +
> static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
> pte_t *ptep)
> {
> diff --git a/mm/internal.h b/mm/internal.h
> index 3c0f3e3f9d99..ab8fcdeaf6eb 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -134,6 +134,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
> * first one is writable.
> * @any_young: Optional pointer to indicate whether any entry except the
> * first one is young.
> + * @any_dirty: Optional pointer to indicate whether any entry except the
> + * first one is dirty.
> *
> * Detect a PTE batch: consecutive (present) PTEs that map consecutive
> * pages of the same large folio.
> @@ -149,18 +151,20 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
> */
> static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
> - bool *any_writable, bool *any_young)
> + bool *any_writable, bool *any_young, bool *any_dirty)
> {
> unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
> const pte_t *end_ptep = start_ptep + max_nr;
> pte_t expected_pte, *ptep;
> - bool writable, young;
> + bool writable, young, dirty;
> int nr;
>
> if (any_writable)
> *any_writable = false;
> if (any_young)
> *any_young = false;
> + if (any_dirty)
> + *any_dirty = false;
>
> VM_WARN_ON_FOLIO(!pte_present(pte), folio);
> VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
> @@ -176,6 +180,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> writable = !!pte_write(pte);
> if (any_young)
> young = !!pte_young(pte);
> + if (any_dirty)
> + dirty = !!pte_dirty(pte);
> pte = __pte_batch_clear_ignored(pte, flags);
>
> if (!pte_same(pte, expected_pte))
> @@ -193,6 +199,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> *any_writable |= writable;
> if (any_young)
> *any_young |= young;
> + if (any_dirty)
> + *any_dirty |= dirty;
>
> nr = pte_batch_hint(ptep, pte);
> expected_pte = pte_advance_pfn(expected_pte, nr);
> diff --git a/mm/madvise.c b/mm/madvise.c
> index d34ca6983227..b4103e2df346 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -321,6 +321,39 @@ static inline bool can_do_file_pageout(struct vm_area_struct *vma)
> file_permission(vma->vm_file, MAY_WRITE) == 0;
> }
>
> +static inline int madvise_folio_pte_batch(unsigned long addr, unsigned long end,
> + struct folio *folio, pte_t *ptep,
> + pte_t pte, bool *any_young,
> + bool *any_dirty)
> +{
> + int max_nr = (end - addr) / PAGE_SIZE;
> + const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> +
> + return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL,
> + any_young, any_dirty);
> +}
> +
> +static inline bool madvise_pte_split_folio(struct mm_struct *mm, pmd_t *pmd,
> + unsigned long addr,
> + struct folio *folio, pte_t **pte,
> + spinlock_t **ptl)
> +{
> + int err;
> +
> + if (!folio_trylock(folio))
> + return false;
> +
> + folio_get(folio);
> + pte_unmap_unlock(*pte, *ptl);
> + err = split_folio(folio);
> + folio_unlock(folio);
> + folio_put(folio);
> +
> + *pte = pte_offset_map_lock(mm, pmd, addr, ptl);
> +
> + return err == 0;
> +}
> +
> static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> unsigned long addr, unsigned long end,
> struct mm_walk *walk)
> @@ -456,41 +489,30 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> * next pte in the range.
> */
> if (folio_test_large(folio)) {
> - const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
> - FPB_IGNORE_SOFT_DIRTY;
> - int max_nr = (end - addr) / PAGE_SIZE;
> bool any_young;
>
> - nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
> - fpb_flags, NULL, &any_young);
> - if (any_young)
> - ptent = pte_mkyoung(ptent);
> + nr = madvise_folio_pte_batch(addr, end, folio, pte,
> + ptent, &any_young, NULL);
>
> if (nr < folio_nr_pages(folio)) {
> - int err;
> -
> if (folio_likely_mapped_shared(folio))
> continue;
> if (pageout_anon_only_filter && !folio_test_anon(folio))
> continue;
> - if (!folio_trylock(folio))
> - continue;
> - folio_get(folio);
> +
> arch_leave_lazy_mmu_mode();
> - pte_unmap_unlock(start_pte, ptl);
> - start_pte = NULL;
> - err = split_folio(folio);
> - folio_unlock(folio);
> - folio_put(folio);
> - start_pte = pte =
> - pte_offset_map_lock(mm, pmd, addr, &ptl);
> + if (madvise_pte_split_folio(mm, pmd, addr,
> + folio, &start_pte, &ptl))
> + nr = 0;
> if (!start_pte)
> break;
> + pte = start_pte;
> arch_enter_lazy_mmu_mode();
> - if (!err)
> - nr = 0;
> continue;
> }
> +
> + if (any_young)
> + ptent = pte_mkyoung(ptent);
> }
>
> /*
> @@ -507,7 +529,8 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> continue;
>
> if (!pageout && pte_young(ptent)) {
> - mkold_ptes(vma, addr, pte, nr);
> + clear_young_dirty_ptes(mm, addr, pte, nr,
> + CYDP_CLEAR_YOUNG);
> tlb_remove_tlb_entries(tlb, pte, nr, addr);
> }
>
> @@ -687,44 +710,51 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> continue;
>
> /*
> - * If pmd isn't transhuge but the folio is large and
> - * is owned by only this process, split it and
> - * deactivate all pages.
> + * If we encounter a large folio, only split it if it is not
> + * fully mapped within the range we are operating on. Otherwise
> + * leave it as is so that it can be marked as lazyfree. If we
> + * fail to split a folio, leave it in place and advance to the
> + * next pte in the range.
> */
> if (folio_test_large(folio)) {
> - int err;
> + bool any_young, any_dirty;
>
> - if (folio_likely_mapped_shared(folio))
> - break;
> - if (!folio_trylock(folio))
> - break;
> - folio_get(folio);
> - arch_leave_lazy_mmu_mode();
> - pte_unmap_unlock(start_pte, ptl);
> - start_pte = NULL;
> - err = split_folio(folio);
> - folio_unlock(folio);
> - folio_put(folio);
> - if (err)
> - break;
> - start_pte = pte =
> - pte_offset_map_lock(mm, pmd, addr, &ptl);
> - if (!start_pte)
> - break;
> - arch_enter_lazy_mmu_mode();
> - pte--;
> - addr -= PAGE_SIZE;
> - continue;
> + nr = madvise_folio_pte_batch(addr, end, folio, pte,
> + ptent, &any_young, &any_dirty);
> +
> + if (nr < folio_nr_pages(folio)) {
> + if (folio_likely_mapped_shared(folio))
> + continue;
> +
> + arch_leave_lazy_mmu_mode();
> + if (madvise_pte_split_folio(mm, pmd, addr,
> + folio, &start_pte, &ptl))
> + nr = 0;
> + if (!start_pte)
> + break;
> + pte = start_pte;
> + arch_enter_lazy_mmu_mode();
> + continue;
> + }
> +
> + if (any_young)
> + ptent = pte_mkyoung(ptent);
> + if (any_dirty)
> + ptent = pte_mkdirty(ptent);
> }
>
> + if (folio_mapcount(folio) != folio_nr_pages(folio))
> + continue;
> +
> if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
> if (!folio_trylock(folio))
> continue;
> /*
> - * If folio is shared with others, we mustn't clear
> - * the folio's dirty flag.
> + * If we have a large folio at this point, we know it is
> + * fully mapped so if its mapcount is the same as its
> + * number of pages, it must be exclusive.
> */
> - if (folio_mapcount(folio) != 1) {
> + if (folio_mapcount(folio) != folio_nr_pages(folio)) {
> folio_unlock(folio);
> continue;
> }
> @@ -740,19 +770,10 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> }
>
> if (pte_young(ptent) || pte_dirty(ptent)) {
> - /*
> - * Some of architecture(ex, PPC) don't update TLB
> - * with set_pte_at and tlb_remove_tlb_entry so for
> - * the portability, remap the pte with old|clean
> - * after pte clearing.
> - */
> - ptent = ptep_get_and_clear_full(mm, addr, pte,
> - tlb->fullmm);
> -
> - ptent = pte_mkold(ptent);
> - ptent = pte_mkclean(ptent);
> - set_pte_at(mm, addr, pte, ptent);
> - tlb_remove_tlb_entry(tlb, pte, addr);
> + clear_young_dirty_ptes(mm, addr, pte, nr,
> + CYDP_CLEAR_YOUNG |
> + CYDP_CLEAR_DIRTY);
> + tlb_remove_tlb_entries(tlb, pte, nr, addr);
> }
> folio_mark_lazyfree(folio);
> }
> diff --git a/mm/memory.c b/mm/memory.c
> index 76157b32faa8..b6fa5146b260 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -989,7 +989,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
> flags |= FPB_IGNORE_SOFT_DIRTY;
>
> nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags,
> - &any_writable, NULL);
> + &any_writable, NULL, NULL);
> folio_ref_add(folio, nr);
> if (folio_test_anon(folio)) {
> if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
> @@ -1558,7 +1558,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
> */
> if (unlikely(folio_test_large(folio) && max_nr != 1)) {
> nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, fpb_flags,
> - NULL, NULL);
> + NULL, NULL, NULL);
>
> zap_present_folio_ptes(tlb, vma, folio, page, pte, ptent, nr,
> addr, details, rss, force_flush,


2024-04-15 08:59:27

by Ryan Roberts

Subject: Re: [PATCH v6 2/2] mm/arm64: override clear_young_dirty_ptes() batch helper

On 13/04/2024 01:22, Lance Yang wrote:
> The per-pte get_and_clear/modify/set approach would result in
> unfolding/refolding for contpte mappings on arm64. So we need
> to override clear_young_dirty_ptes() for arm64 to avoid it.
>
> Suggested-by: David Hildenbrand <[email protected]>
> Suggested-by: Barry Song <[email protected]>
> Signed-off-by: Ryan Roberts <[email protected]>

No, afraid I haven't signed off yet!

> Signed-off-by: Lance Yang <[email protected]>
> ---
> arch/arm64/include/asm/pgtable.h | 37 ++++++++++++++++++++++++++++++++
> arch/arm64/mm/contpte.c | 28 ++++++++++++++++++++++++
> 2 files changed, 65 insertions(+)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 9fd8613b2db2..f951774dd2d6 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -1223,6 +1223,28 @@ static inline void __wrprotect_ptes(struct mm_struct *mm, unsigned long address,
> __ptep_set_wrprotect(mm, address, ptep);
> }
>
> +static inline void __clear_young_dirty_ptes(struct mm_struct *mm,
> + unsigned long addr, pte_t *ptep,
> + unsigned int nr, cydp_t flags)
> +{
> + pte_t pte;
> +
> + for (;;) {
> + pte = __ptep_get(ptep);
> +
> + if (flags | CYDP_CLEAR_YOUNG)

bug: should be bitwise AND (&).

> + pte = pte_mkold(pte);
> + if (flags | CYDP_CLEAR_DIRTY)
> + pte = pte_mkclean(pte);
> +
> + __set_pte(ptep, pte);

The __ptep_get() and __set_pte() are not atomic. This is only safe when you are
clearing BOTH access and dirty (as I explained in the previous version). If you
are only clearing one of the flags, you will need a similar cmpxchg loop as for
__ptep_test_and_clear_young(). Otherwise you can race with the HW and lose
information.
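
Something along these lines for the per-pte update, perhaps (an untested
sketch modelled on __ptep_test_and_clear_young(); the helper name is made
up):

	static inline void __clear_young_dirty_pte(pte_t *ptep, cydp_t flags)
	{
		pte_t old_pte, pte;

		pte = __ptep_get(ptep);
		do {
			old_pte = pte;

			if (flags & CYDP_CLEAR_YOUNG)
				pte = pte_mkold(pte);
			if (flags & CYDP_CLEAR_DIRTY)
				pte = pte_mkclean(pte);

			/* Retry if the HW updated AF/dirty under us. */
			pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
						       pte_val(old_pte),
						       pte_val(pte));
		} while (pte_val(pte) != pte_val(old_pte));
	}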

> + if (--nr == 0)
> + break;
> + ptep++;
> + addr += PAGE_SIZE;
> + }
> +}
> +
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> #define __HAVE_ARCH_PMDP_SET_WRPROTECT
> static inline void pmdp_set_wrprotect(struct mm_struct *mm,
> @@ -1379,6 +1401,9 @@ extern void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
> extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
> unsigned long addr, pte_t *ptep,
> pte_t entry, int dirty);
> +extern void contpte_clear_young_dirty_ptes(struct mm_struct *mm,
> + unsigned long addr, pte_t *ptep,
> + unsigned int nr, cydp_t flags);
>
> static __always_inline void contpte_try_fold(struct mm_struct *mm,
> unsigned long addr, pte_t *ptep, pte_t pte)
> @@ -1603,6 +1628,17 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
> return contpte_ptep_set_access_flags(vma, addr, ptep, entry, dirty);
> }
>
> +#define clear_young_dirty_ptes clear_young_dirty_ptes
> +static inline void clear_young_dirty_ptes(struct mm_struct *mm,
> + unsigned long addr, pte_t *ptep,
> + unsigned int nr, cydp_t flags)
> +{
> + if (likely(nr == 1 && !pte_cont(__ptep_get(ptep))))
> + __clear_young_dirty_ptes(mm, addr, ptep, nr, flags);
> + else
> + contpte_clear_young_dirty_ptes(mm, addr, ptep, nr, flags);
> +}
> +
> #else /* CONFIG_ARM64_CONTPTE */
>
> #define ptep_get __ptep_get
> @@ -1622,6 +1658,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
> #define wrprotect_ptes __wrprotect_ptes
> #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
> #define ptep_set_access_flags __ptep_set_access_flags
> +#define clear_young_dirty_ptes __clear_young_dirty_ptes
>
> #endif /* CONFIG_ARM64_CONTPTE */
>
> diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
> index 1b64b4c3f8bf..bf3b089d9641 100644
> --- a/arch/arm64/mm/contpte.c
> +++ b/arch/arm64/mm/contpte.c
> @@ -361,6 +361,34 @@ void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
> }
> EXPORT_SYMBOL_GPL(contpte_wrprotect_ptes);
>
> +void contpte_clear_young_dirty_ptes(struct mm_struct *mm, unsigned long addr,
> + pte_t *ptep, unsigned int nr, cydp_t flags)
> +{
> + /*
> + * We can safely clear access/dirty without needing to unfold from
> + * the architecture's perspective, even when contpte is set. If the
> + * range starts or ends midway through a contpte block, we can just
> + * expand to include the full contpte block. While this is not
> + * exactly what the core-mm asked for, it tracks access/dirty per
> + * folio, not per page. And since we only create a contpte block
> + * when it is covered by a single folio, we can get away with
> + * clearing access/dirty for the whole block.
> + */
> + unsigned int start = addr;
> + unsigned int end = start + nr;

These are addresses; they should be unsigned long. May have been my error
originally when I sent you the example snippet.

Thanks,
Ryan

> +
> + if (pte_cont(__ptep_get(ptep + nr - 1)))
> + end = ALIGN(end, CONT_PTE_SIZE);
> +
> + if (pte_cont(__ptep_get(ptep))) {
> + start = ALIGN_DOWN(start, CONT_PTE_SIZE);
> + ptep = contpte_align_down(ptep);
> + }
> +
> + __clear_young_dirty_ptes(mm, start, ptep, end - start, flags);
> +}
> +EXPORT_SYMBOL_GPL(contpte_clear_young_dirty_ptes);
> +
> int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
> unsigned long addr, pte_t *ptep,
> pte_t entry, int dirty)


2024-04-15 09:30:37

by Lance Yang

Subject: Re: [PATCH v6 1/2] mm/madvise: optimize lazyfreeing with mTHP in madvise_free

Hey Ryan,

Thanks a lot for taking time to review!

On Mon, Apr 15, 2024 at 4:47 PM Ryan Roberts <[email protected]> wrote:
>
> On 13/04/2024 01:22, Lance Yang wrote:
> > This patch optimizes lazyfreeing with PTE-mapped mTHP[1]
> > (Inspired by David Hildenbrand[2]). We aim to avoid unnecessary folio
> > splitting if the large folio is fully mapped within the target range.
> >
> > If a large folio is locked or shared, or if we fail to split it, we just
> > leave it in place and advance to the next PTE in the range. But note that
> > the behavior has changed; previously, any failure of this sort would cause
> > the entire operation to give up. As large folios become more common,
> > sticking to the old way could result in wasted opportunities.
> >
> > On an Intel i5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
> > the same size results in the following runtimes for madvise(MADV_FREE) in
> > seconds (shorter is better):
> >
> > Folio Size |   Old    |   New    | Change
> > ------------------------------------------
> >       4KiB | 0.590251 | 0.590259 |    0%
> >      16KiB | 2.990447 | 0.185655 |  -94%
> >      32KiB | 2.547831 | 0.104870 |  -95%
> >      64KiB | 2.457796 | 0.052812 |  -97%
> >     128KiB | 2.281034 | 0.032777 |  -99%
> >     256KiB | 2.230387 | 0.017496 |  -99%
> >     512KiB | 2.189106 | 0.010781 |  -99%
> >    1024KiB | 2.183949 | 0.007753 |  -99%
> >    2048KiB | 0.002799 | 0.002804 |    0%
> >
> > [1] https://lkml.kernel.org/r/[email protected]
> > [2] https://lore.kernel.org/linux-mm/[email protected]
> >
> > Signed-off-by: Lance Yang <[email protected]>
>
> This is looking close IMHO. Just one bug and suggestion below.
>
>
> > ---
> > include/linux/mm_types.h | 9 +++
> > include/linux/pgtable.h | 42 +++++++++++
> > mm/internal.h | 12 +++-
> > mm/madvise.c | 147 ++++++++++++++++++++++-----------------
> > mm/memory.c | 4 +-
> > 5 files changed, 147 insertions(+), 67 deletions(-)
> >
> > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> > index c432add95913..3c224e25f473 100644
> > --- a/include/linux/mm_types.h
> > +++ b/include/linux/mm_types.h
> > @@ -1367,6 +1367,15 @@ enum fault_flag {
> >
> > typedef unsigned int __bitwise zap_flags_t;
> >
> > +/* Flags for clear_young_dirty_ptes(). */
> > +typedef int __bitwise cydp_t;
> > +
> > +/* Make the PTEs old, as pte_mkold() would */
> > +#define CYDP_CLEAR_YOUNG ((__force cydp_t)BIT(0))
> > +
> > +/* Make the PTEs clean, as pte_mkclean() would */
> > +#define CYDP_CLEAR_DIRTY ((__force cydp_t)BIT(1))
> > +
> > /*
> > * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
> > * other. Here is what they mean, and how to use them:
> > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > index e2f45e22a6d1..d7958243f099 100644
> > --- a/include/linux/pgtable.h
> > +++ b/include/linux/pgtable.h
> > @@ -489,6 +489,48 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
> > }
> > #endif
> >
> > +#ifndef clear_young_dirty_ptes
> > +/**
> > + * clear_young_dirty_ptes - Mark PTEs that map consecutive pages of the
> > + * same folio as old/clean.
> > + * @mm: Address space the pages are mapped into.
> > + * @addr: Address the first page is mapped at.
> > + * @ptep: Page table pointer for the first entry.
> > + * @nr: Number of entries to mark old/clean.
> > + * @flags: Flags to modify the PTE batch semantics.
> > + *
> > + * May be overridden by the architecture; otherwise, implemented by
> > + * get_and_clear/modify/set for each pte in the range.
> > + *
> > + * Note that PTE bits in the PTE range besides the PFN can differ. For example,
> > + * some PTEs might be write-protected.
> > + *
> > + * Context: The caller holds the page table lock. The PTEs map consecutive
> > + * pages that belong to the same folio. The PTEs are all in the same PMD.
> > + */
> > +static inline void clear_young_dirty_ptes(struct mm_struct *mm,
>
> My suggestion to introduce clear_young_dirty_ptes() was so that we could
> *remove* mkold_ptes(). So I think it would be good to split that into a separate

Sorry, I forgot to remove it :(

> preparatory patch, where you change all the callers to call the new function and
> remove mkold_ptes().

Thanks for your suggestion!
I'll split this into a separate preparatory patch, modify all the callers
to use the new function clear_young_dirty_ptes(), and then remove mkold_ptes().

>
> Additionally since many arches already override ptep_test_and_clear_young() (and
> that's what the default mkold_ptes() did), you might want to call that in the
> below loop if *only* CYDP_CLEAR_YOUNG is set to avoid the possibility of any
> regression. I know I only made this change for the swap-out series so what you
> have done may well be ok in practice - and certainly cleaner. It would be good
> to hear others' opinions.

It makes sense to me. I'll consider calling ptep_test_and_clear_young() in
the loop only when CYDP_CLEAR_YOUNG is set to avoid any regression.
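
Something like the following, perhaps (untested sketch; it also switches
the first parameter to vma, as you suggested):

	static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
						  unsigned long addr, pte_t *ptep,
						  unsigned int nr, cydp_t flags)
	{
		pte_t pte;

		for (;;) {
			if (flags == CYDP_CLEAR_YOUNG) {
				/* Arches often have a faster version of this. */
				ptep_test_and_clear_young(vma, addr, ptep);
			} else {
				pte = ptep_get_and_clear(vma->vm_mm, addr, ptep);
				if (flags & CYDP_CLEAR_YOUNG)
					pte = pte_mkold(pte);
				if (flags & CYDP_CLEAR_DIRTY)
					pte = pte_mkclean(pte);
				set_pte_at(vma->vm_mm, addr, ptep, pte);
			}

			if (--nr == 0)
				break;
			ptep++;
			addr += PAGE_SIZE;
		}
	}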

Thanks, and I'll wait to hear others' opinions.

>
> Note the existing mkold_ptes() takes a vma instead of mm (because that's what
> ptep_test_and_clear_young() takes). So suggest passing vma. You can get mm from
> vma->mm.

Got it.

>
> > + unsigned long addr, pte_t *ptep,
> > + unsigned int nr, cydp_t flags)
> > +{
> > + pte_t pte;
> > +
> > + for (;;) {
>
> Suggestion:
> if (flags == CYDP_CLEAR_YOUNG) {
> ptep_test_and_clear_young(vma, addr, ptep);
> else {
>
> > + pte = ptep_get_and_clear(mm, addr, ptep);
> > +
> > + if (flags | CYDP_CLEAR_YOUNG)
>
> bug: this needs to be bitwise and (&). Currently it will always evaluate to
> true. Same for next one.

Sorry, my bad for the oversight and mistake :(
I'll fix it, thanks!

Thanks,
Lance

>
> > + pte = pte_mkold(pte);
> > + if (flags | CYDP_CLEAR_DIRTY)
> > + pte = pte_mkclean(pte);
> > +
> > + set_pte_at(mm, addr, ptep, pte);
>
> }
>
> > + if (--nr == 0)
> > + break;
> > + ptep++;
> > + addr += PAGE_SIZE;
> > + }
> > +}
> > +#endif
> > +
> > static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
> > pte_t *ptep)
> > {
> > diff --git a/mm/internal.h b/mm/internal.h
> > index 3c0f3e3f9d99..ab8fcdeaf6eb 100644
> > --- a/mm/internal.h
> > +++ b/mm/internal.h
> > @@ -134,6 +134,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
> > * first one is writable.
> > * @any_young: Optional pointer to indicate whether any entry except the
> > * first one is young.
> > + * @any_dirty: Optional pointer to indicate whether any entry except the
> > + * first one is dirty.
> > *
> > * Detect a PTE batch: consecutive (present) PTEs that map consecutive
> > * pages of the same large folio.
> > @@ -149,18 +151,20 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
> > */
> > static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> > pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
> > - bool *any_writable, bool *any_young)
> > + bool *any_writable, bool *any_young, bool *any_dirty)
> > {
> > unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
> > const pte_t *end_ptep = start_ptep + max_nr;
> > pte_t expected_pte, *ptep;
> > - bool writable, young;
> > + bool writable, young, dirty;
> > int nr;
> >
> > if (any_writable)
> > *any_writable = false;
> > if (any_young)
> > *any_young = false;
> > + if (any_dirty)
> > + *any_dirty = false;
> >
> > VM_WARN_ON_FOLIO(!pte_present(pte), folio);
> > VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
> > @@ -176,6 +180,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> > writable = !!pte_write(pte);
> > if (any_young)
> > young = !!pte_young(pte);
> > + if (any_dirty)
> > + dirty = !!pte_dirty(pte);
> > pte = __pte_batch_clear_ignored(pte, flags);
> >
> > if (!pte_same(pte, expected_pte))
> > @@ -193,6 +199,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> > *any_writable |= writable;
> > if (any_young)
> > *any_young |= young;
> > + if (any_dirty)
> > + *any_dirty |= dirty;
> >
> > nr = pte_batch_hint(ptep, pte);
> > expected_pte = pte_advance_pfn(expected_pte, nr);
> > diff --git a/mm/madvise.c b/mm/madvise.c
> > index d34ca6983227..b4103e2df346 100644
> > --- a/mm/madvise.c
> > +++ b/mm/madvise.c
> > @@ -321,6 +321,39 @@ static inline bool can_do_file_pageout(struct vm_area_struct *vma)
> > file_permission(vma->vm_file, MAY_WRITE) == 0;
> > }
> >
> > +static inline int madvise_folio_pte_batch(unsigned long addr, unsigned long end,
> > + struct folio *folio, pte_t *ptep,
> > + pte_t pte, bool *any_young,
> > + bool *any_dirty)
> > +{
> > + int max_nr = (end - addr) / PAGE_SIZE;
> > + const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> > +
> > + return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL,
> > + any_young, any_dirty);
> > +}
> > +
> > +static inline bool madvise_pte_split_folio(struct mm_struct *mm, pmd_t *pmd,
> > + unsigned long addr,
> > + struct folio *folio, pte_t **pte,
> > + spinlock_t **ptl)
> > +{
> > + int err;
> > +
> > + if (!folio_trylock(folio))
> > + return false;
> > +
> > + folio_get(folio);
> > + pte_unmap_unlock(*pte, *ptl);
> > + err = split_folio(folio);
> > + folio_unlock(folio);
> > + folio_put(folio);
> > +
> > + *pte = pte_offset_map_lock(mm, pmd, addr, ptl);
> > +
> > + return err == 0;
> > +}
> > +
> > static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> > unsigned long addr, unsigned long end,
> > struct mm_walk *walk)
> > @@ -456,41 +489,30 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> > * next pte in the range.
> > */
> > if (folio_test_large(folio)) {
> > - const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
> > - FPB_IGNORE_SOFT_DIRTY;
> > - int max_nr = (end - addr) / PAGE_SIZE;
> > bool any_young;
> >
> > - nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
> > - fpb_flags, NULL, &any_young);
> > - if (any_young)
> > - ptent = pte_mkyoung(ptent);
> > + nr = madvise_folio_pte_batch(addr, end, folio, pte,
> > + ptent, &any_young, NULL);
> >
> > if (nr < folio_nr_pages(folio)) {
> > - int err;
> > -
> > if (folio_likely_mapped_shared(folio))
> > continue;
> > if (pageout_anon_only_filter && !folio_test_anon(folio))
> > continue;
> > - if (!folio_trylock(folio))
> > - continue;
> > - folio_get(folio);
> > +
> > arch_leave_lazy_mmu_mode();
> > - pte_unmap_unlock(start_pte, ptl);
> > - start_pte = NULL;
> > - err = split_folio(folio);
> > - folio_unlock(folio);
> > - folio_put(folio);
> > - start_pte = pte =
> > - pte_offset_map_lock(mm, pmd, addr, &ptl);
> > + if (madvise_pte_split_folio(mm, pmd, addr,
> > + folio, &start_pte, &ptl))
> > + nr = 0;
> > if (!start_pte)
> > break;
> > + pte = start_pte;
> > arch_enter_lazy_mmu_mode();
> > - if (!err)
> > - nr = 0;
> > continue;
> > }
> > +
> > + if (any_young)
> > + ptent = pte_mkyoung(ptent);
> > }
> >
> > /*
> > @@ -507,7 +529,8 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> > continue;
> >
> > if (!pageout && pte_young(ptent)) {
> > - mkold_ptes(vma, addr, pte, nr);
> > + clear_young_dirty_ptes(mm, addr, pte, nr,
> > + CYDP_CLEAR_YOUNG);
> > tlb_remove_tlb_entries(tlb, pte, nr, addr);
> > }
> >
> > @@ -687,44 +710,51 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> > continue;
> >
> > /*
> > - * If pmd isn't transhuge but the folio is large and
> > - * is owned by only this process, split it and
> > - * deactivate all pages.
> > + * If we encounter a large folio, only split it if it is not
> > + * fully mapped within the range we are operating on. Otherwise
> > + * leave it as is so that it can be marked as lazyfree. If we
> > + * fail to split a folio, leave it in place and advance to the
> > + * next pte in the range.
> > */
> > if (folio_test_large(folio)) {
> > - int err;
> > + bool any_young, any_dirty;
> >
> > - if (folio_likely_mapped_shared(folio))
> > - break;
> > - if (!folio_trylock(folio))
> > - break;
> > - folio_get(folio);
> > - arch_leave_lazy_mmu_mode();
> > - pte_unmap_unlock(start_pte, ptl);
> > - start_pte = NULL;
> > - err = split_folio(folio);
> > - folio_unlock(folio);
> > - folio_put(folio);
> > - if (err)
> > - break;
> > - start_pte = pte =
> > - pte_offset_map_lock(mm, pmd, addr, &ptl);
> > - if (!start_pte)
> > - break;
> > - arch_enter_lazy_mmu_mode();
> > - pte--;
> > - addr -= PAGE_SIZE;
> > - continue;
> > + nr = madvise_folio_pte_batch(addr, end, folio, pte,
> > + ptent, &any_young, &any_dirty);
> > +
> > + if (nr < folio_nr_pages(folio)) {
> > + if (folio_likely_mapped_shared(folio))
> > + continue;
> > +
> > + arch_leave_lazy_mmu_mode();
> > + if (madvise_pte_split_folio(mm, pmd, addr,
> > + folio, &start_pte, &ptl))
> > + nr = 0;
> > + if (!start_pte)
> > + break;
> > + pte = start_pte;
> > + arch_enter_lazy_mmu_mode();
> > + continue;
> > + }
> > +
> > + if (any_young)
> > + ptent = pte_mkyoung(ptent);
> > + if (any_dirty)
> > + ptent = pte_mkdirty(ptent);
> > }
> >
> > + if (folio_mapcount(folio) != folio_nr_pages(folio))
> > + continue;
> > +
> > if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
> > if (!folio_trylock(folio))
> > continue;
> > /*
> > - * If folio is shared with others, we mustn't clear
> > - * the folio's dirty flag.
> > + * If we have a large folio at this point, we know it is
> > + * fully mapped so if its mapcount is the same as its
> > + * number of pages, it must be exclusive.
> > */
> > - if (folio_mapcount(folio) != 1) {
> > + if (folio_mapcount(folio) != folio_nr_pages(folio)) {
> > folio_unlock(folio);
> > continue;
> > }
> > @@ -740,19 +770,10 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> > }
> >
> > if (pte_young(ptent) || pte_dirty(ptent)) {
> > - /*
> > - * Some of architecture(ex, PPC) don't update TLB
> > - * with set_pte_at and tlb_remove_tlb_entry so for
> > - * the portability, remap the pte with old|clean
> > - * after pte clearing.
> > - */
> > - ptent = ptep_get_and_clear_full(mm, addr, pte,
> > - tlb->fullmm);
> > -
> > - ptent = pte_mkold(ptent);
> > - ptent = pte_mkclean(ptent);
> > - set_pte_at(mm, addr, pte, ptent);
> > - tlb_remove_tlb_entry(tlb, pte, addr);
> > + clear_young_dirty_ptes(mm, addr, pte, nr,
> > + CYDP_CLEAR_YOUNG |
> > + CYDP_CLEAR_DIRTY);
> > + tlb_remove_tlb_entries(tlb, pte, nr, addr);
> > }
> > folio_mark_lazyfree(folio);
> > }
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 76157b32faa8..b6fa5146b260 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -989,7 +989,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
> > flags |= FPB_IGNORE_SOFT_DIRTY;
> >
> > nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags,
> > - &any_writable, NULL);
> > + &any_writable, NULL, NULL);
> > folio_ref_add(folio, nr);
> > if (folio_test_anon(folio)) {
> > if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
> > @@ -1558,7 +1558,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
> > */
> > if (unlikely(folio_test_large(folio) && max_nr != 1)) {
> > nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, fpb_flags,
> > - NULL, NULL);
> > + NULL, NULL, NULL);
> >
> > zap_present_folio_ptes(tlb, vma, folio, page, pte, ptent, nr,
> > addr, details, rss, force_flush,
>

2024-04-15 09:41:38

by Lance Yang

Subject: Re: [PATCH v6 2/2] mm/arm64: override clear_young_dirty_ptes() batch helper

On Mon, Apr 15, 2024 at 4:59 PM Ryan Roberts <[email protected]> wrote:
>
> On 13/04/2024 01:22, Lance Yang wrote:
> > The per-pte get_and_clear/modify/set approach would result in
> > unfolding/refolding for contpte mappings on arm64. So we need
> > to override clear_young_dirty_ptes() for arm64 to avoid it.
> >
> > Suggested-by: David Hildenbrand <[email protected]>
> > Suggested-by: Barry Song <[email protected]>
> > Signed-off-by: Ryan Roberts <[email protected]>
>
> No, afraid I haven't signed off yet!

Actually, you've done most of this change, and I just did the legwork :)
But I'll remove this s-o-b.

>
> > Signed-off-by: Lance Yang <[email protected]>
> > ---
> > arch/arm64/include/asm/pgtable.h | 37 ++++++++++++++++++++++++++++++++
> > arch/arm64/mm/contpte.c | 28 ++++++++++++++++++++++++
> > 2 files changed, 65 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> > index 9fd8613b2db2..f951774dd2d6 100644
> > --- a/arch/arm64/include/asm/pgtable.h
> > +++ b/arch/arm64/include/asm/pgtable.h
> > @@ -1223,6 +1223,28 @@ static inline void __wrprotect_ptes(struct mm_struct *mm, unsigned long address,
> > __ptep_set_wrprotect(mm, address, ptep);
> > }
> >
> > +static inline void __clear_young_dirty_ptes(struct mm_struct *mm,
> > + unsigned long addr, pte_t *ptep,
> > + unsigned int nr, cydp_t flags)
> > +{
> > + pte_t pte;
> > +
> > + for (;;) {
> > + pte = __ptep_get(ptep);
> > +
> > + if (flags | CYDP_CLEAR_YOUNG)
>
> bug: should be bitwise AND (&).

Good spot! Thanks!

>
> > + pte = pte_mkold(pte);
> > + if (flags | CYDP_CLEAR_DIRTY)
> > + pte = pte_mkclean(pte);
> > +
> > + __set_pte(ptep, pte);
>
> The __ptep_get() and __set_pte() are not atomic. This is only safe when you are
> clearing BOTH access and dirty (as I explained in the previous version). If you
> are only clearing one of the flags, you will need a similar cmpxchg loop as for
> __ptep_test_and_clear_young(). Otherwise you can race with the HW and lose
> information.

Thanks again for your patience and explanation!
I still got it wrong :(

>
> > + if (--nr == 0)
> > + break;
> > + ptep++;
> > + addr += PAGE_SIZE;
> > + }
> > +}
> > +
> > #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > #define __HAVE_ARCH_PMDP_SET_WRPROTECT
> > static inline void pmdp_set_wrprotect(struct mm_struct *mm,
> > @@ -1379,6 +1401,9 @@ extern void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
> > extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
> > unsigned long addr, pte_t *ptep,
> > pte_t entry, int dirty);
> > +extern void contpte_clear_young_dirty_ptes(struct mm_struct *mm,
> > + unsigned long addr, pte_t *ptep,
> > + unsigned int nr, cydp_t flags);
> >
> > static __always_inline void contpte_try_fold(struct mm_struct *mm,
> > unsigned long addr, pte_t *ptep, pte_t pte)
> > @@ -1603,6 +1628,17 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
> > return contpte_ptep_set_access_flags(vma, addr, ptep, entry, dirty);
> > }
> >
> > +#define clear_young_dirty_ptes clear_young_dirty_ptes
> > +static inline void clear_young_dirty_ptes(struct mm_struct *mm,
> > + unsigned long addr, pte_t *ptep,
> > + unsigned int nr, cydp_t flags)
> > +{
> > + if (likely(nr == 1 && !pte_cont(__ptep_get(ptep))))
> > + __clear_young_dirty_ptes(mm, addr, ptep, nr, flags);
> > + else
> > + contpte_clear_young_dirty_ptes(mm, addr, ptep, nr, flags);
> > +}
> > +
> > #else /* CONFIG_ARM64_CONTPTE */
> >
> > #define ptep_get __ptep_get
> > @@ -1622,6 +1658,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
> > #define wrprotect_ptes __wrprotect_ptes
> > #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
> > #define ptep_set_access_flags __ptep_set_access_flags
> > +#define clear_young_dirty_ptes __clear_young_dirty_ptes
> >
> > #endif /* CONFIG_ARM64_CONTPTE */
> >
> > diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
> > index 1b64b4c3f8bf..bf3b089d9641 100644
> > --- a/arch/arm64/mm/contpte.c
> > +++ b/arch/arm64/mm/contpte.c
> > @@ -361,6 +361,34 @@ void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
> > }
> > EXPORT_SYMBOL_GPL(contpte_wrprotect_ptes);
> >
> > +void contpte_clear_young_dirty_ptes(struct mm_struct *mm, unsigned long addr,
> > + pte_t *ptep, unsigned int nr, cydp_t flags)
> > +{
> > + /*
> > + * We can safely clear access/dirty without needing to unfold from
> > + * the architecture's perspective, even when contpte is set. If the
> > + * range starts or ends midway through a contpte block, we can just
> > + * expand to include the full contpte block. While this is not
> > + * exactly what the core-mm asked for, it tracks access/dirty per
> > + * folio, not per page. And since we only create a contpte block
> > + * when it is covered by a single folio, we can get away with
> > + * clearing access/dirty for the whole block.
> > + */
> > + unsigned int start = addr;
> > + unsigned int end = start + nr;
>
> These are addresses; they should be unsigned long. May have been my error
> originally when I sent you the example snippet.

Got it. I'll sort it.
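
For the record, I think the fixed-up version ends up roughly as below
(untested; while changing the types, nr presumably also needs scaling by
PAGE_SIZE so that the ALIGN() operates on byte addresses):

	void contpte_clear_young_dirty_ptes(struct mm_struct *mm,
					    unsigned long addr, pte_t *ptep,
					    unsigned int nr, cydp_t flags)
	{
		/* ... same comment as before ... */
		unsigned long start = addr;
		unsigned long end = start + nr * PAGE_SIZE;

		if (pte_cont(__ptep_get(ptep + nr - 1)))
			end = ALIGN(end, CONT_PTE_SIZE);

		if (pte_cont(__ptep_get(ptep))) {
			start = ALIGN_DOWN(start, CONT_PTE_SIZE);
			ptep = contpte_align_down(ptep);
		}

		__clear_young_dirty_ptes(mm, start, ptep,
					 (end - start) / PAGE_SIZE, flags);
	}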

Thanks again for your time!

Thanks,
Lance

>
> Thanks,
> Ryan
>
> > +
> > + if (pte_cont(__ptep_get(ptep + nr - 1)))
> > + end = ALIGN(end, CONT_PTE_SIZE);
> > +
> > + if (pte_cont(__ptep_get(ptep))) {
> > + start = ALIGN_DOWN(start, CONT_PTE_SIZE);
> > + ptep = contpte_align_down(ptep);
> > + }
> > +
> > + __clear_young_dirty_ptes(mm, start, ptep, end - start, flags);
> > +}
> > +EXPORT_SYMBOL_GPL(contpte_clear_young_dirty_ptes);
> > +
> > int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
> > unsigned long addr, pte_t *ptep,
> > pte_t entry, int dirty)
>