2023-10-13 08:58:47

by Kefeng Wang

Subject: [PATCH -next v2 00/19] mm: convert page cpupid functions to folios

The cpupid (or access time) used by NUMA balancing is stored in the
page flags, or in page->_last_cpupid when LAST_CPUPID_NOT_IN_PAGE_FLAGS
is set. This series converts the page cpupid interfaces to folio ones:
a new _last_cpupid field is added to struct folio, which lets us use
folio->_last_cpupid directly, and the page cpupid functions are
converted to their folio equivalents.

page_cpupid_last() -> folio_last_cpupid()
xchg_page_access_time() -> folio_xchg_access_time()
page_cpupid_xchg_last() -> folio_xchg_last_cpupid()

v2:
- add virtual to struct folio too
- re-write and split patches to make them easier to review
- rename to folio_last_cpupid/folio_xchg_last_cpupid/folio_xchg_access_time
- rebased on next-20231013

v1:
- drop inappropriate page_cpupid_reset_last conversion from RFC
- rebased on next-20231009

Kefeng Wang (19):
mm_types: add virtual and _last_cpupid into struct folio
mm: add folio_last_cpupid()
mm: memory: use folio_last_cpupid() in do_numa_page()
mm: huge_memory: use folio_last_cpupid() in do_huge_pmd_numa_page()
mm: huge_memory: use folio_last_cpupid() in __split_huge_page_tail()
mm: remove page_cpupid_last()
mm: add folio_xchg_access_time()
sched/fair: use folio_xchg_access_time() in numa_hint_fault_latency()
mm: mprotect: use a folio in change_pte_range()
mm: huge_memory: use a folio in change_huge_pmd()
mm: remove xchg_page_access_time()
mm: add folio_xchg_last_cpupid()
sched/fair: use folio_xchg_last_cpupid() in
should_numa_migrate_memory()
mm: migrate: use folio_xchg_last_cpupid() in folio_migrate_flags()
mm: huge_memory: use folio_xchg_last_cpupid() in
__split_huge_page_tail()
mm: make finish_mkwrite_fault() static
mm: convert wp_page_reuse() and finish_mkwrite_fault() to take a folio
mm: use folio_xchg_last_cpupid() in wp_page_reuse()
mm: remove page_cpupid_xchg_last()

include/linux/mm.h | 30 +++++++++++++++---------------
include/linux/mm_types.h | 22 ++++++++++++++++++----
kernel/sched/fair.c | 4 ++--
mm/huge_memory.c | 17 +++++++++--------
mm/memory.c | 37 +++++++++++++++++++------------------
mm/migrate.c | 8 ++++----
mm/mmzone.c | 6 +++---
mm/mprotect.c | 16 +++++++++-------
8 files changed, 79 insertions(+), 61 deletions(-)

--
2.27.0


2023-10-13 08:58:49

by Kefeng Wang

Subject: [PATCH -next v2 07/19] mm: add folio_xchg_access_time()

Add the folio_xchg_access_time() wrapper, which is required to convert
xchg_page_access_time() to a folio version later in the series.

Signed-off-by: Kefeng Wang <[email protected]>
---
include/linux/mm.h | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1d56a818b212..1238ab784d8b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1794,6 +1794,11 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
}
#endif /* CONFIG_NUMA_BALANCING */

+static inline int folio_xchg_access_time(struct folio *folio, int time)
+{
+ return xchg_page_access_time(&folio->page, time);
+}
+
#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)

/*
--
2.27.0

2023-10-13 08:58:52

by Kefeng Wang

Subject: [PATCH -next v2 03/19] mm: memory: use folio_last_cpupid() in do_numa_page()

Convert to use folio_last_cpupid() in do_numa_page().

Signed-off-by: Kefeng Wang <[email protected]>
---
mm/memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index c4b4aa4c1180..a1cf25a3ff16 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4861,7 +4861,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
!node_is_toptier(nid))
last_cpupid = (-1 & LAST_CPUPID_MASK);
else
- last_cpupid = page_cpupid_last(&folio->page);
+ last_cpupid = folio_last_cpupid(folio);
target_nid = numa_migrate_prep(folio, vma, vmf->address, nid, &flags);
if (target_nid == NUMA_NO_NODE) {
folio_put(folio);
--
2.27.0

2023-10-13 08:59:03

by Kefeng Wang

Subject: [PATCH -next v2 04/19] mm: huge_memory: use folio_last_cpupid() in do_huge_pmd_numa_page()

Convert to use folio_last_cpupid() in do_huge_pmd_numa_page().

Signed-off-by: Kefeng Wang <[email protected]>
---
mm/huge_memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c9cbcbf6697e..f9571bf92603 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1562,7 +1562,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
* to record page access time. So use default value.
*/
if (node_is_toptier(nid))
- last_cpupid = page_cpupid_last(&folio->page);
+ last_cpupid = folio_last_cpupid(folio);
target_nid = numa_migrate_prep(folio, vma, haddr, nid, &flags);
if (target_nid == NUMA_NO_NODE) {
folio_put(folio);
--
2.27.0

2023-10-13 08:59:06

by Kefeng Wang

Subject: [PATCH -next v2 05/19] mm: huge_memory: use folio_last_cpupid() in __split_huge_page_tail()

Convert to use folio_last_cpupid() in __split_huge_page_tail().

Signed-off-by: Kefeng Wang <[email protected]>
---
mm/huge_memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f9571bf92603..5455dfe4c3c7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2514,7 +2514,7 @@ static void __split_huge_page_tail(struct folio *folio, int tail,
if (page_is_idle(head))
set_page_idle(page_tail);

- page_cpupid_xchg_last(page_tail, page_cpupid_last(head));
+ page_cpupid_xchg_last(page_tail, folio_last_cpupid(folio));

/*
* always add to the tail because some iterators expect new
--
2.27.0

2023-10-13 08:59:13

by Kefeng Wang

Subject: [PATCH -next v2 13/19] sched/fair: use folio_xchg_last_cpupid() in should_numa_migrate_memory()

Convert to use folio_xchg_last_cpupid() in should_numa_migrate_memory().

Signed-off-by: Kefeng Wang <[email protected]>
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bc07f29a4a42..f3cb4c8974c5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1862,7 +1862,7 @@ bool should_numa_migrate_memory(struct task_struct *p, struct folio *folio,
}

this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid);
- last_cpupid = page_cpupid_xchg_last(&folio->page, this_cpupid);
+ last_cpupid = folio_xchg_last_cpupid(folio, this_cpupid);

if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
!node_is_toptier(src_nid) && !cpupid_valid(last_cpupid))
--
2.27.0

2023-10-13 08:59:15

by Kefeng Wang

Subject: [PATCH -next v2 08/19] sched/fair: use folio_xchg_access_time() in numa_hint_fault_latency()

Convert to use folio_xchg_access_time() in numa_hint_fault_latency().

Signed-off-by: Kefeng Wang <[email protected]>
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 78ad23fcb7f9..bc07f29a4a42 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1766,7 +1766,7 @@ static int numa_hint_fault_latency(struct folio *folio)
int last_time, time;

time = jiffies_to_msecs(jiffies);
- last_time = xchg_page_access_time(&folio->page, time);
+ last_time = folio_xchg_access_time(folio, time);

return (time - last_time) & PAGE_ACCESS_TIME_MASK;
}
--
2.27.0

2023-10-13 08:59:19

by Kefeng Wang

Subject: [PATCH -next v2 06/19] mm: remove page_cpupid_last()

Since all callers have been converted to folio_last_cpupid(), remove
page_cpupid_last().

Signed-off-by: Kefeng Wang <[email protected]>
---
include/linux/mm.h | 17 ++++++-----------
1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1c393a72037b..1d56a818b212 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1700,18 +1700,18 @@ static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
return xchg(&page->_last_cpupid, cpupid & LAST_CPUPID_MASK);
}

-static inline int page_cpupid_last(struct page *page)
+static inline int folio_last_cpupid(struct folio *folio)
{
- return page->_last_cpupid;
+ return folio->_last_cpupid;
}
static inline void page_cpupid_reset_last(struct page *page)
{
page->_last_cpupid = -1 & LAST_CPUPID_MASK;
}
#else
-static inline int page_cpupid_last(struct page *page)
+static inline int folio_last_cpupid(struct folio *folio)
{
- return (page->flags >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;
+ return (folio->flags >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;
}

extern int page_cpupid_xchg_last(struct page *page, int cpupid);
@@ -1750,9 +1750,9 @@ static inline int xchg_page_access_time(struct page *page, int time)
return 0;
}

-static inline int page_cpupid_last(struct page *page)
+static inline int folio_last_cpupid(struct folio *folio)
{
- return page_to_nid(page); /* XXX */
+ return folio_nid(folio); /* XXX */
}

static inline int cpupid_to_nid(int cpupid)
@@ -1794,11 +1794,6 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
}
#endif /* CONFIG_NUMA_BALANCING */

-static inline int folio_last_cpupid(struct folio *folio)
-{
- return page_cpupid_last(&folio->page);
-}
-
#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)

/*
--
2.27.0

2023-10-13 08:59:20

by Kefeng Wang

Subject: [PATCH -next v2 10/19] mm: huge_memory: use a folio in change_huge_pmd()

Use a folio in change_huge_pmd(), which helps to remove the last
xchg_page_access_time() caller.

Signed-off-by: Kefeng Wang <[email protected]>
---
mm/huge_memory.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5455dfe4c3c7..f01f345141da 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1856,7 +1856,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
if (is_swap_pmd(*pmd)) {
swp_entry_t entry = pmd_to_swp_entry(*pmd);
- struct page *page = pfn_swap_entry_to_page(entry);
+ struct folio *folio = page_folio(pfn_swap_entry_to_page(entry));
pmd_t newpmd;

VM_BUG_ON(!is_pmd_migration_entry(*pmd));
@@ -1865,7 +1865,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
* A protection check is difficult so
* just be safe and disable write
*/
- if (PageAnon(page))
+ if (folio_test_anon(folio))
entry = make_readable_exclusive_migration_entry(swp_offset(entry));
else
entry = make_readable_migration_entry(swp_offset(entry));
@@ -1887,7 +1887,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
#endif

if (prot_numa) {
- struct page *page;
+ struct folio *folio;
bool toptier;
/*
* Avoid trapping faults against the zero page. The read-only
@@ -1900,8 +1900,8 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
if (pmd_protnone(*pmd))
goto unlock;

- page = pmd_page(*pmd);
- toptier = node_is_toptier(page_to_nid(page));
+ folio = page_folio(pmd_page(*pmd));
+ toptier = node_is_toptier(folio_nid(folio));
/*
* Skip scanning top tier node if normal numa
* balancing is disabled
@@ -1912,7 +1912,8 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,

if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
!toptier)
- xchg_page_access_time(page, jiffies_to_msecs(jiffies));
+ folio_xchg_access_time(folio,
+ jiffies_to_msecs(jiffies));
}
/*
* In case prot_numa, we are under mmap_read_lock(mm). It's critical
--
2.27.0

2023-10-13 08:59:21

by Kefeng Wang

Subject: [PATCH -next v2 11/19] mm: remove xchg_page_access_time()

Since all callers have been converted to folio_xchg_access_time(),
remove xchg_page_access_time().

Signed-off-by: Kefeng Wang <[email protected]>
---
include/linux/mm.h | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1238ab784d8b..8a2ff345338b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1722,11 +1722,12 @@ static inline void page_cpupid_reset_last(struct page *page)
}
#endif /* LAST_CPUPID_NOT_IN_PAGE_FLAGS */

-static inline int xchg_page_access_time(struct page *page, int time)
+static inline int folio_xchg_access_time(struct folio *folio, int time)
{
int last_time;

- last_time = page_cpupid_xchg_last(page, time >> PAGE_ACCESS_TIME_BUCKETS);
+ last_time = page_cpupid_xchg_last(&folio->page,
+ time >> PAGE_ACCESS_TIME_BUCKETS);
return last_time << PAGE_ACCESS_TIME_BUCKETS;
}

@@ -1745,7 +1746,7 @@ static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
return page_to_nid(page); /* XXX */
}

-static inline int xchg_page_access_time(struct page *page, int time)
+static inline int folio_xchg_access_time(struct folio *folio, int time)
{
return 0;
}
@@ -1794,11 +1795,6 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
}
#endif /* CONFIG_NUMA_BALANCING */

-static inline int folio_xchg_access_time(struct folio *folio, int time)
-{
- return xchg_page_access_time(&folio->page, time);
-}
-
#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)

/*
--
2.27.0

2023-10-13 08:59:35

by Kefeng Wang

Subject: [PATCH -next v2 12/19] mm: add folio_xchg_last_cpupid()

Add the folio_xchg_last_cpupid() wrapper, which is required to convert
page_cpupid_xchg_last() to a folio version later in the series.

Signed-off-by: Kefeng Wang <[email protected]>
---
include/linux/mm.h | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8a2ff345338b..8229137e093b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1795,6 +1795,11 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
}
#endif /* CONFIG_NUMA_BALANCING */

+static inline int folio_xchg_last_cpupid(struct folio *folio, int cpupid)
+{
+ return page_cpupid_xchg_last(&folio->page, cpupid);
+}
+
#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)

/*
--
2.27.0

2023-10-13 08:59:38

by Kefeng Wang

Subject: [PATCH -next v2 15/19] mm: huge_memory: use folio_xchg_last_cpupid() in __split_huge_page_tail()

Convert to use folio_xchg_last_cpupid() in __split_huge_page_tail().

Signed-off-by: Kefeng Wang <[email protected]>
---
mm/huge_memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f01f345141da..f31f02472396 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2515,7 +2515,7 @@ static void __split_huge_page_tail(struct folio *folio, int tail,
if (page_is_idle(head))
set_page_idle(page_tail);

- page_cpupid_xchg_last(page_tail, folio_last_cpupid(folio));
+ folio_xchg_last_cpupid(new_folio, folio_last_cpupid(folio));

/*
* always add to the tail because some iterators expect new
--
2.27.0

2023-10-13 08:59:39

by Kefeng Wang

Subject: [PATCH -next v2 18/19] mm: use folio_xchg_last_cpupid() in wp_page_reuse()

Convert to use folio_xchg_last_cpupid() in wp_page_reuse(), and remove
the page variable.

Signed-off-by: Kefeng Wang <[email protected]>
---
mm/memory.c | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 6b58ceb0961f..e85c009917b4 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3022,19 +3022,20 @@ static inline void wp_page_reuse(struct vm_fault *vmf, struct folio *folio)
__releases(vmf->ptl)
{
struct vm_area_struct *vma = vmf->vma;
- struct page *page = vmf->page;
pte_t entry;

VM_BUG_ON(!(vmf->flags & FAULT_FLAG_WRITE));
- VM_BUG_ON(folio && folio_test_anon(folio) && !PageAnonExclusive(page));

- /*
- * Clear the pages cpupid information as the existing
- * information potentially belongs to a now completely
- * unrelated process.
- */
- if (page)
- page_cpupid_xchg_last(page, (1 << LAST_CPUPID_SHIFT) - 1);
+ if (folio) {
+ VM_BUG_ON(folio_test_anon(folio) &&
+ !PageAnonExclusive(vmf->page));
+ /*
+ * Clear the pages cpupid information as the existing
+ * information potentially belongs to a now completely
+ * unrelated process.
+ */
+ folio_xchg_last_cpupid(folio, (1 << LAST_CPUPID_SHIFT) - 1);
+ }

flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
entry = pte_mkyoung(vmf->orig_pte);
--
2.27.0

2023-10-13 08:59:42

by Kefeng Wang

Subject: [PATCH -next v2 14/19] mm: migrate: use folio_xchg_last_cpupid() in folio_migrate_flags()

Convert to use folio_xchg_last_cpupid() in folio_migrate_flags(), and
use folio_nid() directly instead of page_to_nid(&folio->page).

Signed-off-by: Kefeng Wang <[email protected]>
---
mm/migrate.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 5348827bd958..821c42d61ed0 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -588,20 +588,20 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
* Copy NUMA information to the new page, to prevent over-eager
* future migrations of this same page.
*/
- cpupid = page_cpupid_xchg_last(&folio->page, -1);
+ cpupid = folio_xchg_last_cpupid(folio, -1);
/*
* For memory tiering mode, when migrate between slow and fast
* memory node, reset cpupid, because that is used to record
* page access time in slow memory node.
*/
if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) {
- bool f_toptier = node_is_toptier(page_to_nid(&folio->page));
- bool t_toptier = node_is_toptier(page_to_nid(&newfolio->page));
+ bool f_toptier = node_is_toptier(folio_nid(folio));
+ bool t_toptier = node_is_toptier(folio_nid(newfolio));

if (f_toptier != t_toptier)
cpupid = -1;
}
- page_cpupid_xchg_last(&newfolio->page, cpupid);
+ folio_xchg_last_cpupid(newfolio, cpupid);

folio_migrate_ksm(newfolio, folio);
/*
--
2.27.0

2023-10-13 08:59:47

by Kefeng Wang

Subject: [PATCH -next v2 16/19] mm: make finish_mkwrite_fault() static

Make finish_mkwrite_fault() static since it is not used outside of
memory.c.

Signed-off-by: Kefeng Wang <[email protected]>
---
include/linux/mm.h | 1 -
mm/memory.c | 2 +-
2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8229137e093b..70eae2e7d5e5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1346,7 +1346,6 @@ void set_pte_range(struct vm_fault *vmf, struct folio *folio,
struct page *page, unsigned int nr, unsigned long addr);

vm_fault_t finish_fault(struct vm_fault *vmf);
-vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
#endif

/*
diff --git a/mm/memory.c b/mm/memory.c
index a1cf25a3ff16..b6cc24257683 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3272,7 +3272,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
* Return: %0 on success, %VM_FAULT_NOPAGE when PTE got changed before
* we acquired PTE lock.
*/
-vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf)
+static vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf)
{
WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address,
--
2.27.0

2023-10-13 08:59:55

by Kefeng Wang

Subject: [PATCH -next v2 09/19] mm: mprotect: use a folio in change_pte_range()

Use a folio in change_pte_range() to save three compound_head() calls.

Signed-off-by: Kefeng Wang <[email protected]>
---
mm/mprotect.c | 16 +++++++++-------
1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/mm/mprotect.c b/mm/mprotect.c
index f1dc8f8c84ef..81991102f785 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -114,7 +114,7 @@ static long change_pte_range(struct mmu_gather *tlb,
* pages. See similar comment in change_huge_pmd.
*/
if (prot_numa) {
- struct page *page;
+ struct folio *folio;
int nid;
bool toptier;

@@ -122,13 +122,14 @@ static long change_pte_range(struct mmu_gather *tlb,
if (pte_protnone(oldpte))
continue;

- page = vm_normal_page(vma, addr, oldpte);
- if (!page || is_zone_device_page(page) || PageKsm(page))
+ folio = vm_normal_folio(vma, addr, oldpte);
+ if (!folio || folio_is_zone_device(folio) ||
+ folio_test_ksm(folio))
continue;

/* Also skip shared copy-on-write pages */
if (is_cow_mapping(vma->vm_flags) &&
- page_count(page) != 1)
+ folio_ref_count(folio) != 1)
continue;

/*
@@ -136,14 +137,15 @@ static long change_pte_range(struct mmu_gather *tlb,
* it cannot move them all from MIGRATE_ASYNC
* context.
*/
- if (page_is_file_lru(page) && PageDirty(page))
+ if (folio_is_file_lru(folio) &&
+ folio_test_dirty(folio))
continue;

/*
* Don't mess with PTEs if page is already on the node
* a single-threaded process is running on.
*/
- nid = page_to_nid(page);
+ nid = folio_nid(folio);
if (target_node == nid)
continue;
toptier = node_is_toptier(nid);
@@ -157,7 +159,7 @@ static long change_pte_range(struct mmu_gather *tlb,
continue;
if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
!toptier)
- xchg_page_access_time(page,
+ folio_xchg_access_time(folio,
jiffies_to_msecs(jiffies));
}

--
2.27.0

2023-10-13 09:01:21

by Kefeng Wang

Subject: [PATCH -next v2 17/19] mm: convert wp_page_reuse() and finish_mkwrite_fault() to take a folio

This saves one compound_head() call, and is also in preparation for the
page_cpupid_xchg_last() conversion.

Signed-off-by: Kefeng Wang <[email protected]>
---
mm/memory.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index b6cc24257683..6b58ceb0961f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3018,7 +3018,7 @@ static vm_fault_t fault_dirty_shared_page(struct vm_fault *vmf)
* case, all we need to do here is to mark the page as writable and update
* any related book-keeping.
*/
-static inline void wp_page_reuse(struct vm_fault *vmf)
+static inline void wp_page_reuse(struct vm_fault *vmf, struct folio *folio)
__releases(vmf->ptl)
{
struct vm_area_struct *vma = vmf->vma;
@@ -3026,7 +3026,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
pte_t entry;

VM_BUG_ON(!(vmf->flags & FAULT_FLAG_WRITE));
- VM_BUG_ON(page && PageAnon(page) && !PageAnonExclusive(page));
+ VM_BUG_ON(folio && folio_test_anon(folio) && !PageAnonExclusive(page));

/*
* Clear the pages cpupid information as the existing
@@ -3272,7 +3272,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
* Return: %0 on success, %VM_FAULT_NOPAGE when PTE got changed before
* we acquired PTE lock.
*/
-static vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf)
+static vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf, struct folio *folio)
{
WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address,
@@ -3288,7 +3288,7 @@ static vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf)
pte_unmap_unlock(vmf->pte, vmf->ptl);
return VM_FAULT_NOPAGE;
}
- wp_page_reuse(vmf);
+ wp_page_reuse(vmf, folio);
return 0;
}

@@ -3312,9 +3312,9 @@ static vm_fault_t wp_pfn_shared(struct vm_fault *vmf)
ret = vma->vm_ops->pfn_mkwrite(vmf);
if (ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE))
return ret;
- return finish_mkwrite_fault(vmf);
+ return finish_mkwrite_fault(vmf, NULL);
}
- wp_page_reuse(vmf);
+ wp_page_reuse(vmf, NULL);
return 0;
}

@@ -3342,14 +3342,14 @@ static vm_fault_t wp_page_shared(struct vm_fault *vmf, struct folio *folio)
folio_put(folio);
return tmp;
}
- tmp = finish_mkwrite_fault(vmf);
+ tmp = finish_mkwrite_fault(vmf, folio);
if (unlikely(tmp & (VM_FAULT_ERROR | VM_FAULT_NOPAGE))) {
folio_unlock(folio);
folio_put(folio);
return tmp;
}
} else {
- wp_page_reuse(vmf);
+ wp_page_reuse(vmf, folio);
folio_lock(folio);
}
ret |= fault_dirty_shared_page(vmf);
@@ -3494,7 +3494,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
pte_unmap_unlock(vmf->pte, vmf->ptl);
return 0;
}
- wp_page_reuse(vmf);
+ wp_page_reuse(vmf, folio);
return 0;
}
/*
--
2.27.0

2023-10-13 09:01:50

by Kefeng Wang

Subject: [PATCH -next v2 19/19] mm: remove page_cpupid_xchg_last()

Since all callers have been converted to folio_xchg_last_cpupid(),
remove page_cpupid_xchg_last().

Signed-off-by: Kefeng Wang <[email protected]>
---
include/linux/mm.h | 19 +++++++------------
mm/mmzone.c | 6 +++---
2 files changed, 10 insertions(+), 15 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 70eae2e7d5e5..287d52ace444 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1694,9 +1694,9 @@ static inline bool __cpupid_match_pid(pid_t task_pid, int cpupid)

#define cpupid_match_pid(task, cpupid) __cpupid_match_pid(task->pid, cpupid)
#ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
-static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
+static inline int folio_xchg_last_cpupid(struct folio *folio, int cpupid)
{
- return xchg(&page->_last_cpupid, cpupid & LAST_CPUPID_MASK);
+ return xchg(&folio->_last_cpupid, cpupid & LAST_CPUPID_MASK);
}

static inline int folio_last_cpupid(struct folio *folio)
@@ -1713,7 +1713,7 @@ static inline int folio_last_cpupid(struct folio *folio)
return (folio->flags >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;
}

-extern int page_cpupid_xchg_last(struct page *page, int cpupid);
+int folio_xchg_last_cpupid(struct folio *folio, int cpupid);

static inline void page_cpupid_reset_last(struct page *page)
{
@@ -1725,8 +1725,8 @@ static inline int folio_xchg_access_time(struct folio *folio, int time)
{
int last_time;

- last_time = page_cpupid_xchg_last(&folio->page,
- time >> PAGE_ACCESS_TIME_BUCKETS);
+ last_time = folio_xchg_last_cpupid(folio,
+ time >> PAGE_ACCESS_TIME_BUCKETS);
return last_time << PAGE_ACCESS_TIME_BUCKETS;
}

@@ -1740,9 +1740,9 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
}
}
#else /* !CONFIG_NUMA_BALANCING */
-static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
+static inline int folio_xchg_last_cpupid(struct folio *folio, int cpupid)
{
- return page_to_nid(page); /* XXX */
+ return folio_nid(folio); /* XXX */
}

static inline int folio_xchg_access_time(struct folio *folio, int time)
@@ -1794,11 +1794,6 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
}
#endif /* CONFIG_NUMA_BALANCING */

-static inline int folio_xchg_last_cpupid(struct folio *folio, int cpupid)
-{
- return page_cpupid_xchg_last(&folio->page, cpupid);
-}
-
#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)

/*
diff --git a/mm/mmzone.c b/mm/mmzone.c
index 68e1511be12d..b594d3f268fe 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -93,19 +93,19 @@ void lruvec_init(struct lruvec *lruvec)
}

#if defined(CONFIG_NUMA_BALANCING) && !defined(LAST_CPUPID_NOT_IN_PAGE_FLAGS)
-int page_cpupid_xchg_last(struct page *page, int cpupid)
+int folio_xchg_last_cpupid(struct folio *folio, int cpupid)
{
unsigned long old_flags, flags;
int last_cpupid;

- old_flags = READ_ONCE(page->flags);
+ old_flags = READ_ONCE(folio->flags);
do {
flags = old_flags;
last_cpupid = (flags >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;

flags &= ~(LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT);
flags |= (cpupid & LAST_CPUPID_MASK) << LAST_CPUPID_PGSHIFT;
- } while (unlikely(!try_cmpxchg(&page->flags, &old_flags, flags)));
+ } while (unlikely(!try_cmpxchg(&folio->flags, &old_flags, flags)));

return last_cpupid;
}
--
2.27.0

2023-10-13 15:13:57

by Matthew Wilcox

Subject: Re: [PATCH -next v2 09/19] mm: mprotect: use a folio in change_pte_range()

On Fri, Oct 13, 2023 at 04:55:53PM +0800, Kefeng Wang wrote:
> Use a folio in change_pte_range() to save three compound_head() calls.

Yes, but here we have a change of behaviour, which should be argued
is desirable. Before if a partial THP was mapped, or a fs large
folio, we would do this to individual pages. Now we're doing it to the
entire folio. Is that desirable? I don't have the background to argue
either way.

> @@ -157,7 +159,7 @@ static long change_pte_range(struct mmu_gather *tlb,
> continue;
> if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
> !toptier)
> - xchg_page_access_time(page,
> + folio_xchg_access_time(folio,
> jiffies_to_msecs(jiffies));
> }

2023-10-13 15:19:24

by Matthew Wilcox

Subject: Re: [PATCH -next v2 18/19] mm: use folio_xchg_last_cpupid() in wp_page_reuse()

On Fri, Oct 13, 2023 at 04:56:02PM +0800, Kefeng Wang wrote:
> Convert to use folio_xchg_last_cpupid() in wp_page_reuse(), and remove
> page variable.

... another case where we're changing behaviour and need to argue it's
desirable.

> - /*
> - * Clear the pages cpupid information as the existing
> - * information potentially belongs to a now completely
> - * unrelated process.
> - */
> - if (page)
> - page_cpupid_xchg_last(page, (1 << LAST_CPUPID_SHIFT) - 1);
> + if (folio) {
> + VM_BUG_ON(folio_test_anon(folio) &&
> + !PageAnonExclusive(vmf->page));
> + /*
> + * Clear the pages cpupid information as the existing

s/pages/folio's/

> + * information potentially belongs to a now completely
> + * unrelated process.
> + */
> + folio_xchg_last_cpupid(folio, (1 << LAST_CPUPID_SHIFT) - 1);
> + }

2023-10-14 03:13:07

by Kefeng Wang

Subject: Re: [PATCH -next v2 09/19] mm: mprotect: use a folio in change_pte_range()



On 2023/10/13 23:13, Matthew Wilcox wrote:
> On Fri, Oct 13, 2023 at 04:55:53PM +0800, Kefeng Wang wrote:
>> Use a folio in change_pte_range() to save three compound_head() calls.
>
> Yes, but here we have a change of behaviour, which should be argued
> is desirable. Before if a partial THP was mapped, or a fs large
> folio, we would do this to individual pages. Now we're doing it to the
> entire folio. Is that desirable? I don't have the background to argue
> either way.

Huang's reply in v1 [1] already covered this: only the head page's
last_cpupid is used, and large folios are not handled by do_numa_page().
If large folio NUMA balancing is supported later, a large folio mapped
by a single process could be migrated as a whole, while a large folio
mapped by multiple processes might be split; on split, the last_cpupid
is copied from the head page to the tail pages. Either way, I don't
think this change or the one in wp_page_reuse() breaks the current NUMA
balancing.

Thanks.


[1] https://lore.kernel.org/linux-mm/[email protected]/
>
>> @@ -157,7 +159,7 @@ static long change_pte_range(struct mmu_gather *tlb,
>> continue;
>> if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
>> !toptier)
>> - xchg_page_access_time(page,
>> + folio_xchg_access_time(folio,
>> jiffies_to_msecs(jiffies));
>> }
>

2023-10-17 07:34:41

by kernel test robot

Subject: Re: [PATCH -next v2 17/19] mm: convert wp_page_reuse() and finish_mkwrite_fault() to take a folio

Hi Kefeng,

kernel test robot noticed the following build warnings:

[auto build test WARNING on akpm-mm/mm-everything]

url: https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/mm_types-add-virtual-and-_last_cpupid-into-struct-folio/20231017-121040
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/r/20231013085603.1227349-18-wangkefeng.wang%40huawei.com
patch subject: [PATCH -next v2 17/19] mm: convert wp_page_reuse() and finish_mkwrite_fault() to take a folio
config: m68k-allyesconfig (https://download.01.org/0day-ci/archive/20231017/[email protected]/config)
compiler: m68k-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231017/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All warnings (new ones prefixed by >>):

>> mm/memory.c:3276: warning: Function parameter or member 'folio' not described in 'finish_mkwrite_fault'


vim +3276 mm/memory.c

2f38ab2c3c7fef Shachar Raindel 2015-04-14 3258
66a6197c118540 Jan Kara 2016-12-14 3259 /**
66a6197c118540 Jan Kara 2016-12-14 3260 * finish_mkwrite_fault - finish page fault for a shared mapping, making PTE
66a6197c118540 Jan Kara 2016-12-14 3261 * writeable once the page is prepared
66a6197c118540 Jan Kara 2016-12-14 3262 *
66a6197c118540 Jan Kara 2016-12-14 3263 * @vmf: structure describing the fault
66a6197c118540 Jan Kara 2016-12-14 3264 *
66a6197c118540 Jan Kara 2016-12-14 3265 * This function handles all that is needed to finish a write page fault in a
66a6197c118540 Jan Kara 2016-12-14 3266 * shared mapping due to PTE being read-only once the mapped page is prepared.
a862f68a8b3600 Mike Rapoport 2019-03-05 3267 * It handles locking of PTE and modifying it.
66a6197c118540 Jan Kara 2016-12-14 3268 *
66a6197c118540 Jan Kara 2016-12-14 3269 * The function expects the page to be locked or other protection against
66a6197c118540 Jan Kara 2016-12-14 3270 * concurrent faults / writeback (such as DAX radix tree locks).
a862f68a8b3600 Mike Rapoport 2019-03-05 3271 *
2797e79f1a491f Liu Xiang 2021-06-28 3272 * Return: %0 on success, %VM_FAULT_NOPAGE when PTE got changed before
a862f68a8b3600 Mike Rapoport 2019-03-05 3273 * we acquired PTE lock.
66a6197c118540 Jan Kara 2016-12-14 3274 */
60fe935fc6b035 Kefeng Wang 2023-10-13 3275 static vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf, struct folio *folio)
66a6197c118540 Jan Kara 2016-12-14 @3276 {
66a6197c118540 Jan Kara 2016-12-14 3277 WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
66a6197c118540 Jan Kara 2016-12-14 3278 vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address,
66a6197c118540 Jan Kara 2016-12-14 3279 &vmf->ptl);
3db82b9374ca92 Hugh Dickins 2023-06-08 3280 if (!vmf->pte)
3db82b9374ca92 Hugh Dickins 2023-06-08 3281 return VM_FAULT_NOPAGE;
66a6197c118540 Jan Kara 2016-12-14 3282 /*
66a6197c118540 Jan Kara 2016-12-14 3283 * We might have raced with another page fault while we released the
66a6197c118540 Jan Kara 2016-12-14 3284 * pte_offset_map_lock.
66a6197c118540 Jan Kara 2016-12-14 3285 */
c33c794828f212 Ryan Roberts 2023-06-12 3286 if (!pte_same(ptep_get(vmf->pte), vmf->orig_pte)) {
7df676974359f9 Bibo Mao 2020-05-27 3287 update_mmu_tlb(vmf->vma, vmf->address, vmf->pte);
66a6197c118540 Jan Kara 2016-12-14 3288 pte_unmap_unlock(vmf->pte, vmf->ptl);
a19e25536ed3a2 Jan Kara 2016-12-14 3289 return VM_FAULT_NOPAGE;
66a6197c118540 Jan Kara 2016-12-14 3290 }
60fe935fc6b035 Kefeng Wang 2023-10-13 3291 wp_page_reuse(vmf, folio);
a19e25536ed3a2 Jan Kara 2016-12-14 3292 return 0;
66a6197c118540 Jan Kara 2016-12-14 3293 }
66a6197c118540 Jan Kara 2016-12-14 3294
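[For reference, this W=1 kernel-doc warning only means the new parameter lacks a description in the comment block above the function. A minimal fix (a sketch; the exact wording of the @folio description is up to the author) would be:

```c
/**
 * finish_mkwrite_fault - finish page fault for a shared mapping, making PTE
 *			  writeable once the folio is prepared
 *
 * @vmf: structure describing the fault
 * @folio: the folio of vmf->page
 *
 * This function handles all that is needed to finish a write page fault in a
 * shared mapping due to PTE being read-only once the mapped folio is prepared.
 * It handles locking of PTE and modifying it.
 */
```

kernel-doc requires every parameter of a documented function to have a matching `@param:` line, so adding the `@folio:` line is enough to silence the warning.]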

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2023-10-17 09:06:11

by Kefeng Wang

[permalink] [raw]
Subject: Re: [PATCH -next v2 17/19] mm: convert wp_page_reuse() and finish_mkwrite_fault() to take a folio



On 2023/10/17 15:33, kernel test robot wrote:
> All warnings (new ones prefixed by >>):
>
>>> mm/memory.c:3276: warning: Function parameter or member 'folio' not described in 'finish_mkwrite_fault'
>

Hi Andrew, should I resend this patch, or could you help update it?
There is also a comment fix (page -> folio's) needed on patch 18, thanks.


2023-10-17 14:54:52

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH -next v2 17/19] mm: convert wp_page_reuse() and finish_mkwrite_fault() to take a folio

On Tue, 17 Oct 2023 17:04:41 +0800 Kefeng Wang <[email protected]> wrote:

> >>> mm/memory.c:3276: warning: Function parameter or member 'folio' not described in 'finish_mkwrite_fault'
> >
>
> Hi Andrew, should I resend this patch? or could you help me to update
> it, also a comment(page -> folio's) on patch18, thanks.

I'd assumed a new series would be sent, addressing Matthew's comments.