2023-12-19 07:56:09

by Peter Xu

Subject: [PATCH 00/13] mm/gup: Unify hugetlb, part 2

From: Peter Xu <[email protected]>

This is v1 of the series. The series removes the hugetlb slow gup path
after the previous refactoring work [1], so that slow gup now uses the exact
same path to handle all kinds of memory, including hugetlb.

It's based on the latest mm-unstable (c13bdc82ada9).

RFC->v1 (using the old version's patch indexes):
- Fix build for !THP and/or !HUGETLB
- Fix a bug in the hugepd path that can cause a loop
- patch 3:
- Rename to "Make HPAGE_PXD_* macros visible even if !THP" [Christoph, MikeR]
- patch 5:
- Posted separately as a devmap bugfix
- patch 6:
- Add a comment above gup_huge_pd() explaining the file folio writeback
issue, with references, for when anyone extends hugepd to other types
of file memory [Christoph]
- patch 7:
- Keep the parameter name as "page" in record_subpages(), modify the
commit message by removing any assumption on the head page. [Matthew]
- patch 12:
- Rename to: "mm/gup: handle hugetlb in the generic follow_page_mask
code" [Christoph]
- Added patch:
- "mm/Kconfig: CONFIG_PGTABLE_HAS_HUGE_LEAVES", this helps to fix build
for !THP and/or !HUGETLB
- Added patch:
- "mm/gup: Cache *pudp in follow_pud_mask()", as suggested by Christoph

For the long term, this is one small step towards merging hugetlb-specific
code into the generic mm paths. From that POV, this series removes one
reference to huge_pte_offset() out of many others.

To make it work, the generic code will need to at least understand hugepd,
which fast-gup already does. Fortunately, besides normal huge PxD entries,
that seems to be the only major thing slow GUP needs to learn in order to
share the common path for now. Non-gup can be more challenging, but that's
a question for later.

There's one major difference for slow-gup in cont_pte / cont_pmd handling,
currently supported on three architectures (aarch64, riscv, ppc). Before
the series, slow gup was able to recognize cont entries with the help of
huge_pte_offset(). Now that is gone, but things still work by looking up
pgtable entries one by one. It's not ideal, but hopefully this change
should not yet affect major workloads. There's some more information in
the commit message of the last patch. If this becomes a concern, we can
consider teaching slow gup to recognize cont pte/pmd entries, and that
should recover the lost performance. But I doubt its necessity for now, so
I kept it as simple as it can be.
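
For reference, below is a simplified sketch of how the __get_user_pages()
loop advances per pgtable walk (page-array handling and error paths are
dropped): when follow_page_mask() leaves ctx.page_mask at zero for a cont
entry, each iteration consumes a single base page, while a properly set
page_mask lets the loop skip the rest of a huge mapping in one go.

	/* stride for this iteration: 1 base page unless page_mask says more */
	page_increm = 1 + (~(start >> PAGE_SHIFT) & ctx.page_mask);
	if (page_increm > nr_pages)
		page_increm = nr_pages;
	i += page_increm;
	start += page_increm * PAGE_SIZE;
	nr_pages -= page_increm;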

Test Done
=========

This v1 went through the normal GUP smoke tests over different memory
types on archs (using VM instances): x86_64, aarch64, ppc64le. For
aarch64, tested over 64KB cont_pte huge pages. For ppc64le, tested over
16MB hugepd entries (Power8 hash MMU on 4K base page size).

Patch layout
=============

Patches 1-7: Preparation work or cleanups in relevant code paths
Patches 8-12: Teach slow gup about all kinds of huge entries (pXd, hugepd)
Patch 13: Drop hugetlb_follow_page_mask()

More information can be found in the commit messages of each patch. Any
comments are welcome. Thanks.

[1] https://lore.kernel.org/all/[email protected]

Peter Xu (13):
mm/Kconfig: CONFIG_PGTABLE_HAS_HUGE_LEAVES
mm/hugetlb: Declare hugetlbfs_pagecache_present() non-static
mm: Provide generic pmd_thp_or_huge()
mm: Make HPAGE_PXD_* macros even if !THP
mm: Introduce vma_pgtable_walk_{begin|end}()
mm/gup: Drop folio_fast_pin_allowed() in hugepd processing
mm/gup: Refactor record_subpages() to find 1st small page
mm/gup: Handle hugetlb for no_page_table()
mm/gup: Cache *pudp in follow_pud_mask()
mm/gup: Handle huge pud for follow_pud_mask()
mm/gup: Handle huge pmd for follow_pmd_mask()
mm/gup: Handle hugepd for follow_page()
mm/gup: Handle hugetlb in the generic follow_page_mask code

include/linux/huge_mm.h | 25 +--
include/linux/hugetlb.h | 16 +-
include/linux/mm.h | 3 +
include/linux/pgtable.h | 4 +
mm/Kconfig | 3 +
mm/gup.c | 362 ++++++++++++++++++++++++++++++++--------
mm/huge_memory.c | 133 +--------------
mm/hugetlb.c | 75 +--------
mm/internal.h | 7 +-
mm/memory.c | 12 ++
10 files changed, 342 insertions(+), 298 deletions(-)

--
2.41.0



2023-12-19 07:57:09

by Peter Xu

Subject: [PATCH 04/13] mm: Make HPAGE_PXD_* macros even if !THP

From: Peter Xu <[email protected]>

These macros can be helpful when we plan to merge hugetlb code into the
generic code. Move them out of the THP-only section and define them even
if !THP.

We actually already define HPAGE_PMD_NR for other reasons even when !THP.
Reorganize these macros.

Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
---
include/linux/huge_mm.h | 17 ++++++-----------
1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index fa7a38a30fc6..d335130e145f 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -64,9 +64,6 @@ ssize_t single_hugepage_flag_show(struct kobject *kobj,
enum transparent_hugepage_flag flag);
extern struct kobj_attribute shmem_enabled_attr;

-#define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT-PAGE_SHIFT)
-#define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
-
/*
* Mask of all large folio orders supported for anonymous THP; all orders up to
* and including PMD_ORDER, except order-0 (which is not "huge") and order-1
@@ -87,14 +84,19 @@ extern struct kobj_attribute shmem_enabled_attr;
#define thp_vma_allowable_order(vma, vm_flags, smaps, in_pf, enforce_sysfs, order) \
(!!thp_vma_allowable_orders(vma, vm_flags, smaps, in_pf, enforce_sysfs, BIT(order)))

-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
#define HPAGE_PMD_SHIFT PMD_SHIFT
#define HPAGE_PMD_SIZE ((1UL) << HPAGE_PMD_SHIFT)
#define HPAGE_PMD_MASK (~(HPAGE_PMD_SIZE - 1))
+#define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT-PAGE_SHIFT)
+#define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)

#define HPAGE_PUD_SHIFT PUD_SHIFT
#define HPAGE_PUD_SIZE ((1UL) << HPAGE_PUD_SHIFT)
#define HPAGE_PUD_MASK (~(HPAGE_PUD_SIZE - 1))
+#define HPAGE_PUD_ORDER (HPAGE_PUD_SHIFT-PAGE_SHIFT)
+#define HPAGE_PUD_NR (1<<HPAGE_PUD_ORDER)
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE

extern unsigned long transparent_hugepage_flags;
extern unsigned long huge_anon_orders_always;
@@ -378,13 +380,6 @@ static inline bool thp_migration_supported(void)
}

#else /* CONFIG_TRANSPARENT_HUGEPAGE */
-#define HPAGE_PMD_SHIFT ({ BUILD_BUG(); 0; })
-#define HPAGE_PMD_MASK ({ BUILD_BUG(); 0; })
-#define HPAGE_PMD_SIZE ({ BUILD_BUG(); 0; })
-
-#define HPAGE_PUD_SHIFT ({ BUILD_BUG(); 0; })
-#define HPAGE_PUD_MASK ({ BUILD_BUG(); 0; })
-#define HPAGE_PUD_SIZE ({ BUILD_BUG(); 0; })

static inline bool folio_test_pmd_mappable(struct folio *folio)
{
--
2.41.0


2023-12-19 07:58:03

by Peter Xu

Subject: [PATCH 06/13] mm/gup: Drop folio_fast_pin_allowed() in hugepd processing

From: Peter Xu <[email protected]>

The hugepd format for GUP is only used on PowerPC with hugetlbfs. There is
some kernel usage of hugepd (see hugepd_populate_kernel() for PPC_8XX), but
those pages are not candidates for GUP.

Commit a6e79df92e4a ("mm/gup: disallow FOLL_LONGTERM GUP-fast writing to
file-backed mappings") added a check to fail gup-fast if there's a potential
risk of violating GUP over writeback file systems. That should never apply
to hugepd. Considering that hugepd is an old format (and even
software-only), there's no plan to extend hugepd to other file-backed
memory types that are prone to the same issue.

Drop that check, not only because it'll never be true for hugepd per any
known plan, but also because it paves the way for reusing the function
outside fast-gup.

To make sure we'll still remember this issue in case hugepd is ever
extended to support non-hugetlbfs memory, add a rich comment above
gup_huge_pd(), explaining the issue with proper references.

Cc: Christoph Hellwig <[email protected]>
Cc: Lorenzo Stoakes <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: [email protected]
Signed-off-by: Peter Xu <[email protected]>
---
mm/gup.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index efc9847e58fb..bb5b7134f10b 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2820,11 +2820,6 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
return 0;
}

- if (!folio_fast_pin_allowed(folio, flags)) {
- gup_put_folio(folio, refs, flags);
- return 0;
- }
-
if (!pte_write(pte) && gup_must_unshare(NULL, flags, &folio->page)) {
gup_put_folio(folio, refs, flags);
return 0;
@@ -2835,6 +2830,14 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
return 1;
}

+/*
+ * NOTE: currently GUP for a hugepd is only possible on hugetlbfs file
+ * systems on Power, which does not have issue with folio writeback against
+ * GUP updates. When hugepd will be extended to support non-hugetlbfs or
+ * even anonymous memory, we need to do extra check as what we do with most
+ * of the other folios. See writable_file_mapping_allowed() and
+ * folio_fast_pin_allowed() for more information.
+ */
static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
unsigned int pdshift, unsigned long end, unsigned int flags,
struct page **pages, int *nr)
--
2.41.0


2023-12-19 07:58:24

by Peter Xu

Subject: [PATCH 07/13] mm/gup: Refactor record_subpages() to find 1st small page

From: Peter Xu <[email protected]>

All the fast-gup functions take a tail page to operate on, and always need
to do page mask calculations before feeding it into record_subpages().

Merge that logic into record_subpages(), so that it does the nth_page()
calculation itself.

Signed-off-by: Peter Xu <[email protected]>
---
mm/gup.c | 25 ++++++++++++++-----------
1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index bb5b7134f10b..82d28d517d0d 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2767,13 +2767,16 @@ static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
}
#endif

-static int record_subpages(struct page *page, unsigned long addr,
- unsigned long end, struct page **pages)
+static int record_subpages(struct page *page, unsigned long sz,
+ unsigned long addr, unsigned long end,
+ struct page **pages)
{
+ struct page *start_page;
int nr;

+ start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT);
for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
- pages[nr] = nth_page(page, nr);
+ pages[nr] = nth_page(start_page, nr);

return nr;
}
@@ -2808,8 +2811,8 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
/* hugepages are never "special" */
VM_BUG_ON(!pfn_valid(pte_pfn(pte)));

- page = nth_page(pte_page(pte), (addr & (sz - 1)) >> PAGE_SHIFT);
- refs = record_subpages(page, addr, end, pages + *nr);
+ page = pte_page(pte);
+ refs = record_subpages(page, sz, addr, end, pages + *nr);

folio = try_grab_folio(page, refs, flags);
if (!folio)
@@ -2882,8 +2885,8 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
pages, nr);
}

- page = nth_page(pmd_page(orig), (addr & ~PMD_MASK) >> PAGE_SHIFT);
- refs = record_subpages(page, addr, end, pages + *nr);
+ page = pmd_page(orig);
+ refs = record_subpages(page, PMD_SIZE, addr, end, pages + *nr);

folio = try_grab_folio(page, refs, flags);
if (!folio)
@@ -2926,8 +2929,8 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
pages, nr);
}

- page = nth_page(pud_page(orig), (addr & ~PUD_MASK) >> PAGE_SHIFT);
- refs = record_subpages(page, addr, end, pages + *nr);
+ page = pud_page(orig);
+ refs = record_subpages(page, PUD_SIZE, addr, end, pages + *nr);

folio = try_grab_folio(page, refs, flags);
if (!folio)
@@ -2966,8 +2969,8 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,

BUILD_BUG_ON(pgd_devmap(orig));

- page = nth_page(pgd_page(orig), (addr & ~PGDIR_MASK) >> PAGE_SHIFT);
- refs = record_subpages(page, addr, end, pages + *nr);
+ page = pgd_page(orig);
+ refs = record_subpages(page, PGDIR_SIZE, addr, end, pages + *nr);

folio = try_grab_folio(page, refs, flags);
if (!folio)
--
2.41.0


2023-12-19 07:58:46

by Peter Xu

Subject: [PATCH 08/13] mm/gup: Handle hugetlb for no_page_table()

From: Peter Xu <[email protected]>

no_page_table() is not yet used for hugetlb code paths. Prepare it for
that.

The major difference here is that hugetlb will return -EFAULT as long as
the page cache does not exist, even for VM_SHARED. See
hugetlb_follow_page_mask().

Pass "address" into no_page_table() too, as hugetlb will need it.

Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
---
mm/gup.c | 44 ++++++++++++++++++++++++++------------------
1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 82d28d517d0d..6c0d82fa8cc7 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -501,19 +501,27 @@ static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)

#ifdef CONFIG_MMU
static struct page *no_page_table(struct vm_area_struct *vma,
- unsigned int flags)
+ unsigned int flags, unsigned long address)
{
+ if (!(flags & FOLL_DUMP))
+ return NULL;
+
/*
- * When core dumping an enormous anonymous area that nobody
- * has touched so far, we don't want to allocate unnecessary pages or
+ * When core dumping, we don't want to allocate unnecessary pages or
* page tables. Return error instead of NULL to skip handle_mm_fault,
* then get_dump_page() will return NULL to leave a hole in the dump.
* But we can only make this optimization where a hole would surely
* be zero-filled if handle_mm_fault() actually did handle it.
*/
- if ((flags & FOLL_DUMP) &&
- (vma_is_anonymous(vma) || !vma->vm_ops->fault))
+ if (is_vm_hugetlb_page(vma)) {
+ struct hstate *h = hstate_vma(vma);
+
+ if (!hugetlbfs_pagecache_present(h, vma, address))
+ return ERR_PTR(-EFAULT);
+ } else if ((vma_is_anonymous(vma) || !vma->vm_ops->fault)) {
return ERR_PTR(-EFAULT);
+ }
+
return NULL;
}

@@ -593,7 +601,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,

ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
if (!ptep)
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);
pte = ptep_get(ptep);
if (!pte_present(pte))
goto no_page;
@@ -685,7 +693,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
pte_unmap_unlock(ptep, ptl);
if (!pte_none(pte))
return NULL;
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);
}

static struct page *follow_pmd_mask(struct vm_area_struct *vma,
@@ -701,27 +709,27 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
pmd = pmd_offset(pudp, address);
pmdval = pmdp_get_lockless(pmd);
if (pmd_none(pmdval))
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);
if (!pmd_present(pmdval))
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);
if (pmd_devmap(pmdval)) {
ptl = pmd_lock(mm, pmd);
page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
spin_unlock(ptl);
if (page)
return page;
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);
}
if (likely(!pmd_trans_huge(pmdval)))
return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);

if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags))
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);

ptl = pmd_lock(mm, pmd);
if (unlikely(!pmd_present(*pmd))) {
spin_unlock(ptl);
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);
}
if (unlikely(!pmd_trans_huge(*pmd))) {
spin_unlock(ptl);
@@ -752,17 +760,17 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,

pud = pud_offset(p4dp, address);
if (pud_none(*pud))
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);
if (pud_devmap(*pud)) {
ptl = pud_lock(mm, pud);
page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
spin_unlock(ptl);
if (page)
return page;
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);
}
if (unlikely(pud_bad(*pud)))
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);

return follow_pmd_mask(vma, address, pud, flags, ctx);
}
@@ -776,10 +784,10 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,

p4d = p4d_offset(pgdp, address);
if (p4d_none(*p4d))
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);
BUILD_BUG_ON(p4d_huge(*p4d));
if (unlikely(p4d_bad(*p4d)))
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);

return follow_pud_mask(vma, address, p4d, flags, ctx);
}
@@ -829,7 +837,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
pgd = pgd_offset(mm, address);

if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);

return follow_p4d_mask(vma, address, pgd, flags, ctx);
}
--
2.41.0


2023-12-19 07:58:49

by Peter Xu

Subject: [PATCH 09/13] mm/gup: Cache *pudp in follow_pud_mask()

From: Peter Xu <[email protected]>

Introduce "pud_t pud" in the function, so the code won't dereference *pudp
multiple time. Not only because that looks less straightforward, but also
because if the dereference really happened, it's not clear whether there
can be race to see different *pudp values if it's being modified at the
same time.

Signed-off-by: Peter Xu <[email protected]>
---
mm/gup.c | 17 +++++++++--------
1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 6c0d82fa8cc7..97e87b7a15c3 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -753,26 +753,27 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
unsigned int flags,
struct follow_page_context *ctx)
{
- pud_t *pud;
+ pud_t *pudp, pud;
spinlock_t *ptl;
struct page *page;
struct mm_struct *mm = vma->vm_mm;

- pud = pud_offset(p4dp, address);
- if (pud_none(*pud))
+ pudp = pud_offset(p4dp, address);
+ pud = *pudp;
+ if (pud_none(pud))
return no_page_table(vma, flags, address);
- if (pud_devmap(*pud)) {
- ptl = pud_lock(mm, pud);
- page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
+ if (pud_devmap(pud)) {
+ ptl = pud_lock(mm, pudp);
+ page = follow_devmap_pud(vma, address, pudp, flags, &ctx->pgmap);
spin_unlock(ptl);
if (page)
return page;
return no_page_table(vma, flags, address);
}
- if (unlikely(pud_bad(*pud)))
+ if (unlikely(pud_bad(pud)))
return no_page_table(vma, flags, address);

- return follow_pmd_mask(vma, address, pud, flags, ctx);
+ return follow_pmd_mask(vma, address, pudp, flags, ctx);
}

static struct page *follow_p4d_mask(struct vm_area_struct *vma,
--
2.41.0


2023-12-19 07:59:29

by Peter Xu

Subject: [PATCH 10/13] mm/gup: Handle huge pud for follow_pud_mask()

From: Peter Xu <[email protected]>

Teach follow_pud_mask() to be able to handle normal PUD pages like hugetlb.

Rename follow_devmap_pud() to follow_huge_pud() so that it can process
either huge devmap or hugetlb pages. Move it out of
TRANSPARENT_HUGEPAGE_PUD and out of huge_memory.c (which relies on
CONFIG_THP).

The new follow_huge_pud() takes care of possible CoR for hugetlb if
necessary. touch_pud() needs to be moved out of huge_memory.c to be
accessible from gup.c even if !THP.

While at it, optimize the non-present check by adding an early
pud_present() check before taking the pgtable lock, failing follow_page()
early if the PUD is not present: that is required by both devmap and
hugetlb. Use pud_huge() to also cover the pud_devmap() case.

One more trivial thing to mention: introduce "pud_t pud" in the code paths
along the way, so the code doesn't dereference *pudp multiple times. Not
only because that looks less straightforward, but also because if the
dereferences really happened, it's not clear whether there can be a race
seeing different *pudp values when it's being modified at the same time.

Set ctx->page_mask properly for a PUD entry. As a side effect, this patch
should also allow devmap GUP on PUD to jump over the whole PUD range, but
that is not yet verified. Hugetlb can already do so prior to this patch.

Signed-off-by: Peter Xu <[email protected]>
---
include/linux/huge_mm.h | 8 -----
mm/gup.c | 70 +++++++++++++++++++++++++++++++++++++++--
mm/huge_memory.c | 47 ++-------------------------
mm/internal.h | 2 ++
4 files changed, 71 insertions(+), 56 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index d335130e145f..80f181d76f94 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -346,8 +346,6 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)

struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
pmd_t *pmd, int flags, struct dev_pagemap **pgmap);
-struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
- pud_t *pud, int flags, struct dev_pagemap **pgmap);

vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);

@@ -503,12 +501,6 @@ static inline struct page *follow_devmap_pmd(struct vm_area_struct *vma,
return NULL;
}

-static inline struct page *follow_devmap_pud(struct vm_area_struct *vma,
- unsigned long addr, pud_t *pud, int flags, struct dev_pagemap **pgmap)
-{
- return NULL;
-}
-
static inline bool thp_migration_supported(void)
{
return false;
diff --git a/mm/gup.c b/mm/gup.c
index 97e87b7a15c3..5b14f91d2f6b 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -525,6 +525,70 @@ static struct page *no_page_table(struct vm_area_struct *vma,
return NULL;
}

+#ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
+static struct page *follow_huge_pud(struct vm_area_struct *vma,
+ unsigned long addr, pud_t *pudp,
+ int flags, struct follow_page_context *ctx)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ struct page *page;
+ pud_t pud = *pudp;
+ unsigned long pfn = pud_pfn(pud);
+ int ret;
+
+ assert_spin_locked(pud_lockptr(mm, pudp));
+
+ if ((flags & FOLL_WRITE) && !pud_write(pud))
+ return NULL;
+
+ if (!pud_present(pud))
+ return NULL;
+
+ pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
+
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+ if (pud_devmap(pud)) {
+ /*
+ * device mapped pages can only be returned if the caller
+ * will manage the page reference count.
+ *
+ * At least one of FOLL_GET | FOLL_PIN must be set, so
+ * assert that here:
+ */
+ if (!(flags & (FOLL_GET | FOLL_PIN)))
+ return ERR_PTR(-EEXIST);
+
+ if (flags & FOLL_TOUCH)
+ touch_pud(vma, addr, pudp, flags & FOLL_WRITE);
+
+ ctx->pgmap = get_dev_pagemap(pfn, ctx->pgmap);
+ if (!ctx->pgmap)
+ return ERR_PTR(-EFAULT);
+ }
+#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+ page = pfn_to_page(pfn);
+
+ if (!pud_devmap(pud) && !pud_write(pud) &&
+ gup_must_unshare(vma, flags, page))
+ return ERR_PTR(-EMLINK);
+
+ ret = try_grab_page(page, flags);
+ if (ret)
+ page = ERR_PTR(ret);
+ else
+ ctx->page_mask = HPAGE_PUD_NR - 1;
+
+ return page;
+}
+#else /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
+static struct page *follow_huge_pud(struct vm_area_struct *vma,
+ unsigned long addr, pud_t *pudp,
+ int flags, struct follow_page_context *ctx)
+{
+ return NULL;
+}
+#endif /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
+
static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
pte_t *pte, unsigned int flags)
{
@@ -760,11 +824,11 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,

pudp = pud_offset(p4dp, address);
pud = *pudp;
- if (pud_none(pud))
+ if (pud_none(pud) || !pud_present(pud))
return no_page_table(vma, flags, address);
- if (pud_devmap(pud)) {
+ if (pud_huge(pud)) {
ptl = pud_lock(mm, pudp);
- page = follow_devmap_pud(vma, address, pudp, flags, &ctx->pgmap);
+ page = follow_huge_pud(vma, address, pudp, flags, ctx);
spin_unlock(ptl);
if (page)
return page;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6be1a380a298..def1dbe0d7e8 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1371,8 +1371,8 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
}

#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
-static void touch_pud(struct vm_area_struct *vma, unsigned long addr,
- pud_t *pud, bool write)
+void touch_pud(struct vm_area_struct *vma, unsigned long addr,
+ pud_t *pud, bool write)
{
pud_t _pud;

@@ -1384,49 +1384,6 @@ static void touch_pud(struct vm_area_struct *vma, unsigned long addr,
update_mmu_cache_pud(vma, addr, pud);
}

-struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
- pud_t *pud, int flags, struct dev_pagemap **pgmap)
-{
- unsigned long pfn = pud_pfn(*pud);
- struct mm_struct *mm = vma->vm_mm;
- struct page *page;
- int ret;
-
- assert_spin_locked(pud_lockptr(mm, pud));
-
- if (flags & FOLL_WRITE && !pud_write(*pud))
- return NULL;
-
- if (pud_present(*pud) && pud_devmap(*pud))
- /* pass */;
- else
- return NULL;
-
- if (flags & FOLL_TOUCH)
- touch_pud(vma, addr, pud, flags & FOLL_WRITE);
-
- /*
- * device mapped pages can only be returned if the
- * caller will manage the page reference count.
- *
- * At least one of FOLL_GET | FOLL_PIN must be set, so assert that here:
- */
- if (!(flags & (FOLL_GET | FOLL_PIN)))
- return ERR_PTR(-EEXIST);
-
- pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
- *pgmap = get_dev_pagemap(pfn, *pgmap);
- if (!*pgmap)
- return ERR_PTR(-EFAULT);
- page = pfn_to_page(pfn);
-
- ret = try_grab_page(page, flags);
- if (ret)
- page = ERR_PTR(ret);
-
- return page;
-}
-
int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
struct vm_area_struct *vma)
diff --git a/mm/internal.h b/mm/internal.h
index 222e63b2dea4..2fca14553d0f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1007,6 +1007,8 @@ int __must_check try_grab_page(struct page *page, unsigned int flags);
/*
* mm/huge_memory.c
*/
+void touch_pud(struct vm_area_struct *vma, unsigned long addr,
+ pud_t *pud, bool write);
struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
unsigned long addr, pmd_t *pmd,
unsigned int flags);
--
2.41.0


2023-12-19 08:00:15

by Peter Xu

Subject: [PATCH 11/13] mm/gup: Handle huge pmd for follow_pmd_mask()

From: Peter Xu <[email protected]>

Replace pmd_trans_huge() with pmd_thp_or_huge() to also cover pmd_huge()
wherever it is enabled.

FOLL_TOUCH and FOLL_SPLIT_PMD only apply to THP, not (yet) to hugetlb huge PMDs.

Since follow_trans_huge_pmd() can now process hugetlb pages, rename it to
follow_huge_pmd() to match what it does. Move it into gup.c so it does not
depend on CONFIG_THP.

While at it, move the ctx->page_mask setup into follow_huge_pmd() and only
set it when the page is valid. It was not a bug to set it before even if
GUP failed (page==NULL), because follow_page_mask() callers always ignore
page_mask in that case. But doing so makes the code cleaner.

Signed-off-by: Peter Xu <[email protected]>
---
mm/gup.c | 107 ++++++++++++++++++++++++++++++++++++++++++++---
mm/huge_memory.c | 86 +------------------------------------
mm/internal.h | 5 +--
3 files changed, 105 insertions(+), 93 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 5b14f91d2f6b..080dff79b650 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -580,6 +580,93 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,

return page;
}
+
+/* FOLL_FORCE can write to even unwritable PMDs in COW mappings. */
+static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
+ struct vm_area_struct *vma,
+ unsigned int flags)
+{
+ /* If the pmd is writable, we can write to the page. */
+ if (pmd_write(pmd))
+ return true;
+
+ /* Maybe FOLL_FORCE is set to override it? */
+ if (!(flags & FOLL_FORCE))
+ return false;
+
+ /* But FOLL_FORCE has no effect on shared mappings */
+ if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
+ return false;
+
+ /* ... or read-only private ones */
+ if (!(vma->vm_flags & VM_MAYWRITE))
+ return false;
+
+ /* ... or already writable ones that just need to take a write fault */
+ if (vma->vm_flags & VM_WRITE)
+ return false;
+
+ /*
+ * See can_change_pte_writable(): we broke COW and could map the page
+ * writable if we have an exclusive anonymous page ...
+ */
+ if (!page || !PageAnon(page) || !PageAnonExclusive(page))
+ return false;
+
+ /* ... and a write-fault isn't required for other reasons. */
+ if (vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd))
+ return false;
+ return !userfaultfd_huge_pmd_wp(vma, pmd);
+}
+
+static struct page *follow_huge_pmd(struct vm_area_struct *vma,
+ unsigned long addr, pmd_t *pmd,
+ unsigned int flags,
+ struct follow_page_context *ctx)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ pmd_t pmdval = *pmd;
+ struct page *page;
+ int ret;
+
+ assert_spin_locked(pmd_lockptr(mm, pmd));
+
+ page = pmd_page(pmdval);
+ VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
+
+ if ((flags & FOLL_WRITE) &&
+ !can_follow_write_pmd(pmdval, page, vma, flags))
+ return NULL;
+
+ /* Avoid dumping huge zero page */
+ if ((flags & FOLL_DUMP) && is_huge_zero_pmd(pmdval))
+ return ERR_PTR(-EFAULT);
+
+ if (pmd_protnone(*pmd) && !gup_can_follow_protnone(vma, flags))
+ return NULL;
+
+ if (!pmd_write(pmdval) && gup_must_unshare(vma, flags, page))
+ return ERR_PTR(-EMLINK);
+
+ VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
+ !PageAnonExclusive(page), page);
+
+ ret = try_grab_page(page, flags);
+ if (ret)
+ return ERR_PTR(ret);
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ if (pmd_trans_huge(pmdval) && (flags & FOLL_TOUCH))
+ touch_pmd(vma, addr, pmd, flags & FOLL_WRITE);
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
+ page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
+ ctx->page_mask = HPAGE_PMD_NR - 1;
+ VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page);
+
+ return page;
+}
+
#else /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
static struct page *follow_huge_pud(struct vm_area_struct *vma,
unsigned long addr, pud_t *pudp,
@@ -587,6 +674,14 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
{
return NULL;
}
+
+static struct page *follow_huge_pmd(struct vm_area_struct *vma,
+ unsigned long addr, pmd_t *pmd,
+ unsigned int flags,
+ struct follow_page_context *ctx)
+{
+ return NULL;
+}
#endif /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */

static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
@@ -784,31 +879,31 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
return page;
return no_page_table(vma, flags, address);
}
- if (likely(!pmd_trans_huge(pmdval)))
+ if (likely(!pmd_thp_or_huge(pmdval)))
return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);

if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags))
return no_page_table(vma, flags, address);

ptl = pmd_lock(mm, pmd);
- if (unlikely(!pmd_present(*pmd))) {
+ pmdval = *pmd;
+ if (unlikely(!pmd_present(pmdval))) {
spin_unlock(ptl);
return no_page_table(vma, flags, address);
}
- if (unlikely(!pmd_trans_huge(*pmd))) {
+ if (unlikely(!pmd_thp_or_huge(pmdval))) {
spin_unlock(ptl);
return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
}
- if (flags & FOLL_SPLIT_PMD) {
+ if (pmd_trans_huge(pmdval) && (flags & FOLL_SPLIT_PMD)) {
spin_unlock(ptl);
split_huge_pmd(vma, pmd, address);
/* If pmd was left empty, stuff a page table in there quickly */
return pte_alloc(mm, pmd) ? ERR_PTR(-ENOMEM) :
follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
}
- page = follow_trans_huge_pmd(vma, address, pmd, flags);
+ page = follow_huge_pmd(vma, address, pmd, flags, ctx);
spin_unlock(ptl);
- ctx->page_mask = HPAGE_PMD_NR - 1;
return page;
}

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index def1dbe0d7e8..930c59d7ceab 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1216,8 +1216,8 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
EXPORT_SYMBOL_GPL(vmf_insert_pfn_pud);
#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */

-static void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
- pmd_t *pmd, bool write)
+void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
+ pmd_t *pmd, bool write)
{
pmd_t _pmd;

@@ -1570,88 +1570,6 @@ static inline bool can_change_pmd_writable(struct vm_area_struct *vma,
return pmd_dirty(pmd);
}

-/* FOLL_FORCE can write to even unwritable PMDs in COW mappings. */
-static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
- struct vm_area_struct *vma,
- unsigned int flags)
-{
- /* If the pmd is writable, we can write to the page. */
- if (pmd_write(pmd))
- return true;
-
- /* Maybe FOLL_FORCE is set to override it? */
- if (!(flags & FOLL_FORCE))
- return false;
-
- /* But FOLL_FORCE has no effect on shared mappings */
- if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
- return false;
-
- /* ... or read-only private ones */
- if (!(vma->vm_flags & VM_MAYWRITE))
- return false;
-
- /* ... or already writable ones that just need to take a write fault */
- if (vma->vm_flags & VM_WRITE)
- return false;
-
- /*
- * See can_change_pte_writable(): we broke COW and could map the page
- * writable if we have an exclusive anonymous page ...
- */
- if (!page || !PageAnon(page) || !PageAnonExclusive(page))
- return false;
-
- /* ... and a write-fault isn't required for other reasons. */
- if (vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd))
- return false;
- return !userfaultfd_huge_pmd_wp(vma, pmd);
-}
-
-struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
- unsigned long addr,
- pmd_t *pmd,
- unsigned int flags)
-{
- struct mm_struct *mm = vma->vm_mm;
- struct page *page;
- int ret;
-
- assert_spin_locked(pmd_lockptr(mm, pmd));
-
- page = pmd_page(*pmd);
- VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
-
- if ((flags & FOLL_WRITE) &&
- !can_follow_write_pmd(*pmd, page, vma, flags))
- return NULL;
-
- /* Avoid dumping huge zero page */
- if ((flags & FOLL_DUMP) && is_huge_zero_pmd(*pmd))
- return ERR_PTR(-EFAULT);
-
- if (pmd_protnone(*pmd) && !gup_can_follow_protnone(vma, flags))
- return NULL;
-
- if (!pmd_write(*pmd) && gup_must_unshare(vma, flags, page))
- return ERR_PTR(-EMLINK);
-
- VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
- !PageAnonExclusive(page), page);
-
- ret = try_grab_page(page, flags);
- if (ret)
- return ERR_PTR(ret);
-
- if (flags & FOLL_TOUCH)
- touch_pmd(vma, addr, pmd, flags & FOLL_WRITE);
-
- page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
- VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page);
-
- return page;
-}
-
/* NUMA hinting page fault entry point for trans huge pmds */
vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
{
diff --git a/mm/internal.h b/mm/internal.h
index 2fca14553d0f..c0e953a1eb62 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1009,9 +1009,8 @@ int __must_check try_grab_page(struct page *page, unsigned int flags);
*/
void touch_pud(struct vm_area_struct *vma, unsigned long addr,
pud_t *pud, bool write);
-struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
- unsigned long addr, pmd_t *pmd,
- unsigned int flags);
+void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
+ pmd_t *pmd, bool write);

/*
* mm/mmap.c
--
2.41.0


2023-12-19 08:06:06

by Peter Xu

Subject: [PATCH 12/13] mm/gup: Handle hugepd for follow_page()

From: Peter Xu <[email protected]>

Hugepd is so far only used on PowerPC, on 4K page size kernels where the
hash MMU is used. follow_page_mask() used to leverage the hugetlb APIs to
access hugepd entries. Teach follow_page_mask() to handle hugepd entries
itself.

With the previous refactoring of fast-gup's gup_huge_pd(), most of the code
can be easily leveraged. Some of it is not needed for follow_page(); for
example, gup_hugepte() tries to detect pgtable entry changes, which will
never happen with slow gup (which holds the pgtable lock), but checking for
that is harmless.

Since follow_page() always fetches only one page, setting the end to
"address + PAGE_SIZE" should suffice. We will still do the pgtable walk
only once per hugetlb page, by setting ctx->page_mask properly.

One thing worth mentioning is that some pgtable levels' _bad() helpers will
report is_hugepd() entries as TRUE on Power8 hash MMUs. I think it at
least applies to the PUD level on Power8 with 4K pgsize, meaning that
feeding a hugepd entry to pud_bad() will report a false positive. Let's
leave that for now, because it can be arch-specific and I am a bit
reluctant to touch it. It's not a problem in this patch as long as hugepd
is detected before any bad pgtable entries.

Signed-off-by: Peter Xu <[email protected]>
---
mm/gup.c | 78 +++++++++++++++++++++++++++++++++++++++++++++++++-------
1 file changed, 69 insertions(+), 9 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 080dff79b650..14a7d13e7bd6 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -30,6 +30,11 @@ struct follow_page_context {
unsigned int page_mask;
};

+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
+ unsigned long addr, unsigned int pdshift,
+ unsigned int flags,
+ struct follow_page_context *ctx);
+
static inline void sanity_check_pinned_pages(struct page **pages,
unsigned long npages)
{
@@ -871,6 +876,9 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
return no_page_table(vma, flags, address);
if (!pmd_present(pmdval))
return no_page_table(vma, flags, address);
+ if (unlikely(is_hugepd(__hugepd(pmd_val(pmdval)))))
+ return follow_hugepd(vma, __hugepd(pmd_val(pmdval)),
+ address, PMD_SHIFT, flags, ctx);
if (pmd_devmap(pmdval)) {
ptl = pmd_lock(mm, pmd);
page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
@@ -921,6 +929,9 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
pud = *pudp;
if (pud_none(pud) || !pud_present(pud))
return no_page_table(vma, flags, address);
+ if (unlikely(is_hugepd(__hugepd(pud_val(pud)))))
+ return follow_hugepd(vma, __hugepd(pud_val(pud)),
+ address, PUD_SHIFT, flags, ctx);
if (pud_huge(pud)) {
ptl = pud_lock(mm, pudp);
page = follow_huge_pud(vma, address, pudp, flags, ctx);
@@ -940,13 +951,17 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
unsigned int flags,
struct follow_page_context *ctx)
{
- p4d_t *p4d;
+ p4d_t *p4d, p4dval;

p4d = p4d_offset(pgdp, address);
- if (p4d_none(*p4d))
- return no_page_table(vma, flags, address);
- BUILD_BUG_ON(p4d_huge(*p4d));
- if (unlikely(p4d_bad(*p4d)))
+ p4dval = *p4d;
+ BUILD_BUG_ON(p4d_huge(p4dval));
+
+ if (unlikely(is_hugepd(__hugepd(p4d_val(p4dval)))))
+ return follow_hugepd(vma, __hugepd(p4d_val(p4dval)),
+ address, P4D_SHIFT, flags, ctx);
+
+ if (p4d_none(p4dval) || unlikely(p4d_bad(p4dval)))
return no_page_table(vma, flags, address);

return follow_pud_mask(vma, address, p4d, flags, ctx);
@@ -980,7 +995,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
unsigned long address, unsigned int flags,
struct follow_page_context *ctx)
{
- pgd_t *pgd;
+ pgd_t *pgd, pgdval;
struct mm_struct *mm = vma->vm_mm;

ctx->page_mask = 0;
@@ -995,11 +1010,17 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
&ctx->page_mask);

pgd = pgd_offset(mm, address);
+ pgdval = *pgd;

- if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
- return no_page_table(vma, flags, address);
+ if (unlikely(is_hugepd(__hugepd(pgd_val(pgdval)))))
+ page = follow_hugepd(vma, __hugepd(pgd_val(pgdval)),
+ address, PGDIR_SHIFT, flags, ctx);
+ else if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
+ page = no_page_table(vma, flags, address);
+ else
+ page = follow_p4d_mask(vma, address, pgd, flags, ctx);

- return follow_p4d_mask(vma, address, pgd, flags, ctx);
+ return page;
}

struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
@@ -3026,6 +3047,37 @@ static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,

return 1;
}
+
+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
+ unsigned long addr, unsigned int pdshift,
+ unsigned int flags,
+ struct follow_page_context *ctx)
+{
+ struct page *page;
+ struct hstate *h;
+ spinlock_t *ptl;
+ int nr = 0, ret;
+ pte_t *ptep;
+
+ /* Only hugetlb supports hugepd */
+ if (WARN_ON_ONCE(!is_vm_hugetlb_page(vma)))
+ return ERR_PTR(-EFAULT);
+
+ h = hstate_vma(vma);
+ ptep = hugepte_offset(hugepd, addr, pdshift);
+ ptl = huge_pte_lock(h, vma->vm_mm, ptep);
+ ret = gup_huge_pd(hugepd, addr, pdshift, addr + PAGE_SIZE,
+ flags, &page, &nr);
+ spin_unlock(ptl);
+
+ if (ret) {
+ WARN_ON_ONCE(nr != 1);
+ ctx->page_mask = (1U << huge_page_order(h)) - 1;
+ return page;
+ }
+
+ return NULL;
+}
#else
static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
unsigned int pdshift, unsigned long end, unsigned int flags,
@@ -3033,6 +3085,14 @@ static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
{
return 0;
}
+
+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
+ unsigned long addr, unsigned int pdshift,
+ unsigned int flags,
+ struct follow_page_context *ctx)
+{
+ return NULL;
+}
#endif /* CONFIG_ARCH_HAS_HUGEPD */

static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
--
2.41.0


2023-12-19 08:06:18

by Peter Xu

Subject: [PATCH 03/13] mm: Provide generic pmd_thp_or_huge()

From: Peter Xu <[email protected]>

ARM defines pmd_thp_or_huge() to detect either a THP or a huge PMD. It
can be a useful helper if we want to merge more THP and hugetlb code
paths. Make it a generic default implementation, which only exists with
CONFIG_MMU. An arch can override it by defining its own version.

For example, ARM's pgtable-2level.h defines it to always return false.

Keep the macro declared for all configs; it should be optimized to false
anyway if !THP && !HUGETLB.

Signed-off-by: Peter Xu <[email protected]>
---
include/linux/pgtable.h | 4 ++++
mm/gup.c | 3 +--
2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index af7639c3b0a3..6f2fa1977b8a 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1355,6 +1355,10 @@ static inline int pmd_write(pmd_t pmd)
#endif /* pmd_write */
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */

+#ifndef pmd_thp_or_huge
+#define pmd_thp_or_huge(pmd) (pmd_huge(pmd) || pmd_trans_huge(pmd))
+#endif
+
#ifndef pud_write
static inline int pud_write(pud_t pud)
{
diff --git a/mm/gup.c b/mm/gup.c
index 0a5f0e91bfec..efc9847e58fb 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -3004,8 +3004,7 @@ static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned lo
if (!pmd_present(pmd))
return 0;

- if (unlikely(pmd_trans_huge(pmd) || pmd_huge(pmd) ||
- pmd_devmap(pmd))) {
+ if (unlikely(pmd_thp_or_huge(pmd) || pmd_devmap(pmd))) {
/* See gup_pte_range() */
if (pmd_protnone(pmd))
return 0;
--
2.41.0


2023-12-19 08:06:37

by Peter Xu

Subject: [PATCH 13/13] mm/gup: Handle hugetlb in the generic follow_page_mask code

From: Peter Xu <[email protected]>

Now follow_page() is ready to handle hugetlb pages in whatever form, on all
architectures. Switch to the generic code path.

Time to retire hugetlb_follow_page_mask(), following the previous
retirement of follow_hugetlb_page() in 4849807114b8.

There may be a slight difference in how the loop runs when processing slow
GUP over a large hugetlb range on archs that support cont_pte/cont_pmd:
with the patch applied, each iteration of __get_user_pages() will resolve
one pgtable entry, rather than relying on the size of the hugetlb hstate,
which may cover multiple entries in one iteration.

A quick performance test on an aarch64 VM on an M1 chip shows a 15%
degradation on a tight loop of slow gup after the path is switched. That
shouldn't be a problem because slow gup should not be a hot path for GUP in
general: when a page is commonly present, fast-gup will already succeed,
while when the page is indeed missing and requires a follow-up page fault,
the slow gup degradation will probably be buried in the fault paths anyway.
It also explains why slow gup for THP used to be very slow before
57edfcfd3419 ("mm/gup: accelerate thp gup even for "pages != NULL"")
landed; noticing that is not the result of a performance analysis, just a
side benefit. If the performance ever becomes a concern, we can consider
handling CONT_PTE in follow_page().

Until that is justified to be necessary, keep everything clean and simple.

Signed-off-by: Peter Xu <[email protected]>
---
include/linux/hugetlb.h | 7 ----
mm/gup.c | 15 +++------
mm/hugetlb.c | 71 -----------------------------------------
3 files changed, 5 insertions(+), 88 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index f8c5c174c8a6..8a352d577ebf 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -332,13 +332,6 @@ static inline void hugetlb_zap_end(
{
}

-static inline struct page *hugetlb_follow_page_mask(
- struct vm_area_struct *vma, unsigned long address, unsigned int flags,
- unsigned int *page_mask)
-{
- BUILD_BUG(); /* should never be compiled in if !CONFIG_HUGETLB_PAGE*/
-}
-
static inline int copy_hugetlb_page_range(struct mm_struct *dst,
struct mm_struct *src,
struct vm_area_struct *dst_vma,
diff --git a/mm/gup.c b/mm/gup.c
index 14a7d13e7bd6..f34c0a912311 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -997,18 +997,11 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
{
pgd_t *pgd, pgdval;
struct mm_struct *mm = vma->vm_mm;
+ struct page *page;

- ctx->page_mask = 0;
-
- /*
- * Call hugetlb_follow_page_mask for hugetlb vmas as it will use
- * special hugetlb page table walking code. This eliminates the
- * need to check for hugetlb entries in the general walking code.
- */
- if (is_vm_hugetlb_page(vma))
- return hugetlb_follow_page_mask(vma, address, flags,
- &ctx->page_mask);
+ vma_pgtable_walk_begin(vma);

+ ctx->page_mask = 0;
pgd = pgd_offset(mm, address);
pgdval = *pgd;

@@ -1020,6 +1013,8 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
else
page = follow_p4d_mask(vma, address, pgd, flags, ctx);

+ vma_pgtable_walk_end(vma);
+
return page;
}

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 29705e5c6f40..3013122a739f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6783,77 +6783,6 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
}
#endif /* CONFIG_USERFAULTFD */

-struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
- unsigned long address, unsigned int flags,
- unsigned int *page_mask)
-{
- struct hstate *h = hstate_vma(vma);
- struct mm_struct *mm = vma->vm_mm;
- unsigned long haddr = address & huge_page_mask(h);
- struct page *page = NULL;
- spinlock_t *ptl;
- pte_t *pte, entry;
- int ret;
-
- hugetlb_vma_lock_read(vma);
- pte = hugetlb_walk(vma, haddr, huge_page_size(h));
- if (!pte)
- goto out_unlock;
-
- ptl = huge_pte_lock(h, mm, pte);
- entry = huge_ptep_get(pte);
- if (pte_present(entry)) {
- page = pte_page(entry);
-
- if (!huge_pte_write(entry)) {
- if (flags & FOLL_WRITE) {
- page = NULL;
- goto out;
- }
-
- if (gup_must_unshare(vma, flags, page)) {
- /* Tell the caller to do unsharing */
- page = ERR_PTR(-EMLINK);
- goto out;
- }
- }
-
- page = nth_page(page, ((address & ~huge_page_mask(h)) >> PAGE_SHIFT));
-
- /*
- * Note that page may be a sub-page, and with vmemmap
- * optimizations the page struct may be read only.
- * try_grab_page() will increase the ref count on the
- * head page, so this will be OK.
- *
- * try_grab_page() should always be able to get the page here,
- * because we hold the ptl lock and have verified pte_present().
- */
- ret = try_grab_page(page, flags);
-
- if (WARN_ON_ONCE(ret)) {
- page = ERR_PTR(ret);
- goto out;
- }
-
- *page_mask = (1U << huge_page_order(h)) - 1;
- }
-out:
- spin_unlock(ptl);
-out_unlock:
- hugetlb_vma_unlock_read(vma);
-
- /*
- * Fixup retval for dump requests: if pagecache doesn't exist,
- * don't try to allocate a new page but just skip it.
- */
- if (!page && (flags & FOLL_DUMP) &&
- !hugetlbfs_pagecache_present(h, vma, address))
- page = ERR_PTR(-EFAULT);
-
- return page;
-}
-
long hugetlb_change_protection(struct vm_area_struct *vma,
unsigned long address, unsigned long end,
pgprot_t newprot, unsigned long cp_flags)
--
2.41.0


2023-12-19 08:06:54

by Peter Xu

Subject: [PATCH 02/13] mm/hugetlb: Declare hugetlbfs_pagecache_present() non-static

From: Peter Xu <[email protected]>

It will be used outside hugetlb.c soon.

Signed-off-by: Peter Xu <[email protected]>
---
include/linux/hugetlb.h | 9 +++++++++
mm/hugetlb.c | 4 ++--
2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 236ec7b63c54..f8c5c174c8a6 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -174,6 +174,9 @@ u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx);

pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
unsigned long addr, pud_t *pud);
+bool hugetlbfs_pagecache_present(struct hstate *h,
+ struct vm_area_struct *vma,
+ unsigned long address);

struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage);

@@ -1221,6 +1224,12 @@ static inline void hugetlb_register_node(struct node *node)
static inline void hugetlb_unregister_node(struct node *node)
{
}
+
+static inline bool hugetlbfs_pagecache_present(
+ struct hstate *h, struct vm_area_struct *vma, unsigned long address)
+{
+ return false;
+}
#endif /* CONFIG_HUGETLB_PAGE */

static inline spinlock_t *huge_pte_lock(struct hstate *h,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6feb3e0630d1..29705e5c6f40 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6018,8 +6018,8 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
/*
* Return whether there is a pagecache page to back given address within VMA.
*/
-static bool hugetlbfs_pagecache_present(struct hstate *h,
- struct vm_area_struct *vma, unsigned long address)
+bool hugetlbfs_pagecache_present(struct hstate *h,
+ struct vm_area_struct *vma, unsigned long address)
{
struct address_space *mapping = vma->vm_file->f_mapping;
pgoff_t idx = linear_page_index(vma, address);
--
2.41.0


2023-12-19 08:07:14

by Peter Xu

Subject: [PATCH 05/13] mm: Introduce vma_pgtable_walk_{begin|end}()

From: Peter Xu <[email protected]>

Introduce per-vma begin()/end() helpers for pgtable walks. This is
preparation work for merging the hugetlb pgtable walkers with generic mm.

The helpers need to be called before and after a pgtable walk, and will
start to be needed once the pgtable walker code supports hugetlb pages.
It's a hook point for any type of VMA, but for now only hugetlb uses it, to
stabilize the pgtable pages and keep them from going away (due to possible
pmd unsharing).
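
As a rough sketch of the intended usage (this is how the last patch of the
series wires it up around the generic page walker; shown here only for
illustration):

	vma_pgtable_walk_begin(vma);
	/* ... walk the pgtables of "vma", e.g. via follow_page_mask() ... */
	vma_pgtable_walk_end(vma);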

Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
---
include/linux/mm.h | 3 +++
mm/memory.c | 12 ++++++++++++
2 files changed, 15 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b72bf25a45cf..85e43775824b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4181,4 +4181,7 @@ static inline bool pfn_is_unaccepted_memory(unsigned long pfn)
return range_contains_unaccepted_memory(paddr, paddr + PAGE_SIZE);
}

+void vma_pgtable_walk_begin(struct vm_area_struct *vma);
+void vma_pgtable_walk_end(struct vm_area_struct *vma);
+
#endif /* _LINUX_MM_H */
diff --git a/mm/memory.c b/mm/memory.c
index 1795aba53cf5..9ac6a9db971e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6270,3 +6270,15 @@ void ptlock_free(struct ptdesc *ptdesc)
kmem_cache_free(page_ptl_cachep, ptdesc->ptl);
}
#endif
+
+void vma_pgtable_walk_begin(struct vm_area_struct *vma)
+{
+ if (is_vm_hugetlb_page(vma))
+ hugetlb_vma_lock_read(vma);
+}
+
+void vma_pgtable_walk_end(struct vm_area_struct *vma)
+{
+ if (is_vm_hugetlb_page(vma))
+ hugetlb_vma_unlock_read(vma);
+}
--
2.41.0


2023-12-19 08:07:33

by Peter Xu

Subject: [PATCH 01/13] mm/Kconfig: CONFIG_PGTABLE_HAS_HUGE_LEAVES

From: Peter Xu <[email protected]>

Introduce a config option that will be selected whenever huge leaves can be
involved in pgtables (THP or hugetlbfs). It is useful for marking, with
this new config, any code that can process either hugetlb or THP pages at
any level higher than the pte level.

Signed-off-by: Peter Xu <[email protected]>
---
mm/Kconfig | 3 +++
1 file changed, 3 insertions(+)

diff --git a/mm/Kconfig b/mm/Kconfig
index 8f8b02e9c136..4ca97d959323 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -904,6 +904,9 @@ config READ_ONLY_THP_FOR_FS

endif # TRANSPARENT_HUGEPAGE

+config PGTABLE_HAS_HUGE_LEAVES
+ def_bool TRANSPARENT_HUGEPAGE || HUGETLB_PAGE
+
#
# UP and nommu archs use km based percpu allocator
#
--
2.41.0


2023-12-19 16:29:38

by James Houghton

Subject: Re: [PATCH 09/13] mm/gup: Cache *pudp in follow_pud_mask()

On Tue, Dec 19, 2023 at 2:57 AM <[email protected]> wrote:
>
> From: Peter Xu <[email protected]>
>
> Introduce "pud_t pud" in the function, so the code won't dereference *pudp
> multiple time. Not only because that looks less straightforward, but also
> because if the dereference really happened, it's not clear whether there
> can be race to see different *pudp values if it's being modified at the
> same time.
>
> Signed-off-by: Peter Xu <[email protected]>
> ---
> mm/gup.c | 17 +++++++++--------
> 1 file changed, 9 insertions(+), 8 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 6c0d82fa8cc7..97e87b7a15c3 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -753,26 +753,27 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
> unsigned int flags,
> struct follow_page_context *ctx)
> {
> - pud_t *pud;
> + pud_t *pudp, pud;
> spinlock_t *ptl;
> struct page *page;
> struct mm_struct *mm = vma->vm_mm;
>
> - pud = pud_offset(p4dp, address);
> - if (pud_none(*pud))
> + pudp = pud_offset(p4dp, address);
> + pud = *pudp;

I think you might want a READ_ONCE() on this so that the compiler
doesn't actually read the pud multiple times.
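
Something like this, I mean (untested, just to spell out the suggestion):

	pudp = pud_offset(p4dp, address);
	pud = READ_ONCE(*pudp);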

> + if (pud_none(pud))
> return no_page_table(vma, flags, address);
> - if (pud_devmap(*pud)) {
> - ptl = pud_lock(mm, pud);
> - page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
> + if (pud_devmap(pud)) {
> + ptl = pud_lock(mm, pudp);
> + page = follow_devmap_pud(vma, address, pudp, flags, &ctx->pgmap);
> spin_unlock(ptl);
> if (page)
> return page;
> return no_page_table(vma, flags, address);
> }
> - if (unlikely(pud_bad(*pud)))
> + if (unlikely(pud_bad(pud)))
> return no_page_table(vma, flags, address);

Not your change, but reading this, it's not clear to me that
`pud_present(*pudp)` (and non-leaf) would necessarily be true at this
point -- like, I would prefer to see `!pud_present(pud)` instead of
`pud_bad()`. Thank you for adding that in the next patch. :)

Feel free to add:

Acked-by: James Houghton <[email protected]>

2023-12-20 01:43:42

by Peter Xu

Subject: Re: [PATCH 09/13] mm/gup: Cache *pudp in follow_pud_mask()

On Tue, Dec 19, 2023 at 11:28:54AM -0500, James Houghton wrote:
> On Tue, Dec 19, 2023 at 2:57 AM <[email protected]> wrote:
> >
> > From: Peter Xu <[email protected]>
> >
> > Introduce "pud_t pud" in the function, so the code won't dereference *pudp
> > multiple time. Not only because that looks less straightforward, but also
> > because if the dereference really happened, it's not clear whether there
> > can be race to see different *pudp values if it's being modified at the
> > same time.
> >
> > Signed-off-by: Peter Xu <[email protected]>
> > ---
> > mm/gup.c | 17 +++++++++--------
> > 1 file changed, 9 insertions(+), 8 deletions(-)
> >
> > diff --git a/mm/gup.c b/mm/gup.c
> > index 6c0d82fa8cc7..97e87b7a15c3 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -753,26 +753,27 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
> > unsigned int flags,
> > struct follow_page_context *ctx)
> > {
> > - pud_t *pud;
> > + pud_t *pudp, pud;
> > spinlock_t *ptl;
> > struct page *page;
> > struct mm_struct *mm = vma->vm_mm;
> >
> > - pud = pud_offset(p4dp, address);
> > - if (pud_none(*pud))
> > + pudp = pud_offset(p4dp, address);
> > + pud = *pudp;
>
> I think you might want a READ_ONCE() on this so that the compiler
> doesn't actually read the pud multiple times.

Makes sense. I probably only did the "split" part which Christoph
requested, without thinking further than that. :)

>
> > + if (pud_none(pud))
> > return no_page_table(vma, flags, address);
> > - if (pud_devmap(*pud)) {
> > - ptl = pud_lock(mm, pud);
> > - page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
> > + if (pud_devmap(pud)) {
> > + ptl = pud_lock(mm, pudp);
> > + page = follow_devmap_pud(vma, address, pudp, flags, &ctx->pgmap);
> > spin_unlock(ptl);
> > if (page)
> > return page;
> > return no_page_table(vma, flags, address);
> > }
> > - if (unlikely(pud_bad(*pud)))
> > + if (unlikely(pud_bad(pud)))
> > return no_page_table(vma, flags, address);
>
> Not your change, but reading this, it's not clear to me that
> `pud_present(*pudp)` (and non-leaf) would necessarily be true at this
> point -- like, I would prefer to see `!pud_present(pud)` instead of
> `pud_bad()`. Thank you for adding that in the next patch. :)

I think the assumption here is that it is expected to be a directory entry
when reaching here, and for a valid directory entry pud_present() should
always return true (a side note: pud_present() may not mean "PRESENT bit
set"; see m68k's implementation for example).

Yeah, I added that in the next patch; my intention was to check
!pud_present() for all cases without the need to take the pgtable lock,
though.

>
> Feel free to add:
>
> Acked-by: James Houghton <[email protected]>

Thanks,

--
Peter Xu


2023-12-22 01:36:12

by Peter Xu

Subject: Re: [PATCH 00/13] mm/gup: Unify hugetlb, part 2

Copy Muchun, whom I forgot from the start, sorry.

--
Peter Xu


2023-12-25 06:30:17

by Muchun Song

Subject: Re: [PATCH 03/13] mm: Provide generic pmd_thp_or_huge()



On 2023/12/19 15:55, [email protected] wrote:
> From: Peter Xu <[email protected]>
>
> ARM defines pmd_thp_or_huge(), detecting either a THP or a huge PMD. It
> can be a helpful helper if we want to merge more THP and hugetlb code
> paths. Make it a generic default implementation, only exist when
> CONFIG_MMU. Arch can overwrite it by defining its own version.
>
> For example, ARM's pgtable-2level.h defines it to always return false.
>
> Keep the macro declared with all config, it should be optimized to a false
> anyway if !THP && !HUGETLB.
>
> Signed-off-by: Peter Xu <[email protected]>
> ---
> include/linux/pgtable.h | 4 ++++
> mm/gup.c | 3 +--
> 2 files changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index af7639c3b0a3..6f2fa1977b8a 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1355,6 +1355,10 @@ static inline int pmd_write(pmd_t pmd)
> #endif /* pmd_write */
> #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> +#ifndef pmd_thp_or_huge

I think it may be time to rename it to pmd_thp_or_hugetlb;
the "huge" is really confusing. Is THP not huge? Actually,
it is huge. It is better to make it more specific from now on, like
"hugetlb".

BTW, please cc me via the new email ([email protected]) next edition.

Thanks.

> +#define pmd_thp_or_huge(pmd) (pmd_huge(pmd) || pmd_trans_huge(pmd))
> +#endif
> +
> #ifndef pud_write
> static inline int pud_write(pud_t pud)
> {
> diff --git a/mm/gup.c b/mm/gup.c
> index 0a5f0e91bfec..efc9847e58fb 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -3004,8 +3004,7 @@ static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned lo
> if (!pmd_present(pmd))
> return 0;
>
> - if (unlikely(pmd_trans_huge(pmd) || pmd_huge(pmd) ||
> - pmd_devmap(pmd))) {
> + if (unlikely(pmd_thp_or_huge(pmd) || pmd_devmap(pmd))) {
> /* See gup_pte_range() */
> if (pmd_protnone(pmd))
> return 0;


2023-12-25 06:35:45

by Muchun Song

Subject: Re: [PATCH 05/13] mm: Introduce vma_pgtable_walk_{begin|end}()



> On Dec 19, 2023, at 15:55, [email protected] wrote:
>
> From: Peter Xu <[email protected]>
>
> Introduce per-vma begin()/end() helpers for pgtable walks. This is a
> preparation work to merge hugetlb pgtable walkers with generic mm.
>
> The helpers need to be called before and after a pgtable walk, will start
> to be needed if the pgtable walker code supports hugetlb pages. It's a
> hook point for any type of VMA, but for now only hugetlb uses it to
> stablize the pgtable pages from getting away (due to possible pmd
> unsharing).
>
> Reviewed-by: Christoph Hellwig <[email protected]>
> Signed-off-by: Peter Xu <[email protected]>

Reviewed-by: Muchun Song <[email protected]>

Thanks.


2024-01-02 05:37:50

by Peter Xu

Subject: Re: [PATCH 03/13] mm: Provide generic pmd_thp_or_huge()

On Mon, Dec 25, 2023 at 02:29:53PM +0800, Muchun Song wrote:
> > @@ -1355,6 +1355,10 @@ static inline int pmd_write(pmd_t pmd)
> > #endif /* pmd_write */
> > #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> > +#ifndef pmd_thp_or_huge
>
> I think it may be the time to rename to pmd_thp_or_hugetlb,
> the "huge" is really confusing. thp is not huge? Actually,
> it is huge. It is better to make it more specific from now on, like
> "hugetlb".

The rename will need to touch ARM code, which I wanted to avoid, see:

arch/arm64/include/asm/pgtable.h:#define pmd_thp_or_huge(pmd) (pmd_huge(pmd) || pmd_trans_huge(pmd))

So far this series only touches generic code. Would you mind if I keep this
patch as-is, and leave the renaming for later?

>
> BTW, please cc me via the new email ([email protected]) next edition.

Sure. Thanks for taking a look.

--
Peter Xu


2024-01-02 05:39:59

by Peter Xu

Subject: Re: [PATCH 05/13] mm: Introduce vma_pgtable_walk_{begin|end}()

On Mon, Dec 25, 2023 at 02:34:48PM +0800, Muchun Song wrote:
> Reviewed-by: Muchun Song <[email protected]>

You're using the old email address here. Do you want me to also use the
linux.dev one that you suggested I use?

--
Peter Xu


2024-01-02 06:30:48

by Muchun Song

Subject: Re: [PATCH 05/13] mm: Introduce vma_pgtable_walk_{begin|end}()



> On Jan 2, 2024, at 13:39, Peter Xu <[email protected]> wrote:
>
> On Mon, Dec 25, 2023 at 02:34:48PM +0800, Muchun Song wrote:
>> Reviewed-by: Muchun Song <[email protected]>
>
> You're using the old email address here. Do you want me to also use the
> linux.dev one that you suggested me to use?

Either is OK for the RB tag.

>
> --
> Peter Xu
>


2024-01-02 06:31:12

by Muchun Song

Subject: Re: [PATCH 03/13] mm: Provide generic pmd_thp_or_huge()



> On Jan 2, 2024, at 13:37, Peter Xu <[email protected]> wrote:
>
> On Mon, Dec 25, 2023 at 02:29:53PM +0800, Muchun Song wrote:
>>> @@ -1355,6 +1355,10 @@ static inline int pmd_write(pmd_t pmd)
>>> #endif /* pmd_write */
>>> #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>>> +#ifndef pmd_thp_or_huge
>>
>> I think it may be the time to rename to pmd_thp_or_hugetlb,
>> the "huge" is really confusing. thp is not huge? Actually,
>> it is huge. It is better to make it more specific from now on, like
>> "hugetlb".
>
> The rename will need to touch ARM code, which I wanted to avoid, see:

I see.

>
> arch/arm64/include/asm/pgtable.h:#define pmd_thp_or_huge(pmd) (pmd_huge(pmd) || pmd_trans_huge(pmd))
>
> So far this series only touches generic code. Would you mind I keep this
> patch as-is, and leave renaming to later?

OK.

Thanks.

>
>>
>> BTW, please cc me via the new email ([email protected]) next edition.
>
> Sure. Thanks for taking a look.
>
> --
> Peter Xu
>