2024-03-27 15:38:01

by Peter Xu

Subject: [PATCH v4 00/13] mm/gup: Unify hugetlb, part 2

From: Peter Xu <[email protected]>

v4:
- Fix build issues, tested on more archs/configs ([x86_64, i386, arm, arm64,
powerpc, riscv, s390] x [allno, alldef, allmod]).
- Squashed the fixup series into v3, touched up commit messages [1]
- Added the patch to fix pud_pfn() into the series [2]
- Fixed one more build issue on arm+alldefconfig, where pgd_t is a
two-item array.
- Manage R-bs: add some, remove some (due to the squashes above)
- Rebase to latest mm-unstable (2f6182cd23a7, March 26th)

rfc: https://lore.kernel.org/r/[email protected]
v1: https://lore.kernel.org/r/[email protected]
v2: https://lore.kernel.org/r/[email protected]
v3: https://lore.kernel.org/r/[email protected]

The series removes the hugetlb slow gup path after a previous refactoring
[1], so that slow gup now uses the exact same path to process all kinds of
memory, including hugetlb.

For the long term, we may want to remove most, if not all, call sites of
huge_pte_offset(). Ideally, that API could be dropped from the arch
hugetlb API completely. This series is one small step towards merging
hugetlb-specific code into the generic mm paths. From that POV, this
series removes one reference to huge_pte_offset() out of many others.

One goal of this route is that we can reconsider merging hugetlb features
like High Granularity Mapping (HGM). It was not accepted in the past
because it would add a lot of hugetlb-specific code and make the mm code
even harder to maintain. With a merged codebase, features like HGM can
hopefully share some code with THP, whether legacy (PMD+) or modern
(contiguous PTEs).

To make it work, the generic slow gup code needs to at least understand
hugepd, as fast-gup already does. Since hugepd is a software-only
construct (no hardware recognizes the hugepd format; it is a purely
artificial structure), there is a chance we can merge some or all hugepd
formats with cont_pte in the future. That question is still unsettled,
pending an acknowledgement from the Power side. For now, this series
keeps the hugepd handling, because we may still need it until there is a
clearer picture of hugepd's future. The other reason is simply that we
already did this for fast-gup and most of the code is still around to be
reused. It makes more sense to keep slow/fast gup behaving the same until
a decision is made to remove hugepd.

There's one major difference for slow-gup regarding cont_pte / cont_pmd
handling, currently supported on three architectures (aarch64, riscv,
ppc). Before the series, slow gup was able to recognize e.g. cont_pte
entries with the help of huge_pte_offset() when an hstate was around.
That is now gone, but things still work, by looking up pgtable entries
one by one.

It's not ideal, but hopefully this change does not yet affect major
workloads. There's some more information in the commit message of the
last patch. If this becomes a concern, we can consider teaching slow gup
to recognize cont pte/pmd entries, and that should recover the lost
performance. But I doubt its necessity for now, so I kept it as simple as
it can be.

Test Done
=========

For x86_64, tested full gup_test matrix over 2MB huge pages. For aarch64,
tested the same over 64KB cont_pte huge pages.

One note is that this v4 didn't go through any ppc test, as finding such
a system always takes time. This relies on the fact that it was tested in
previous versions, and this version has zero change with regard to the
hugepd sections.

If anyone (Christophe?) wants to give it a shot on PowerPC, please do; I
would appreciate it: "./run_vmtests.sh -a -t gup_test" should do well
enough (please consider [2] applied if hugepd is <1MB), as long as we're
sure the hugepd pages are touched as expected.

Patch layout
=============

Patch 1-8: Preparation work and cleanups in relevant code paths
Patch 9-12: Teach slow gup about all kinds of huge entries (pXd, hugepd)
Patch 13: Drop hugetlb_follow_page_mask()

More information can be found in the commit messages of each patch.
Comments are welcome. Thanks.

[1] https://lore.kernel.org/all/[email protected]
[2] https://lore.kernel.org/r/[email protected]

Peter Xu (13):
mm/Kconfig: CONFIG_PGTABLE_HAS_HUGE_LEAVES
mm/hugetlb: Declare hugetlbfs_pagecache_present() non-static
mm: Make HPAGE_PXD_* macros even if !THP
mm: Introduce vma_pgtable_walk_{begin|end}()
mm/arch: Provide pud_pfn() fallback
mm/gup: Drop folio_fast_pin_allowed() in hugepd processing
mm/gup: Refactor record_subpages() to find 1st small page
mm/gup: Handle hugetlb for no_page_table()
mm/gup: Cache *pudp in follow_pud_mask()
mm/gup: Handle huge pud for follow_pud_mask()
mm/gup: Handle huge pmd for follow_pmd_mask()
mm/gup: Handle hugepd for follow_page()
mm/gup: Handle hugetlb in the generic follow_page_mask code

arch/riscv/include/asm/pgtable.h | 1 +
arch/s390/include/asm/pgtable.h | 1 +
arch/sparc/include/asm/pgtable_64.h | 1 +
arch/x86/include/asm/pgtable.h | 1 +
include/linux/huge_mm.h | 37 +-
include/linux/hugetlb.h | 16 +-
include/linux/mm.h | 3 +
include/linux/pgtable.h | 10 +
mm/Kconfig | 6 +
mm/gup.c | 518 ++++++++++++++++++++--------
mm/huge_memory.c | 133 +------
mm/hugetlb.c | 75 +---
mm/internal.h | 7 +-
mm/memory.c | 12 +
14 files changed, 441 insertions(+), 380 deletions(-)

--
2.44.0



2024-03-27 15:42:54

by Peter Xu

Subject: [PATCH v4 05/13] mm/arch: Provide pud_pfn() fallback

From: Peter Xu <[email protected]>

The comment in the code explains the reasons. We took a different approach
compared to pmd_pfn() by providing a fallback function.

Another option is to provide some lower-level config options (compared to
HUGETLB_PAGE or THP) to identify which levels an arch can support for such
huge mappings. However, that would be overkill.

Cc: Mike Rapoport (IBM) <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Reviewed-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
---
arch/riscv/include/asm/pgtable.h | 1 +
arch/s390/include/asm/pgtable.h | 1 +
arch/sparc/include/asm/pgtable_64.h | 1 +
arch/x86/include/asm/pgtable.h | 1 +
include/linux/pgtable.h | 10 ++++++++++
5 files changed, 14 insertions(+)

diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 20242402fc11..0ca28cc8e3fa 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -646,6 +646,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd)

#define __pud_to_phys(pud) (__page_val_to_pfn(pud_val(pud)) << PAGE_SHIFT)

+#define pud_pfn pud_pfn
static inline unsigned long pud_pfn(pud_t pud)
{
return ((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT);
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 1a71cb19c089..6cbbe473f680 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -1414,6 +1414,7 @@ static inline unsigned long pud_deref(pud_t pud)
return (unsigned long)__va(pud_val(pud) & origin_mask);
}

+#define pud_pfn pud_pfn
static inline unsigned long pud_pfn(pud_t pud)
{
return __pa(pud_deref(pud)) >> PAGE_SHIFT;
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 4d1bafaba942..26efc9bb644a 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -875,6 +875,7 @@ static inline bool pud_leaf(pud_t pud)
return pte_val(pte) & _PAGE_PMD_HUGE;
}

+#define pud_pfn pud_pfn
static inline unsigned long pud_pfn(pud_t pud)
{
pte_t pte = __pte(pud_val(pud));
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index cefc7a84f7a4..273f7557218c 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -234,6 +234,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
return (pfn & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
}

+#define pud_pfn pud_pfn
static inline unsigned long pud_pfn(pud_t pud)
{
phys_addr_t pfn = pud_val(pud);
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 600e17d03659..75fe309a4e10 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1817,6 +1817,16 @@ typedef unsigned int pgtbl_mod_mask;
#define pte_leaf_size(x) PAGE_SIZE
#endif

+/*
+ * We always define pmd_pfn for all archs as it's used in lots of generic
+ * code. Now it happens too for pud_pfn (and can happen for larger
+ * mappings too in the future; we're not there yet). Instead of defining
+ * it for all archs (like pmd_pfn), provide a fallback.
+ */
+#ifndef pud_pfn
+#define pud_pfn(x) ({ BUILD_BUG(); 0; })
+#endif
+
/*
* Some architectures have MMUs that are configurable or selectable at boot
* time. These lead to variable PTRS_PER_x. For statically allocated arrays it
--
2.44.0


2024-03-27 15:47:09

by Peter Xu

Subject: [PATCH v4 10/13] mm/gup: Handle huge pud for follow_pud_mask()

From: Peter Xu <[email protected]>

Teach follow_pud_mask() to be able to handle normal PUD pages like hugetlb.

Rename follow_devmap_pud() to follow_huge_pud() so that it can process
either huge devmap or hugetlb. Move it out of TRANSPARENT_HUGEPAGE_PUD
and huge_memory.c (which relies on CONFIG_THP). Switch to pud_leaf() to
detect both cases in the slow gup.

In the new follow_huge_pud(), take care of possible CoR for hugetlb if
necessary. touch_pud() needs to be moved out of huge_memory.c to be
accessible from gup.c even if !THP.

While at it, optimize the non-present check by adding a pud_present()
early check before taking the pgtable lock, failing the follow_page()
early if the PUD is not present: that is required by both devmap and
hugetlb. Use pud_huge() to also cover the pud_devmap() case.

One more trivial thing to mention: introduce "pud_t pud" in the code
paths along the way, so the code doesn't dereference *pudp multiple
times. Not only because that looks less straightforward, but also
because if the dereference really happened, it's not clear whether there
can be a race to see different *pudp values when it's being modified at
the same time.

Set ctx->page_mask properly for a PUD entry. As a side effect, this
patch should also be able to optimize devmap GUP on PUD to jump over the
whole PUD range, but that is not yet verified. Hugetlb could already do
so prior to this patch.

Reviewed-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
---
include/linux/huge_mm.h | 8 -----
mm/gup.c | 70 +++++++++++++++++++++++++++++++++++++++--
mm/huge_memory.c | 47 ++-------------------------
mm/internal.h | 2 ++
4 files changed, 71 insertions(+), 56 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index d3bb25c39482..3f36511bdc02 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -351,8 +351,6 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)

struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
pmd_t *pmd, int flags, struct dev_pagemap **pgmap);
-struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
- pud_t *pud, int flags, struct dev_pagemap **pgmap);

vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);

@@ -507,12 +505,6 @@ static inline struct page *follow_devmap_pmd(struct vm_area_struct *vma,
return NULL;
}

-static inline struct page *follow_devmap_pud(struct vm_area_struct *vma,
- unsigned long addr, pud_t *pud, int flags, struct dev_pagemap **pgmap)
-{
- return NULL;
-}
-
static inline bool thp_migration_supported(void)
{
return false;
diff --git a/mm/gup.c b/mm/gup.c
index 26b8cca24077..1e5d42211bb4 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -525,6 +525,70 @@ static struct page *no_page_table(struct vm_area_struct *vma,
return NULL;
}

+#ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
+static struct page *follow_huge_pud(struct vm_area_struct *vma,
+ unsigned long addr, pud_t *pudp,
+ int flags, struct follow_page_context *ctx)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ struct page *page;
+ pud_t pud = *pudp;
+ unsigned long pfn = pud_pfn(pud);
+ int ret;
+
+ assert_spin_locked(pud_lockptr(mm, pudp));
+
+ if ((flags & FOLL_WRITE) && !pud_write(pud))
+ return NULL;
+
+ if (!pud_present(pud))
+ return NULL;
+
+ pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
+
+ if (IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) &&
+ pud_devmap(pud)) {
+ /*
+ * device mapped pages can only be returned if the caller
+ * will manage the page reference count.
+ *
+ * At least one of FOLL_GET | FOLL_PIN must be set, so
+ * assert that here:
+ */
+ if (!(flags & (FOLL_GET | FOLL_PIN)))
+ return ERR_PTR(-EEXIST);
+
+ if (flags & FOLL_TOUCH)
+ touch_pud(vma, addr, pudp, flags & FOLL_WRITE);
+
+ ctx->pgmap = get_dev_pagemap(pfn, ctx->pgmap);
+ if (!ctx->pgmap)
+ return ERR_PTR(-EFAULT);
+ }
+
+ page = pfn_to_page(pfn);
+
+ if (!pud_devmap(pud) && !pud_write(pud) &&
+ gup_must_unshare(vma, flags, page))
+ return ERR_PTR(-EMLINK);
+
+ ret = try_grab_page(page, flags);
+ if (ret)
+ page = ERR_PTR(ret);
+ else
+ ctx->page_mask = HPAGE_PUD_NR - 1;
+
+ return page;
+}
+#else /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
+static struct page *follow_huge_pud(struct vm_area_struct *vma,
+ unsigned long addr, pud_t *pudp,
+ int flags, struct follow_page_context *ctx)
+{
+ return NULL;
+}
+#endif /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
+
static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
pte_t *pte, unsigned int flags)
{
@@ -760,11 +824,11 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,

pudp = pud_offset(p4dp, address);
pud = READ_ONCE(*pudp);
- if (pud_none(pud))
+ if (!pud_present(pud))
return no_page_table(vma, flags, address);
- if (pud_devmap(pud)) {
+ if (pud_leaf(pud)) {
ptl = pud_lock(mm, pudp);
- page = follow_devmap_pud(vma, address, pudp, flags, &ctx->pgmap);
+ page = follow_huge_pud(vma, address, pudp, flags, ctx);
spin_unlock(ptl);
if (page)
return page;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index bc6fa82d9815..2979198d7b71 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1377,8 +1377,8 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
}

#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
-static void touch_pud(struct vm_area_struct *vma, unsigned long addr,
- pud_t *pud, bool write)
+void touch_pud(struct vm_area_struct *vma, unsigned long addr,
+ pud_t *pud, bool write)
{
pud_t _pud;

@@ -1390,49 +1390,6 @@ static void touch_pud(struct vm_area_struct *vma, unsigned long addr,
update_mmu_cache_pud(vma, addr, pud);
}

-struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
- pud_t *pud, int flags, struct dev_pagemap **pgmap)
-{
- unsigned long pfn = pud_pfn(*pud);
- struct mm_struct *mm = vma->vm_mm;
- struct page *page;
- int ret;
-
- assert_spin_locked(pud_lockptr(mm, pud));
-
- if (flags & FOLL_WRITE && !pud_write(*pud))
- return NULL;
-
- if (pud_present(*pud) && pud_devmap(*pud))
- /* pass */;
- else
- return NULL;
-
- if (flags & FOLL_TOUCH)
- touch_pud(vma, addr, pud, flags & FOLL_WRITE);
-
- /*
- * device mapped pages can only be returned if the
- * caller will manage the page reference count.
- *
- * At least one of FOLL_GET | FOLL_PIN must be set, so assert that here:
- */
- if (!(flags & (FOLL_GET | FOLL_PIN)))
- return ERR_PTR(-EEXIST);
-
- pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
- *pgmap = get_dev_pagemap(pfn, *pgmap);
- if (!*pgmap)
- return ERR_PTR(-EFAULT);
- page = pfn_to_page(pfn);
-
- ret = try_grab_page(page, flags);
- if (ret)
- page = ERR_PTR(ret);
-
- return page;
-}
-
int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
struct vm_area_struct *vma)
diff --git a/mm/internal.h b/mm/internal.h
index 6c8d3844b6a3..eee8c82740b5 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1111,6 +1111,8 @@ int __must_check try_grab_page(struct page *page, unsigned int flags);
/*
* mm/huge_memory.c
*/
+void touch_pud(struct vm_area_struct *vma, unsigned long addr,
+ pud_t *pud, bool write);
struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
unsigned long addr, pmd_t *pmd,
unsigned int flags);
--
2.44.0


2024-03-27 16:06:57

by Peter Xu

Subject: [PATCH v4 07/13] mm/gup: Refactor record_subpages() to find 1st small page

From: Peter Xu <[email protected]>

All the fast-gup functions take a tail page to operate on, and always
need to do page-mask calculations before feeding it into
record_subpages().

Merge that logic into record_subpages(), so that it does the nth_page()
calculation itself.

Reviewed-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
---
mm/gup.c | 25 ++++++++++++++-----------
1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index db35b056fc9a..c2881772216b 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2779,13 +2779,16 @@ static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
}
#endif

-static int record_subpages(struct page *page, unsigned long addr,
- unsigned long end, struct page **pages)
+static int record_subpages(struct page *page, unsigned long sz,
+ unsigned long addr, unsigned long end,
+ struct page **pages)
{
+ struct page *start_page;
int nr;

+ start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT);
for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
- pages[nr] = nth_page(page, nr);
+ pages[nr] = nth_page(start_page, nr);

return nr;
}
@@ -2820,8 +2823,8 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
/* hugepages are never "special" */
VM_BUG_ON(!pfn_valid(pte_pfn(pte)));

- page = nth_page(pte_page(pte), (addr & (sz - 1)) >> PAGE_SHIFT);
- refs = record_subpages(page, addr, end, pages + *nr);
+ page = pte_page(pte);
+ refs = record_subpages(page, sz, addr, end, pages + *nr);

folio = try_grab_folio(page, refs, flags);
if (!folio)
@@ -2894,8 +2897,8 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
pages, nr);
}

- page = nth_page(pmd_page(orig), (addr & ~PMD_MASK) >> PAGE_SHIFT);
- refs = record_subpages(page, addr, end, pages + *nr);
+ page = pmd_page(orig);
+ refs = record_subpages(page, PMD_SIZE, addr, end, pages + *nr);

folio = try_grab_folio(page, refs, flags);
if (!folio)
@@ -2938,8 +2941,8 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
pages, nr);
}

- page = nth_page(pud_page(orig), (addr & ~PUD_MASK) >> PAGE_SHIFT);
- refs = record_subpages(page, addr, end, pages + *nr);
+ page = pud_page(orig);
+ refs = record_subpages(page, PUD_SIZE, addr, end, pages + *nr);

folio = try_grab_folio(page, refs, flags);
if (!folio)
@@ -2978,8 +2981,8 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,

BUILD_BUG_ON(pgd_devmap(orig));

- page = nth_page(pgd_page(orig), (addr & ~PGDIR_MASK) >> PAGE_SHIFT);
- refs = record_subpages(page, addr, end, pages + *nr);
+ page = pgd_page(orig);
+ refs = record_subpages(page, PGDIR_SIZE, addr, end, pages + *nr);

folio = try_grab_folio(page, refs, flags);
if (!folio)
--
2.44.0


2024-03-27 16:13:23

by Peter Xu

Subject: [PATCH v4 08/13] mm/gup: Handle hugetlb for no_page_table()

From: Peter Xu <[email protected]>

no_page_table() is not yet used by hugetlb code paths. Prepare it for that.

The major difference here is that hugetlb will return -EFAULT as long as
the page cache does not exist, even if VM_SHARED. See
hugetlb_follow_page_mask().

Pass "address" into no_page_table() too, as hugetlb will need it.

Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
---
mm/gup.c | 44 ++++++++++++++++++++++++++------------------
1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index c2881772216b..ef46a7053e16 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -501,19 +501,27 @@ static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)

#ifdef CONFIG_MMU
static struct page *no_page_table(struct vm_area_struct *vma,
- unsigned int flags)
+ unsigned int flags, unsigned long address)
{
+ if (!(flags & FOLL_DUMP))
+ return NULL;
+
/*
- * When core dumping an enormous anonymous area that nobody
- * has touched so far, we don't want to allocate unnecessary pages or
+ * When core dumping, we don't want to allocate unnecessary pages or
* page tables. Return error instead of NULL to skip handle_mm_fault,
* then get_dump_page() will return NULL to leave a hole in the dump.
* But we can only make this optimization where a hole would surely
* be zero-filled if handle_mm_fault() actually did handle it.
*/
- if ((flags & FOLL_DUMP) &&
- (vma_is_anonymous(vma) || !vma->vm_ops->fault))
+ if (is_vm_hugetlb_page(vma)) {
+ struct hstate *h = hstate_vma(vma);
+
+ if (!hugetlbfs_pagecache_present(h, vma, address))
+ return ERR_PTR(-EFAULT);
+ } else if ((vma_is_anonymous(vma) || !vma->vm_ops->fault)) {
return ERR_PTR(-EFAULT);
+ }
+
return NULL;
}

@@ -593,7 +601,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,

ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
if (!ptep)
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);
pte = ptep_get(ptep);
if (!pte_present(pte))
goto no_page;
@@ -685,7 +693,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
pte_unmap_unlock(ptep, ptl);
if (!pte_none(pte))
return NULL;
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);
}

static struct page *follow_pmd_mask(struct vm_area_struct *vma,
@@ -701,27 +709,27 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
pmd = pmd_offset(pudp, address);
pmdval = pmdp_get_lockless(pmd);
if (pmd_none(pmdval))
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);
if (!pmd_present(pmdval))
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);
if (pmd_devmap(pmdval)) {
ptl = pmd_lock(mm, pmd);
page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
spin_unlock(ptl);
if (page)
return page;
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);
}
if (likely(!pmd_trans_huge(pmdval)))
return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);

if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags))
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);

ptl = pmd_lock(mm, pmd);
if (unlikely(!pmd_present(*pmd))) {
spin_unlock(ptl);
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);
}
if (unlikely(!pmd_trans_huge(*pmd))) {
spin_unlock(ptl);
@@ -752,17 +760,17 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,

pud = pud_offset(p4dp, address);
if (pud_none(*pud))
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);
if (pud_devmap(*pud)) {
ptl = pud_lock(mm, pud);
page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
spin_unlock(ptl);
if (page)
return page;
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);
}
if (unlikely(pud_bad(*pud)))
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);

return follow_pmd_mask(vma, address, pud, flags, ctx);
}
@@ -777,10 +785,10 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
p4dp = p4d_offset(pgdp, address);
p4d = READ_ONCE(*p4dp);
if (!p4d_present(p4d))
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);
BUILD_BUG_ON(p4d_leaf(p4d));
if (unlikely(p4d_bad(p4d)))
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);

return follow_pud_mask(vma, address, p4dp, flags, ctx);
}
@@ -830,7 +838,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
pgd = pgd_offset(mm, address);

if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
- return no_page_table(vma, flags);
+ return no_page_table(vma, flags, address);

return follow_p4d_mask(vma, address, pgd, flags, ctx);
}
--
2.44.0


2024-03-27 16:57:55

by Peter Xu

Subject: [PATCH v4 09/13] mm/gup: Cache *pudp in follow_pud_mask()

From: Peter Xu <[email protected]>

Introduce "pud_t pud" in the function, so the code won't dereference
*pudp multiple times. Not only because that looks less straightforward,
but also because if the dereference really happened, it's not clear
whether there can be a race to see different *pudp values if it's being
modified at the same time.

Acked-by: James Houghton <[email protected]>
Reviewed-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
---
mm/gup.c | 17 +++++++++--------
1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index ef46a7053e16..26b8cca24077 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -753,26 +753,27 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
unsigned int flags,
struct follow_page_context *ctx)
{
- pud_t *pud;
+ pud_t *pudp, pud;
spinlock_t *ptl;
struct page *page;
struct mm_struct *mm = vma->vm_mm;

- pud = pud_offset(p4dp, address);
- if (pud_none(*pud))
+ pudp = pud_offset(p4dp, address);
+ pud = READ_ONCE(*pudp);
+ if (pud_none(pud))
return no_page_table(vma, flags, address);
- if (pud_devmap(*pud)) {
- ptl = pud_lock(mm, pud);
- page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
+ if (pud_devmap(pud)) {
+ ptl = pud_lock(mm, pudp);
+ page = follow_devmap_pud(vma, address, pudp, flags, &ctx->pgmap);
spin_unlock(ptl);
if (page)
return page;
return no_page_table(vma, flags, address);
}
- if (unlikely(pud_bad(*pud)))
+ if (unlikely(pud_bad(pud)))
return no_page_table(vma, flags, address);

- return follow_pmd_mask(vma, address, pud, flags, ctx);
+ return follow_pmd_mask(vma, address, pudp, flags, ctx);
}

static struct page *follow_p4d_mask(struct vm_area_struct *vma,
--
2.44.0


2024-04-02 19:07:08

by Nathan Chancellor

Subject: Re: [PATCH v4 05/13] mm/arch: Provide pud_pfn() fallback

Hi Peter (and LoongArch folks),

On Wed, Mar 27, 2024 at 11:23:24AM -0400, [email protected] wrote:
> From: Peter Xu <[email protected]>
>
> The comment in the code explains the reasons. We took a different approach
> comparing to pmd_pfn() by providing a fallback function.
>
> Another option is to provide some lower level config options (compare to
> HUGETLB_PAGE or THP) to identify which layer an arch can support for such
> huge mappings. However that can be an overkill.
>
> Cc: Mike Rapoport (IBM) <[email protected]>
> Cc: Matthew Wilcox <[email protected]>
> Reviewed-by: Jason Gunthorpe <[email protected]>
> Signed-off-by: Peter Xu <[email protected]>
> ---
> arch/riscv/include/asm/pgtable.h | 1 +
> arch/s390/include/asm/pgtable.h | 1 +
> arch/sparc/include/asm/pgtable_64.h | 1 +
> arch/x86/include/asm/pgtable.h | 1 +
> include/linux/pgtable.h | 10 ++++++++++
> 5 files changed, 14 insertions(+)
>
> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> index 20242402fc11..0ca28cc8e3fa 100644
> --- a/arch/riscv/include/asm/pgtable.h
> +++ b/arch/riscv/include/asm/pgtable.h
> @@ -646,6 +646,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
>
> #define __pud_to_phys(pud) (__page_val_to_pfn(pud_val(pud)) << PAGE_SHIFT)
>
> +#define pud_pfn pud_pfn
> static inline unsigned long pud_pfn(pud_t pud)
> {
> return ((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT);
> diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
> index 1a71cb19c089..6cbbe473f680 100644
> --- a/arch/s390/include/asm/pgtable.h
> +++ b/arch/s390/include/asm/pgtable.h
> @@ -1414,6 +1414,7 @@ static inline unsigned long pud_deref(pud_t pud)
> return (unsigned long)__va(pud_val(pud) & origin_mask);
> }
>
> +#define pud_pfn pud_pfn
> static inline unsigned long pud_pfn(pud_t pud)
> {
> return __pa(pud_deref(pud)) >> PAGE_SHIFT;
> diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
> index 4d1bafaba942..26efc9bb644a 100644
> --- a/arch/sparc/include/asm/pgtable_64.h
> +++ b/arch/sparc/include/asm/pgtable_64.h
> @@ -875,6 +875,7 @@ static inline bool pud_leaf(pud_t pud)
> return pte_val(pte) & _PAGE_PMD_HUGE;
> }
>
> +#define pud_pfn pud_pfn
> static inline unsigned long pud_pfn(pud_t pud)
> {
> pte_t pte = __pte(pud_val(pud));
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index cefc7a84f7a4..273f7557218c 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -234,6 +234,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
> return (pfn & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
> }
>
> +#define pud_pfn pud_pfn
> static inline unsigned long pud_pfn(pud_t pud)
> {
> phys_addr_t pfn = pud_val(pud);
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 600e17d03659..75fe309a4e10 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1817,6 +1817,16 @@ typedef unsigned int pgtbl_mod_mask;
> #define pte_leaf_size(x) PAGE_SIZE
> #endif
>
> +/*
> + * We always define pmd_pfn for all archs as it's used in lots of generic
> + * code. Now it happens too for pud_pfn (and can happen for larger
> + * mappings too in the future; we're not there yet). Instead of defining
> + * it for all archs (like pmd_pfn), provide a fallback.
> + */
> +#ifndef pud_pfn
> +#define pud_pfn(x) ({ BUILD_BUG(); 0; })
> +#endif
> +
> /*
> * Some architectures have MMUs that are configurable or selectable at boot
> * time. These lead to variable PTRS_PER_x. For statically allocated arrays it
> --
> 2.44.0
>

This BUILD_BUG() triggers for LoongArch with their defconfig, so it
seems like they need to provide an implementation of pud_pfn()?

In function 'follow_huge_pud',
inlined from 'follow_pud_mask' at mm/gup.c:1075:10,
inlined from 'follow_p4d_mask' at mm/gup.c:1105:9,
inlined from 'follow_page_mask' at mm/gup.c:1151:10:
include/linux/compiler_types.h:460:45: error: call to '__compiletime_assert_382' declared with attribute error: BUILD_BUG failed
460 | _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
| ^
include/linux/compiler_types.h:441:25: note: in definition of macro '__compiletime_assert'
441 | prefix ## suffix(); \
| ^~~~~~
include/linux/compiler_types.h:460:9: note: in expansion of macro '_compiletime_assert'
460 | _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
| ^~~~~~~~~~~~~~~~~~~
include/linux/build_bug.h:39:37: note: in expansion of macro 'compiletime_assert'
39 | #define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
| ^~~~~~~~~~~~~~~~~~
include/linux/build_bug.h:59:21: note: in expansion of macro 'BUILD_BUG_ON_MSG'
59 | #define BUILD_BUG() BUILD_BUG_ON_MSG(1, "BUILD_BUG failed")
| ^~~~~~~~~~~~~~~~
include/linux/pgtable.h:1887:23: note: in expansion of macro 'BUILD_BUG'
1887 | #define pud_pfn(x) ({ BUILD_BUG(); 0; })
| ^~~~~~~~~
mm/gup.c:679:29: note: in expansion of macro 'pud_pfn'
679 | unsigned long pfn = pud_pfn(pud);
| ^~~~~~~

Cheers,
Nathan

2024-04-02 22:44:58

by Peter Xu

Subject: Re: [PATCH v4 05/13] mm/arch: Provide pud_pfn() fallback

On Tue, Apr 02, 2024 at 12:05:49PM -0700, Nathan Chancellor wrote:
> Hi Peter (and LoongArch folks),
>
> On Wed, Mar 27, 2024 at 11:23:24AM -0400, [email protected] wrote:
> > From: Peter Xu <[email protected]>
> >
> > The comment in the code explains the reasons. We took a different approach
> > compared to pmd_pfn() by providing a fallback function.
> >
> > Another option is to provide some lower-level config options (compared to
> > HUGETLB_PAGE or THP) to identify which levels an arch can support for such
> > huge mappings. However, that would be overkill.
> >
> > Cc: Mike Rapoport (IBM) <[email protected]>
> > Cc: Matthew Wilcox <[email protected]>
> > Reviewed-by: Jason Gunthorpe <[email protected]>
> > Signed-off-by: Peter Xu <[email protected]>
> > ---
> > arch/riscv/include/asm/pgtable.h | 1 +
> > arch/s390/include/asm/pgtable.h | 1 +
> > arch/sparc/include/asm/pgtable_64.h | 1 +
> > arch/x86/include/asm/pgtable.h | 1 +
> > include/linux/pgtable.h | 10 ++++++++++
> > 5 files changed, 14 insertions(+)
> >
> > diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> > index 20242402fc11..0ca28cc8e3fa 100644
> > --- a/arch/riscv/include/asm/pgtable.h
> > +++ b/arch/riscv/include/asm/pgtable.h
> > @@ -646,6 +646,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
> >
> > #define __pud_to_phys(pud) (__page_val_to_pfn(pud_val(pud)) << PAGE_SHIFT)
> >
> > +#define pud_pfn pud_pfn
> > static inline unsigned long pud_pfn(pud_t pud)
> > {
> > return ((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT);
> > diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
> > index 1a71cb19c089..6cbbe473f680 100644
> > --- a/arch/s390/include/asm/pgtable.h
> > +++ b/arch/s390/include/asm/pgtable.h
> > @@ -1414,6 +1414,7 @@ static inline unsigned long pud_deref(pud_t pud)
> > return (unsigned long)__va(pud_val(pud) & origin_mask);
> > }
> >
> > +#define pud_pfn pud_pfn
> > static inline unsigned long pud_pfn(pud_t pud)
> > {
> > return __pa(pud_deref(pud)) >> PAGE_SHIFT;
> > diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
> > index 4d1bafaba942..26efc9bb644a 100644
> > --- a/arch/sparc/include/asm/pgtable_64.h
> > +++ b/arch/sparc/include/asm/pgtable_64.h
> > @@ -875,6 +875,7 @@ static inline bool pud_leaf(pud_t pud)
> > return pte_val(pte) & _PAGE_PMD_HUGE;
> > }
> >
> > +#define pud_pfn pud_pfn
> > static inline unsigned long pud_pfn(pud_t pud)
> > {
> > pte_t pte = __pte(pud_val(pud));
> > diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> > index cefc7a84f7a4..273f7557218c 100644
> > --- a/arch/x86/include/asm/pgtable.h
> > +++ b/arch/x86/include/asm/pgtable.h
> > @@ -234,6 +234,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
> > return (pfn & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
> > }
> >
> > +#define pud_pfn pud_pfn
> > static inline unsigned long pud_pfn(pud_t pud)
> > {
> > phys_addr_t pfn = pud_val(pud);
> > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > index 600e17d03659..75fe309a4e10 100644
> > --- a/include/linux/pgtable.h
> > +++ b/include/linux/pgtable.h
> > @@ -1817,6 +1817,16 @@ typedef unsigned int pgtbl_mod_mask;
> > #define pte_leaf_size(x) PAGE_SIZE
> > #endif
> >
> > +/*
> > + * We always define pmd_pfn for all archs as it's used in lots of generic
> > + * code. Now it happens too for pud_pfn (and can happen for larger
> > + * mappings too in the future; we're not there yet). Instead of defining
> > + * it for all archs (like pmd_pfn), provide a fallback.
> > + */
> > +#ifndef pud_pfn
> > +#define pud_pfn(x) ({ BUILD_BUG(); 0; })
> > +#endif
> > +
> > /*
> > * Some architectures have MMUs that are configurable or selectable at boot
> > * time. These lead to variable PTRS_PER_x. For statically allocated arrays it
> > --
> > 2.44.0
> >
>
> This BUILD_BUG() triggers for LoongArch with their defconfig, so it
> seems like they need to provide an implementation of pud_pfn()?
>
> In function 'follow_huge_pud',
> inlined from 'follow_pud_mask' at mm/gup.c:1075:10,
> inlined from 'follow_p4d_mask' at mm/gup.c:1105:9,
> inlined from 'follow_page_mask' at mm/gup.c:1151:10:
> include/linux/compiler_types.h:460:45: error: call to '__compiletime_assert_382' declared with attribute error: BUILD_BUG failed
> 460 | _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
> | ^
> include/linux/compiler_types.h:441:25: note: in definition of macro '__compiletime_assert'
> 441 | prefix ## suffix(); \
> | ^~~~~~
> include/linux/compiler_types.h:460:9: note: in expansion of macro '_compiletime_assert'
> 460 | _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
> | ^~~~~~~~~~~~~~~~~~~
> include/linux/build_bug.h:39:37: note: in expansion of macro 'compiletime_assert'
> 39 | #define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
> | ^~~~~~~~~~~~~~~~~~
> include/linux/build_bug.h:59:21: note: in expansion of macro 'BUILD_BUG_ON_MSG'
> 59 | #define BUILD_BUG() BUILD_BUG_ON_MSG(1, "BUILD_BUG failed")
> | ^~~~~~~~~~~~~~~~
> include/linux/pgtable.h:1887:23: note: in expansion of macro 'BUILD_BUG'
> 1887 | #define pud_pfn(x) ({ BUILD_BUG(); 0; })
> | ^~~~~~~~~
> mm/gup.c:679:29: note: in expansion of macro 'pud_pfn'
> 679 | unsigned long pfn = pud_pfn(pud);
> | ^~~~~~~

I actually tested this without hitting the issue (even though I didn't
mention it in the cover letter..). I re-kicked the build test, and it
turns out "make alldefconfig" on loongarch generates a config with both
HUGETLB=n && THP=n, while arch/loongarch/configs/loongson3_defconfig has
THP=y (which I assume is what the build above used). I didn't check
further how "make alldefconfig" generated its config; it's a bit
surprising that it didn't pick it up from there.

(It also surprises me that this BUILD_BUG can trigger at all.. I tried
to trigger it elsewhere before but failed..)

For loongarch the best thing would be to not compile follow_huge_pud()
in at all, as it supports neither pud dax nor pud hugetlb. However,
again, that may require some more CONFIG_* options to declare which
levels an arch supports for HUGETLB_PAGE. Maybe the simplest fix here
(and it should also cover any other arch where a similar issue could
possibly appear) is to remove the BUILD_BUG() and explain why. In this
case it should also be safe for loongarch to leave pud_pfn() undefined
until it is properly supported.

Thanks,

===8<===
From 585f34aa3d5b12cd2186367b0882d4293f792062 Mon Sep 17 00:00:00 2001
From: Peter Xu <[email protected]>
Date: Tue, 2 Apr 2024 18:31:07 -0400
Subject: [PATCH] fixup! mm/arch: provide pud_pfn() fallback

Signed-off-by: Peter Xu <[email protected]>
---
include/linux/pgtable.h | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index fa8f92f6e2d7..0f4b2faa1d71 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1882,9 +1882,13 @@ typedef unsigned int pgtbl_mod_mask;
* code. Now it happens too for pud_pfn (and can happen for larger
* mappings too in the future; we're not there yet). Instead of defining
* it for all archs (like pmd_pfn), provide a fallback.
+ *
+ * Note that returning 0 here means any arch that didn't define this can
+ * get severely wrong when it hits a real pud leaf. It's arch's
+ * responsibility to properly define it when a huge pud is possible.
*/
#ifndef pud_pfn
-#define pud_pfn(x) ({ BUILD_BUG(); 0; })
+#define pud_pfn(x) 0
#endif

/*
--
2.44.0

--
Peter Xu


2024-04-02 22:53:34

by Jason Gunthorpe

Subject: Re: [PATCH v4 05/13] mm/arch: Provide pud_pfn() fallback

On Tue, Apr 02, 2024 at 06:43:56PM -0400, Peter Xu wrote:

> I actually tested this without hitting the issue (even though I didn't
> mention it in the cover letter..). I re-kicked the build test, it turns
> out my "make alldefconfig" on loongarch will generate a config with both
> HUGETLB=n && THP=n, while arch/loongarch/configs/loongson3_defconfig has
> THP=y (which I assume was the one above build used). I didn't further
> check how "make alldefconfig" generated the config; a bit surprising that
> it didn't fetch from there.

I suspect it is weird compiler variations.. Maybe something is not
being inlined.

> (and it also surprises me that this BUILD_BUG can trigger.. I used to try
> triggering it elsewhere but failed..)

pud_leaf() == FALSE should result in the BUILD_BUG never being called
and the optimizer removing it.

Perhaps the issue is that the pud_leaf() check is too far from the
pud_pfn() call?

Jason

2024-04-02 23:36:02

by Peter Xu

Subject: Re: [PATCH v4 05/13] mm/arch: Provide pud_pfn() fallback

On Tue, Apr 02, 2024 at 07:53:20PM -0300, Jason Gunthorpe wrote:
> On Tue, Apr 02, 2024 at 06:43:56PM -0400, Peter Xu wrote:
>
> > I actually tested this without hitting the issue (even though I didn't
> > mention it in the cover letter..). I re-kicked the build test, it turns
> > out my "make alldefconfig" on loongarch will generate a config with both
> > HUGETLB=n && THP=n, while arch/loongarch/configs/loongson3_defconfig has
> > THP=y (which I assume was the one above build used). I didn't further
> > check how "make alldefconfig" generated the config; a bit surprising that
> > it didn't fetch from there.
>
> I suspect it is weird compiler variations.. Maybe something is not
> being inlined.
>
> > (and it also surprises me that this BUILD_BUG can trigger.. I used to try
> > triggering it elsewhere but failed..)
>
> As the pud_leaf() == FALSE should result in the BUILD_BUG never being
> called and the optimizer removing it.

Good point, for some reason loongarch defined pud_leaf() without defining
pud_pfn(), which does look strange.

#define pud_leaf(pud) ((pud_val(pud) & _PAGE_HUGE) != 0)

But I noticed that at least MIPS does the same.. Logically I think an
arch should define either both or neither.

>
> Perhaps the issue is that the pud_leaf() is too far from the pud_pfn?

My understanding is that follow_pud_mask() should be fully optimized,
with follow_huge_pud() dropped from the compiler output, if
pud_leaf()==false.

Thanks,

--
Peter Xu


2024-04-03 12:08:55

by Jason Gunthorpe

Subject: Re: [PATCH v4 05/13] mm/arch: Provide pud_pfn() fallback

On Tue, Apr 02, 2024 at 07:35:45PM -0400, Peter Xu wrote:
> On Tue, Apr 02, 2024 at 07:53:20PM -0300, Jason Gunthorpe wrote:
> > On Tue, Apr 02, 2024 at 06:43:56PM -0400, Peter Xu wrote:
> >
> > > I actually tested this without hitting the issue (even though I didn't
> > > mention it in the cover letter..). I re-kicked the build test, it turns
> > > out my "make alldefconfig" on loongarch will generate a config with both
> > > HUGETLB=n && THP=n, while arch/loongarch/configs/loongson3_defconfig has
> > > THP=y (which I assume was the one above build used). I didn't further
> > > check how "make alldefconfig" generated the config; a bit surprising that
> > > it didn't fetch from there.
> >
> > I suspect it is weird compiler variations.. Maybe something is not
> > being inlined.
> >
> > > (and it also surprises me that this BUILD_BUG can trigger.. I used to try
> > > triggering it elsewhere but failed..)
> >
> > As the pud_leaf() == FALSE should result in the BUILD_BUG never being
> > called and the optimizer removing it.
>
> Good point, for some reason loongarch defined pud_leaf() without defining
> pud_pfn(), which does look strange.
>
> #define pud_leaf(pud) ((pud_val(pud) & _PAGE_HUGE) != 0)
>
> But I noticed at least MIPS also does it.. Logically I think one arch
> should define either none of both.

Wow, this is definitely an arch issue. You can't define pud_leaf() and
not have a pud_pfn(); it makes no sense at all..

I'd say the BUILD_BUG has done its job and found an issue; fix it by
not defining pud_leaf()? I don't see any calls to pud_leaf in loongarch
at least.

Jason

2024-04-03 12:26:55

by Christophe Leroy

Subject: Re: [PATCH v4 05/13] mm/arch: Provide pud_pfn() fallback



Le 03/04/2024 à 14:08, Jason Gunthorpe a écrit :
> On Tue, Apr 02, 2024 at 07:35:45PM -0400, Peter Xu wrote:
>> On Tue, Apr 02, 2024 at 07:53:20PM -0300, Jason Gunthorpe wrote:
>>> On Tue, Apr 02, 2024 at 06:43:56PM -0400, Peter Xu wrote:
>>>
>>>> I actually tested this without hitting the issue (even though I didn't
>>>> mention it in the cover letter..). I re-kicked the build test, it turns
>>>> out my "make alldefconfig" on loongarch will generate a config with both
>>>> HUGETLB=n && THP=n, while arch/loongarch/configs/loongson3_defconfig has
>>>> THP=y (which I assume was the one above build used). I didn't further
>>>> check how "make alldefconfig" generated the config; a bit surprising that
>>>> it didn't fetch from there.
>>>
>>> I suspect it is weird compiler variations.. Maybe something is not
>>> being inlined.
>>>
>>>> (and it also surprises me that this BUILD_BUG can trigger.. I used to try
>>>> triggering it elsewhere but failed..)
>>>
>>> As the pud_leaf() == FALSE should result in the BUILD_BUG never being
>>> called and the optimizer removing it.
>>
>> Good point, for some reason loongarch defined pud_leaf() without defining
>> pud_pfn(), which does look strange.
>>
>> #define pud_leaf(pud) ((pud_val(pud) & _PAGE_HUGE) != 0)
>>
>> But I noticed at least MIPS also does it.. Logically I think one arch
>> should define either none of both.
>
> Wow, this is definately an arch issue. You can't define pud_leaf() and
> not have a pud_pfn(). It makes no sense at all..
>
> I'd say the BUILD_BUG has done it's job and found an issue, fix it by
> not defining pud_leaf? I don't see any calls to pud_leaf in loongarch
> at least

As far as I can see it was added by commit 303be4b33562 ("LoongArch: mm:
Add p?d_leaf() definitions").

Not sure it was added for a good reason, and I'm not sure what was added
is correct because arch/loongarch/include/asm/pgtable-bits.h has:

#define _PAGE_HUGE_SHIFT 6 /* HUGE is a PMD bit */

So I'm not sure it is correct to use that bit for PUD, is it?

Probably pud_leaf() should always return false.

Christophe

2024-04-03 13:08:12

by Jason Gunthorpe

Subject: Re: [PATCH v4 05/13] mm/arch: Provide pud_pfn() fallback

On Wed, Apr 03, 2024 at 12:26:43PM +0000, Christophe Leroy wrote:
>
>
> Le 03/04/2024 à 14:08, Jason Gunthorpe a écrit :
> > On Tue, Apr 02, 2024 at 07:35:45PM -0400, Peter Xu wrote:
> >> On Tue, Apr 02, 2024 at 07:53:20PM -0300, Jason Gunthorpe wrote:
> >>> On Tue, Apr 02, 2024 at 06:43:56PM -0400, Peter Xu wrote:
> >>>
> >>>> I actually tested this without hitting the issue (even though I didn't
> >>>> mention it in the cover letter..). I re-kicked the build test, it turns
> >>>> out my "make alldefconfig" on loongarch will generate a config with both
> >>>> HUGETLB=n && THP=n, while arch/loongarch/configs/loongson3_defconfig has
> >>>> THP=y (which I assume was the one above build used). I didn't further
> >>>> check how "make alldefconfig" generated the config; a bit surprising that
> >>>> it didn't fetch from there.
> >>>
> >>> I suspect it is weird compiler variations.. Maybe something is not
> >>> being inlined.
> >>>
> >>>> (and it also surprises me that this BUILD_BUG can trigger.. I used to try
> >>>> triggering it elsewhere but failed..)
> >>>
> >>> As the pud_leaf() == FALSE should result in the BUILD_BUG never being
> >>> called and the optimizer removing it.
> >>
> >> Good point, for some reason loongarch defined pud_leaf() without defining
> >> pud_pfn(), which does look strange.
> >>
> >> #define pud_leaf(pud) ((pud_val(pud) & _PAGE_HUGE) != 0)
> >>
> >> But I noticed at least MIPS also does it.. Logically I think one arch
> >> should define either none of both.
> >
> > Wow, this is definately an arch issue. You can't define pud_leaf() and
> > not have a pud_pfn(). It makes no sense at all..
> >
> > I'd say the BUILD_BUG has done it's job and found an issue, fix it by
> > not defining pud_leaf? I don't see any calls to pud_leaf in loongarch
> > at least
>
> As far as I can see it was added by commit 303be4b33562 ("LoongArch: mm:
> Add p?d_leaf() definitions").

That commit makes it sound like the arch supports huge PUDs through
the hugepte mechanism - it says an LTP test failed, so something
populated a huge PUD at least??

So maybe this?

#define pud_pfn pte_pfn

> Not sure it was added for a good reason, and I'm not sure what was added
> is correct because arch/loongarch/include/asm/pgtable-bits.h has:
>
> #define _PAGE_HUGE_SHIFT 6 /* HUGE is a PMD bit */
>
> So I'm not sure it is correct to use that bit for PUD, is it ?

Could be, lots of arches repeat the bit layouts in each radix
level.. It is essentially why the hugepte trick of pretending every
level is a pte works.

Jason

2024-04-03 13:17:56

by Christophe Leroy

Subject: Re: [PATCH v4 05/13] mm/arch: Provide pud_pfn() fallback



Le 03/04/2024 à 15:07, Jason Gunthorpe a écrit :
> On Wed, Apr 03, 2024 at 12:26:43PM +0000, Christophe Leroy wrote:
>>
>>
>> Le 03/04/2024 à 14:08, Jason Gunthorpe a écrit :
>>> On Tue, Apr 02, 2024 at 07:35:45PM -0400, Peter Xu wrote:
>>>> On Tue, Apr 02, 2024 at 07:53:20PM -0300, Jason Gunthorpe wrote:
>>>>> On Tue, Apr 02, 2024 at 06:43:56PM -0400, Peter Xu wrote:
>>>>>
>>>>>> I actually tested this without hitting the issue (even though I didn't
>>>>>> mention it in the cover letter..). I re-kicked the build test, it turns
>>>>>> out my "make alldefconfig" on loongarch will generate a config with both
>>>>>> HUGETLB=n && THP=n, while arch/loongarch/configs/loongson3_defconfig has
>>>>>> THP=y (which I assume was the one above build used). I didn't further
>>>>>> check how "make alldefconfig" generated the config; a bit surprising that
>>>>>> it didn't fetch from there.
>>>>>
>>>>> I suspect it is weird compiler variations.. Maybe something is not
>>>>> being inlined.
>>>>>
>>>>>> (and it also surprises me that this BUILD_BUG can trigger.. I used to try
>>>>>> triggering it elsewhere but failed..)
>>>>>
>>>>> As the pud_leaf() == FALSE should result in the BUILD_BUG never being
>>>>> called and the optimizer removing it.
>>>>
>>>> Good point, for some reason loongarch defined pud_leaf() without defining
>>>> pud_pfn(), which does look strange.
>>>>
>>>> #define pud_leaf(pud) ((pud_val(pud) & _PAGE_HUGE) != 0)
>>>>
>>>> But I noticed at least MIPS also does it.. Logically I think one arch
>>>> should define either none of both.
>>>
>>> Wow, this is definately an arch issue. You can't define pud_leaf() and
>>> not have a pud_pfn(). It makes no sense at all..
>>>
>>> I'd say the BUILD_BUG has done it's job and found an issue, fix it by
>>> not defining pud_leaf? I don't see any calls to pud_leaf in loongarch
>>> at least
>>
>> As far as I can see it was added by commit 303be4b33562 ("LoongArch: mm:
>> Add p?d_leaf() definitions").
>
> That commit makes it sounds like the arch supports huge PUD's through
> the hugepte mechanism - it says a LTP test failed so something
> populated a huge PUD at least??

Not sure; I see it more as a copy/paste of commit 501b81046701
("mips: mm: add p?d_leaf() definitions").

The commit message says that the test failed because pmd_leaf() was
missing; it says nothing about PUD.

Looking at where _PAGE_HUGE is used in loongarch, I have the impression
that it is used exclusively at the PMD level.

>
> So maybe this?
>
> #define pud_pfn pte_pfn
>
>> Not sure it was added for a good reason, and I'm not sure what was added
>> is correct because arch/loongarch/include/asm/pgtable-bits.h has:
>>
>> #define _PAGE_HUGE_SHIFT 6 /* HUGE is a PMD bit */
>>
>> So I'm not sure it is correct to use that bit for PUD, is it ?
>
> Could be, lots of arches repeat the bit layouts in each radix
> level.. It is essentially why the hugepte trick of pretending every
> level is a pte works.
>
> Jason

2024-04-03 13:50:37

by Jason Gunthorpe

Subject: Re: [PATCH v4 05/13] mm/arch: Provide pud_pfn() fallback

On Wed, Apr 03, 2024 at 01:17:06PM +0000, Christophe Leroy wrote:

> > That commit makes it sounds like the arch supports huge PUD's through
> > the hugepte mechanism - it says a LTP test failed so something
> > populated a huge PUD at least??
>
> Not sure, I more see it just like a copy/paste of commit 501b81046701
> ("mips: mm: add p?d_leaf() definitions").
>
> The commit message says that the test failed because pmd_leaf() is
> missing, it says nothing about PUD.

Ah, fair enough, it is probably a copy/paste then

Jason

2024-04-03 18:26:21

by Peter Xu

Subject: Re: [PATCH v4 05/13] mm/arch: Provide pud_pfn() fallback

On Wed, Apr 03, 2024 at 09:08:41AM -0300, Jason Gunthorpe wrote:
> On Tue, Apr 02, 2024 at 07:35:45PM -0400, Peter Xu wrote:
> > On Tue, Apr 02, 2024 at 07:53:20PM -0300, Jason Gunthorpe wrote:
> > > On Tue, Apr 02, 2024 at 06:43:56PM -0400, Peter Xu wrote:
> > >
> > > > I actually tested this without hitting the issue (even though I didn't
> > > > mention it in the cover letter..). I re-kicked the build test, it turns
> > > > out my "make alldefconfig" on loongarch will generate a config with both
> > > > HUGETLB=n && THP=n, while arch/loongarch/configs/loongson3_defconfig has
> > > > THP=y (which I assume was the one above build used). I didn't further
> > > > check how "make alldefconfig" generated the config; a bit surprising that
> > > > it didn't fetch from there.
> > >
> > > I suspect it is weird compiler variations.. Maybe something is not
> > > being inlined.
> > >
> > > > (and it also surprises me that this BUILD_BUG can trigger.. I used to try
> > > > triggering it elsewhere but failed..)
> > >
> > > As the pud_leaf() == FALSE should result in the BUILD_BUG never being
> > > called and the optimizer removing it.
> >
> > Good point, for some reason loongarch defined pud_leaf() without defining
> > pud_pfn(), which does look strange.
> >
> > #define pud_leaf(pud) ((pud_val(pud) & _PAGE_HUGE) != 0)
> >
> > But I noticed at least MIPS also does it.. Logically I think one arch
> > should define either none of both.
>
> Wow, this is definately an arch issue. You can't define pud_leaf() and
> not have a pud_pfn(). It makes no sense at all..
>
> I'd say the BUILD_BUG has done it's job and found an issue, fix it by
> not defining pud_leaf? I don't see any calls to pud_leaf in loongarch
> at least

Yes, that sounds better to me too; however it means we may also risk
other archs failing some other defconfig build.. and I worry about
bringing trouble to multiple such cases. Fundamentally it's indeed my
patch that broke those builds, so I sent the change anyway and left it
for the arch developers to decide what is best for their archs.

If wanted, I think we can add that BUILD_BUG() back once we're sure no
arch will break with it. Such arch changes can still be proposed
alongside the removal of the BUILD_BUG() (and I'd guess any other
affected arch will soon notice such a build issue if it exists.. so it
still more or less has the same reminder effect..).

Thanks,

--
Peter Xu


2024-04-04 11:28:49

by Jason Gunthorpe

Subject: Re: [PATCH v4 05/13] mm/arch: Provide pud_pfn() fallback

On Wed, Apr 03, 2024 at 02:25:20PM -0400, Peter Xu wrote:

> > I'd say the BUILD_BUG has done it's job and found an issue, fix it by
> > not defining pud_leaf? I don't see any calls to pud_leaf in loongarch
> > at least
>
> Yes, that sounds better too to me, however it means we may also risk other
> archs that can fail another defconfig build.. and I worry I bring trouble
> to multiple such cases. Fundamentally it's indeed my patch that broke
> those builds, so I still sent the change and leave that for arch developers
> to decide the best for the archs.

But your change causes silent data corruption if the code path is ever
run.. I think we are better off overall wading through the compile-time
bugs from linux-next. Honestly, if there were a lot of them I'd think
there would be more complaints already.

Maybe it should just be a separate step from this series.

Jason

2024-04-04 12:11:45

by Peter Xu

Subject: Re: [PATCH v4 05/13] mm/arch: Provide pud_pfn() fallback

On Thu, Apr 04, 2024 at 08:24:04AM -0300, Jason Gunthorpe wrote:
> On Wed, Apr 03, 2024 at 02:25:20PM -0400, Peter Xu wrote:
>
> > > I'd say the BUILD_BUG has done it's job and found an issue, fix it by
> > > not defining pud_leaf? I don't see any calls to pud_leaf in loongarch
> > > at least
> >
> > Yes, that sounds better too to me, however it means we may also risk other
> > archs that can fail another defconfig build.. and I worry I bring trouble
> > to multiple such cases. Fundamentally it's indeed my patch that broke
> > those builds, so I still sent the change and leave that for arch developers
> > to decide the best for the archs.
>
> But your change causes silent data corruption if the code path is
> run.. I think we are overall better to wade through the compile time
> bugs from linux-next. Honestly if there were alot then I'd think there
> would be more complaints already.
>
> Maybe it should just be a seperate step from this series.

Right, IMHO that'll be better done separately, as I think we'd better
consolidate the code.

One thing I don't worry about is this causing anything real to fail; I
don't yet expect any arch to leave pud_pfn() undefined when it actually
needs it.. which also means all of these build errors may not bring
real benefit as of now. But I agree with you that we'd better have it.
I'll take a todo and try to add it back after all these fallouts are
settled. With my cross-build chains it shouldn't be hard now; it'll
just take some time to revisit each arch.

Thanks,

--
Peter Xu