2024-03-23 15:16:58

by Peter Xu

Subject: [PATCH] mm/arch: Provide pud_pfn() fallback

From: Peter Xu <[email protected]>

The comment in the code explains the reasons. We took a different approach
compared to pmd_pfn() by providing a fallback function.

Another option is to provide some lower-level config options (similar to
HUGETLB_PAGE or THP) to identify which layer an arch can support for such
huge mappings. However, that would be overkill.
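
For reference, the override detection relies on the usual trick of defining
the symbol to its own name, so generic code can probe for it with #ifndef.
A minimal userspace sketch of how the two sides interact (illustrative
only, not kernel code; pud_t is reduced to a plain integer):

	#include <stdio.h>

	typedef unsigned long pud_t;	/* stand-in for the real type */

	/* "arch" side: provide pud_pfn() and advertise it */
	#define pud_pfn pud_pfn
	static inline unsigned long pud_pfn(pud_t pud)
	{
		return pud >> 12;	/* stand-in for the pfn extraction */
	}

	/* "generic" side: the fallback is only seen when no arch
	 * override was defined above */
	#ifndef pud_pfn
	#define pud_pfn(x) 0UL		/* the kernel uses BUILD_BUG() here */
	#endif

	int main(void)
	{
		printf("%lu\n", pud_pfn((pud_t)0x5000));	/* prints 5 */
		return 0;
	}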

Cc: Mike Rapoport (IBM) <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Reported-by: kernel test robot <[email protected]>
Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/
Signed-off-by: Peter Xu <[email protected]>
---

Andrew,

If we care about per-commit build errors (and if it is ever feasible to
reorder), we can move this patch to be before the patch "mm/gup: handle
huge pud for follow_pud_mask()" in mm-unstable to unbreak build on that
commit.

Thanks,
---
 arch/riscv/include/asm/pgtable.h    |  1 +
 arch/s390/include/asm/pgtable.h     |  1 +
 arch/sparc/include/asm/pgtable_64.h |  1 +
 arch/x86/include/asm/pgtable.h      |  1 +
 include/linux/pgtable.h             | 10 ++++++++++
 5 files changed, 14 insertions(+)

diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 20242402fc11..0ca28cc8e3fa 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -646,6 +646,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
 
 #define __pud_to_phys(pud) (__page_val_to_pfn(pud_val(pud)) << PAGE_SHIFT)
 
+#define pud_pfn pud_pfn
 static inline unsigned long pud_pfn(pud_t pud)
 {
 	return ((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT);
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 1a71cb19c089..6cbbe473f680 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -1414,6 +1414,7 @@ static inline unsigned long pud_deref(pud_t pud)
 	return (unsigned long)__va(pud_val(pud) & origin_mask);
 }
 
+#define pud_pfn pud_pfn
 static inline unsigned long pud_pfn(pud_t pud)
 {
 	return __pa(pud_deref(pud)) >> PAGE_SHIFT;
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 4d1bafaba942..26efc9bb644a 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -875,6 +875,7 @@ static inline bool pud_leaf(pud_t pud)
 	return pte_val(pte) & _PAGE_PMD_HUGE;
 }
 
+#define pud_pfn pud_pfn
 static inline unsigned long pud_pfn(pud_t pud)
 {
 	pte_t pte = __pte(pud_val(pud));
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index cefc7a84f7a4..273f7557218c 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -234,6 +234,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
 	return (pfn & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
 }
 
+#define pud_pfn pud_pfn
 static inline unsigned long pud_pfn(pud_t pud)
 {
 	phys_addr_t pfn = pud_val(pud);
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 2a1c044ae467..deae9e50f1a8 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1817,6 +1817,16 @@ typedef unsigned int pgtbl_mod_mask;
 #define pte_leaf_size(x) PAGE_SIZE
 #endif
 
+/*
+ * We always define pmd_pfn() for all archs as it's used in lots of
+ * generic code. The same now applies to pud_pfn() (and may apply to
+ * larger mappings in the future; we're not there yet). Instead of
+ * defining it for all archs (like pmd_pfn()), provide a fallback.
+ */
+#ifndef pud_pfn
+#define pud_pfn(x) ({ BUILD_BUG(); 0; })
+#endif
+
 /*
  * Some architectures have MMUs that are configurable or selectable at boot
  * time. These lead to variable PTRS_PER_x. For statically allocated arrays it
--
2.44.0
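
The BUILD_BUG() fallback only trips if a call site survives dead-code
elimination, which is also why the macro yields a 0 after it: callers like
"return pud_pfn(pud);" still type-check on archs where that branch is
compile-time dead. A standalone sketch of the mechanism, assuming GCC and
-O2; the error attribute is the same one the kernel's BUILD_BUG() is built
on, and the other names here are made up:

	/* referencing this in emitted code is a hard build error */
	extern unsigned long __pud_pfn_unsupported(void)
		__attribute__((error("pud_pfn() used on arch without huge pud")));

	#define pud_pfn(x) ({ __pud_pfn_unsupported(); 0UL; })

	#define HAVE_HUGE_PUD 0	/* pretend this arch lacks huge pud */

	unsigned long follow_pud(unsigned long pud)
	{
		if (HAVE_HUGE_PUD)
			return pud_pfn(pud);	/* dead branch: eliminated, no error */
		return 0;
	}

Built with "gcc -O2 -c sketch.c" this compiles; flipping HAVE_HUGE_PUD to 1
makes the error attribute fire at build time.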



2024-03-23 17:39:50

by Peter Xu

Subject: Re: [PATCH] mm/arch: Provide pud_pfn() fallback

On Sat, Mar 23, 2024 at 11:16:43AM -0400, [email protected] wrote:
> From: Peter Xu <[email protected]>
>
> The comment in the code explains the reasons. We took a different approach
> compared to pmd_pfn() by providing a fallback function.
>
> Another option is to provide some lower-level config options (similar to
> HUGETLB_PAGE or THP) to identify which layer an arch can support for such
> huge mappings. However, that would be overkill.
>
> Cc: Mike Rapoport (IBM) <[email protected]>
> Cc: Matthew Wilcox <[email protected]>
> Reported-by: kernel test robot <[email protected]>
> Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

Also:

Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

> Signed-off-by: Peter Xu <[email protected]>
> ---
>
> ...

--
Peter Xu


2024-03-25 15:36:15

by Jason Gunthorpe

Subject: Re: [PATCH] mm/arch: Provide pud_pfn() fallback

On Sat, Mar 23, 2024 at 11:16:43AM -0400, [email protected] wrote:
> From: Peter Xu <[email protected]>
>
> The comment in the code explains the reasons. We took a different approach
> compared to pmd_pfn() by providing a fallback function.
>
> Another option is to provide some lower-level config options (similar to
> HUGETLB_PAGE or THP) to identify which layer an arch can support for such
> huge mappings. However, that would be overkill.
>
> Cc: Mike Rapoport (IBM) <[email protected]>
> Cc: Matthew Wilcox <[email protected]>
> Reported-by: kernel test robot <[email protected]>
> Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/
> Signed-off-by: Peter Xu <[email protected]>
> ---

Reviewed-by: Jason Gunthorpe <[email protected]>

Jason

2024-03-26 20:34:09

by Andrew Morton

Subject: Re: [PATCH] mm/arch: Provide pud_pfn() fallback

On Sat, 23 Mar 2024 11:16:43 -0400 [email protected] wrote:

> From: Peter Xu <[email protected]>
>
> The comment in the code explains the reasons. We took a different approach
> compared to pmd_pfn() by providing a fallback function.
>
> Another option is to provide some lower-level config options (similar to
> HUGETLB_PAGE or THP) to identify which layer an arch can support for such
> huge mappings. However, that would be overkill.
>
> ...
>
> If we care about per-commit build errors (and if it is ever feasible to
> reorder), we can move this patch to be before the patch "mm/gup: handle
> huge pud for follow_pud_mask()" in mm-unstable to unbreak build on that
> commit.

I temporarily disabled that whole series a few days ago because of
multiple build issues, iirc.

Let's make that permanent. Please redo the whole series against
mm-unstable and resend?


2024-03-26 21:10:21

by Peter Xu

Subject: Re: [PATCH] mm/arch: Provide pud_pfn() fallback

On Tue, Mar 26, 2024 at 01:27:26PM -0700, Andrew Morton wrote:
> On Sat, 23 Mar 2024 11:16:43 -0400 [email protected] wrote:
>
> > From: Peter Xu <[email protected]>
> >
> > The comment in the code explains the reasons. We took a different approach
> > compared to pmd_pfn() by providing a fallback function.
> >
> > Another option is to provide some lower-level config options (similar to
> > HUGETLB_PAGE or THP) to identify which layer an arch can support for such
> > huge mappings. However, that would be overkill.
> >
> > ...
> >
> > If we care about per-commit build errors (and if it is ever feasible to
> > reorder), we can move this patch to be before the patch "mm/gup: handle
> > huge pud for follow_pud_mask()" in mm-unstable to unbreak build on that
> > commit.
>
> I temporarily disabled that whole series a few days ago because of
> multiple build issues, iirc.
>
> Let's make that permanent. Please redo the whole series against
> mm-unstable and resend?

Yes, that's the plan. Feel free to ignore this one, as it is not used until
that GUP rework series; I'll include it in the whole set to be reposted.

I'm currently doing the build tests; I just finished writing the harness for
testing the matrix. It'll take a bit of time to run through the tests I
specified (I tried to cover a few more archs/configs), and I'll repost with
all patches included (fixups squashed) when the tests finish.

[side note: I think I can reproduce the other, so-far-unreported issue on
arm+alldefconfig; that'll get covered too]

Thanks,

--
Peter Xu