2022-11-18 05:47:37

by Anshuman Khandual

Subject: [PATCH] arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)

__create_pgd_mapping_locked() expects a page allocator used while mapping a
virtual range. This page allocator function propagates down the call chain,
while building intermediate levels in the page table. Passed page allocator
is a necessary ingredient required to build the page table but its presence
can be asserted just once in the very beginning rather than in all the down
stream functions. This consolidates BUG_ON(!pgtable_alloc) checks just in a
single place i.e __create_pgd_mapping_locked().

Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
This applies on v6.1-rc5

arch/arm64/mm/mmu.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 5a19950e7289..97ca82001089 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -207,7 +207,6 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,

if (flags & NO_EXEC_MAPPINGS)
pmdval |= PMD_TABLE_PXN;
- BUG_ON(!pgtable_alloc);
pte_phys = pgtable_alloc(PAGE_SHIFT);
__pmd_populate(pmdp, pte_phys, pmdval);
pmd = READ_ONCE(*pmdp);
@@ -285,7 +284,6 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,

if (flags & NO_EXEC_MAPPINGS)
pudval |= PUD_TABLE_PXN;
- BUG_ON(!pgtable_alloc);
pmd_phys = pgtable_alloc(PMD_SHIFT);
__pud_populate(pudp, pmd_phys, pudval);
pud = READ_ONCE(*pudp);
@@ -324,7 +322,6 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,

if (flags & NO_EXEC_MAPPINGS)
p4dval |= P4D_TABLE_PXN;
- BUG_ON(!pgtable_alloc);
pud_phys = pgtable_alloc(PUD_SHIFT);
__p4d_populate(p4dp, pud_phys, p4dval);
p4d = READ_ONCE(*p4dp);
@@ -383,6 +380,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
phys &= PAGE_MASK;
addr = virt & PAGE_MASK;
end = PAGE_ALIGN(virt + size);
+ BUG_ON(!pgtable_alloc);

do {
next = pgd_addr_end(addr, end);
--
2.25.1



2022-11-18 12:10:46

by Mark Rutland

Subject: Re: [PATCH] arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)

On Fri, Nov 18, 2022 at 11:01:02AM +0530, Anshuman Khandual wrote:
> __create_pgd_mapping_locked() expects a page allocator used while mapping a
> virtual range. This page allocator function propagates down the call chain,
> while building intermediate levels in the page table. Passed page allocator
> is a necessary ingredient required to build the page table but its presence
> can be asserted just once in the very beginning rather than in all the down
> stream functions. This consolidates BUG_ON(!pgtable_alloc) checks just in a
> single place i.e __create_pgd_mapping_locked().
>
> Cc: Catalin Marinas <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: Mark Rutland <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Signed-off-by: Anshuman Khandual <[email protected]>

I don't have strong feelings either way, so FWIW:

Acked-by: Mark Rutland <[email protected]>

Thanks,
Mark.

> ---
> This applies on v6.1-rc5
>
> arch/arm64/mm/mmu.c | 4 +---
> 1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 5a19950e7289..97ca82001089 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -207,7 +207,6 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>
> if (flags & NO_EXEC_MAPPINGS)
> pmdval |= PMD_TABLE_PXN;
> - BUG_ON(!pgtable_alloc);
> pte_phys = pgtable_alloc(PAGE_SHIFT);
> __pmd_populate(pmdp, pte_phys, pmdval);
> pmd = READ_ONCE(*pmdp);
> @@ -285,7 +284,6 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>
> if (flags & NO_EXEC_MAPPINGS)
> pudval |= PUD_TABLE_PXN;
> - BUG_ON(!pgtable_alloc);
> pmd_phys = pgtable_alloc(PMD_SHIFT);
> __pud_populate(pudp, pmd_phys, pudval);
> pud = READ_ONCE(*pudp);
> @@ -324,7 +322,6 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
>
> if (flags & NO_EXEC_MAPPINGS)
> p4dval |= P4D_TABLE_PXN;
> - BUG_ON(!pgtable_alloc);
> pud_phys = pgtable_alloc(PUD_SHIFT);
> __p4d_populate(p4dp, pud_phys, p4dval);
> p4d = READ_ONCE(*p4dp);
> @@ -383,6 +380,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
> phys &= PAGE_MASK;
> addr = virt & PAGE_MASK;
> end = PAGE_ALIGN(virt + size);
> + BUG_ON(!pgtable_alloc);
>
> do {
> next = pgd_addr_end(addr, end);
> --
> 2.25.1
>

2022-11-18 19:45:52

by Will Deacon

Subject: Re: [PATCH] arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)

On Fri, 18 Nov 2022 11:01:02 +0530, Anshuman Khandual wrote:
> __create_pgd_mapping_locked() expects a page allocator used while mapping a
> virtual range. This page allocator function propagates down the call chain,
> while building intermediate levels in the page table. Passed page allocator
> is a necessary ingredient required to build the page table but its presence
> can be asserted just once in the very beginning rather than in all the down
> stream functions. This consolidates BUG_ON(!pgtable_alloc) checks just in a
> single place i.e __create_pgd_mapping_locked().
>
> [...]

Applied to arm64 (for-next/trivial), thanks!

[1/1] arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)
https://git.kernel.org/arm64/c/9ed2b4616d4e

Cheers,
--
Will

https://fixes.arm64.dev
https://next.arm64.dev
https://will.arm64.dev

2022-11-20 17:21:09

by Nathan Chancellor

Subject: Re: [PATCH] arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)

Hi Anshuman,

On Fri, Nov 18, 2022 at 11:01:02AM +0530, Anshuman Khandual wrote:
> __create_pgd_mapping_locked() expects a page allocator used while mapping a
> virtual range. This page allocator function propagates down the call chain,
> while building intermediate levels in the page table. Passed page allocator
> is a necessary ingredient required to build the page table but its presence
> can be asserted just once in the very beginning rather than in all the down
> stream functions. This consolidates BUG_ON(!pgtable_alloc) checks just in a
> single place i.e __create_pgd_mapping_locked().
>
> Cc: Catalin Marinas <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: Mark Rutland <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Signed-off-by: Anshuman Khandual <[email protected]>
> ---
> This applies on v6.1-rc5
>
> arch/arm64/mm/mmu.c | 4 +---
> 1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 5a19950e7289..97ca82001089 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -207,7 +207,6 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>
> if (flags & NO_EXEC_MAPPINGS)
> pmdval |= PMD_TABLE_PXN;
> - BUG_ON(!pgtable_alloc);
> pte_phys = pgtable_alloc(PAGE_SHIFT);
> __pmd_populate(pmdp, pte_phys, pmdval);
> pmd = READ_ONCE(*pmdp);
> @@ -285,7 +284,6 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>
> if (flags & NO_EXEC_MAPPINGS)
> pudval |= PUD_TABLE_PXN;
> - BUG_ON(!pgtable_alloc);
> pmd_phys = pgtable_alloc(PMD_SHIFT);
> __pud_populate(pudp, pmd_phys, pudval);
> pud = READ_ONCE(*pudp);
> @@ -324,7 +322,6 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
>
> if (flags & NO_EXEC_MAPPINGS)
> p4dval |= P4D_TABLE_PXN;
> - BUG_ON(!pgtable_alloc);
> pud_phys = pgtable_alloc(PUD_SHIFT);
> __p4d_populate(p4dp, pud_phys, p4dval);
> p4d = READ_ONCE(*p4dp);
> @@ -383,6 +380,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
> phys &= PAGE_MASK;
> addr = virt & PAGE_MASK;
> end = PAGE_ALIGN(virt + size);
> + BUG_ON(!pgtable_alloc);
>
> do {
> next = pgd_addr_end(addr, end);
> --
> 2.25.1
>
>

I just bisected a boot failure in our QEMU-based continuous integration
setup to this change as commit 9ed2b4616d4e ("arm64/mm: Drop redundant
BUG_ON(!pgtable_alloc)") in the arm64 tree. There is no output so the
panic clearly happens early at boot. If I move back to the previous
commit and add a WARN_ON() like so:

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index d386033a074c..9280a92ff920 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -383,6 +383,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
phys &= PAGE_MASK;
addr = virt & PAGE_MASK;
end = PAGE_ALIGN(virt + size);
+ WARN_ON(!pgtable_alloc);

do {
next = pgd_addr_end(addr, end);

I do see some stacktraces. I have attached the boot log from QEMU.

If there is any additional information I can provide or patches I can
test, I am more than happy to do so.

Cheers,
Nathan

# bad: [2ed6cab9589d7829fc38237dcca94c776304a8bd] Merge branches 'for-next/acpi', 'for-next/asm-const', 'for-next/cpufeature', 'for-next/dynamic-scs', 'for-next/errata', 'for-next/fpsimd', 'for-next/ftrace', 'for-next/insn', 'for-next/kbuild', 'for-next/kdump', 'for-next/mm', 'for-next/perf', 'for-next/selftests', 'for-next/stacks', 'for-next/trivial', 'for-next/uaccess' and 'for-next/undef-traps' into for-next/core
# good: [f0c4d9fc9cc9462659728d168387191387e903cc] Linux 6.1-rc4
git bisect start '2ed6cab9589d7829fc38237dcca94c776304a8bd' 'v6.1-rc4'
# bad: [5b468dad6e5cf4998bdc05efbc5526c111666027] arm64/mm: Drop unused restore_ttbr1
git bisect bad 5b468dad6e5cf4998bdc05efbc5526c111666027
# good: [657eef0a5420a02c02945ed8c87f2ddcbd255772] arm64: atomics: lse: remove stale dependency on JUMP_LABEL
git bisect good 657eef0a5420a02c02945ed8c87f2ddcbd255772
# bad: [9ed2b4616d4e846ece2a04cb5007ce1d1bd9e3f3] arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)
git bisect bad 9ed2b4616d4e846ece2a04cb5007ce1d1bd9e3f3
# good: [d8c1d798a2e5091128c391c6dadcc9be334af3f5] arm64: make is_ttbrX_addr() noinstr-safe
git bisect good d8c1d798a2e5091128c391c6dadcc9be334af3f5
# first bad commit: [9ed2b4616d4e846ece2a04cb5007ce1d1bd9e3f3] arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)


Attachments:
boot.log (27.10 kB)

2022-11-21 06:20:31

by Anshuman Khandual

Subject: Re: [PATCH] arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)

Hello Nathan,

Thanks for the report.

On 11/20/22 21:46, Nathan Chancellor wrote:
> Hi Anshuman,
>
> On Fri, Nov 18, 2022 at 11:01:02AM +0530, Anshuman Khandual wrote:
>> __create_pgd_mapping_locked() expects a page allocator used while mapping a
>> virtual range. This page allocator function propagates down the call chain,
>> while building intermediate levels in the page table. Passed page allocator
>> is a necessary ingredient required to build the page table but its presence
>> can be asserted just once in the very beginning rather than in all the down
>> stream functions. This consolidates BUG_ON(!pgtable_alloc) checks just in a
>> single place i.e __create_pgd_mapping_locked().
>>
>> Cc: Catalin Marinas <[email protected]>
>> Cc: Will Deacon <[email protected]>
>> Cc: Mark Rutland <[email protected]>
>> Cc: Andrew Morton <[email protected]>
>> Cc: [email protected]
>> Cc: [email protected]
>> Signed-off-by: Anshuman Khandual <[email protected]>
>> ---
>> This applies on v6.1-rc5
>>
>> arch/arm64/mm/mmu.c | 4 +---
>> 1 file changed, 1 insertion(+), 3 deletions(-)
>>
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 5a19950e7289..97ca82001089 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -207,7 +207,6 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>>
>> if (flags & NO_EXEC_MAPPINGS)
>> pmdval |= PMD_TABLE_PXN;
>> - BUG_ON(!pgtable_alloc);
>> pte_phys = pgtable_alloc(PAGE_SHIFT);
>> __pmd_populate(pmdp, pte_phys, pmdval);
>> pmd = READ_ONCE(*pmdp);
>> @@ -285,7 +284,6 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>>
>> if (flags & NO_EXEC_MAPPINGS)
>> pudval |= PUD_TABLE_PXN;
>> - BUG_ON(!pgtable_alloc);
>> pmd_phys = pgtable_alloc(PMD_SHIFT);
>> __pud_populate(pudp, pmd_phys, pudval);
>> pud = READ_ONCE(*pudp);
>> @@ -324,7 +322,6 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
>>
>> if (flags & NO_EXEC_MAPPINGS)
>> p4dval |= P4D_TABLE_PXN;
>> - BUG_ON(!pgtable_alloc);
>> pud_phys = pgtable_alloc(PUD_SHIFT);
>> __p4d_populate(p4dp, pud_phys, p4dval);
>> p4d = READ_ONCE(*p4dp);
>> @@ -383,6 +380,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
>> phys &= PAGE_MASK;
>> addr = virt & PAGE_MASK;
>> end = PAGE_ALIGN(virt + size);
>> + BUG_ON(!pgtable_alloc);
>>
>> do {
>> next = pgd_addr_end(addr, end);
>> --
>> 2.25.1
>>
>>
> I just bisected a boot failure in our QEMU-based continuous integration
> setup to this change as commit 9ed2b4616d4e ("arm64/mm: Drop redundant
> BUG_ON(!pgtable_alloc)") in the arm64 tree. There is no output so the
> panic clearly happens early at boot. If I move back to the previous
> commit and add a WARN_ON() like so:
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index d386033a074c..9280a92ff920 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -383,6 +383,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
> phys &= PAGE_MASK;
> addr = virt & PAGE_MASK;
> end = PAGE_ALIGN(virt + size);
> + WARN_ON(!pgtable_alloc);
>
> do {
> next = pgd_addr_end(addr, end);
>
> I do see some stacktraces. I have attached the boot log from QEMU.
>
> If there is any additional information I can provide or patches I can
> test, I am more than happy to do so.

There are couple of instances, where __create_pgd_mapping() function gets called
without a valid pgtable alloc function (NULL is passed on instead), as it is not
expected to allocate page table pages, during the mapping process. The following
change after this patch should solve the reported problem.

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 9ea8e9039992..a00563122fcb 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -42,6 +42,7 @@
#define NO_BLOCK_MAPPINGS BIT(0)
#define NO_CONT_MAPPINGS BIT(1)
#define NO_EXEC_MAPPINGS BIT(2) /* assumes FEAT_HPDS is not used */
+#define NO_ALLOC_MAPPINGS BIT(3) /* does not allocate page table pages */

int idmap_t0sz __ro_after_init;

@@ -380,7 +381,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
phys &= PAGE_MASK;
addr = virt & PAGE_MASK;
end = PAGE_ALIGN(virt + size);
- BUG_ON(!pgtable_alloc);
+ BUG_ON(!(flags & NO_ALLOC_MAPPINGS) && !pgtable_alloc);

do {
next = pgd_addr_end(addr, end);
@@ -453,7 +454,7 @@ static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
return;
}
__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
- NO_CONT_MAPPINGS);
+ NO_CONT_MAPPINGS | NO_ALLOC_MAPPINGS);
}

void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
@@ -481,7 +482,7 @@ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
}

__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
- NO_CONT_MAPPINGS);
+ NO_CONT_MAPPINGS | NO_ALLOC_MAPPINGS);

/* flush the TLBs after updating live kernel mappings */
flush_tlb_kernel_range(virt, virt + size);

2022-11-21 12:49:45

by Mark Rutland

Subject: Re: [PATCH] arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)

On Mon, Nov 21, 2022 at 11:00:42AM +0530, Anshuman Khandual wrote:
> Hello Nathan,
>
> Thanks for the report.
>
> On 11/20/22 21:46, Nathan Chancellor wrote:
> > Hi Anshuman,

> > I just bisected a boot failure in our QEMU-based continuous integration
> > setup to this change as commit 9ed2b4616d4e ("arm64/mm: Drop redundant
> > BUG_ON(!pgtable_alloc)") in the arm64 tree. There is no output so the
> > panic clearly happens early at boot. If I move back to the previous
> > commit and add a WARN_ON() like so:
> >
> > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > index d386033a074c..9280a92ff920 100644
> > --- a/arch/arm64/mm/mmu.c
> > +++ b/arch/arm64/mm/mmu.c
> > @@ -383,6 +383,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
> > phys &= PAGE_MASK;
> > addr = virt & PAGE_MASK;
> > end = PAGE_ALIGN(virt + size);
> > + WARN_ON(!pgtable_alloc);
> >
> > do {
> > next = pgd_addr_end(addr, end);
> >
> > I do see some stacktraces. I have attached the boot log from QEMU.
> >
> > If there is any additional information I can provide or patches I can
> > test, I am more than happy to do so.
>
> There are couple of instances, where __create_pgd_mapping() function gets called
> without a valid pgtable alloc function (NULL is passed on instead), as it is not
> expected to allocate page table pages, during the mapping process. The following
> change after this patch should solve the reported problem.
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 9ea8e9039992..a00563122fcb 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -42,6 +42,7 @@
> #define NO_BLOCK_MAPPINGS BIT(0)
> #define NO_CONT_MAPPINGS BIT(1)
> #define NO_EXEC_MAPPINGS BIT(2) /* assumes FEAT_HPDS is not used */
> +#define NO_ALLOC_MAPPINGS BIT(3) /* does not allocate page table pages */
>
> int idmap_t0sz __ro_after_init;
>
> @@ -380,7 +381,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
> phys &= PAGE_MASK;
> addr = virt & PAGE_MASK;
> end = PAGE_ALIGN(virt + size);
> - BUG_ON(!pgtable_alloc);
> + BUG_ON(!(flags & NO_ALLOC_MAPPINGS) && !pgtable_alloc);
>
> do {
> next = pgd_addr_end(addr, end);
> @@ -453,7 +454,7 @@ static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
> return;
> }
> __create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
> - NO_CONT_MAPPINGS);
> + NO_CONT_MAPPINGS | NO_ALLOC_MAPPINGS);
> }
>
> void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
> @@ -481,7 +482,7 @@ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
> }
>
> __create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
> - NO_CONT_MAPPINGS);
> + NO_CONT_MAPPINGS | NO_ALLOC_MAPPINGS);
>
> /* flush the TLBs after updating live kernel mappings */
> flush_tlb_kernel_range(virt, virt + size);

This is now more complicated than what we had originally, and it doesn't catch
the case where the caller sets NO_ALLOC_MAPPINGS but the callee ends up needing
to perform an allocation, which the old code would have caught.
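
To make that concrete (an illustrative sketch only, pieced together from the
hunks quoted above rather than a tested path): a caller that passes
NO_ALLOC_MAPPINGS together with a NULL allocator sails past the consolidated
check, and if it then hits an empty entry we branch through the NULL callback
instead of hitting a precise BUG_ON(), e.g. in alloc_init_cont_pte():

	pmd_t pmd = READ_ONCE(*pmdp);

	if (pmd_none(pmd)) {
		pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN;
		phys_addr_t pte_phys;

		if (flags & NO_EXEC_MAPPINGS)
			pmdval |= PMD_TABLE_PXN;
		/* the old per-level BUG_ON(!pgtable_alloc) fired right here */
		pte_phys = pgtable_alloc(PAGE_SHIFT);	/* NULL function call */
		__pmd_populate(pmdp, pte_phys, pmdval);
		pmd = READ_ONCE(*pmdp);
	}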

This is clearly more subtle than we thought initially; for now could we please
just drop the patch?

Thanks,
Mark.

2022-11-21 13:06:47

by Will Deacon

Subject: Re: [PATCH] arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)

On Mon, Nov 21, 2022 at 12:27:51PM +0000, Mark Rutland wrote:
> On Mon, Nov 21, 2022 at 11:00:42AM +0530, Anshuman Khandual wrote:
> > On 11/20/22 21:46, Nathan Chancellor wrote:
> > > I just bisected a boot failure in our QEMU-based continuous integration
> > > setup to this change as commit 9ed2b4616d4e ("arm64/mm: Drop redundant
> > > BUG_ON(!pgtable_alloc)") in the arm64 tree. There is no output so the
> > > panic clearly happens early at boot. If I move back to the previous
> > > commit and add a WARN_ON() like so:
> > >
> > > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > > index d386033a074c..9280a92ff920 100644
> > > --- a/arch/arm64/mm/mmu.c
> > > +++ b/arch/arm64/mm/mmu.c
> > > @@ -383,6 +383,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
> > > phys &= PAGE_MASK;
> > > addr = virt & PAGE_MASK;
> > > end = PAGE_ALIGN(virt + size);
> > > + WARN_ON(!pgtable_alloc);
> > >
> > > do {
> > > next = pgd_addr_end(addr, end);
> > >
> > > I do see some stacktraces. I have attached the boot log from QEMU.
> > >
> > > If there is any additional information I can provide or patches I can
> > > test, I am more than happy to do so.
> >
> > There are couple of instances, where __create_pgd_mapping() function gets called
> > without a valid pgtable alloc function (NULL is passed on instead), as it is not
> > expected to allocate page table pages, during the mapping process. The following
> > change after this patch should solve the reported problem.
> >
> > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > index 9ea8e9039992..a00563122fcb 100644
> > --- a/arch/arm64/mm/mmu.c
> > +++ b/arch/arm64/mm/mmu.c
> > @@ -42,6 +42,7 @@
> > #define NO_BLOCK_MAPPINGS BIT(0)
> > #define NO_CONT_MAPPINGS BIT(1)
> > #define NO_EXEC_MAPPINGS BIT(2) /* assumes FEAT_HPDS is not used */
> > +#define NO_ALLOC_MAPPINGS BIT(3) /* does not allocate page table pages */
> >
> > int idmap_t0sz __ro_after_init;
> >
> > @@ -380,7 +381,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
> > phys &= PAGE_MASK;
> > addr = virt & PAGE_MASK;
> > end = PAGE_ALIGN(virt + size);
> > - BUG_ON(!pgtable_alloc);
> > + BUG_ON(!(flags & NO_ALLOC_MAPPINGS) && !pgtable_alloc);
> >
> > do {
> > next = pgd_addr_end(addr, end);
> > @@ -453,7 +454,7 @@ static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
> > return;
> > }
> > __create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
> > - NO_CONT_MAPPINGS);
> > + NO_CONT_MAPPINGS | NO_ALLOC_MAPPINGS);
> > }
> >
> > void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
> > @@ -481,7 +482,7 @@ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
> > }
> >
> > __create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
> > - NO_CONT_MAPPINGS);
> > + NO_CONT_MAPPINGS | NO_ALLOC_MAPPINGS);
> >
> > /* flush the TLBs after updating live kernel mappings */
> > flush_tlb_kernel_range(virt, virt + size);
>
> This is now more complicated than what we had originally, and it doesn't catch
> the case where the caller sets NO_ALLOC_MAPPINGS but the callee ends up needing
> to perform an allocation, which the old code would have caught.
>
> This is clearly more subtle than we thought initially; for now could we please
> just drop the patch?

Absolutely, this was supposed to be a trivial cleanup but clearly it's much
more than that. I'll revert it today.

Thanks, Nathan!

Will

2022-11-21 14:23:54

by Robin Murphy

Subject: Re: [PATCH] arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)

On 2022-11-21 12:27, Mark Rutland wrote:
> On Mon, Nov 21, 2022 at 11:00:42AM +0530, Anshuman Khandual wrote:
>> Hello Nathan,
>>
>> Thanks for the report.
>>
>> On 11/20/22 21:46, Nathan Chancellor wrote:
>>> Hi Anshuman,
>
>>> I just bisected a boot failure in our QEMU-based continuous integration
>>> setup to this change as commit 9ed2b4616d4e ("arm64/mm: Drop redundant
>>> BUG_ON(!pgtable_alloc)") in the arm64 tree. There is no output so the
>>> panic clearly happens early at boot. If I move back to the previous
>>> commit and add a WARN_ON() like so:
>>>
>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>> index d386033a074c..9280a92ff920 100644
>>> --- a/arch/arm64/mm/mmu.c
>>> +++ b/arch/arm64/mm/mmu.c
>>> @@ -383,6 +383,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
>>> phys &= PAGE_MASK;
>>> addr = virt & PAGE_MASK;
>>> end = PAGE_ALIGN(virt + size);
>>> + WARN_ON(!pgtable_alloc);
>>>
>>> do {
>>> next = pgd_addr_end(addr, end);
>>>
>>> I do see some stacktraces. I have attached the boot log from QEMU.
>>>
>>> If there is any additional information I can provide or patches I can
>>> test, I am more than happy to do so.
>>
>> There are couple of instances, where __create_pgd_mapping() function gets called
>> without a valid pgtable alloc function (NULL is passed on instead), as it is not
>> expected to allocate page table pages, during the mapping process. The following
>> change after this patch should solve the reported problem.
>>
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 9ea8e9039992..a00563122fcb 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -42,6 +42,7 @@
>> #define NO_BLOCK_MAPPINGS BIT(0)
>> #define NO_CONT_MAPPINGS BIT(1)
>> #define NO_EXEC_MAPPINGS BIT(2) /* assumes FEAT_HPDS is not used */
>> +#define NO_ALLOC_MAPPINGS BIT(3) /* does not allocate page table pages */
>>
>> int idmap_t0sz __ro_after_init;
>>
>> @@ -380,7 +381,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
>> phys &= PAGE_MASK;
>> addr = virt & PAGE_MASK;
>> end = PAGE_ALIGN(virt + size);
>> - BUG_ON(!pgtable_alloc);
>> + BUG_ON(!(flags & NO_ALLOC_MAPPINGS) && !pgtable_alloc);
>>
>> do {
>> next = pgd_addr_end(addr, end);
>> @@ -453,7 +454,7 @@ static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
>> return;
>> }
>> __create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
>> - NO_CONT_MAPPINGS);
>> + NO_CONT_MAPPINGS | NO_ALLOC_MAPPINGS);
>> }
>>
>> void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>> @@ -481,7 +482,7 @@ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
>> }
>>
>> __create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
>> - NO_CONT_MAPPINGS);
>> + NO_CONT_MAPPINGS | NO_ALLOC_MAPPINGS);
>>
>> /* flush the TLBs after updating live kernel mappings */
>> flush_tlb_kernel_range(virt, virt + size);
>
> This is now more complicated than what we had originally, and it doesn't catch
> the case where the caller sets NO_ALLOC_MAPPINGS but the callee ends up needing
> to perform an allocation, which the old code would have caught.

Well, it's still "caught" as such - all that BUG_ON(!pgtable_alloc) does
in these cases is encode the source location in the backtrace, vs.
having to decode it (if necessary) from the LR in a backtrace from
immediately dereferencing pgtable_alloc(). If that happens before the
user has a console up then the difference is moot anyway.

Cheers,
Robin.

2022-11-21 15:56:35

by Naresh Kamboju

Subject: Re: [PATCH] arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)

LKFT detected an arm64 boot regression on today's Linux next-20221121 tag.
The kernel boot log did not show anything on the serial console.

Anders bisected this problem and found the subject commit is the
first bad commit.

# first bad commit: [9ed2b4616d4e846ece2a04cb5007ce1d1bd9e3f3]
arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)

It was later found that this problem had already been reported, see the lore link below [1].

ref:
[1] https://lore.kernel.org/all/[email protected]/


--
Linaro LKFT
https://lkft.linaro.org

2022-11-21 18:27:12

by Will Deacon

Subject: Re: [PATCH] arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)

On Mon, Nov 21, 2022 at 09:17:18PM +0530, Naresh Kamboju wrote:
> LKFT detected an arm64 boot regression on today's Linux next-20221121 tag.
> The kernel boot log did not show anything on the serial console.
>
> Anders bisected this problem and found the subject commit is the
> first bad commit.
>
> # first bad commit: [9ed2b4616d4e846ece2a04cb5007ce1d1bd9e3f3]
> arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)
>
> It was later found that this problem had already been reported, see the lore link below [1].
>
> ref:
> [1] https://lore.kernel.org/all/[email protected]/

Yup, I've queued a revert locally and will push it out this evening.

Cheers,

Will

2022-11-22 03:42:59

by Anshuman Khandual

Subject: Re: [PATCH] arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)



On 11/21/22 19:26, Robin Murphy wrote:
> On 2022-11-21 12:27, Mark Rutland wrote:
>> On Mon, Nov 21, 2022 at 11:00:42AM +0530, Anshuman Khandual wrote:
>>> Hello Nathan,
>>>
>>> Thanks for the report.
>>>
>>> On 11/20/22 21:46, Nathan Chancellor wrote:
>>>> Hi Anshuman,
>>
>>>> I just bisected a boot failure in our QEMU-based continuous integration
>>>> setup to this change as commit 9ed2b4616d4e ("arm64/mm: Drop redundant
>>>> BUG_ON(!pgtable_alloc)") in the arm64 tree. There is no output so the
>>>> panic clearly happens early at boot. If I move back to the previous
>>>> commit and add a WARN_ON() like so:
>>>>
>>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>>> index d386033a074c..9280a92ff920 100644
>>>> --- a/arch/arm64/mm/mmu.c
>>>> +++ b/arch/arm64/mm/mmu.c
>>>> @@ -383,6 +383,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
>>>>       phys &= PAGE_MASK;
>>>>       addr = virt & PAGE_MASK;
>>>>       end = PAGE_ALIGN(virt + size);
>>>> +    WARN_ON(!pgtable_alloc);
>>>>         do {
>>>>           next = pgd_addr_end(addr, end);
>>>>
>>>> I do see some stacktraces. I have attached the boot log from QEMU.
>>>>
>>>> If there is any additional information I can provide or patches I can
>>>> test, I am more than happy to do so.
>>>
>>> There are couple of instances, where __create_pgd_mapping() function gets called
>>> without a valid pgtable alloc function (NULL is passed on instead), as it is not
>>> expected to allocate page table pages, during the mapping process. The following
>>> change after this patch should solve the reported problem.
>>>
>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>> index 9ea8e9039992..a00563122fcb 100644
>>> --- a/arch/arm64/mm/mmu.c
>>> +++ b/arch/arm64/mm/mmu.c
>>> @@ -42,6 +42,7 @@
>>>   #define NO_BLOCK_MAPPINGS      BIT(0)
>>>   #define NO_CONT_MAPPINGS       BIT(1)
>>>   #define NO_EXEC_MAPPINGS       BIT(2)  /* assumes FEAT_HPDS is not used */
>>> +#define NO_ALLOC_MAPPINGS      BIT(3)  /* does not allocate page table pages */
>>>     int idmap_t0sz __ro_after_init;
>>>   @@ -380,7 +381,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
>>>          phys &= PAGE_MASK;
>>>          addr = virt & PAGE_MASK;
>>>          end = PAGE_ALIGN(virt + size);
>>> -       BUG_ON(!pgtable_alloc);
>>> +       BUG_ON(!(flags & NO_ALLOC_MAPPINGS) && !pgtable_alloc);
>>>            do {
>>>                  next = pgd_addr_end(addr, end);
>>> @@ -453,7 +454,7 @@ static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
>>>                  return;
>>>          }
>>>          __create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
>>> -                            NO_CONT_MAPPINGS);
>>> +                            NO_CONT_MAPPINGS | NO_ALLOC_MAPPINGS);
>>>   }
>>>     void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>>> @@ -481,7 +482,7 @@ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
>>>          }
>>>            __create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
>>> -                            NO_CONT_MAPPINGS);
>>> +                            NO_CONT_MAPPINGS | NO_ALLOC_MAPPINGS);
>>>            /* flush the TLBs after updating live kernel mappings */
>>>          flush_tlb_kernel_range(virt, virt + size);
>>
>> This is now more complicated than what we had originally, and it doesn't catch
>> the case where the caller sets NO_ALLOC_MAPPINGS but the callee ends up needing
>> to perform an allocation, which the old code would have caught.
>
> Well, it's still "caught" as such - all that BUG_ON(!pgtable_alloc) does in these cases is encode the source location in the backtrace, vs. having to decode it (if necessary) from the LR in a backtrace from immediately dereferencing pgtable_alloc(). If that happens before the user has a console up then the difference is moot anyway.

Agreed, but there is also the fact that certain callers are sure that no page table page
allocation will be required (hence they just pass "NULL" for the pgtable_alloc function),
and that needs to be flagged appropriately when page table creation stumbles upon a pxd_none().

if (pxd_none(pmd)) {
	BUG_ON(flags & NO_ALLOC_MAPPINGS);
}

There might be an argument that this just replaces every erstwhile BUG_ON(!pgtable_alloc)
with BUG_ON(flags & NO_ALLOC_MAPPINGS), but there is a subtle difference. It captures, in
a rather structured manner, the fact that the caller chose NO_ALLOC_MAPPINGS and yet called
__create_pgd_mapping() on a page table where an intermediate page table page allocation
would still be required. IMHO adding that explicit structure here via the new flag
NO_ALLOC_MAPPINGS makes sense.
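
Something like the following (an untested sketch against alloc_init_pud() with
the earlier patch applied, just to illustrate the idea) is what I have in mind:

	if (p4d_none(p4d)) {
		p4dval_t p4dval = P4D_TYPE_TABLE | P4D_TABLE_UXN;
		phys_addr_t pud_phys;

		if (flags & NO_EXEC_MAPPINGS)
			p4dval |= P4D_TABLE_PXN;
		/* the caller promised that no page table page would be needed */
		BUG_ON(flags & NO_ALLOC_MAPPINGS);
		pud_phys = pgtable_alloc(PUD_SHIFT);
		__p4d_populate(p4dp, pud_phys, p4dval);
		p4d = READ_ONCE(*p4dp);
	}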

2022-11-22 15:13:15

by Biju Das

Subject: RE: [PATCH] arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)

Hi All,

> Subject: Re: [PATCH] arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)
>
>
>
> On 11/21/22 19:26, Robin Murphy wrote:
> > On 2022-11-21 12:27, Mark Rutland wrote:
> >> On Mon, Nov 21, 2022 at 11:00:42AM +0530, Anshuman Khandual wrote:
> >>> Hello Nathan,
> >>>
> >>> Thanks for the report.
> >>>
> >>> On 11/20/22 21:46, Nathan Chancellor wrote:
> >>>> Hi Anshuman,
> >>
> >>>> I just bisected a boot failure in our QEMU-based continuous

This patch created a boot failure on RZ/G2L platforms as well.
Reverting the patch fixed the boot failure.

Cheers,
Biju

> >>>> integration setup to this change as commit 9ed2b4616d4e ("arm64/mm:
> >>>> Drop redundant
> >>>> BUG_ON(!pgtable_alloc)") in the arm64 tree. There is no output so
> >>>> the panic clearly happens early at boot. If I move back to the
> >>>> previous commit and add a WARN_ON() like so:
> >>>>
> >>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> >>>> index d386033a074c..9280a92ff920 100644
> >>>> --- a/arch/arm64/mm/mmu.c
> >>>> +++ b/arch/arm64/mm/mmu.c
> >>>> @@ -383,6 +383,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
> >>>> phys &= PAGE_MASK;
> >>>> addr = virt & PAGE_MASK;
> >>>> end = PAGE_ALIGN(virt + size);
> >>>> + WARN_ON(!pgtable_alloc);
> >>>>
> >>>> do {
> >>>> next = pgd_addr_end(addr, end);
> >>>>
> >>>> I do see some stacktraces. I have attached the boot log from QEMU.
> >>>>
> >>>> If there is any additional information I can provide or patches I
> >>>> can test, I am more than happy to do so.
> >>>
> >>> There are couple of instances, where __create_pgd_mapping() function
> >>> gets called without a valid pgtable alloc function (NULL is passed
> >>> on instead), as it is not expected to allocate page table pages,
> >>> during the mapping process. The following change after this patch
> >>> should solve the reported problem.
> >>>
> >>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> >>> index 9ea8e9039992..a00563122fcb 100644
> >>> --- a/arch/arm64/mm/mmu.c
> >>> +++ b/arch/arm64/mm/mmu.c
> >>> @@ -42,6 +42,7 @@
> >>> #define NO_BLOCK_MAPPINGS BIT(0)
> >>> #define NO_CONT_MAPPINGS BIT(1)
> >>> #define NO_EXEC_MAPPINGS BIT(2) /* assumes FEAT_HPDS is not used */
> >>> +#define NO_ALLOC_MAPPINGS BIT(3) /* does not allocate page table pages */
> >>>
> >>> int idmap_t0sz __ro_after_init;
> >>>
> >>> @@ -380,7 +381,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
> >>> phys &= PAGE_MASK;
> >>> addr = virt & PAGE_MASK;
> >>> end = PAGE_ALIGN(virt + size);
> >>> - BUG_ON(!pgtable_alloc);
> >>> + BUG_ON(!(flags & NO_ALLOC_MAPPINGS) && !pgtable_alloc);
> >>>
> >>> do {
> >>> next = pgd_addr_end(addr, end);
> >>> @@ -453,7 +454,7 @@ static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
> >>> return;
> >>> }
> >>> __create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
> >>> - NO_CONT_MAPPINGS);
> >>> + NO_CONT_MAPPINGS | NO_ALLOC_MAPPINGS);
> >>> }
> >>>
> >>> void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
> >>> @@ -481,7 +482,7 @@ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
> >>> }
> >>>
> >>> __create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
> >>> - NO_CONT_MAPPINGS);
> >>> + NO_CONT_MAPPINGS | NO_ALLOC_MAPPINGS);
> >>>
> >>> /* flush the TLBs after updating live kernel mappings */
> >>> flush_tlb_kernel_range(virt, virt + size);
> >>
> >> This is now more complicated than what we had originally, and it
> >> doesn't catch the case where the caller sets NO_ALLOC_MAPPINGS but
> >> the callee ends up needing to perform an allocation, which the old
> code would have caught.
> >
> > Well, it's still "caught" as such - all that BUG_ON(!pgtable_alloc)
> does in these cases is encode the source location in the backtrace, vs.
> having to decode it (if necessary) from the LR in a backtrace from
> immediately dereferencing pgtable_alloc(). If that happens before the
> user has a console up then the difference is moot anyway.
>
> Agreed, but there is also the fact that certain callers are sure that no
> page table page allocation will be required (hence they just pass "NULL"
> for the pgtable_alloc function), and that needs to be flagged appropriately
> when page table creation stumbles upon a pxd_none().
>
> if (pxd_none(pmd)) {
> 	BUG_ON(flags & NO_ALLOC_MAPPINGS);
> }
>
> There might be an argument that this just replaces every erstwhile
> BUG_ON(!pgtable_alloc) with BUG_ON(flags & NO_ALLOC_MAPPINGS), but there
> is a subtle difference. It captures, in a rather structured manner, the
> fact that the caller chose NO_ALLOC_MAPPINGS and yet called
> __create_pgd_mapping() on a page table where an intermediate page table
> page allocation would still be required. IMHO adding that explicit
> structure here via the new flag NO_ALLOC_MAPPINGS makes sense.

2022-11-22 19:23:18

by Robin Murphy

Subject: Re: [PATCH] arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)

On 2022-11-22 03:13, Anshuman Khandual wrote:
>
>
> On 11/21/22 19:26, Robin Murphy wrote:
>> On 2022-11-21 12:27, Mark Rutland wrote:
>>> On Mon, Nov 21, 2022 at 11:00:42AM +0530, Anshuman Khandual wrote:
>>>> Hello Nathan,
>>>>
>>>> Thanks for the report.
>>>>
>>>> On 11/20/22 21:46, Nathan Chancellor wrote:
>>>>> Hi Anshuman,
>>>
>>>>> I just bisected a boot failure in our QEMU-based continuous integration
>>>>> setup to this change as commit 9ed2b4616d4e ("arm64/mm: Drop redundant
>>>>> BUG_ON(!pgtable_alloc)") in the arm64 tree. There is no output so the
>>>>> panic clearly happens early at boot. If I move back to the previous
>>>>> commit and add a WARN_ON() like so:
>>>>>
>>>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>>>> index d386033a074c..9280a92ff920 100644
>>>>> --- a/arch/arm64/mm/mmu.c
>>>>> +++ b/arch/arm64/mm/mmu.c
>>>>> @@ -383,6 +383,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
>>>>>       phys &= PAGE_MASK;
>>>>>       addr = virt & PAGE_MASK;
>>>>>       end = PAGE_ALIGN(virt + size);
>>>>> +    WARN_ON(!pgtable_alloc);
>>>>>         do {
>>>>>           next = pgd_addr_end(addr, end);
>>>>>
>>>>> I do see some stacktraces. I have attached the boot log from QEMU.
>>>>>
>>>>> If there is any additional information I can provide or patches I can
>>>>> test, I am more than happy to do so.
>>>>
>>>> There are couple of instances, where __create_pgd_mapping() function gets called
>>>> without a valid pgtable alloc function (NULL is passed on instead), as it is not
>>>> expected to allocate page table pages, during the mapping process. The following
>>>> change after this patch should solve the reported problem.
>>>>
>>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>>> index 9ea8e9039992..a00563122fcb 100644
>>>> --- a/arch/arm64/mm/mmu.c
>>>> +++ b/arch/arm64/mm/mmu.c
>>>> @@ -42,6 +42,7 @@
>>>>   #define NO_BLOCK_MAPPINGS      BIT(0)
>>>>   #define NO_CONT_MAPPINGS       BIT(1)
>>>>   #define NO_EXEC_MAPPINGS       BIT(2)  /* assumes FEAT_HPDS is not used */
>>>> +#define NO_ALLOC_MAPPINGS      BIT(3)  /* does not allocate page table pages */
>>>>     int idmap_t0sz __ro_after_init;
>>>>   @@ -380,7 +381,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
>>>>          phys &= PAGE_MASK;
>>>>          addr = virt & PAGE_MASK;
>>>>          end = PAGE_ALIGN(virt + size);
>>>> -       BUG_ON(!pgtable_alloc);
>>>> +       BUG_ON(!(flags & NO_ALLOC_MAPPINGS) && !pgtable_alloc);
>>>>            do {
>>>>                  next = pgd_addr_end(addr, end);
>>>> @@ -453,7 +454,7 @@ static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
>>>>                  return;
>>>>          }
>>>>          __create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
>>>> -                            NO_CONT_MAPPINGS);
>>>> +                            NO_CONT_MAPPINGS | NO_ALLOC_MAPPINGS);
>>>>   }
>>>>     void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>>>> @@ -481,7 +482,7 @@ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
>>>>          }
>>>>            __create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
>>>> -                            NO_CONT_MAPPINGS);
>>>> +                            NO_CONT_MAPPINGS | NO_ALLOC_MAPPINGS);
>>>>            /* flush the TLBs after updating live kernel mappings */
>>>>          flush_tlb_kernel_range(virt, virt + size);
>>>
>>> This is now more complicated than what we had originally, and it doesn't catch
>>> the case where the caller sets NO_ALLOC_MAPPINGS but the callee ends up needing
>>> to perform an allocation, which the old code would have caught.
>>
>> Well, it's still "caught" as such - all that BUG_ON(!pgtable_alloc) does in these cases is encode the source location in the backtrace, vs. having to decode it (if necessary) from the LR in a backtrace from immediately dereferencing pgtable_alloc(). If that happens before the user has a console up then the difference is moot anyway.
>
> Agreed, but there is also the fact that certain callers are sure that no page table page
> allocation will be required (hence they just pass "NULL" for the pgtable_alloc function),
> and that needs to be flagged appropriately when page table creation stumbles upon a pxd_none().
>
> if (pxd_none(pmd)) {
> 	BUG_ON(flags & NO_ALLOC_MAPPINGS);
> }
>
> There might be an argument that this just replaces every erstwhile BUG_ON(!pgtable_alloc)
> with BUG_ON(flags & NO_ALLOC_MAPPINGS), but there is a subtle difference. It captures, in
> a rather structured manner, the fact that the caller chose NO_ALLOC_MAPPINGS and yet called
> __create_pgd_mapping() on a page table where an intermediate page table page allocation
> would still be required. IMHO adding that explicit structure here via the new flag
> NO_ALLOC_MAPPINGS makes sense.

Hmm, I dunno, it seems clear enough to me that the callers which do not
intend to allocate indicate that by passing a NULL allocator callback.
Personally I don't see the benefit in adding code to make all those
callers redundantly encode their intent twice over just so that we can
also add code to catch them out and crash if they don't. Weren't you
trying to clean up redundancy?

As I say I think any BUG_ON() has very limited value as documentation
where the only way to proceed is by dereferencing NULL in an
identifiable manner anyway, but that's largely a matter of taste and I
know others might disagree.

Thanks,
Robin.