Booting a power9 server with hash MMU can trigger undefined behaviour
because pud_offset(p4d, 0) will do:
0 >> (PAGE_SHIFT:16 + PTE_INDEX_SIZE:8 + H_PMD_INDEX_SIZE:10)
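A minimal userspace sketch of the same pattern (the macro below is a
simplified stand-in for the kernel's pud_index() arithmetic, not the real
definition); building it with gcc -fsanitize=shift reproduces an
equivalent report:

  #include <stdio.h>

  /* PAGE_SHIFT(16) + PTE_INDEX_SIZE(8) + H_PMD_INDEX_SIZE(10) */
  #define PUD_SHIFT 34

  /* simplified stand-in for the kernel's pud_index() macro */
  #define pud_index(address) ((address) >> PUD_SHIFT)

  int main(void)
  {
          /* 0 is an int: shifting a 32-bit type by 34 is undefined */
          unsigned long bad = pud_index(0);
          /* 0UL is an unsigned long: the shift is well defined on 64-bit */
          unsigned long good = pud_index(0UL);

          printf("%lu %lu\n", bad, good);
          return 0;
  }

The splat seen at boot: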
UBSAN: shift-out-of-bounds in arch/powerpc/mm/ptdump/ptdump.c:282:15
shift exponent 34 is too large for 32-bit type 'int'
CPU: 6 PID: 1 Comm: swapper/0 Not tainted 5.6.0-rc4-next-20200303+ #13
Call Trace:
dump_stack+0xf4/0x164 (unreliable)
ubsan_epilogue+0x18/0x78
__ubsan_handle_shift_out_of_bounds+0x160/0x21c
walk_pagetables+0x2cc/0x700
walk_pud at arch/powerpc/mm/ptdump/ptdump.c:282
(inlined by) walk_pagetables at arch/powerpc/mm/ptdump/ptdump.c:311
ptdump_check_wx+0x8c/0xf0
mark_rodata_ro+0x48/0x80
kernel_init+0x74/0x194
ret_from_kernel_thread+0x5c/0x74
Fixes: 8eb07b187000 ("powerpc/mm: Dump linux pagetables")
Signed-off-by: Qian Cai <[email protected]>
---
Notes for maintainers:
This is on top of the linux-next commit "powerpc: add support for
folded p4d page tables" which is in Andrew's tree.
arch/powerpc/mm/ptdump/ptdump.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/ptdump/ptdump.c b/arch/powerpc/mm/ptdump/ptdump.c
index 9d6256b61df3..b530f81398a7 100644
--- a/arch/powerpc/mm/ptdump/ptdump.c
+++ b/arch/powerpc/mm/ptdump/ptdump.c
@@ -279,7 +279,7 @@ static void walk_pmd(struct pg_state *st, pud_t *pud, unsigned long start)
static void walk_pud(struct pg_state *st, p4d_t *p4d, unsigned long start)
{
- pud_t *pud = pud_offset(p4d, 0);
+ pud_t *pud = pud_offset(p4d, 0UL);
unsigned long addr;
unsigned int i;
--
2.21.0 (Apple Git-122.2)
On 05/03/2020 at 05:47, Qian Cai wrote:
> Booting a power9 server with hash MMU can trigger undefined behaviour
> because pud_offset(p4d, 0) will do:
>
> 0 >> (PAGE_SHIFT:16 + PTE_INDEX_SIZE:8 + H_PMD_INDEX_SIZE:10)
>
> UBSAN: shift-out-of-bounds in arch/powerpc/mm/ptdump/ptdump.c:282:15
> shift exponent 34 is too large for 32-bit type 'int'
> CPU: 6 PID: 1 Comm: swapper/0 Not tainted 5.6.0-rc4-next-20200303+ #13
> Call Trace:
> dump_stack+0xf4/0x164 (unreliable)
> ubsan_epilogue+0x18/0x78
> __ubsan_handle_shift_out_of_bounds+0x160/0x21c
> walk_pagetables+0x2cc/0x700
> walk_pud at arch/powerpc/mm/ptdump/ptdump.c:282
> (inlined by) walk_pagetables at arch/powerpc/mm/ptdump/ptdump.c:311
> ptdump_check_wx+0x8c/0xf0
> mark_rodata_ro+0x48/0x80
> kernel_init+0x74/0x194
> ret_from_kernel_thread+0x5c/0x74
>
> Fixes: 8eb07b187000 ("powerpc/mm: Dump linux pagetables")
> Signed-off-by: Qian Cai <[email protected]>
> ---
>
> Notes for maintainers:
>
> This is on top of the linux-next commit "powerpc: add support for
> folded p4d page tables" which is in Andrew's tree.
>
> arch/powerpc/mm/ptdump/ptdump.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/mm/ptdump/ptdump.c b/arch/powerpc/mm/ptdump/ptdump.c
> index 9d6256b61df3..b530f81398a7 100644
> --- a/arch/powerpc/mm/ptdump/ptdump.c
> +++ b/arch/powerpc/mm/ptdump/ptdump.c
> @@ -279,7 +279,7 @@ static void walk_pmd(struct pg_state *st, pud_t *pud, unsigned long start)
>
> static void walk_pud(struct pg_state *st, p4d_t *p4d, unsigned long start)
> {
> - pud_t *pud = pud_offset(p4d, 0);
> + pud_t *pud = pud_offset(p4d, 0UL);
Is that the only place we have to do this?
(In 5.6-rc) I see the same in:
arch/powerpc/mm/ptdump/hashpagetable.c
arch/powerpc/kvm/book3s_64_mmu_radix.c
Wouldn't it be better to:
- either cast addr to unsigned long in the pud_index() macro,
- or change the pud_index() macro to a static inline function, as x86 does (something like the sketch below)?
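For illustration, the second option could look roughly like the x86
definition (an untested sketch, assuming the usual PUD_SHIFT and
PTRS_PER_PUD definitions):

  static inline unsigned long pud_index(unsigned long address)
  {
          return (address >> PUD_SHIFT) & (PTRS_PER_PUD - 1);
  }

With the parameter declared unsigned long, any integer argument is
promoted before the shift, so callers don't need to spell 0UL.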
Christophe
Christophe Leroy <[email protected]> writes:
> On 05/03/2020 at 05:47, Qian Cai wrote:
>> Booting a power9 server with hash MMU can trigger undefined behaviour
>> because pud_offset(p4d, 0) will do:
>>
>> 0 >> (PAGE_SHIFT:16 + PTE_INDEX_SIZE:8 + H_PMD_INDEX_SIZE:10)
>>
>> UBSAN: shift-out-of-bounds in arch/powerpc/mm/ptdump/ptdump.c:282:15
>> shift exponent 34 is too large for 32-bit type 'int'
>> CPU: 6 PID: 1 Comm: swapper/0 Not tainted 5.6.0-rc4-next-20200303+ #13
>> Call Trace:
>> dump_stack+0xf4/0x164 (unreliable)
>> ubsan_epilogue+0x18/0x78
>> __ubsan_handle_shift_out_of_bounds+0x160/0x21c
>> walk_pagetables+0x2cc/0x700
>> walk_pud at arch/powerpc/mm/ptdump/ptdump.c:282
>> (inlined by) walk_pagetables at arch/powerpc/mm/ptdump/ptdump.c:311
>> ptdump_check_wx+0x8c/0xf0
>> mark_rodata_ro+0x48/0x80
>> kernel_init+0x74/0x194
>> ret_from_kernel_thread+0x5c/0x74
>>
>> Fixes: 8eb07b187000 ("powerpc/mm: Dump linux pagetables")
>> Signed-off-by: Qian Cai <[email protected]>
>> ---
>>
>> Notes for maintainers:
>>
>> This is on top of the linux-next commit "powerpc: add support for
>> folded p4d page tables" which is in Andrew's tree.
>>
>> arch/powerpc/mm/ptdump/ptdump.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/arch/powerpc/mm/ptdump/ptdump.c b/arch/powerpc/mm/ptdump/ptdump.c
>> index 9d6256b61df3..b530f81398a7 100644
>> --- a/arch/powerpc/mm/ptdump/ptdump.c
>> +++ b/arch/powerpc/mm/ptdump/ptdump.c
>> @@ -279,7 +279,7 @@ static void walk_pmd(struct pg_state *st, pud_t *pud, unsigned long start)
>>
>> static void walk_pud(struct pg_state *st, p4d_t *p4d, unsigned long start)
>> {
>> - pud_t *pud = pud_offset(p4d, 0);
>> + pud_t *pud = pud_offset(p4d, 0UL);
>
> Is that the only place we have to do this?
>
> (In 5.6-rc) I see the same in:
> arch/powerpc/mm/ptdump/hashpagetable.c
> arch/powerpc/kvm/book3s_64_mmu_radix.c
>
> Wouldn't it be better to:
> - either cast addr to unsigned long in the pud_index() macro,
> - or change the pud_index() macro to a static inline function, as x86 does?
Yes, either would be better, but preferably the latter.
It's hostile to require the caller to pass an unsigned long when there's
no way they can know that's required.
cheers