2024-02-22 08:01:07

by Christophe Leroy

Subject: Re: [PATCH 1/2] powerpc: Refactor __kernel_map_pages()



On 22/02/2024 at 06:32, Michael Ellerman wrote:
> Christophe Leroy <[email protected]> writes:
>> __kernel_map_pages() is almost identical for PPC32 and RADIX.
>>
>> Refactor it.
>>
>> On PPC32 it is not needed for KFENCE, but to keep it simple
>> just make it similar to PPC64.
>>
>> Signed-off-by: Christophe Leroy <[email protected]>
>> ---
>> arch/powerpc/include/asm/book3s/64/pgtable.h | 10 ----------
>> arch/powerpc/include/asm/book3s/64/radix.h | 2 --
>> arch/powerpc/mm/book3s64/radix_pgtable.c | 14 --------------
>> arch/powerpc/mm/pageattr.c | 19 +++++++++++++++++++
>> arch/powerpc/mm/pgtable_32.c | 15 ---------------
>> 5 files changed, 19 insertions(+), 41 deletions(-)
>>
>> diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
>> index 421db7c4f2a4..16b8d20d6ca8 100644
>> --- a/arch/powerpc/mm/pageattr.c
>> +++ b/arch/powerpc/mm/pageattr.c
>> @@ -101,3 +101,22 @@ int change_memory_attr(unsigned long addr, int numpages, long action)
>>  	return apply_to_existing_page_range(&init_mm, start, size,
>>  					    change_page_attr, (void *)action);
>>  }
>> +
>> +#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
>> +#ifdef CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC
>> +void __kernel_map_pages(struct page *page, int numpages, int enable)
>> +{
>> +	unsigned long addr = (unsigned long)page_address(page);
>> +
>> +	if (PageHighMem(page))
>> +		return;
>> +
>> +	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !radix_enabled())
>> +		hash__kernel_map_pages(page, numpages, enable);
>> +	else if (enable)
>> +		set_memory_p(addr, numpages);
>> +	else
>> +		set_memory_np(addr, numpages);
>> +}
>
> This doesn't build on 32-bit, eg. ppc32_allmodconfig:
>
> ../arch/powerpc/mm/pageattr.c: In function '__kernel_map_pages':
> ../arch/powerpc/mm/pageattr.c:116:23: error: implicit declaration of function 'hash__kernel_map_pages' [-Werror=implicit-function-declaration]
>   116 |                 err = hash__kernel_map_pages(page, numpages, enable);
>       |                       ^~~~~~~~~~~~~~~~~~~~~~
>
> I couldn't see a nice way to get around it, so ended up with:
>
> void __kernel_map_pages(struct page *page, int numpages, int enable)
> {
> 	int err;
> 	unsigned long addr = (unsigned long)page_address(page);
>
> 	if (PageHighMem(page))
> 		return;
>
> #ifdef CONFIG_PPC_BOOK3S_64
> 	if (!radix_enabled())
> 		err = hash__kernel_map_pages(page, numpages, enable);
> 	else
> #endif
> 	if (enable)
> 		err = set_memory_p(addr, numpages);
> 	else
> 		err = set_memory_np(addr, numpages);
>


It seems I missed something. It's not good to leave something unfinished
when going on vacation and assume it was done when you come back.
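
For the record, the problem is purely one of visibility at parse time:
IS_ENABLED() only lets the compiler discard the branch as dead code, so the
callee still needs a declaration in every configuration, and on ppc32 nothing
declares hash__kernel_map_pages(). A minimal stand-alone sketch of the pattern
(hypothetical names, nothing from the patch):

/* Stand-in for a config symbol that is 0 on this build, like CONFIG_PPC_BOOK3S_64 on ppc32. */
#define MY_CONFIG_ENABLED 0

int my_hash_helper(int x);	/* without this declaration: -Werror=implicit-function-declaration */

int my_caller(int x)
{
	if (MY_CONFIG_ENABLED)
		return my_hash_helper(x);	/* dead code on this build, but it must still parse */
	return x;
}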

The best solution I see is to move the hash__kernel_map_pages() prototype
somewhere else.

$ git grep -e hash__ -e radix__ -- arch/powerpc/include/asm/*.h
arch/powerpc/include/asm/bug.h:void hash__do_page_fault(struct pt_regs *);
arch/powerpc/include/asm/mmu.h:extern void radix__mmu_cleanup_all(void);
arch/powerpc/include/asm/mmu_context.h:extern void radix__switch_mmu_context(struct mm_struct *prev,
arch/powerpc/include/asm/mmu_context.h:	return radix__switch_mmu_context(prev, next);
arch/powerpc/include/asm/mmu_context.h:extern int hash__alloc_context_id(void);
arch/powerpc/include/asm/mmu_context.h:void __init hash__reserve_context_id(int id);
arch/powerpc/include/asm/mmu_context.h:	context_id = hash__alloc_context_id();
arch/powerpc/include/asm/mmu_context.h: * radix__flush_all_mm() to determine the scope (local/global)
arch/powerpc/include/asm/mmu_context.h:	radix__flush_all_mm(mm);


Maybe asm/mmu.h?

Or mm/mmu_decl.h?
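
To make the idea concrete, a rough sketch of what that could look like in
mm/mmu_decl.h (assuming the int-returning signature implied by the snippet
above; exact guards and placement still to be decided):

/* arch/powerpc/mm/mmu_decl.h -- sketch only */
/*
 * Declared unconditionally so that the IS_ENABLED(CONFIG_PPC_BOOK3S_64)
 * branch in __kernel_map_pages() parses on ppc32; the call is compiled
 * out there, so no 32-bit definition is needed.
 */
int hash__kernel_map_pages(struct page *page, int numpages, int enable);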

Christophe


2024-02-23 06:28:52

by Michael Ellerman

Subject: Re: [PATCH 1/2] powerpc: Refactor __kernel_map_pages()

Christophe Leroy <[email protected]> writes:
> On 22/02/2024 at 06:32, Michael Ellerman wrote:
>> Christophe Leroy <[email protected]> writes:
>>> __kernel_map_pages() is almost identical for PPC32 and RADIX.
>>>
>>> Refactor it.
>>>
>>> On PPC32 it is not needed for KFENCE, but to keep it simple
>>> just make it similar to PPC64.
>>>
>>> Signed-off-by: Christophe Leroy <[email protected]>
>>> ---
>>> arch/powerpc/include/asm/book3s/64/pgtable.h | 10 ----------
>>> arch/powerpc/include/asm/book3s/64/radix.h | 2 --
>>> arch/powerpc/mm/book3s64/radix_pgtable.c | 14 --------------
>>> arch/powerpc/mm/pageattr.c | 19 +++++++++++++++++++
>>> arch/powerpc/mm/pgtable_32.c | 15 ---------------
>>> 5 files changed, 19 insertions(+), 41 deletions(-)
>>>
>>> diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
>>> index 421db7c4f2a4..16b8d20d6ca8 100644
>>> --- a/arch/powerpc/mm/pageattr.c
>>> +++ b/arch/powerpc/mm/pageattr.c
>>> @@ -101,3 +101,22 @@ int change_memory_attr(unsigned long addr, int numpages, long action)
>>>  	return apply_to_existing_page_range(&init_mm, start, size,
>>>  					    change_page_attr, (void *)action);
>>>  }
>>> +
>>> +#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
>>> +#ifdef CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC
>>> +void __kernel_map_pages(struct page *page, int numpages, int enable)
>>> +{
>>> +	unsigned long addr = (unsigned long)page_address(page);
>>> +
>>> +	if (PageHighMem(page))
>>> +		return;
>>> +
>>> +	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !radix_enabled())
>>> +		hash__kernel_map_pages(page, numpages, enable);
>>> +	else if (enable)
>>> +		set_memory_p(addr, numpages);
>>> +	else
>>> +		set_memory_np(addr, numpages);
>>> +}
>>
>> This doesn't build on 32-bit, eg. ppc32_allmodconfig:
>>
>> ../arch/powerpc/mm/pageattr.c: In function '__kernel_map_pages':
>> ../arch/powerpc/mm/pageattr.c:116:23: error: implicit declaration of function 'hash__kernel_map_pages' [-Werror=implicit-function-declaration]
>>   116 |                 err = hash__kernel_map_pages(page, numpages, enable);
>>       |                       ^~~~~~~~~~~~~~~~~~~~~~
>>
>> I couldn't see a nice way to get around it, so ended up with:
>>
>> void __kernel_map_pages(struct page *page, int numpages, int enable)
>> {
>> 	int err;
>> 	unsigned long addr = (unsigned long)page_address(page);
>>
>> 	if (PageHighMem(page))
>> 		return;
>>
>> #ifdef CONFIG_PPC_BOOK3S_64
>> 	if (!radix_enabled())
>> 		err = hash__kernel_map_pages(page, numpages, enable);
>> 	else
>> #endif
>> 	if (enable)
>> 		err = set_memory_p(addr, numpages);
>> 	else
>> 		err = set_memory_np(addr, numpages);
>>
>
>
> It seems I missed something. It's not good to leave something unfinished
> when going on vacation and assume it was done when you come back.
>
> The best solution I see is to move the hash__kernel_map_pages() prototype
> somewhere else.

> $ git grep -e hash__ -e radix__ -- arch/powerpc/include/asm/*.h
> arch/powerpc/include/asm/bug.h:void hash__do_page_fault(struct pt_regs *);
> arch/powerpc/include/asm/mmu.h:extern void radix__mmu_cleanup_all(void);
> arch/powerpc/include/asm/mmu_context.h:extern void radix__switch_mmu_context(struct mm_struct *prev,
> arch/powerpc/include/asm/mmu_context.h: return radix__switch_mmu_context(prev, next);
> arch/powerpc/include/asm/mmu_context.h:extern int hash__alloc_context_id(void);
> arch/powerpc/include/asm/mmu_context.h:void __init hash__reserve_context_id(int id);
> arch/powerpc/include/asm/mmu_context.h: context_id = hash__alloc_context_id();
> arch/powerpc/include/asm/mmu_context.h: * radix__flush_all_mm() to determine the scope (local/global)
> arch/powerpc/include/asm/mmu_context.h: radix__flush_all_mm(mm);

If anything I'd prefer to move those out of there into the book3s/64/
headers :)

> Maybe asm/mmu.h?
>
> Or mm/mmu_decl.h?

Yeah I'll do that. It's a bit of a dumping ground, but at least it's
internal to arch code, not exported to the rest of the kernel.
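
FWIW, with the prototype in mmu_decl.h, pageattr.c should only need something
like the include below (sketch, untested), and the IS_ENABLED() version from
the original patch can then stay as posted, without the #ifdef workaround:

/* arch/powerpc/mm/pageattr.c -- sketch only */
#include <mm/mmu_decl.h>	/* brings the hash__kernel_map_pages() prototype into scope */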

cheers