2022-11-16 02:19:21

by Nadav Amit

Subject: Re: [PATCH v6 2/2] arm64: support batched/deferred tlb shootdown during page reclamation

On Nov 15, 2022, at 5:50 PM, Yicong Yang <[email protected]> wrote:

> On 2022/11/16 7:38, Nadav Amit wrote:
>> On Nov 14, 2022, at 7:14 PM, Yicong Yang <[email protected]> wrote:
>>
>>> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
>>> index 8a497d902c16..5bd78ae55cd4 100644
>>> --- a/arch/x86/include/asm/tlbflush.h
>>> +++ b/arch/x86/include/asm/tlbflush.h
>>> @@ -264,7 +264,8 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
>>> }
>>>
>>> static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
>>> - struct mm_struct *mm)
>>> + struct mm_struct *mm,
>>> + unsigned long uaddr)
>>
>> Logic-wise it looks fine. I notice this is "v6", so this should not be
>> blocking, but I would note that the name "arch_tlbbatch_add_mm()" does not
>> make much sense once the function also takes an address.
>
> OK, the "add_mm" name would still apply to x86 since the address is not used
> there, but not to arm64.
>
>> It could’ve been something like arch_set_tlb_ubc_flush_pending() but that’s
>> too long. I’m not very good with naming, but the current name is not great.
>
> What about arch_tlbbatch_add_pending()? Considering that x86 is pending the flush
> operation while arm64 is pending the synchronization operation,
> arch_tlbbatch_add_pending() should make sense for both.

Sounds reasonable. Thanks.
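
For reference, a minimal sketch of what the renamed helper could look like on
each architecture. The x86 body mirrors the existing arch_tlbbatch_add_mm()
(the new uaddr argument is simply unused there), while the arm64 body and the
__flush_tlb_page_nosync() helper are assumptions based on this series, not
final code:

/*
 * x86 (sketch): the address is ignored; we only bump the mm's TLB generation
 * and record its CPUs so a later batched flush covers them.
 */
static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
					     struct mm_struct *mm,
					     unsigned long uaddr)
{
	inc_mm_tlb_gen(mm);
	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
}

/*
 * arm64 (sketch, assumed from this series): the address is what gets queued;
 * a per-page TLBI is issued without waiting, and the DSB synchronization is
 * deferred to the batched flush/sync path.
 */
static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
					     struct mm_struct *mm,
					     unsigned long uaddr)
{
	__flush_tlb_page_nosync(mm, uaddr);
}

Either way the name no longer implies "add an mm", which is why
arch_tlbbatch_add_pending() reads sensibly for both architectures.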



2022-11-16 03:50:25

by Anshuman Khandual

Subject: Re: [PATCH v6 2/2] arm64: support batched/deferred tlb shootdown during page reclamation



On 11/16/22 07:26, Nadav Amit wrote:
> On Nov 15, 2022, at 5:50 PM, Yicong Yang <[email protected]> wrote:
>
>> On 2022/11/16 7:38, Nadav Amit wrote:
>>> On Nov 14, 2022, at 7:14 PM, Yicong Yang <[email protected]> wrote:
>>>
>>>> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
>>>> index 8a497d902c16..5bd78ae55cd4 100644
>>>> --- a/arch/x86/include/asm/tlbflush.h
>>>> +++ b/arch/x86/include/asm/tlbflush.h
>>>> @@ -264,7 +264,8 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
>>>> }
>>>>
>>>> static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
>>>> - struct mm_struct *mm)
>>>> + struct mm_struct *mm,
>>>> + unsigned long uaddr)
>>>
>>> Logic-wise it looks fine. I notice this is "v6", so this should not be
>>> blocking, but I would note that the name "arch_tlbbatch_add_mm()" does not
>>> make much sense once the function also takes an address.
>>
>> OK, the "add_mm" name would still apply to x86 since the address is not used
>> there, but not to arm64.
>>
>>> It could’ve been something like arch_set_tlb_ubc_flush_pending() but that’s
>>> too long. I’m not very good with naming, but the current name is not great.
>>
>> What about arch_tlbbatch_add_pending()? Considering that x86 is pending the flush
>> operation while arm64 is pending the synchronization operation,
>> arch_tlbbatch_add_pending() should make sense for both.
>
> Sounds reasonable. Thanks.

+1, agreed.