> On Jan 30, 2021, at 4:39 PM, Andy Lutomirski <[email protected]> wrote:
>
> On Sat, Jan 30, 2021 at 4:16 PM Nadav Amit <[email protected]> wrote:
>> From: Nadav Amit <[email protected]>
>>
>> There are currently (at least?) 5 different TLB batching schemes in the
>> kernel:
>>
>> 1. Using mmu_gather (e.g., zap_page_range()).
>>
>> 2. Using {inc|dec}_tlb_flush_pending() to inform other threads of an
>> ongoing deferred TLB flush, eventually flushing the entire range
>> (e.g., change_protection_range()).
>>
>> 3. arch_{enter|leave}_lazy_mmu_mode() for sparc and powerpc (and Xen?).
>>
>> 4. Batching per-table flushes (move_ptes()).
>>
>> 5. Setting a flag to indicate that a deferred TLB flush operation is
>> pending, and flushing later (try_to_unmap_one() on x86).
>
> Are you referring to the arch_tlbbatch_add_mm/flush mechanism?
Yes.