2023-09-16 04:34:57

by Mike Kravetz

Subject: Re: [PATCH v3 00/12] Batch hugetlb vmemmap modification operations

On 09/15/23 15:15, Mike Kravetz wrote:
> The following series attempts to reduce the amount of time spent in TLB
> flushing. The idea is to batch the vmemmap modification operations for
> multiple hugetlb pages. Instead of doing one or two TLB flushes for each
> page, we do two TLB flushes for each batch of pages: one flush after
> splitting pages mapped at the PMD level, and another after remapping the
> vmemmap associated with all hugetlb pages. Results of such batching are
> as follows:
>
> Joao Martins (2):
> hugetlb: batch PMD split for bulk vmemmap dedup
> hugetlb: batch TLB flushes when freeing vmemmap
>
> Johannes Weiner (1):
> mm: page_alloc: remove pcppage migratetype caching fix
>
> Matthew Wilcox (Oracle) (3):
> hugetlb: Use a folio in free_hpage_workfn()
> hugetlb: Remove a few calls to page_folio()
> hugetlb: Convert remove_pool_huge_page() to
> remove_pool_hugetlb_folio()
>
> Mike Kravetz (6):
> hugetlb: optimize update_and_free_pages_bulk to avoid lock cycles
> hugetlb: restructure pool allocations
> hugetlb: perform vmemmap optimization on a list of pages
> hugetlb: perform vmemmap restoration on a list of pages
> hugetlb: batch freeing of vmemmap pages
> hugetlb: batch TLB flushes when restoring vmemmap
>
> mm/hugetlb.c | 288 ++++++++++++++++++++++++++++++++-----------
> mm/hugetlb_vmemmap.c | 255 ++++++++++++++++++++++++++++++++------
> mm/hugetlb_vmemmap.h | 16 +++
> mm/page_alloc.c | 3 -
> 4 files changed, 452 insertions(+), 110 deletions(-)
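
For reference, a rough sketch of the batching idea described in the cover
letter above. The helper names (split_vmemmap_pmd*, remap_vmemmap*) are
illustrative only and are not the actual interfaces added by this series;
list_for_each_entry() and flush_tlb_all() are the usual kernel primitives.

	/*
	 * Before: each hugetlb page pays for its own TLB flushes.
	 * (split_vmemmap_pmd()/remap_vmemmap() are hypothetical helpers.)
	 */
	list_for_each_entry(folio, folio_list, lru) {
		split_vmemmap_pmd(folio);	/* flushes TLB */
		remap_vmemmap(folio);		/* flushes TLB again */
	}

	/*
	 * After: perform the modifications for the whole batch first,
	 * then flush only twice, once per phase.
	 */
	list_for_each_entry(folio, folio_list, lru)
		split_vmemmap_pmd_no_flush(folio);
	flush_tlb_all();			/* flush #1: after PMD splits */

	list_for_each_entry(folio, folio_list, lru)
		remap_vmemmap_no_flush(folio);
	flush_tlb_all();			/* flush #2: after remapping */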

Just realized that I should have based this on top of, or at least taken
into account, this series as well:
https://lore.kernel.org/linux-mm/[email protected]/

Sorry!
The changes should be minimal, but both series modify the same code.
--
Mike Kravetz