On Tue, Aug 01, 2023 at 10:31:45AM +0800, Kefeng Wang wrote:
> +#define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
> +static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
> +					   unsigned long start,
> +					   unsigned long end)
> +{
> + unsigned long stride = huge_page_size(hstate_vma(vma));
> +
> + if (stride != PMD_SIZE && stride != PUD_SIZE)
> + stride = PAGE_SIZE;
> + __flush_tlb_range(vma, start, end, stride, false, 0);
We could pass a tlb_level hint here (2 for a pmd-sized stride, 1 for
pud) so the invalidation can target the right translation level.
Regarding the last_level argument to __flush_tlb_range(), I think it
needs to stay false since this function is also called on the
hugetlb_unshare_pmds() path where the pud is cleared and needs
invalidating.
That said, maybe you can rewrite it as a switch statement and call
flush_pmd_tlb_range() or flush_pud_tlb_range() (just make sure these are
defined when CONFIG_HUGETLBFS is enabled).
--
Catalin