2023-07-31 08:04:57

by Kefeng Wang

Subject: [PATCH 1/4] mm: hugetlb: use flush_hugetlb_tlb_range() in move_hugetlb_page_tables()

Archs may need to do special things when flushing hugepage tlb,
so use the more applicable flush_hugetlb_tlb_range() instead of
flush_tlb_range().

Fixes: 550a7d60bd5e ("mm, hugepages: add mremap() support for hugepage backed vma")
Signed-off-by: Kefeng Wang <[email protected]>
---
mm/hugetlb.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 64a3239b6407..ac876bfba340 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5281,9 +5281,9 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
}

if (shared_pmd)
- flush_tlb_range(vma, range.start, range.end);
+ flush_hugetlb_tlb_range(vma, range.start, range.end);
else
- flush_tlb_range(vma, old_end - len, old_end);
+ flush_hugetlb_tlb_range(vma, old_end - len, old_end);
mmu_notifier_invalidate_range_end(&range);
i_mmap_unlock_write(mapping);
hugetlb_vma_unlock_write(vma);
--
2.41.0



2023-08-01 00:20:09

by Mike Kravetz

Subject: Re: [PATCH 1/4] mm: hugetlb: use flush_hugetlb_tlb_range() in move_hugetlb_page_tables()

On 07/31/23 15:48, Kefeng Wang wrote:
> Archs may need to do special things when flushing hugepage tlb,
> so use the more applicable flush_hugetlb_tlb_range() instead of
> flush_tlb_range().
>
> Fixes: 550a7d60bd5e ("mm, hugepages: add mremap() support for hugepage backed vma")
> Signed-off-by: Kefeng Wang <[email protected]>

Thanks!

Reviewed-by: Mike Kravetz <[email protected]>

Although, I missed this in 550a7d60bd5e :(

Looks like only powerpc provides an arch specific flush_hugetlb_tlb_range
today.
--
Mike Kravetz

> ---
> mm/hugetlb.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 64a3239b6407..ac876bfba340 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5281,9 +5281,9 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
> }
>
> if (shared_pmd)
> - flush_tlb_range(vma, range.start, range.end);
> + flush_hugetlb_tlb_range(vma, range.start, range.end);
> else
> - flush_tlb_range(vma, old_end - len, old_end);
> + flush_hugetlb_tlb_range(vma, old_end - len, old_end);
> mmu_notifier_invalidate_range_end(&range);
> i_mmap_unlock_write(mapping);
> hugetlb_vma_unlock_write(vma);
> --
> 2.41.0
>

2023-08-01 03:10:30

by Muchun Song

Subject: Re: [PATCH 1/4] mm: hugetlb: use flush_hugetlb_tlb_range() in move_hugetlb_page_tables()



> On Jul 31, 2023, at 15:48, Kefeng Wang <[email protected]> wrote:
>
> Archs may need to do special things when flushing hugepage tlb,
> so use the more applicable flush_hugetlb_tlb_range() instead of
> flush_tlb_range().
>
> Fixes: 550a7d60bd5e ("mm, hugepages: add mremap() support for hugepage backed vma")
> Signed-off-by: Kefeng Wang <[email protected]>

Acked-by: Muchun Song <[email protected]>



2023-08-03 01:39:34

by Mina Almasry

Subject: Re: [PATCH 1/4] mm: hugetlb: use flush_hugetlb_tlb_range() in move_hugetlb_page_tables()

On Mon, Jul 31, 2023 at 4:40 PM Mike Kravetz <[email protected]> wrote:
>
> On 07/31/23 15:48, Kefeng Wang wrote:
> > Archs may need to do special things when flushing hugepage tlb,
> > so use the more applicable flush_hugetlb_tlb_range() instead of
> > flush_tlb_range().
> >
> > Fixes: 550a7d60bd5e ("mm, hugepages: add mremap() support for hugepage backed vma")
> > Signed-off-by: Kefeng Wang <[email protected]>
>
> Thanks!
>
> Reviewed-by: Mike Kravetz <[email protected]>
>

Sorry for jumping in late, but given the concerns raised around HGM
and the deviation between hugetlb and the rest of MM, does it make
sense to try to make an incremental effort towards avoiding hugetlb
specialization?

In the context of this patch, I would prefer that the arch upgrade
flush_tlb_range() to handle hugetlb correctly, instead of adding more
hugetlb specific deviations, ala flush_hugetlb_tlb_range. While it's
at it, maybe replace flush_hugetlb_tlb_range() in the code with
flush_tlb_range().

Although, I don't have the expertise to judge if upgrading
flush_tlb_range() to handle hugetlb is easy or feasible at all.

> Although, I missed this in 550a7d60bd5e :(
>
> Looks like only powerpc provides an arch specific flush_hugetlb_tlb_range
> today.
> --
> Mike Kravetz
>
> > ---
> > mm/hugetlb.c | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 64a3239b6407..ac876bfba340 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -5281,9 +5281,9 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
> > }
> >
> > if (shared_pmd)
> > - flush_tlb_range(vma, range.start, range.end);
> > + flush_hugetlb_tlb_range(vma, range.start, range.end);
> > else
> > - flush_tlb_range(vma, old_end - len, old_end);
> > + flush_hugetlb_tlb_range(vma, old_end - len, old_end);
> > mmu_notifier_invalidate_range_end(&range);
> > i_mmap_unlock_write(mapping);
> > hugetlb_vma_unlock_write(vma);
> > --
> > 2.41.0
> >



--
Thanks,
Mina