2022-11-29 19:49:16

by Peter Xu

Subject: [PATCH 09/10] mm/hugetlb: Make page_vma_mapped_walk() safe to pmd unshare

Since page_vma_mapped_walk() walks the pgtable, it needs the vma lock
to make sure the pgtable page will not be freed concurrently.

Signed-off-by: Peter Xu <[email protected]>
---
include/linux/rmap.h | 4 ++++
mm/page_vma_mapped.c | 5 ++++-
2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index bd3504d11b15..a50d18bb86aa 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -13,6 +13,7 @@
#include <linux/highmem.h>
#include <linux/pagemap.h>
#include <linux/memremap.h>
+#include <linux/hugetlb.h>

/*
* The anon_vma heads a list of private "related" vmas, to scan if
@@ -408,6 +409,9 @@ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
pte_unmap(pvmw->pte);
if (pvmw->ptl)
spin_unlock(pvmw->ptl);
+ /* This needs to be after unlock of the spinlock */
+ if (is_vm_hugetlb_page(pvmw->vma))
+ hugetlb_vma_unlock_read(pvmw->vma);
}

bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 93e13fc17d3c..f94ec78b54ff 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -169,10 +169,13 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
if (pvmw->pte)
return not_found(pvmw);

+ hugetlb_vma_lock_read(vma);
/* when pud is not present, pte will be NULL */
pvmw->pte = huge_pte_offset(mm, pvmw->address, size);
- if (!pvmw->pte)
+ if (!pvmw->pte) {
+ hugetlb_vma_unlock_read(vma);
return false;
+ }

pvmw->ptl = huge_pte_lock(hstate, mm, pvmw->pte);
if (!check_pte(pvmw))
--
2.37.3
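
For context on how the new lock nests with the existing pvmw locking, a
minimal sketch (not part of the patch) of the caller-visible contract for
hugetlb VMAs: the vma read lock is taken inside page_vma_mapped_walk() and
released either on the walk's own failure paths or in
page_vma_mapped_walk_done().  rmap_one_example() and stop_early are
illustrative placeholders, not kernel code.

/* Sketch only: simplified caller pattern with this patch applied. */
static bool rmap_one_example(struct vm_area_struct *vma, unsigned long address)
{
	struct page_vma_mapped_walk pvmw = {
		.vma = vma,
		.address = address,
		/* remaining fields elided */
	};
	bool stop_early = false;	/* stands in for a real bail-out condition */

	while (page_vma_mapped_walk(&pvmw)) {
		/*
		 * For a hugetlb VMA the walk has taken
		 * hugetlb_vma_lock_read(vma), so a shared PMD cannot be
		 * unshared (and its pgtable page freed) while we look at
		 * *pvmw.pte under pvmw.ptl here.
		 */
		if (stop_early) {
			/* Drops pvmw.ptl, then the hugetlb vma read lock. */
			page_vma_mapped_walk_done(&pvmw);
			return false;
		}
	}
	/* If the walk terminates by itself, both locks are already dropped. */
	return true;
}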


2022-11-30 16:56:30

by David Hildenbrand

Subject: Re: [PATCH 09/10] mm/hugetlb: Make page_vma_mapped_walk() safe to pmd unshare

On 29.11.22 20:35, Peter Xu wrote:
> Since page_vma_mapped_walk() walks the pgtable, it needs the vma lock
> to make sure the pgtable page will not be freed concurrently.
>
> Signed-off-by: Peter Xu <[email protected]>
> ---
> include/linux/rmap.h | 4 ++++
> mm/page_vma_mapped.c | 5 ++++-
> 2 files changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index bd3504d11b15..a50d18bb86aa 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -13,6 +13,7 @@
> #include <linux/highmem.h>
> #include <linux/pagemap.h>
> #include <linux/memremap.h>
> +#include <linux/hugetlb.h>
>
> /*
> * The anon_vma heads a list of private "related" vmas, to scan if
> @@ -408,6 +409,9 @@ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
> pte_unmap(pvmw->pte);
> if (pvmw->ptl)
> spin_unlock(pvmw->ptl);
> + /* This needs to be after unlock of the spinlock */
> + if (is_vm_hugetlb_page(pvmw->vma))
> + hugetlb_vma_unlock_read(pvmw->vma);
> }
>
> bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index 93e13fc17d3c..f94ec78b54ff 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -169,10 +169,13 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
> if (pvmw->pte)
> return not_found(pvmw);
>
> + hugetlb_vma_lock_read(vma);
> /* when pud is not present, pte will be NULL */
> pvmw->pte = huge_pte_offset(mm, pvmw->address, size);
> - if (!pvmw->pte)
> + if (!pvmw->pte) {
> + hugetlb_vma_unlock_read(vma);
> return false;
> + }
>
> pvmw->ptl = huge_pte_lock(hstate, mm, pvmw->pte);
> if (!check_pte(pvmw))

Looking at code like mm/damon/paddr.c:__damon_pa_mkold() and reading
the doc of page_vma_mapped_walk(), this might be broken.

Can't we get page_vma_mapped_walk() called multiple times? Wouldn't we
have to remember that we already took the lock to not lock twice, and to
see if we really have to unlock in page_vma_mapped_walk_done()?

--
Thanks,

David / dhildenb

2022-11-30 16:58:40

by David Hildenbrand

Subject: Re: [PATCH 09/10] mm/hugetlb: Make page_vma_mapped_walk() safe to pmd unshare

On 30.11.22 17:32, Peter Xu wrote:
> On Wed, Nov 30, 2022 at 05:18:45PM +0100, David Hildenbrand wrote:
>> On 29.11.22 20:35, Peter Xu wrote:
>>> Since page_vma_mapped_walk() walks the pgtable, it needs the vma lock
>>> to make sure the pgtable page will not be freed concurrently.
>>>
>>> Signed-off-by: Peter Xu <[email protected]>
>>> [...]
>>
>> Looking at code like mm/damon/paddr.c:__damon_pa_mkold() and reading the
>> doc of page_vma_mapped_walk(), this might be broken.
>>
>> Can't we get page_vma_mapped_walk() called multiple times?
>
> Yes it normally can, but not for hugetlbfs? Feel free to check:
>
> if (unlikely(is_vm_hugetlb_page(vma))) {
> ...
> /* The only possible mapping was handled on last iteration */
> if (pvmw->pte)
> return not_found(pvmw);
> }

Ah, I see, thanks.

Acked-by: David Hildenbrand <[email protected]>


--
Thanks,

David / dhildenb

2022-11-30 17:23:43

by Peter Xu

Subject: Re: [PATCH 09/10] mm/hugetlb: Make page_vma_mapped_walk() safe to pmd unshare

On Wed, Nov 30, 2022 at 05:18:45PM +0100, David Hildenbrand wrote:
> On 29.11.22 20:35, Peter Xu wrote:
> > Since page_vma_mapped_walk() walks the pgtable, it needs the vma lock
> > to make sure the pgtable page will not be freed concurrently.
> >
> > Signed-off-by: Peter Xu <[email protected]>
> > [...]
>
> Looking at code like mm/damon/paddr.c:__damon_pa_mkold() and reading the
> doc of page_vma_mapped_walk(), this might be broken.
>
> Can't we get page_vma_mapped_walk() called multiple times?

Yes it normally can, but not for hugetlbfs? Feel free to check:

if (unlikely(is_vm_hugetlb_page(vma))) {
...
/* The only possible mapping was handled on last iteration */
if (pvmw->pte)
return not_found(pvmw);
}

> Wouldn't we have to remember that we already took the lock to not lock
> twice, and to see if we really have to unlock in
> page_vma_mapped_walk_done()?

--
Peter Xu
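
To make the ordering concrete, here is a condensed view of the hugetlb
branch of page_vma_mapped_walk() with this patch applied (the non-hugetlb
path is trimmed; mm is pvmw->vma->vm_mm).  On a second call for the same
walk pvmw->pte is already set, so the function bails out through
not_found() before hugetlb_vma_lock_read() can be reached again, i.e. the
lock is taken at most once per walk:

	if (unlikely(is_vm_hugetlb_page(vma))) {
		struct hstate *hstate = hstate_vma(vma);
		unsigned long size = huge_page_size(hstate);

		/* The only possible mapping was handled on last iteration */
		if (pvmw->pte)
			return not_found(pvmw);	/* unlocks via page_vma_mapped_walk_done() */

		hugetlb_vma_lock_read(vma);	/* reached at most once per walk */
		/* when pud is not present, pte will be NULL */
		pvmw->pte = huge_pte_offset(mm, pvmw->address, size);
		if (!pvmw->pte) {
			hugetlb_vma_unlock_read(vma);
			return false;
		}

		pvmw->ptl = huge_pte_lock(hstate, mm, pvmw->pte);
		if (!check_pte(pvmw))
			return not_found(pvmw);	/* also drops the vma read lock */
		return true;
	}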

2022-12-06 00:08:20

by Mike Kravetz

Subject: Re: [PATCH 09/10] mm/hugetlb: Make page_vma_mapped_walk() safe to pmd unshare

On 11/29/22 14:35, Peter Xu wrote:
> Since page_vma_mapped_walk() walks the pgtable, it needs the vma lock
> to make sure the pgtable page will not be freed concurrently.
>
> Signed-off-by: Peter Xu <[email protected]>
> ---
> include/linux/rmap.h | 4 ++++
> mm/page_vma_mapped.c | 5 ++++-
> 2 files changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index bd3504d11b15..a50d18bb86aa 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -13,6 +13,7 @@
> #include <linux/highmem.h>
> #include <linux/pagemap.h>
> #include <linux/memremap.h>
> +#include <linux/hugetlb.h>
>
> /*
> * The anon_vma heads a list of private "related" vmas, to scan if
> @@ -408,6 +409,9 @@ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
> pte_unmap(pvmw->pte);
> if (pvmw->ptl)
> spin_unlock(pvmw->ptl);
> + /* This needs to be after unlock of the spinlock */
> + if (is_vm_hugetlb_page(pvmw->vma))
> + hugetlb_vma_unlock_read(pvmw->vma);
> }
>
> bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index 93e13fc17d3c..f94ec78b54ff 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -169,10 +169,13 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
> if (pvmw->pte)
> return not_found(pvmw);
>
> + hugetlb_vma_lock_read(vma);
> /* when pud is not present, pte will be NULL */
> pvmw->pte = huge_pte_offset(mm, pvmw->address, size);
> - if (!pvmw->pte)
> + if (!pvmw->pte) {
> + hugetlb_vma_unlock_read(vma);
> return false;
> + }
>
> pvmw->ptl = huge_pte_lock(hstate, mm, pvmw->pte);
> if (!check_pte(pvmw))

I think this is going to cause try_to_unmap() to always fail for hugetlb
shared pages. See try_to_unmap_one:

while (page_vma_mapped_walk(&pvmw)) {
...
if (folio_test_hugetlb(folio)) {
...
/*
* To call huge_pmd_unshare, i_mmap_rwsem must be
* held in write mode. Caller needs to explicitly
* do this outside rmap routines.
*
* We also must hold hugetlb vma_lock in write mode.
* Lock order dictates acquiring vma_lock BEFORE
* i_mmap_rwsem. We can only try lock here and fail
* if unsuccessful.
*/
if (!anon) {
VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
if (!hugetlb_vma_trylock_write(vma)) {
page_vma_mapped_walk_done(&pvmw);
ret = false;
}


Can not think of a great solution right now.
--
Mike Kravetz
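
The failure mode described above comes down to rw_semaphore semantics:
page_vma_mapped_walk() would now hold the hugetlb vma lock for read, and
hugetlb_vma_trylock_write() on the same lock cannot succeed while any
reader (here, the current task itself) still holds it.  A minimal
illustration with placeholder names (example_vma_lock stands in for the
per-vma lock's rw_sema):

	static DECLARE_RWSEM(example_vma_lock);

	static void example(void)
	{
		/* What page_vma_mapped_walk() does via hugetlb_vma_lock_read(). */
		down_read(&example_vma_lock);

		/* What try_to_unmap_one() does via hugetlb_vma_trylock_write(). */
		if (!down_write_trylock(&example_vma_lock)) {
			/*
			 * Always taken: a write trylock fails while any reader
			 * holds the rwsem, so unmapping a shared hugetlb page
			 * would bail out every time.
			 */
			pr_info("trylock_write failed\n");
		}

		/* What page_vma_mapped_walk_done() does via hugetlb_vma_unlock_read(). */
		up_read(&example_vma_lock);
	}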

2022-12-06 17:18:17

by Mike Kravetz

Subject: Re: [PATCH 09/10] mm/hugetlb: Make page_vma_mapped_walk() safe to pmd unshare

On 12/05/22 15:52, Mike Kravetz wrote:
> On 11/29/22 14:35, Peter Xu wrote:
> > Since page_vma_mapped_walk() walks the pgtable, it needs the vma lock
> > to make sure the pgtable page will not be freed concurrently.
> >
> > Signed-off-by: Peter Xu <[email protected]>
> > [...]
>
> I think this is going to cause try_to_unmap() to always fail for hugetlb
> shared pages. See try_to_unmap_one:
>
> while (page_vma_mapped_walk(&pvmw)) {
> ...
> if (folio_test_hugetlb(folio)) {
> ...
> /*
> * To call huge_pmd_unshare, i_mmap_rwsem must be
> * held in write mode. Caller needs to explicitly
> * do this outside rmap routines.
> *
> * We also must hold hugetlb vma_lock in write mode.
> * Lock order dictates acquiring vma_lock BEFORE
> * i_mmap_rwsem. We can only try lock here and fail
> * if unsuccessful.
> */
> if (!anon) {
> VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
> if (!hugetlb_vma_trylock_write(vma)) {
> page_vma_mapped_walk_done(&pvmw);
> ret = false;
> }
>
>
> Can not think of a great solution right now.

Thought of this last night ...

Perhaps we do not need vma_lock in this code path (not sure about all
page_vma_mapped_walk calls). Why? We already hold i_mmap_rwsem.
--
Mike Kravetz
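
For the rmap paths that is indeed the case: i_mmap_rwsem is taken by the
walker before rmap_one() (and therefore page_vma_mapped_walk()) ever runs,
and the TTU_RMAP_LOCKED callers enter with it already held in write mode.
A condensed view of rmap_walk_file() in mm/rmap.c, with details elided as
"...":

	static void rmap_walk_file(struct folio *folio,
				   struct rmap_walk_control *rwc, bool locked)
	{
		struct address_space *mapping = folio_mapping(folio);
		pgoff_t pgoff_start, pgoff_end;
		struct vm_area_struct *vma;
		...
		if (!locked)
			i_mmap_lock_read(mapping);	/* i_mmap_rwsem read-held from here */

		vma_interval_tree_foreach(vma, &mapping->i_mmap,
					  pgoff_start, pgoff_end) {
			unsigned long address = vma_address(&folio->page, vma);
			...
			/* rmap_one() is where page_vma_mapped_walk() ends up running. */
			if (!rwc->rmap_one(folio, vma, address, rwc->arg))
				goto done;
			...
		}
	done:
		if (!locked)
			i_mmap_unlock_read(mapping);
	}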

2022-12-06 17:45:34

by Peter Xu

Subject: Re: [PATCH 09/10] mm/hugetlb: Make page_vma_mapped_walk() safe to pmd unshare

On Tue, Dec 06, 2022 at 09:10:00AM -0800, Mike Kravetz wrote:
> On 12/05/22 15:52, Mike Kravetz wrote:
> > On 11/29/22 14:35, Peter Xu wrote:
> > > Since page_vma_mapped_walk() walks the pgtable, it needs the vma lock
> > > to make sure the pgtable page will not be freed concurrently.
> > >
> > > Signed-off-by: Peter Xu <[email protected]>
> > > [...]
> >
> > I think this is going to cause try_to_unmap() to always fail for hugetlb
> > shared pages. See try_to_unmap_one:
> >
> > while (page_vma_mapped_walk(&pvmw)) {
> > ...
> > if (folio_test_hugetlb(folio)) {
> > ...
> > /*
> > * To call huge_pmd_unshare, i_mmap_rwsem must be
> > * held in write mode. Caller needs to explicitly
> > * do this outside rmap routines.
> > *
> > * We also must hold hugetlb vma_lock in write mode.
> > * Lock order dictates acquiring vma_lock BEFORE
> > * i_mmap_rwsem. We can only try lock here and fail
> > * if unsuccessful.
> > */
> > if (!anon) {
> > VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
> > if (!hugetlb_vma_trylock_write(vma)) {
> > page_vma_mapped_walk_done(&pvmw);
> > ret = false;
> > }
> >
> >
> > Can not think of a great solution right now.
>
> Thought of this last night ...
>
> Perhaps we do not need vma_lock in this code path (not sure about all
> page_vma_mapped_walk calls). Why? We already hold i_mmap_rwsem.

Exactly. The only concern is when it's not in an rmap walk.

I'm actually preparing something that adds a new flag to PVMW, like:

#define PVMW_HUGETLB_NEEDS_LOCK (1 << 2)

But maybe we don't need that at all, since on a closer look the only
outliers not doing an rmap walk are:

__replace_page
write_protect_page

I'm pretty sure ksm doesn't have hugetlb involved, and the other one is
uprobe (uprobe_write_opcode), which I think is the same. If that's true, we
can simply drop this patch. Then we also have hugetlb_walk() and the lock
checks there guarantee that we're safe anyway.

Potentially we can document this fact; I also attached a comment patch for
that, to be appended to the end of the patchset.

Mike, let me know what you think.

Andrew, if this patch is to be dropped then the last patch may not apply
cleanly. Let me know if you want a full repost of the series.

Thanks,

--
Peter Xu
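
The "lock checks" refer to the hugetlb_walk() helper mentioned above.
Roughly, and only as an approximation of its shape (configuration guards
and the check that the mapping is sharable are omitted, and details may
differ from the real helper), it warns when a walker holds neither of the
two locks that prevent pmd unsharing:

	static inline pte_t *hugetlb_walk(struct vm_area_struct *vma,
					  unsigned long addr, unsigned long sz)
	{
		/* For sharable hugetlb mappings the vma lock sits in vm_private_data. */
		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;

		/*
		 * Warn if the caller holds neither the hugetlb vma lock nor
		 * i_mmap_rwsem: such a walker could race with pmd unsharing
		 * freeing the pgtable page under it.
		 */
		WARN_ON_ONCE(!lockdep_is_held(&vma_lock->rw_sema) &&
			     !lockdep_is_held(&vma->vm_file->f_mapping->i_mmap_rwsem));

		return huge_pte_offset(vma->vm_mm, addr, sz);
	}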

2022-12-06 18:13:10

by Peter Xu

Subject: Re: [PATCH 09/10] mm/hugetlb: Make page_vma_mapped_walk() safe to pmd unshare

On Tue, Dec 06, 2022 at 12:39:53PM -0500, Peter Xu wrote:
> On Tue, Dec 06, 2022 at 09:10:00AM -0800, Mike Kravetz wrote:
> > On 12/05/22 15:52, Mike Kravetz wrote:
> > > On 11/29/22 14:35, Peter Xu wrote:
> > > > Since page_vma_mapped_walk() walks the pgtable, it needs the vma lock
> > > > to make sure the pgtable page will not be freed concurrently.
> > > >
> > > > Signed-off-by: Peter Xu <[email protected]>
> > > > [...]
> > >
> > > I think this is going to cause try_to_unmap() to always fail for hugetlb
> > > shared pages. See try_to_unmap_one:
> > >
> > > while (page_vma_mapped_walk(&pvmw)) {
> > > ...
> > > if (folio_test_hugetlb(folio)) {
> > > ...
> > > /*
> > > * To call huge_pmd_unshare, i_mmap_rwsem must be
> > > * held in write mode. Caller needs to explicitly
> > > * do this outside rmap routines.
> > > *
> > > * We also must hold hugetlb vma_lock in write mode.
> > > * Lock order dictates acquiring vma_lock BEFORE
> > > * i_mmap_rwsem. We can only try lock here and fail
> > > * if unsuccessful.
> > > */
> > > if (!anon) {
> > > VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
> > > if (!hugetlb_vma_trylock_write(vma)) {
> > > page_vma_mapped_walk_done(&pvmw);
> > > ret = false;
> > > }
> > >
> > >
> > > Can not think of a great solution right now.
> >
> > Thought of this last night ...
> >
> > Perhaps we do not need vma_lock in this code path (not sure about all
> > page_vma_mapped_walk calls). Why? We already hold i_mmap_rwsem.
>
> Exactly. The only concern is when it's not in an rmap walk.
>
> I'm actually preparing something that adds a new flag to PVMW, like:
>
> #define PVMW_HUGETLB_NEEDS_LOCK (1 << 2)
>
> But maybe we don't need that at all, since on a closer look the only
> outliers not doing an rmap walk are:
>
> __replace_page
> write_protect_page
>
> I'm pretty sure ksm doesn't have hugetlb involved, and the other one is
> uprobe (uprobe_write_opcode), which I think is the same. If that's true, we
> can simply drop this patch. Then we also have hugetlb_walk() and the lock
> checks there guarantee that we're safe anyway.
>
> Potentially we can document this fact; I also attached a comment patch for
> that, to be appended to the end of the patchset.
>
> Mike, let me know what you think.
>
> Andrew, if this patch is to be dropped then the last patch may not apply
> cleanly. Let me know if you want a full repost of the series.

The document patch that can be appended to the end of this series is
attached. I referenced hugetlb_walk(), so it needs to be the last patch.

--
Peter Xu


Attachments:
0001-mm-hugetlb-Document-why-page_vma_mapped_walk-is-safe.patch (1.41 kB)

2022-12-06 20:27:16

by Mike Kravetz

Subject: Re: [PATCH 09/10] mm/hugetlb: Make page_vma_mapped_walk() safe to pmd unshare

On 12/06/22 12:43, Peter Xu wrote:
> On Tue, Dec 06, 2022 at 12:39:53PM -0500, Peter Xu wrote:
> > On Tue, Dec 06, 2022 at 09:10:00AM -0800, Mike Kravetz wrote:
> > > On 12/05/22 15:52, Mike Kravetz wrote:
> > > > On 11/29/22 14:35, Peter Xu wrote:
> > > > > Since page_vma_mapped_walk() walks the pgtable, it needs the vma lock
> > > > > to make sure the pgtable page will not be freed concurrently.
> > > > >
> > > > > Signed-off-by: Peter Xu <[email protected]>
> > > > > [...]
> > > >
> > > > I think this is going to cause try_to_unmap() to always fail for hugetlb
> > > > shared pages. See try_to_unmap_one:
> > > >
> > > > while (page_vma_mapped_walk(&pvmw)) {
> > > > ...
> > > > if (folio_test_hugetlb(folio)) {
> > > > ...
> > > > /*
> > > > * To call huge_pmd_unshare, i_mmap_rwsem must be
> > > > * held in write mode. Caller needs to explicitly
> > > > * do this outside rmap routines.
> > > > *
> > > > * We also must hold hugetlb vma_lock in write mode.
> > > > * Lock order dictates acquiring vma_lock BEFORE
> > > > * i_mmap_rwsem. We can only try lock here and fail
> > > > * if unsuccessful.
> > > > */
> > > > if (!anon) {
> > > > VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
> > > > if (!hugetlb_vma_trylock_write(vma)) {
> > > > page_vma_mapped_walk_done(&pvmw);
> > > > ret = false;
> > > > }
> > > >
> > > >
> > > > Can not think of a great solution right now.
> > >
> > > Thought of this last night ...
> > >
> > > Perhaps we do not need vma_lock in this code path (not sure about all
> > > page_vma_mapped_walk calls). Why? We already hold i_mmap_rwsem.
> >
> > Exactly. The only concern is when it's not in an rmap walk.
> >
> > I'm actually preparing something that adds a new flag to PVMW, like:
> >
> > #define PVMW_HUGETLB_NEEDS_LOCK (1 << 2)
> >
> > But maybe we don't need that at all, since on a closer look the only
> > outliers not doing an rmap walk are:
> >
> > __replace_page
> > write_protect_page
> >
> > I'm pretty sure ksm doesn't have hugetlb involved, and the other one is
> > uprobe (uprobe_write_opcode), which I think is the same. If that's true, we
> > can simply drop this patch. Then we also have hugetlb_walk() and the lock
> > checks there guarantee that we're safe anyway.
> >
> > Potentially we can document this fact; I also attached a comment patch for
> > that, to be appended to the end of the patchset.
> >
> > Mike, let me know what you think.
> >
> > Andrew, if this patch is to be dropped then the last patch may not apply
> > cleanly. Let me know if you want a full repost of the series.
>
> The document patch that can be appended to the end of this series is
> attached. I referenced hugetlb_walk(), so it needs to be the last patch.
>
> --
> Peter Xu

Agree with dropping this patch and adding the document patch below.

Reviewed-by: Mike Kravetz <[email protected]>

Also, happy we have the warnings in place to catch incorrect locking.
--
Mike Kravetz

> From 754c2180804e9e86accf131573cbd956b8c62829 Mon Sep 17 00:00:00 2001
> From: Peter Xu <[email protected]>
> Date: Tue, 6 Dec 2022 12:36:04 -0500
> Subject: [PATCH] mm/hugetlb: Document why page_vma_mapped_walk() is safe to
> walk
> Content-type: text/plain
>
> Taking vma lock here is not needed for now because all potential hugetlb
> walkers here should have i_mmap_rwsem held. Document the fact.
>
> Signed-off-by: Peter Xu <[email protected]>
> ---
> mm/page_vma_mapped.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index e97b2e23bd28..2e59a0419d22 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -168,8 +168,14 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
> /* The only possible mapping was handled on last iteration */
> if (pvmw->pte)
> return not_found(pvmw);
> -
> - /* when pud is not present, pte will be NULL */
> + /*
> + * NOTE: we don't need explicit lock here to walk the
> + * hugetlb pgtable because either (1) potential callers of
> + * hugetlb pvmw currently holds i_mmap_rwsem, or (2) the
> + * caller will not walk a hugetlb vma (e.g. ksm or uprobe).
> + * When one day this rule breaks, one will get a warning
> + * in hugetlb_walk(), and then we'll figure out what to do.
> + */
> pvmw->pte = hugetlb_walk(vma, pvmw->address, size);
> if (!pvmw->pte)
> return false;
> --
> 2.37.3
>