2024-04-04 17:22:03

by Matthew Wilcox

Subject: Re: [PATCH] userfaultfd: change src_folio after ensuring it's unpinned in UFFDIO_MOVE

On Thu, Apr 04, 2024 at 10:17:26AM -0700, Lokesh Gidra wrote:
> - folio_move_anon_rmap(src_folio, dst_vma);
> - WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> -
> src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> /* Folio got pinned from under us. Put it back and fail the move. */
> if (folio_maybe_dma_pinned(src_folio)) {
> @@ -2270,6 +2267,9 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
> goto unlock_ptls;
> }
>
> + folio_move_anon_rmap(src_folio, dst_vma);
> + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> +

This use of WRITE_ONCE scares me. We hold the folio locked. Why do
we need to use WRITE_ONCE? Who's looking at folio->index without
holding the folio lock?


2024-04-04 20:08:14

by Suren Baghdasaryan

Subject: Re: [PATCH] userfaultfd: change src_folio after ensuring it's unpinned in UFFDIO_MOVE

On Thu, Apr 4, 2024 at 10:21 AM Matthew Wilcox <[email protected]> wrote:
>
> On Thu, Apr 04, 2024 at 10:17:26AM -0700, Lokesh Gidra wrote:
> > - folio_move_anon_rmap(src_folio, dst_vma);
> > - WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > -
> > src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> > /* Folio got pinned from under us. Put it back and fail the move. */
> > if (folio_maybe_dma_pinned(src_folio)) {
> > @@ -2270,6 +2267,9 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
> > goto unlock_ptls;
> > }
> >
> > + folio_move_anon_rmap(src_folio, dst_vma);
> > + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > +
>
> This use of WRITE_ONCE scares me. We hold the folio locked. Why do
> we need to use WRITE_ONCE? Who's looking at folio->index without
> holding the folio lock?

Indeed, that seems to be unnecessary here. Both here and in
move_present_pte() we are holding the folio lock while moving the page. I
must have just blindly copied that from Andrea's original patch [1].

[1] https://gitlab.com/aarcange/aa/-/commit/2aec7aea56b10438a3881a20a411aa4b1fc19e92
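
In both places the cleanup should be roughly the following (untested
sketch, not the actual follow-up patch):

         folio_move_anon_rmap(src_folio, dst_vma);
-        WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
+        /* The folio lock is held here, so a plain store is sufficient. */
+        src_folio->index = linear_page_index(dst_vma, dst_addr);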

2024-04-04 20:16:13

by David Hildenbrand

Subject: Re: [PATCH] userfaultfd: change src_folio after ensuring it's unpinned in UFFDIO_MOVE

On 04.04.24 22:07, Suren Baghdasaryan wrote:
> On Thu, Apr 4, 2024 at 10:21 AM Matthew Wilcox <[email protected]> wrote:
>>
>> On Thu, Apr 04, 2024 at 10:17:26AM -0700, Lokesh Gidra wrote:
>>> - folio_move_anon_rmap(src_folio, dst_vma);
>>> - WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
>>> -
>>> src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
>>> /* Folio got pinned from under us. Put it back and fail the move. */
>>> if (folio_maybe_dma_pinned(src_folio)) {
>>> @@ -2270,6 +2267,9 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
>>> goto unlock_ptls;
>>> }
>>>
>>> + folio_move_anon_rmap(src_folio, dst_vma);
>>> + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
>>> +
>>
>> This use of WRITE_ONCE scares me. We hold the folio locked. Why do
>> we need to use WRITE_ONCE? Who's looking at folio->index without
>> holding the folio lock?
>
> Indeed, that seems to be unnecessary here. Both here and in
> move_present_pte() we are holding the folio lock while moving the page. I
> must have just blindly copied that from Andrea's original patch [1].

Agreed, I don't think it is required for ->index. (I also don't spot any
corresponding READ_ONCE)
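
FWIW, WRITE_ONCE() on the writer side only pays off when paired with a
lockless READ_ONCE() on the reader side, something like this generic
(made-up) pair:

        /* Writer: publish ->index to readers not holding the folio lock. */
        WRITE_ONCE(folio->index, new_index);

        /* Lockless reader: READ_ONCE() avoids torn or repeated loads. */
        pgoff_t index = READ_ONCE(folio->index);

With the folio lock held on both sides, neither annotation is needed.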

--
Cheers,

David / dhildenb


2024-04-04 20:34:00

by Peter Xu

Subject: Re: [PATCH] userfaultfd: change src_folio after ensuring it's unpinned in UFFDIO_MOVE

On Thu, Apr 04, 2024 at 06:21:50PM +0100, Matthew Wilcox wrote:
> On Thu, Apr 04, 2024 at 10:17:26AM -0700, Lokesh Gidra wrote:
> > - folio_move_anon_rmap(src_folio, dst_vma);
> > - WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > -
> > src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> > /* Folio got pinned from under us. Put it back and fail the move. */
> > if (folio_maybe_dma_pinned(src_folio)) {
> > @@ -2270,6 +2267,9 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
> > goto unlock_ptls;
> > }
> >
> > + folio_move_anon_rmap(src_folio, dst_vma);
> > + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > +
>
> This use of WRITE_ONCE scares me. We hold the folio locked. Why do
> we need to use WRITE_ONCE? Who's looking at folio->index without
> holding the folio lock?

Seems true, but maybe it's suitable for a separate patch to clean up even so?
We also have the other pte level, which has the same WRITE_ONCE(), so if we
want to drop it we may want to drop both.

I just got started reading some of the new move code (Lokesh, apologies for
not being able to provide feedback previously..), but then I found one thing
unclear: the special handling of private file mappings only in the userfault
context, and I didn't know why:

lock_vma():
        if (vma) {
                /*
                 * lock_vma_under_rcu() only checks anon_vma for private
                 * anonymous mappings. But we need to ensure it is assigned in
                 * private file-backed vmas as well.
                 */
                if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma))
                        vma_end_read(vma);
                else
                        return vma;
        }

AFAIU even for generic users of lock_vma_under_rcu(), anon_vma must be
stable to be used. It seems weird to me that this check became a
userfault-specific operation.

I was surprised that it worked for private file maps on faults, so I had a
look, and it seems we postpone such a check until vmf_anon_prepare(), which
is already the CoW path. So we do check as I expected, but deferring it to
that point seems unnecessary?

Would something like the below make it much cleaner for us? I just don't
yet see why userfault is special here.

Thanks,

===8<===
diff --git a/mm/memory.c b/mm/memory.c
index 984b138f85b4..d5cf1d31c671 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3213,10 +3213,8 @@ vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
 
         if (likely(vma->anon_vma))
                 return 0;
-        if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
-                vma_end_read(vma);
-                return VM_FAULT_RETRY;
-        }
+        /* We shouldn't try a per-vma fault at all if anon_vma isn't solid */
+        WARN_ON_ONCE(vmf->flags & FAULT_FLAG_VMA_LOCK);
         if (__anon_vma_prepare(vma))
                 return VM_FAULT_OOM;
         return 0;
@@ -5817,9 +5815,9 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
          * find_mergeable_anon_vma uses adjacent vmas which are not locked.
          * This check must happen after vma_start_read(); otherwise, a
          * concurrent mremap() with MREMAP_DONTUNMAP could dissociate the VMA
-         * from its anon_vma.
+         * from its anon_vma. This applies to both anon or private file maps.
          */
-        if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma))
+        if (unlikely(!(vma->vm_flags & VM_SHARED) && !vma->anon_vma))
                 goto inval_end_read;
 
         /* Check since vm_start/vm_end might change before we lock the VMA */
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index f6267afe65d1..61f21da77dcd 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -72,17 +72,8 @@ static struct vm_area_struct *lock_vma(struct mm_struct *mm,
         struct vm_area_struct *vma;
 
         vma = lock_vma_under_rcu(mm, address);
-        if (vma) {
-                /*
-                 * lock_vma_under_rcu() only checks anon_vma for private
-                 * anonymous mappings. But we need to ensure it is assigned in
-                 * private file-backed vmas as well.
-                 */
-                if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma))
-                        vma_end_read(vma);
-                else
-                        return vma;
-        }
+        if (vma)
+                return vma;
 
         mmap_read_lock(mm);
         vma = find_vma_and_prepare_anon(mm, address);
-- 
2.44.0


--
Peter Xu


2024-04-04 20:55:51

by Suren Baghdasaryan

Subject: Re: [PATCH] userfaultfd: change src_folio after ensuring it's unpinned in UFFDIO_MOVE

On Thu, Apr 4, 2024 at 1:32 PM Peter Xu <[email protected]> wrote:
>
> On Thu, Apr 04, 2024 at 06:21:50PM +0100, Matthew Wilcox wrote:
> > On Thu, Apr 04, 2024 at 10:17:26AM -0700, Lokesh Gidra wrote:
> > > - folio_move_anon_rmap(src_folio, dst_vma);
> > > - WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > > -
> > > src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> > > /* Folio got pinned from under us. Put it back and fail the move. */
> > > if (folio_maybe_dma_pinned(src_folio)) {
> > > @@ -2270,6 +2267,9 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
> > > goto unlock_ptls;
> > > }
> > >
> > > + folio_move_anon_rmap(src_folio, dst_vma);
> > > + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > > +
> >
> > This use of WRITE_ONCE scares me. We hold the folio locked. Why do
> > we need to use WRITE_ONCE? Who's looking at folio->index without
> > holding the folio lock?
>
> Seems true, but maybe it's suitable for a separate patch to clean up even so?
> We also have the other pte level, which has the same WRITE_ONCE(), so if we
> want to drop it we may want to drop both.

Yes, I'll do that separately and will remove WRITE_ONCE() in both places.

>
> I just got started reading some of the new move code (Lokesh, apologies for
> not being able to provide feedback previously..), but then I found one thing
> unclear: the special handling of private file mappings only in the userfault
> context, and I didn't know why:
>
> lock_vma():
> if (vma) {
> /*
> * lock_vma_under_rcu() only checks anon_vma for private
> * anonymous mappings. But we need to ensure it is assigned in
> * private file-backed vmas as well.
> */
> if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma))
> vma_end_read(vma);
> else
> return vma;
> }
>
> AFAIU even for generic users of lock_vma_under_rcu(), anon_vma must be
> stable to be used. It seems weird to me that this check became a
> userfault-specific operation.
>
> I was surprised that it worked for private file maps on faults, so I had a
> look, and it seems we postpone such a check until vmf_anon_prepare(), which
> is already the CoW path. So we do check as I expected, but deferring it to
> that point seems unnecessary?
>
> Would something like the below make it much cleaner for us? I just don't
> yet see why userfault is special here.
>
> Thanks,
>
> ===8<===
> diff --git a/mm/memory.c b/mm/memory.c
> index 984b138f85b4..d5cf1d31c671 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3213,10 +3213,8 @@ vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
>
> if (likely(vma->anon_vma))
> return 0;
> - if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> - vma_end_read(vma);
> - return VM_FAULT_RETRY;
> - }
> + /* We shouldn't try a per-vma fault at all if anon_vma isn't solid */
> + WARN_ON_ONCE(vmf->flags & FAULT_FLAG_VMA_LOCK);
> if (__anon_vma_prepare(vma))
> return VM_FAULT_OOM;
> return 0;
> @@ -5817,9 +5815,9 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> * find_mergeable_anon_vma uses adjacent vmas which are not locked.
> * This check must happen after vma_start_read(); otherwise, a
> * concurrent mremap() with MREMAP_DONTUNMAP could dissociate the VMA
> - * from its anon_vma.
> + * from its anon_vma. This applies to both anon or private file maps.
> */
> - if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma))
> + if (unlikely(!(vma->vm_flags & VM_SHARED) && !vma->anon_vma))
> goto inval_end_read;
>
> /* Check since vm_start/vm_end might change before we lock the VMA */
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index f6267afe65d1..61f21da77dcd 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -72,17 +72,8 @@ static struct vm_area_struct *lock_vma(struct mm_struct *mm,
> struct vm_area_struct *vma;
>
> vma = lock_vma_under_rcu(mm, address);
> - if (vma) {
> - /*
> - * lock_vma_under_rcu() only checks anon_vma for private
> - * anonymous mappings. But we need to ensure it is assigned in
> - * private file-backed vmas as well.
> - */
> - if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma))
> - vma_end_read(vma);
> - else
> - return vma;
> - }
> + if (vma)
> + return vma;
>
> mmap_read_lock(mm);
> vma = find_vma_and_prepare_anon(mm, address);
> --
> 2.44.0
>
>
> --
> Peter Xu
>

2024-04-04 21:04:52

by Peter Xu

Subject: Re: [PATCH] userfaultfd: change src_folio after ensuring it's unpinned in UFFDIO_MOVE

On Thu, Apr 04, 2024 at 01:55:07PM -0700, Suren Baghdasaryan wrote:
> On Thu, Apr 4, 2024 at 1:32 PM Peter Xu <[email protected]> wrote:
> >
> > On Thu, Apr 04, 2024 at 06:21:50PM +0100, Matthew Wilcox wrote:
> > > On Thu, Apr 04, 2024 at 10:17:26AM -0700, Lokesh Gidra wrote:
> > > > - folio_move_anon_rmap(src_folio, dst_vma);
> > > > - WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > > > -
> > > > src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> > > > /* Folio got pinned from under us. Put it back and fail the move. */
> > > > if (folio_maybe_dma_pinned(src_folio)) {
> > > > @@ -2270,6 +2267,9 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
> > > > goto unlock_ptls;
> > > > }
> > > >
> > > > + folio_move_anon_rmap(src_folio, dst_vma);
> > > > + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > > > +
> > >
> > > This use of WRITE_ONCE scares me. We hold the folio locked. Why do
> > > we need to use WRITE_ONCE? Who's looking at folio->index without
> > > holding the folio lock?
> >
> > Seems true, but maybe it's suitable for a separate patch to clean up even so?
> > We also have the other pte level, which has the same WRITE_ONCE(), so if we
> > want to drop it we may want to drop both.
>
> Yes, I'll do that separately and will remove WRITE_ONCE() in both places.

Thanks, Suren. Besides, any comment on the below?

It's definitely a generic per-vma question too (besides my willingness to
remove that userfault-specific code..), so comments are welcome.

>
> >
> > I just got started reading some of the new move code (Lokesh, apologies for
> > not being able to provide feedback previously..), but then I found one thing
> > unclear: the special handling of private file mappings only in the userfault
> > context, and I didn't know why:
> >
> > lock_vma():
> > if (vma) {
> > /*
> > * lock_vma_under_rcu() only checks anon_vma for private
> > * anonymous mappings. But we need to ensure it is assigned in
> > * private file-backed vmas as well.
> > */
> > if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma))
> > vma_end_read(vma);
> > else
> > return vma;
> > }
> >
> > AFAIU even for generic users of lock_vma_under_rcu(), anon_vma must be
> > stable to be used. It seems weird to me that this check became a
> > userfault-specific operation.
> >
> > I was surprised that it worked for private file maps on faults, so I had a
> > look, and it seems we postpone such a check until vmf_anon_prepare(), which
> > is already the CoW path. So we do check as I expected, but deferring it to
> > that point seems unnecessary?
> >
> > Would something like the below make it much cleaner for us? I just don't
> > yet see why userfault is special here.
> >
> > Thanks,
> >
> > ===8<===
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 984b138f85b4..d5cf1d31c671 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -3213,10 +3213,8 @@ vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
> >
> > if (likely(vma->anon_vma))
> > return 0;
> > - if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> > - vma_end_read(vma);
> > - return VM_FAULT_RETRY;
> > - }
> > + /* We shouldn't try a per-vma fault at all if anon_vma isn't solid */
> > + WARN_ON_ONCE(vmf->flags & FAULT_FLAG_VMA_LOCK);
> > if (__anon_vma_prepare(vma))
> > return VM_FAULT_OOM;
> > return 0;
> > @@ -5817,9 +5815,9 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> > * find_mergeable_anon_vma uses adjacent vmas which are not locked.
> > * This check must happen after vma_start_read(); otherwise, a
> > * concurrent mremap() with MREMAP_DONTUNMAP could dissociate the VMA
> > - * from its anon_vma.
> > + * from its anon_vma. This applies to both anon or private file maps.
> > */
> > - if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma))
> > + if (unlikely(!(vma->vm_flags & VM_SHARED) && !vma->anon_vma))
> > goto inval_end_read;
> >
> > /* Check since vm_start/vm_end might change before we lock the VMA */
> > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > index f6267afe65d1..61f21da77dcd 100644
> > --- a/mm/userfaultfd.c
> > +++ b/mm/userfaultfd.c
> > @@ -72,17 +72,8 @@ static struct vm_area_struct *lock_vma(struct mm_struct *mm,
> > struct vm_area_struct *vma;
> >
> > vma = lock_vma_under_rcu(mm, address);
> > - if (vma) {
> > - /*
> > - * lock_vma_under_rcu() only checks anon_vma for private
> > - * anonymous mappings. But we need to ensure it is assigned in
> > - * private file-backed vmas as well.
> > - */
> > - if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma))
> > - vma_end_read(vma);
> > - else
> > - return vma;
> > - }
> > + if (vma)
> > + return vma;
> >
> > mmap_read_lock(mm);
> > vma = find_vma_and_prepare_anon(mm, address);
> > --
> > 2.44.0
> >
> >
> > --
> > Peter Xu
> >
>

--
Peter Xu


2024-04-04 21:08:08

by Suren Baghdasaryan

Subject: Re: [PATCH] userfaultfd: change src_folio after ensuring it's unpinned in UFFDIO_MOVE

On Thu, Apr 4, 2024 at 2:04 PM Peter Xu <[email protected]> wrote:
>
> On Thu, Apr 04, 2024 at 01:55:07PM -0700, Suren Baghdasaryan wrote:
> > On Thu, Apr 4, 2024 at 1:32 PM Peter Xu <[email protected]> wrote:
> > >
> > > On Thu, Apr 04, 2024 at 06:21:50PM +0100, Matthew Wilcox wrote:
> > > > On Thu, Apr 04, 2024 at 10:17:26AM -0700, Lokesh Gidra wrote:
> > > > > - folio_move_anon_rmap(src_folio, dst_vma);
> > > > > - WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > > > > -
> > > > > src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> > > > > /* Folio got pinned from under us. Put it back and fail the move. */
> > > > > if (folio_maybe_dma_pinned(src_folio)) {
> > > > > @@ -2270,6 +2267,9 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
> > > > > goto unlock_ptls;
> > > > > }
> > > > >
> > > > > + folio_move_anon_rmap(src_folio, dst_vma);
> > > > > + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > > > > +
> > > >
> > > > This use of WRITE_ONCE scares me. We hold the folio locked. Why do
> > > > we need to use WRITE_ONCE? Who's looking at folio->index without
> > > > holding the folio lock?
> > >
> > > Seems true, but maybe it's suitable for a separate patch to clean up even so?
> > > We also have the other pte level, which has the same WRITE_ONCE(), so if we
> > > want to drop it we may want to drop both.
> >
> > Yes, I'll do that separately and will remove WRITE_ONCE() in both places.
>
> Thanks, Suren. Besides, any comment on the below?
>
> It's definitely a generic per-vma question too (besides my willingness to
> remove that userfault-specific code..), so comments are welcome.

Yes, I was typing my reply :)
This might have happened simply because lock_vma_under_rcu() was
originally developed to handle only anonymous page faults and then got
expanded to cover file-backed cases as well. Your suggestion seems
fine to me, but I would feel much more comfortable once Matthew (who
added the file-backed support) has reviewed it.

>
> >
> > >
> > > I just got started reading some of the new move code (Lokesh, apologies for
> > > not being able to provide feedback previously..), but then I found one thing
> > > unclear: the special handling of private file mappings only in the userfault
> > > context, and I didn't know why:
> > >
> > > lock_vma():
> > > if (vma) {
> > > /*
> > > * lock_vma_under_rcu() only checks anon_vma for private
> > > * anonymous mappings. But we need to ensure it is assigned in
> > > * private file-backed vmas as well.
> > > */
> > > if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma))
> > > vma_end_read(vma);
> > > else
> > > return vma;
> > > }
> > >
> > > AFAIU even for generic users of lock_vma_under_rcu(), anon_vma must be
> > > stable to be used. It seems weird to me that this check became a
> > > userfault-specific operation.
> > >
> > > I was surprised that it worked for private file maps on faults, so I had a
> > > look, and it seems we postpone such a check until vmf_anon_prepare(), which
> > > is already the CoW path. So we do check as I expected, but deferring it to
> > > that point seems unnecessary?
> > >
> > > Would something like the below make it much cleaner for us? I just don't
> > > yet see why userfault is special here.
> > >
> > > Thanks,
> > >
> > > ===8<===
> > > diff --git a/mm/memory.c b/mm/memory.c
> > > index 984b138f85b4..d5cf1d31c671 100644
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -3213,10 +3213,8 @@ vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
> > >
> > > if (likely(vma->anon_vma))
> > > return 0;
> > > - if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> > > - vma_end_read(vma);
> > > - return VM_FAULT_RETRY;
> > > - }
> > > + /* We shouldn't try a per-vma fault at all if anon_vma isn't solid */
> > > + WARN_ON_ONCE(vmf->flags & FAULT_FLAG_VMA_LOCK);
> > > if (__anon_vma_prepare(vma))
> > > return VM_FAULT_OOM;
> > > return 0;
> > > @@ -5817,9 +5815,9 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> > > * find_mergeable_anon_vma uses adjacent vmas which are not locked.
> > > * This check must happen after vma_start_read(); otherwise, a
> > > * concurrent mremap() with MREMAP_DONTUNMAP could dissociate the VMA
> > > - * from its anon_vma.
> > > + * from its anon_vma. This applies to both anon or private file maps.
> > > */
> > > - if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma))
> > > + if (unlikely(!(vma->vm_flags & VM_SHARED) && !vma->anon_vma))
> > > goto inval_end_read;
> > >
> > > /* Check since vm_start/vm_end might change before we lock the VMA */
> > > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > > index f6267afe65d1..61f21da77dcd 100644
> > > --- a/mm/userfaultfd.c
> > > +++ b/mm/userfaultfd.c
> > > @@ -72,17 +72,8 @@ static struct vm_area_struct *lock_vma(struct mm_struct *mm,
> > > struct vm_area_struct *vma;
> > >
> > > vma = lock_vma_under_rcu(mm, address);
> > > - if (vma) {
> > > - /*
> > > - * lock_vma_under_rcu() only checks anon_vma for private
> > > - * anonymous mappings. But we need to ensure it is assigned in
> > > - * private file-backed vmas as well.
> > > - */
> > > - if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma))
> > > - vma_end_read(vma);
> > > - else
> > > - return vma;
> > > - }
> > > + if (vma)
> > > + return vma;
> > >
> > > mmap_read_lock(mm);
> > > vma = find_vma_and_prepare_anon(mm, address);
> > > --
> > > 2.44.0
> > >
> > >
> > > --
> > > Peter Xu
> > >
> >
>
> --
> Peter Xu
>

2024-04-04 20:23:31

by Suren Baghdasaryan

Subject: Re: [PATCH] userfaultfd: change src_folio after ensuring it's unpinned in UFFDIO_MOVE

On Thu, Apr 4, 2024 at 1:16 PM David Hildenbrand <[email protected]> wrote:
>
> On 04.04.24 22:07, Suren Baghdasaryan wrote:
> > On Thu, Apr 4, 2024 at 10:21 AM Matthew Wilcox <[email protected]> wrote:
> >>
> >> On Thu, Apr 04, 2024 at 10:17:26AM -0700, Lokesh Gidra wrote:
> >>> - folio_move_anon_rmap(src_folio, dst_vma);
> >>> - WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> >>> -
> >>> src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> >>> /* Folio got pinned from under us. Put it back and fail the move. */
> >>> if (folio_maybe_dma_pinned(src_folio)) {
> >>> @@ -2270,6 +2267,9 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
> >>> goto unlock_ptls;
> >>> }
> >>>
> >>> + folio_move_anon_rmap(src_folio, dst_vma);
> >>> + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> >>> +
> >>
> >> This use of WRITE_ONCE scares me. We hold the folio locked. Why do
> >> we need to use WRITE_ONCE? Who's looking at folio->index without
> >> holding the folio lock?
> >
> > Indeed, that seems to be unnecessary here. Both here and in
> > move_present_pte() we are holding the folio lock while moving the page. I
> > must have just blindly copied that from Andrea's original patch [1].
>
> Agreed, I don't think it is required for ->index. (I also don't spot any
> corresponding READ_ONCE)

Since this patch just got Ack'ed, I'll wait for Andrew to take it into
mm-unstable and will then send a fix removing those WRITE_ONCE(). That
way we won't have merge conflicts.

>
> --
> Cheers,
>
> David / dhildenb
>

2024-04-10 17:10:12

by Peter Xu

Subject: Re: [PATCH] userfaultfd: change src_folio after ensuring it's unpinned in UFFDIO_MOVE

On Thu, Apr 04, 2024 at 02:07:45PM -0700, Suren Baghdasaryan wrote:
> Yes, I was typing my reply :)
> This might have happened simply because lock_vma_under_rcu() was
> originally developed to handle only anonymous page faults and then got
> expanded to cover file-backed cases as well. Your suggestion seems
> fine to me, but I would feel much more comfortable once Matthew (who
> added the file-backed support) has reviewed it.

Thanks.

Just in case this falls through the cracks (I still think we should do
it..), I sent a formal patch just now with some more information in the
commit log. Any further review comments are welcome there.

--
Peter Xu