2023-10-02 17:46:25

by David Hildenbrand

Subject: [PATCH v1 1/3] mm/rmap: move SetPageAnonExclusive() out of page_move_anon_rmap()

Let's move it into the caller: there is a difference between whether an
anon folio can only be mapped by one process (e.g., into one VMA), and
whether it is truly exclusive (e.g., no references -- including GUP --
from other processes).

Further, for large folios the page might not actually be pointing at the
head page of the folio, so it better be handled in the caller. This is a
preparation for converting page_move_anon_rmap() to consume a folio.

Signed-off-by: David Hildenbrand <[email protected]>
---
 mm/huge_memory.c | 1 +
 mm/hugetlb.c     | 4 +++-
 mm/memory.c      | 1 +
 mm/rmap.c        | 1 -
 4 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e54fb9c542bb..01d0d65ece13 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1506,6 +1506,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 		pmd_t entry;
 
 		page_move_anon_rmap(page, vma);
+		SetPageAnonExclusive(page);
 		folio_unlock(folio);
 reuse:
 	if (unlikely(unshare)) {
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9c22297d9c57..24591fc145ff 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5460,8 +5460,10 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * owner and can reuse this page.
 	 */
 	if (folio_mapcount(old_folio) == 1 && folio_test_anon(old_folio)) {
-		if (!PageAnonExclusive(&old_folio->page))
+		if (!PageAnonExclusive(&old_folio->page)) {
 			page_move_anon_rmap(&old_folio->page, vma);
+			SetPageAnonExclusive(&old_folio->page);
+		}
 		if (likely(!unshare))
 			set_huge_ptep_writable(vma, haddr, ptep);
 
diff --git a/mm/memory.c b/mm/memory.c
index d4820802b01b..9de231c92769 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3484,6 +3484,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 		 * sunglasses. Hit it.
 		 */
 		page_move_anon_rmap(vmf->page, vma);
+		SetPageAnonExclusive(vmf->page);
 		folio_unlock(folio);
 reuse:
 	if (unlikely(unshare)) {
diff --git a/mm/rmap.c b/mm/rmap.c
index 77222adccda1..854ccbd66954 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1165,7 +1165,6 @@ void page_move_anon_rmap(struct page *page, struct vm_area_struct *vma)
 	 * folio_test_anon()) will not see one without the other.
 	 */
 	WRITE_ONCE(folio->mapping, anon_vma);
-	SetPageAnonExclusive(page);
 }
 
 /**
--
2.41.0


2023-10-03 16:57:45

by Suren Baghdasaryan

Subject: Re: [PATCH v1 1/3] mm/rmap: move SetPageAnonExclusive() out of page_move_anon_rmap()

On Mon, Oct 2, 2023 at 7:29 AM David Hildenbrand <[email protected]> wrote:
>
> Let's move it into the caller: there is a difference between whether an
> anon folio can only be mapped by one process (e.g., into one VMA), and
> whether it is truly exclusive (e.g., no references -- including GUP --
> from other processes).
>
> Further, for large folios the page might not actually be pointing at the
> head page of the folio, so it better be handled in the caller. This is a
> preparation for converting page_move_anon_rmap() to consume a folio.
>
> Signed-off-by: David Hildenbrand <[email protected]>

Reviewed-by: Suren Baghdasaryan <[email protected]>

> ---
>  mm/huge_memory.c | 1 +
>  mm/hugetlb.c     | 4 +++-
>  mm/memory.c      | 1 +
>  mm/rmap.c        | 1 -
>  4 files changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index e54fb9c542bb..01d0d65ece13 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1506,6 +1506,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
>  		pmd_t entry;
> 
>  		page_move_anon_rmap(page, vma);
> +		SetPageAnonExclusive(page);
>  		folio_unlock(folio);
>  reuse:
>  	if (unlikely(unshare)) {
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 9c22297d9c57..24591fc145ff 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5460,8 +5460,10 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
>  	 * owner and can reuse this page.
>  	 */
>  	if (folio_mapcount(old_folio) == 1 && folio_test_anon(old_folio)) {
> -		if (!PageAnonExclusive(&old_folio->page))
> +		if (!PageAnonExclusive(&old_folio->page)) {
>  			page_move_anon_rmap(&old_folio->page, vma);
> +			SetPageAnonExclusive(&old_folio->page);
> +		}
>  		if (likely(!unshare))
>  			set_huge_ptep_writable(vma, haddr, ptep);
>
> diff --git a/mm/memory.c b/mm/memory.c
> index d4820802b01b..9de231c92769 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3484,6 +3484,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
>  		 * sunglasses. Hit it.
>  		 */
>  		page_move_anon_rmap(vmf->page, vma);
> +		SetPageAnonExclusive(vmf->page);
>  		folio_unlock(folio);
>  reuse:
>  	if (unlikely(unshare)) {
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 77222adccda1..854ccbd66954 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1165,7 +1165,6 @@ void page_move_anon_rmap(struct page *page, struct vm_area_struct *vma)
>  	 * folio_test_anon()) will not see one without the other.
>  	 */
>  	WRITE_ONCE(folio->mapping, anon_vma);
> -	SetPageAnonExclusive(page);
>  }
> 
>  /**
> --
> 2.41.0
>

2023-10-03 17:16:12

by Vishal Moola

Subject: Re: [PATCH v1 1/3] mm/rmap: move SetPageAnonExclusive() out of page_move_anon_rmap()

On Mon, Oct 02, 2023 at 04:29:47PM +0200, David Hildenbrand wrote:
> Let's move it into the caller: there is a difference between whether an
> anon folio can only be mapped by one process (e.g., into one VMA), and
> whether it is truly exclusive (e.g., no references -- including GUP --
> from other processes).
>
> Further, for large folios the page might not actually be pointing at the
> head page of the folio, so it better be handled in the caller. This is a
> preparation for converting page_move_anon_rmap() to consume a folio.

Reviewed-by: Vishal Moola (Oracle) <[email protected]>