2022-04-24 17:29:53

by Miaohe Lin

Subject: [PATCH v3 0/3] A few fixup patches for mm

Hi everyone,
This series contains a few patches to avoid mapping random data if a swap
read fails and to fix lost swap bits in unuse_pte. Also, hwpoison and
swapin error entries are now freed in madvise_free_pte_range. More details
can be found in the respective changelogs. Thanks!

---
v3:
collect Acked-by tag per David
remove unneeded pte wrprotect per David
v2:
make the terminology consistent and collect Acked-by tag per David
fix lost swap bits in unuse_pte per Peter
free hwpoison and swapin error entry per Alistair
Many thanks to Alistair, David and Peter for the review!
---
Miaohe Lin (3):
  mm/swapfile: unuse_pte can map random data if swap read fails
  mm/swapfile: Fix lost swap bits in unuse_pte()
  mm/madvise: free hwpoison and swapin error entry in
    madvise_free_pte_range

 include/linux/swap.h    |  7 ++++++-
 include/linux/swapops.h | 10 ++++++++++
 mm/madvise.c            | 13 ++++++++-----
 mm/memory.c             |  5 ++++-
 mm/swapfile.c           | 21 ++++++++++++++++++---
 5 files changed, 46 insertions(+), 10 deletions(-)

--
2.23.0


2022-04-25 07:35:22

by Miaohe Lin

Subject: [PATCH v3 2/3] mm/swapfile: Fix lost swap bits in unuse_pte()

This was found by code review only; there is no real-world report.

When we turn off swapping, the bits stored in the swap ptes could be
lost. The new rmap-exclusive bit is fine since it is turned into a page
flag, but soft-dirty and uffd-wp are not. Preserve them.

Suggested-by: Peter Xu <[email protected]>
Signed-off-by: Miaohe Lin <[email protected]>
---
 mm/swapfile.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 95b63f69f388..522a0eb16bf1 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1783,7 +1783,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 {
 	struct page *swapcache;
 	spinlock_t *ptl;
-	pte_t *pte;
+	pte_t *pte, new_pte;
 	int ret = 1;
 
 	swapcache = page;
@@ -1832,8 +1832,12 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 		page_add_new_anon_rmap(page, vma, addr);
 		lru_cache_add_inactive_or_unevictable(page, vma);
 	}
-	set_pte_at(vma->vm_mm, addr, pte,
-		   pte_mkold(mk_pte(page, vma->vm_page_prot)));
+	new_pte = pte_mkold(mk_pte(page, vma->vm_page_prot));
+	if (pte_swp_soft_dirty(*pte))
+		new_pte = pte_mksoft_dirty(new_pte);
+	if (pte_swp_uffd_wp(*pte))
+		new_pte = pte_mkuffd_wp(new_pte);
+	set_pte_at(vma->vm_mm, addr, pte, new_pte);
 	swap_free(entry);
 out:
 	pte_unmap_unlock(pte, ptl);
--
2.23.0

2022-04-25 11:51:47

by David Hildenbrand

Subject: Re: [PATCH v3 2/3] mm/swapfile: Fix lost swap bits in unuse_pte()

On 24.04.22 11:11, Miaohe Lin wrote:
> This was found by code review only; there is no real-world report.
>
> When we turn off swapping, the bits stored in the swap ptes could be
> lost. The new rmap-exclusive bit is fine since it is turned into a page
> flag, but soft-dirty and uffd-wp are not. Preserve them.
>
> Suggested-by: Peter Xu <[email protected]>
> Signed-off-by: Miaohe Lin <[email protected]>
> ---
> mm/swapfile.c | 10 +++++++---
> 1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 95b63f69f388..522a0eb16bf1 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1783,7 +1783,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
>  {
>  	struct page *swapcache;
>  	spinlock_t *ptl;
> -	pte_t *pte;
> +	pte_t *pte, new_pte;
>  	int ret = 1;
>  
>  	swapcache = page;
> @@ -1832,8 +1832,12 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
>  		page_add_new_anon_rmap(page, vma, addr);
>  		lru_cache_add_inactive_or_unevictable(page, vma);
>  	}
> -	set_pte_at(vma->vm_mm, addr, pte,
> -		   pte_mkold(mk_pte(page, vma->vm_page_prot)));
> +	new_pte = pte_mkold(mk_pte(page, vma->vm_page_prot));
> +	if (pte_swp_soft_dirty(*pte))
> +		new_pte = pte_mksoft_dirty(new_pte);
> +	if (pte_swp_uffd_wp(*pte))
> +		new_pte = pte_mkuffd_wp(new_pte);
> +	set_pte_at(vma->vm_mm, addr, pte, new_pte);
>  	swap_free(entry);
>  out:
>  	pte_unmap_unlock(pte, ptl);

Reviewed-by: David Hildenbrand <[email protected]>

--
Thanks,

David / dhildenb