2020-10-17 09:51:15

by Shijie Luo

Subject: [PATCH V2] mm: fix potential pte_unmap_unlock pte error

When flags have neither MPOL_MF_MOVE nor MPOL_MF_MOVE_ALL set, the walk
loop can break on the first pte in the range, and passing pte - 1 to
pte_unmap_unlock then underflows the mapped range.

Signed-off-by: Shijie Luo <[email protected]>
Signed-off-by: Michal Hocko <[email protected]>
Signed-off-by: Miaohe Lin <[email protected]>
---
mm/mempolicy.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 3fde772ef5ef..3ca4898f3f24 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -525,7 +525,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 	unsigned long flags = qp->flags;
 	int ret;
 	bool has_unmovable = false;
-	pte_t *pte;
+	pte_t *pte, *mapped_pte;
 	spinlock_t *ptl;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
@@ -539,7 +539,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 	if (pmd_trans_unstable(pmd))
 		return 0;
 
-	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+	mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
 	for (; addr != end; pte++, addr += PAGE_SIZE) {
 		if (!pte_present(*pte))
 			continue;
@@ -571,7 +571,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 		} else
 			break;
 	}
-	pte_unmap_unlock(pte - 1, ptl);
+	pte_unmap_unlock(mapped_pte, ptl);
 	cond_resched();
 
 	if (has_unmovable)
--
2.19.1


2020-10-19 10:22:25

by Michal Hocko

Subject: Re: [PATCH V2] mm: fix potential pte_unmap_unlock pte error

On Fri 16-10-20 22:11:51, Shijie Luo wrote:
> When flags have neither MPOL_MF_MOVE nor MPOL_MF_MOVE_ALL set, the walk
> loop can break on the first pte in the range, and passing pte - 1 to
> pte_unmap_unlock then underflows the mapped range.

This would really benefit from some improvements. It is preferable to
describe the user-visible effect of the patch. I would propose the
following; feel free to reuse parts as you see fit.
"
queue_pages_pte_range can run in MPOL_MF_STRICT mode, which doesn't
migrate misplaced pages but returns with EIO when encountering such a
page. Since commit a7f40cfe3b7a ("mm: mempolicy: make mbind() return
-EIO when MPOL_MF_STRICT is specified"), an early break on the first pte
in the range results in pte_unmap_unlock on an underflow pte. This can
lead to lockups later on when somebody tries to lock the pte resp.
page_table_lock again.

Fixes: a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when MPOL_MF_STRICT is specified")
"

> Signed-off-by: Shijie Luo <[email protected]>
> Signed-off-by: Michal Hocko <[email protected]>
> Signed-off-by: Miaohe Lin <[email protected]>

No need to add my s-o-b.

> ---
> mm/mempolicy.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 3fde772ef5ef..3ca4898f3f24 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -525,7 +525,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
>  	unsigned long flags = qp->flags;
>  	int ret;
>  	bool has_unmovable = false;
> -	pte_t *pte;
> +	pte_t *pte, *mapped_pte;
>  	spinlock_t *ptl;
>
>  	ptl = pmd_trans_huge_lock(pmd, vma);
> @@ -539,7 +539,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
>  	if (pmd_trans_unstable(pmd))
>  		return 0;
>
> -	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> +	mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
>  	for (; addr != end; pte++, addr += PAGE_SIZE) {
>  		if (!pte_present(*pte))
>  			continue;
> @@ -571,7 +571,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
>  		} else
>  			break;
>  	}
> -	pte_unmap_unlock(pte - 1, ptl);
> +	pte_unmap_unlock(mapped_pte, ptl);
>  	cond_resched();
>
>  	if (has_unmovable)
> --
> 2.19.1
>

--
Michal Hocko
SUSE Labs

2020-10-19 21:16:40

by Shijie Luo

Subject: Re: [PATCH V2] mm: fix potential pte_unmap_unlock pte error

On 2020/10/19 14:59, Michal Hocko wrote:
> On Fri 16-10-20 22:11:51, Shijie Luo wrote:
>> When flags have neither MPOL_MF_MOVE nor MPOL_MF_MOVE_ALL set, the walk
>> loop can break on the first pte in the range, and passing pte - 1 to
>> pte_unmap_unlock then underflows the mapped range.
> This would really benefit from some improvements. It is preferable to
> describe the user-visible effect of the patch. I would propose the
> following; feel free to reuse parts as you see fit.
> "
> queue_pages_pte_range can run in MPOL_MF_STRICT mode, which doesn't
> migrate misplaced pages but returns with EIO when encountering such a
> page. Since commit a7f40cfe3b7a ("mm: mempolicy: make mbind() return
> -EIO when MPOL_MF_STRICT is specified"), an early break on the first pte
> in the range results in pte_unmap_unlock on an underflow pte. This can
> lead to lockups later on when somebody tries to lock the pte resp.
> page_table_lock again.
>
> Fixes: a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when MPOL_MF_STRICT is specified")
> "
I will take these into the patch description and send version 3. Thanks.