2020-10-15 18:00:52

by Shijie Luo

[permalink] [raw]
Subject: [PATCH] mm: fix potential pte_unmap_unlock pte error

When flags has neither the MPOL_MF_MOVE nor the MPOL_MF_MOVE_ALL bit set, the
loop breaks out early, so pte - 1 may point before the originally mapped pte
and passing it to pte_unmap_unlock is not a good idea.

Signed-off-by: Shijie Luo <[email protected]>
Signed-off-by: linmiaohe <[email protected]>
---
mm/mempolicy.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 3fde772ef5ef..01f088630d1d 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -571,7 +571,11 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
} else
break;
}
- pte_unmap_unlock(pte - 1, ptl);
+
+ if (addr >= end)
+ pte = pte - 1;
+
+ pte_unmap_unlock(pte, ptl);
cond_resched();

if (has_unmovable)
--
2.19.1
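
For readers less familiar with the pte walk, the underlying problem is an
ordinary pointer off-by-one: if the loop breaks before the cursor is ever
incremented, "pte - 1" points one entry before the pte that was mapped. A
minimal userspace sketch of the same pattern (plain C, simplified names, not
kernel code):

#include <stdio.h>

/*
 * walk() stands in for queue_pages_pte_range(): "mapped" plays the role of
 * the pointer returned by pte_offset_map_lock(), "p" is the loop cursor.
 */
static void walk(int *mapped, int n, int break_immediately)
{
        int *p = mapped;

        for (int i = 0; i < n; i++, p++) {
                if (break_immediately)
                        break;  /* analogue of the early break in the pte loop */
        }

        /* "p - 1" is only the last visited entry if the loop advanced. */
        printf("p - 1 is %td entries from the start\n", p - 1 - mapped);
}

int main(void)
{
        int ptes[4] = { 0 };

        walk(ptes, 4, 0);       /* prints 3: the last visited entry */
        walk(ptes, 4, 1);       /* prints -1: one entry before the mapped range */
        return 0;
}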


2020-10-16 13:14:04

by Oscar Salvador

[permalink] [raw]
Subject: Re: [PATCH] mm: fix potential pte_unmap_unlock pte error

On 2020-10-16 14:31, Michal Hocko wrote:
> I do not like the fix though. The code is really confusing. Why should
> we check for flags in each iteration of the loop when it cannot change?
> Also why should we take the ptl lock in the first place when the loop is
> broken out immediately?

About checking the flags:

https://lore.kernel.org/linux-mm/[email protected]/#t

2020-10-16 13:15:49

by Michal Hocko

[permalink] [raw]
Subject: Re: [PATCH] mm: fix potential pte_unmap_unlock pte error

On Thu 15-10-20 08:15:34, Shijie Luo wrote:
> When flags has neither the MPOL_MF_MOVE nor the MPOL_MF_MOVE_ALL bit set, the
> loop breaks out early, so pte - 1 may point before the originally mapped pte
> and passing it to pte_unmap_unlock is not a good idea.

Yes, the code is suspicious to say the least. At least mbind can reach here
with both MPOL_MF_MOVE and MPOL_MF_MOVE_ALL unset, and then the pte would be
pointing outside of the current pmd.

I do not like the fix though. The code is really confusing. Why should
we check for flags in each iteration of the loop when it cannot change?
Also why should we take the ptl lock in the first place when the loop is
broken out immediately?

I have to admit that I do not fully understand a7f40cfe3b7ad, so this should
be carefully evaluated.

If anything, something like the below would be a better fix:

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index eddbe4e56c73..7877b36a5a6d 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -539,6 +539,10 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
if (pmd_trans_unstable(pmd))
return 0;

+ /* A COMMENT GOES HERE. */
+ if (!(flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)))
+ return -EIO;
+
pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
for (; addr != end; pte++, addr += PAGE_SIZE) {
if (!pte_present(*pte))
@@ -554,28 +558,26 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
continue;
if (!queue_pages_required(page, qp))
continue;
- if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
- /* MPOL_MF_STRICT must be specified if we get here */
- if (!vma_migratable(vma)) {
- has_unmovable = true;
- break;
- }

- /*
- * Do not abort immediately since there may be
- * temporary off LRU pages in the range. Still
- * need migrate other LRU pages.
- */
- if (migrate_page_add(page, qp->pagelist, flags))
- has_unmovable = true;
- } else
+ /* MPOL_MF_STRICT must be specified if we get here */
+ if (!vma_migratable(vma)) {
+ has_unmovable = true;
break;
+ }
+
+ /*
+ * Do not abort immediately since there may be
+ * temporary off LRU pages in the range. Still
+ * need migrate other LRU pages.
+ */
+ if (migrate_page_add(page, qp->pagelist, flags))
+ has_unmovable = true;
}
pte_unmap_unlock(pte - 1, ptl);
cond_resched();

if (has_unmovable)
return 1;
return addr != end ? -EIO : 0;
}
--
Michal Hocko
SUSE Labs
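
For context, the path described above is reachable from ordinary userspace:
mbind(2) called with MPOL_MF_STRICT and neither MOVE flag only verifies
placement, so queue_pages_pte_range() runs with both MPOL_MF_MOVE and
MPOL_MF_MOVE_ALL clear. A hedged sketch (illustrative length, node 0 assumed
to exist, link with -lnuma):

#define _GNU_SOURCE
#include <numaif.h>             /* mbind(), MPOL_BIND, MPOL_MF_STRICT */
#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
        size_t len = 4096;
        unsigned long nodemask = 1UL;   /* node 0 only (assumed to exist) */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
                return 1;
        p[0] = 1;       /* populate at least one pte in the range */

        /* MPOL_MF_STRICT without MPOL_MF_MOVE/MPOL_MF_MOVE_ALL: check only. */
        if (mbind(p, len, MPOL_BIND, &nodemask, 8 * sizeof(nodemask),
                  MPOL_MF_STRICT))
                perror("mbind");
        return 0;
}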

2020-10-16 13:21:49

by Michal Hocko

[permalink] [raw]
Subject: Re: [PATCH] mm: fix potential pte_unmap_unlock pte error

On Fri 16-10-20 14:37:08, [email protected] wrote:
> On 2020-10-16 14:31, Michal Hocko wrote:
> > I do not like the fix though. The code is really confusing. Why should
> > we check for flags in each iteration of the loop when it cannot change?
> > Also why should we take the ptl lock in the first place when the loop is
> > broken out immediately?
>
> About checking the flags:
>
> https://lore.kernel.org/linux-mm/[email protected]/#t

This didn't really help. Maybe the code was different back then, but right
now the code doesn't make much sense TBH. The only reason to check inside the
loop would be to handle a completely unpopulated address range. Note that
MPOL_MF_STRICT is not checked explicitly, and I do not see how it makes any
difference.

Anyway this function would benefit from some uncluttering!

--
Michal Hocko
SUSE Labs

2020-10-16 13:22:51

by Michal Hocko

[permalink] [raw]
Subject: Re: [PATCH] mm: fix potential pte_unmap_unlock pte error

On Fri 16-10-20 15:11:17, Michal Hocko wrote:
> On Fri 16-10-20 14:37:08, [email protected] wrote:
> > On 2020-10-16 14:31, Michal Hocko wrote:
> > > I do not like the fix though. The code is really confusing. Why should
> > > we check for flags in each iteration of the loop when it cannot change?
> > > Also why should we take the ptl lock in the first place when the loop is
> > > broken out immediately?
> >
> > About checking the flags:
> >
> > https://lore.kernel.org/linux-mm/[email protected]/#t
>
> This didn't really help. Maybe the code was different back then, but right
> now the code doesn't make much sense TBH. The only reason to check inside the
> loop would be to handle a completely unpopulated address range. Note that
> MPOL_MF_STRICT is not checked explicitly, and I do not see how it makes any
> difference.

Ohh, I have missed queue_pages_required. Let me think some more.

--
Michal Hocko
SUSE Labs

2020-10-16 13:43:48

by Michal Hocko

[permalink] [raw]
Subject: Re: [PATCH] mm: fix potential pte_unmap_unlock pte error

On Fri 16-10-20 15:15:32, Michal Hocko wrote:
> On Fri 16-10-20 15:11:17, Michal Hocko wrote:
> > On Fri 16-10-20 14:37:08, [email protected] wrote:
> > > On 2020-10-16 14:31, Michal Hocko wrote:
> > > > I do not like the fix though. The code is really confusing. Why should
> > > > we check for flags in each iteration of the loop when it cannot change?
> > > > Also why should we take the ptl lock in the first place when the loop is
> > > > broken out immediately?
> > >
> > > About checking the flags:
> > >
> > > https://lore.kernel.org/linux-mm/[email protected]/#t
> >
> > This didn't really help. Maybe the code was different back then, but right
> > now the code doesn't make much sense TBH. The only reason to check inside the
> > loop would be to handle a completely unpopulated address range. Note that
> > MPOL_MF_STRICT is not checked explicitly, and I do not see how it makes any
> > difference.
>
> Ohh, I have missed queue_pages_required. Let me think some more.

OK, I finally managed to convince my Friday brain to think and grasped what
the code is intended to do. The loop is hairy and we want to prevent a
spurious EIO when all the pages are on a proper node, so the check has to be
done inside the loop. Anyway, I would find the following fix less error-prone
and easier to follow:
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index eddbe4e56c73..8cc1fc9c4d13 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -525,7 +525,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
unsigned long flags = qp->flags;
int ret;
bool has_unmovable = false;
- pte_t *pte;
+ pte_t *pte, *mapped_pte;
spinlock_t *ptl;

ptl = pmd_trans_huge_lock(pmd, vma);
@@ -539,7 +539,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
if (pmd_trans_unstable(pmd))
return 0;

- pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+ mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
for (; addr != end; pte++, addr += PAGE_SIZE) {
if (!pte_present(*pte))
continue;
@@ -571,7 +571,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
} else
break;
}
- pte_unmap_unlock(pte - 1, ptl);
+ pte_unmap_unlock(mapped_pte, ptl);
cond_resched();

if (has_unmovable)
--
Michal Hocko
SUSE Labs
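
The shape of this fix is the general "release what you mapped" pattern: keep
the pointer that pte_offset_map_lock() handed out and pass that one back,
regardless of where the loop cursor stopped. A userspace sketch with
simplified names (calloc/free standing in for map/unmap, not kernel code):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        size_t n = 4;
        /* calloc() stands in for pte_offset_map_lock(). */
        int *mapped = calloc(n, sizeof(*mapped));
        int *cursor = mapped;

        if (!mapped)
                return 1;

        for (size_t i = 0; i < n; i++, cursor++) {
                if (mapped[i] == 0)
                        break;  /* may break before the first increment */
        }

        /*
         * free(cursor - 1) would be wrong after an immediate break; releasing
         * the saved pointer is correct wherever the cursor stopped, just like
         * pte_unmap_unlock(mapped_pte, ptl).
         */
        free(mapped);
        printf("released the originally mapped pointer\n");
        return 0;
}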

2020-10-16 15:39:02

by Oscar Salvador

[permalink] [raw]
Subject: Re: [PATCH] mm: fix potential pte_unmap_unlock pte error

On 2020-10-16 15:42, Michal Hocko wrote:
> OK, I finally managed to convince my Friday brain to think and grasped what
> the code is intended to do. The loop is hairy and we want to prevent a
> spurious EIO when all the pages are on a proper node, so the check has to be
> done inside the loop. Anyway, I would find the following fix less error-prone
> and easier to follow:
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index eddbe4e56c73..8cc1fc9c4d13 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -525,7 +525,7 @@ static int queue_pages_pte_range(pmd_t *pmd,
> unsigned long addr,
> unsigned long flags = qp->flags;
> int ret;
> bool has_unmovable = false;
> - pte_t *pte;
> + pte_t *pte, *mapped_pte;
> spinlock_t *ptl;
>
> ptl = pmd_trans_huge_lock(pmd, vma);
> @@ -539,7 +539,7 @@ static int queue_pages_pte_range(pmd_t *pmd,
> unsigned long addr,
> if (pmd_trans_unstable(pmd))
> return 0;
>
> - pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> + mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> for (; addr != end; pte++, addr += PAGE_SIZE) {
> if (!pte_present(*pte))
> continue;
> @@ -571,7 +571,7 @@ static int queue_pages_pte_range(pmd_t *pmd,
> unsigned long addr,
> } else
> break;
> }
> - pte_unmap_unlock(pte - 1, ptl);
> + pte_unmap_unlock(mapped_pte, ptl);
> cond_resched();
>
> if (has_unmovable)

It is definitely clearer to grasp.

2020-10-17 05:50:25

by Shijie Luo

[permalink] [raw]
Subject: Re: [PATCH] mm: fix potential pte_unmap_unlock pte error

On 2020/10/16 22:05, [email protected] wrote:
> On 2020-10-16 15:42, Michal Hocko wrote:
>> OK, I finally managed to convince my Friday brain to think and grasped what
>> the code is intended to do. The loop is hairy and we want to prevent a
>> spurious EIO when all the pages are on a proper node, so the check has to be
>> done inside the loop. Anyway, I would find the following fix less error-prone
>> and easier to follow:
>> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
>> index eddbe4e56c73..8cc1fc9c4d13 100644
>> --- a/mm/mempolicy.c
>> +++ b/mm/mempolicy.c
>> @@ -525,7 +525,7 @@ static int queue_pages_pte_range(pmd_t *pmd,
>> unsigned long addr,
>>      unsigned long flags = qp->flags;
>>      int ret;
>>      bool has_unmovable = false;
>> -    pte_t *pte;
>> +    pte_t *pte, *mapped_pte;
>>      spinlock_t *ptl;
>>
>>      ptl = pmd_trans_huge_lock(pmd, vma);
>> @@ -539,7 +539,7 @@ static int queue_pages_pte_range(pmd_t *pmd,
>> unsigned long addr,
>>      if (pmd_trans_unstable(pmd))
>>          return 0;
>>
>> -    pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
>> +    mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
>>      for (; addr != end; pte++, addr += PAGE_SIZE) {
>>          if (!pte_present(*pte))
>>              continue;
>> @@ -571,7 +571,7 @@ static int queue_pages_pte_range(pmd_t *pmd,
>> unsigned long addr,
>>          } else
>>              break;
>>      }
>> -    pte_unmap_unlock(pte - 1, ptl);
>> +    pte_unmap_unlock(mapped_pte, ptl);
>>      cond_resched();
>>
>>      if (has_unmovable)
>
> It is definitely clearer to grasp.
Yeah, this one is more comprehensible. I'll send a v2 patch, thank you.