2023-06-02 23:22:34

by Peter Xu

Subject: [PATCH 0/4] mm: Fix pmd_trans_unstable() call sites on retry

When we hit pmd_trans_unstable() under the mmap read lock, it means we
raced with something else. Per the comment above the helper we could treat
it as some pmd state (none?), but the 100% correct way is to always retry,
and I don't think the retry should race again in most cases.

Not taking care of that retry can mean different things on different
paths.

For example, for smaps it means inaccurate accounting when we skip those
raced regions, but that is tolerable because the accounting was never
meant to be 100% accurate.

OTOH I think it's broken for pagemap, because the pagemap buffer is linear
to the VA range we're scanning: if we skip some region, the follow-up
scans fill in the wrong slots. That means the pagemap results returned to
userspace can be wrong when we're very unlucky.
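
To make that concrete, here is a heavily simplified illustration (not the
real fs/proc/task_mmu.c code; the names are made up): userspace derives the
VA of each pagemap entry from its index in the output buffer, so every page
of the scanned range must emit exactly one entry:

struct pm_sketch {
        u64 *buffer;            /* linear output, one u64 per scanned page */
        unsigned long pos;      /* next slot to fill */
};

static int pm_sketch_pmd_range(pmd_t *pmd, unsigned long addr,
                               unsigned long end, struct pm_sketch *pm)
{
        if (pmd_trans_unstable(pmd)) {
                /*
                 * Returning here without emitting (end - addr) >> PAGE_SHIFT
                 * entries shifts every later entry to a lower slot, so
                 * userspace attributes them to the wrong virtual addresses.
                 */
                return 0;
        }

        for (; addr != end; addr += PAGE_SIZE)
                pm->buffer[pm->pos++] = 0 /* encode the pte state here */;

        return 0;
}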

This reminded me to have a look at all call sites of pmd_trans_unstable():
some of them are alright, but many of them would be better off giving the
pmd another shot when this is hit.

This series tries to make all those call sites retry when that happens.
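
To illustrate (a rough sketch only, with made-up names; this is not a hunk
from the series), the general shape of the change at each call site is to
re-check the same pmd rather than silently skipping the range:

static void scan_one_pmd(struct vm_area_struct *vma, pmd_t *pmd,
                         unsigned long addr, unsigned long end)
{
        spinlock_t *ptl;
        pte_t *pte;

retry:
        if (pmd_trans_unstable(pmd)) {
                /*
                 * Raced with THP collapse/split while only holding the
                 * mmap read lock: the pmd is transiently unstable.  Retry
                 * this pmd instead of pretending the range is pmd_none().
                 */
                cond_resched();
                goto retry;
        }

        pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
        for (; addr != end; pte++, addr += PAGE_SIZE) {
                /* ... inspect or modify *pte here ... */
        }
        pte_unmap_unlock(pte - 1, ptl);
}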

I really don't know whether I missed something, or, even if not, whether
it matters a lot to anyone. Still, _if_ I'm correct it may be worth
considering a fix. Happy to be proven wrong. Then Muhammad should know how
to code his.

The patchset is only smoke tested; nothing wrong that I can see so far.

Please have a look, thanks.

Peter Xu (4):
mm/mprotect: Retry on pmd_trans_unstable()
mm/migrate: Unify and retry an unstable pmd when hit
mm: Warn for unstable pmd in move_page_tables()
mm: Make most walk page paths with pmd_trans_unstable() to retry

 fs/proc/task_mmu.c  | 17 +++++++++++++----
 mm/madvise.c        |  8 ++++++--
 mm/memcontrol.c     |  8 ++++++--
 mm/memory-failure.c |  4 +++-
 mm/mempolicy.c      |  4 +++-
 mm/migrate_device.c |  9 ++++-----
 mm/mprotect.c       | 20 +++++++++++---------
 mm/mremap.c         |  4 ++--
 8 files changed, 48 insertions(+), 26 deletions(-)

--
2.40.1



2023-06-02 23:33:31

by Peter Xu

Subject: [PATCH 3/4] mm: Warn for unstable pmd in move_page_tables()

We already hold the mmap write lock here, so it's not possible to trigger
an unstable pmd. Make it a WARN_ON_ONCE() instead.

Cc: Naoya Horiguchi <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
---
mm/mremap.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/mremap.c b/mm/mremap.c
index da107f2c71bf..9303e4da4e7f 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -544,8 +544,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
                                            old_pmd, new_pmd, need_rmap_locks))
                                 continue;
                         split_huge_pmd(vma, old_pmd, old_addr);
-                        if (pmd_trans_unstable(old_pmd))
-                                continue;
+                        /* We're holding the mmap write lock, not possible to happen */
+                        WARN_ON_ONCE(pmd_trans_unstable(old_pmd));
                 } else if (IS_ENABLED(CONFIG_HAVE_MOVE_PMD) &&
                            extent == PMD_SIZE) {
                         /*
--
2.40.1


2023-06-07 16:05:56

by Peter Xu

Subject: Re: [PATCH 0/4] mm: Fix pmd_trans_unstable() call sites on retry

On Fri, Jun 02, 2023 at 07:05:48PM -0400, Peter Xu wrote:
> Please have a look, thanks.

Hello, all,

This one seems to more or less conflict with Hugh's rework on pmd
collapse. Please hold off review or merging until I prepare another one
(probably based on Hugh's, after I have a closer read).

Sorry for the noise.

--
Peter Xu


2023-06-07 16:22:10

by David Hildenbrand

Subject: Re: [PATCH 0/4] mm: Fix pmd_trans_unstable() call sites on retry

On 07.06.23 15:49, Peter Xu wrote:
> On Fri, Jun 02, 2023 at 07:05:48PM -0400, Peter Xu wrote:
>> Please have a look, thanks.
>
> Hello, all,
>
> This one seems to more or less conflict with Hugh's rework on pmd
> collapse. Please hold off review or merging until I prepare another one
> (probably based on Hugh's, after I have a closer read).
>
> Sorry for the noise.
>

[did not have time to look yet]

Are there any fixes buried in there that we'd want to get in earlier? I
skimmed over the patches and they all read like "cleanup" + "consistency",
correct?

--
Cheers,

David / dhildenb


2023-06-07 16:44:19

by Peter Xu

Subject: Re: [PATCH 0/4] mm: Fix pmd_trans_unstable() call sites on retry

On Wed, Jun 07, 2023 at 05:45:28PM +0200, David Hildenbrand wrote:
> On 07.06.23 15:49, Peter Xu wrote:
> > On Fri, Jun 02, 2023 at 07:05:48PM -0400, Peter Xu wrote:
> > > Please have a look, thanks.
> >
> > Hello, all,
> >
> > This one seems to more or less conflict with Hugh's rework on pmd
> > collapse. Please hold off review or merging until I prepare another one
> > (probably based on Hugh's, after I have a closer read).
> >
> > Sorry for the noise.
> >
>
> [did not have time to look yet]
>
> Are there any fixes buried in there that we'd want to get in earlier? I
> skimmed over the patches and they all read like "cleanup" + "consistency",
> correct?

There are bug fixes for when we unluckily hit an unstable pmd, I think;
these ones are worth mentioning:

- pagemap can be broken, causing the read results to be shifted over to
  the next slots (wrong data read)

- memcg accounting can be wrong, e.g., when moving one task from memcg1 to
  memcg2 we can skip an unstable pmd while it could quickly be populated
  again with something that still belongs to memcg1, I think. This one
  needs some eyes from memcg developers.

I'm not rushing to have them merged because these are all theoretical: no
bug report that I have seen, no reproducer that I wrote, only issues
spotted by eye.

At least the pagemap issue must have been there for 10+ years without
being noticed, even if rightfully spotted this time. Meanwhile this seems
to conflict with Hugh's series, which was posted earlier - I still need to
check how that will affect this series, but haven't yet.

That said, let me know if any of you hit any (potential) issue with the
above, or think that we should move this in earlier.

Thanks,

--
Peter Xu


2023-06-07 17:10:37

by Yang Shi

Subject: Re: [PATCH 0/4] mm: Fix pmd_trans_unstable() call sites on retry

On Wed, Jun 7, 2023 at 9:21 AM Peter Xu <[email protected]> wrote:
>
> On Wed, Jun 07, 2023 at 05:45:28PM +0200, David Hildenbrand wrote:
> > On 07.06.23 15:49, Peter Xu wrote:
> > > On Fri, Jun 02, 2023 at 07:05:48PM -0400, Peter Xu wrote:
> > > > Please have a look, thanks.
> > >
> > > Hello, all,
> > >
> > > This one seems to more or less conflict with Hugh's rework on pmd
> > > collapse. Please hold off review or merging until I prepare another one
> > > (probably based on Hugh's, after I have a closer read).
> > >
> > > Sorry for the noise.
> > >
> >
> > [did not have time to look yet]
> >
> > Are there any fixes buried in there that we'd want to get in earlier? I
> > skimmed over the patches and they all read like "cleanup" + "consistency",
> > correct?
>
> There are bug fixes for when we unluckily hit an unstable pmd, I think;
> these ones are worth mentioning:
>
> - pagemap can be broken, causing the read results to be shifted over to
>   the next slots (wrong data read)

Yes, it may corrupt the pagemap data. But anyway it seems like nobody has
been bitten by this one, as you said.

>
> - memcg accounting can be wrong, e.g., when moving one task from memcg1 to
>   memcg2 we can skip an unstable pmd while it could quickly be populated
>   again with something that still belongs to memcg1, I think. This one
>   needs some eyes from memcg developers.

I don't think this is an important thing. There are plenty of other
conditions that could make the accounting inaccurate, for example, failing
to isolate a page from the LRU, force charging, etc. And it seems like
nobody was bothered by this either.

>
> I'm not rushing to have them merged because these are all theoretical: no
> bug report that I have seen, no reproducer that I wrote, only issues
> spotted by eye.
>
> At least the pagemap issue must have been there for 10+ years without
> being noticed, even if rightfully spotted this time. Meanwhile this seems
> to conflict with Hugh's series, which was posted earlier - I still need to
> check how that will affect this series, but haven't yet.
>
> That said, let me know if any of you hit any (potential) issue with the
> above, or think that we should move this in earlier.
>
> Thanks,
>
> --
> Peter Xu
>
>

2023-06-07 18:55:45

by Peter Xu

Subject: Re: [PATCH 0/4] mm: Fix pmd_trans_unstable() call sites on retry

On Wed, Jun 07, 2023 at 09:39:44AM -0700, Yang Shi wrote:
> I don't think this is an important thing. There are plenty of other
> conditions that could make the accounting inaccurate, for example, failing
> to isolate a page from the LRU, force charging, etc. And it seems like
> nobody was bothered by this either.

Yes, I read up on that a bit more and I agree. So let me summarize, having
just read through Hugh's series.

With the prerequisite of allowing pte_offset_map[_lock]() to fail, which
Hugh proposed here:

[PATCH 04/31] mm/pgtable: allow pte_offset_map[_lock]() to fail
https://lore.kernel.org/r/[email protected]

We should not need pmd_trans_unstable() anymore, as Hugh pointed out, and
I fully agree. I think Hugh's series already covers all the issues that
this series wanted to address, namely:

Patch 1 (mprotect) is covered in:

[PATCH 18/31] mm/mprotect: delete pmd_none_or_clear_bad_unless_trans_huge()
https://lore.kernel.org/r/[email protected]

There is no way to move the spinlock outside, so an -EAGAIN retry is
needed there, which makes sense.
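
A rough sketch of that shape (simplified, with made-up names; not Hugh's
actual hunk): the pte-level helper reports the failure and the pmd-level
caller retries the same pmd instead of skipping it:

static long sketch_change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
                                    unsigned long addr, unsigned long end)
{
        spinlock_t *ptl;
        pte_t *pte;
        long pages = 0;

        pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
        if (!pte)
                return -EAGAIN; /* pmd changed under us; caller retries */

        /* ... walk the ptes, change protections, count pages ... */

        pte_unmap_unlock(pte, ptl);
        return pages;
}

And at the pmd level the call site retries rather than moving on:

        long ret;

        do {
                ret = sketch_change_pte_range(vma, pmd, addr, next);
        } while (ret == -EAGAIN);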

Patch 2 (migrate_device) is covered in:

[PATCH 24/31] mm/migrate_device: allow pte_offset_map_lock() to fail
https://lore.kernel.org/r/[email protected]

It's handled by a direct retry, and more code gets unified, so even better.

Patch 3 (mremap) is covered in:

[PATCH 19/31] mm/mremap: retry if either pte_offset_map_*lock() fails
https://lore.kernel.org/r/[email protected]

Instead of a WARN_ON_ONCE(), the check got dropped entirely, which looks
all good, too.

Most of patch 4's changes are done in:

[PATCH 09/31] mm/pagewalkers: ACTION_AGAIN if pte_offset_map_lock() fails
https://lore.kernel.org/r/[email protected]

There is some different handling of the memcg changes, which in Hugh's
series is done separately here:

[PATCH 17/31] mm/various: give up if pte_offset_map[_lock]() fails
https://lore.kernel.org/r/[email protected]

After double checking, I agree with Yang's comment and Hugh's approach:
no retry is needed for memcg.
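
For completeness, the ACTION_AGAIN flow that the pagewalker patch relies on
looks roughly like below (a sketch only, with a made-up callback name, not
a hunk from Hugh's series): once pte_offset_map_lock() is allowed to fail,
a ->pmd_entry callback does not need pmd_trans_unstable() at all:

static int sketch_pte_range(pmd_t *pmd, unsigned long addr,
                            unsigned long end, struct mm_walk *walk)
{
        spinlock_t *ptl;
        pte_t *pte;

        pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
        if (!pte) {
                /* pmd changed underneath us (e.g. THP collapse): redo it */
                walk->action = ACTION_AGAIN;
                return 0;
        }

        for (; addr != end; pte++, addr += PAGE_SIZE) {
                /* ... inspect *pte ... */
        }

        pte_unmap_unlock(pte - 1, ptl);
        return 0;
}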

Let's ignore this series; it's not needed anymore.

Thanks,

--
Peter Xu