2023-07-28 22:29:34

by John Hubbard

Subject: Re: [PATCH v1 0/4] smaps / mm/gup: fix gup_can_follow_protnone fallout

On 7/28/23 14:20, Peter Xu wrote:
> On Fri, Jul 28, 2023 at 11:02:46PM +0200, David Hildenbrand wrote:
>> Can we get a simple revert in first (without that FOLL_FORCE special casing
>> and ideally with a better name) to handle stable backports, and I'll
>> follow-up with more documentation and letting GUP callers pass in that flag
>> instead?
>>
>> That would help a lot. Then we also have more time to let that "move it to
>> GUP callers" mature a bit in -next, to see if we find any surprises?
>
> As I raised in the other thread, I still worry that numa users can be
> affected by this change. After all, numa isn't so uncommon to me; at
> least fedora / rhel have CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y, and I
> highly suspect that's also true for other major distros. Meanwhile, all
> kernel modules use gup..
>
> I'd say we can go ahead and try if we want, but I really don't see how
> moving it to the callers helps, given the risk of breaking someone.

It's worth the trouble in order to clear up this historical mess: it helps
*future* callers of the API, and future maintenance efforts. Yes, there is
some risk, but it seems very manageable.

The story of how FOLL_NUMA and FOLL_FORCE became entangled was enlightening,
by the way, and now that I've read it I don't want to go back. :)


thanks,
--
John Hubbard
NVIDIA



2023-07-28 23:01:08

by Peter Xu

Subject: Re: [PATCH v1 0/4] smaps / mm/gup: fix gup_can_follow_protnone fallout

Hello, John,

On Fri, Jul 28, 2023 at 02:32:12PM -0700, John Hubbard wrote:
> On 7/28/23 14:20, Peter Xu wrote:
> > On Fri, Jul 28, 2023 at 11:02:46PM +0200, David Hildenbrand wrote:
> > > Can we get a simple revert in first (without that FOLL_FORCE special casing
> > > and ideally with a better name) to handle stable backports, and I'll
> > > follow-up with more documentation and letting GUP callers pass in that flag
> > > instead?
> > >
> > > That would help a lot. Then we also have more time to let that "move it to
> > > GUP callers" mature a bit in -next, to see if we find any surprises?
> >
> > As I raised in the other thread, I still worry that numa users can be
> > affected by this change. After all, numa isn't so uncommon to me; at
> > least fedora / rhel have CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y, and I
> > highly suspect that's also true for other major distros. Meanwhile, all
> > kernel modules use gup..
> >
> > I'd say we can go ahead and try if we want, but I really don't see how
> > moving it to the callers helps, given the risk of breaking someone.
>
> It's worth the trouble in order to clear up this historical mess: it helps
> *future* callers of the API, and future maintenance efforts. Yes, there is
> some risk, but it seems very manageable.
>
> The story of how FOLL_NUMA and FOLL_FORCE became entangled was enlightening,
> by the way, and now that I've read it I don't want to go back. :)

Yeah, I fully agree we should remove the NUMA / FORCE entanglement.. even if
we want to revert back to the FOLL_NUMA flag, we may want to not revive that
specific part. I had a feeling that we're all on the same page there.
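
For reference, the entanglement in question is roughly this (a simplified
sketch, not verbatim kernel code): before the removal, GUP implied "honor
NUMA hinting faults" for everyone except FOLL_FORCE callers, and the helper
that replaced FOLL_NUMA kept that coupling by keying purely off FOLL_FORCE:

static inline bool gup_can_follow_protnone(unsigned int flags)
{
	/*
	 * Only FOLL_FORCE callers may map through PROT_NONE NUMA-hinting
	 * PTEs; everybody else takes the hinting fault instead.
	 */
	return flags & FOLL_FORCE;
}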

It's more about the further step to make FOLL_NUMA opt-in for GUP.

Thanks,

--
Peter Xu


2023-07-28 23:26:12

by John Hubbard

Subject: Re: [PATCH v1 0/4] smaps / mm/gup: fix gup_can_follow_protnone fallout

On 7/28/23 14:49, Peter Xu wrote:
>> The story of how FOLL_NUMA and FOLL_FORCE became entangled was enlightening,
>> by the way, and now that I've read it I don't want to go back. :)
>
> Yeah, I fully agree we should remove the NUMA / FORCE entanglement.. even if
> we want to revert back to the FOLL_NUMA flag, we may want to not revive that
> specific part. I had a feeling that we're all on the same page there.
>
>

Yes, I think so. :)

> It's more about the further step to make FOLL_NUMA opt-in for GUP.

Let's say "FOLL_HONOR_NUMA_FAULT" for this next discussion, but yes. So
given that our API allows passing in FOLL_ flags, I don't understand the
objection to letting different callers pass in, or not pass in, that
flag.

It's the perfect way to clean up the whole thing. As Linus suggested
slightly earlier here, there can be a comment at the call site,
explaining why KVM needs FOLL_HONOR_NUMA_FAULT, and you're good, right?
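
As a purely illustrative sketch (not the actual KVM code path, and assuming
the flag name we're using for this discussion), an opted-in call site could
look roughly like this:

static long sketch_get_guest_page(unsigned long hva, struct page **page)
{
	unsigned int gup_flags = FOLL_WRITE;

	/*
	 * KVM maps these pages into secondary (stage-2) page tables, so it
	 * wants to keep observing NUMA hinting faults here; otherwise NUMA
	 * balancing loses its chance to migrate guest memory.
	 */
	gup_flags |= FOLL_HONOR_NUMA_FAULT;

	return get_user_pages_unlocked(hva, 1, page, gup_flags);
}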


thanks,
--
John Hubbard
NVIDIA


2023-07-31 17:48:03

by Peter Xu

Subject: Re: [PATCH v1 0/4] smaps / mm/gup: fix gup_can_follow_protnone fallout

On Fri, Jul 28, 2023 at 03:00:04PM -0700, John Hubbard wrote:
> On 7/28/23 14:49, Peter Xu wrote:
> > > The story of how FOLL_NUMA and FOLL_FORCE became entangled was enlightening,
> > > by the way, and now that I've read it I don't want to go back. :)
> >
> > Yeah, I fully agree we should remove the NUMA / FORCE entanglement.. even
> > if we want to revert back to the FOLL_NUMA flag, we may want to not revive
> > that specific part. I had a feeling that we're all on the same page there.
> >
> >
>
> Yes, I think so. :)
>
> > It's more about the further step to make FOLL_NUMA opt-in for GUP.
>
> Let's say "FOLL_HONOR_NUMA_FAULT" for this next discussion, but yes. So
> given that our API allows passing in FOLL_ flags, I don't understand the
> objection to letting different callers pass in, or not pass in, that
> flag.
>
> It's the perfect way to clean up the whole thing. As Linus suggested
> slightly earlier here, there can be a comment at the call site,
> explaining why KVM needs FOLL_HONOR_NUMA_FAULT, and you're good, right?

I'm fine with experimenting here; as long as (at least) kvm is covered,
I'm not against it, at least not yet strongly.

But again, IMHO we're not only talking about "cleaning up" a flag - if this
even counts as a "cleanup" in the first place, which I'm not 100% sure of
yet - we're talking about changing the GUP ABI. What I wanted to point out
is that we should be careful: we're literally changing the GUP ABI for all
kernel modules with respect to its numa implications.
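
To make that concrete, here is a hedged sketch of an unchanged module-style
caller (stock GUP fast-path API; addr / pages / npinned are placeholders,
and the comments describe the proposed behaviour, not current mainline):

	/* An existing caller that is never updated keeps doing e.g.: */
	npinned = pin_user_pages_fast(addr, 1, FOLL_WRITE, pages);

	/*
	 * Today this resolves the NUMA hinting fault on a PROT_NONE-by-NUMA
	 * pte before mapping the page.  If honoring NUMA faults becomes
	 * opt-in and this caller never passes the new flag, GUP maps
	 * straight through the hinting protection instead, so the page
	 * quietly stops being a NUMA-balancing migration candidate while
	 * this caller uses it.
	 */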

Thanks,

--
Peter Xu