2018-04-24 16:27:20

by Michal Hocko

Subject: vmalloc with GFP_NOFS

Hi,
it seems that we still have a few vmalloc users who perform GFP_NOFS
allocation:
drivers/mtd/ubi/io.c
fs/ext4/xattr.c
fs/gfs2/dir.c
fs/gfs2/quota.c
fs/nfs/blocklayout/extent_tree.c
fs/ubifs/debug.c
fs/ubifs/lprops.c
fs/ubifs/lpt_commit.c
fs/ubifs/orphan.c

Unfortunately vmalloc doesn't support GFP_NOFS semantic properly
because we do have hardcoded GFP_KERNEL allocations deep inside the
vmalloc layers. That means that if GFP_NOFS really protects from
deadlocks caused by recursion into the fs then the vmalloc call is broken.

What to do about this? Well, there are two things. Firstly, it would be
really great to double check whether the GFP_NOFS is really needed. I
cannot judge that because I am not familiar with the code. It would be
great if the respective maintainers could check (hopefully
get_maintainer.sh pointed me to all the relevant ones). If there is no
reclaim recursion issue then simply use the standard vmalloc (aka a
GFP_KERNEL request).

If the use is really valid then we have a way to do the vmalloc
allocation properly. We have memalloc_nofs_{save,restore} scope api. How
does that work? You simply call memalloc_nofs_save when the reclaim
recursion critical section starts (e.g. when you take a lock which is
then used in the reclaim path - e.g. shrinker) and memalloc_nofs_restore
when the critical section ends. _All_ allocations within that scope
will get GFP_NOFS semantic automagically. If you are not sure about the
scope itself then the easiest workaround is to wrap the vmalloc call
itself in the scope api, with a big fat comment that this should be
revisited.
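
To illustrate, here is a minimal sketch of the pattern (the foo_* names
are made up for the example; memalloc_nofs_save/restore from
<linux/sched/mm.h> are the real interface):

#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/sched/mm.h>
#include <linux/vmalloc.h>

struct foo_info {
        struct mutex commit_lock;       /* also taken from our shrinker */
        size_t commit_size;
};

static int foo_do_commit(struct foo_info *fi)
{
        unsigned int nofs_flags;
        void *buf;
        int ret = 0;

        mutex_lock(&fi->commit_lock);
        /*
         * commit_lock is used in the reclaim path, so enter the scope:
         * every allocation from here on implicitly behaves as GFP_NOFS.
         */
        nofs_flags = memalloc_nofs_save();

        buf = vmalloc(fi->commit_size); /* plain vmalloc, no GFP_NOFS */
        if (buf)
                vfree(buf);
        else
                ret = -ENOMEM;

        memalloc_nofs_restore(nofs_flags);
        mutex_unlock(&fi->commit_lock);
        return ret;
}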

Does that sound like something that can be done in a reasonable time?
I have tried to bring this up in the past but our speed is glacial and
there are attempts to do hacks like checking for abusers inside the
vmalloc which is just too ugly to live.

Please do not hesitate to get back to me if something is not clear.

Thanks!
--
Michal Hocko
SUSE Labs


2018-04-24 16:46:59

by Mikulas Patocka

Subject: Re: vmalloc with GFP_NOFS



On Tue, 24 Apr 2018, Michal Hocko wrote:

> Hi,
> it seems that we still have a few vmalloc users who perform GFP_NOFS
> allocation:
> drivers/mtd/ubi/io.c
> fs/ext4/xattr.c
> fs/gfs2/dir.c
> fs/gfs2/quota.c
> fs/nfs/blocklayout/extent_tree.c
> fs/ubifs/debug.c
> fs/ubifs/lprops.c
> fs/ubifs/lpt_commit.c
> fs/ubifs/orphan.c
>
> Unfortunately vmalloc doesn't support GFP_NOFS semantic properly
> because we do have hardcoded GFP_KERNEL allocations deep inside the
> vmalloc layers. That means that if GFP_NOFS really protects from
> deadlocks caused by recursion into the fs then the vmalloc call is broken.
>
> What to do about this? Well, there are two things. Firstly, it would be
> really great to double check whether the GFP_NOFS is really needed. I
> cannot judge that because I am not familiar with the code. It would be
> great if the respective maintainers could check (hopefully
> get_maintainer.sh pointed me to all the relevant ones). If there is no
> reclaim recursion issue then simply use the standard vmalloc (aka a
> GFP_KERNEL request).
>
> If the use is really valid then we have a way to do the vmalloc
> allocation properly. We have memalloc_nofs_{save,restore} scope api. How
> does that work? You simply call memalloc_nofs_save when the reclaim
> recursion critical section starts (e.g. when you take a lock which is
> then used in the reclaim path - e.g. shrinker) and memalloc_nofs_restore
> when the critical section ends. _All_ allocations within that scope
> will get GFP_NOFS semantic automagically. If you are not sure about the
> scope itself then the easiest workaround is to wrap the vmalloc call
> itself in the scope api, with a big fat comment that this should be
> revisited.
>
> Does that sound like something that can be done in a reasonable time?
> I have tried to bring this up in the past but our speed is glacial and
> there are attempts to do hacks like checking for abusers inside the
> vmalloc which is just too ugly to live.
>
> Please do not hesitate to get back to me if something is not clear.
>
> Thanks!
> --
> Michal Hocko
> SUSE Labs

I made a patch that adds memalloc_noio/fs_save around these calls a year
ago: http://lkml.iu.edu/hypermail/linux/kernel/1707.0/01376.html

Mikulas

2018-04-24 16:55:38

by Michal Hocko

Subject: Re: vmalloc with GFP_NOFS

On Tue 24-04-18 12:46:55, Mikulas Patocka wrote:
>
>
> On Tue, 24 Apr 2018, Michal Hocko wrote:
>
> > Hi,
> > it seems that we still have a few vmalloc users who perform GFP_NOFS
> > allocation:
> > drivers/mtd/ubi/io.c
> > fs/ext4/xattr.c
> > fs/gfs2/dir.c
> > fs/gfs2/quota.c
> > fs/nfs/blocklayout/extent_tree.c
> > fs/ubifs/debug.c
> > fs/ubifs/lprops.c
> > fs/ubifs/lpt_commit.c
> > fs/ubifs/orphan.c
> >
> > Unfortunately vmalloc doesn't support GFP_NOFS semantic properly
> > because we do have hardcoded GFP_KERNEL allocations deep inside the
> > vmalloc layers. That means that if GFP_NOFS really protects from
> > deadlocks caused by recursion into the fs then the vmalloc call is broken.
> >
> > What to do about this? Well, there are two things. Firstly, it would be
> > really great to double check whether the GFP_NOFS is really needed. I
> > cannot judge that because I am not familiar with the code. It would be
> > great if the respective maintainers could check (hopefully
> > get_maintainer.sh pointed me to all the relevant ones). If there is no
> > reclaim recursion issue then simply use the standard vmalloc (aka a
> > GFP_KERNEL request).
> >
> > If the use is really valid then we have a way to do the vmalloc
> > allocation properly. We have memalloc_nofs_{save,restore} scope api. How
> > does that work? You simply call memalloc_nofs_save when the reclaim
> > recursion critical section starts (e.g. when you take a lock which is
> > then used in the reclaim path - e.g. shrinker) and memalloc_nofs_restore
> > when the critical section ends. _All_ allocations within that scope
> > will get GFP_NOFS semantic automagically. If you are not sure about the
> > scope itself then the easiest workaround is to wrap the vmalloc call
> > itself in the scope api, with a big fat comment that this should be
> > revisited.
> >
> > Does that sound like something that can be done in a reasonable time?
> > I have tried to bring this up in the past but our speed is glacial and
> > there are attempts to do hacks like checking for abusers inside the
> > vmalloc which is just too ugly to live.
> >
> > Please do not hesitate to get back to me if something is not clear.
> >
> > Thanks!
> > --
> > Michal Hocko
> > SUSE Labs
>
> I made a patch that adds memalloc_noio/fs_save around these calls a year
> ago: http://lkml.iu.edu/hypermail/linux/kernel/1707.0/01376.html

Yeah, and that is the wrong approach. Let's try to fix this properly
this time. As the above outlines, the worst case we can end up with
mid-term would be to wrap vmalloc calls in the scope api with a TODO. But I am
pretty sure the respective maintainers can come up with a better
solution. I am definitely willing to help here.
--
Michal Hocko
SUSE Labs

2018-04-24 17:05:27

by Mikulas Patocka

Subject: Re: vmalloc with GFP_NOFS



On Tue, 24 Apr 2018, Michal Hocko wrote:

> On Tue 24-04-18 12:46:55, Mikulas Patocka wrote:
> >
> >
> > On Tue, 24 Apr 2018, Michal Hocko wrote:
> >
> > > Hi,
> > > it seems that we still have a few vmalloc users who perform GFP_NOFS
> > > allocation:
> > > drivers/mtd/ubi/io.c
> > > fs/ext4/xattr.c
> > > fs/gfs2/dir.c
> > > fs/gfs2/quota.c
> > > fs/nfs/blocklayout/extent_tree.c
> > > fs/ubifs/debug.c
> > > fs/ubifs/lprops.c
> > > fs/ubifs/lpt_commit.c
> > > fs/ubifs/orphan.c
> > >
> > > Unfortunately vmalloc doesn't support GFP_NOFS semantic properly
> > > because we do have hardcoded GFP_KERNEL allocations deep inside the
> > > vmalloc layers. That means that if GFP_NOFS really protects from
> > > deadlocks caused by recursion into the fs then the vmalloc call is broken.
> > >
> > > What to do about this? Well, there are two things. Firstly, it would be
> > > really great to double check whether the GFP_NOFS is really needed. I
> > > cannot judge that because I am not familiar with the code. It would be
> > > great if the respective maintainers could check (hopefully
> > > get_maintainer.sh pointed me to all the relevant ones). If there is no
> > > reclaim recursion issue then simply use the standard vmalloc (aka a
> > > GFP_KERNEL request).
> > >
> > > If the use is really valid then we have a way to do the vmalloc
> > > allocation properly. We have memalloc_nofs_{save,restore} scope api. How
> > > does that work? You simply call memalloc_nofs_save when the reclaim
> > > recursion critical section starts (e.g. when you take a lock which is
> > > then used in the reclaim path - e.g. shrinker) and memalloc_nofs_restore
> > > when the critical section ends. _All_ allocations within that scope
> > > will get GFP_NOFS semantic automagically. If you are not sure about the
> > > scope itself then the easiest workaround is to wrap the vmalloc call
> > > itself in the scope api, with a big fat comment that this should be
> > > revisited.
> > >
> > > Does that sound like something that can be done in a reasonable time?
> > > I have tried to bring this up in the past but our speed is glacial and
> > > there are attempts to do hacks like checking for abusers inside the
> > > vmalloc which is just too ugly to live.
> > >
> > > Please do not hesitate to get back to me if something is not clear.
> > >
> > > Thanks!
> > > --
> > > Michal Hocko
> > > SUSE Labs
> >
> > I made a patch that adds memalloc_noio/fs_save around these calls a year
> > ago: http://lkml.iu.edu/hypermail/linux/kernel/1707.0/01376.html
>
> Yeah, and that is the wrong approach.

It is crude, but it fixes the deadlock possibility. Then, the maintainers
will have a lot of time to refactor the code and move these
memalloc_noio_save calls to the proper scope.

> Let's try to fix this properly
> this time. As the above outlines, the worst case we can end up with
> mid-term would be to wrap vmalloc calls in the scope api with a TODO. But I am
> pretty sure the respective maintainers can come up with a better
> solution. I am definitely willing to help here.
> --
> Michal Hocko
> SUSE Labs

Mikulas

2018-04-24 18:35:59

by Theodore Ts'o

Subject: Re: vmalloc with GFP_NOFS

On Tue, Apr 24, 2018 at 10:27:12AM -0600, Michal Hocko wrote:
> fs/ext4/xattr.c
>
> What to do about this? Well, there are two things. Firstly, it would be
> really great to double check whether the GFP_NOFS is really needed. I
> cannot judge that because I am not familiar with the code.

*Most* of the time it's not needed, but there are times when it is.
We could be more smart about sending down GFP_NOFS only when it is
needed. If we are sending too many GFP_NOFS allocations such that
it's causing heartburn, we could fix this. (xattr commands are rare
enough that I didn't think it was worth it to modulate the GFP flags
for this particular case, but we could make it smarter if it would
help.)

> If the use is really valid then we have a way to do the vmalloc
> allocation properly. We have memalloc_nofs_{save,restore} scope api. How
> does that work? You simply call memalloc_nofs_save when the reclaim
> recursion critical section starts (e.g. when you take a lock which is
> then used in the reclaim path - e.g. shrinker) and memalloc_nofs_restore
> when the critical section ends. _All_ allocations within that scope
> will get GFP_NOFS semantic automagically. If you are not sure about the
> scope itself then the easiest workaround is to wrap the vmalloc call
> itself in the scope api, with a big fat comment that this should be
> revisited.

This is something we could do in ext4. It hadn't been high priority,
because we've been rather overloaded. As a suggestion, could you take
documentation about how to convert to the memalloc_nofs_{save,restore}
scope api (which I think you've written about in e-mails at length
before), and put that into a file in Documentation/core-api?

The question I was trying to figure out, which triggered the above
request, is how/whether to gradually convert to that scope API. Is it
safe to add the memalloc_nofs_{save,restore} calls to the code and keep the
GFP_NOFS flags until we're sure we got it all right, for all of the
code paths, and then drop the GFP_NOFS?

Thanks,

- Ted

2018-04-24 19:10:12

by Richard Weinberger

Subject: Re: vmalloc with GFP_NOFS

[resending without html ...]

Am Dienstag, 24. April 2018, 18:27:12 CEST schrieb Michal Hocko:
> Hi,
> it seems that we still have a few vmalloc users who perform GFP_NOFS
> allocation:
> drivers/mtd/ubi/io.c

UBI is not a big deal. We use it here, as in UBIFS, for debugging
when self-checks are enabled.

> fs/ext4/xattr.c
> fs/gfs2/dir.c
> fs/gfs2/quota.c
> fs/nfs/blocklayout/extent_tree.c
> fs/ubifs/debug.c
> fs/ubifs/lprops.c
> fs/ubifs/lpt_commit.c
> fs/ubifs/orphan.c

All users in UBIFS are debugging code and some error reporting.
No fast paths.
I think we can switch to preallocation + locking without much hassle.
I can prepare a patch.
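
A rough sketch of the preallocation + locking idea, just to show the
shape of it (all names here are hypothetical, not the actual patch):

#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/vmalloc.h>

/*
 * Allocate the debug buffer once, from a context where plain
 * GFP_KERNEL is safe (e.g. mount time), and serialize its users.
 */
static void *dbg_buf;
static DEFINE_MUTEX(dbg_buf_lock);

int dbg_buf_init(size_t size)
{
        dbg_buf = vmalloc(size);
        return dbg_buf ? 0 : -ENOMEM;
}

void *dbg_buf_get(void)
{
        mutex_lock(&dbg_buf_lock);
        return dbg_buf;
}

void dbg_buf_put(void)
{
        mutex_unlock(&dbg_buf_lock);
}

void dbg_buf_exit(void)
{
        vfree(dbg_buf);
        dbg_buf = NULL;
}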

Thanks,
//richard

2018-04-24 19:25:48

by Michal Hocko

Subject: Re: vmalloc with GFP_NOFS

On Tue 24-04-18 14:35:36, Theodore Ts'o wrote:
> On Tue, Apr 24, 2018 at 10:27:12AM -0600, Michal Hocko wrote:
> > fs/ext4/xattr.c
> >
> > What to do about this? Well, there are two things. Firstly, it would be
> > really great to double check whether the GFP_NOFS is really needed. I
> > cannot judge that because I am not familiar with the code.
>
> *Most* of the time it's not needed, but there are times when it is.
> We could be more smart about sending down GFP_NOFS only when it is
> needed.

Well, the primary idea is that you do not have to. All you care about
is using the scope api where it matters, plus a comment describing the
reclaim recursion context (e.g. this lock will be held in the reclaim
path here and there).

> If we are sending too many GFP_NOFS's allocations such that
> it's causing heartburn, we could fix this. (xattr commands are rare
> enough that I dind't think it was worth it to modulate the GFP flags
> for this particular case, but we could make it be smarter if it would
> help.)

Well, the vmalloc is actually a correctness issue rather than a
heartburn...

> > If the use is really valid then we have a way to do the vmalloc
> > allocation properly. We have memalloc_nofs_{save,restore} scope api. How
> > does that work? You simply call memalloc_nofs_save when the reclaim
> > recursion critical section starts (e.g. when you take a lock which is
> > then used in the reclaim path - e.g. shrinker) and memalloc_nofs_restore
> > when the critical section ends. _All_ allocations within that scope
> > will get GFP_NOFS semantic automagically. If you are not sure about the
> > scope itself then the easiest workaround is to wrap the vmalloc call
> > itself in the scope api, with a big fat comment that this should be
> > revisited.
>
> This is something we could do in ext4. It hadn't been high priority,
> because we've been rather overloaded.

Well, ext/jbd already has scopes defined for the transaction context so
anything down that road can be converted to GFP_KERNEL (well, unless the
same code path is shared outside of the transaction context and still
requires protection). It would be really great to identify other
contexts and slowly move away from the explicit GFP_NOFS. Are you aware
of other contexts?

> As a suggestion, could you take
> documentation about how to convert to the memalloc_nofs_{save,restore}
> scope api (which I think you've written about in e-mails at length
> before), and put that into a file in Documentation/core-api?

I can.

> The question I was trying to figure out, which triggered the above
> request, is how/whether to gradually convert to that scope API. Is it
> safe to add the memalloc_nofs_{save,restore} calls to the code and keep the
> GFP_NOFS flags until we're sure we got it all right, for all of the
> code paths, and then drop the GFP_NOFS?

The first stage is to define and document those scopes. I have provided
a debugging patch [1] in the past that would dump_stack when seeing an
explicit GFP_NOFS from a scope, which could help eliminate existing
users.

[1] http://lkml.kernel.org/r/[email protected]
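
For reference, the gist of that check (a sketch of the idea only, not
the actual patch; PF_MEMALLOC_NOFS is the real task flag behind the
nofs scope):

#include <linux/gfp.h>
#include <linux/printk.h>
#include <linux/sched.h>

/*
 * Complain when an explicit GFP_NOFS request (IO allowed, FS not)
 * happens inside an already-marked nofs scope - the explicit flag is
 * redundant there and the call site could go back to GFP_KERNEL.
 */
static inline void warn_on_explicit_nofs(gfp_t gfp_mask)
{
        if ((gfp_mask & __GFP_IO) && !(gfp_mask & __GFP_FS) &&
            (current->flags & PF_MEMALLOC_NOFS))
                dump_stack();
}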
--
Michal Hocko
SUSE Labs

2018-04-24 19:28:08

by Michal Hocko

Subject: Re: vmalloc with GFP_NOFS

On Tue 24-04-18 21:03:43, Richard Weinberger wrote:
> Am Dienstag, 24. April 2018, 18:27:12 CEST schrieb Michal Hocko:
> > fs/ubifs/debug.c
>
> This one is just for debugging.
> So, preallocating + locking would not hurt much.
>
> > fs/ubifs/lprops.c
>
> Ditto.
>
> > fs/ubifs/lpt_commit.c
>
> Here we use it also only in debugging mode and in one case for
> fatal error reporting.
> No hot paths.
>
> > fs/ubifs/orphan.c
>
> Also only for debugging.
> Getting rid of vmalloc with GFP_NOFS in UBIFS is no big problem.
> I can prepare a patch.

Cool!

Anyway, if UBIFS has some reclaim recursion critical sections in general
it would be really great to have them documented and that is where the
scope api is really handy. Just add the scope and document what the
recursion issue is. This will help people reading the code as well. Ideally
there shouldn't be any explicit GFP_NOFS in the code.

Thanks for a quick turnaround.

--
Michal Hocko
SUSE Labs

2018-04-24 19:26:30

by Steven Whitehouse

Subject: Re: vmalloc with GFP_NOFS

Hi,


On 24/04/18 17:27, Michal Hocko wrote:
> Hi,
> it seems that we still have a few vmalloc users who perform GFP_NOFS
> allocation:
> drivers/mtd/ubi/io.c
> fs/ext4/xattr.c
> fs/gfs2/dir.c
> fs/gfs2/quota.c
> fs/nfs/blocklayout/extent_tree.c
> fs/ubifs/debug.c
> fs/ubifs/lprops.c
> fs/ubifs/lpt_commit.c
> fs/ubifs/orphan.c
>
> Unfortunately vmalloc doesn't support GFP_NOFS semantic properly
> because we do have hardcoded GFP_KERNEL allocations deep inside the
> vmalloc layers. That means that if GFP_NOFS really protects from
> deadlocks caused by recursion into the fs then the vmalloc call is broken.
>
> What to do about this? Well, there are two things. Firstly, it would be
> really great to double check whether the GFP_NOFS is really needed. I
> cannot judge that because I am not familiar with the code. It would be
> great if the respective maintainers could check (hopefully
> get_maintainer.sh pointed me to all the relevant ones). If there is no
> reclaim recursion issue then simply use the standard vmalloc (aka a
> GFP_KERNEL request).
For GFS2, and I suspect for other fs too, it is really needed. We don't
want to enter reclaim while holding filesystem locks.

> If the use is really valid then we have a way to do the vmalloc
> allocation properly. We have memalloc_nofs_{save,restore} scope api. How
> does that work? You simply call memalloc_nofs_save when the reclaim
> recursion critical section starts (e.g. when you take a lock which is
> then used in the reclaim path - e.g. shrinker) and memalloc_nofs_restore
> when the critical section ends. _All_ allocations within that scope
> will get GFP_NOFS semantic automagically. If you are not sure about the
> scope itself then the easiest workaround is to wrap the vmalloc call
> itself in the scope api, with a big fat comment that this should be
> revisited.
>
> Does that sound like something that can be done in a reasonable time?
> I have tried to bring this up in the past but our speed is glacial and
> there are attempts to do hacks like checking for abusers inside the
> vmalloc which is just too ugly to live.
>
> Please do not hesitate to get back to me if something is not clear.
>
> Thanks!

It would be good to fix this, and it has been known as an issue for a
long time. We might well be able to make use of the new API though. It
might be as simple as adding the calls when we get & release glocks, but
I'd have to check the code to be sure,

Steve.


2018-04-24 20:09:31

by Michal Hocko

Subject: Re: vmalloc with GFP_NOFS

On Tue 24-04-18 20:26:23, Steven Whitehouse wrote:
[...]
> It would be good to fix this, and it has been known as an issue for a long
> time. We might well be able to make use of the new API though. It might be
> as simple as adding the calls when we get & release glocks, but I'd have to
> check the code to be sure,

Yeah, starting with annotating those locking contexts and documenting
how they are used in the reclaim is a great first step. This has to
be done per-fs, obviously.
--
Michal Hocko
SUSE Labs

2018-04-24 22:18:49

by Richard Weinberger

Subject: Re: vmalloc with GFP_NOFS

Am Dienstag, 24. April 2018, 21:28:03 CEST schrieb Michal Hocko:
> > Also only for debugging.
> > Getting rid of vmalloc with GFP_NOFS in UBIFS is no big problem.
> > I can prepare a patch.
>
> Cool!
>
> Anyway, if UBIFS has some reclaim recursion critical sections in general
> it would be really great to have them documented and that is where the
> scope api is really handy. Just add the scope and document what the
> recursion issue is. This will help people reading the code as well. Ideally
> there shouldn't be any explicit GFP_NOFS in the code.

So in a perfect world a filesystem calls memalloc_nofs_save/restore and
always uses GFP_KERNEL for kmalloc/vmalloc?

Thanks,
//richard



2018-04-24 23:09:51

by Michal Hocko

Subject: Re: vmalloc with GFP_NOFS

On Wed 25-04-18 00:18:40, Richard Weinberger wrote:
> Am Dienstag, 24. April 2018, 21:28:03 CEST schrieb Michal Hocko:
> > > Also only for debugging.
> > > Getting rid of vmalloc with GFP_NOFS in UBIFS is no big problem.
> > > I can prepare a patch.
> >
> > Cool!
> >
> > Anyway, if UBIFS has some reclaim recursion critical sections in general
> > it would be really great to have them documented and that is where the
> > scope api is really handy. Just add the scope and document what the
> > recursion issue is. This will help people reading the code as well. Ideally
> > there shouldn't be any explicit GFP_NOFS in the code.
>
> So in a perfect world a filesystem calls memalloc_nofs_save/restore and
> always uses GFP_KERNEL for kmalloc/vmalloc?

Exactly! And in a dream world those memalloc_nofs_save calls act as
documentation of the reclaim recursion ;)
--
Michal Hocko
SUSE Labs

2018-04-24 23:17:17

by Mikulas Patocka

Subject: Re: vmalloc with GFP_NOFS



On Tue, 24 Apr 2018, Michal Hocko wrote:

> On Wed 25-04-18 00:18:40, Richard Weinberger wrote:
> > Am Dienstag, 24. April 2018, 21:28:03 CEST schrieb Michal Hocko:
> > > > Also only for debugging.
> > > > Getting rid of vmalloc with GFP_NOFS in UBIFS is no big problem.
> > > > I can prepare a patch.
> > >
> > > Cool!
> > >
> > > Anyway, if UBIFS has some reclaim recursion critical sections in general
> > > it would be really great to have them documented and that is where the
> > > scope api is really handy. Just add the scope and document what the
> > > recursion issue is. This will help people reading the code as well. Ideally
> > > there shouldn't be any explicit GFP_NOFS in the code.
> >
> > So in a perfect world a filesystem calls memalloc_nofs_save/restore and
> > always uses GFP_KERNEL for kmalloc/vmalloc?
>
> Exactly! And in a dream world those memalloc_nofs_save calls act as
> documentation of the reclaim recursion ;)
> --
> Michal Hocko
> SUSE Labs

BTW. should memalloc_nofs_save and memalloc_noio_save be merged into just
one that prevents both I/O and FS recursion?

memalloc_nofs_save allows submitting bios to I/O stack and the bios
created under memalloc_nofs_save could be sent to the loop device and the
loop device calls the filesystem...

Mikulas

2018-04-24 23:25:22

by Michal Hocko

Subject: Re: vmalloc with GFP_NOFS

On Tue 24-04-18 19:17:12, Mikulas Patocka wrote:
>
>
> On Tue, 24 Apr 2018, Michal Hocko wrote:
>
> > On Wed 25-04-18 00:18:40, Richard Weinberger wrote:
> > > Am Dienstag, 24. April 2018, 21:28:03 CEST schrieb Michal Hocko:
> > > > > Also only for debugging.
> > > > > Getting rid of vmalloc with GFP_NOFS in UBIFS is no big problem.
> > > > > I can prepare a patch.
> > > >
> > > > Cool!
> > > >
> > > > Anyway, if UBIFS has some reclaim recursion critical sections in general
> > > > it would be really great to have them documented and that is where the
> > > > scope api is really handy. Just add the scope and document what the
> > > > recursion issue is. This will help people reading the code as well. Ideally
> > > > there shouldn't be any explicit GFP_NOFS in the code.
> > >
> > > So in a perfect world a filesystem calls memalloc_nofs_save/restore and
> > > always uses GFP_KERNEL for kmalloc/vmalloc?
> >
> > Exactly! And in a dream world those memalloc_nofs_save calls act as
> > documentation of the reclaim recursion ;)
> > --
> > Michal Hocko
> > SUSE Labs
>
> BTW. should memalloc_nofs_save and memalloc_noio_save be merged into just
> one that prevents both I/O and FS recursion?

Why should FS usage stop IO altogether?

> memalloc_nofs_save allows submitting bios to I/O stack and the bios
> created under memalloc_nofs_save could be sent to the loop device and the
> loop device calls the filesystem...

Don't those use NOIO context?
--
Michal Hocko
SUSE Labs

2018-04-25 12:43:37

by Mikulas Patocka

Subject: Re: vmalloc with GFP_NOFS



On Tue, 24 Apr 2018, Michal Hocko wrote:

> On Tue 24-04-18 19:17:12, Mikulas Patocka wrote:
> >
> >
> > On Tue, 24 Apr 2018, Michal Hocko wrote:
> >
> > > On Wed 25-04-18 00:18:40, Richard Weinberger wrote:
> > > > Am Dienstag, 24. April 2018, 21:28:03 CEST schrieb Michal Hocko:
> > > > > > Also only for debugging.
> > > > > > Getting rid of vmalloc with GFP_NOFS in UBIFS is no big problem.
> > > > > > I can prepare a patch.
> > > > >
> > > > > Cool!
> > > > >
> > > > > Anyway, if UBIFS has some reclaim recursion critical sections in general
> > > > > it would be really great to have them documented and that is where the
> > > > > scope api is really handy. Just add the scope and document what the
> > > > > recursion issue is. This will help people reading the code as well. Ideally
> > > > > there shouldn't be any explicit GFP_NOFS in the code.
> > > >
> > > > So in a perfect world a filesystem calls memalloc_nofs_save/restore and
> > > > always uses GFP_KERNEL for kmalloc/vmalloc?
> > >
> > > Exactly! And in a dream world those memalloc_nofs_save calls act as
> > > documentation of the reclaim recursion ;)
> > > --
> > > Michal Hocko
> > > SUSE Labs
> >
> > BTW. should memalloc_nofs_save and memalloc_noio_save be merged into just
> > one that prevents both I/O and FS recursion?
>
> Why should FS usage stop IO altogether?

Because the IO may reach loop and loop may redirect it to the same
filesystem that is running under memalloc_nofs_save and deadlock.

> > memalloc_nofs_save allows submitting bios to I/O stack and the bios
> > created under memalloc_nofs_save could be sent to the loop device and the
> > loop device calls the filesystem...
>
> Don't those use NOIO context?

What do you mean?

Mikulas

2018-04-25 14:46:05

by Michal Hocko

Subject: Re: vmalloc with GFP_NOFS

On Wed 25-04-18 08:43:32, Mikulas Patocka wrote:
>
>
> On Tue, 24 Apr 2018, Michal Hocko wrote:
>
> > On Tue 24-04-18 19:17:12, Mikulas Patocka wrote:
> > >
> > >
> > > On Tue, 24 Apr 2018, Michal Hocko wrote:
> > >
> > > > On Wed 25-04-18 00:18:40, Richard Weinberger wrote:
> > > > > Am Dienstag, 24. April 2018, 21:28:03 CEST schrieb Michal Hocko:
> > > > > > > Also only for debugging.
> > > > > > > Getting rid of vmalloc with GFP_NOFS in UBIFS is no big problem.
> > > > > > > I can prepare a patch.
> > > > > >
> > > > > > Cool!
> > > > > >
> > > > > > Anyway, if UBIFS has some reclaim recursion critical sections in general
> > > > > > it would be really great to have them documented and that is where the
> > > > > > scope api is really handy. Just add the scope and document what the
> > > > > > recursion issue is. This will help people reading the code as well. Ideally
> > > > > > there shouldn't be any explicit GFP_NOFS in the code.
> > > > >
> > > > > So in a perfect world a filesystem calls memalloc_nofs_save/restore and
> > > > > always uses GFP_KERNEL for kmalloc/vmalloc?
> > > >
> > > > Exactly! And in a dream world those memalloc_nofs_save calls act as
> > > > documentation of the reclaim recursion ;)
> > > > --
> > > > Michal Hocko
> > > > SUSE Labs
> > >
> > > BTW. should memalloc_nofs_save and memalloc_noio_save be merged into just
> > > one that prevents both I/O and FS recursion?
> >
> > Why should FS usage stop IO altogether?
>
> Because the IO may reach loop and loop may redirect it to the same
> filesystem that is running under memalloc_nofs_save and deadlock.

So what is the difference with the current GFP_NOFS?

> > > memalloc_nofs_save allows submitting bios to I/O stack and the bios
> > > created under memalloc_nofs_save could be sent to the loop device and the
> > > loop device calls the filesystem...
> >
> > Don't those use NOIO context?
>
> What do you mean?

That the loop driver should make sure it will not recurse. The scope API
doesn't add anything new here.
--
Michal Hocko
SUSE Labs

2018-04-25 15:25:12

by Mikulas Patocka

Subject: Re: vmalloc with GFP_NOFS



On Wed, 25 Apr 2018, Michal Hocko wrote:

> On Wed 25-04-18 08:43:32, Mikulas Patocka wrote:
> >
> >
> > On Tue, 24 Apr 2018, Michal Hocko wrote:
> >
> > > On Tue 24-04-18 19:17:12, Mikulas Patocka wrote:
> > > >
> > > >
> > > > On Tue, 24 Apr 2018, Michal Hocko wrote:
> > > >
> > > > > > So in a perfect world a filesystem calls memalloc_nofs_save/restore and
> > > > > > always uses GFP_KERNEL for kmalloc/vmalloc?
> > > > >
> > > > > Exactly! And in a dream world those memalloc_nofs_save calls act as
> > > > > documentation of the reclaim recursion ;)
> > > > > --
> > > > > Michal Hocko
> > > > > SUSE Labs
> > > >
> > > > BTW. should memalloc_nofs_save and memalloc_noio_save be merged into just
> > > > one that prevents both I/O and FS recursion?
> > >
> > > Why should FS usage stop IO altogether?
> >
> > Because the IO may reach loop and loop may redirect it to the same
> > filesystem that is running under memalloc_nofs_save and deadlock.
>
> So what is the difference with the current GFP_NOFS?

My point is that filesystems should use GFP_NOIO too. If
alloc_pages(GFP_NOFS) issues some random I/O to some block device, the I/O
may end up being redirected (via the block loop device) to the filesystem
that is calling alloc_pages(GFP_NOFS).

> > > > memalloc_nofs_save allows submitting bios to I/O stack and the bios
> > > > created under memalloc_nofs_save could be sent to the loop device and the
> > > > loop device calls the filesystem...
> > >
> > > Don't those use NOIO context?
> >
> > What do you mean?
>
> That the loop driver should make sure it will not recurse. The scope API
> doesn't add anything new here.

The loop driver doesn't recurse. The loop driver will add the request to a
queue and wake up a thread that processes it. But if the request queue is
full, __get_request will wait until the loop thread finishes processing
some other request.

It doesn't recurse, but it waits until the filesystem makes some progress.

Mikulas

2018-04-25 16:57:02

by Michal Hocko

Subject: Re: vmalloc with GFP_NOFS

On Wed 25-04-18 11:25:09, Mikulas Patocka wrote:
>
>
> On Wed, 25 Apr 2018, Michal Hocko wrote:
>
> > On Wed 25-04-18 08:43:32, Mikulas Patocka wrote:
> > >
> > >
> > > On Tue, 24 Apr 2018, Michal Hocko wrote:
> > >
> > > > On Tue 24-04-18 19:17:12, Mikulas Patocka wrote:
> > > > >
> > > > >
> > > > > On Tue, 24 Apr 2018, Michal Hocko wrote:
> > > > >
> > > > > > > So in a perfect world a filesystem calls memalloc_nofs_save/restore and
> > > > > > > always uses GFP_KERNEL for kmalloc/vmalloc?
> > > > > >
> > > > > > Exactly! And in a dream world those memalloc_nofs_save calls act as
> > > > > > documentation of the reclaim recursion ;)
> > > > > > --
> > > > > > Michal Hocko
> > > > > > SUSE Labs
> > > > >
> > > > > BTW. should memalloc_nofs_save and memalloc_noio_save be merged into just
> > > > > one that prevents both I/O and FS recursion?
> > > >
> > > > Why should FS usage stop IO altogether?
> > >
> > > Because the IO may reach loop and loop may redirect it to the same
> > > filesystem that is running under memalloc_nofs_save and deadlock.
> >
> > So what is the difference with the current GFP_NOFS?
>
> My point is that filesystems should use GFP_NOIO too. If
> alloc_pages(GFP_NOFS) issues some random I/O to some block device, the I/O
> may end up being redirected (via the block loop device) to the filesystem
> that is calling alloc_pages(GFP_NOFS).

Talk to FS people, but I believe there is a good reason to distinguish
the two.

--
Michal Hocko
SUSE Labs

2018-05-09 13:42:28

by Michal Hocko

Subject: Re: vmalloc with GFP_NOFS

On Tue 24-04-18 13:25:42, Michal Hocko wrote:
[...]
> > As a suggestion, could you take
> > documentation about how to convert to the memalloc_nofs_{save,restore}
> > scope api (which I think you've written about in e-mails at length
> > before), and put that into a file in Documentation/core-api?
>
> I can.

Does something like the below sound reasonable/helpful?
---
=================================
GFP masks used from FS/IO context
=================================

:Date: May, 2018
:Author: Michal Hocko <[email protected]>

Introduction
============

FS resp. IO submitting code paths have to be careful when allocating
memory to prevent from potential recursion deadlocks caused by direct
memory reclaim calling back into the FS/IO path and block on already
held resources (e.g. locks). Traditional way to avoid this problem
is to clear __GFP_FS resp. __GFP_IO (note the later implies clearing
the first as well) in the gfp mask when calling an allocator. GFP_NOFS
resp. GFP_NOIO can be used as shortcut.

This has been the traditional way to avoid deadlocks since ages. It
turned out though that above approach has led to abuses when the restricted
gfp mask is used "just in case" without a deeper consideration which leads
to problems because an excessive use of GFP_NOFS/GFP_NOIO can lead to
memory over-reclaim or other memory reclaim issues.

New API
=======

Since 4.12 we do have a generic scope API for both NOFS and NOIO context
``memalloc_nofs_save``, ``memalloc_nofs_restore`` resp. ``memalloc_noio_save``,
``memalloc_noio_restore`` which allow to mark a scope to be a critical
section from the memory reclaim recursion into FS/IO POV. Any allocation
from that scope will inherently drop __GFP_FS resp. __GFP_IO from the given
mask so no memory allocation can recurse back in the FS/IO.

FS/IO code then simply calls the appropriate save function right at
the layer where a lock taken from the reclaim context (e.g. shrinker)
is taken and the corresponding restore function when the lock is
released. All that ideally along with an explanation what is the reclaim
context for easier maintenance.

What about __vmalloc(GFP_NOFS)
==============================

vmalloc doesn't support GFP_NOFS semantic because there are hardcoded
GFP_KERNEL allocations deep inside the allocator which are quit non-trivial
to fix up. That means that calling ``vmalloc`` with GFP_NOFS/GFP_NOIO is
almost always a bug. The good news is that the NOFS/NOIO semantic can be
achieved by the scope api.

In the ideal world, upper layers should already mark dangerous contexts
and so no special care is required and vmalloc should be called without
any problems. Sometimes if the context is not really clear or there are
layering violations then the recommended way around that is to wrap ``vmalloc``
by the scope API with a comment explaining the problem.
--
Michal Hocko
SUSE Labs

2018-05-09 14:16:17

by David Sterba

Subject: Re: vmalloc with GFP_NOFS

On Wed, May 09, 2018 at 03:42:22PM +0200, Michal Hocko wrote:
> On Tue 24-04-18 13:25:42, Michal Hocko wrote:
> [...]
> > > As a suggestion, could you take
> > > documentation about how to convert to the memalloc_nofs_{save,restore}
> > > scope api (which I think you've written about in e-mails at length
> > > before), and put that into a file in Documentation/core-api?
> >
> > I can.
>
> Does something like the below sound reasonable/helpful?

Sounds good to me and matches how we've been using the vmalloc/nofs so
far.

2018-05-09 15:15:34

by Darrick J. Wong

Subject: Re: vmalloc with GFP_NOFS

On Wed, May 09, 2018 at 03:42:22PM +0200, Michal Hocko wrote:
> On Tue 24-04-18 13:25:42, Michal Hocko wrote:
> [...]
> > > As a suggestion, could you take
> > > documentation about how to convert to the memalloc_nofs_{save,restore}
> > > scope api (which I think you've written about in e-mails at length
> > > before), and put that into a file in Documentation/core-api?
> >
> > I can.
>
> Does something like the below sound reasonable/helpful?
> ---
> =================================
> GFP masks used from FS/IO context
> =================================
>
> :Date: May, 2018
> :Author: Michal Hocko <[email protected]>
>
> Introduction
> ============
>
> FS resp. IO submitting code paths have to be careful when allocating

Not sure what 'FS resp. IO' means here -- 'FS and IO' ?

(Or is this one of those things where this looks like plain English text
but in reality it's some sort of markup that I'm not so familiar with?)

Confused because I've seen 'resp.' used as shorthand for
'responsible'...

> memory to prevent from potential recursion deadlocks caused by direct
> memory reclaim calling back into the FS/IO path and block on already
> held resources (e.g. locks). Traditional way to avoid this problem

'The traditional way to avoid this deadlock problem...'

> is to clear __GFP_FS resp. __GFP_IO (note the later implies clearing
> the first as well) in the gfp mask when calling an allocator. GFP_NOFS
> resp. GFP_NOIO can be used as shortcut.
>
> This has been the traditional way to avoid deadlocks since ages. It

I think this sentence is a little redundant with the previous sentence,
you could chop it out and join this paragraph to the one before it.

> turned out though that above approach has led to abuses when the restricted
> gfp mask is used "just in case" without a deeper consideration which leads
> to problems because an excessive use of GFP_NOFS/GFP_NOIO can lead to
> memory over-reclaim or other memory reclaim issues.
>
> New API
> =======
>
> Since 4.12 we do have a generic scope API for both NOFS and NOIO context
> ``memalloc_nofs_save``, ``memalloc_nofs_restore`` resp. ``memalloc_noio_save``,
> ``memalloc_noio_restore`` which allow to mark a scope to be a critical
> section from the memory reclaim recursion into FS/IO POV. Any allocation
> from that scope will inherently drop __GFP_FS resp. __GFP_IO from the given
> mask so no memory allocation can recurse back in the FS/IO.
>
> FS/IO code then simply calls the appropriate save function right at
> the layer where a lock taken from the reclaim context (e.g. shrinker)
> is taken and the corresponding restore function when the lock is
> released. All that ideally along with an explanation what is the reclaim
> context for easier maintenance.
>
> What about __vmalloc(GFP_NOFS)
> ==============================
>
> vmalloc doesn't support GFP_NOFS semantic because there are hardcoded
> GFP_KERNEL allocations deep inside the allocator which are quit non-trivial

...which are quite non-trivial...

> to fix up. That means that calling ``vmalloc`` with GFP_NOFS/GFP_NOIO is
> almost always a bug. The good news is that the NOFS/NOIO semantic can be
> achieved by the scope api.
>
> In the ideal world, upper layers should already mark dangerous contexts
> and so no special care is required and vmalloc should be called without
> any problems. Sometimes if the context is not really clear or there are
> layering violations then the recommended way around that is to wrap ``vmalloc``
> by the scope API with a comment explaining the problem.

Otherwise looks ok to me based on my understanding of how all this is
supposed to work...

Reviewed-by: Darrick J. Wong <[email protected]>

--D

> --
> Michal Hocko
> SUSE Labs

2018-05-09 16:25:09

by Mike Rapoport

Subject: Re: vmalloc with GFP_NOFS

On Wed, May 09, 2018 at 08:13:51AM -0700, Darrick J. Wong wrote:
> On Wed, May 09, 2018 at 03:42:22PM +0200, Michal Hocko wrote:
> > On Tue 24-04-18 13:25:42, Michal Hocko wrote:
> > [...]
> > > > As a suggestion, could you take
> > > > documentation about how to convert to the memalloc_nofs_{save,restore}
> > > > scope api (which I think you've written about in e-mails at length
> > > > before), and put that into a file in Documentation/core-api?
> > >
> > > I can.
> >
> > Does something like the below sound reasonable/helpful?
> > ---
> > =================================
> > GFP masks used from FS/IO context
> > =================================
> >
> > :Date: May, 2018
> > :Author: Michal Hocko <[email protected]>
> >
> > Introduction
> > ============
> >
> > FS resp. IO submitting code paths have to be careful when allocating
>
> Not sure what 'FS resp. IO' means here -- 'FS and IO' ?
>
> (Or is this one of those things where this looks like plain English text
> but in reality it's some sort of markup that I'm not so familiar with?)
>
> Confused because I've seen 'resp.' used as shorthand for
> 'responsible'...
>
> > memory to prevent from potential recursion deadlocks caused by direct
> > memory reclaim calling back into the FS/IO path and block on already
> > held resources (e.g. locks). Traditional way to avoid this problem
>
> 'The traditional way to avoid this deadlock problem...'
>
> > is to clear __GFP_FS resp. __GFP_IO (note the later implies clearing
> > the first as well) in the gfp mask when calling an allocator. GFP_NOFS
> > resp. GFP_NOIO can be used as shortcut.
> >
> > This has been the traditional way to avoid deadlocks since ages. It
>
> I think this sentence is a little redundant with the previous sentence,
> you could chop it out and join this paragraph to the one before it.
>
> > turned out though that above approach has led to abuses when the restricted
> > gfp mask is used "just in case" without a deeper consideration which leads
> > to problems because an excessive use of GFP_NOFS/GFP_NOIO can lead to
> > memory over-reclaim or other memory reclaim issues.
> >
> > New API
> > =======
> >
> > Since 4.12 we do have a generic scope API for both NOFS and NOIO context
> > ``memalloc_nofs_save``, ``memalloc_nofs_restore`` resp. ``memalloc_noio_save``,
> > ``memalloc_noio_restore`` which allow to mark a scope to be a critical
> > section from the memory reclaim recursion into FS/IO POV. Any allocation
> > from that scope will inherently drop __GFP_FS resp. __GFP_IO from the given
> > mask so no memory allocation can recurse back in the FS/IO.
> >
> > FS/IO code then simply calls the appropriate save function right at
> > the layer where a lock taken from the reclaim context (e.g. shrinker)
> > is taken and the corresponding restore function when the lock is

Seems like the second "is taken" got there by mistake

> > released. All that ideally along with an explanation what is the reclaim
> > context for easier maintenance.
> >
> > What about __vmalloc(GFP_NOFS)
> > ==============================
> >
> > vmalloc doesn't support GFP_NOFS semantic because there are hardcoded
> > GFP_KERNEL allocations deep inside the allocator which are quit non-trivial
>
> ...which are quite non-trivial...
>
> > to fix up. That means that calling ``vmalloc`` with GFP_NOFS/GFP_NOIO is
> > almost always a bug. The good news is that the NOFS/NOIO semantic can be
> > achieved by the scope api.
> >
> > In the ideal world, upper layers should already mark dangerous contexts
> > and so no special care is required and vmalloc should be called without
> > any problems. Sometimes if the context is not really clear or there are
> > layering violations then the recommended way around that is to wrap ``vmalloc``
> > by the scope API with a comment explaining the problem.
>
> Otherwise looks ok to me based on my understanding of how all this is
> supposed to work...
>
> Reviewed-by: Darrick J. Wong <[email protected]>
>
> --D
>
> > --
> > Michal Hocko
> > SUSE Labs
>

--
Sincerely yours,
Mike.


2018-05-09 21:04:51

by Michal Hocko

Subject: Re: vmalloc with GFP_NOFS

On Wed 09-05-18 08:13:51, Darrick J. Wong wrote:
> On Wed, May 09, 2018 at 03:42:22PM +0200, Michal Hocko wrote:
> > On Tue 24-04-18 13:25:42, Michal Hocko wrote:
> > [...]
> > > > As a suggestion, could you take
> > > > documentation about how to convert to the memalloc_nofs_{save,restore}
> > > > scope api (which I think you've written about in e-mails at length
> > > > before), and put that into a file in Documentation/core-api?
> > >
> > > I can.
> >
> > Does something like the below sound reasonable/helpful?
> > ---
> > =================================
> > GFP masks used from FS/IO context
> > =================================
> >
> > :Date: May, 2018
> > :Author: Michal Hocko <[email protected]>
> >
> > Introduction
> > ============
> >
> > FS resp. IO submitting code paths have to be careful when allocating
>
> Not sure what 'FS resp. IO' means here -- 'FS and IO' ?
>
> (Or is this one of those things where this looks like plain English text
> but in reality it's some sort of markup that I'm not so familiar with?)
>
> Confused because I've seen 'resp.' used as shorthand for
> 'responsible'...

Well, I've tried to cover both. Filesystem and IO code paths which
allocate while in sensitive context. IO submission is kinda clear but I
am not sure what a general term for filesystem code paths would be. I
would be grateful for any hints here.

>
> > memory to prevent from potential recursion deadlocks caused by direct
> > memory reclaim calling back into the FS/IO path and block on already
> > held resources (e.g. locks). Traditional way to avoid this problem
>
> 'The traditional way to avoid this deadlock problem...'

Done

> > is to clear __GFP_FS resp. __GFP_IO (note the later implies clearing
> > the first as well) in the gfp mask when calling an allocator. GFP_NOFS
> > resp. GFP_NOIO can be used as shortcut.
> >
> > This has been the traditional way to avoid deadlocks since ages. It
>
> I think this sentence is a little redundant with the previous sentence,
> you could chop it out and join this paragraph to the one before it.

OK

>
> > turned out though that above approach has led to abuses when the restricted
> > gfp mask is used "just in case" without a deeper consideration which leads
> > to problems because an excessive use of GFP_NOFS/GFP_NOIO can lead to
> > memory over-reclaim or other memory reclaim issues.
> >
> > New API
> > =======
> >
> > Since 4.12 we do have a generic scope API for both NOFS and NOIO context
> > ``memalloc_nofs_save``, ``memalloc_nofs_restore`` resp. ``memalloc_noio_save``,
> > ``memalloc_noio_restore`` which allow to mark a scope to be a critical
> > section from the memory reclaim recursion into FS/IO POV. Any allocation
> > from that scope will inherently drop __GFP_FS resp. __GFP_IO from the given
> > mask so no memory allocation can recurse back in the FS/IO.
> >
> > FS/IO code then simply calls the appropriate save function right at
> > the layer where a lock taken from the reclaim context (e.g. shrinker)
> > is taken and the corresponding restore function when the lock is
> > released. All that ideally along with an explanation what is the reclaim
> > context for easier maintenance.
> >
> > What about __vmalloc(GFP_NOFS)
> > ==============================
> >
> > vmalloc doesn't support GFP_NOFS semantic because there are hardcoded
> > GFP_KERNEL allocations deep inside the allocator which are quit non-trivial
>
> ...which are quite non-trivial...

fixed

> > to fix up. That means that calling ``vmalloc`` with GFP_NOFS/GFP_NOIO is
> > almost always a bug. The good news is that the NOFS/NOIO semantic can be
> > achieved by the scope api.
> >
> > In the ideal world, upper layers should already mark dangerous contexts
> > and so no special care is required and vmalloc should be called without
> > any problems. Sometimes if the context is not really clear or there are
> > layering violations then the recommended way around that is to wrap ``vmalloc``
> > by the scope API with a comment explaining the problem.
>
> Otherwise looks ok to me based on my understanding of how all this is
> supposed to work...
>
> Reviewed-by: Darrick J. Wong <[email protected]>

Thanks for your review!

--
Michal Hocko
SUSE Labs

2018-05-09 21:06:20

by Michal Hocko

Subject: Re: vmalloc with GFP_NOFS

On Wed 09-05-18 19:24:51, Mike Rapoport wrote:
> On Wed, May 09, 2018 at 08:13:51AM -0700, Darrick J. Wong wrote:
> > On Wed, May 09, 2018 at 03:42:22PM +0200, Michal Hocko wrote:
[...]
> > > FS/IO code then simply calls the appropriate save function right at
> > > the layer where a lock taken from the reclaim context (e.g. shrinker)
> > > is taken and the corresponding restore function when the lock is
>
> Seems like the second "is taken" got there by mistake

yeah, fixed. Thanks!
--
Michal Hocko
SUSE Labs

2018-05-09 22:04:07

by Darrick J. Wong

Subject: Re: vmalloc with GFP_NOFS

On Wed, May 09, 2018 at 11:04:47PM +0200, Michal Hocko wrote:
> On Wed 09-05-18 08:13:51, Darrick J. Wong wrote:
> > On Wed, May 09, 2018 at 03:42:22PM +0200, Michal Hocko wrote:
> > > On Tue 24-04-18 13:25:42, Michal Hocko wrote:
> > > [...]
> > > > > As a suggestion, could you take
> > > > > documentation about how to convert to the memalloc_nofs_{save,restore}
> > > > > scope api (which I think you've written about in e-mails at length
> > > > > before), and put that into a file in Documentation/core-api?
> > > >
> > > > I can.
> > >
> > > Does something like the below sound reasonable/helpful?
> > > ---
> > > =================================
> > > GFP masks used from FS/IO context
> > > =================================
> > >
> > > :Date: May, 2018
> > > :Author: Michal Hocko <[email protected]>
> > >
> > > Introduction
> > > ============
> > >
> > > FS resp. IO submitting code paths have to be careful when allocating
> >
> > Not sure what 'FS resp. IO' means here -- 'FS and IO' ?
> >
> > (Or is this one of those things where this looks like plain English text
> > but in reality it's some sort of markup that I'm not so familiar with?)
> >
> > Confused because I've seen 'resp.' used as shorthand for
> > 'responsible'...
>
> Well, I've tried to cover both. Filesystem and IO code paths which
> allocate while in sensitive context. IO submission is kinda clear but I
> am not sure what a general term for filesystem code paths would be. I
> would be grateful for any hints here.

"Code paths in the filesystem and IO stacks must be careful when
allocating memory to prevent recursion deadlocks caused by direct memory
reclaim calling back into the FS or IO paths and blocking on already
held resources (e.g. locks)." ?

--D

>
> >
> > > memory to prevent from potential recursion deadlocks caused by direct
> > > memory reclaim calling back into the FS/IO path and block on already
> > > held resources (e.g. locks). Traditional way to avoid this problem
> >
> > 'The traditional way to avoid this deadlock problem...'
>
> Done
>
> > > is to clear __GFP_FS resp. __GFP_IO (note the later implies clearing
> > > the first as well) in the gfp mask when calling an allocator. GFP_NOFS
> > > resp. GFP_NOIO can be used as shortcut.
> > >
> > > This has been the traditional way to avoid deadlocks since ages. It
> >
> > I think this sentence is a little redundant with the previous sentence,
> > you could chop it out and join this paragraph to the one before it.
>
> OK
>
> >
> > > turned out though that above approach has led to abuses when the restricted
> > > gfp mask is used "just in case" without a deeper consideration which leads
> > > to problems because an excessive use of GFP_NOFS/GFP_NOIO can lead to
> > > memory over-reclaim or other memory reclaim issues.
> > >
> > > New API
> > > =======
> > >
> > > Since 4.12 we do have a generic scope API for both NOFS and NOIO context
> > > ``memalloc_nofs_save``, ``memalloc_nofs_restore`` resp. ``memalloc_noio_save``,
> > > ``memalloc_noio_restore`` which allow to mark a scope to be a critical
> > > section from the memory reclaim recursion into FS/IO POV. Any allocation
> > > from that scope will inherently drop __GFP_FS resp. __GFP_IO from the given
> > > mask so no memory allocation can recurse back in the FS/IO.
> > >
> > > FS/IO code then simply calls the appropriate save function right at
> > > the layer where a lock taken from the reclaim context (e.g. shrinker)
> > > is taken and the corresponding restore function when the lock is
> > > released. All that ideally along with an explanation what is the reclaim
> > > context for easier maintenance.
> > >
> > > What about __vmalloc(GFP_NOFS)
> > > ==============================
> > >
> > > vmalloc doesn't support GFP_NOFS semantic because there are hardcoded
> > > GFP_KERNEL allocations deep inside the allocator which are quit non-trivial
> >
> > ...which are quite non-trivial...
>
> fixed
>
> > > to fix up. That means that calling ``vmalloc`` with GFP_NOFS/GFP_NOIO is
> > > almost always a bug. The good news is that the NOFS/NOIO semantic can be
> > > achieved by the scope api.
> > >
> > > In the ideal world, upper layers should already mark dangerous contexts
> > > and so no special care is required and vmalloc should be called without
> > > any problems. Sometimes if the context is not really clear or there are
> > > layering violations then the recommended way around that is to wrap ``vmalloc``
> > > by the scope API with a comment explaining the problem.
> >
> > Otherwise looks ok to me based on my understanding of how all this is
> > supposed to work...
> >
> > Reviewed-by: Darrick J. Wong <[email protected]>
>
> Thanks for your review!
>
> --
> Michal Hocko
> SUSE Labs

2018-05-10 05:58:29

by Michal Hocko

Subject: Re: vmalloc with GFP_NOFS

On Wed 09-05-18 15:02:31, Darrick J. Wong wrote:
> On Wed, May 09, 2018 at 11:04:47PM +0200, Michal Hocko wrote:
> > On Wed 09-05-18 08:13:51, Darrick J. Wong wrote:
[...]
> > > > FS resp. IO submitting code paths have to be careful when allocating
> > >
> > > Not sure what 'FS resp. IO' means here -- 'FS and IO' ?
> > >
> > > (Or is this one of those things where this looks like plain English text
> > > but in reality it's some sort of markup that I'm not so familiar with?)
> > >
> > > Confused because I've seen 'resp.' used as shorthand for
> > > 'responsible'...
> >
> > Well, I've tried to cover both. Filesystem and IO code paths which
> > allocate while in sensitive context. IO submission is kinda clear but I
> > am not sure what a general term for filesystem code paths would be. I
> > would be grateful for any hints here.
>
> "Code paths in the filesystem and IO stacks must be careful when
> allocating memory to prevent recursion deadlocks caused by direct memory
> reclaim calling back into the FS or IO paths and blocking on already
> held resources (e.g. locks)." ?

Great, thanks!
--
Michal Hocko
SUSE Labs

2018-05-10 07:18:31

by Michal Hocko

Subject: Re: vmalloc with GFP_NOFS

On Thu 10-05-18 07:58:25, Michal Hocko wrote:
> On Wed 09-05-18 15:02:31, Darrick J. Wong wrote:
> > On Wed, May 09, 2018 at 11:04:47PM +0200, Michal Hocko wrote:
> > > On Wed 09-05-18 08:13:51, Darrick J. Wong wrote:
> [...]
> > > > > FS resp. IO submitting code paths have to be careful when allocating
> > > >
> > > > Not sure what 'FS resp. IO' means here -- 'FS and IO' ?
> > > >
> > > > (Or is this one of those things where this looks like plain English text
> > > > but in reality it's some sort of markup that I'm not so familiar with?)
> > > >
> > > > Confused because I've seen 'resp.' used as shorthand for
> > > > 'responsible'...
> > >
> > > Well, I've tried to cover both. Filesystem and IO code paths which
> > > allocate while in sensitive context. IO submission is kinda clear but I
> > > am not sure what a general term for filesystem code paths would be. I
> > > would be grateful for any hints here.
> >
> > "Code paths in the filesystem and IO stacks must be careful when
> > allocating memory to prevent recursion deadlocks caused by direct memory
> > reclaim calling back into the FS or IO paths and blocking on already
> > held resources (e.g. locks)." ?
>
> Great, thanks!

I dared to extend the last part to "(e.g. locks - most commonly those
used for the transaction context)"
--
Michal Hocko
SUSE Labs

2018-07-17 13:19:39

by Michal Hocko

Subject: Re: vmalloc with GFP_NOFS

On Tue 24-04-18 21:03:43, Richard Weinberger wrote:
> Am Dienstag, 24. April 2018, 18:27:12 CEST schrieb Michal Hocko:
> > fs/ubifs/debug.c
>
> This one is just for debugging.
> So, preallocating + locking would not hurt much.
>
> > fs/ubifs/lprops.c
>
> Ditto.
>
> > fs/ubifs/lpt_commit.c
>
> Here we use it also only in debugging mode and in one case for
> fatal error reporting.
> No hot paths.
>
> > fs/ubifs/orphan.c
>
> Also only for debugging.
> Getting rid of vmalloc with GFP_NOFS in UBIFS is no big problem.
> I can prepare a patch.

Hi Richard, I have just got back to this and noticed that the vmalloc
NOFS usage is still there. Do you have any plans to push changes to
remove it?
--
Michal Hocko
SUSE Labs

2018-07-17 13:22:03

by Michal Hocko

Subject: Re: vmalloc with GFP_NOFS

On Tue 24-04-18 14:35:36, Theodore Ts'o wrote:
> On Tue, Apr 24, 2018 at 10:27:12AM -0600, Michal Hocko wrote:
> > fs/ext4/xattr.c
> >
> > What to do about this? Well, there are two things. Firstly, it would be
> > really great to double check whether the GFP_NOFS is really needed. I
> > cannot judge that because I am not familiar with the code.
>
> *Most* of the time it's not needed, but there are times when it is.
> We could be more smart about sending down GFP_NOFS only when it is
> needed. If we are sending too many GFP_NOFS allocations such that
> it's causing heartburn, we could fix this. (xattr commands are rare
> enough that I didn't think it was worth it to modulate the GFP flags
> for this particular case, but we could make it smarter if it would
> help.)

There still seem to be ext4_kvmalloc(NOFS) callers in the ext4 code. Do
you have any plans to get rid of those?
--
Michal Hocko
SUSE Labs

2018-07-17 13:23:33

by Michal Hocko

Subject: Re: vmalloc with GFP_NOFS

On Tue 24-04-18 14:09:27, Michal Hocko wrote:
> On Tue 24-04-18 20:26:23, Steven Whitehouse wrote:
> [...]
> > It would be good to fix this, and it has been known as an issue for a long
> > time. We might well be able to make use of the new API though. It might be
> > as simple as adding the calls when we get & release glocks, but I'd have to
> > check the code to be sure,
>
> Yeah, starting with annotating those locking contexts and documenting
> how they are used in the reclaim is a great first step. This has to
> be done per-fs, obviously.

Any chance of progress here?
--
Michal Hocko
SUSE Labs