2015-06-30 15:17:41

by Michal Hocko

Subject: [PATCH] mm, vmscan: Do not wait for page writeback for GFP_NOFS allocations

Nikolay has reported a hang when a memcg reclaim got stuck with the
following backtrace:
PID: 18308 TASK: ffff883d7c9b0a30 CPU: 1 COMMAND: "rsync"
#0 [ffff88177374ac60] __schedule at ffffffff815ab152
#1 [ffff88177374acb0] schedule at ffffffff815ab76e
#2 [ffff88177374acd0] schedule_timeout at ffffffff815ae5e5
#3 [ffff88177374ad70] io_schedule_timeout at ffffffff815aad6a
#4 [ffff88177374ada0] bit_wait_io at ffffffff815abfc6
#5 [ffff88177374adb0] __wait_on_bit at ffffffff815abda5
#6 [ffff88177374ae00] wait_on_page_bit at ffffffff8111fd4f
#7 [ffff88177374ae50] shrink_page_list at ffffffff81135445
#8 [ffff88177374af50] shrink_inactive_list at ffffffff81135845
#9 [ffff88177374b060] shrink_lruvec at ffffffff81135ead
#10 [ffff88177374b150] shrink_zone at ffffffff811360c3
#11 [ffff88177374b220] shrink_zones at ffffffff81136eff
#12 [ffff88177374b2a0] do_try_to_free_pages at ffffffff8113712f
#13 [ffff88177374b300] try_to_free_mem_cgroup_pages at ffffffff811372be
#14 [ffff88177374b380] try_charge at ffffffff81189423
#15 [ffff88177374b430] mem_cgroup_try_charge at ffffffff8118c6f5
#16 [ffff88177374b470] __add_to_page_cache_locked at ffffffff8112137d
#17 [ffff88177374b4e0] add_to_page_cache_lru at ffffffff81121618
#18 [ffff88177374b510] pagecache_get_page at ffffffff8112170b
#19 [ffff88177374b560] grow_dev_page at ffffffff811c8297
#20 [ffff88177374b5c0] __getblk_slow at ffffffff811c91d6
#21 [ffff88177374b600] __getblk_gfp at ffffffff811c92c1
#22 [ffff88177374b630] ext4_ext_grow_indepth at ffffffff8124565c
#23 [ffff88177374b690] ext4_ext_create_new_leaf at ffffffff81246ca8
#24 [ffff88177374b6e0] ext4_ext_insert_extent at ffffffff81246f09
#25 [ffff88177374b750] ext4_ext_map_blocks at ffffffff8124a848
#26 [ffff88177374b870] ext4_map_blocks at ffffffff8121a5b7
#27 [ffff88177374b910] mpage_map_one_extent at ffffffff8121b1fa
#28 [ffff88177374b950] mpage_map_and_submit_extent at ffffffff8121f07b
#29 [ffff88177374b9b0] ext4_writepages at ffffffff8121f6d5
#30 [ffff88177374bb20] do_writepages at ffffffff8112c490
#31 [ffff88177374bb30] __filemap_fdatawrite_range at ffffffff81120199
#32 [ffff88177374bb80] filemap_flush at ffffffff8112041c
#33 [ffff88177374bb90] ext4_alloc_da_blocks at ffffffff81219da1
#34 [ffff88177374bbb0] ext4_rename at ffffffff81229b91
#35 [ffff88177374bcd0] ext4_rename2 at ffffffff81229e32
#36 [ffff88177374bce0] vfs_rename at ffffffff811a08a5
#37 [ffff88177374bd60] SYSC_renameat2 at ffffffff811a3ffc
#38 [ffff88177374bf60] sys_renameat2 at ffffffff811a408e
#39 [ffff88177374bf70] sys_rename at ffffffff8119e51e
#40 [ffff88177374bf80] system_call_fastpath at ffffffff815afa89

Dave Chinner has properly pointed out that this is a deadlock in the
reclaim code because ext4 doesn't submit pages which are marked by
PG_writeback right away. The heuristic introduced by e62e384e9da8
("memcg: prevent OOM with too many dirty pages") assumes that pages
marked as writeback will be written out eventually without requiring any
memcg charges. This is not true for ext4 though.

ext4_bio_write_page calls io_submit_add_bh but that doesn't necessarily
submit the bio. Instead it tries to map more pages into the bio and
mpage_map_one_extent might trigger a memcg charge which might end up
waiting on a page which is marked PG_writeback but hasn't been submitted
yet, so we would end up waiting for something that never finishes.
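
To make the ordering concrete, here is a heavily condensed sketch of the
pattern (schematic only, not the actual ext4 code; the function names are
the ones from the backtrace above):

	/* ext4_writepages() path, schematic: */
	set_page_writeback(page);		/* page A is now PG_writeback */
	io_submit_add_bh(io, inode, bh);	/* A queued in the bio, NOT yet submitted */

	/* mpage_map_one_extent() then needs a new extent tree block: */
	bh = sb_getblk(inode->i_sb, newblock);	/* allocates with __GFP_FS set */
	/*
	 * The new page cache page is charged to the same memcg; if the
	 * charge hits the limit, reclaim runs and may call
	 * wait_on_page_writeback() on page A.  A's bio is only submitted
	 * after this allocation returns, so the writeback bit never
	 * clears and the task hangs.
	 */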

Fix this issue by limiting the wait to reclaim triggered by __GFP_FS
allocations to make sure we are not called from filesystem paths
which might be doing exactly this kind of IO optimization. The page
fault path shouldn't require GFP_NOFS and so we shouldn't reintroduce
the premature OOM killer issue which was originally addressed by the
heuristic.

Reported-by: Nikolay Borisov <[email protected]>
Signed-off-by: Michal Hocko <[email protected]>
---

Hi,
the issue has been reported at http://marc.info/?l=linux-kernel&m=143522730927480.
This obviously requires a patch to make ext4_ext_grow_indepth call
sb_getblk with the GFP_NOFS mask, but that one makes sense on its own
and Ted has mentioned he will push it. I haven't marked the patch for
stable yet. This is the first time the issue has been reported, the
ext4 writeout code has changed considerably in 3.11, and I am not sure
the issue was present before. e62e384e9da8, which introduced the
wait_on_page_writeback, was merged in 3.6, which is quite some time
ago. If we go with stable I would suggest marking it for 3.11+, and it
should obviously go with the ext4_ext_grow_indepth fix.
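
For reference, that ext4-side fix boils down to letting ext4_ext_grow_indepth
pass a NOFS gfp mask down to the buffer cache. A minimal sketch of the shape
such a change could take (an illustration, not a quote of the actual patch):

	/* sketch: a sb_getblk variant taking an explicit gfp mask */
	static inline struct buffer_head *
	sb_getblk_gfp(struct super_block *sb, sector_t block, gfp_t gfp)
	{
		return __getblk_gfp(sb->s_bdev, block, sb->s_blocksize, gfp);
	}

	/* ext4_ext_grow_indepth() then avoids __GFP_FS for the new block: */
	bh = sb_getblk_gfp(inode->i_sb, newblock, __GFP_MOVABLE | GFP_NOFS);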

mm/vmscan.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 37e90db1520b..6c44d424968e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -995,7 +995,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
goto keep_locked;

/* Case 3 above */
- } else {
+ } else if (sc->gfp_mask & __GFP_FS) {
wait_on_page_writeback(page);
}
}
--
2.1.4


2015-07-01 06:17:36

by Michal Hocko

Subject: Re: [PATCH] mm, vmscan: Do not wait for page writeback for GFP_NOFS allocations

On Tue 30-06-15 17:17:17, Michal Hocko wrote:
[...]
> Hi,
> the issue has been reported at http://marc.info/?l=linux-kernel&m=143522730927480.
> This obviously requires a patch to make ext4_ext_grow_indepth call
> sb_getblk with the GFP_NOFS mask, but that one makes sense on its own
> and Ted has mentioned he will push it. I haven't marked the patch for
> stable yet. This is the first time the issue has been reported, the
> ext4 writeout code has changed considerably in 3.11, and I am not sure
> the issue was present before. e62e384e9da8, which introduced the
> wait_on_page_writeback, was merged in 3.6, which is quite some time
> ago. If we go with stable I would suggest marking it for 3.11+, and it
> should obviously go with the ext4_ext_grow_indepth fix.

After Dave's additional explanation
(http://marc.info/?l=linux-ext4&m=143570521212215) it is clear that the
lack of a __GFP_FS check was wrong from the very beginning. XFS has been
doing a similar thing since before e62e384e9da8 was merged. I guess we
were just lucky not to hit this problem sooner.

That being said I think the patch should be marked for stable and the
changelog updated:

As per David Chinner, xfs has been doing a similar thing since 2.6.15, so
ext4 is not the only affected filesystem. Moreover he notes:
: For example: IO completion might require unwritten extent conversion
: which executes filesystem transactions and GFP_NOFS allocations. The
: writeback flag on the pages can not be cleared until unwritten
: extent conversion completes. Hence memory reclaim cannot wait on
: page writeback to complete in GFP_NOFS context because it is not
: safe to do so, memcg reclaim or otherwise.

Cc: stable # 3.6+
Fixes: e62e384e9da8 ("memcg: prevent OOM with too many dirty pages")
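
The constraint Dave describes can be pictured like this (a schematic with
hypothetical names, not actual xfs code):

	/* IO completion worker, schematic: */
	static void endio_convert_work(struct work_struct *work)
	{
		struct ioend *ioend = container_of(work, struct ioend, work);

		/*
		 * The unwritten->written conversion starts a transaction
		 * and allocates memory with GFP_NOFS...
		 */
		convert_unwritten_extents(ioend);

		/*
		 * ...and only afterwards can PG_writeback be cleared.  If
		 * a GFP_NOFS allocation above recursed into reclaim and
		 * then into wait_on_page_writeback(), it would wait on
		 * itself.
		 */
		end_page_writeback(ioend->io_page);
	}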

Andrew, let me know whether I should repost the patch with the updated
changelog or you can take it from here.
--
Michal Hocko
SUSE Labs

2015-07-01 13:37:44

by Michal Hocko

Subject: Re: [PATCH] mm, vmscan: Do not wait for page writeback for GFP_NOFS allocations

[CCing Hugh as well]

On Wed 01-07-15 08:17:31, Michal Hocko wrote:
> On Tue 30-06-15 17:17:17, Michal Hocko wrote:
> [...]
> > Hi,
> > the issue has been reported at http://marc.info/?l=linux-kernel&m=143522730927480.
> > This obviously requires a patch to make ext4_ext_grow_indepth call
> > sb_getblk with the GFP_NOFS mask, but that one makes sense on its own
> > and Ted has mentioned he will push it. I haven't marked the patch for
> > stable yet. This is the first time the issue has been reported, the
> > ext4 writeout code has changed considerably in 3.11, and I am not sure
> > the issue was present before. e62e384e9da8, which introduced the
> > wait_on_page_writeback, was merged in 3.6, which is quite some time
> > ago. If we go with stable I would suggest marking it for 3.11+, and it
> > should obviously go with the ext4_ext_grow_indepth fix.
>
> After Dave's additional explanation
> (http://marc.info/?l=linux-ext4&m=143570521212215) it is clear that the
> lack of a __GFP_FS check was wrong from the very beginning. XFS has been
> doing a similar thing since before e62e384e9da8 was merged. I guess we
> were just lucky not to hit this problem sooner.
>
> That being said I think the patch should be marked for stable and the
> changelog updated:
>
> As per David Chinner, xfs has been doing a similar thing since 2.6.15, so
> ext4 is not the only affected filesystem. Moreover he notes:
> : For example: IO completion might require unwritten extent conversion
> : which executes filesystem transactions and GFP_NOFS allocations. The
> : writeback flag on the pages can not be cleared until unwritten
> : extent conversion completes. Hence memory reclaim cannot wait on
> : page writeback to complete in GFP_NOFS context because it is not
> : safe to do so, memcg reclaim or otherwise.
>
> Cc: stable # 3.6+
> Fixes: e62e384e9da8 ("memcg: prevent OOM with too many dirty pages")
>
> Andrew, let me know whether I should repost the patch with the updated
> changelog or you can take it from here.

Hmm, I have double checked the original commit and it turned out my
memory was failing me. e62e384e9da8 had a may_enter_fs check. This was
changed later in the same merge window by c3b94f44fcb0 ("memcg:
further prevent OOM with too many dirty pages") and the code has been
refactored some more since. So the changelog needs some more rewording:
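
(For context, the two versions of the check look schematically like this,
reconstructed from memory and simplified; the updated patch follows.)

	/* e62e384e9da8 (v3.6): wait only when the fs may be entered */
	if (!global_reclaim(sc) && PageReclaim(page) && may_enter_fs)
		wait_on_page_writeback(page);

	/* c3b94f44fcb0 (v3.6): relaxed to __GFP_IO, on the theory that
	 * we only wait on the writeback, we do not enter the fs */
	if (!global_reclaim(sc) && PageReclaim(page) &&
	    (sc->gfp_mask & __GFP_IO))
		wait_on_page_writeback(page);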
---
From ca18f213a2c2c94d792e6cca5e391745dcb3484b Mon Sep 17 00:00:00 2001
From: Michal Hocko <[email protected]>
Date: Tue, 30 Jun 2015 16:34:50 +0200
Subject: [PATCH] mm, vmscan: Do not wait for page writeback for GFP_NOFS
allocations

Nikolay has reported a hang when a memcg reclaim got stuck with the
following backtrace:
PID: 18308 TASK: ffff883d7c9b0a30 CPU: 1 COMMAND: "rsync"
#0 [ffff88177374ac60] __schedule at ffffffff815ab152
#1 [ffff88177374acb0] schedule at ffffffff815ab76e
#2 [ffff88177374acd0] schedule_timeout at ffffffff815ae5e5
#3 [ffff88177374ad70] io_schedule_timeout at ffffffff815aad6a
#4 [ffff88177374ada0] bit_wait_io at ffffffff815abfc6
#5 [ffff88177374adb0] __wait_on_bit at ffffffff815abda5
#6 [ffff88177374ae00] wait_on_page_bit at ffffffff8111fd4f
#7 [ffff88177374ae50] shrink_page_list at ffffffff81135445
#8 [ffff88177374af50] shrink_inactive_list at ffffffff81135845
#9 [ffff88177374b060] shrink_lruvec at ffffffff81135ead
#10 [ffff88177374b150] shrink_zone at ffffffff811360c3
#11 [ffff88177374b220] shrink_zones at ffffffff81136eff
#12 [ffff88177374b2a0] do_try_to_free_pages at ffffffff8113712f
#13 [ffff88177374b300] try_to_free_mem_cgroup_pages at ffffffff811372be
#14 [ffff88177374b380] try_charge at ffffffff81189423
#15 [ffff88177374b430] mem_cgroup_try_charge at ffffffff8118c6f5
#16 [ffff88177374b470] __add_to_page_cache_locked at ffffffff8112137d
#17 [ffff88177374b4e0] add_to_page_cache_lru at ffffffff81121618
#18 [ffff88177374b510] pagecache_get_page at ffffffff8112170b
#19 [ffff88177374b560] grow_dev_page at ffffffff811c8297
#20 [ffff88177374b5c0] __getblk_slow at ffffffff811c91d6
#21 [ffff88177374b600] __getblk_gfp at ffffffff811c92c1
#22 [ffff88177374b630] ext4_ext_grow_indepth at ffffffff8124565c
#23 [ffff88177374b690] ext4_ext_create_new_leaf at ffffffff81246ca8
#24 [ffff88177374b6e0] ext4_ext_insert_extent at ffffffff81246f09
#25 [ffff88177374b750] ext4_ext_map_blocks at ffffffff8124a848
#26 [ffff88177374b870] ext4_map_blocks at ffffffff8121a5b7
#27 [ffff88177374b910] mpage_map_one_extent at ffffffff8121b1fa
#28 [ffff88177374b950] mpage_map_and_submit_extent at ffffffff8121f07b
#29 [ffff88177374b9b0] ext4_writepages at ffffffff8121f6d5
#30 [ffff88177374bb20] do_writepages at ffffffff8112c490
#31 [ffff88177374bb30] __filemap_fdatawrite_range at ffffffff81120199
#32 [ffff88177374bb80] filemap_flush at ffffffff8112041c
#33 [ffff88177374bb90] ext4_alloc_da_blocks at ffffffff81219da1
#34 [ffff88177374bbb0] ext4_rename at ffffffff81229b91
#35 [ffff88177374bcd0] ext4_rename2 at ffffffff81229e32
#36 [ffff88177374bce0] vfs_rename at ffffffff811a08a5
#37 [ffff88177374bd60] SYSC_renameat2 at ffffffff811a3ffc
#38 [ffff88177374bf60] sys_renameat2 at ffffffff811a408e
#39 [ffff88177374bf70] sys_rename at ffffffff8119e51e
#40 [ffff88177374bf80] system_call_fastpath at ffffffff815afa89

Dave Chinner has properly pointed out that this is a deadlock in the
reclaim code because ext4 doesn't submit pages which are marked by
PG_writeback right away. The heuristic was introduced by e62e384e9da8
("memcg: prevent OOM with too many dirty pages") and it was applied
only when may_enter_fs was specified. The code has been changed by
c3b94f44fcb0 ("memcg: further prevent OOM with too many dirty pages")
which removed the __GFP_FS restriction with the reasoning that we
do not get into the fs code. But this is apparently not sufficient
because the fs doesn't necessarily submit pages marked PG_writeback
for IO right away.

ext4_bio_write_page calls io_submit_add_bh but that doesn't necessarily
submit the bio. Instead it tries to map more pages into the bio and
mpage_map_one_extent might trigger a memcg charge which might end up
waiting on a page which is marked PG_writeback but hasn't been submitted
yet, so we would end up waiting for something that never finishes.

Fix this issue by limiting the wait to reclaim triggered by __GFP_FS
allocations to make sure we are not called from filesystem paths which
might be doing exactly this kind of IO optimization. The page fault
path, which is the only path that triggers the memcg oom killer since 3.12,
shouldn't require GFP_NOFS and so we shouldn't reintroduce the premature
OOM killer issue which was originally addressed by the heuristic.

As per David Chinner, xfs has been doing a similar thing since 2.6.15, so
ext4 is not the only affected filesystem. Moreover he notes:
: For example: IO completion might require unwritten extent conversion
: which executes filesystem transactions and GFP_NOFS allocations. The
: writeback flag on the pages can not be cleared until unwritten
: extent conversion completes. Hence memory reclaim cannot wait on
: page writeback to complete in GFP_NOFS context because it is not
: safe to do so, memcg reclaim or otherwise.

Cc: stable # 3.6+
Fixes: c3b94f44fcb0 ("memcg: further prevent OOM with too many dirty pages")
Reported-by: Nikolay Borisov <[email protected]>
Signed-off-by: Michal Hocko <[email protected]>
---
mm/vmscan.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 37e90db1520b..6c44d424968e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -995,7 +995,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
goto keep_locked;

/* Case 3 above */
- } else {
+ } else if (sc->gfp_mask & __GFP_FS) {
wait_on_page_writeback(page);
}
}
--
2.1.4

--
Michal Hocko
SUSE Labs

2015-07-01 14:30:48

by Rik van Riel

Subject: Re: [PATCH] mm, vmscan: Do not wait for page writeback for GFP_NOFS allocations

On 07/01/2015 09:37 AM, Michal Hocko wrote:

> Fix this issue by limiting the wait to reclaim triggered by __GFP_FS
> allocations to make sure we are not called from filesystem paths which
> might be doing exactly this kind of IO optimization. The page fault
> path, which is the only path that triggers the memcg oom killer since 3.12,
> shouldn't require GFP_NOFS and so we shouldn't reintroduce the premature
> OOM killer issue which was originally addressed by the heuristic.
>
> As per David Chinner, xfs has been doing a similar thing since 2.6.15, so
> ext4 is not the only affected filesystem. Moreover he notes:
> : For example: IO completion might require unwritten extent conversion
> : which executes filesystem transactions and GFP_NOFS allocations. The
> : writeback flag on the pages can not be cleared until unwritten
> : extent conversion completes. Hence memory reclaim cannot wait on
> : page writeback to complete in GFP_NOFS context because it is not
> : safe to do so, memcg reclaim or otherwise.

I remember fixing something like this back in the 2.2
days. Funny how these bugs keep coming back.

> Cc: stable # 3.6+
> Fixes: c3b94f44fcb0 ("memcg: further prevent OOM with too many dirty pages")
> Reported-by: Nikolay Borisov <[email protected]>
> Signed-off-by: Michal Hocko <[email protected]>

Reviewed-by: Rik van Riel <[email protected]>

--
All rights reversed

2015-07-02 14:26:04

by Theodore Ts'o

Subject: Re: [PATCH] mm, vmscan: Do not wait for page writeback for GFP_NOFS allocations

On Wed, Jul 01, 2015 at 03:37:15PM +0200, Michal Hocko wrote:
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 37e90db1520b..6c44d424968e 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -995,7 +995,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
> goto keep_locked;
>
> /* Case 3 above */
> - } else {
> + } else if (sc->gfp_mask & __GFP_FS) {
> wait_on_page_writeback(page);
> }
> }

Um, I've just taken a closer look at this code now that I'm back from
vacation, and I'm not sure this is right. This Case 3 code occurs
inside an

if (PageWriteback(page)) {
...
}

conditional, and if I'm not mistaken, if the flow of control exits
this conditional, it is assumed that the page is *not* under writeback.
This patch will assume the page has been cleaned if __GFP_FS is not set,
which could lead to a dirty page getting dropped, so I believe this is
a bug. No?
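
A condensed sketch of the control flow being described (schematic, not the
exact mm/vmscan.c code) makes the problem visible:

	if (PageWriteback(page)) {
		/* Case 1: kswapd against a congested zone stalls later */
		if (current_is_kswapd() && PageReclaim(page) &&
		    test_bit(ZONE_WRITEBACK, &zone->flags)) {
			nr_immediate++;
			goto keep_locked;

		/* Case 2: mark for immediate reclaim and skip the page */
		} else if (global_reclaim(sc) ||
			   !PageReclaim(page) || !(sc->gfp_mask & __GFP_IO)) {
			SetPageReclaim(page);
			nr_writeback++;
			goto keep_locked;

		/* Case 3, as patched */
		} else if (sc->gfp_mask & __GFP_FS) {
			wait_on_page_writeback(page);
		}
		/*
		 * Without __GFP_FS we now fall out of the conditional while
		 * the page is still under writeback, and the code below
		 * treats it as if its IO had completed.
		 */
	}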

It would seem to me that a better fix would be to change the Case 2
handling:

/* Case 2 above */
} else if (global_reclaim(sc) ||
- !PageReclaim(page) || !(sc->gfp_mask & __GFP_IO)) {
+ !PageReclaim(page) || !(sc->gfp_mask & __GFP_FS)) {
/*
* This is slightly racy - end_page_writeback()
* might have just cleared PageReclaim, then
* setting PageReclaim here end up interpreted
* as PageReadahead - but that does not matter
* enough to care. What we do want is for this
* page to have PageReclaim set next time memcg
* reclaim reaches the tests above, so it will
* then wait_on_page_writeback() to avoid OOM;
* and it's also appropriate in global reclaim.
*/
SetPageReclaim(page);
nr_writeback++;

goto keep_locked;


Am I missing something?

- Ted

2015-07-02 15:13:38

by Michal Hocko

Subject: Re: [PATCH] mm, vmscan: Do not wait for page writeback for GFP_NOFS allocations

On Thu 02-07-15 10:25:51, Theodore Ts'o wrote:
> On Wed, Jul 01, 2015 at 03:37:15PM +0200, Michal Hocko wrote:
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 37e90db1520b..6c44d424968e 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -995,7 +995,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
> > goto keep_locked;
> >
> > /* Case 3 above */
> > - } else {
> > + } else if (sc->gfp_mask & __GFP_FS) {
> > wait_on_page_writeback(page);
> > }
> > }
>
> Um, I've just taken a closer look at this code now that I'm back from
> vacation, and I'm not sure this is right. This Case 3 code occurs
> inside an
>
> if (PageWriteback(page)) {
> ...
> }
>
> conditional, and if I'm not mistaken, if the flow of control exits
> this conditional, it is assumed that the page is *not* under writeback.
> This patch will assume the page has been cleaned if __GFP_FS is not set,
> which could lead to a dirty page getting dropped, so I believe this is
> a bug. No?

Yes, you are right! My bad. I should have noticed that. Sorry about that.

> It would seem to me that a better fix would be to change the Case 2
> handling:
>
> /* Case 2 above */
> } else if (global_reclaim(sc) ||
> - !PageReclaim(page) || !(sc->gfp_mask & __GFP_IO)) {
> + !PageReclaim(page) || !(sc->gfp_mask & __GFP_FS)) {

OK, this should work because the loopback path should clear both
__GFP_IO and __GFP_FS. I would be tempted to use may_enter_fs here,
as the original patch which introduced wait_on_page_writeback did,
but this sounds clearer.
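
For context, the loop driver masks those flags off its backing file's
mapping when the device is bound, roughly like this (a from-memory sketch
of drivers/block/loop.c, not verbatim):

	/*
	 * loop_set_fd(): writes to the backing file are performed by the
	 * loop thread, so reclaim entered from its allocations must never
	 * wait on pages whose IO that very thread is supposed to do.
	 */
	lo->old_gfp_mask = mapping_gfp_mask(mapping);
	mapping_set_gfp_mask(mapping,
			     lo->old_gfp_mask & ~(__GFP_IO | __GFP_FS));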

> /*
> * This is slightly racy - end_page_writeback()
> * might have just cleared PageReclaim, then
> * setting PageReclaim here end up interpreted
> * as PageReadahead - but that does not matter
> * enough to care. What we do want is for this
> * page to have PageReclaim set next time memcg
> * reclaim reaches the tests above, so it will
> * then wait_on_page_writeback() to avoid OOM;
> * and it's also appropriate in global reclaim.
> */
> SetPageReclaim(page);
> nr_writeback++;
>
> goto keep_locked;
>
>
> Am I missing something?

You are not missing anything, and thanks for double checking. This
was very well spotted!
The updated patch with the full changelog:
---
From 91f6afeb230337b2cf7f326ffc6a9bf00732e77f Mon Sep 17 00:00:00 2001
From: Michal Hocko <[email protected]>
Date: Thu, 2 Jul 2015 17:05:05 +0200
Subject: [PATCH] mm, vmscan: Do not wait for page writeback for GFP_NOFS
allocations

Nikolay has reported a hang when a memcg reclaim got stuck with the
following backtrace:
PID: 18308 TASK: ffff883d7c9b0a30 CPU: 1 COMMAND: "rsync"
#0 [ffff88177374ac60] __schedule at ffffffff815ab152
#1 [ffff88177374acb0] schedule at ffffffff815ab76e
#2 [ffff88177374acd0] schedule_timeout at ffffffff815ae5e5
#3 [ffff88177374ad70] io_schedule_timeout at ffffffff815aad6a
#4 [ffff88177374ada0] bit_wait_io at ffffffff815abfc6
#5 [ffff88177374adb0] __wait_on_bit at ffffffff815abda5
#6 [ffff88177374ae00] wait_on_page_bit at ffffffff8111fd4f
#7 [ffff88177374ae50] shrink_page_list at ffffffff81135445
#8 [ffff88177374af50] shrink_inactive_list at ffffffff81135845
#9 [ffff88177374b060] shrink_lruvec at ffffffff81135ead
#10 [ffff88177374b150] shrink_zone at ffffffff811360c3
#11 [ffff88177374b220] shrink_zones at ffffffff81136eff
#12 [ffff88177374b2a0] do_try_to_free_pages at ffffffff8113712f
#13 [ffff88177374b300] try_to_free_mem_cgroup_pages at ffffffff811372be
#14 [ffff88177374b380] try_charge at ffffffff81189423
#15 [ffff88177374b430] mem_cgroup_try_charge at ffffffff8118c6f5
#16 [ffff88177374b470] __add_to_page_cache_locked at ffffffff8112137d
#17 [ffff88177374b4e0] add_to_page_cache_lru at ffffffff81121618
#18 [ffff88177374b510] pagecache_get_page at ffffffff8112170b
#19 [ffff88177374b560] grow_dev_page at ffffffff811c8297
#20 [ffff88177374b5c0] __getblk_slow at ffffffff811c91d6
#21 [ffff88177374b600] __getblk_gfp at ffffffff811c92c1
#22 [ffff88177374b630] ext4_ext_grow_indepth at ffffffff8124565c
#23 [ffff88177374b690] ext4_ext_create_new_leaf at ffffffff81246ca8
#24 [ffff88177374b6e0] ext4_ext_insert_extent at ffffffff81246f09
#25 [ffff88177374b750] ext4_ext_map_blocks at ffffffff8124a848
#26 [ffff88177374b870] ext4_map_blocks at ffffffff8121a5b7
#27 [ffff88177374b910] mpage_map_one_extent at ffffffff8121b1fa
#28 [ffff88177374b950] mpage_map_and_submit_extent at ffffffff8121f07b
#29 [ffff88177374b9b0] ext4_writepages at ffffffff8121f6d5
#30 [ffff88177374bb20] do_writepages at ffffffff8112c490
#31 [ffff88177374bb30] __filemap_fdatawrite_range at ffffffff81120199
#32 [ffff88177374bb80] filemap_flush at ffffffff8112041c
#33 [ffff88177374bb90] ext4_alloc_da_blocks at ffffffff81219da1
#34 [ffff88177374bbb0] ext4_rename at ffffffff81229b91
#35 [ffff88177374bcd0] ext4_rename2 at ffffffff81229e32
#36 [ffff88177374bce0] vfs_rename at ffffffff811a08a5
#37 [ffff88177374bd60] SYSC_renameat2 at ffffffff811a3ffc
#38 [ffff88177374bf60] sys_renameat2 at ffffffff811a408e
#39 [ffff88177374bf70] sys_rename at ffffffff8119e51e
#40 [ffff88177374bf80] system_call_fastpath at ffffffff815afa89

Dave Chinner has properly pointed out that this is a deadlock in the
reclaim code because ext4 doesn't submit pages which are marked by
PG_writeback right away. The heuristic was introduced by e62e384e9da8
("memcg: prevent OOM with too many dirty pages") and it was applied
only when may_enter_fs was specified. The code has been changed by
c3b94f44fcb0 ("memcg: further prevent OOM with too many dirty pages")
which removed the __GFP_FS restriction with the reasoning that we
do not get into the fs code. But this is apparently not sufficient
because the fs doesn't necessarily submit pages marked PG_writeback
for IO right away.

ext4_bio_write_page calls io_submit_add_bh but that doesn't necessarily
submit the bio. Instead it tries to map more pages into the bio and
mpage_map_one_extent might trigger a memcg charge which might end up
waiting on a page which is marked PG_writeback but hasn't been submitted
yet, so we would end up waiting for something that never finishes.

Fix this issue by replacing the __GFP_IO check with a __GFP_FS check (for
case 2) before we go to wait on the writeback. The page fault path, which
is the only path that triggers the memcg oom killer since 3.12, shouldn't
require GFP_NOFS and so we shouldn't reintroduce the premature OOM killer
issue which was originally addressed by the heuristic.

As per David Chinner, xfs has been doing a similar thing since 2.6.15, so
ext4 is not the only affected filesystem. Moreover he notes:
: For example: IO completion might require unwritten extent conversion
: which executes filesystem transactions and GFP_NOFS allocations. The
: writeback flag on the pages can not be cleared until unwritten
: extent conversion completes. Hence memory reclaim cannot wait on
: page writeback to complete in GFP_NOFS context because it is not
: safe to do so, memcg reclaim or otherwise.

Cc: stable # 3.6+
[[email protected]: check for __GFP_FS rather than __GFP_IO]
Fixes: c3b94f44fcb0 ("memcg: further prevent OOM with too many dirty pages")
Reported-by: Nikolay Borisov <[email protected]>
Signed-off-by: Michal Hocko <[email protected]>
---
mm/vmscan.c | 24 ++++++++++--------------
1 file changed, 10 insertions(+), 14 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 37e90db1520b..9f89d9ac578f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -946,21 +946,17 @@ static unsigned long shrink_page_list(struct list_head *page_list,
*
* 2) Global reclaim encounters a page, memcg encounters a
* page that is not marked for immediate reclaim or
- * the caller does not have __GFP_IO. In this case mark
+ * the caller does not have __GFP_FS. In this case mark
* the page for immediate reclaim and continue scanning.
*
- * __GFP_IO is checked because a loop driver thread might
- * enter reclaim, and deadlock if it waits on a page for
- * which it is needed to do the write (loop masks off
+ * Require __GFP_FS even though we are not entering fs
+ * because we are waiting for fs activity and we might
+ * be in the middle of the writeout. Moreover a loop driver
+ * might enter reclaim, and deadlock if it waits on a page
+ * for which it is needed to do the write (loop masks off
* __GFP_IO|__GFP_FS for this reason); but more thought
* would probably show more reasons.
*
- * Don't require __GFP_FS, since we're not going into the
- * FS, just waiting on its writeback completion. Worryingly,
- * ext4 gfs2 and xfs allocate pages with
- * grab_cache_page_write_begin(,,AOP_FLAG_NOFS), so testing
- * may_enter_fs here is liable to OOM on them.
- *
* 3) memcg encounters a page that is not already marked
* PageReclaim. memcg does not have any dirty pages
* throttling so we could easily OOM just because too many
@@ -977,7 +973,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,

/* Case 2 above */
} else if (global_reclaim(sc) ||
- !PageReclaim(page) || !(sc->gfp_mask & __GFP_IO)) {
+ !PageReclaim(page) || !(sc->gfp_mask & __GFP_FS)) {
/*
* This is slightly racy - end_page_writeback()
* might have just cleared PageReclaim, then
@@ -994,10 +990,10 @@ static unsigned long shrink_page_list(struct list_head *page_list,

goto keep_locked;

- /* Case 3 above */
- } else {
- wait_on_page_writeback(page);
}
+
+ /* Case 3 above */
+ wait_on_page_writeback(page);
}

if (!force_reclaim)
--
2.1.4

--
Michal Hocko
SUSE Labs

2015-08-04 06:33:08

by Hugh Dickins

Subject: Re: [PATCH] mm, vmscan: Do not wait for page writeback for GFP_NOFS allocations

Hi Michal,

On Thu, 2 Jul 2015, Michal Hocko wrote:
> On Thu 02-07-15 10:25:51, Theodore Ts'o wrote:
> > On Wed, Jul 01, 2015 at 03:37:15PM +0200, Michal Hocko wrote:
> From: Michal Hocko <[email protected]>
> Date: Thu, 2 Jul 2015 17:05:05 +0200
> Subject: [PATCH] mm, vmscan: Do not wait for page writeback for GFP_NOFS
> allocations
>
> Nikolay has reported a hang when a memcg reclaim got stuck with the
> following backtrace...

Sorry, I couldn't manage more than to ignore you when you Cc'ed me on
this a month ago. Dave's perfectly correct, we had ourselves come to
notice that recently: although in an ideal world a filesystem would
only mark PageWriteback once the IO is all ready to go, in the real
world that's not quite so, and a memory allocation may stand between.
Which leaves my v3.6 c3b94f44fcb0 in danger of deadlocking.

And suddenly now, in v4.2-rc or perhaps in v4.1 also, that has started
hitting me too (I don't know which release Nikolay noticed this on).
And it has become urgent to fix: I've added Linus to the Cc because
I believe his comment in the rc5 announcement, "There's also a pending
question about some of the VM changes", reflects this. Twice when I
was trying to verify fixes to the dcache issue which came up at the
end of last week, I was frustrated by unrelated hangs in my load.
The first time I didn't recognize it, but the second time I did,
and then came to realize that your patch is just what is needed.

But I have modified it a little, I don't think you'll mind. As you
suggested yourself, I actually prefer to test may_enter_fs there, rather
than __GFP_FS: not a big deal, I certainly wouldn't want to delay the
fix if someone thinks differently; but I tend to feel that may_enter_fs
is what we already use for such decisions there, so better to use it.
(And the SwapCache case is immune to the ext4 or xfs IO submission pattern.)
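
For reference, may_enter_fs in shrink_page_list is computed along these
lines (quoted from memory, so treat as approximate):

	may_enter_fs = (sc->gfp_mask & __GFP_FS) ||
		(PageSwapCache(page) && (sc->gfp_mask & __GFP_IO));

So for swap-backed pages __GFP_IO alone suffices, which is why the
SwapCache case is unaffected by the fs IO submission pattern.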

I've fixed up the patch and updated the comments, since Tejun has
meanwhile introduced sane_reclaim(sc) - I'm staying on in the insane
asylum for now (and sane_reclaim is clearly unaffected by the change).

I've omitted your hunk unindenting Case 3 wait_on_page_writeback(page):
I prefer your style too, but thought it better to minimize the patch,
especially if this is heading to the stables. (I was tempted to add in
my unlock_page there, that we discussed once before: but again thought
it better to minimize the fix - it is "selfish" not to unlock_page,
but I think that anything heading for deadlock on the locked page would
in other circumstances be heading for deadlock on the writeback page -
I've never found that change critical.)

And I've done quite a bit of testing. The loads that hung at the
weekend have been running nicely for 24 hours now, no problem with the
writeback hang and no problem with the dcache ENOTDIR issue. Though
I've no idea of what recent VM change turned this into a hot issue.

And more testing on the history of it, considering your stable 3.6+
designation that I wasn't satisfied with. Getting out that USB stick
again, I find that 3.6, 3.7 and 3.8 all OOM if their __GFP_IO test
is updated to a may_enter_fs test; but something happened in 3.9
to make it and subsequent releases safe with the may_enter_fs test.
You can certainly argue that the remote chance of a deadlock is
worse than the fair chance of a spurious OOM; but if you insist
on 3.6+, then I think it would have to go back even further,
because we marked that commit for stable itself. I suggest 3.9+.


[PATCH] mm, vmscan: Do not wait for page writeback for GFP_NOFS allocations

From: Michal Hocko <[email protected]>

Nikolay has reported a hang when a memcg reclaim got stuck with the
following backtrace:
PID: 18308 TASK: ffff883d7c9b0a30 CPU: 1 COMMAND: "rsync"
#0 [ffff88177374ac60] __schedule at ffffffff815ab152
#1 [ffff88177374acb0] schedule at ffffffff815ab76e
#2 [ffff88177374acd0] schedule_timeout at ffffffff815ae5e5
#3 [ffff88177374ad70] io_schedule_timeout at ffffffff815aad6a
#4 [ffff88177374ada0] bit_wait_io at ffffffff815abfc6
#5 [ffff88177374adb0] __wait_on_bit at ffffffff815abda5
#6 [ffff88177374ae00] wait_on_page_bit at ffffffff8111fd4f
#7 [ffff88177374ae50] shrink_page_list at ffffffff81135445
#8 [ffff88177374af50] shrink_inactive_list at ffffffff81135845
#9 [ffff88177374b060] shrink_lruvec at ffffffff81135ead
#10 [ffff88177374b150] shrink_zone at ffffffff811360c3
#11 [ffff88177374b220] shrink_zones at ffffffff81136eff
#12 [ffff88177374b2a0] do_try_to_free_pages at ffffffff8113712f
#13 [ffff88177374b300] try_to_free_mem_cgroup_pages at ffffffff811372be
#14 [ffff88177374b380] try_charge at ffffffff81189423
#15 [ffff88177374b430] mem_cgroup_try_charge at ffffffff8118c6f5
#16 [ffff88177374b470] __add_to_page_cache_locked at ffffffff8112137d
#17 [ffff88177374b4e0] add_to_page_cache_lru at ffffffff81121618
#18 [ffff88177374b510] pagecache_get_page at ffffffff8112170b
#19 [ffff88177374b560] grow_dev_page at ffffffff811c8297
#20 [ffff88177374b5c0] __getblk_slow at ffffffff811c91d6
#21 [ffff88177374b600] __getblk_gfp at ffffffff811c92c1
#22 [ffff88177374b630] ext4_ext_grow_indepth at ffffffff8124565c
#23 [ffff88177374b690] ext4_ext_create_new_leaf at ffffffff81246ca8
#24 [ffff88177374b6e0] ext4_ext_insert_extent at ffffffff81246f09
#25 [ffff88177374b750] ext4_ext_map_blocks at ffffffff8124a848
#26 [ffff88177374b870] ext4_map_blocks at ffffffff8121a5b7
#27 [ffff88177374b910] mpage_map_one_extent at ffffffff8121b1fa
#28 [ffff88177374b950] mpage_map_and_submit_extent at ffffffff8121f07b
#29 [ffff88177374b9b0] ext4_writepages at ffffffff8121f6d5
#30 [ffff88177374bb20] do_writepages at ffffffff8112c490
#31 [ffff88177374bb30] __filemap_fdatawrite_range at ffffffff81120199
#32 [ffff88177374bb80] filemap_flush at ffffffff8112041c
#33 [ffff88177374bb90] ext4_alloc_da_blocks at ffffffff81219da1
#34 [ffff88177374bbb0] ext4_rename at ffffffff81229b91
#35 [ffff88177374bcd0] ext4_rename2 at ffffffff81229e32
#36 [ffff88177374bce0] vfs_rename at ffffffff811a08a5
#37 [ffff88177374bd60] SYSC_renameat2 at ffffffff811a3ffc
#38 [ffff88177374bf60] sys_renameat2 at ffffffff811a408e
#39 [ffff88177374bf70] sys_rename at ffffffff8119e51e
#40 [ffff88177374bf80] system_call_fastpath at ffffffff815afa89

Dave Chinner has properly pointed out that this is a deadlock in the
reclaim code because ext4 doesn't submit pages which are marked by
PG_writeback right away. The heuristic was introduced by e62e384e9da8
("memcg: prevent OOM with too many dirty pages") and it was applied
only when may_enter_fs was specified. The code has been changed by
c3b94f44fcb0 ("memcg: further prevent OOM with too many dirty pages")
which removed the __GFP_FS restriction with the reasoning that we
do not get into the fs code. But this is apparently not sufficient
because the fs doesn't necessarily submit pages marked PG_writeback
for IO right away.

ext4_bio_write_page calls io_submit_add_bh but that doesn't necessarily
submit the bio. Instead it tries to map more pages into the bio and
mpage_map_one_extent might trigger a memcg charge which might end up
waiting on a page which is marked PG_writeback but hasn't been submitted
yet, so we would end up waiting for something that never finishes.

Fix this issue by replacing the __GFP_IO check with a may_enter_fs check
(for case 2) before we go to wait on the writeback. The page fault path,
which is the only path that triggers the memcg oom killer since 3.12,
shouldn't require GFP_NOFS and so we shouldn't reintroduce the premature
OOM killer issue which was originally addressed by the heuristic.

As per David Chinner, xfs has been doing a similar thing since 2.6.15, so
ext4 is not the only affected filesystem. Moreover he notes:
: For example: IO completion might require unwritten extent conversion
: which executes filesystem transactions and GFP_NOFS allocations. The
: writeback flag on the pages can not be cleared until unwritten
: extent conversion completes. Hence memory reclaim cannot wait on
: page writeback to complete in GFP_NOFS context because it is not
: safe to do so, memcg reclaim or otherwise.

Cc: [email protected] # 3.9+
[[email protected]: corrected the control flow]
Fixes: c3b94f44fcb0 ("memcg: further prevent OOM with too many dirty pages")
Reported-by: Nikolay Borisov <[email protected]>
Signed-off-by: Michal Hocko <[email protected]>
Signed-off-by: Hugh Dickins <[email protected]>
---

mm/vmscan.c | 16 ++++++----------
1 file changed, 6 insertions(+), 10 deletions(-)

--- 4.2-rc5/mm/vmscan.c 2015-07-05 19:25:02.856131170 -0700
+++ linux/mm/vmscan.c 2015-08-02 21:24:03.000614050 -0700
@@ -973,22 +973,18 @@ static unsigned long shrink_page_list(st
* caller can stall after page list has been processed.
*
* 2) Global or new memcg reclaim encounters a page that is
- * not marked for immediate reclaim or the caller does not
- * have __GFP_IO. In this case mark the page for immediate
+ * not marked for immediate reclaim, or the caller does not
+ * have __GFP_FS (or __GFP_IO if it's simply going to swap,
+ * not to fs). In this case mark the page for immediate
* reclaim and continue scanning.
*
- * __GFP_IO is checked because a loop driver thread might
+ * Require may_enter_fs because we would wait on fs, which
+ * may not have submitted IO yet. And the loop driver might
* enter reclaim, and deadlock if it waits on a page for
* which it is needed to do the write (loop masks off
* __GFP_IO|__GFP_FS for this reason); but more thought
* would probably show more reasons.
*
- * Don't require __GFP_FS, since we're not going into the
- * FS, just waiting on its writeback completion. Worryingly,
- * ext4 gfs2 and xfs allocate pages with
- * grab_cache_page_write_begin(,,AOP_FLAG_NOFS), so testing
- * may_enter_fs here is liable to OOM on them.
- *
* 3) Legacy memcg encounters a page that is not already marked
* PageReclaim. memcg does not have any dirty pages
* throttling so we could easily OOM just because too many
@@ -1005,7 +1001,7 @@ static unsigned long shrink_page_list(st

/* Case 2 above */
} else if (sane_reclaim(sc) ||
- !PageReclaim(page) || !(sc->gfp_mask & __GFP_IO)) {
+ !PageReclaim(page) || !may_enter_fs) {
/*
* This is slightly racy - end_page_writeback()
* might have just cleared PageReclaim, then

2015-08-04 09:52:04

by Michal Hocko

Subject: Re: [PATCH] mm, vmscan: Do not wait for page writeback for GFP_NOFS allocations

On Mon 03-08-15 23:32:00, Hugh Dickins wrote:
[...]
> But I have modified it a little, I don't think you'll mind. As you
> suggested yourself, I actually prefer to test may_enter_fs there, rather
> than __GFP_FS: not a big deal, I certainly wouldn't want to delay the
> fix if someone thinks differently; but I tend to feel that may_enter_fs
> is what we already use for such decisions there, so better to use it.
> (And the SwapCache case is immune to the ext4 or xfs IO submission pattern.)

I am not opposed. This is closer to what we had before.

[...]
> (I was tempted to add in
> my unlock_page there, that we discussed once before: but again thought
> it better to minimize the fix - it is "selfish" not to unlock_page,
> but I think that anything heading for deadlock on the locked page would
> in other circumstances be heading for deadlock on the writeback page -
> I've never found that change critical.)

I agree. It would deserve a separate patch.

> And I've done quite a bit of testing. The loads that hung at the
> weekend have been running nicely for 24 hours now, no problem with the
> writeback hang and no problem with the dcache ENOTDIR issue. Though
> I've no idea of what recent VM change turned this into a hot issue.
>
> And more testing on the history of it, considering your stable 3.6+
> designation that I wasn't satisfied with. Getting out that USB stick
> again, I find that 3.6, 3.7 and 3.8 all OOM if their __GFP_IO test
> is updated to a may_enter_fs test; but something happened in 3.9
> to make it and subsequent releases safe with the may_enter_fs test.

Interesting. I would have guessed that 3.12 would make a difference (as
mentioned in the changelog). Why 3.9 would make a difference is not
entirely clear to me.

> You can certainly argue that the remote chance of a deadlock is
> worse than the fair chance of a spurious OOM; but if you insist
> on 3.6+, then I think it would have to go back even further,
> because we marked that commit for stable itself. I suggest 3.9+.

Agreed and thanks!
--
Michal Hocko
SUSE Labs

2015-08-04 21:33:40

by Hugh Dickins

Subject: Re: [PATCH] mm, vmscan: Do not wait for page writeback for GFP_NOFS allocations

On Tue, 4 Aug 2015, Michal Hocko wrote:
> On Mon 03-08-15 23:32:00, Hugh Dickins wrote:
> [...]
> > But I have modified it a little, I don't think you'll mind. As you
> > suggested yourself, I actually prefer to test may_enter_fs there, rather
> > than __GFP_FS: not a big deal, I certainly wouldn't want to delay the
> > fix if someone thinks differently; but I tend to feel that may_enter_fs
> > is what we already use for such decisions there, so better to use it.
> > (And the SwapCache case is immune to the ext4 or xfs IO submission pattern.)
>
> I am not opposed. This is closer to what we had before.

Yes, it is what you had there before.

>
> [...]
> > (I was tempted to add in
> > my unlock_page there, that we discussed once before: but again thought
> > it better to minimize the fix - it is "selfish" not to unlock_page,
> > but I think that anything heading for deadlock on the locked page would
> > in other circumstances be heading for deadlock on the writeback page -
> > I've never found that change critical.)
>
> I agree. It would deserve a separate patch.

I'll send one day, but not for v4.2.

>
> > And I've done quite a bit of testing. The loads that hung at the
> > weekend have been running nicely for 24 hours now, no problem with the
> > writeback hang and no problem with the dcache ENOTDIR issue. Though
> > I've no idea of what recent VM change turned this into a hot issue.
> >
> > And more testing on the history of it, considering your stable 3.6+
> > designation that I wasn't satisfied with. Getting out that USB stick
> > again, I find that 3.6, 3.7 and 3.8 all OOM if their __GFP_IO test
> > is updated to a may_enter_fs test; but something happened in 3.9
> > to make it and subsequent releases safe with the may_enter_fs test.
>
> Interesting. I would have guessed that 3.12 would make a difference (as
> mentioned in the changelog). Why would 3.9 make a difference is not
> entirely clear to me.

Nor to me. You were right to single out 3.12 in the changelog, but
clearly some earlier change in 3.9 altered the delicate balance on this.
It was unambiguous, so a bisection between 3.8 and 3.9 should easily
find it. Yet, somehow, that's not very high on my TODO list...

It would be more interesting to find why this deadlock has become so
much more visible just now. But that would be a difficult bisection,
taking many days, of restarts after wrong decisions. Again, not
something I'll get into.

>
> > You can certainly argue that the remote chance of a deadlock is
> > worse than the fair chance of a spurious OOM; but if you insist
> > on 3.6+, then I think it would have to go back even further,
> > because we marked that commit for stable itself. I suggest 3.9+.
>
> Agreed and thanks!

Thanks so much for getting back to us on it so very promptly.
I'll detach the patch, unchanged, and send direct to Linus now.

Hugh

2015-08-04 21:37:58

by Hugh Dickins

Subject: [PATCH] mm, vmscan: Do not wait for page writeback for GFP_NOFS allocations

From: Michal Hocko <[email protected]>

Nikolay has reported a hang when a memcg reclaim got stuck with the
following backtrace:
PID: 18308 TASK: ffff883d7c9b0a30 CPU: 1 COMMAND: "rsync"
#0 [ffff88177374ac60] __schedule at ffffffff815ab152
#1 [ffff88177374acb0] schedule at ffffffff815ab76e
#2 [ffff88177374acd0] schedule_timeout at ffffffff815ae5e5
#3 [ffff88177374ad70] io_schedule_timeout at ffffffff815aad6a
#4 [ffff88177374ada0] bit_wait_io at ffffffff815abfc6
#5 [ffff88177374adb0] __wait_on_bit at ffffffff815abda5
#6 [ffff88177374ae00] wait_on_page_bit at ffffffff8111fd4f
#7 [ffff88177374ae50] shrink_page_list at ffffffff81135445
#8 [ffff88177374af50] shrink_inactive_list at ffffffff81135845
#9 [ffff88177374b060] shrink_lruvec at ffffffff81135ead
#10 [ffff88177374b150] shrink_zone at ffffffff811360c3
#11 [ffff88177374b220] shrink_zones at ffffffff81136eff
#12 [ffff88177374b2a0] do_try_to_free_pages at ffffffff8113712f
#13 [ffff88177374b300] try_to_free_mem_cgroup_pages at ffffffff811372be
#14 [ffff88177374b380] try_charge at ffffffff81189423
#15 [ffff88177374b430] mem_cgroup_try_charge at ffffffff8118c6f5
#16 [ffff88177374b470] __add_to_page_cache_locked at ffffffff8112137d
#17 [ffff88177374b4e0] add_to_page_cache_lru at ffffffff81121618
#18 [ffff88177374b510] pagecache_get_page at ffffffff8112170b
#19 [ffff88177374b560] grow_dev_page at ffffffff811c8297
#20 [ffff88177374b5c0] __getblk_slow at ffffffff811c91d6
#21 [ffff88177374b600] __getblk_gfp at ffffffff811c92c1
#22 [ffff88177374b630] ext4_ext_grow_indepth at ffffffff8124565c
#23 [ffff88177374b690] ext4_ext_create_new_leaf at ffffffff81246ca8
#24 [ffff88177374b6e0] ext4_ext_insert_extent at ffffffff81246f09
#25 [ffff88177374b750] ext4_ext_map_blocks at ffffffff8124a848
#26 [ffff88177374b870] ext4_map_blocks at ffffffff8121a5b7
#27 [ffff88177374b910] mpage_map_one_extent at ffffffff8121b1fa
#28 [ffff88177374b950] mpage_map_and_submit_extent at ffffffff8121f07b
#29 [ffff88177374b9b0] ext4_writepages at ffffffff8121f6d5
#30 [ffff88177374bb20] do_writepages at ffffffff8112c490
#31 [ffff88177374bb30] __filemap_fdatawrite_range at ffffffff81120199
#32 [ffff88177374bb80] filemap_flush at ffffffff8112041c
#33 [ffff88177374bb90] ext4_alloc_da_blocks at ffffffff81219da1
#34 [ffff88177374bbb0] ext4_rename at ffffffff81229b91
#35 [ffff88177374bcd0] ext4_rename2 at ffffffff81229e32
#36 [ffff88177374bce0] vfs_rename at ffffffff811a08a5
#37 [ffff88177374bd60] SYSC_renameat2 at ffffffff811a3ffc
#38 [ffff88177374bf60] sys_renameat2 at ffffffff811a408e
#39 [ffff88177374bf70] sys_rename at ffffffff8119e51e
#40 [ffff88177374bf80] system_call_fastpath at ffffffff815afa89

Dave Chinner has properly pointed out that this is a deadlock in the
reclaim code because ext4 doesn't submit pages which are marked by
PG_writeback right away. The heuristic was introduced by e62e384e9da8
("memcg: prevent OOM with too many dirty pages") and it was applied
only when may_enter_fs was specified. The code has been changed by
c3b94f44fcb0 ("memcg: further prevent OOM with too many dirty pages")
which removed the __GFP_FS restriction with the reasoning that we
do not get into the fs code. But this is apparently not sufficient
because the fs doesn't necessarily submit pages marked PG_writeback
for IO right away.

ext4_bio_write_page calls io_submit_add_bh but that doesn't necessarily
submit the bio. Instead it tries to map more pages into the bio and
mpage_map_one_extent might trigger a memcg charge which might end up
waiting on a page which is marked PG_writeback but hasn't been submitted
yet, so we would end up waiting for something that never finishes.

Fix this issue by replacing the __GFP_IO check with a may_enter_fs check
(for case 2) before we go to wait on the writeback. The page fault path,
which is the only path that triggers the memcg oom killer since 3.12,
shouldn't require GFP_NOFS and so we shouldn't reintroduce the premature
OOM killer issue which was originally addressed by the heuristic.

As per David Chinner, xfs has been doing a similar thing since 2.6.15, so
ext4 is not the only affected filesystem. Moreover he notes:
: For example: IO completion might require unwritten extent conversion
: which executes filesystem transactions and GFP_NOFS allocations. The
: writeback flag on the pages can not be cleared until unwritten
: extent conversion completes. Hence memory reclaim cannot wait on
: page writeback to complete in GFP_NOFS context because it is not
: safe to do so, memcg reclaim or otherwise.

Cc: [email protected] # 3.9+
[[email protected]: corrected the control flow]
Fixes: c3b94f44fcb0 ("memcg: further prevent OOM with too many dirty pages")
Reported-by: Nikolay Borisov <[email protected]>
Signed-off-by: Michal Hocko <[email protected]>
Signed-off-by: Hugh Dickins <[email protected]>
---

mm/vmscan.c | 16 ++++++----------
1 file changed, 6 insertions(+), 10 deletions(-)

--- 4.2-rc5/mm/vmscan.c 2015-07-05 19:25:02.856131170 -0700
+++ linux/mm/vmscan.c 2015-08-02 21:24:03.000614050 -0700
@@ -973,22 +973,18 @@ static unsigned long shrink_page_list(st
* caller can stall after page list has been processed.
*
* 2) Global or new memcg reclaim encounters a page that is
- * not marked for immediate reclaim or the caller does not
- * have __GFP_IO. In this case mark the page for immediate
+ * not marked for immediate reclaim, or the caller does not
+ * have __GFP_FS (or __GFP_IO if it's simply going to swap,
+ * not to fs). In this case mark the page for immediate
* reclaim and continue scanning.
*
- * __GFP_IO is checked because a loop driver thread might
+ * Require may_enter_fs because we would wait on fs, which
+ * may not have submitted IO yet. And the loop driver might
* enter reclaim, and deadlock if it waits on a page for
* which it is needed to do the write (loop masks off
* __GFP_IO|__GFP_FS for this reason); but more thought
* would probably show more reasons.
*
- * Don't require __GFP_FS, since we're not going into the
- * FS, just waiting on its writeback completion. Worryingly,
- * ext4 gfs2 and xfs allocate pages with
- * grab_cache_page_write_begin(,,AOP_FLAG_NOFS), so testing
- * may_enter_fs here is liable to OOM on them.
- *
* 3) Legacy memcg encounters a page that is not already marked
* PageReclaim. memcg does not have any dirty pages
* throttling so we could easily OOM just because too many
@@ -1005,7 +1001,7 @@ static unsigned long shrink_page_list(st

/* Case 2 above */
} else if (sane_reclaim(sc) ||
- !PageReclaim(page) || !(sc->gfp_mask & __GFP_IO)) {
+ !PageReclaim(page) || !may_enter_fs) {
/*
* This is slightly racy - end_page_writeback()
* might have just cleared PageReclaim, then

2015-08-07 07:52:28

by Angel Shtilianov

Subject: Re: [PATCH] mm, vmscan: Do not wait for page writeback for GFP_NOFS allocations

Hello Hugh,

On 08/04/2015 09:32 AM, Hugh Dickins wrote:
> Hi Michal,
>
> On Thu, 2 Jul 2015, Michal Hocko wrote:
>> On Thu 02-07-15 10:25:51, Theodore Ts'o wrote:
>>> On Wed, Jul 01, 2015 at 03:37:15PM +0200, Michal Hocko wrote:
>> From: Michal Hocko <[email protected]>
>> Date: Thu, 2 Jul 2015 17:05:05 +0200
>> Subject: [PATCH] mm, vmscan: Do not wait for page writeback for GFP_NOFS
>> allocations
>>
>> Nikolay has reported a hang when a memcg reclaim got stuck with the
>> following backtrace...
>
> Sorry, I couldn't manage more than to ignore you when you Cc'ed me on
> this a month ago. Dave's perfectly correct, we had ourselves come to
> notice that recently: although in an ideal world a filesystem would
> only mark PageWriteback once the IO is all ready to go, in the real
> world that's not quite so, and a memory allocation may stand between.
> Which leaves my v3.6 c3b94f44fcb0 in danger of deadlocking.
>
> And suddenly now, in v4.2-rc or perhaps in v4.1 also, that has started
> hitting me too (I don't know which release Nikolay noticed this on).
> And it has become urgent to fix: I've added Linus to the Cc because
> I believe his comment in the rc5 announcement, "There's also a pending
> question about some of the VM changes", reflects this. Twice when I
> was trying to verify fixes to the dcache issue which came up at the
> end of last week, I was frustrated by unrelated hangs in my load.
> The first time I didn't recognize it, but the second time I did,
> and then came to realize that your patch is just what is needed.
>
> But I have modified it a little, I don't think you'll mind. As you
> suggested yourself, I actually prefer to test may_enter_fs there, rather
> than __GFP_FS: not a big deal, I certainly wouldn't want to delay the
> fix if someone thinks differently; but I tend to feel that may_enter_fs
> is what we already use for such decisions there, so better to use it.
> (And the SwapCache case is immune to the ext4 or xfs IO submission pattern.)
>
> I've fixed up the patch and updated the comments, since Tejun has
> meanwhile introduced sane_reclaim(sc) - I'm staying on in the insane
> asylum for now (and sane_reclaim is clearly unaffected by the change).
>
> I've omitted your hunk unindenting Case 3 wait_on_page_writeback(page):
> I prefer your style too, but thought it better to minimize the patch,
> especially if this is heading to the stables. (I was tempted to add in
> my unlock_page there, that we discussed once before: but again thought
> it better to minimize the fix - it is "selfish" not to unlock_page,
> but I think that anything heading for deadlock on the locked page would
> in other circumstances be heading for deadlock on the writeback page -
> I've never found that change critical.)
>
> And I've done quite a bit of testing. The loads that hung at the
> weekend have been running nicely for 24 hours now, no problem with the
> writeback hang and no problem with the dcache ENOTDIR issue. Though
> I've no idea of what recent VM change turned this into a hot issue.
>

Are the loads you are referring to production loads that have been able
to reproduce the issue, or are they synthetic ones? So far I haven't
been able to reproduce the issue using artificial loads, so I'm
interested in incorporating this into my test setup if it's available.

> And more testing on the history of it, considering your stable 3.6+
> designation that I wasn't satisfied with. Getting out that USB stick
> again, I find that 3.6, 3.7 and 3.8 all OOM if their __GFP_IO test
> is updated to a may_enter_fs test; but something happened in 3.9
> to make it and subsequent releases safe with the may_enter_fs test.
> You can certainly argue that the remote chance of a deadlock is
> worse than the fair chance of a spurious OOM; but if you insist
> on 3.6+, then I think it would have to go back even further,
> because we marked that commit for stable itself. I suggest 3.9+.
>
>
> [PATCH] mm, vmscan: Do not wait for page writeback for GFP_NOFS allocations
>
> From: Michal Hocko <[email protected]>
>
> Nikolay has reported a hang when a memcg reclaim got stuck with the
> following backtrace:
> PID: 18308 TASK: ffff883d7c9b0a30 CPU: 1 COMMAND: "rsync"
> #0 [ffff88177374ac60] __schedule at ffffffff815ab152
> #1 [ffff88177374acb0] schedule at ffffffff815ab76e
> #2 [ffff88177374acd0] schedule_timeout at ffffffff815ae5e5
> #3 [ffff88177374ad70] io_schedule_timeout at ffffffff815aad6a
> #4 [ffff88177374ada0] bit_wait_io at ffffffff815abfc6
> #5 [ffff88177374adb0] __wait_on_bit at ffffffff815abda5
> #6 [ffff88177374ae00] wait_on_page_bit at ffffffff8111fd4f
> #7 [ffff88177374ae50] shrink_page_list at ffffffff81135445
> #8 [ffff88177374af50] shrink_inactive_list at ffffffff81135845
> #9 [ffff88177374b060] shrink_lruvec at ffffffff81135ead
> #10 [ffff88177374b150] shrink_zone at ffffffff811360c3
> #11 [ffff88177374b220] shrink_zones at ffffffff81136eff
> #12 [ffff88177374b2a0] do_try_to_free_pages at ffffffff8113712f
> #13 [ffff88177374b300] try_to_free_mem_cgroup_pages at ffffffff811372be
> #14 [ffff88177374b380] try_charge at ffffffff81189423
> #15 [ffff88177374b430] mem_cgroup_try_charge at ffffffff8118c6f5
> #16 [ffff88177374b470] __add_to_page_cache_locked at ffffffff8112137d
> #17 [ffff88177374b4e0] add_to_page_cache_lru at ffffffff81121618
> #18 [ffff88177374b510] pagecache_get_page at ffffffff8112170b
> #19 [ffff88177374b560] grow_dev_page at ffffffff811c8297
> #20 [ffff88177374b5c0] __getblk_slow at ffffffff811c91d6
> #21 [ffff88177374b600] __getblk_gfp at ffffffff811c92c1
> #22 [ffff88177374b630] ext4_ext_grow_indepth at ffffffff8124565c
> #23 [ffff88177374b690] ext4_ext_create_new_leaf at ffffffff81246ca8
> #24 [ffff88177374b6e0] ext4_ext_insert_extent at ffffffff81246f09
> #25 [ffff88177374b750] ext4_ext_map_blocks at ffffffff8124a848
> #26 [ffff88177374b870] ext4_map_blocks at ffffffff8121a5b7
> #27 [ffff88177374b910] mpage_map_one_extent at ffffffff8121b1fa
> #28 [ffff88177374b950] mpage_map_and_submit_extent at ffffffff8121f07b
> #29 [ffff88177374b9b0] ext4_writepages at ffffffff8121f6d5
> #30 [ffff88177374bb20] do_writepages at ffffffff8112c490
> #31 [ffff88177374bb30] __filemap_fdatawrite_range at ffffffff81120199
> #32 [ffff88177374bb80] filemap_flush at ffffffff8112041c
> #33 [ffff88177374bb90] ext4_alloc_da_blocks at ffffffff81219da1
> #34 [ffff88177374bbb0] ext4_rename at ffffffff81229b91
> #35 [ffff88177374bcd0] ext4_rename2 at ffffffff81229e32
> #36 [ffff88177374bce0] vfs_rename at ffffffff811a08a5
> #37 [ffff88177374bd60] SYSC_renameat2 at ffffffff811a3ffc
> #38 [ffff88177374bf60] sys_renameat2 at ffffffff811a408e
> #39 [ffff88177374bf70] sys_rename at ffffffff8119e51e
> #40 [ffff88177374bf80] system_call_fastpath at ffffffff815afa89
>
> Dave Chinner has properly pointed out that this is a deadlock in the
> reclaim code because ext4 doesn't submit pages which are marked by
> PG_writeback right away. The heuristic was introduced by e62e384e9da8
> ("memcg: prevent OOM with too many dirty pages") and it was applied
> only when may_enter_fs was specified. The code has been changed by
> c3b94f44fcb0 ("memcg: further prevent OOM with too many dirty pages")
> which removed the __GFP_FS restriction with the reasoning that we
> do not get into the fs code. But this is apparently not sufficient
> because the fs doesn't necessarily submit pages marked PG_writeback
> for IO right away.
>
> ext4_bio_write_page calls io_submit_add_bh but that doesn't necessarily
> submit the bio. Instead it tries to map more pages into the bio and
> mpage_map_one_extent might trigger a memcg charge which might end up
> waiting on a page which is marked PG_writeback but hasn't been submitted
> yet, so we would end up waiting for something that never finishes.
>
> Fix this issue by replacing the __GFP_IO check with a may_enter_fs check
> (for case 2) before we go to wait on the writeback. The page fault path,
> which is the only path that triggers the memcg oom killer since 3.12,
> shouldn't require GFP_NOFS and so we shouldn't reintroduce the premature
> OOM killer issue which was originally addressed by the heuristic.
>
> As per David Chinner, xfs has been doing a similar thing since 2.6.15, so
> ext4 is not the only affected filesystem. Moreover he notes:
> : For example: IO completion might require unwritten extent conversion
> : which executes filesystem transactions and GFP_NOFS allocations. The
> : writeback flag on the pages can not be cleared until unwritten
> : extent conversion completes. Hence memory reclaim cannot wait on
> : page writeback to complete in GFP_NOFS context because it is not
> : safe to do so, memcg reclaim or otherwise.
>
> Cc: [email protected] # 3.9+
> [[email protected]: corrected the control flow]
> Fixes: c3b94f44fcb0 ("memcg: further prevent OOM with too many dirty pages")
> Reported-by: Nikolay Borisov <[email protected]>
> Signed-off-by: Michal Hocko <[email protected]>
> Signed-off-by: Hugh Dickins <[email protected]>
> ---
>
> mm/vmscan.c | 16 ++++++----------
> 1 file changed, 6 insertions(+), 10 deletions(-)
>
> --- 4.2-rc5/mm/vmscan.c 2015-07-05 19:25:02.856131170 -0700
> +++ linux/mm/vmscan.c 2015-08-02 21:24:03.000614050 -0700
> @@ -973,22 +973,18 @@ static unsigned long shrink_page_list(st
> * caller can stall after page list has been processed.
> *
> * 2) Global or new memcg reclaim encounters a page that is
> - * not marked for immediate reclaim or the caller does not
> - * have __GFP_IO. In this case mark the page for immediate
> + * not marked for immediate reclaim, or the caller does not
> + * have __GFP_FS (or __GFP_IO if it's simply going to swap,
> + * not to fs). In this case mark the page for immediate
> * reclaim and continue scanning.
> *
> - * __GFP_IO is checked because a loop driver thread might
> + * Require may_enter_fs because we would wait on fs, which
> + * may not have submitted IO yet. And the loop driver might
> * enter reclaim, and deadlock if it waits on a page for
> * which it is needed to do the write (loop masks off
> * __GFP_IO|__GFP_FS for this reason); but more thought
> * would probably show more reasons.
> *
> - * Don't require __GFP_FS, since we're not going into the
> - * FS, just waiting on its writeback completion. Worryingly,
> - * ext4 gfs2 and xfs allocate pages with
> - * grab_cache_page_write_begin(,,AOP_FLAG_NOFS), so testing
> - * may_enter_fs here is liable to OOM on them.
> - *
> * 3) Legacy memcg encounters a page that is not already marked
> * PageReclaim. memcg does not have any dirty pages
> * throttling so we could easily OOM just because too many
> @@ -1005,7 +1001,7 @@ static unsigned long shrink_page_list(st
>
> /* Case 2 above */
> } else if (sane_reclaim(sc) ||
> - !PageReclaim(page) || !(sc->gfp_mask & __GFP_IO)) {
> + !PageReclaim(page) || !may_enter_fs) {
> /*
> * This is slightly racy - end_page_writeback()
> * might have just cleared PageReclaim, then
>

2015-08-13 02:14:23

by Hugh Dickins

Subject: Re: [PATCH] mm, vmscan: Do not wait for page writeback for GFP_NOFS allocations

On Fri, 7 Aug 2015, Nikolay Borisov wrote:
> On 08/04/2015 09:32 AM, Hugh Dickins wrote:
> >
> > And I've done quite a bit of testing. The loads that hung at the
> > weekend have been running nicely for 24 hours now, no problem with the
> > writeback hang and no problem with the dcache ENOTDIR issue. Though
> > I've no idea of what recent VM change turned this into a hot issue.
> >
>
> Are these production loads you are referring to that have been able to
> reproduce the issue or are they some synthetic ones which? So far I
> haven't been able to reproduce the issue using artifical loads so I'm
> interested in incorporating this into my test set setup if it's available?

Not production loads, no, just an artificial load. But not very good
at reproducing the hang: variable, but took hours, and only showed up
on one faster machine; I had to run the load for 2 days, then again 2
days, to feel confident that this hang was fixed.

And I'm sorry, but describing it in full detail is not something I find
time to do, in days or in years - partly because once I try to detail it,
I need to simplify this and streamline that, and it turns into something
else. As happened when I sent it, offlist, to someone in 2009: I looked
back at that with a view to forwarding to you, but a lot of the details
don't match what I reverted to or advanced to doing since.

Broadly, it's a pair of repeated make -j20 kernel builds, one in tmpfs,
one in ext4 over loop over tmpfs, in limited memory 700M with 1.5G swap.
And to test this particular hang, it needed to use memcg (of what's now
branded an "insane" variety, CONFIG_CGROUP_WRITEBACK=n): I was using 1G
not 700M ram for this, but 300M memcg limit and 250M soft limit on each
memcg that was hosting one of the pair of repeated builds. It can be
difficult to tune the right balance, swapping heavily but not OOMing:
it's a 2.6.24 tree I've gone back to building, because that's so much
smaller than current, with a greater proportion active in the build.

Hugh