2023-05-19 11:20:14

by Johannes Weiner

Subject: [PATCH] mm: compaction: avoid GFP_NOFS ABBA deadlock

During stress testing with higher-order allocations, a deadlock
scenario was observed in compaction: One GFP_NOFS allocation was
sleeping on mm/compaction.c::too_many_isolated(), while all CPUs in
the system were busy with compactors spinning on buffer locks held by
the sleeping GFP_NOFS allocation.

Reclaim is susceptible to this same deadlock; we fixed it by granting
GFP_NOFS allocations additional LRU isolation headroom, to ensure they
make forward progress while holding fs locks that other reclaimers
might acquire. Do the same here.

This code has been like this since compaction was initially merged,
and I only managed to trigger this with out-of-tree patches that
dramatically increase the contexts that do GFP_NOFS compaction. While
the issue is real, it seems theoretical in nature given existing
allocation sites. Worth fixing now, but no Fixes tag or stable CC.

Signed-off-by: Johannes Weiner <[email protected]>
---
mm/compaction.c | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)

v2:
- clarify too_many_isolated() comment (Mel)
- split isolation deadlock from no-contiguous-anon lockups as that's
a different scenario and deserves its own patch

diff --git a/mm/compaction.c b/mm/compaction.c
index c8bcdea15f5f..c9a4b6dffcf2 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -745,8 +745,9 @@ isolate_freepages_range(struct compact_control *cc,
}

/* Similar to reclaim, but different enough that they don't share logic */
-static bool too_many_isolated(pg_data_t *pgdat)
+static bool too_many_isolated(struct compact_control *cc)
{
+ pg_data_t *pgdat = cc->zone->zone_pgdat;
bool too_many;

unsigned long active, inactive, isolated;
@@ -758,6 +759,17 @@ static bool too_many_isolated(pg_data_t *pgdat)
isolated = node_page_state(pgdat, NR_ISOLATED_FILE) +
node_page_state(pgdat, NR_ISOLATED_ANON);

+ /*
+ * Allow GFP_NOFS to isolate past the limit set for regular
+ * compaction runs. This prevents an ABBA deadlock when other
+ * compactors have already isolated to the limit, but are
+ * blocked on filesystem locks held by the GFP_NOFS thread.
+ */
+ if (cc->gfp_mask & __GFP_FS) {
+ inactive >>= 3;
+ active >>= 3;
+ }
+
too_many = isolated > (inactive + active) / 2;
if (!too_many)
wake_throttle_isolated(pgdat);
@@ -806,7 +818,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
* list by either parallel reclaimers or compaction. If there are,
* delay for some time until fewer pages are isolated
*/
- while (unlikely(too_many_isolated(pgdat))) {
+ while (unlikely(too_many_isolated(cc))) {
/* stop isolation if there are still pages not migrated */
if (cc->nr_migratepages)
return -EAGAIN;
--
2.40.0



2023-05-24 09:24:14

by Vlastimil Babka

Subject: Re: [PATCH] mm: compaction: avoid GFP_NOFS ABBA deadlock

On 5/19/23 13:13, Johannes Weiner wrote:
> During stress testing with higher-order allocations, a deadlock
> scenario was observed in compaction: One GFP_NOFS allocation was
> sleeping on mm/compaction.c::too_many_isolated(), while all CPUs in
> the system were busy with compactors spinning on buffer locks held by
> the sleeping GFP_NOFS allocation.
>
> Reclaim is susceptible to this same deadlock; we fixed it by granting
> GFP_NOFS allocations additional LRU isolation headroom, to ensure they
> make forward progress while holding fs locks that other reclaimers
> might acquire. Do the same here.
>
> This code has been like this since compaction was initially merged,
> and I only managed to trigger this with out-of-tree patches that
> dramatically increase the contexts that do GFP_NOFS compaction. While
> the issue is real, it seems theoretical in nature given existing
> allocation sites. Worth fixing now, but no Fixes tag or stable CC.
>
> Signed-off-by: Johannes Weiner <[email protected]>

So IIUC the change is done by not giving GFP_NOFS extra headroom, but
instead restricting the headroom of __GFP_FS allocations. But the original
one was probably too generous anyway so it should be fine?

Acked-by: Vlastimil Babka <[email protected]>

> ---
> mm/compaction.c | 16 ++++++++++++++--
> 1 file changed, 14 insertions(+), 2 deletions(-)
>
> v2:
> - clarify too_many_isolated() comment (Mel)
> - split isolation deadlock from no-contiguous-anon lockups as that's
> a different scenario and deserves its own patch
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index c8bcdea15f5f..c9a4b6dffcf2 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -745,8 +745,9 @@ isolate_freepages_range(struct compact_control *cc,
> }
>
> /* Similar to reclaim, but different enough that they don't share logic */
> -static bool too_many_isolated(pg_data_t *pgdat)
> +static bool too_many_isolated(struct compact_control *cc)
> {
> + pg_data_t *pgdat = cc->zone->zone_pgdat;
> bool too_many;
>
> unsigned long active, inactive, isolated;
> @@ -758,6 +759,17 @@ static bool too_many_isolated(pg_data_t *pgdat)
> isolated = node_page_state(pgdat, NR_ISOLATED_FILE) +
> node_page_state(pgdat, NR_ISOLATED_ANON);
>
> + /*
> + * Allow GFP_NOFS to isolate past the limit set for regular
> + * compaction runs. This prevents an ABBA deadlock when other
> + * compactors have already isolated to the limit, but are
> + * blocked on filesystem locks held by the GFP_NOFS thread.
> + */
> + if (cc->gfp_mask & __GFP_FS) {
> + inactive >>= 3;
> + active >>= 3;
> + }
> +
> too_many = isolated > (inactive + active) / 2;
> if (!too_many)
> wake_throttle_isolated(pgdat);
> @@ -806,7 +818,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> * list by either parallel reclaimers or compaction. If there are,
> * delay for some time until fewer pages are isolated
> */
> - while (unlikely(too_many_isolated(pgdat))) {
> + while (unlikely(too_many_isolated(cc))) {
> /* stop isolation if there are still pages not migrated */
> if (cc->nr_migratepages)
> return -EAGAIN;


2023-05-24 16:30:35

by Johannes Weiner

Subject: Re: [PATCH] mm: compaction: avoid GFP_NOFS ABBA deadlock

On Wed, May 24, 2023 at 11:21:43AM +0200, Vlastimil Babka wrote:
> On 5/19/23 13:13, Johannes Weiner wrote:
> > During stress testing with higher-order allocations, a deadlock
> > scenario was observed in compaction: One GFP_NOFS allocation was
> > sleeping on mm/compaction.c::too_many_isolated(), while all CPUs in
> > the system were busy with compactors spinning on buffer locks held by
> > the sleeping GFP_NOFS allocation.
> >
> > Reclaim is susceptible to this same deadlock; we fixed it by granting
> > GFP_NOFS allocations additional LRU isolation headroom, to ensure they
> > make forward progress while holding fs locks that other reclaimers
> > might acquire. Do the same here.
> >
> > This code has been like this since compaction was initially merged,
> > and I only managed to trigger this with out-of-tree patches that
> > dramatically increase the contexts that do GFP_NOFS compaction. While
> > the issue is real, it seems theoretical in nature given existing
> > allocation sites. Worth fixing now, but no Fixes tag or stable CC.
>
> > Signed-off-by: Johannes Weiner <[email protected]>
>
> So IIUC the change is done by not giving GFP_NOFS extra headroom, but
> instead restricting the headroom of __GFP_FS allocations. But the original
> one was probably too generous anyway so it should be fine?

Yes, the original limit is generally half the LRU, which is quite high.

The new limit is 1/16th of the LRU for regular compactors and half for
GFP_NOFS ones. Note that I didn't make these up; they're stolen from
too_many_isolated() in vmscan.c. I figured those are proven values, and
there's no sense in deviating from them until we have a reason to do so.

> Acked-by: Vlastimil Babka <[email protected]>

Thanks!

2023-05-25 08:47:51

by Mel Gorman

Subject: Re: [PATCH] mm: compaction: avoid GFP_NOFS ABBA deadlock

On Fri, May 19, 2023 at 01:13:59PM +0200, Johannes Weiner wrote:
> During stress testing with higher-order allocations, a deadlock
> scenario was observed in compaction: One GFP_NOFS allocation was
> sleeping on mm/compaction.c::too_many_isolated(), while all CPUs in
> the system were busy with compactors spinning on buffer locks held by
> the sleeping GFP_NOFS allocation.
>
> Reclaim is susceptible to this same deadlock; we fixed it by granting
> GFP_NOFS allocations additional LRU isolation headroom, to ensure they
> make forward progress while holding fs locks that other reclaimers
> might acquire. Do the same here.
>
> This code has been like this since compaction was initially merged,
> and I only managed to trigger this with out-of-tree patches that
> dramatically increase the contexts that do GFP_NOFS compaction. While
> the issue is real, it seems theoretical in nature given existing
> allocation sites. Worth fixing now, but no Fixes tag or stable CC.
>
> Signed-off-by: Johannes Weiner <[email protected]>

Acked-by: Mel Gorman <[email protected]>

> ---
> mm/compaction.c | 16 ++++++++++++++--
> 1 file changed, 14 insertions(+), 2 deletions(-)
>
> v2:
> - clarify too_many_isolated() comment (Mel)
> - split isolation deadlock from no-contiguous-anon lockups as that's
> a different scenario and deserves its own patch
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index c8bcdea15f5f..c9a4b6dffcf2 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -745,8 +745,9 @@ isolate_freepages_range(struct compact_control *cc,
> }
>
> /* Similar to reclaim, but different enough that they don't share logic */
> -static bool too_many_isolated(pg_data_t *pgdat)
> +static bool too_many_isolated(struct compact_control *cc)
> {
> + pg_data_t *pgdat = cc->zone->zone_pgdat;
> bool too_many;
>
> unsigned long active, inactive, isolated;
> @@ -758,6 +759,17 @@ static bool too_many_isolated(pg_data_t *pgdat)
> isolated = node_page_state(pgdat, NR_ISOLATED_FILE) +
> node_page_state(pgdat, NR_ISOLATED_ANON);
>
> + /*
> + * Allow GFP_NOFS to isolate past the limit set for regular
> + * compaction runs. This prevents an ABBA deadlock when other
> + * compactors have already isolated to the limit, but are
> + * blocked on filesystem locks held by the GFP_NOFS thread.
> + */
> + if (cc->gfp_mask & __GFP_FS) {
> + inactive >>= 3;
> + active >>= 3;
> + }
> +
> too_many = isolated > (inactive + active) / 2;
> if (!too_many)
> wake_throttle_isolated(pgdat);
> @@ -806,7 +818,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> * list by either parallel reclaimers or compaction. If there are,
> * delay for some time until fewer pages are isolated
> */
> - while (unlikely(too_many_isolated(pgdat))) {
> + while (unlikely(too_many_isolated(cc))) {
> /* stop isolation if there are still pages not migrated */
> if (cc->nr_migratepages)
> return -EAGAIN;
> --
> 2.40.0
>

--
Mel Gorman
SUSE Labs