2020-11-05 08:58:24

by Alex Shi

Subject: [PATCH v21 15/19] mm/compaction: do page isolation first in compaction

Currently, compaction takes the lru_lock and then does page isolation,
which works fine with pgdat->lru_lock, since any page isolation would
compete for that lock. If we want to switch to a per-memcg lru_lock, we
have to isolate the page before taking the lru_lock; isolation then
blocks any change of the page's memcg, since memcg changes rely on page
isolation too. Then we can safely use the per-memcg lru_lock later.

The new page isolation uses the previously introduced TestClearPageLRU()
plus pgdat lru locking, which will be changed to the memcg lru lock later.
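
In outline, the isolation step in isolate_migratepages_block() now
happens before the lru_lock is taken. A simplified sketch of the new
ordering (failure paths abbreviated; the diff below has the real thing):

	/*
	 * Take a reference first: PageLRU must not be cleared while
	 * the page may be freed elsewhere -- the page release code
	 * relies on it.
	 */
	if (unlikely(!get_page_unless_zero(page)))
		goto isolate_fail;

	if (__isolate_lru_page_prepare(page, isolate_mode) != 0)
		goto isolate_fail_put;

	/* The isolation itself: whoever clears PageLRU owns the page */
	if (!TestClearPageLRU(page))
		goto isolate_fail_put;

	/* only now take pgdat->lru_lock (later: the memcg lru lock) */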

Hugh Dickins <[email protected]> fixed the following bugs in an early
version of this patch:

Fix lots of crashes under compaction load: isolate_migratepages_block()
must clean up appropriately when rejecting a page, setting PageLRU again
if it had been cleared; and a put_page() after get_page_unless_zero()
cannot safely be done while holding locked_lruvec - it may turn out to
be the final put_page(), which will take an lruvec lock when PageLRU.
And move __isolate_lru_page_prepare back after get_page_unless_zero to
make trylock_page() safe:
trylock_page() is not safe to use at this time: its setting PG_locked
can race with the page being freed or allocated ("Bad page"), and can
also erase flags being set by one of those "sole owners" of a freshly
allocated page who use non-atomic __SetPageFlag().
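
Concretely, the rejection path now drops the lru_lock before dropping
the page reference (a sketch matching the isolate_fail_put hunk below):

isolate_fail_put:
	/* Avoid potential deadlock in freeing page under lru_lock */
	if (locked) {
		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
		locked = false;
	}
	put_page(page);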

Suggested-by: Johannes Weiner <[email protected]>
Signed-off-by: Alex Shi <[email protected]>
Acked-by: Hugh Dickins <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
include/linux/swap.h | 2 +-
mm/compaction.c | 42 +++++++++++++++++++++++++++++++++---------
mm/vmscan.c | 43 ++++++++++++++++++++++---------------------
3 files changed, 56 insertions(+), 31 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 5e1e967c225f..596bc2f4d9b0 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -356,7 +356,7 @@ extern void lru_cache_add_inactive_or_unevictable(struct page *page,
extern unsigned long zone_reclaimable_pages(struct zone *zone);
extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
gfp_t gfp_mask, nodemask_t *mask);
-extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
+extern int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode);
extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
unsigned long nr_pages,
gfp_t gfp_mask,
diff --git a/mm/compaction.c b/mm/compaction.c
index ee1f8439369e..7b1cf48884dd 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -886,6 +886,7 @@ static bool too_many_isolated(pg_data_t *pgdat)
if (!valid_page && IS_ALIGNED(low_pfn, pageblock_nr_pages)) {
if (!cc->ignore_skip_hint && get_pageblock_skip(page)) {
low_pfn = end_pfn;
+ page = NULL;
goto isolate_abort;
}
valid_page = page;
@@ -967,6 +968,21 @@ static bool too_many_isolated(pg_data_t *pgdat)
if (!(cc->gfp_mask & __GFP_FS) && page_mapping(page))
goto isolate_fail;

+ /*
+ * Be careful not to clear PageLRU until after we're
+ * sure the page is not being freed elsewhere -- the
+ * page release code relies on it.
+ */
+ if (unlikely(!get_page_unless_zero(page)))
+ goto isolate_fail;
+
+ if (__isolate_lru_page_prepare(page, isolate_mode) != 0)
+ goto isolate_fail_put;
+
+ /* Try isolate the page */
+ if (!TestClearPageLRU(page))
+ goto isolate_fail_put;
+
/* If we already hold the lock, we can skip some rechecking */
if (!locked) {
locked = compact_lock_irqsave(&pgdat->lru_lock,
@@ -979,10 +995,6 @@ static bool too_many_isolated(pg_data_t *pgdat)
goto isolate_abort;
}

- /* Recheck PageLRU and PageCompound under lock */
- if (!PageLRU(page))
- goto isolate_fail;
-
/*
* Page become compound since the non-locked check,
* and it's on LRU. It can only be a THP so the order
@@ -990,16 +1002,13 @@ static bool too_many_isolated(pg_data_t *pgdat)
*/
if (unlikely(PageCompound(page) && !cc->alloc_contig)) {
low_pfn += compound_nr(page) - 1;
- goto isolate_fail;
+ SetPageLRU(page);
+ goto isolate_fail_put;
}
}

lruvec = mem_cgroup_page_lruvec(page, pgdat);

- /* Try isolate the page */
- if (__isolate_lru_page(page, isolate_mode) != 0)
- goto isolate_fail;
-
/* The whole page is taken off the LRU; skip the tail pages. */
if (PageCompound(page))
low_pfn += compound_nr(page) - 1;
@@ -1028,6 +1037,15 @@ static bool too_many_isolated(pg_data_t *pgdat)
}

continue;
+
+isolate_fail_put:
+ /* Avoid potential deadlock in freeing page under lru_lock */
+ if (locked) {
+ spin_unlock_irqrestore(&pgdat->lru_lock, flags);
+ locked = false;
+ }
+ put_page(page);
+
isolate_fail:
if (!skip_on_failure)
continue;
@@ -1064,9 +1082,15 @@ static bool too_many_isolated(pg_data_t *pgdat)
if (unlikely(low_pfn > end_pfn))
low_pfn = end_pfn;

+ page = NULL;
+
isolate_abort:
if (locked)
spin_unlock_irqrestore(&pgdat->lru_lock, flags);
+ if (page) {
+ SetPageLRU(page);
+ put_page(page);
+ }

/*
* Updated the cached scanner pfn once the pageblock has been scanned
diff --git a/mm/vmscan.c b/mm/vmscan.c
index ab7a0104d1e1..0be55d875fde 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1540,7 +1540,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
*
* returns 0 on success, -ve errno on failure.
*/
-int __isolate_lru_page(struct page *page, isolate_mode_t mode)
+int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
{
int ret = -EBUSY;

@@ -1592,22 +1592,9 @@ int __isolate_lru_page(struct page *page, isolate_mode_t mode)
if ((mode & ISOLATE_UNMAPPED) && page_mapped(page))
return ret;

- if (likely(get_page_unless_zero(page))) {
- /*
- * Be careful not to clear PageLRU until after we're
- * sure the page is not being freed elsewhere -- the
- * page release code relies on it.
- */
- if (TestClearPageLRU(page))
- ret = 0;
- else
- put_page(page);
- }
-
- return ret;
+ return 0;
}

-
/*
* Update LRU sizes after isolating pages. The LRU size updates must
* be complete before mem_cgroup_update_lru_size due to a sanity check.
@@ -1687,20 +1674,34 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
* only when the page is being freed somewhere else.
*/
scan += nr_pages;
- switch (__isolate_lru_page(page, mode)) {
+ switch (__isolate_lru_page_prepare(page, mode)) {
case 0:
+ /*
+ * Be careful not to clear PageLRU until after we're
+ * sure the page is not being freed elsewhere -- the
+ * page release code relies on it.
+ */
+ if (unlikely(!get_page_unless_zero(page)))
+ goto busy;
+
+ if (!TestClearPageLRU(page)) {
+ /*
+ * This page may in other isolation path,
+ * but we still hold lru_lock.
+ */
+ put_page(page);
+ goto busy;
+ }
+
nr_taken += nr_pages;
nr_zone_taken[page_zonenum(page)] += nr_pages;
list_move(&page->lru, dst);
break;

- case -EBUSY:
+ default:
+busy:
/* else it is being freed elsewhere */
list_move(&page->lru, src);
- continue;
-
- default:
- BUG();
}
}

--
1.8.3.1


2020-11-11 17:14:41

by Vlastimil Babka

Subject: Re: [PATCH v21 15/19] mm/compaction: do page isolation first in compaction

On 11/5/20 9:55 AM, Alex Shi wrote:
> Currently, compaction takes the lru_lock and then does page isolation,
> which works fine with pgdat->lru_lock, since any page isolation would
> compete for that lock. If we want to switch to a per-memcg lru_lock, we
> have to isolate the page before taking the lru_lock; isolation then
> blocks any change of the page's memcg, since memcg changes rely on page
> isolation too. Then we can safely use the per-memcg lru_lock later.
>
> The new page isolation uses the previously introduced TestClearPageLRU()
> plus pgdat lru locking, which will be changed to the memcg lru lock later.
>
> Hugh Dickins <[email protected]> fixed the following bugs in an early
> version of this patch:
>
> Fix lots of crashes under compaction load: isolate_migratepages_block()
> must clean up appropriately when rejecting a page, setting PageLRU again
> if it had been cleared; and a put_page() after get_page_unless_zero()
> cannot safely be done while holding locked_lruvec - it may turn out to
> be the final put_page(), which will take an lruvec lock when PageLRU.
> And move __isolate_lru_page_prepare back after get_page_unless_zero to
> make trylock_page() safe:
> trylock_page() is not safe to use at this time: its setting PG_locked
> can race with the page being freed or allocated ("Bad page"), and can
> also erase flags being set by one of those "sole owners" of a freshly
> allocated page who use non-atomic __SetPageFlag().
>
> Suggested-by: Johannes Weiner <[email protected]>
> Signed-off-by: Alex Shi <[email protected]>
> Acked-by: Hugh Dickins <[email protected]>
> Acked-by: Johannes Weiner <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Matthew Wilcox <[email protected]>
> Cc: [email protected]
> Cc: [email protected]

Acked-by: Vlastimil Babka <[email protected]>

A question below:

> @@ -979,10 +995,6 @@ static bool too_many_isolated(pg_data_t *pgdat)
> goto isolate_abort;
> }
>
> - /* Recheck PageLRU and PageCompound under lock */
> - if (!PageLRU(page))
> - goto isolate_fail;
> -
> /*
> * Page become compound since the non-locked check,
> * and it's on LRU. It can only be a THP so the order
> @@ -990,16 +1002,13 @@ static bool too_many_isolated(pg_data_t *pgdat)
> */
> if (unlikely(PageCompound(page) && !cc->alloc_contig)) {
> low_pfn += compound_nr(page) - 1;
> - goto isolate_fail;
> + SetPageLRU(page);
> + goto isolate_fail_put;
> }

IIUC the danger here is that khugepaged will collapse a THP. For that,
__collapse_huge_page_isolate() has to succeed with isolate_lru_page(). Under
the new scheme, that shouldn't be possible, right? If that's correct, can we
remove this part?

2020-11-12 05:38:37

by Hugh Dickins

Subject: Re: [PATCH v21 15/19] mm/compaction: do page isolation first in compaction

On Wed, 11 Nov 2020, Vlastimil Babka wrote:
> On 11/5/20 9:55 AM, Alex Shi wrote:
>
> > @@ -979,10 +995,6 @@ static bool too_many_isolated(pg_data_t *pgdat)
> > goto isolate_abort;
> > }
> > - /* Recheck PageLRU and PageCompound under lock */
> > - if (!PageLRU(page))
> > - goto isolate_fail;
> > -
> > /*
> > * Page become compound since the non-locked check,
> > * and it's on LRU. It can only be a THP so the order
> > @@ -990,16 +1002,13 @@ static bool too_many_isolated(pg_data_t *pgdat)

Completely off-topic, and won't matter at all when Andrew rediffs into
mmotm: but isn't it weird that this is showing "too_many_isolated(",
when actually the function is isolate_migratepages_block()?

> > */
> > if (unlikely(PageCompound(page) &&
> > !cc->alloc_contig)) {
> > low_pfn += compound_nr(page) - 1;
> > - goto isolate_fail;
> > + SetPageLRU(page);
> > + goto isolate_fail_put;
> > }
>
> IIUC the danger here is that khugepaged will collapse a THP. For that,
> __collapse_huge_page_isolate() has to succeed with isolate_lru_page(). Under
> the new scheme, that shouldn't be possible, right? If that's correct, can we
> remove this part?

I don't think so. A preliminary check for PageCompound was made much
higher up, before taking a reference on the page, but it can easily have
become PageCompound since then (when racing prep_new_page() calls
prep_compound_page()).

And __collapse_huge_page_isolate() does not turn a non-compound page
into a compound page: it isolates small pages before copying them into
the compound page (in the usual case: I can see there's also allowance
for PageCompound there too, which will do something different).
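
To picture the window (a hypothetical interleaving, for illustration
only):

	/*
	 * compaction scanner                     page allocator
	 *
	 * PageCompound(page)? -> false
	 *   (early, pre-reference check)
	 *                                        prep_new_page()
	 *                                          prep_compound_page()
	 * get_page_unless_zero(page)
	 * ... take lru_lock ...
	 * PageCompound(page)? -> true
	 *   (so the locked recheck must stay)
	 */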

Hugh

2020-11-12 05:41:52

by Alex Shi

Subject: Re: [PATCH v21 15/19] mm/compaction: do page isolation first in compaction



On 2020/11/12 10:28 AM, Hugh Dickins wrote:
>>> * Page become compound since the non-locked check,
>>> * and it's on LRU. It can only be a THP so the order
>>> @@ -990,16 +1002,13 @@ static bool too_many_isolated(pg_data_t *pgdat)
> Completely off-topic, and won't matter at all when Andrew rediffs into
> mmotm: but isn't it weird that this is showing "too_many_isolated(",
> when actually the function is isolate_migratepages_block()?
>

My git version is too old for this. Thanks for the reminder; the latest
git shows the correct function name.

Thanks

2020-11-12 11:27:52

by Vlastimil Babka

Subject: Re: [PATCH v21 15/19] mm/compaction: do page isolation first in compaction

On 11/12/20 3:28 AM, Hugh Dickins wrote:
> On Wed, 11 Nov 2020, Vlastimil Babka wrote:
>> On 11/5/20 9:55 AM, Alex Shi wrote:
>>
>> > @@ -979,10 +995,6 @@ static bool too_many_isolated(pg_data_t *pgdat)
>> > goto isolate_abort;
>> > }
>> > - /* Recheck PageLRU and PageCompound under lock */
>> > - if (!PageLRU(page))
>> > - goto isolate_fail;
>> > -
>> > /*
>> > * Page become compound since the non-locked check,
>> > * and it's on LRU. It can only be a THP so the order
>> > @@ -990,16 +1002,13 @@ static bool too_many_isolated(pg_data_t *pgdat)
>
> Completely off-topic, and won't matter at all when Andrew rediffs into
> mmotm: but isn't it weird that this is showing "too_many_isolated(",
> when actually the function is isolate_migratepages_block()?
>
>> > */
>> > if (unlikely(PageCompound(page) &&
>> > !cc->alloc_contig)) {
>> > low_pfn += compound_nr(page) - 1;
>> > - goto isolate_fail;
>> > + SetPageLRU(page);
>> > + goto isolate_fail_put;
>> > }
>>
>> IIUC the danger here is that khugepaged will collapse a THP. For that,
>> __collapse_huge_page_isolate() has to succeed with isolate_lru_page(). Under
>> the new scheme, that shouldn't be possible, right? If that's correct, can we
>> remove this part?
>
> I don't think so. A preliminary check for PageCompound was made much
> higher up, before taking a reference on the page, but it can easily have
> become PageCompound since then (when racing prep_new_page() calls
> prep_compound_page()).
>
> And __collapse_huge_page_isolate() does not turn a non-compound page
> into a compound page: it isolates small pages before copying them into
> the compound page (in the usual case: I can see there's also allowance
> for PageCompound there too, which will do something different).

Right, on both points. I got too confused.

> Hugh
>