2023-03-16 11:07:09

by Baolin Wang

Subject: [PATCH v2 1/2] mm: compaction: consider the number of scanning compound pages in isolate fail path

Commit b717d6b93b54 ("mm: compaction: include compound page count
for scanning in pageblock isolation") added compound page statistics
for scanning in pageblock isolation, to make sure the number of scanned
pages is always larger than the number of isolated pages when isolating
a migratable or free pageblock.

However, when isolation fails while scanning a migratable or free
pageblock, the failure path does not account for the compound pages
that were scanned, so tracepoints and vmstats can report an incorrect
number of scanned pages and give a misleading picture of the page
scanning pressure in memory compaction.

Thus, also count the scanned compound pages in the isolation failure
path to keep the statistics accurate.
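For context, this is roughly where nr_scanned ends up once a pageblock
scan finishes (a simplified sketch rather than the exact code; call
sites and counter names may differ between kernel versions):

	/* end of isolate_migratepages_block(), simplified sketch */
	trace_mm_compaction_isolate_migratepages(start_pfn, low_pfn,
						 nr_scanned, nr_isolated);
	/* total_migrate_scanned is later reported as compact_migrate_scanned
	   in /proc/vmstat */
	cc->total_migrate_scanned += nr_scanned;

Compound pages skipped without bumping nr_scanned therefore under-report
both the tracepoint and the vmstat counter.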

Signed-off-by: Baolin Wang <[email protected]>
---
Changes from v1:
- Move the compound pages statistics after sanity order checking.
---
mm/compaction.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 5a9501e0ae01..7e645cdfc2e9 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -586,6 +586,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 			if (likely(order < MAX_ORDER)) {
 				blockpfn += (1UL << order) - 1;
 				cursor += (1UL << order) - 1;
+				nr_scanned += (1UL << order) - 1;
 			}
 			goto isolate_fail;
 		}
@@ -904,6 +905,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 				if (ret == -EBUSY)
 					ret = 0;
 				low_pfn += compound_nr(page) - 1;
+				nr_scanned += compound_nr(page) - 1;
 				goto isolate_fail;
 			}

@@ -938,8 +940,10 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			 * a valid page order. Consider only values in the
 			 * valid order range to prevent low_pfn overflow.
 			 */
-			if (freepage_order > 0 && freepage_order < MAX_ORDER)
+			if (freepage_order > 0 && freepage_order < MAX_ORDER) {
 				low_pfn += (1UL << freepage_order) - 1;
+				nr_scanned += (1UL << freepage_order) - 1;
+			}
 			continue;
 		}

@@ -954,8 +958,10 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (PageCompound(page) && !cc->alloc_contig) {
 			const unsigned int order = compound_order(page);
 
-			if (likely(order < MAX_ORDER))
+			if (likely(order < MAX_ORDER)) {
 				low_pfn += (1UL << order) - 1;
+				nr_scanned += (1UL << order) - 1;
+			}
 			goto isolate_fail;
 		}

@@ -1077,6 +1083,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			 */
 			if (unlikely(PageCompound(page) && !cc->alloc_contig)) {
 				low_pfn += compound_nr(page) - 1;
+				nr_scanned += compound_nr(page) - 1;
 				SetPageLRU(page);
 				goto isolate_fail_put;
 			}
--
2.27.0



2023-03-16 11:07:13

by Baolin Wang

Subject: [PATCH v2 2/2] mm: compaction: fix the possible deadlock when isolating hugetlb pages

When trying to isolate a migratable pageblock, it can contain several
normal pages and several hugetlb pages (e.g. CONT-PTE 64K hugetlb on
arm64). That means we may still hold the lru lock taken for a normal
page when we go on to isolate the next hugetlb page via
isolate_or_dissolve_huge_page() in the same migratable pageblock.

However, isolate_or_dissolve_huge_page() may allocate a new hugetlb
page and dissolve the old one via alloc_and_dissolve_hugetlb_folio()
if the hugetlb page's refcount is zero. That means we can enter the
direct compaction path to allocate a new hugetlb page while still
holding the current lru lock, which may cause a deadlock.
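The problematic recursion, roughly (only isolate_or_dissolve_huge_page()
and alloc_and_dissolve_hugetlb_folio() are named above; the intermediate
callees below are an illustrative assumption and may differ in detail):

    isolate_migratepages_block()                 /* lru lock may already be held */
        isolate_or_dissolve_huge_page()
            alloc_and_dissolve_hugetlb_folio()   /* allocates a new hugetlb page */
                __alloc_pages()                  /* may enter direct compaction */
                    compact_zone()
                        isolate_migratepages_block()  /* wants an lru lock again */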

To avoid this possible deadlock, release the lru lock before trying to
isolate a hugetlb page. Moreover, it does not make sense to take the lru
lock when isolating a hugetlb page, which is not on the lru list anyway.

Fixes: 369fa227c219 ("mm: make alloc_contig_range handle free hugetlb pages")
Signed-off-by: Baolin Wang <[email protected]>
Reviewed-by: Vlastimil Babka <[email protected]>
Reviewed-by: Mike Kravetz <[email protected]>
---
Changes from v1:
- Collect reviewed tags. Thanks Mike and Vlastimil.
---
mm/compaction.c | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/mm/compaction.c b/mm/compaction.c
index 7e645cdfc2e9..3df076716691 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -894,6 +894,11 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		}
 
 		if (PageHuge(page) && cc->alloc_contig) {
+			if (locked) {
+				unlock_page_lruvec_irqrestore(locked, flags);
+				locked = NULL;
+			}
+
 			ret = isolate_or_dissolve_huge_page(page, &cc->migratepages);
 
 			/*
--
2.27.0


2023-03-16 12:12:41

by Vlastimil Babka

Subject: Re: [PATCH v2 1/2] mm: compaction: consider the number of scanning compound pages in isolate fail path

On 3/16/23 12:06, Baolin Wang wrote:
> Commit b717d6b93b54 ("mm: compaction: include compound page count
> for scanning in pageblock isolation") added compound page statistics
> for scanning in pageblock isolation, to make sure the number of scanned
> pages is always larger than the number of isolated pages when isolating
> a migratable or free pageblock.
>
> However, when isolation fails while scanning a migratable or free
> pageblock, the failure path does not account for the compound pages
> that were scanned, so tracepoints and vmstats can report an incorrect
> number of scanned pages and give a misleading picture of the page
> scanning pressure in memory compaction.
>
> Thus, also count the scanned compound pages in the isolation failure
> path to keep the statistics accurate.
>
> Signed-off-by: Baolin Wang <[email protected]>

Reviewed-by: Vlastimil Babka <[email protected]>

Thanks!


2023-04-05 10:37:08

by Mel Gorman

Subject: Re: [PATCH v2 1/2] mm: compaction: consider the number of scanning compound pages in isolate fail path

On Thu, Mar 16, 2023 at 07:06:46PM +0800, Baolin Wang wrote:
> Commit b717d6b93b54 ("mm: compaction: include compound page count
> for scanning in pageblock isolation") added compound page statistics
> for scanning in pageblock isolation, to make sure the number of scanned
> pages is always larger than the number of isolated pages when isolating
> a migratable or free pageblock.
>
> However, when isolation fails while scanning a migratable or free
> pageblock, the failure path does not account for the compound pages
> that were scanned, so tracepoints and vmstats can report an incorrect
> number of scanned pages and give a misleading picture of the page
> scanning pressure in memory compaction.
>
> Thus, also count the scanned compound pages in the isolation failure
> path to keep the statistics accurate.
>
> Signed-off-by: Baolin Wang <[email protected]>

Acked-by: Mel Gorman <[email protected]>

However, the patch highlights weaknesses in the tracepoints and how
useful they are.

Minimally, I think that the change might be misleading when comparing
tracepoints across kernel versions as it'll be necessary to check the exact
meaning of nr_scanned for a given kernel version. That's not a killer problem
as such, just a hazard if using an analysis tool comparing kernel versions.

As an example, consider this

	if (PageCompound(page)) {
		const unsigned int order = compound_order(page);

		if (likely(order < MAX_ORDER)) {
			blockpfn += (1UL << order) - 1;
			cursor += (1UL << order) - 1;
			nr_scanned += compound_nr(page) - 1;	<<< patch adds
		}
		goto isolate_fail;
	}

Only the head page is "scanned", the tail pages are not scanned so
accounting for them as "scanned" is not an accurate reflection of the
amount of work done. Isolation is different because the number of compound
pages isolated is a prediction of how much work is necessary to migrate that
page as it's obviously more work to copy 2M of data than 4K. The migrated
pages combined with isolation then can measure efficiency of isolation
vs migration although imperfectly as isolation is a span while migration
probably fails at the head page.
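(For scale, with a 4K base page size a 2M THP spans 2M / 4K = 512 base
pages, so migrating it copies 512 times as much data as migrating a
single page.)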

The same applies when skipping buddies, the tail pages are not scanned
so skipping them is not additional work.

Everything depends on what the tracepoint is being used for. If it's a
measure of work done, then accounting for skipped tail pages over-estimates
the amount of work. However, if the intent is to measure efficiency of
isolation vs migration then the "span" scanned is more useful.
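As a concrete illustration of the "efficiency" reading, using the
nr_scanned and nr_taken fields of the mm_compaction_isolate_migratepages
tracepoint (field names as of this writing; the numbers are invented):

	/* Post-processing illustration only; values are made up. */
	#include <stdio.h>

	int main(void)
	{
		unsigned long nr_scanned = 576;	/* span covered, incl. skipped tail pages */
		unsigned long nr_taken = 32;	/* base pages actually isolated */

		/* share of the scanned span that was actually isolated */
		printf("isolated/scanned = %.1f%%\n", 100.0 * nr_taken / nr_scanned);
		return 0;
	}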

None of this kills the patch, it only notes that the tracepoints as-is
probably cannot answer all relevant questions, most of which are only
relevant when making a modification to compaction in general. The patch
means that an unspecified pressure metric can be derived (maybe interesting
to sysadmins) but loses a metric about time spent on scanning (maybe
interesting to developers writing a patch). Of those concerns, sysadmins
are probably more common, so the patch is acceptable, but some care will be
needed when modifying the tracepoints further if it enables one type of
analysis at the cost of another.

--
Mel Gorman
SUSE Labs

2023-04-05 10:39:52

by Mel Gorman

Subject: Re: [PATCH v2 2/2] mm: compaction: fix the possible deadlock when isolating hugetlb pages

On Thu, Mar 16, 2023 at 07:06:47PM +0800, Baolin Wang wrote:
> When trying to isolate a migratable pageblock, it can contain several
> normal pages and several hugetlb pages (e.g. CONT-PTE 64K hugetlb on
> arm64). That means we may still hold the lru lock taken for a normal
> page when we go on to isolate the next hugetlb page via
> isolate_or_dissolve_huge_page() in the same migratable pageblock.
>
> However, isolate_or_dissolve_huge_page() may allocate a new hugetlb
> page and dissolve the old one via alloc_and_dissolve_hugetlb_folio()
> if the hugetlb page's refcount is zero. That means we can enter the
> direct compaction path to allocate a new hugetlb page while still
> holding the current lru lock, which may cause a deadlock.
>
> To avoid this possible deadlock, release the lru lock before trying to
> isolate a hugetlb page. Moreover, it does not make sense to take the
> lru lock when isolating a hugetlb page, which is not on the lru list
> anyway.
>
> Fixes: 369fa227c219 ("mm: make alloc_contig_range handle free hugetlb pages")
> Signed-off-by: Baolin Wang <[email protected]>
> Reviewed-by: Vlastimil Babka <[email protected]>
> Reviewed-by: Mike Kravetz <[email protected]>

Acked-by: Mel Gorman <[email protected]>

--
Mel Gorman
SUSE Labs

2023-04-12 03:18:48

by Baolin Wang

Subject: Re: [PATCH v2 1/2] mm: compaction: consider the number of scanning compound pages in isolate fail path



On 4/5/2023 6:31 PM, Mel Gorman wrote:
> On Thu, Mar 16, 2023 at 07:06:46PM +0800, Baolin Wang wrote:
>> Commit b717d6b93b54 ("mm: compaction: include compound page count
>> for scanning in pageblock isolation") added compound page statistics
>> for scanning in pageblock isolation, to make sure the number of scanned
>> pages is always larger than the number of isolated pages when isolating
>> a migratable or free pageblock.
>>
>> However, when isolation fails while scanning a migratable or free
>> pageblock, the failure path does not account for the compound pages
>> that were scanned, so tracepoints and vmstats can report an incorrect
>> number of scanned pages and give a misleading picture of the page
>> scanning pressure in memory compaction.
>>
>> Thus, also count the scanned compound pages in the isolation failure
>> path to keep the statistics accurate.
>>
>> Signed-off-by: Baolin Wang <[email protected]>
>
> Acked-by: Mel Gorman <[email protected]>

Thanks Mel.

>
> However, the patch highlights weaknesses in the tracepoints and how
> useful they are.
>
> Minimally, I think that the change might be misleading when comparing
> tracepoints across kernel versions as it'll be necessary to check the exact
> meaning of nr_scanned for a given kernel version. That's not a killer problem
> as such, just a hazard if using an analysis tool comparing kernel versions.
>
> As an example, consider this
>
> 	if (PageCompound(page)) {
> 		const unsigned int order = compound_order(page);
>
> 		if (likely(order < MAX_ORDER)) {
> 			blockpfn += (1UL << order) - 1;
> 			cursor += (1UL << order) - 1;
> 			nr_scanned += compound_nr(page) - 1;	<<< patch adds
> 		}
> 		goto isolate_fail;
> 	}
>
> Only the head page is "scanned", the tail pages are not scanned so
> accounting for them as "scanned" is not an accurate reflection of the
> amount of work done. Isolation is different because the number of compound
> pages isolated is a prediction of how much work is necessary to migrate that
> page as it's obviously more work to copy 2M of data than 4K. The migrated
> pages combined with isolation then can measure efficiency of isolation
> vs migration although imperfectly as isolation is a span while migration
> probably fails at the head page.
>
> The same applies when skipping buddies, the tail pages are not scanned
> so skipping them is not additional work.
>
> Everything depends on what the tracepoint is being used for. If it's a
> measure of work done, then accounting for skipped tail pages over-estimates
> the amount of work. However, if the intent is to measure efficiency of
> isolation vs migration then the "span" scanned is more useful.

Yes, we are more concerned about the efficiency of isolation vs migration.

> None of this kills the patch, it only notes that the tracepoints as-is
> probably cannot answer all relevant questions, most of which are only
> relevant when making a modification to compaction in general. The patch
> means that an unspecified pressure metric can be derived (maybe interesting
> to sysadmins) but loses a metric about time spent on scanning (maybe
> interesting to developers writing a patch). Of those concerns, sysadmins
> are probably more common, so the patch is acceptable, but some care will be
> needed when modifying the tracepoints further if it enables one type of
> analysis at the cost of another.

Understood, and thanks for your excellent explanation.