Hi all, this series contains some assorted cleanups and fixes to
compaction. Details can be found in the respective patches.
This patchset is based on another series of cleanups to locking in
compaction at [1]. Thanks!
v1->v2:
- Add a comment to skip_offline_sections_reverse in patch 1
- Describe the impact in the git message of patch 2
- Correct the comment to reference fast_find_block instead of removing
  fast_find_block in patch 4
- Collect Reviewed-by tags from David and Baolin.
[1] https://lore.kernel.org/all/[email protected]/
Kemeng Shi (8):
mm/compaction: avoid missing last page block in section after skip
offline sections
mm/compaction: correct last_migrated_pfn update in compact_zone
mm/compaction: skip page block marked skip in
isolate_migratepages_block
mm/compaction: correct comment of fast_find_migrateblock in
isolate_migratepages
mm/compaction: correct comment of cached migrate pfn update
mm/compaction: correct comment to complete migration failure
mm/compaction: remove unnecessary return for void function
mm/compaction: only set skip flag if cc->no_set_skip_hint is false
mm/compaction.c | 28 ++++++++++++++++------------
1 file changed, 16 insertions(+), 12 deletions(-)
--
2.30.0
skip_offline_sections_reverse() returns the last pfn of the found online
section. We then set block_start_pfn to the start of the page block that
contains that last pfn. When we continue, the loop moves on by one page
block and so skips the last page block of the online section entirely.
Make block_start_pfn point to the first page block after the online
section to fix this:
1. Make skip_offline_sections_reverse() return the end pfn of the online
section, i.e. the pfn of the page block right after the online section.
2. Assign block_start_pfn from next_pfn directly.
Fixes: f63224525309 ("mm: compaction: skip the memory hole rapidly when isolating free pages")
Signed-off-by: Kemeng Shi <[email protected]>
---
mm/compaction.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index cd23da4d2a5b..a8cea916df9d 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -250,6 +250,11 @@ static unsigned long skip_offline_sections(unsigned long start_pfn)
return 0;
}
+/*
+ * If the PFN falls into an offline section, return the end PFN of the
+ * next online section in reverse. If the PFN falls into an online section
+ * or if there is no next online section in reverse, return 0.
+ */
static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
{
unsigned long start_nr = pfn_to_section_nr(start_pfn);
@@ -259,7 +264,7 @@ static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
while (start_nr-- > 0) {
if (online_section_nr(start_nr))
- return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;
+ return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION;
}
return 0;
@@ -1668,8 +1673,7 @@ static void isolate_freepages(struct compact_control *cc)
next_pfn = skip_offline_sections_reverse(block_start_pfn);
if (next_pfn)
- block_start_pfn = max(pageblock_start_pfn(next_pfn),
- low_pfn);
+ block_start_pfn = max(next_pfn, low_pfn);
continue;
}
--
2.30.0
Move migrate_pfn to the page block end when the block is marked skip, to
avoid an unnecessary scan retry of that block by the upper caller.
For example, compact_zone() may wrongly rescan a skipped page block with
finish_pageblock set as follows:
1. cc->migrate_pfn points to the start of the page block.
2. compact_zone() records last_migrated_pfn from cc->migrate_pfn.
3. compact_zone()->isolate_migratepages()->isolate_migratepages_block()
tries to scan the block. low_pfn may be moved forward to the middle of
the block because of free pages at the beginning of the block.
4. We find the first LRU page that could be isolated, but the block was
exclusively marked skip.
5. isolate_migratepages_block() aborts and leaves cc->migrate_pfn
pointing at the LRU page found in the middle of the block.
6. compact_zone() finds cc->migrate_pfn and last_migrated_pfn in the
same block and wrongly rescans the block with finish_pageblock set.
Signed-off-by: Kemeng Shi <[email protected]>
Reviewed-by: Baolin Wang <[email protected]>
---
mm/compaction.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/compaction.c b/mm/compaction.c
index ec3a96b7afce..984c17a5c8fd 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1126,6 +1126,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
skip_updated = true;
if (test_and_set_skip(cc, valid_page) &&
!cc->finish_pageblock) {
+ low_pfn = end_pfn;
goto isolate_abort;
}
}
--
2.30.0
On 02.08.23 11:37, Kemeng Shi wrote:
> skip_offline_sections_reverse will return the last pfn in found online
> section. Then we set block_start_pfn to start of page block which
> contains the last pfn in section. Then we continue, move one page
> block forward and ignore the last page block in the online section.
> Make block_start_pfn point to first page block after online section to fix
> this:
> 1. make skip_offline_sections_reverse return end pfn of online section,
> i.e. pfn of page block after online section.
> 2. assign block_start_pfn with next_pfn.
>
> Fixes: f63224525309 ("mm: compaction: skip the memory hole rapidly when isolating free pages")
> Signed-off-by: Kemeng Shi <[email protected]>
> ---
> mm/compaction.c | 10 +++++++---
> 1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index cd23da4d2a5b..a8cea916df9d 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -250,6 +250,11 @@ static unsigned long skip_offline_sections(unsigned long start_pfn)
> return 0;
> }
>
> +/*
> + * If the PFN falls into an offline section, return the end PFN of the
> + * next online section in reverse. If the PFN falls into an online section
> + * or if there is no next online section in reverse, return 0.
> + */
> static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
> {
> unsigned long start_nr = pfn_to_section_nr(start_pfn);
> @@ -259,7 +264,7 @@ static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
>
> while (start_nr-- > 0) {
> if (online_section_nr(start_nr))
> - return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;
> + return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION;
> }
>
> return 0;
> @@ -1668,8 +1673,7 @@ static void isolate_freepages(struct compact_control *cc)
>
> next_pfn = skip_offline_sections_reverse(block_start_pfn);
> if (next_pfn)
> - block_start_pfn = max(pageblock_start_pfn(next_pfn),
> - low_pfn);
> + block_start_pfn = max(next_pfn, low_pfn);
>
> continue;
> }
Acked-by: David Hildenbrand <[email protected]>
--
Cheers,
David / dhildenb
On 8/2/2023 5:37 PM, Kemeng Shi wrote:
> skip_offline_sections_reverse will return the last pfn in found online
> section. Then we set block_start_pfn to start of page block which
> contains the last pfn in section. Then we continue, move one page
> block forward and ignore the last page block in the online section.
> Make block_start_pfn point to first page block after online section to fix
> this:
> 1. make skip_offline_sections_reverse return end pfn of online section,
> i.e. pfn of page block after online section.
> 2. assign block_start_pfn with next_pfn.
>
> Fixes: f63224525309 ("mm: compaction: skip the memory hole rapidly when isolating free pages")
The changes look good to me.
But the commit id is not stable, since the series is not merged into the
mm-stable branch yet. I am not sure how to handle this patch; should it
be squashed into the original patch? Andrew, what do you prefer?
> Signed-off-by: Kemeng Shi <[email protected]>
> ---
> mm/compaction.c | 10 +++++++---
> 1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index cd23da4d2a5b..a8cea916df9d 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -250,6 +250,11 @@ static unsigned long skip_offline_sections(unsigned long start_pfn)
> return 0;
> }
>
> +/*
> + * If the PFN falls into an offline section, return the end PFN of the
> + * next online section in reverse. If the PFN falls into an online section
> + * or if there is no next online section in reverse, return 0.
> + */
> static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
> {
> unsigned long start_nr = pfn_to_section_nr(start_pfn);
> @@ -259,7 +264,7 @@ static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
>
> while (start_nr-- > 0) {
> if (online_section_nr(start_nr))
> - return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;
> + return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION;
> }
>
> return 0;
> @@ -1668,8 +1673,7 @@ static void isolate_freepages(struct compact_control *cc)
>
> next_pfn = skip_offline_sections_reverse(block_start_pfn);
> if (next_pfn)
> - block_start_pfn = max(pageblock_start_pfn(next_pfn),
> - low_pfn);
> + block_start_pfn = max(next_pfn, low_pfn);
>
> continue;
> }