2023-11-22 10:30:11

by Ryan Roberts

Subject: Re: [PATCH v1 3/4] mm/compaction: optimize >0 order folio compaction with free page split.

On 13/11/2023 17:01, Zi Yan wrote:
> From: Zi Yan <[email protected]>
>
> During migration in memory compaction, free pages are placed in an array
> of page lists based on their order. But the desired free page order (i.e.,
> the order of a source page) might not always be present, thus leading to
> migration failures. Split a high order free page when the source migration
> page has a lower order to increase the migration success rate.
>
> Note: merging free pages when a migration fails and a lower order free
> page is returned via compaction_free() is possible, but it is too much
> work. Since the free pages are not buddy pages, it is hard to identify
> these free pages using the existing PFN-based page merging algorithm.
>
> Signed-off-by: Zi Yan <[email protected]>
> ---
> mm/compaction.c | 40 +++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 39 insertions(+), 1 deletion(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index ec6b5cc7e907..9c083e6b399a 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1806,9 +1806,46 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
>  	struct compact_control *cc = (struct compact_control *)data;
>  	struct folio *dst;
>  	int order = folio_order(src);
> +	bool has_isolated_pages = false;
>
> +again:
>  	if (!cc->freepages[order].nr_pages) {
> -		isolate_freepages(cc);
> +		int i;
> +
> +		for (i = order + 1; i <= MAX_ORDER; i++) {
> +			if (cc->freepages[i].nr_pages) {
> +				struct page *freepage =
> +					list_first_entry(&cc->freepages[i].pages,
> +							struct page, lru);
> +
> +				int start_order = i;
> +				unsigned long size = 1 << start_order;
> +
> +				list_del(&freepage->lru);
> +				cc->freepages[i].nr_pages--;
> +
> +				while (start_order > order) {
> +					start_order--;
> +					size >>= 1;
> +
> +					list_add(&freepage[size].lru,
> +						&cc->freepages[start_order].pages);
> +					cc->freepages[start_order].nr_pages++;
> +					set_page_private(&freepage[size], start_order);
> +				}
> +				post_alloc_hook(freepage, order, __GFP_MOVABLE);
> +				if (order)
> +					prep_compound_page(freepage, order);
> +				dst = page_folio(freepage);
> +				goto done;

Perhaps just do:

dst = (struct folio *)freepage;
goto done;

then move done: up a couple of statements below, so that post_alloc_hook() and
prep_compound_page() are always done below in the common path? Although perhaps the
cast is frowned upon, you're already making the assumption that page and folio
are interchangeable the way you call list_first_entry().
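
Something like this is the rough shape I had in mind (untested sketch, only
rearranging the code from your hunk; the "..." comments mark the parts I left out):

static struct folio *compaction_alloc(struct folio *src, unsigned long data)
{
	struct compact_control *cc = (struct compact_control *)data;
	struct folio *dst;
	int order = folio_order(src);
	bool has_isolated_pages = false;

again:
	if (!cc->freepages[order].nr_pages) {
		int i;

		for (i = order + 1; i <= MAX_ORDER; i++) {
			if (cc->freepages[i].nr_pages) {
				struct page *freepage =
					list_first_entry(&cc->freepages[i].pages,
							struct page, lru);

				/* ... split down to 'order' exactly as in your patch ... */

				/* no post_alloc_hook()/prep_compound_page() here */
				dst = page_folio(freepage);
				goto done;
			}
		}
		if (!has_isolated_pages) {
			isolate_freepages(cc);
			has_isolated_pages = true;
			goto again;
		}

		if (!cc->freepages[order].nr_pages)
			return NULL;
	}

	/* ... take dst off cc->freepages[order] as the existing code does ... */
done:
	post_alloc_hook(&dst->page, order, __GFP_MOVABLE);
	if (order)
		prep_compound_page(&dst->page, order);
	cc->nr_freepages -= 1 << order;
	return page_rmappable_folio(&dst->page);
}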

> +			}
> +		}
> +		if (!has_isolated_pages) {
> +			isolate_freepages(cc);
> +			has_isolated_pages = true;
> +			goto again;
> +		}
> +
>  		if (!cc->freepages[order].nr_pages)
>  			return NULL;
>  	}
> @@ -1819,6 +1856,7 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
>  	post_alloc_hook(&dst->page, order, __GFP_MOVABLE);
>  	if (order)
>  		prep_compound_page(&dst->page, order);
> +done:
>  	cc->nr_freepages -= 1 << order;
>  	return page_rmappable_folio(&dst->page);
>  }


2023-11-22 14:36:16

by Zi Yan

Subject: Re: [PATCH v1 3/4] mm/compaction: optimize >0 order folio compaction with free page split.

On 22 Nov 2023, at 5:26, Ryan Roberts wrote:

> On 13/11/2023 17:01, Zi Yan wrote:
>> From: Zi Yan <[email protected]>
>>
>> During migration in memory compaction, free pages are placed in an array
>> of page lists based on their order. But the desired free page order (i.e.,
>> the order of a source page) might not always be present, thus leading to
>> migration failures. Split a high order free page when the source migration
>> page has a lower order to increase the migration success rate.
>>
>> Note: merging free pages when a migration fails and a lower order free
>> page is returned via compaction_free() is possible, but it is too much
>> work. Since the free pages are not buddy pages, it is hard to identify
>> these free pages using the existing PFN-based page merging algorithm.
>>
>> Signed-off-by: Zi Yan <[email protected]>
>> ---
>> mm/compaction.c | 40 +++++++++++++++++++++++++++++++++++++++-
>> 1 file changed, 39 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index ec6b5cc7e907..9c083e6b399a 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -1806,9 +1806,46 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
>>  	struct compact_control *cc = (struct compact_control *)data;
>>  	struct folio *dst;
>>  	int order = folio_order(src);
>> +	bool has_isolated_pages = false;
>>
>> +again:
>>  	if (!cc->freepages[order].nr_pages) {
>> -		isolate_freepages(cc);
>> +		int i;
>> +
>> +		for (i = order + 1; i <= MAX_ORDER; i++) {
>> +			if (cc->freepages[i].nr_pages) {
>> +				struct page *freepage =
>> +					list_first_entry(&cc->freepages[i].pages,
>> +							struct page, lru);
>> +
>> +				int start_order = i;
>> +				unsigned long size = 1 << start_order;
>> +
>> +				list_del(&freepage->lru);
>> +				cc->freepages[i].nr_pages--;
>> +
>> +				while (start_order > order) {
>> +					start_order--;
>> +					size >>= 1;
>> +
>> +					list_add(&freepage[size].lru,
>> +						&cc->freepages[start_order].pages);
>> +					cc->freepages[start_order].nr_pages++;
>> +					set_page_private(&freepage[size], start_order);
>> +				}
>> +				post_alloc_hook(freepage, order, __GFP_MOVABLE);
>> +				if (order)
>> +					prep_compound_page(freepage, order);
>> +				dst = page_folio(freepage);
>> +				goto done;
>
> Perhaps just do:
>
> dst = (struct folio *)freepage;
> goto done;
>
> then move done: up a couple of statements below, so that post_alloc_hook() and
> prep_compound_page() are always done below in the common path? Although perhaps the

Sure. Thanks for the suggestion.

> cast is frowned upon, you're already making the assumption that page and folio
> are interchangeable the way you call list_first_entry().

To save the _compound_head() call in page_folio()? OK.
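
Roughly the difference between the two, assuming freepage always points at a
head page at that point (illustrative only):

	dst = page_folio(freepage);		/* goes through _compound_head() */
	/* vs. */
	dst = (struct folio *)freepage;		/* plain cast, skips that lookup */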

>
>> +			}
>> +		}
>> +		if (!has_isolated_pages) {
>> +			isolate_freepages(cc);
>> +			has_isolated_pages = true;
>> +			goto again;
>> +		}
>> +
>>  		if (!cc->freepages[order].nr_pages)
>>  			return NULL;
>>  	}
>> @@ -1819,6 +1856,7 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
>>  	post_alloc_hook(&dst->page, order, __GFP_MOVABLE);
>>  	if (order)
>>  		prep_compound_page(&dst->page, order);
>> +done:
>>  	cc->nr_freepages -= 1 << order;
>>  	return page_rmappable_folio(&dst->page);
>>  }


--
Best Regards,
Yan, Zi

