2020-02-27 21:33:09

by Rik van Riel

Subject: [PATCH v2 2/2] mm,thp,compaction,cma: allow THP migration for CMA allocations

The code to implement THP migrations already exists, and the code
for CMA to clear out a region of memory already exists.

Only a few small tweaks are needed to allow CMA to move THP memory
when attempting an allocation from alloc_contig_range.

With these changes, migrating THPs from a CMA area works when
allocating a 1GB hugepage from CMA memory.

Signed-off-by: Rik van Riel <[email protected]>
Reviewed-by: Zi Yan <[email protected]>
---
mm/compaction.c | 22 +++++++++++++---------
mm/page_alloc.c | 9 +++++++--
2 files changed, 20 insertions(+), 11 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 672d3c78c6ab..000ade085b89 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -894,12 +894,13 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,

/*
* Regardless of being on LRU, compound pages such as THP and
- * hugetlbfs are not to be compacted. We can potentially save
- * a lot of iterations if we skip them at once. The check is
- * racy, but we can consider only valid values and the only
- * danger is skipping too much.
+ * hugetlbfs are not to be compacted unless we are attempting
+ * an allocation much larger than the huge page size (eg CMA).
+ * We can potentially save a lot of iterations if we skip them
+ * at once. The check is racy, but we can consider only valid
+ * values and the only danger is skipping too much.
*/
- if (PageCompound(page)) {
+ if (PageCompound(page) && !cc->alloc_contig) {
const unsigned int order = compound_order(page);

if (likely(order < MAX_ORDER))
@@ -969,7 +970,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
* and it's on LRU. It can only be a THP so the order
* is safe to read and it's 0 for tail pages.
*/
- if (unlikely(PageCompound(page))) {
+ if (unlikely(PageCompound(page) && !cc->alloc_contig)) {
low_pfn += compound_nr(page) - 1;
goto isolate_fail;
}
@@ -981,12 +982,15 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
if (__isolate_lru_page(page, isolate_mode) != 0)
goto isolate_fail;

- VM_BUG_ON_PAGE(PageCompound(page), page);
+ /* The whole page is taken off the LRU; skip the tail pages. */
+ if (PageCompound(page))
+ low_pfn += compound_nr(page) - 1;

/* Successfully isolated */
del_page_from_lru_list(page, lruvec, page_lru(page));
- inc_node_page_state(page,
- NR_ISOLATED_ANON + page_is_file_cache(page));
+ mod_node_page_state(page_pgdat(page),
+ NR_ISOLATED_ANON + page_is_file_cache(page),
+ hpage_nr_pages(page));

isolate_success:
list_add(&page->lru, &cc->migratepages);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a36736812596..6257c849cc00 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8253,14 +8253,19 @@ struct page *has_unmovable_pages(struct zone *zone, struct page *page,

/*
* Hugepages are not in LRU lists, but they're movable.
+ * THPs are on the LRU, but need to be counted as #small pages.
* We need not scan over tail pages because we don't
* handle each tail page individually in migration.
*/
- if (PageHuge(page)) {
+ if (PageHuge(page) || PageTransCompound(page)) {
struct page *head = compound_head(page);
unsigned int skip_pages;

- if (!hugepage_migration_supported(page_hstate(head)))
+ if (PageHuge(page) &&
+ !hugepage_migration_supported(page_hstate(head)))
+ return page;
+
+ if (!PageLRU(head) && !__PageMovable(head))
return page;

skip_pages = compound_nr(head) - (page - head);
--
2.24.1
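
For readers skimming the diff, the behavioral change in isolate_migratepages_block() boils down to the gate sketched below. This is a simplified illustration, not the mm/compaction.c source; the helper name skip_compound_page_here() is invented for the example, while PageCompound() and the cc->alloc_contig flag (added by patch 1/2 of this series) are the real pieces the patch relies on.

/*
 * Simplified sketch, not kernel source.  Ordinary compaction skips
 * compound pages (THP, hugetlbfs) wholesale, since migrating a page
 * that is already huge does not help regular compaction.  An
 * alloc_contig_range() caller such as CMA sets cc->alloc_contig and
 * wants THPs inside the target range isolated and migrated out
 * instead of being skipped.
 */
static bool skip_compound_page_here(struct compact_control *cc,
				    struct page *page)
{
	if (!PageCompound(page))
		return false;	/* base page: never skipped by this check */

	if (cc->alloc_contig)
		return false;	/* contiguous allocation: try to migrate it */

	return true;		/* regular compaction: skip the whole compound page */
}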


2020-02-27 23:43:37

by Mike Kravetz

Subject: Re: [PATCH v2 2/2] mm,thp,compaction,cma: allow THP migration for CMA allocations

On 2/27/20 1:32 PM, Rik van Riel wrote:
> The code to implement THP migrations already exists, and the code
> for CMA to clear out a region of memory already exists.
>
> Only a few small tweaks are needed to allow CMA to move THP memory
> when attempting an allocation from alloc_contig_range.
>
> With these changes, migrating THPs from a CMA area works when
> allocating a 1GB hugepage from CMA memory.
>
> Signed-off-by: Rik van Riel <[email protected]>
> Reviewed-by: Zi Yan <[email protected]>
> ---
> mm/compaction.c | 22 +++++++++++++---------
> mm/page_alloc.c | 9 +++++++--
> 2 files changed, 20 insertions(+), 11 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 672d3c78c6ab..000ade085b89 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -894,12 +894,13 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>
> /*
> * Regardless of being on LRU, compound pages such as THP and
> - * hugetlbfs are not to be compacted. We can potentially save
> - * a lot of iterations if we skip them at once. The check is
> - * racy, but we can consider only valid values and the only
> - * danger is skipping too much.
> + * hugetlbfs are not to be compacted unless we are attempting
> + * an allocation much larger than the huge page size (eg CMA).
> + * We can potentially save a lot of iterations if we skip them
> + * at once. The check is racy, but we can consider only valid
> + * values and the only danger is skipping too much.
> */
> - if (PageCompound(page)) {
> + if (PageCompound(page) && !cc->alloc_contig) {
> const unsigned int order = compound_order(page);
>
> if (likely(order < MAX_ORDER))
> @@ -969,7 +970,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> * and it's on LRU. It can only be a THP so the order
> * is safe to read and it's 0 for tail pages.
> */
> - if (unlikely(PageCompound(page))) {
> + if (unlikely(PageCompound(page) && !cc->alloc_contig)) {
> low_pfn += compound_nr(page) - 1;
> goto isolate_fail;
> }
> @@ -981,12 +982,15 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> if (__isolate_lru_page(page, isolate_mode) != 0)
> goto isolate_fail;
>
> - VM_BUG_ON_PAGE(PageCompound(page), page);
> + /* The whole page is taken off the LRU; skip the tail pages. */
> + if (PageCompound(page))
> + low_pfn += compound_nr(page) - 1;
>
> /* Successfully isolated */
> del_page_from_lru_list(page, lruvec, page_lru(page));
> - inc_node_page_state(page,
> - NR_ISOLATED_ANON + page_is_file_cache(page));
> + mod_node_page_state(page_pgdat(page),
> + NR_ISOLATED_ANON + page_is_file_cache(page),
> + hpage_nr_pages(page));
>
> isolate_success:
> list_add(&page->lru, &cc->migratepages);
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a36736812596..6257c849cc00 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8253,14 +8253,19 @@ struct page *has_unmovable_pages(struct zone *zone, struct page *page,
>
> /*
> * Hugepages are not in LRU lists, but they're movable.
> + * THPs are on the LRU, but need to be counted as #small pages.
> * We need not scan over tail pages because we don't
> * handle each tail page individually in migration.
> */
> - if (PageHuge(page)) {
> + if (PageHuge(page) || PageTransCompound(page)) {
> struct page *head = compound_head(page);
> unsigned int skip_pages;
>
> - if (!hugepage_migration_supported(page_hstate(head)))
> + if (PageHuge(page) &&
> + !hugepage_migration_supported(page_hstate(head)))
> + return page;
> +
> + if (!PageLRU(head) && !__PageMovable(head))

Pretty sure this is going to be true for hugetlb pages. So, this will change
behavior and make all hugetlb pages look unmovable. Perhaps, only check this
condition for THP pages?
--
Mike Kravetz

> return page;
>
> skip_pages = compound_nr(head) - (page - head);
>
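
The problem Mike points out follows from how hugetlb pages are represented: a hugetlb head page is neither on the LRU nor marked as a non-LRU movable page, so the new test fires for it regardless of what hugepage_migration_supported() would say. A rough illustration (not kernel source; the helper name is invented, the page-flag tests are the ones from the patch):

/*
 * Illustration only, not kernel source.  With the v2 check, a compound
 * page is reported unmovable when its head is neither on the LRU nor a
 * non-LRU movable page:
 *
 *   THP on the LRU            -> PageLRU(head) is true       -> movable
 *   driver page with a_ops    -> __PageMovable(head) is true -> movable
 *   hugetlb head page         -> neither flag is set         -> "unmovable",
 *      even when hugepage_migration_supported() says it can be migrated
 */
static bool v2_check_reports_unmovable(struct page *head)
{
	return !PageLRU(head) && !__PageMovable(head);
}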

2020-02-28 01:23:01

by Rik van Riel

Subject: Re: [PATCH v2 2/2] mm,thp,compaction,cma: allow THP migration for CMA allocations

On Thu, 2020-02-27 at 15:41 -0800, Mike Kravetz wrote:
> On 2/27/20 1:32 PM, Rik van Riel wrote:
> >
> > +++ b/mm/page_alloc.c
> > @@ -8253,14 +8253,19 @@ struct page *has_unmovable_pages(struct zone *zone, struct page *page,
> >
> > /*
> > * Hugepages are not in LRU lists, but they're movable.
> > + * THPs are on the LRU, but need to be counted as #small pages.
> > * We need not scan over tail pages because we don't
> > * handle each tail page individually in migration.
> > */
> > - if (PageHuge(page)) {
> > + if (PageHuge(page) || PageTransCompound(page)) {
> > struct page *head = compound_head(page);
> > unsigned int skip_pages;
> >
> > - if (!hugepage_migration_supported(page_hstate(head)))
> > + if (PageHuge(page) &&
> > + !hugepage_migration_supported(page_hstate(head)))
> > + return page;
> > +
> > + if (!PageLRU(head) && !__PageMovable(head))
>
> Pretty sure this is going to be true for hugetlb pages. So, this will change
> behavior and make all hugetlb pages look unmovable. Perhaps, only check this
> condition for THP pages?

Does that need to be the following, then?

if (PageTransHuge(head) && !PageHuge(page) && !PageLRU(head) &&
!__PageMovable(head))
return page;

That's an easy one liner I would be happy to send in
if everybody agrees that should fix things :)

--
All Rights Reversed.
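
One detail worth spelling out about the proposed condition: PageTransHuge() is essentially a head-page test, so it is also true for hugetlb head pages; the extra !PageHuge(page) clause is what keeps hugetlb on the hugepage_migration_supported() path above. Roughly (paraphrased sketch of the CONFIG_TRANSPARENT_HUGEPAGE definition, not quoted verbatim from include/linux/page-flags.h):

/*
 * Paraphrased sketch: PageTransHuge() only checks for a head page, so
 * THP heads and hugetlb heads both return true here.
 */
static inline int PageTransHuge(struct page *page)
{
	VM_BUG_ON_PAGE(PageTail(page), page);
	return PageHead(page);
}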



2020-02-28 04:31:09

by Mike Kravetz

Subject: Re: [PATCH v2 2/2] mm,thp,compaction,cma: allow THP migration for CMA allocations

On 2/27/20 5:21 PM, Rik van Riel wrote:
> On Thu, 2020-02-27 at 15:41 -0800, Mike Kravetz wrote:
>> On 2/27/20 1:32 PM, Rik van Riel wrote:
>>>
>>> +++ b/mm/page_alloc.c
>>> @@ -8253,14 +8253,19 @@ struct page *has_unmovable_pages(struct zone *zone, struct page *page,
>>>
>>> /*
>>> * Hugepages are not in LRU lists, but they're movable.
>>> + * THPs are on the LRU, but need to be counted as #small pages.
>>> * We need not scan over tail pages because we don't
>>> * handle each tail page individually in migration.
>>> */
>>> - if (PageHuge(page)) {
>>> + if (PageHuge(page) || PageTransCompound(page)) {
>>> struct page *head = compound_head(page);
>>> unsigned int skip_pages;
>>>
>>> - if (!hugepage_migration_supported(page_hstate(head)))
>>> + if (PageHuge(page) &&
>>> + !hugepage_migration_supported(page_hstate(head)))
>>> + return page;
>>> +
>>> + if (!PageLRU(head) && !__PageMovable(head))
>>
>> Pretty sure this is going to be true for hugetlb pages. So, this will change
>> behavior and make all hugetlb pages look unmovable. Perhaps, only check this
>> condition for THP pages?
>
> Does that need to be the following, then?
>
> if (PageTransHuge(head) && !PageHuge(page) && !PageLRU(head) &&
> !__PageMovable(head))
> return page;

Yes, that is what I would suggest. Results of a simple test on a small VM.

Unmodified Kernel
-----------------
# cat /sys/devices/system/memory/memory*/removable | grep 1 | wc -l
50
# hugeadm --hard --pool-pages-min DEFAULT:4G
# cat /sys/devices/system/memory/memory*/removable | grep 1 | wc -l
50

V2 patches
----------
# cat /sys/devices/system/memory/memory*/removable | grep 1 | wc -l
50
# hugeadm --hard --pool-pages-min DEFAULT:4G
# cat /sys/devices/system/memory/memory*/removable | grep 1 | wc -l
14

V2 patches + above modification
-------------------------------
# cat /sys/devices/system/memory/memory*/removable | grep 1 | wc -l
50
# hugeadm --hard --pool-pages-min DEFAULT:4G
# cat /sys/devices/system/memory/memory*/removable | grep 1 | wc -l
50

> That's an easy one liner I would be happy to send in
> if everybody agrees that should fix things :)

Another option might be to make hugetlb pages more like other pages and
__SetPageMovable on movable hugetlb pages. This could be used instead
of hugepage_migration_supported() calls. That is only a quick thought
and would be beyond the scope of this patch. If people think that is
worth doing, and I have not overlooked something, I could look into it
separately.
--
Mike Kravetz
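
For reference, the non-LRU movable-page interface Mike alludes to is __SetPageMovable()/__PageMovable(). A very rough, hypothetical sketch of the idea, not proposed code: the helper name, and the notion of passing in a mapping with suitable address_space_operations, are assumptions made for illustration only.

/*
 * Hypothetical sketch only -- not proposed code.  The idea would be to
 * mark hugetlb pages whose hstate supports migration as non-LRU movable
 * when they are set up, so generic code could test __PageMovable()
 * instead of calling hugepage_migration_supported() directly.  A real
 * implementation would need address_space_operations providing
 * isolate_page/migratepage/putback_page for these pages, plus careful
 * handling of hugetlb's existing use of page->mapping.
 */
static void hypothetical_mark_hugetlb_movable(struct hstate *h,
					      struct page *page,
					      struct address_space *mapping)
{
	if (hugepage_migration_supported(h))
		__SetPageMovable(page, mapping);
}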

2020-02-28 08:26:52

by Vlastimil Babka

Subject: Re: [PATCH v2 2/2] mm,thp,compaction,cma: allow THP migration for CMA allocations

On 2/28/20 2:21 AM, Rik van Riel wrote:
> On Thu, 2020-02-27 at 15:41 -0800, Mike Kravetz wrote:
>> On 2/27/20 1:32 PM, Rik van Riel wrote:
>>>
>>> +++ b/mm/page_alloc.c
>>> @@ -8253,14 +8253,19 @@ struct page *has_unmovable_pages(struct zone *zone, struct page *page,
>>>
>>> /*
>>> * Hugepages are not in LRU lists, but they're movable.
>>> + * THPs are on the LRU, but need to be counted as #small pages.
>>> * We need not scan over tail pages because we don't
>>> * handle each tail page individually in migration.
>>> */
>>> - if (PageHuge(page)) {
>>> + if (PageHuge(page) || PageTransCompound(page)) {
>>> struct page *head = compound_head(page);
>>> unsigned int skip_pages;
>>>
>>> - if (!hugepage_migration_supported(page_hstate(head)))
>>> + if (PageHuge(page) &&
>>> + !hugepage_migration_supported(page_hstate(head)))
>>> + return page;
>>> +
>>> + if (!PageLRU(head) && !__PageMovable(head))
>>
>> Pretty sure this is going to be true for hugetlb pages. So, this will change
>> behavior and make all hugetlb pages look unmovable. Perhaps, only check this
>> condition for THP pages?

Oh right you are.

> Does that need to be the following, then?
>
> if (PageTransHuge(head) && !PageHuge(page) && !PageLRU(head) &&
> !__PageMovable(head))
> return page;

I would instead make it an "else if" to the "if (PageHuge(page)...)" above.

> That's an easy one liner I would be happy to send in
> if everybody agrees that should fix things :)
>

2020-02-28 14:34:03

by Rik van Riel

Subject: Re: [PATCH v2 2/2] mm,thp,compaction,cma: allow THP migration for CMA allocations

On Fri, 2020-02-28 at 09:25 +0100, Vlastimil Babka wrote:
> On 2/28/20 2:21 AM, Rik van Riel wrote:
> > On Thu, 2020-02-27 at 15:41 -0800, Mike Kravetz wrote:
> > > On 2/27/20 1:32 PM, Rik van Riel wrote:
> > > > +++ b/mm/page_alloc.c
> > > > @@ -8253,14 +8253,19 @@ struct page *has_unmovable_pages(struct zone *zone, struct page *page,
> > > >
> > > > /*
> > > > * Hugepages are not in LRU lists, but they're movable.
> > > > + * THPs are on the LRU, but need to be counted as #small pages.
> > > > * We need not scan over tail pages because we don't
> > > > * handle each tail page individually in migration.
> > > > */
> > > > - if (PageHuge(page)) {
> > > > + if (PageHuge(page) || PageTransCompound(page)) {
> > > > struct page *head = compound_head(page);
> > > > unsigned int skip_pages;
> > > >
> > > > - if (!hugepage_migration_supported(page_hstate(head)))
> > > > + if (PageHuge(page) &&
> > > > + !hugepage_migration_supported(page_hstate(head)))
> > > > + return page;
> > > > +
> > > > + if (!PageLRU(head) && !__PageMovable(head))
> > >
> > > Pretty sure this is going to be true for hugetlb pages. So, this will change
> > > behavior and make all hugetlb pages look unmovable. Perhaps, only check this
> > > condition for THP pages?
>
> Oh right you are.
>
> > Does that need to be the following, then?
> >
> > if (PageTransHuge(head) && !PageHuge(page) && !PageLRU(head) &&
> > !__PageMovable(head))
> > return page;
>
> I would instead make it an "else if" to the "if (PageHuge(page)...)"
> above.

That was my first thought too, but that could break on
pages that are PageHuge when hugepage_migration_supported
returns true.

--
All Rights Reversed.



2020-02-28 14:40:07

by Vlastimil Babka

Subject: Re: [PATCH v2 2/2] mm,thp,compaction,cma: allow THP migration for CMA allocations

On 2/28/20 3:32 PM, Rik van Riel wrote:

>>> Does that need to be the following, then?
>>>
>>> if (PageTransHuge(head) && !PageHuge(page) && !PageLRU(head) &&
>>> !__PageMovable(head))
>>> return page;
>>
>> I would instead make it an "else if" to the "if (PageHuge(page)...)"
>> above.
>
> That was my first thought too, but that could break on
> pages that are PageHuge when hugepage_migration_supported
> returns true.

Right, so then

if (PageHuge()) {
	if (!migration_supported) return false;
} else if (!PageLRU(head) ...) {
	etc...

IMHO it's better than adding more tests to the second if.

2020-02-28 15:47:36

by Rik van Riel

Subject: [PATCH] fix mmthpcompactioncma-allow-thp-migration-for-cma-allocations.patch

Thank you Mike & Vlastimil!

---8<---

commit 27f3cd5473d8bbf591b61d8b93b98bc333980d0d
Author: Rik van Riel <[email protected]>
Date: Fri Feb 28 10:41:48 2020 -0500

Subject: fix mmthpcompactioncma-allow-thp-migration-for-cma-allocations.patch

Mike Kravetz pointed out that the second if condition could do the
wrong thing for hugetlbfs pages, and that check really only needs
to run on THPs.

Cleanup suggested by Vlastimil.

Thank you both!

Suggested-by: Mike Kravetz <[email protected]>
Suggested-by: Vlastimil Babka <[email protected]>
Signed-off-by: Rik van Riel <[email protected]>

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4afa13dd3738..71f78a590236 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8274,12 +8274,12 @@ struct page *has_unmovable_pages(struct zone *zone, struct page *page,
struct page *head = compound_head(page);
unsigned int skip_pages;

- if (PageHuge(page) &&
- !hugepage_migration_supported(page_hstate(head)))
- return page;
-
- if (!PageLRU(head) && !__PageMovable(head))
+ if (PageHuge(page)) {
+ if (!hugepage_migration_supported(page_hstate(head)))
+ return page;
+ } else if (!PageLRU(head) && !__PageMovable(head)) {
return page;
+ }

skip_pages = compound_nr(head) - (page - head);
iter += skip_pages - 1;

2020-03-02 09:33:46

by Vlastimil Babka

Subject: Re: [PATCH v2 2/2] mm,thp,compaction,cma: allow THP migration for CMA allocations

On 2/27/20 10:32 PM, Rik van Riel wrote:
> The code to implement THP migrations already exists, and the code
> for CMA to clear out a region of memory already exists.
>
> Only a few small tweaks are needed to allow CMA to move THP memory
> when attempting an allocation from alloc_contig_range.
>
> With these changes, migrating THPs from a CMA area works when
> allocating a 1GB hugepage from CMA memory.
>
> Signed-off-by: Rik van Riel <[email protected]>
> Reviewed-by: Zi Yan <[email protected]>

With the followup fix,

Reviewed-by: Vlastimil Babka <[email protected]>

Thanks.