2023-11-23 13:37:13

by Gang Li

Subject: [RFC PATCH v1 0/4] hugetlb: parallelize hugetlb page allocation on boot

From: Gang Li <[email protected]>

Inspired by these patches [1][2], this series aims to speed up the
initialization of hugetlb during the boot process through
parallelization.

It is particularly effective in large systems. On a machine equipped
with 1TB of memory and two NUMA nodes, the time for hugetlb
initialization was reduced from 2 seconds to 1 second.

As memory capacities continue to grow, the time saved will only
increase.

This series currently focuses on optimizing 2MB hugetlb. Since
gigantic pages are few in number, the benefit of optimizing them
is less pronounced. We may explore optimizations for gigantic
pages in the future.

Thanks,
Gang Li

Gang Li (4):
hugetlb: code clean for hugetlb_hstate_alloc_pages
hugetlb: split hugetlb_hstate_alloc_pages
hugetlb: add timing to hugetlb allocations on boot
hugetlb: parallelize hugetlb page allocation

mm/hugetlb.c | 191 ++++++++++++++++++++++++++++++++++++---------------
1 file changed, 134 insertions(+), 57 deletions(-)

--
2.20.1


2023-11-23 13:37:12

by Gang Li

Subject: [RFC PATCH v1 1/4] hugetlb: code clean for hugetlb_hstate_alloc_pages

From: Gang Li <[email protected]>

This patch focuses on cleaning up the code related to per-node allocation
and error reporting in hugetlb allocation:

- hugetlb_hstate_alloc_pages_node_specific() iterates through each
online node and performs node-specific allocation if necessary.
- hugetlb_hstate_alloc_pages_report() reports allocation shortfalls
and updates h->max_huge_pages to the number of pages actually allocated.

This patch has no functional changes.

Signed-off-by: Gang Li <[email protected]>
---
mm/hugetlb.c | 46 +++++++++++++++++++++++++++++-----------------
1 file changed, 29 insertions(+), 17 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c466551e2fd9..7af2ee08ad1b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3482,6 +3482,33 @@ static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid)
h->max_huge_pages_node[nid] = i;
}

+static bool __init hugetlb_hstate_alloc_pages_node_specific(struct hstate *h)
+{
+ int i;
+ bool node_specific_alloc = false;
+
+ for_each_online_node(i) {
+ if (h->max_huge_pages_node[i] > 0) {
+ hugetlb_hstate_alloc_pages_onenode(h, i);
+ node_specific_alloc = true;
+ }
+ }
+
+ return node_specific_alloc;
+}
+
+static void __init hugetlb_hstate_alloc_pages_report(unsigned long allocated, struct hstate *h)
+{
+ if (allocated < h->max_huge_pages) {
+ char buf[32];
+
+ string_get_size(huge_page_size(h), 1, STRING_UNITS_2, buf, 32);
+ pr_warn("HugeTLB: allocating %lu of page size %s failed. Only allocated %lu hugepages.\n",
+ h->max_huge_pages, buf, allocated);
+ h->max_huge_pages = allocated;
+ }
+}
+
/*
* NOTE: this routine is called in different contexts for gigantic and
* non-gigantic pages.
@@ -3499,7 +3526,6 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
struct folio *folio;
LIST_HEAD(folio_list);
nodemask_t *node_alloc_noretry;
- bool node_specific_alloc = false;

/* skip gigantic hugepages allocation if hugetlb_cma enabled */
if (hstate_is_gigantic(h) && hugetlb_cma_size) {
@@ -3508,14 +3534,7 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
}

/* do node specific alloc */
- for_each_online_node(i) {
- if (h->max_huge_pages_node[i] > 0) {
- hugetlb_hstate_alloc_pages_onenode(h, i);
- node_specific_alloc = true;
- }
- }
-
- if (node_specific_alloc)
+ if (hugetlb_hstate_alloc_pages_node_specific(h))
return;

/* below will do all node balanced alloc */
@@ -3558,14 +3577,7 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
/* list will be empty if hstate_is_gigantic */
prep_and_add_allocated_folios(h, &folio_list);

- if (i < h->max_huge_pages) {
- char buf[32];
-
- string_get_size(huge_page_size(h), 1, STRING_UNITS_2, buf, 32);
- pr_warn("HugeTLB: allocating %lu of page size %s failed. Only allocated %lu hugepages.\n",
- h->max_huge_pages, buf, i);
- h->max_huge_pages = i;
- }
+ hugetlb_hstate_alloc_pages_report(i, h);
kfree(node_alloc_noretry);
}

--
2.20.1

2023-11-23 13:37:20

by Gang Li

Subject: [RFC PATCH v1 3/4] hugetlb: add timing to hugetlb allocations on boot

From: Gang Li <[email protected]>

Add timing to hugetlb allocations on boot so that the benefit of
subsequent optimizations can be measured.

Signed-off-by: Gang Li <[email protected]>
---
mm/hugetlb.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7f9ff0855dd0..ac8558724cc2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3563,7 +3563,7 @@ static unsigned long __init hugetlb_hstate_alloc_pages_non_gigantic(struct hstat
*/
static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
{
- unsigned long allocated;
+ unsigned long allocated, start;

/* skip gigantic hugepages allocation if hugetlb_cma enabled */
if (hstate_is_gigantic(h) && hugetlb_cma_size) {
@@ -3576,11 +3576,13 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
return;

/* below will do all node balanced alloc */
+ start = jiffies;
if (!hstate_is_gigantic(h)) {
allocated = hugetlb_hstate_alloc_pages_non_gigantic(h);
} else {
allocated = hugetlb_hstate_alloc_pages_gigantic(h);
}
+ pr_info("HugeTLB: Allocation takes %u ms\n", jiffies_to_msecs(jiffies - start));

hugetlb_hstate_alloc_pages_report(allocated, h);
}
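
A note on the measurement: jiffies ticks at HZ, so the timing above has
roughly 1-10 ms resolution, which is plenty for the multi-second
allocations discussed in this thread. Purely as a sketch (not part of
the patch), a ktime-based variant would give finer resolution on the
non-gigantic path, which runs late enough in boot for ktime_get() to
be valid:

  ktime_t start = ktime_get();

  allocated = hugetlb_hstate_alloc_pages_non_gigantic(h);

  pr_info("HugeTLB: allocation took %lld ms\n",
          ktime_ms_delta(ktime_get(), start));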
--
2.20.1

2023-11-23 14:11:06

by David Hildenbrand

Subject: Re: [RFC PATCH v1 0/4] hugetlb: parallelize hugetlb page allocation on boot

On 23.11.23 14:30, Gang Li wrote:
> From: Gang Li <[email protected]>
>
> Inspired by these patches [1][2], this series aims to speed up the
> initialization of hugetlb during the boot process through
> parallelization.
>
> It is particularly effective in large systems. On a machine equipped
> with 1TB of memory and two NUMA nodes, the time for hugetlb
> initialization was reduced from 2 seconds to 1 second.

Sorry to say, but why is that a scenario worth adding complexity for /
optimizing for? You don't cover that, so there is a clear lack in the
motivation.

2 vs. 1 second on a 1 TiB system is usually really just noise.

--
Cheers,

David / dhildenb

2023-11-24 19:44:58

by David Rientjes

Subject: Re: [RFC PATCH v1 0/4] hugetlb: parallelize hugetlb page allocation on boot

On Thu, 23 Nov 2023, David Hildenbrand wrote:

> On 23.11.23 14:30, Gang Li wrote:
> > From: Gang Li <[email protected]>
> >
> > Inspired by these patches [1][2], this series aims to speed up the
> > initialization of hugetlb during the boot process through
> > parallelization.
> >
> > It is particularly effective in large systems. On a machine equipped
> > with 1TB of memory and two NUMA nodes, the time for hugetlb
> > initialization was reduced from 2 seconds to 1 second.
>
> Sorry to say, but why is that a scenario worth adding complexity for /
> optimizing for? You don't cover that, so there is a clear lack in the
> motivation.
>
> 2 vs. 1 second on a 1 TiB system is usually really just noise.
>

The cost will continue to grow over time, so I presume that Gang is trying
to get out in front of the issue even though it may not be a large savings
today.

Running single boot tests, with the latest upstream kernel, allocating
1,440 1GB hugetlb pages on a 1.5TB AMD host appears to take 1.47s.

But allocating 11,776 1GB hugetlb pages on a 12TB Intel host takes 65.2s
today with the current implementation.

So it's likely something worth optimizing.

Gang, I'm curious about this in the cover letter:

"""
This series currently focuses on optimizing 2MB hugetlb. Since
gigantic pages are few in number, the benefit of optimizing them
is less pronounced. We may explore optimizations for gigantic
pages in the future.
"""

For >1TB hosts, why the emphasis on 2MB hugetlb? :) I would have expected
1GB pages. Are you really allocating ~500k 2MB hugetlb pages?

So if the patchset optimizes for the more likely scenario on these large
hosts, which would be 1GB pages, that would be great.

2023-11-24 19:48:01

by David Hildenbrand

Subject: Re: [RFC PATCH v1 0/4] hugetlb: parallelize hugetlb page allocation on boot

On 24.11.23 20:44, David Rientjes wrote:
> On Thu, 23 Nov 2023, David Hildenbrand wrote:
>
>> On 23.11.23 14:30, Gang Li wrote:
>>> From: Gang Li <[email protected]>
>>>
>>> Inspired by these patches [1][2], this series aims to speed up the
>>> initialization of hugetlb during the boot process through
>>> parallelization.
>>>
>>> It is particularly effective in large systems. On a machine equipped
>>> with 1TB of memory and two NUMA nodes, the time for hugetlb
>>> initialization was reduced from 2 seconds to 1 second.
>>
>> Sorry to say, but why is that a scenario worth adding complexity for /
>> optimizing for? You don't cover that, so there is a clear lack in the
>> motivation.
>>
>> 2 vs. 1 second on a 1 TiB system is usually really just noise.
>>
>
> The cost will continue to grow over time, so I presume that Gang is trying
> to get out in front of the issue even though it may not be a large savings
> today.
>
> Running single boot tests, with the latest upstream kernel, allocating
> 1,440 1GB hugetlb pages on a 1.5TB AMD host appears to take 1.47s.
>
> But allocating 11,776 1GB hugetlb pages on a 12TB Intel host takes 65.2s
> today with the current implementation.

And there, the 65.2s won't be noise because that 12TB system is up by a
snap of a finger? :)

--
Cheers,

David / dhildenb

2023-11-24 20:01:52

by David Rientjes

Subject: Re: [RFC PATCH v1 0/4] hugetlb: parallelize hugetlb page allocation on boot

On Fri, 24 Nov 2023, David Hildenbrand wrote:

> On 24.11.23 20:44, David Rientjes wrote:
> > On Thu, 23 Nov 2023, David Hildenbrand wrote:
> >
> > > On 23.11.23 14:30, Gang Li wrote:
> > > > From: Gang Li <[email protected]>
> > > >
> > > > Inspired by these patches [1][2], this series aims to speed up the
> > > > initialization of hugetlb during the boot process through
> > > > parallelization.
> > > >
> > > > It is particularly effective in large systems. On a machine equipped
> > > > with 1TB of memory and two NUMA nodes, the time for hugetlb
> > > > initialization was reduced from 2 seconds to 1 second.
> > >
> > > Sorry to say, but why is that a scenario worth adding complexity for /
> > > optimizing for? You don't cover that, so there is a clear lack in the
> > > motivation.
> > >
> > > 2 vs. 1 second on a 1 TiB system is usually really just noise.
> > >
> >
> > The cost will continue to grow over time, so I presume that Gang is trying
> > to get out in front of the issue even though it may not be a large savings
> > today.
> >
> > Running single boot tests, with the latest upstream kernel, allocating
> > 1,440 1GB hugetlb pages on a 1.5TB AMD host appears to take 1.47s.
> >
> > But allocating 11,776 1GB hugetlb pages on a 12TB Intel host takes 65.2s
> > today with the current implementation.
>
> And there, the 65.2s won't be noise because that 12TB system is up by a snap
> of a finger? :)
>

In this single boot test, total boot time was 373.78s, so 1GB hugetlb
allocation is 17.4% of that.

Would love to see what the numbers would look like if 1GB pages were
supported.

2023-11-28 03:19:26

by Gang Li

Subject: Re: [RFC PATCH v1 0/4] hugetlb: parallelize hugetlb page allocation on boot


On 2023/11/25 04:00, David Rientjes wrote:
> On Fri, 24 Nov 2023, David Hildenbrand wrote:
>
>> And there, the 65.2s won't be noise because that 12TB system is up by a snap
>> of a finger? :)
>>
>
> In this single boot test, total boot time was 373.78s, so 1GB hugetlb
> allocation is 17.4% of that.

Thank you for sharing these data. Currently, I don't have access to a
machine of such large capacity, so the benefits in my tests are not as
pronounced.

I believe testing on a system of this scale would yield significant
benefits.

>
> Would love to see what the numbers would look like if 1GB pages were
> supported.
>

Support for 1GB hugetlb is not yet perfect, so it wasn't included in v1.
But I'm happy to refine and introduce 1GB hugetlb support in future
versions.

2023-11-28 06:53:41

by Gang Li

Subject: Re: [RFC PATCH v1 0/4] hugetlb: parallelize hugetlb page allocation on boot

Hi David Hildenbrand :),

On 2023/11/23 22:10, David Hildenbrand wrote:
> Sorry to say, but why is that a scenario worth adding complexity for /
> optimizing for? You don't cover that, so there is a clear lack in the
> motivation.

Regarding your concern about complexity, this is indeed something to
consider. There is a precedent for parallelization in deferred struct
page (pgdat) initialization[1], which might be reused (or other
methods) to reduce the complexity of this series.

[1]
https://lore.kernel.org/all/[email protected]/
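
For illustration, a minimal sketch of what reusing that precedent could
look like: padata_do_multithreaded() (the helper used by deferred
struct page init) takes a padata_mt_job describing a range of work and
fans it out across worker threads. The hugetlb_alloc_job_fn() callback
and the chunking below are hypothetical stand-ins, not what this series
implements:

  #include <linux/padata.h>

  /* Hypothetical worker: allocate huge pages for the [start, end)
   * slice of the overall request. */
  static void __init hugetlb_alloc_job_fn(unsigned long start,
                                          unsigned long end, void *arg)
  {
          struct hstate *h = arg;
          unsigned long i;

          for (i = start; i < end; i++) {
                  /* allocate and queue one huge page for hstate h */
          }
  }

  static void __init hugetlb_alloc_parallel(struct hstate *h)
  {
          struct padata_mt_job job = {
                  .thread_fn   = hugetlb_alloc_job_fn,
                  .fn_arg      = h,
                  .start       = 0,
                  .size        = h->max_huge_pages,
                  .align       = 1,
                  .min_chunk   = max(h->max_huge_pages / num_online_nodes(), 1UL),
                  .max_threads = num_online_nodes(),
          };

          padata_do_multithreaded(&job);
  }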

2023-11-28 08:10:09

by David Hildenbrand

Subject: Re: [RFC PATCH v1 0/4] hugetlb: parallelize hugetlb page allocation on boot

On 28.11.23 07:52, Gang Li wrote:
> Hi David Hildenbrand :),
>
> On 2023/11/23 22:10, David Hildenbrand wrote:
>> Sorry to say, but why is that a scenario worth adding complexity for /
>> optimizing for? You don't cover that, so there is a clear lack in the
>> motivation.
>
> Regarding your concern about complexity, this is indeed something to
> consider. There is a precedent for parallelization in deferred struct
> page (pgdat) initialization[1], which might be reused (or other
> methods) to reduce the complexity of this series.

Yes, please!

--
Cheers,

David / dhildenb

2023-11-29 19:42:04

by David Rientjes

Subject: Re: [RFC PATCH v1 0/4] hugetlb: parallelize hugetlb page allocation on boot

On Tue, 28 Nov 2023, Gang Li wrote:

> >
> > > And there, the 65.2s won't be noise because that 12TB system is up by a
> > > snap of a finger? :)
> > >
> >
> > In this single boot test, total boot time was 373.78s, so 1GB hugetlb
> > allocation is 17.4% of that.
>
> Thank you for sharing these data. Currently, I don't have access to a machine
> of such large capacity, so the benefits in my tests are not as pronounced.
>
> I believe testing on a system of this scale would yield significant benefits.
>
> >
> > Would love to see what the numbers would look like if 1GB pages were
> > supported.
> >
>
> Support for 1GB hugetlb is not yet perfect, so it wasn't included in v1. But
> I'm happy to refine and introduce 1GB hugetlb support in future versions.
>

That would be very appreciated, thank you! I'm happy to test and collect
data for any proposed patch series on 12TB systems booted with a lot of
1GB hugetlb pages on the kernel command line.