2022-05-11 06:41:47

by Zhou Guanghui

Subject: [PATCH] memblock: config the number of init memblock regions

During early boot, the number of memblocks may exceed 128 (some memory
areas are not reported to the kernel due to memory test failures; as a
result, contiguous memory is split into multiple parts for reporting). If
the static array of init memblock regions overflows before it can be
resized, the excess memory will be lost.
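
The loss happens because memblock_double_array() cannot grow the static
arrays until memblock_allow_resize() is called later in boot, so extra
ranges are silently dropped. A minimal, self-contained sketch of the
failure mode (illustrative only; sketch_add() is not the kernel's actual
insertion logic):

#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t phys_addr_t;

struct memblock_region {
	phys_addr_t base;
	phys_addr_t size;
};

#define INIT_MEMBLOCK_REGIONS 128

static struct memblock_region regions[INIT_MEMBLOCK_REGIONS];
static size_t cnt;
static bool can_resize;	/* set only after memblock_allow_resize() */

static int sketch_add(phys_addr_t base, phys_addr_t size)
{
	if (cnt == INIT_MEMBLOCK_REGIONS && !can_resize)
		return -ENOMEM;	/* range dropped: this memory is "lost" */
	/* the real code doubles the array here once resizing is
	 * allowed, and inserts sorted/merged; this sketch appends */
	regions[cnt].base = base;
	regions[cnt].size = size;
	cnt++;
	return 0;
}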

Signed-off-by: Zhou Guanghui <[email protected]>
---
mm/Kconfig | 8 ++++++++
mm/memblock.c | 2 +-
2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/mm/Kconfig b/mm/Kconfig
index 034d87953600..c6881802cccc 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -89,6 +89,14 @@ config SPARSEMEM_VMEMMAP
pfn_to_page and page_to_pfn operations. This is the most
efficient option when sufficient kernel resources are available.

+config MEMBLOCK_INIT_REGIONS
+ int "Number of init memblock regions"
+ range 128 1024
+ default 128
+ help
+ The number of init memblock regions used to track "memory" and
+ "reserved" memblocks during early boot.
+
config HAVE_MEMBLOCK_PHYS_MAP
bool

diff --git a/mm/memblock.c b/mm/memblock.c
index e4f03a6e8e56..6893d26b750e 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -22,7 +22,7 @@

#include "internal.h"

-#define INIT_MEMBLOCK_REGIONS 128
+#define INIT_MEMBLOCK_REGIONS CONFIG_MEMBLOCK_INIT_REGIONS
#define INIT_PHYSMEM_REGIONS 4

#ifndef INIT_MEMBLOCK_RESERVED_REGIONS
--
2.17.1



2022-05-12 06:24:10

by Zhou Guanghui

Subject: Re: [PATCH] memblock: config the number of init memblock regions

On 2022/5/11 14:03, Mike Rapoport wrote:
> On Tue, May 10, 2022 at 06:55:23PM -0700, Andrew Morton wrote:
>> On Wed, 11 May 2022 01:05:30 +0000 Zhou Guanghui <[email protected]> wrote:
>>
>>> During early boot, the number of memblocks may exceed 128 (some memory
>>> areas are not reported to the kernel due to memory test failures; as a
>>> result, contiguous memory is split into multiple parts for reporting). If
>>> the static array of init memblock regions overflows before it can be
>>> resized, the excess memory will be lost.
>
> I'd like to see more details about how firmware creates that sparse memory
> map in the changelog.
>

The scenario is as follows: in a system using HBM, a multi-bit ECC error
occurs and the BIOS records the affected area (for example, 2 MB). On the
next boot, these areas are isolated and either not reported to the kernel
at all or reported as EFI_UNUSABLE_MEMORY. Both cases increase the number
of memblocks, and EFI_UNUSABLE_MEMORY produces the larger count.

For example, if the EFI_UNUSABLE_MEMORY type is reported:
...
memory[0x92] [0x0000200834a00000-0x0000200835bfffff], 0x0000000001200000 bytes on node 7 flags: 0x0
memory[0x93] [0x0000200835c00000-0x0000200835dfffff], 0x0000000000200000 bytes on node 7 flags: 0x4
memory[0x94] [0x0000200835e00000-0x00002008367fffff], 0x0000000000a00000 bytes on node 7 flags: 0x0
memory[0x95] [0x0000200836800000-0x00002008369fffff], 0x0000000000200000 bytes on node 7 flags: 0x4
memory[0x96] [0x0000200836a00000-0x0000200837bfffff], 0x0000000001200000 bytes on node 7 flags: 0x0
memory[0x97] [0x0000200837c00000-0x0000200837dfffff], 0x0000000000200000 bytes on node 7 flags: 0x4
memory[0x98] [0x0000200837e00000-0x000020087fffffff], 0x0000000048200000 bytes on node 7 flags: 0x0
memory[0x99] [0x0000200880000000-0x0000200bcfffffff], 0x0000000350000000 bytes on node 6 flags: 0x0
memory[0x9a] [0x0000200bd0000000-0x0000200bd01fffff], 0x0000000000200000 bytes on node 6 flags: 0x4
memory[0x9b] [0x0000200bd0200000-0x0000200bd07fffff], 0x0000000000600000 bytes on node 6 flags: 0x0
memory[0x9c] [0x0000200bd0800000-0x0000200bd09fffff], 0x0000000000200000 bytes on node 6 flags: 0x4
memory[0x9d] [0x0000200bd0a00000-0x0000200fcfffffff], 0x00000003ff600000 bytes on node 6 flags: 0x0
memory[0x9e] [0x0000200fd0000000-0x0000200fd01fffff], 0x0000000000200000 bytes on node 6 flags: 0x4
memory[0x9f] [0x0000200fd0200000-0x0000200fffffffff], 0x000000002fe00000 bytes on node 6 flags: 0x0
...
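
Note that "flags: 0x4" in this dump is MEMBLOCK_NOMAP. Each isolated
carveout therefore adds a region of its own and also splits the
surrounding usable range, which is why reporting EFI_UNUSABLE_MEMORY
produces more memblocks than not reporting the areas at all. A rough
sketch of the counts, assuming the faulty areas sit inside a bank and
are not adjacent to each other (illustrative helpers, not kernel code):

static unsigned int regions_if_not_reported(unsigned int nr_faults)
{
	return nr_faults + 1;	/* only the usable fragments remain */
}

static unsigned int regions_if_unusable_memory(unsigned int nr_faults)
{
	/* usable fragments plus one NOMAP region per faulty area */
	return 2 * nr_faults + 1;
}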

>>>
>>> ...
>>>
>>> --- a/mm/Kconfig
>>> +++ b/mm/Kconfig
>>> @@ -89,6 +89,14 @@ config SPARSEMEM_VMEMMAP
>>> pfn_to_page and page_to_pfn operations. This is the most
>>> efficient option when sufficient kernel resources are available.
>>>
>>> +config MEMBLOCK_INIT_REGIONS
>>> + int "Number of init memblock regions"
>>> + range 128 1024
>>> + default 128
>>> + help
>>> + The number of init memblock regions used to track "memory" and
>>> + "reserved" memblocks during early boot.
>>> +
>>> config HAVE_MEMBLOCK_PHYS_MAP
>>> bool
>>>
>>> diff --git a/mm/memblock.c b/mm/memblock.c
>>> index e4f03a6e8e56..6893d26b750e 100644
>>> --- a/mm/memblock.c
>>> +++ b/mm/memblock.c
>>> @@ -22,7 +22,7 @@
>>>
>>> #include "internal.h"
>>>
>>> -#define INIT_MEMBLOCK_REGIONS 128
>>> +#define INIT_MEMBLOCK_REGIONS CONFIG_MEMBLOCK_INIT_REGIONS
>>
>> Consistent naming would be nice - MEMBLOCK_INIT versus INIT_MEMBLOCK.

I agree.

>>
>> Can we simply increase INIT_MEMBLOCK_REGIONS to 1024 and avoid the
>> config option? It appears that the overhead from this would be 60kB or
>> so.
>
> 60k is not big, but using 1024 entries array for 2-4 memory banks on
> systems that don't report that fragmented memory map is really a waste.
>
> We can make this per platform opt-in, like INIT_MEMBLOCK_RESERVED_REGIONS ...
>

Given what I described above, isn't this a general scenario?

>> Or zero if CONFIG_ARCH_KEEP_MEMBLOCK and CONFIG_MEMORY_HOTPLUG
>> are cooperating.
>
> ... or add code that will discard unused parts of memblock arrays even if
> CONFIG_ARCH_KEEP_MEMBLOCK=y.
>

In scenarios where memory usage is sensitive, should
CONFIG_ARCH_KEEP_MEMBLOCK be set to n, or should the number of regions be
made configurable via a config option?
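
For reference, my understanding of the existing discard path (used when
CONFIG_ARCH_KEEP_MEMBLOCK=n), as a simplified kernel-style sketch; the
real memblock_discard() in mm/memblock.c also handles arrays that were
reallocated from the slab:

/* Illustrative sketch, not the exact mm/memblock.c implementation. */
static void __init sketch_discard_array(struct memblock_type *type,
					struct memblock_region *static_init)
{
	/* still the static __initdata array: nothing was allocated */
	if (type->regions == static_init)
		return;

	/* the enlarged array came from early memblock allocations;
	 * return those pages to the buddy allocator */
	memblock_free_late(__pa(type->regions),
			   PAGE_ALIGN(sizeof(*type->regions) * type->max));
}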

Andrew, Mike, thank you.

2022-05-14 03:33:07

by Mike Rapoport

Subject: Re: [PATCH] memblock: config the number of init memblock regions

On Thu, May 12, 2022 at 02:46:25AM +0000, Zhouguanghui (OS Kernel) wrote:
> On 2022/5/11 14:03, Mike Rapoport wrote:
> > On Tue, May 10, 2022 at 06:55:23PM -0700, Andrew Morton wrote:
> >> On Wed, 11 May 2022 01:05:30 +0000 Zhou Guanghui <[email protected]> wrote:
> >>
> >>> During early boot, the number of memblocks may exceed 128 (some memory
> >>> areas are not reported to the kernel due to memory test failures; as a
> >>> result, contiguous memory is split into multiple parts for reporting). If
> >>> the static array of init memblock regions overflows before it can be
> >>> resized, the excess memory will be lost.
> >
> > I'd like to see more details about how firmware creates that sparse memory
> > map in the changelog.
> >
>
> The scenario is as follows: in a system using HBM, a multi-bit ECC error
> occurs and the BIOS records the affected area (for example, 2 MB). On the
> next boot, these areas are isolated and either not reported to the kernel
> at all or reported as EFI_UNUSABLE_MEMORY. Both cases increase the number
> of memblocks, and EFI_UNUSABLE_MEMORY produces the larger count.
>
> For example, if the EFI_UNUSABLE_MEMORY type is reported:
> ...
> memory[0x92] [0x0000200834a00000-0x0000200835bfffff], 0x0000000001200000 bytes on node 7 flags: 0x0
> memory[0x93] [0x0000200835c00000-0x0000200835dfffff], 0x0000000000200000 bytes on node 7 flags: 0x4
> memory[0x94] [0x0000200835e00000-0x00002008367fffff], 0x0000000000a00000 bytes on node 7 flags: 0x0
> memory[0x95] [0x0000200836800000-0x00002008369fffff], 0x0000000000200000 bytes on node 7 flags: 0x4
> memory[0x96] [0x0000200836a00000-0x0000200837bfffff], 0x0000000001200000 bytes on node 7 flags: 0x0
> memory[0x97] [0x0000200837c00000-0x0000200837dfffff], 0x0000000000200000 bytes on node 7 flags: 0x4
> memory[0x98] [0x0000200837e00000-0x000020087fffffff], 0x0000000048200000 bytes on node 7 flags: 0x0
> memory[0x99] [0x0000200880000000-0x0000200bcfffffff], 0x0000000350000000 bytes on node 6 flags: 0x0
> memory[0x9a] [0x0000200bd0000000-0x0000200bd01fffff], 0x0000000000200000 bytes on node 6 flags: 0x4
> memory[0x9b] [0x0000200bd0200000-0x0000200bd07fffff], 0x0000000000600000 bytes on node 6 flags: 0x0
> memory[0x9c] [0x0000200bd0800000-0x0000200bd09fffff], 0x0000000000200000 bytes on node 6 flags: 0x4
> memory[0x9d] [0x0000200bd0a00000-0x0000200fcfffffff], 0x00000003ff600000 bytes on node 6 flags: 0x0
> memory[0x9e] [0x0000200fd0000000-0x0000200fd01fffff], 0x0000000000200000 bytes on node 6 flags: 0x4
> memory[0x9f] [0x0000200fd0200000-0x0000200fffffffff], 0x000000002fe00000 bytes on node 6 flags: 0x0
> ...

Thanks for the clarification of the usecase.

> >>>
> >>> ...
> >>>
> >>
> >> Can we simply increase INIT_MEMBLOCK_REGIONS to 1024 and avoid the
> >> config option? It appears that the overhead from this would be 60kB or
> >> so.
> >
> > 60k is not big, but using 1024 entries array for 2-4 memory banks on
> > systems that don't report that fragmented memory map is really a waste.
> >
> > We can make this per platform opt-in, like INIT_MEMBLOCK_RESERVED_REGIONS ...
> >
>
> Given what I described above, isn't this a general scenario?

The EFI memory map on arm64 is generally bad because of all those NOMAP
carveouts spread all over the place, even in cases without memory faults. So
it would make sense to increase the number of memblock regions on arm64
when CONFIG_EFI=y.
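
A sketch of what that opt-in could look like, modeled on the existing
INIT_MEMBLOCK_RESERVED_REGIONS override (the INIT_MEMBLOCK_MEMORY_REGIONS
name and the 1024 value are illustrative only, not from a merged patch):

/* mm/memblock.c: allow a platform to enlarge only the "memory" array */
#ifndef INIT_MEMBLOCK_MEMORY_REGIONS
#define INIT_MEMBLOCK_MEMORY_REGIONS	INIT_MEMBLOCK_REGIONS
#endif

static struct memblock_region
	memblock_memory_init_regions[INIT_MEMBLOCK_MEMORY_REGIONS] __initdata_memblock;

/* in an arm64 header included before the fallback above, hypothetically: */
#ifdef CONFIG_EFI
#define INIT_MEMBLOCK_MEMORY_REGIONS	1024
#endif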

> >> Or zero if CONFIG_ARCH_KEEP_MEMBLOCK and CONFIG_MEMORY_HOTPLUG
> >> are cooperating.
> >
> > ... or add code that will discard unused parts of memblock arrays even if
> > CONFIG_ARCH_KEEP_MEMBLOCK=y.
> >
>
> In scenarios where memory usage is sensitive, should
> CONFIG_ARCH_KEEP_MEMBLOCK be set to n, or should the number of regions be
> made configurable via a config option?

We are talking about 20 or so architectures that are doing well with 128
memblock regions. I don't see why they need to be altered to accommodate
this use-case.

> Andrew, Mike, thank you.

--
Sincerely yours,
Mike.

2022-05-25 20:03:32

by Darren Hart

Subject: Re: [PATCH] memblock: config the number of init memblock regions

On Thu, May 12, 2022 at 09:28:42AM +0300, Mike Rapoport wrote:
> On Thu, May 12, 2022 at 02:46:25AM +0000, Zhouguanghui (OS Kernel) wrote:
> > On 2022/5/11 14:03, Mike Rapoport wrote:
> > > On Tue, May 10, 2022 at 06:55:23PM -0700, Andrew Morton wrote:
> > >> On Wed, 11 May 2022 01:05:30 +0000 Zhou Guanghui <[email protected]> wrote:
> > >>
> > >>> During early boot, the number of memblocks may exceed 128 (some memory
> > >>> areas are not reported to the kernel due to memory test failures; as a
> > >>> result, contiguous memory is split into multiple parts for reporting). If
> > >>> the static array of init memblock regions overflows before it can be
> > >>> resized, the excess memory will be lost.
> > >
> > > I'd like to see more details about how firmware creates that sparse memory
> > > map in the changelog.
> > >
> >
> > The scenario is as follows: in a system using HBM, a multi-bit ECC error
> > occurs and the BIOS records the affected area (for example, 2 MB). On the
> > next boot, these areas are isolated and either not reported to the kernel
> > at all or reported as EFI_UNUSABLE_MEMORY. Both cases increase the number
> > of memblocks, and EFI_UNUSABLE_MEMORY produces the larger count.
> >
> > For example, if the EFI_UNUSABLE_MEMORY type is reported:
> > ...
> > memory[0x92] [0x0000200834a00000-0x0000200835bfffff], 0x0000000001200000 bytes on node 7 flags: 0x0
> > memory[0x93] [0x0000200835c00000-0x0000200835dfffff], 0x0000000000200000 bytes on node 7 flags: 0x4
> > memory[0x94] [0x0000200835e00000-0x00002008367fffff], 0x0000000000a00000 bytes on node 7 flags: 0x0
> > memory[0x95] [0x0000200836800000-0x00002008369fffff], 0x0000000000200000 bytes on node 7 flags: 0x4
> > memory[0x96] [0x0000200836a00000-0x0000200837bfffff], 0x0000000001200000 bytes on node 7 flags: 0x0
> > memory[0x97] [0x0000200837c00000-0x0000200837dfffff], 0x0000000000200000 bytes on node 7 flags: 0x4
> > memory[0x98] [0x0000200837e00000-0x000020087fffffff], 0x0000000048200000 bytes on node 7 flags: 0x0
> > memory[0x99] [0x0000200880000000-0x0000200bcfffffff], 0x0000000350000000 bytes on node 6 flags: 0x0
> > memory[0x9a] [0x0000200bd0000000-0x0000200bd01fffff], 0x0000000000200000 bytes on node 6 flags: 0x4
> > memory[0x9b] [0x0000200bd0200000-0x0000200bd07fffff], 0x0000000000600000 bytes on node 6 flags: 0x0
> > memory[0x9c] [0x0000200bd0800000-0x0000200bd09fffff], 0x0000000000200000 bytes on node 6 flags: 0x4
> > memory[0x9d] [0x0000200bd0a00000-0x0000200fcfffffff], 0x00000003ff600000 bytes on node 6 flags: 0x0
> > memory[0x9e] [0x0000200fd0000000-0x0000200fd01fffff], 0x0000000000200000 bytes on node 6 flags: 0x4
> > memory[0x9f] [0x0000200fd0200000-0x0000200fffffffff], 0x000000002fe00000 bytes on node 6 flags: 0x0
> > ...
>
> Thanks for the clarification of the usecase.
>
> > >>>
> > >>> ...
> > >>>
> > >>
> > >> Can we simply increase INIT_MEMBLOCK_REGIONS to 1024 and avoid the
> > >> config option? It appears that the overhead from this would be 60kB or
> > >> so.
> > >
> > > 60k is not big, but using 1024 entries array for 2-4 memory banks on
> > > systems that don't report that fragmented memory map is really a waste.
> > >
> > > We can make this per platform opt-in, like INIT_MEMBLOCK_RESERVED_REGIONS ...
> > >
> >
> > Given what I described above, isn't this a general scenario?
>
> > The EFI memory map on arm64 is generally bad because of all those NOMAP
> > carveouts spread all over the place, even in cases without memory faults. So
> it would make sense to increase the number of memblock regions on arm64
> when CONFIG_EFI=y.

This seems like a reasonable approach. I would prefer not to have to increase
the default for EFI arm64 systems (and all that entails with coordinating with
distros, etc.). The point below about other arches not needing this is a good
one too. This seems like a reasonable way to make the defaults work everywhere
while only paying the price on the systems that require it.

>
> > >> Or zero if CONFIG_ARCH_KEEP_MEMBLOCK and CONFIG_MEMORY_HOTPLUG
> > >> are cooperating.
> > >
> > > ... or add code that will discard unused parts of memblock arrays even if
> > > CONFIG_ARCH_KEEP_MEMBLOCK=y.
> > >
> >
> > In scenarios where memory usage is sensitive, should
> > CONFIG_ARCH_KEEP_MEMBLOCK be set to n, or should the number of regions be
> > made configurable via a config option?
>
> We are talking about 20 or so architectures that are doing well with 128
> memblock regions. I don't see why they need to be altered to accommodate
> this use-case.
>
> > Andrew, Mike, thank you.
>
> --
> Sincerely yours,
> Mike.
>

--
Darren Hart
Ampere Computing / OS and Kernel

2022-05-27 09:07:48

by Mike Rapoport

Subject: Re: [PATCH] memblock: config the number of init memblock regions

On Wed, May 25, 2022 at 09:44:56AM -0700, Darren Hart wrote:
> On Thu, May 12, 2022 at 09:28:42AM +0300, Mike Rapoport wrote:
> > On Thu, May 12, 2022 at 02:46:25AM +0000, Zhouguanghui (OS Kernel) wrote:
> > > On 2022/5/11 14:03, Mike Rapoport wrote:
> > > > On Tue, May 10, 2022 at 06:55:23PM -0700, Andrew Morton wrote:
> > > >>
> > > >> Can we simply increase INIT_MEMBLOCK_REGIONS to 1024 and avoid the
> > > >> config option? It appears that the overhead from this would be 60kB or
> > > >> so.
> > > >
> > > > 60k is not big, but using 1024 entries array for 2-4 memory banks on
> > > > systems that don't report that fragmented memory map is really a waste.
> > > >
> > > > We can make this per platform opt-in, like INIT_MEMBLOCK_RESERVED_REGIONS ...
> > >
> > > Given what I described above, isn't this a general scenario?
> >
> > The EFI memory map on arm64 is generally bad because of all those NOMAP
> > carveouts spread all over the place, even in cases without memory faults. So
> > it would make sense to increase the number of memblock regions on arm64
> > when CONFIG_EFI=y.
>
> This seems like a reasonable approach. I would prefer not to have to increase
> the default for EFI arm64 systems (and all that entails with coordinating with
> distros, etc.). The point below about other arches not needing this is a good
> one too. This seems like a reasonable way to make the defaults work everywhere
> while only paying the price on the systems that require it.

There's already v2 of this patch:

https://lore.kernel.org/all/[email protected]

> --
> Darren Hart
> Ampere Computing / OS and Kernel

--
Sincerely yours,
Mike.