2023-01-06 23:00:47

by Aaron Thompson

Subject: [PATCH v3 1/1] mm: Always release pages to the buddy allocator in memblock_free_late().

If CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, memblock_free_pages()
only releases pages to the buddy allocator if they are not in the
deferred range. This is correct for free pages (as defined by
for_each_free_mem_pfn_range_in_zone()) because free pages in the
deferred range will be initialized and released as part of the deferred
init process. memblock_free_pages() is called by memblock_free_late(),
which is used to free reserved ranges after memblock_free_all() has
run. All pages in reserved ranges have been initialized at that point,
and accordingly, those pages are not touched by the deferred init
process. This means that currently, if the pages that
memblock_free_late() intends to release are in the deferred range, they
will never be released to the buddy allocator. They will forever be
reserved.

In addition, memblock_free_pages() calls kmsan_memblock_free_pages(),
which is also correct for free pages but is not correct for reserved
pages. KMSAN metadata for reserved pages is initialized by
kmsan_init_shadow(), which runs shortly before memblock_free_all().
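
To make the gate concrete, the current logic is roughly the following
(a paraphrased sketch, not the exact source; pfn_in_deferred_range() is
a placeholder name for the real deferred-init check):

    void __init memblock_free_pages(struct page *page, unsigned long pfn,
                                    unsigned int order)
    {
            /*
             * Deferred init will later initialize and release *free* pages
             * in this range, but reserved pages are never revisited, so
             * bailing out here leaves them reserved forever.
             */
            if (pfn_in_deferred_range(pfn))         /* placeholder name */
                    return;

            /*
             * Fine for free pages, but reserved pages already had their
             * KMSAN metadata set up by kmsan_init_shadow().
             */
            if (!kmsan_memblock_free_pages(page, order))
                    return;

            __free_pages_core(page, order);
    }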

For both of these reasons, memblock_free_pages() should only be called
for free pages, and memblock_free_late() should call __free_pages_core()
directly instead.

One case where this issue can occur in the wild is EFI boot on
x86_64. The x86 EFI code reserves all EFI boot services memory ranges
via memblock_reserve() and frees them later via memblock_free_late()
(efi_reserve_boot_services() and efi_free_boot_services(),
respectively). If any of those ranges happens to fall within the
deferred init range, the pages will not be released and that memory will
be unavailable.
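
A simplified sketch of that flow (the real functions live in
arch/x86/platform/efi/quirks.c and carry additional overlap and
"can this region be freed" checks that are omitted here):

    /* Early in boot: keep boot services regions out of the allocator. */
    void __init efi_reserve_boot_services(void)
    {
            efi_memory_desc_t *md;

            for_each_efi_memory_desc(md) {
                    if (md->type != EFI_BOOT_SERVICES_CODE &&
                        md->type != EFI_BOOT_SERVICES_DATA)
                            continue;
                    memblock_reserve(md->phys_addr,
                                     md->num_pages << EFI_PAGE_SHIFT);
            }
    }

    /* Later, once the firmware no longer needs the regions. */
    void __init efi_free_boot_services(void)
    {
            efi_memory_desc_t *md;

            for_each_efi_memory_desc(md) {
                    if (md->type != EFI_BOOT_SERVICES_CODE &&
                        md->type != EFI_BOOT_SERVICES_DATA)
                            continue;
                    memblock_free_late(md->phys_addr,
                                       md->num_pages << EFI_PAGE_SHIFT);
            }
    }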

For example, on an Amazon EC2 t3.micro VM (1 GB) booting via EFI:

v6.2-rc2:
# grep -E 'Node|spanned|present|managed' /proc/zoneinfo
Node 0, zone      DMA
        spanned  4095
        present  3999
        managed  3840
Node 0, zone    DMA32
        spanned  246652
        present  245868
        managed  178867

v6.2-rc2 + patch:
# grep -E 'Node|spanned|present|managed' /proc/zoneinfo
Node 0, zone      DMA
        spanned  4095
        present  3999
        managed  3840
Node 0, zone    DMA32
        spanned  246652
        present  245868
        managed  222816 # +43,949 pages
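
(That works out to roughly 43,949 x 4 KiB, or about 172 MiB, returned to
the buddy allocator, around 17% of this 1 GB instance's memory.)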

Fixes: 3a80a7fa7989 ("mm: meminit: initialise a subset of struct pages if CONFIG_DEFERRED_STRUCT_PAGE_INIT is set")
Signed-off-by: Aaron Thompson <[email protected]>
---
mm/memblock.c | 8 +++++++-
tools/testing/memblock/internal.h | 4 ++++
2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 511d4783dcf1..fc3d8fbd2060 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1640,7 +1640,13 @@ void __init memblock_free_late(phys_addr_t base, phys_addr_t size)
end = PFN_DOWN(base + size);

for (; cursor < end; cursor++) {
- memblock_free_pages(pfn_to_page(cursor), cursor, 0);
+ /*
+ * Reserved pages are always initialized by the end of
+ * memblock_free_all() (by memmap_init() and, if deferred
+ * initialization is enabled, memmap_init_reserved_pages()), so
+ * these pages can be released directly to the buddy allocator.
+ */
+ __free_pages_core(pfn_to_page(cursor), 0);
totalram_pages_inc();
}
}
diff --git a/tools/testing/memblock/internal.h b/tools/testing/memblock/internal.h
index fdb7f5db7308..85973e55489e 100644
--- a/tools/testing/memblock/internal.h
+++ b/tools/testing/memblock/internal.h
@@ -15,6 +15,10 @@ bool mirrored_kernelcore = false;

struct page {};

+void __free_pages_core(struct page *page, unsigned int order)
+{
+}
+
void memblock_free_pages(struct page *page, unsigned long pfn,
unsigned int order)
{
--
2.30.2


2023-01-08 17:03:14

by Mike Rapoport

Subject: Re: [PATCH v3 1/1] mm: Always release pages to the buddy allocator in memblock_free_late().

On Fri, Jan 06, 2023 at 10:22:44PM +0000, Aaron Thompson wrote:
> If CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, memblock_free_pages()
> only releases pages to the buddy allocator if they are not in the
> deferred range. This is correct for free pages (as defined by
> for_each_free_mem_pfn_range_in_zone()) because free pages in the
> deferred range will be initialized and released as part of the deferred
> init process. memblock_free_pages() is called by memblock_free_late(),
> which is used to free reserved ranges after memblock_free_all() has
> run. All pages in reserved ranges have been initialized at that point,
> and accordingly, those pages are not touched by the deferred init
> process. This means that currently, if the pages that
> memblock_free_late() intends to release are in the deferred range, they
> will never be released to the buddy allocator. They will forever be
> reserved.
>
> In addition, memblock_free_pages() calls kmsan_memblock_free_pages(),
> which is also correct for free pages but is not correct for reserved
> pages. KMSAN metadata for reserved pages is initialized by
> kmsan_init_shadow(), which runs shortly before memblock_free_all().
>
> For both of these reasons, memblock_free_pages() should only be called
> for free pages, and memblock_free_late() should call __free_pages_core()
> directly instead.
>
> One case where this issue can occur in the wild is EFI boot on
> x86_64. The x86 EFI code reserves all EFI boot services memory ranges
> via memblock_reserve() and frees them later via memblock_free_late()
> (efi_reserve_boot_services() and efi_free_boot_services(),
> respectively). If any of those ranges happens to fall within the
> deferred init range, the pages will not be released and that memory will
> be unavailable.
>
> For example, on an Amazon EC2 t3.micro VM (1 GB) booting via EFI:
>
> v6.2-rc2:
> # grep -E 'Node|spanned|present|managed' /proc/zoneinfo
> Node 0, zone      DMA
>         spanned  4095
>         present  3999
>         managed  3840
> Node 0, zone    DMA32
>         spanned  246652
>         present  245868
>         managed  178867
>
> v6.2-rc2 + patch:
> # grep -E 'Node|spanned|present|managed' /proc/zoneinfo
> Node 0, zone      DMA
>         spanned  4095
>         present  3999
>         managed  3840
> Node 0, zone    DMA32
>         spanned  246652
>         present  245868
>         managed  222816 # +43,949 pages
>
> Fixes: 3a80a7fa7989 ("mm: meminit: initialise a subset of struct pages if CONFIG_DEFERRED_STRUCT_PAGE_INIT is set")
> Signed-off-by: Aaron Thompson <[email protected]>

Applied, thanks!

> ---
> mm/memblock.c | 8 +++++++-
> tools/testing/memblock/internal.h | 4 ++++
> 2 files changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 511d4783dcf1..fc3d8fbd2060 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -1640,7 +1640,13 @@ void __init memblock_free_late(phys_addr_t base, phys_addr_t size)
> end = PFN_DOWN(base + size);
>
> for (; cursor < end; cursor++) {
> - memblock_free_pages(pfn_to_page(cursor), cursor, 0);
> + /*
> + * Reserved pages are always initialized by the end of
> + * memblock_free_all() (by memmap_init() and, if deferred
> + * initialization is enabled, memmap_init_reserved_pages()), so
> + * these pages can be released directly to the buddy allocator.
> + */
> + __free_pages_core(pfn_to_page(cursor), 0);
> totalram_pages_inc();
> }
> }
> diff --git a/tools/testing/memblock/internal.h b/tools/testing/memblock/internal.h
> index fdb7f5db7308..85973e55489e 100644
> --- a/tools/testing/memblock/internal.h
> +++ b/tools/testing/memblock/internal.h
> @@ -15,6 +15,10 @@ bool mirrored_kernelcore = false;
>
> struct page {};
>
> +void __free_pages_core(struct page *page, unsigned int order)
> +{
> +}
> +
> void memblock_free_pages(struct page *page, unsigned long pfn,
> unsigned int order)
> {
> --
> 2.30.2
>

--
Sincerely yours,
Mike.