2020-07-21 07:32:55

by Li Wei

Subject: [PATCH] arm64: mm: free unused memmap for sparse memory model that define VMEMMAP

For memory holes, the sparse memory model with SPARSEMEM_VMEMMAP enabled does
not free the reserved memory for the page map. This patch frees it.

Signed-off-by: Wei Li <[email protected]>
Signed-off-by: Chen Feng <[email protected]>
Signed-off-by: Xia Qing <[email protected]>
---
arch/arm64/mm/init.c | 81 +++++++++++++++++++++++++++++++++++++++++++++-------
1 file changed, 71 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 1e93cfc7c47a..d1b56b47d5ba 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -441,7 +441,48 @@ void __init bootmem_init(void)
memblock_dump_all();
}

-#ifndef CONFIG_SPARSEMEM_VMEMMAP
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+#define VMEMMAP_PAGE_INUSE 0xFD
+static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
+{
+ unsigned long addr, end;
+ unsigned long next;
+ pmd_t *pmd;
+ void *page_addr;
+ phys_addr_t phys_addr;
+
+ addr = (unsigned long)pfn_to_page(start_pfn);
+ end = (unsigned long)pfn_to_page(end_pfn);
+
+ pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
+ for (; addr < end; addr = next, pmd++) {
+ next = pmd_addr_end(addr, end);
+
+ if (!pmd_present(*pmd))
+ continue;
+
+ if (IS_ALIGNED(addr, PMD_SIZE) &&
+ IS_ALIGNED(next, PMD_SIZE)) {
+ phys_addr = __pfn_to_phys(pmd_pfn(*pmd));
+ free_bootmem(phys_addr, PMD_SIZE);
+ pmd_clear(pmd);
+ } else {
+ /* If here, we are freeing vmemmap pages. */
+ memset((void *)addr, VMEMMAP_PAGE_INUSE, next - addr);
+ page_addr = page_address(pmd_page(*pmd));
+
+ if (!memchr_inv(page_addr, VMEMMAP_PAGE_INUSE,
+ PMD_SIZE)) {
+ phys_addr = __pfn_to_phys(pmd_pfn(*pmd));
+ free_bootmem(phys_addr, PMD_SIZE);
+ pmd_clear(pmd);
+ }
+ }
+ }
+
+ flush_tlb_all();
+}
+#else
static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
{
struct page *start_pg, *end_pg;
@@ -468,31 +509,53 @@ static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
memblock_free(pg, pgend - pg);
}

+#endif
+
/*
* The mem_map array can get very big. Free the unused area of the memory map.
*/
static void __init free_unused_memmap(void)
{
- unsigned long start, prev_end = 0;
+ unsigned long start, cur_start, prev_end = 0;
struct memblock_region *reg;

for_each_memblock(memory, reg) {
- start = __phys_to_pfn(reg->base);
+ cur_start = __phys_to_pfn(reg->base);

#ifdef CONFIG_SPARSEMEM
/*
* Take care not to free memmap entries that don't exist due
* to SPARSEMEM sections which aren't present.
*/
- start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
-#endif
+ start = min(cur_start, ALIGN(prev_end, PAGES_PER_SECTION));
+
/*
- * If we had a previous bank, and there is a space between the
- * current bank and the previous, free it.
+ * Free memory in the case of:
+ * 1. if cur_start - prev_end <= PAGES_PER_SECTION,
+ * free pre_end ~ cur_start.
+ * 2. if cur_start - prev_end > PAGES_PER_SECTION,
+ * free pre_end ~ ALIGN(prev_end, PAGES_PER_SECTION).
*/
if (prev_end && prev_end < start)
free_memmap(prev_end, start);

+ /*
+ * Free memory in the case of:
+ * if cur_start - prev_end > PAGES_PER_SECTION,
+ * free ALIGN_DOWN(cur_start, PAGES_PER_SECTION) ~ cur_start.
+ */
+ if (cur_start > start &&
+ !IS_ALIGNED(cur_start, PAGES_PER_SECTION))
+ free_memmap(ALIGN_DOWN(cur_start, PAGES_PER_SECTION),
+ cur_start);
+#else
+ /*
+ * If we had a previous bank, and there is a space between the
+ * current bank and the previous, free it.
+ */
+ if (prev_end && prev_end < cur_start)
+ free_memmap(prev_end, cur_start);
+#endif
/*
* Align up here since the VM subsystem insists that the
* memmap entries are valid from the bank end aligned to
@@ -507,7 +570,6 @@ static void __init free_unused_memmap(void)
free_memmap(prev_end, ALIGN(prev_end, PAGES_PER_SECTION));
#endif
}
-#endif /* !CONFIG_SPARSEMEM_VMEMMAP */

/*
* mem_init() marks the free areas in the mem_map and tells us how much memory
@@ -524,9 +586,8 @@ void __init mem_init(void)

set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);

-#ifndef CONFIG_SPARSEMEM_VMEMMAP
free_unused_memmap();
-#endif
+
/* this will put all unused low memory onto the freelists */
memblock_free_all();

--
2.15.0


2020-07-22 06:11:22

by Mike Rapoport

Subject: Re: [PATCH] arm64: mm: free unused memmap for sparse memory model that define VMEMMAP

Hi,

On Tue, Jul 21, 2020 at 03:32:03PM +0800, Wei Li wrote:
> For the memory hole, sparse memory model that define SPARSEMEM_VMEMMAP
> do not free the reserved memory for the page map, this patch do it.

Are there numbers showing how much memory is actually freed?

The freeing of empty memmap would become rather complex with these
changes, do the memory savings justify it?

> Signed-off-by: Wei Li <[email protected]>
> Signed-off-by: Chen Feng <[email protected]>
> Signed-off-by: Xia Qing <[email protected]>
> ---
> arch/arm64/mm/init.c | 81 +++++++++++++++++++++++++++++++++++++++++++++-------
> 1 file changed, 71 insertions(+), 10 deletions(-)
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 1e93cfc7c47a..d1b56b47d5ba 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -441,7 +441,48 @@ void __init bootmem_init(void)
> memblock_dump_all();
> }
>
> -#ifndef CONFIG_SPARSEMEM_VMEMMAP
> +#ifdef CONFIG_SPARSEMEM_VMEMMAP
> +#define VMEMMAP_PAGE_INUSE 0xFD
> +static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
> +{
> + unsigned long addr, end;
> + unsigned long next;
> + pmd_t *pmd;
> + void *page_addr;
> + phys_addr_t phys_addr;
> +
> + addr = (unsigned long)pfn_to_page(start_pfn);
> + end = (unsigned long)pfn_to_page(end_pfn);
> +
> + pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
> + for (; addr < end; addr = next, pmd++) {
> + next = pmd_addr_end(addr, end);
> +
> + if (!pmd_present(*pmd))
> + continue;
> +
> + if (IS_ALIGNED(addr, PMD_SIZE) &&
> + IS_ALIGNED(next, PMD_SIZE)) {
> + phys_addr = __pfn_to_phys(pmd_pfn(*pmd));
> + free_bootmem(phys_addr, PMD_SIZE);
> + pmd_clear(pmd);
> + } else {
> + /* If here, we are freeing vmemmap pages. */
> + memset((void *)addr, VMEMMAP_PAGE_INUSE, next - addr);
> + page_addr = page_address(pmd_page(*pmd));
> +
> + if (!memchr_inv(page_addr, VMEMMAP_PAGE_INUSE,
> + PMD_SIZE)) {
> + phys_addr = __pfn_to_phys(pmd_pfn(*pmd));
> + free_bootmem(phys_addr, PMD_SIZE);
> + pmd_clear(pmd);
> + }
> + }
> + }
> +
> + flush_tlb_all();
> +}
> +#else
> static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
> {
> struct page *start_pg, *end_pg;
> @@ -468,31 +509,53 @@ static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
> memblock_free(pg, pgend - pg);
> }
>
> +#endif
> +
> /*
> * The mem_map array can get very big. Free the unused area of the memory map.
> */
> static void __init free_unused_memmap(void)
> {
> - unsigned long start, prev_end = 0;
> + unsigned long start, cur_start, prev_end = 0;
> struct memblock_region *reg;
>
> for_each_memblock(memory, reg) {
> - start = __phys_to_pfn(reg->base);
> + cur_start = __phys_to_pfn(reg->base);
>
> #ifdef CONFIG_SPARSEMEM
> /*
> * Take care not to free memmap entries that don't exist due
> * to SPARSEMEM sections which aren't present.
> */
> - start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
> -#endif
> + start = min(cur_start, ALIGN(prev_end, PAGES_PER_SECTION));
> +
> /*
> - * If we had a previous bank, and there is a space between the
> - * current bank and the previous, free it.
> + * Free memory in the case of:
> + * 1. if cur_start - prev_end <= PAGES_PER_SECTION,
> + * free pre_end ~ cur_start.
> + * 2. if cur_start - prev_end > PAGES_PER_SECTION,
> + * free pre_end ~ ALIGN(prev_end, PAGES_PER_SECTION).
> */
> if (prev_end && prev_end < start)
> free_memmap(prev_end, start);
>
> + /*
> + * Free memory in the case of:
> + * if cur_start - prev_end > PAGES_PER_SECTION,
> + * free ALIGN_DOWN(cur_start, PAGES_PER_SECTION) ~ cur_start.
> + */
> + if (cur_start > start &&
> + !IS_ALIGNED(cur_start, PAGES_PER_SECTION))
> + free_memmap(ALIGN_DOWN(cur_start, PAGES_PER_SECTION),
> + cur_start);
> +#else
> + /*
> + * If we had a previous bank, and there is a space between the
> + * current bank and the previous, free it.
> + */
> + if (prev_end && prev_end < cur_start)
> + free_memmap(prev_end, cur_start);
> +#endif
> /*
> * Align up here since the VM subsystem insists that the
> * memmap entries are valid from the bank end aligned to
> @@ -507,7 +570,6 @@ static void __init free_unused_memmap(void)
> free_memmap(prev_end, ALIGN(prev_end, PAGES_PER_SECTION));
> #endif
> }
> -#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
>
> /*
> * mem_init() marks the free areas in the mem_map and tells us how much memory
> @@ -524,9 +586,8 @@ void __init mem_init(void)
>
> set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
>
> -#ifndef CONFIG_SPARSEMEM_VMEMMAP
> free_unused_memmap();
> -#endif
> +
> /* this will put all unused low memory onto the freelists */
> memblock_free_all();
>
> --
> 2.15.0
>

--
Sincerely yours,
Mike.

2020-07-22 08:44:23

by Li Wei

Subject: RE: [PATCH] arm64: mm: free unused memmap for sparse memory model that define VMEMMAP



-----Original Message-----
From: Mike Rapoport [mailto:[email protected]]
Sent: July 22, 2020 14:07
To: liwei (CM) <[email protected]>
Cc: [email protected]; [email protected]; Xiaqing (A) <[email protected]>; Chenfeng (puck) <[email protected]>; butao <[email protected]>; fengbaopeng <[email protected]>; [email protected]; [email protected]; Song Bao Hua (Barry Song) <[email protected]>; [email protected]; [email protected]; sujunfei <[email protected]>
Subject: Re: [PATCH] arm64: mm: free unused memmap for sparse memory model that define VMEMMAP

Hi,

On Tue, Jul 21, 2020 at 03:32:03PM +0800, Wei Li wrote:
> For the memory hole, sparse memory model that define SPARSEMEM_VMEMMAP
> do not free the reserved memory for the page map, this patch do it.

Are there numbers showing how much memory is actually freed?

The freeing of empty memmap would become rather complex with these changes, do the memory savings justify it?

Hi, Mike
In the sparse memory model, the size of a section is 1 GB (SECTION_SIZE_BITS 30) by default. Therefore, when a memory region is smaller than a section, that is, when there is a hole, the patch takes effect:

1) For example, the DDR size on our platform is 8 GB, but 3.5 GB to 4 GB is the space where the SoC registers live. If the page map were still created for memory3 (3 GB to 4 GB), 512 MB / 4 KB x 64 bytes = 8 MB would be wasted, because no page map is needed for the 3.5 GB to 4 GB range. The 3.5 GB to 4 GB of DDR is instead shifted to 16 GB to 16.5 GB, so memory is again wasted because no page map is needed for 16.5 GB to 17 GB, and the patch saves another 8 MB there. In total, 16 MB is saved (see the rough calculation sketched below).

2) For reserved memory, some modules need to reserve a large amount of memory with the no-map attribute. On our platform, the modem module reserves more than 256 MB, and the patch saves 4 MB there. In general, whenever such a reserved region is larger than 128 MB, the patch can free the unneeded page map memory.
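
The arithmetic behind these numbers can be sketched as a small userspace program (the 4 KB page size and 64-byte struct page are the assumptions behind the figures above):

#include <stdio.h>

int main(void)
{
	unsigned long long hole = 512ULL << 20;	/* 512 MB hole, e.g. the SoC register window */
	unsigned long page_size = 4096;		/* 4 KB pages */
	unsigned long page_desc = 64;		/* assumed sizeof(struct page) */

	/* memmap bytes that back the hole but describe no usable RAM */
	unsigned long long wasted = hole / page_size * page_desc;

	printf("wasted memmap per 512 MB hole: %llu MB\n", wasted >> 20);	/* prints 8 */
	return 0;
}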

It may be possible to avoid some of this waste by reducing the section size, but freeing the wasted page map when VMEMMAP is defined is another approach, similar to what the flat memory model and the non-VMEMMAP sparse memory model already do, and it makes the whole code path more complete.

If you have a better idea, I'd be happy to discuss it with you.

Thanks!

> Signed-off-by: Wei Li <[email protected]>
> Signed-off-by: Chen Feng <[email protected]>
> Signed-off-by: Xia Qing <[email protected]>
> ---
> arch/arm64/mm/init.c | 81
> +++++++++++++++++++++++++++++++++++++++++++++-------
> 1 file changed, 71 insertions(+), 10 deletions(-)
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index
> 1e93cfc7c47a..d1b56b47d5ba 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -441,7 +441,48 @@ void __init bootmem_init(void)
> memblock_dump_all();
> }
>
> -#ifndef CONFIG_SPARSEMEM_VMEMMAP
> +#ifdef CONFIG_SPARSEMEM_VMEMMAP
> +#define VMEMMAP_PAGE_INUSE 0xFD
> +static inline void free_memmap(unsigned long start_pfn, unsigned long
> +end_pfn) {
> + unsigned long addr, end;
> + unsigned long next;
> + pmd_t *pmd;
> + void *page_addr;
> + phys_addr_t phys_addr;
> +
> + addr = (unsigned long)pfn_to_page(start_pfn);
> + end = (unsigned long)pfn_to_page(end_pfn);
> +
> + pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
> + for (; addr < end; addr = next, pmd++) {
> + next = pmd_addr_end(addr, end);
> +
> + if (!pmd_present(*pmd))
> + continue;
> +
> + if (IS_ALIGNED(addr, PMD_SIZE) &&
> + IS_ALIGNED(next, PMD_SIZE)) {
> + phys_addr = __pfn_to_phys(pmd_pfn(*pmd));
> + free_bootmem(phys_addr, PMD_SIZE);
> + pmd_clear(pmd);
> + } else {
> + /* If here, we are freeing vmemmap pages. */
> + memset((void *)addr, VMEMMAP_PAGE_INUSE, next - addr);
> + page_addr = page_address(pmd_page(*pmd));
> +
> + if (!memchr_inv(page_addr, VMEMMAP_PAGE_INUSE,
> + PMD_SIZE)) {
> + phys_addr = __pfn_to_phys(pmd_pfn(*pmd));
> + free_bootmem(phys_addr, PMD_SIZE);
> + pmd_clear(pmd);
> + }
> + }
> + }
> +
> + flush_tlb_all();
> +}
> +#else
> static inline void free_memmap(unsigned long start_pfn, unsigned long
> end_pfn) {
> struct page *start_pg, *end_pg;
> @@ -468,31 +509,53 @@ static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
> memblock_free(pg, pgend - pg);
> }
>
> +#endif
> +
> /*
> * The mem_map array can get very big. Free the unused area of the memory map.
> */
> static void __init free_unused_memmap(void) {
> - unsigned long start, prev_end = 0;
> + unsigned long start, cur_start, prev_end = 0;
> struct memblock_region *reg;
>
> for_each_memblock(memory, reg) {
> - start = __phys_to_pfn(reg->base);
> + cur_start = __phys_to_pfn(reg->base);
>
> #ifdef CONFIG_SPARSEMEM
> /*
> * Take care not to free memmap entries that don't exist due
> * to SPARSEMEM sections which aren't present.
> */
> - start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
> -#endif
> + start = min(cur_start, ALIGN(prev_end, PAGES_PER_SECTION));
> +
> /*
> - * If we had a previous bank, and there is a space between the
> - * current bank and the previous, free it.
> + * Free memory in the case of:
> + * 1. if cur_start - prev_end <= PAGES_PER_SECTION,
> + * free pre_end ~ cur_start.
> + * 2. if cur_start - prev_end > PAGES_PER_SECTION,
> + * free pre_end ~ ALIGN(prev_end, PAGES_PER_SECTION).
> */
> if (prev_end && prev_end < start)
> free_memmap(prev_end, start);
>
> + /*
> + * Free memory in the case of:
> + * if cur_start - prev_end > PAGES_PER_SECTION,
> + * free ALIGN_DOWN(cur_start, PAGES_PER_SECTION) ~ cur_start.
> + */
> + if (cur_start > start &&
> + !IS_ALIGNED(cur_start, PAGES_PER_SECTION))
> + free_memmap(ALIGN_DOWN(cur_start, PAGES_PER_SECTION),
> + cur_start);
> +#else
> + /*
> + * If we had a previous bank, and there is a space between the
> + * current bank and the previous, free it.
> + */
> + if (prev_end && prev_end < cur_start)
> + free_memmap(prev_end, cur_start);
> +#endif
> /*
> * Align up here since the VM subsystem insists that the
> * memmap entries are valid from the bank end aligned to @@ -507,7
> +570,6 @@ static void __init free_unused_memmap(void)
> free_memmap(prev_end, ALIGN(prev_end, PAGES_PER_SECTION)); #endif
> }
> -#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
>
> /*
> * mem_init() marks the free areas in the mem_map and tells us how
> much memory @@ -524,9 +586,8 @@ void __init mem_init(void)
>
> set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
>
> -#ifndef CONFIG_SPARSEMEM_VMEMMAP
> free_unused_memmap();
> -#endif
> +
> /* this will put all unused low memory onto the freelists */
> memblock_free_all();
>
> --
> 2.15.0
>

--
Sincerely yours,
Mike.

2020-07-22 12:52:46

by Catalin Marinas

Subject: Re: RE: [PATCH] arm64: mm: free unused memmap for sparse memory model that define VMEMMAP

On Wed, Jul 22, 2020 at 08:41:17AM +0000, liwei (CM) wrote:
> Mike Rapoport wrote:
> > On Tue, Jul 21, 2020 at 03:32:03PM +0800, Wei Li wrote:
> > > For the memory hole, sparse memory model that define SPARSEMEM_VMEMMAP
> > > do not free the reserved memory for the page map, this patch do it.
> >
> > Are there numbers showing how much memory is actually freed?
> >
> > The freeing of empty memmap would become rather complex with these
> > changes, do the memory savings justify it?
>
> In the sparse memory model, the size of a section is 1 GB
> (SECTION_SIZE_BITS 30) by default.

Can we reduce SECTION_SIZE_BITS instead? Say 26?

--
Catalin

2020-07-22 13:44:04

by Li Wei

Subject: RE: RE: [PATCH] arm64: mm: free unused memmap for sparse memory model that define VMEMMAP


-----Original Message-----
From: Catalin Marinas [mailto:[email protected]]
Sent: July 22, 2020 20:49
To: liwei (CM) <[email protected]>
Cc: Mike Rapoport <[email protected]>; [email protected]; Xiaqing (A) <[email protected]>; Chenfeng (puck) <[email protected]>; butao <[email protected]>; fengbaopeng <[email protected]>; [email protected]; [email protected]; Song Bao Hua (Barry Song) <[email protected]>; [email protected]; [email protected]; sujunfei <[email protected]>; zhaojiapeng <[email protected]>
Subject: Re: RE: [PATCH] arm64: mm: free unused memmap for sparse memory model that define VMEMMAP

On Wed, Jul 22, 2020 at 08:41:17AM +0000, liwei (CM) wrote:
> Mike Rapoport wrote:
> > On Tue, Jul 21, 2020 at 03:32:03PM +0800, Wei Li wrote:
> > > For the memory hole, sparse memory model that define
> > > SPARSEMEM_VMEMMAP do not free the reserved memory for the page map, this patch do it.
> >
> > Are there numbers showing how much memory is actually freed?
> >
> > The freeing of empty memmap would become rather complex with these
> > changes, do the memory savings justify it?
>
> In the sparse memory model, the size of a section is 1 GB
> (SECTION_SIZE_BITS 30) by default.

Can we reduce SECTION_SIZE_BITS instead? Say 26?

Hi, Catalin

Yes, you are right, reducing SECTION_SIZE_BITS to 26 saves almost the same amount of memory as the patch.
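
For reference, as far as I can tell the change itself would just be a one-liner in arch/arm64/include/asm/sparsemem.h (the value 30 below is my understanding of the current default):

--- a/arch/arm64/include/asm/sparsemem.h
+++ b/arch/arm64/include/asm/sparsemem.h
@@
 #ifdef CONFIG_SPARSEMEM
 #define MAX_PHYSMEM_BITS	CONFIG_ARM64_PA_BITS
-#define SECTION_SIZE_BITS	30
+#define SECTION_SIZE_BITS	26
 #endif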

1) However, it is not clear whether changing the section size has any other impact.

2) The flat memory model and the sparse memory model without VMEMMAP each already have their own way to free the unused memmap. I think this patch gives the sparse memory model with VMEMMAP a similar one.

3) Explicitly freeing the unused memmap does reduce unnecessary memory waste for users who do not notice the section size setting.

Hope you will reconsider the purpose and significance of this patch, thanks.

--
Catalin

2020-07-23 02:34:39

by Anshuman Khandual

Subject: Re: [PATCH] arm64: mm: free unused memmap for sparse memory model that define VMEMMAP



On 07/21/2020 01:02 PM, Wei Li wrote:
> For the memory hole, sparse memory model that define SPARSEMEM_VMEMMAP
> do not free the reserved memory for the page map, this patch do it.
>
> Signed-off-by: Wei Li <[email protected]>
> Signed-off-by: Chen Feng <[email protected]>
> Signed-off-by: Xia Qing <[email protected]>
> ---
> arch/arm64/mm/init.c | 81 +++++++++++++++++++++++++++++++++++++++++++++-------
> 1 file changed, 71 insertions(+), 10 deletions(-)

This patch does not compile on 5.8-rc6 with defconfig.

2020-07-23 03:31:23

by Li Wei

Subject: RE: [PATCH] arm64: mm: free unused memmap for sparse memory model that define VMEMMAP



-----Original Message-----
From: Anshuman Khandual [mailto:[email protected]]
Sent: July 23, 2020 10:33
To: liwei (CM) <[email protected]>; [email protected]; [email protected]
Cc: Song Bao Hua (Barry Song) <[email protected]>; sujunfei <[email protected]>; Xiaqing (A) <[email protected]>; [email protected]; [email protected]; Chenfeng (puck) <[email protected]>; [email protected]; [email protected]; fengbaopeng <[email protected]>; [email protected]; butao <[email protected]>
Subject: Re: [PATCH] arm64: mm: free unused memmap for sparse memory model that define VMEMMAP



On 07/21/2020 01:02 PM, Wei Li wrote:
> For the memory hole, sparse memory model that define SPARSEMEM_VMEMMAP
> do not free the reserved memory for the page map, this patch do it.
>
> Signed-off-by: Wei Li <[email protected]>
> Signed-off-by: Chen Feng <[email protected]>
> Signed-off-by: Xia Qing <[email protected]>
> ---
> arch/arm64/mm/init.c | 81
> +++++++++++++++++++++++++++++++++++++++++++++-------
> 1 file changed, 71 insertions(+), 10 deletions(-)

This patch does not compile on 5.8-rc6 with defconfig.

Hi,

We're working on a v2 and will send it as soon as possible.

Thanks.

2020-07-23 11:32:15

by Catalin Marinas

Subject: Re: RE: RE: [PATCH] arm64: mm: free unused memmap for sparse memory model that define VMEMMAP

On Wed, Jul 22, 2020 at 01:40:34PM +0000, liwei (CM) wrote:
> Catalin Marinas wrote:
> > On Wed, Jul 22, 2020 at 08:41:17AM +0000, liwei (CM) wrote:
> > > Mike Rapoport wrote:
> > > > On Tue, Jul 21, 2020 at 03:32:03PM +0800, Wei Li wrote:
> > > > > For the memory hole, sparse memory model that define
> > > > > SPARSEMEM_VMEMMAP do not free the reserved memory for the page
> > > > > map, this patch do it.
> > > >
> > > > Are there numbers showing how much memory is actually freed?
> > > >
> > > > The freeing of empty memmap would become rather complex with these
> > > > changes, do the memory savings justify it?
> > >
> > > In the sparse memory model, the size of a section is 1 GB
> > > (SECTION_SIZE_BITS 30) by default.
> >
> > Can we reduce SECTION_SIZE_BITS instead? Say 26?
>
> Yes, you are right, reduce SECTION_SIZE_BITS to 26 can save almost the
> same memory as the patch.
>
> 1) However, it is not clear whether changing the section size has any
> other impact.

Well, we should analyse this.

> 2) Just like the flat memory model and the sparse memory model that
> does not define VMEMMAP, both of them have their own ways to free
> unused memmap. I think we've given a similar way for sparse memory
> define VMEMMAP.

I think we did it for flatmem initially (on arm32) and added support for
sparsemem later on, so free_unused_memmap() had to cope with sparse
sections. On arm64 we introduced vmemmap support and didn't bother with
the freeing at all because of the added complexity of the vmemmap page
tables.

I wonder whether we should just disallow flatmem and non-vmemmap
sparsemem on arm64. Is there any value in keeping them around?

> 3) This explicit free unused memmap method does reduce unnecessary
> memory waste for users who do not notice the section size
> modification.

But if we changed SECTION_SIZE_BITS in the mainline kernel, then we
wouldn't need additional code to free the unused memmap.

--
Catalin

2020-07-23 13:20:34

by Mike Rapoport

Subject: Re: RE: RE: [PATCH] arm64: mm: free unused memmap for sparse memory model that define VMEMMAP

On Thu, Jul 23, 2020 at 12:29:26PM +0100, Catalin Marinas wrote:
> On Wed, Jul 22, 2020 at 01:40:34PM +0000, liwei (CM) wrote:
> > Catalin Marinas wrote:
> > > On Wed, Jul 22, 2020 at 08:41:17AM +0000, liwei (CM) wrote:
> > > > Mike Rapoport wrote:
> > > > > On Tue, Jul 21, 2020 at 03:32:03PM +0800, Wei Li wrote:
> > > > > > For the memory hole, sparse memory model that define
> > > > > > SPARSEMEM_VMEMMAP do not free the reserved memory for the page
> > > > > > map, this patch do it.
> > > > >
> > > > > Are there numbers showing how much memory is actually freed?
> > > > >
> > > > > The freeing of empty memmap would become rather complex with these
> > > > > changes, do the memory savings justify it?
> > > >
> > > > In the sparse memory model, the size of a section is 1 GB
> > > > (SECTION_SIZE_BITS 30) by default.
> > >
> > > Can we reduce SECTION_SIZE_BITS instead? Say 26?
> >
> > Yes, you are right, reduce SECTION_SIZE_BITS to 26 can save almost the
> > same memory as the patch.
> >
> > 1) However, it is not clear whether changing the section size has any
> > other impact.
>
> Well, we should analyse this.
>
> > 2) Just like the flat memory model and the sparse memory model that
> > does not define VMEMMAP, both of them have their own ways to free
> > unused memmap. I think we've given a similar way for sparse memory
> > define VMEMMAP.
>
> I think we did it for flatmem initially (on arm32) and added support for
> sparsemem later on, so free_unused_memmap() had to cope with sparse
> sections. On arm64 we introduced vmemmap support and didn't bother with
> the freeing at all because of the added complexity of the vmemmap page
> tables.
>
> I wonder whether we should just disallow flatmem and non-vmemmap
> sparsemem on arm64. Is there any value in keeping them around?

FLATMEM is useful for UMA systems with a single memory bank, so probably
it's worth keeping it for low end machines.

Non-vmemmap sparsemem is essentially disabled in arch/arm64/Kconfig, so
for NUMA configurations SPARSEMEM_VMEMMAP is the only choice.

> > 3) This explicit free unused memmap method does reduce unnecessary
> > memory waste for users who do not notice the section size
> > modification.
>
> But if we changed SECTION_SIZE_BITS in the mainline kernel, then we
> wouldn't need additional code to free the unused memmap.

Moreover, if we reduce SECTION_SIZE_BITS, we can drop
free_unused_memmap(), and since the arm64 memory map for sparsemem will no
longer differ from other arches, we can drop the custom pfn_valid() as well.

> --
> Catalin

--
Sincerely yours,
Mike.

2020-07-24 03:41:36

by Li Wei

Subject: RE: RE: RE: [PATCH] arm64: mm: free unused memmap for sparse memory model that define VMEMMAP



-----Original Message-----
From: Mike Rapoport [mailto:[email protected]]
Sent: July 23, 2020 21:19
To: Catalin Marinas <[email protected]>
Cc: liwei (CM) <[email protected]>; [email protected]; Xiaqing (A) <[email protected]>; Chenfeng (puck) <[email protected]>; butao <[email protected]>; fengbaopeng <[email protected]>; [email protected]; [email protected]; Song Bao Hua (Barry Song) <[email protected]>; [email protected]; [email protected]; sujunfei <[email protected]>; zhaojiapeng <[email protected]>
Subject: Re: RE: RE: [PATCH] arm64: mm: free unused memmap for sparse memory model that define VMEMMAP

On Thu, Jul 23, 2020 at 12:29:26PM +0100, Catalin Marinas wrote:
> On Wed, Jul 22, 2020 at 01:40:34PM +0000, liwei (CM) wrote:
> > Catalin Marinas wrote:
> > > On Wed, Jul 22, 2020 at 08:41:17AM +0000, liwei (CM) wrote:
> > > > Mike Rapoport wrote:
> > > > > On Tue, Jul 21, 2020 at 03:32:03PM +0800, Wei Li wrote:
> > > > > > For the memory hole, sparse memory model that define
> > > > > > SPARSEMEM_VMEMMAP do not free the reserved memory for the
> > > > > > page map, this patch do it.
> > > > >
> > > > > Are there numbers showing how much memory is actually freed?
> > > > >
> > > > > The freeing of empty memmap would become rather complex with
> > > > > these changes, do the memory savings justify it?
> > > >
> > > > In the sparse memory model, the size of a section is 1 GB
> > > > (SECTION_SIZE_BITS 30) by default.
> > >
> > > Can we reduce SECTION_SIZE_BITS instead? Say 26?
> >
> > Yes, you are right, reduce SECTION_SIZE_BITS to 26 can save almost
> > the same memory as the patch.
> >
> > 1) However, it is not clear whether changing the section size has
> > any other impact.
>
> Well, we should analyse this.
>
> > 2) Just like the flat memory model and the sparse memory model that
> > does not define VMEMMAP, both of them have their own ways to free
> > unused memmap. I think we've given a similar way for sparse memory
> > define VMEMMAP.
>
> I think we did it for flatmem initially (on arm32) and added support
> for sparsemem later on, so free_unused_memmap() had to cope with
> sparse sections. On arm64 we introduced vmemmap support and didn't
> bother with the freeing at all because of the added complexity of the
> vmemmap page tables.
>
> I wonder whether we should just disallow flatmem and non-vmemmap
> sparsemem on arm64. Is there any value in keeping them around?

FLATMEM is useful for UMA systems with a single memory bank, so probably it's worth keeping it for low end machines.

Non-vmemmap sparsemem is essentially disable in arch/arm64/Kconfig, so for NUMA configurations SPARSEMEM_VMEMMAP is the only choice.

> > 3) This explicit free unused memmap method does reduce unnecessary
> > memory waste for users who do not notice the section size
> > modification.
>
> But if we changed SECTION_SIZE_BITS in the mainline kernel, then we
> wouldn't need additional code to free the unused memmap.

Moreover if we reduce SECTION_SIZE_BITS, we can drop
free_unused_memmap() and since the arm64 memory map for sparse will not differ from other arches we can drop custom pfn_valid() as well.

Hi, Mike & Catalin

Let's think about and discuss the impact of directly reducing the section size:

1) The memory in PCs and mobile devices keeps increasing. If the section size is reduced, the number of sections, and therefore the memory consumed by the section structures, also increases (rough numbers are sketched below).

2) If the section size is too small, memory hotplug may be affected: hot-adding or hot-removing a memblock then means onlining or offlining many sections, which may increase the software overhead.
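
As a rough illustration of point 1), a small userspace sketch of the section counts for the 8 GB example mentioned earlier (4 KB pages assumed):

#include <stdio.h>

int main(void)
{
	unsigned long long ram = 8ULL << 30;	/* 8 GB, as in the earlier example */
	unsigned int bits[] = { 30, 26 };	/* default vs. proposed SECTION_SIZE_BITS */

	for (int i = 0; i < 2; i++) {
		unsigned long long section = 1ULL << bits[i];
		/* more sections mean more mem_section entries and per-section metadata to manage */
		printf("SECTION_SIZE_BITS=%u: section size %llu MB, %llu sections for 8 GB\n",
		       bits[i], section >> 20, ram / section);
	}
	return 0;
}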

Currently, page map memory is wasted when the default section size is used, and in some cases the waste is serious. Please help check whether reducing the section size has any other impacts and whether it fits the long-term evolution.

Thanks.

> --
> Catalin

--
Sincerely yours,
Mike.