2021-05-31 09:21:11

by Aisheng Dong

Subject: [PATCH V2 0/6] mm/sparse: a few minor fixes and improvements

A few minor fixes and improvements for mm/sparse.

Dong Aisheng (6):
mm: drop SECTION_SHIFT in code comments
mm/sparse: free section usage memory in case populate_section_memmap
failed
mm/sparse: move mem_sections allocation out of memory_present()
mm: rename the global section array to mem_sections
mm/page_alloc: improve memmap_pages dbg msg
mm/sparse: remove one duplicated #ifdef CONFIG_SPARSEMEM_EXTREME

include/linux/mmzone.h | 12 ++++----
kernel/crash_core.c | 4 +--
mm/page_alloc.c | 2 +-
mm/sparse.c | 66 ++++++++++++++++++++++--------------------
4 files changed, 43 insertions(+), 41 deletions(-)

--
2.25.1


2021-05-31 09:21:17

by Aisheng Dong

Subject: [PATCH V2 1/6] mm: drop SECTION_SHIFT in code comments

The kernel code actually uses SECTIONS_SHIFT, so the code comment
referring to SECTION_SHIFT is strictly incorrect. Since
commit bbeae5b05ef6 ("mm: move page flags layout to separate header"),
the SECTIONS_SHIFT definition lives in include/linux/page-flags-layout.h.
As the code itself is quite straightforward, simply remove the comment
instead of moving it to the new location.

This also fixes a checkpatch warning triggered by the original code:
WARNING: please, no space before tabs
+ * SECTIONS_SHIFT ^I^I#bits space required to store a section #$

Cc: Andrew Morton <[email protected]>
Cc: Yu Zhao <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Kees Cook <[email protected]>
Suggested-by: Yu Zhao <[email protected]>
Signed-off-by: Dong Aisheng <[email protected]>
---
Changelog:
v1->v2:
* drop the SECTION_SHIFT code comment instead of moving it
---
include/linux/mmzone.h | 2 --
1 file changed, 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 05cbcddbf432..a6bfde85ddb0 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1203,8 +1203,6 @@ static inline struct zoneref *first_zones_zonelist(struct zonelist *zonelist,
#ifdef CONFIG_SPARSEMEM

/*
- * SECTION_SHIFT #bits space required to store a section #
- *
* PA_SECTION_SHIFT physical address to/from section number
* PFN_SECTION_SHIFT pfn to/from section number
*/
--
2.25.1

2021-05-31 09:21:27

by Aisheng Dong

Subject: [PATCH V2 2/6] mm/sparse: free section usage memory in case populate_section_memmap failed

Free the section usage memory if populate_section_memmap() fails.
map_count is used to track the remaining unused memory to be freed.

Cc: Andrew Morton <[email protected]>
Cc: Mike Rapoport <[email protected]>
Signed-off-by: Dong Aisheng <[email protected]>
---
ChangeLog:
v1->v2:
* use goto + label per Mike's suggestion
---
mm/sparse.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/mm/sparse.c b/mm/sparse.c
index 7ac481353b6b..408b737e168e 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -533,7 +533,7 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin,
mem_section_usage_size() * map_count);
if (!usage) {
pr_err("%s: node[%d] usemap allocation failed", __func__, nid);
- goto failed;
+ goto failed1;
}
sparse_buffer_init(map_count * section_map_size(), nid);
for_each_present_section_nr(pnum_begin, pnum) {
@@ -548,17 +548,20 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin,
pr_err("%s: node[%d] memory map backing failed. Some memory will not be available.",
__func__, nid);
pnum_begin = pnum;
- sparse_buffer_fini();
- goto failed;
+ goto failed2;
}
check_usemap_section_nr(nid, usage);
sparse_init_one_section(__nr_to_section(pnum), pnum, map, usage,
SECTION_IS_EARLY);
usage = (void *) usage + mem_section_usage_size();
+ map_count--;
}
sparse_buffer_fini();
return;
-failed:
+failed2:
+ sparse_buffer_fini();
+ memblock_free_early(__pa(usage), map_count * mem_section_usage_size());
+failed1:
/* We failed to allocate, mark all the following pnums as not present */
for_each_present_section_nr(pnum_begin, pnum) {
struct mem_section *ms;
--
2.25.1

2021-05-31 09:21:48

by Aisheng Dong

Subject: [PATCH V2 3/6] mm/sparse: move mem_sections allocation out of memory_present()

The only path to memory_present() is from memblocks_present().
The struct mem_section **mem_section array only needs to be initialized
once, so there is no need to keep the initialization/allocation code in
memory_present(), which is called multiple times (once per memory region).

After the move, the 'unlikely' check becomes meaningless because the
allocation now runs exactly once, so it is dropped as well.

Cc: Andrew Morton <[email protected]>
Cc: Mike Rapoport <[email protected]>
Signed-off-by: Dong Aisheng <[email protected]>
---
ChangeLog:
v1->v2:
* split into a helper function and called directly from sparse_init
---
mm/sparse.c | 29 ++++++++++++++++-------------
1 file changed, 16 insertions(+), 13 deletions(-)

diff --git a/mm/sparse.c b/mm/sparse.c
index 408b737e168e..d02ee6bb7cbc 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -60,6 +60,18 @@ static inline void set_section_nid(unsigned long section_nr, int nid)
#endif

#ifdef CONFIG_SPARSEMEM_EXTREME
+static void __init sparse_alloc_section_roots(void)
+{
+ unsigned long size, align;
+
+ size = sizeof(struct mem_section *) * NR_SECTION_ROOTS;
+ align = 1 << (INTERNODE_CACHE_SHIFT);
+ mem_section = memblock_alloc(size, align);
+ if (!mem_section)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, size, align);
+}
+
static noinline struct mem_section __ref *sparse_index_alloc(int nid)
{
struct mem_section *section = NULL;
@@ -107,6 +119,8 @@ static inline int sparse_index_init(unsigned long section_nr, int nid)
{
return 0;
}
+
+static inline void sparse_alloc_section_roots(void) {}
#endif

#ifdef CONFIG_SPARSEMEM_EXTREME
@@ -254,19 +268,6 @@ static void __init memory_present(int nid, unsigned long start, unsigned long en
{
unsigned long pfn;

-#ifdef CONFIG_SPARSEMEM_EXTREME
- if (unlikely(!mem_section)) {
- unsigned long size, align;
-
- size = sizeof(struct mem_section *) * NR_SECTION_ROOTS;
- align = 1 << (INTERNODE_CACHE_SHIFT);
- mem_section = memblock_alloc(size, align);
- if (!mem_section)
- panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
- __func__, size, align);
- }
-#endif
-
start &= PAGE_SECTION_MASK;
mminit_validate_memmodel_limits(&start, &end);
for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION) {
@@ -582,6 +583,8 @@ void __init sparse_init(void)
unsigned long pnum_end, pnum_begin, map_count = 1;
int nid_begin;

+ sparse_alloc_section_roots();
+
memblocks_present();

pnum_begin = first_present_section_nr();
--
2.25.1

2021-05-31 09:22:03

by Aisheng Dong

Subject: [PATCH V2 4/6] mm: rename the global section array to mem_sections

To distinguish the array from struct mem_section for better code
readability, and to align with the kernel documentation [1] quoted below,
rename the global section array from 'mem_section' to 'mem_sections'.

[1] Documentation/vm/memory-model.rst
"The `mem_section` objects are arranged in a two-dimensional array
called `mem_sections`."

Cc: Andrew Morton <[email protected]>
Cc: Dave Young <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Vivek Goyal <[email protected]>
Cc: [email protected]
Signed-off-by: Dong Aisheng <[email protected]>
---
v1->v2:
* no changes
---
include/linux/mmzone.h | 10 +++++-----
kernel/crash_core.c | 4 ++--
mm/sparse.c | 16 ++++++++--------
3 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index a6bfde85ddb0..0ed61f32d898 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1302,9 +1302,9 @@ struct mem_section {
#define SECTION_ROOT_MASK (SECTIONS_PER_ROOT - 1)

#ifdef CONFIG_SPARSEMEM_EXTREME
-extern struct mem_section **mem_section;
+extern struct mem_section **mem_sections;
#else
-extern struct mem_section mem_section[NR_SECTION_ROOTS][SECTIONS_PER_ROOT];
+extern struct mem_section mem_sections[NR_SECTION_ROOTS][SECTIONS_PER_ROOT];
#endif

static inline unsigned long *section_to_usemap(struct mem_section *ms)
@@ -1315,12 +1315,12 @@ static inline unsigned long *section_to_usemap(struct mem_section *ms)
static inline struct mem_section *__nr_to_section(unsigned long nr)
{
#ifdef CONFIG_SPARSEMEM_EXTREME
- if (!mem_section)
+ if (!mem_sections)
return NULL;
#endif
- if (!mem_section[SECTION_NR_TO_ROOT(nr)])
+ if (!mem_sections[SECTION_NR_TO_ROOT(nr)])
return NULL;
- return &mem_section[SECTION_NR_TO_ROOT(nr)][nr & SECTION_ROOT_MASK];
+ return &mem_sections[SECTION_NR_TO_ROOT(nr)][nr & SECTION_ROOT_MASK];
}
extern unsigned long __section_nr(struct mem_section *ms);
extern size_t mem_section_usage_size(void);
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index 29cc15398ee4..fb1180d81b5a 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -414,8 +414,8 @@ static int __init crash_save_vmcoreinfo_init(void)
VMCOREINFO_SYMBOL(contig_page_data);
#endif
#ifdef CONFIG_SPARSEMEM
- VMCOREINFO_SYMBOL_ARRAY(mem_section);
- VMCOREINFO_LENGTH(mem_section, NR_SECTION_ROOTS);
+ VMCOREINFO_SYMBOL_ARRAY(mem_sections);
+ VMCOREINFO_LENGTH(mem_sections, NR_SECTION_ROOTS);
VMCOREINFO_STRUCT_SIZE(mem_section);
VMCOREINFO_OFFSET(mem_section, section_mem_map);
VMCOREINFO_NUMBER(MAX_PHYSMEM_BITS);
diff --git a/mm/sparse.c b/mm/sparse.c
index d02ee6bb7cbc..6412010478f7 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -24,12 +24,12 @@
* 1) mem_section - memory sections, mem_map's for valid memory
*/
#ifdef CONFIG_SPARSEMEM_EXTREME
-struct mem_section **mem_section;
+struct mem_section **mem_sections;
#else
-struct mem_section mem_section[NR_SECTION_ROOTS][SECTIONS_PER_ROOT]
+struct mem_section mem_sections[NR_SECTION_ROOTS][SECTIONS_PER_ROOT]
____cacheline_internodealigned_in_smp;
#endif
-EXPORT_SYMBOL(mem_section);
+EXPORT_SYMBOL(mem_sections);

#ifdef NODE_NOT_IN_PAGE_FLAGS
/*
@@ -66,8 +66,8 @@ static void __init sparse_alloc_section_roots(void)

size = sizeof(struct mem_section *) * NR_SECTION_ROOTS;
align = 1 << (INTERNODE_CACHE_SHIFT);
- mem_section = memblock_alloc(size, align);
- if (!mem_section)
+ mem_sections = memblock_alloc(size, align);
+ if (!mem_sections)
panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
__func__, size, align);
}
@@ -103,14 +103,14 @@ static int __meminit sparse_index_init(unsigned long section_nr, int nid)
*
* The mem_hotplug_lock resolves the apparent race below.
*/
- if (mem_section[root])
+ if (mem_sections[root])
return 0;

section = sparse_index_alloc(nid);
if (!section)
return -ENOMEM;

- mem_section[root] = section;
+ mem_sections[root] = section;

return 0;
}
@@ -145,7 +145,7 @@ unsigned long __section_nr(struct mem_section *ms)
#else
unsigned long __section_nr(struct mem_section *ms)
{
- return (unsigned long)(ms - mem_section[0]);
+ return (unsigned long)(ms - mem_sections[0]);
}
#endif

--
2.25.1

2021-05-31 09:22:30

by Aisheng Dong

Subject: [PATCH V2 6/6] mm/sparse: remove one duplicated #ifdef CONFIG_SPARSEMEM_EXTREME

The two code blocks guarded by #ifdef CONFIG_SPARSEMEM_EXTREME sit
right next to each other, so there is no need for a second #ifdef
condition; merge them under a single one.

Signed-off-by: Dong Aisheng <[email protected]>
---
ChangeLog:
*new patch
---
mm/sparse.c | 18 ++++++++----------
1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/mm/sparse.c b/mm/sparse.c
index 6412010478f7..2905ee9fde10 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -114,16 +114,7 @@ static int __meminit sparse_index_init(unsigned long section_nr, int nid)

return 0;
}
-#else /* !SPARSEMEM_EXTREME */
-static inline int sparse_index_init(unsigned long section_nr, int nid)
-{
- return 0;
-}

-static inline void sparse_alloc_section_roots(void) {}
-#endif
-
-#ifdef CONFIG_SPARSEMEM_EXTREME
unsigned long __section_nr(struct mem_section *ms)
{
unsigned long root_nr;
@@ -142,11 +133,18 @@ unsigned long __section_nr(struct mem_section *ms)

return (root_nr * SECTIONS_PER_ROOT) + (ms - root);
}
-#else
+#else /* !SPARSEMEM_EXTREME */
+static inline int sparse_index_init(unsigned long section_nr, int nid)
+{
+ return 0;
+}
+
unsigned long __section_nr(struct mem_section *ms)
{
return (unsigned long)(ms - mem_sections[0]);
}
+
+static inline void sparse_alloc_section_roots(void) {}
#endif

/*
--
2.25.1

2021-05-31 09:23:26

by Aisheng Dong

Subject: [PATCH V2 5/6] mm/page_alloc: improve memmap_pages dbg msg

Make the debug message more accurate: the page count being reported is
the number of memmap pages.

Cc: Andrew Morton <[email protected]>
Cc: David Hildenbrand <[email protected]>
Signed-off-by: Dong Aisheng <[email protected]>
---
ChangeLog:
v1->v2:
* drop dma_reserve log changing
---
mm/page_alloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1786a24cdc5a..1bfbe178a9ed 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7347,7 +7347,7 @@ static void __init free_area_init_core(struct pglist_data *pgdat)
pr_debug(" %s zone: %lu pages used for memmap\n",
zone_names[j], memmap_pages);
} else
- pr_warn(" %s zone: %lu pages exceeds freesize %lu\n",
+ pr_warn(" %s zone: %lu memmap pages exceeds freesize %lu\n",
zone_names[j], memmap_pages, freesize);
}

--
2.25.1

2021-05-31 16:20:07

by Liam R. Howlett

Subject: Re: [PATCH V2 3/6] mm/sparse: move mem_sections allocation out of memory_present()

* Dong Aisheng <[email protected]> [210531 05:20]:
> The only path to call memory_present() is from memblocks_present().
> The struct mem_section **mem_section only needs to be initialized once,
> so no need put the initialization/allocation code in memory_present()
> which will be called multiple times for each section.
>
> After moving, the 'unlikely' condition statement becomes to be
> meaningless as it's only initialized one time, so dropped as well.
>
> Cc: Andrew Morton <[email protected]>
> Cc: Mike Rapoport <[email protected]>
> Signed-off-by: Dong Aisheng <[email protected]>
> ---
> ChangeLog:
> v1->v2:
> * split into a helper function and called directly from sparse_init
> ---
> mm/sparse.c | 29 ++++++++++++++++-------------
> 1 file changed, 16 insertions(+), 13 deletions(-)
>
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 408b737e168e..d02ee6bb7cbc 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -60,6 +60,18 @@ static inline void set_section_nid(unsigned long section_nr, int nid)
> #endif
>
> #ifdef CONFIG_SPARSEMEM_EXTREME
> +static void __init sparse_alloc_section_roots(void)
> +{
> + unsigned long size, align;
> +
> + size = sizeof(struct mem_section *) * NR_SECTION_ROOTS;
> + align = 1 << (INTERNODE_CACHE_SHIFT);
> + mem_section = memblock_alloc(size, align);
> + if (!mem_section)
> + panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
> + __func__, size, align);
> +}
> +
> static noinline struct mem_section __ref *sparse_index_alloc(int nid)
> {
> struct mem_section *section = NULL;
> @@ -107,6 +119,8 @@ static inline int sparse_index_init(unsigned long section_nr, int nid)
> {
> return 0;
> }
> +
> +static inline void sparse_alloc_section_roots(void) {}
> #endif
>
> #ifdef CONFIG_SPARSEMEM_EXTREME
> @@ -254,19 +268,6 @@ static void __init memory_present(int nid, unsigned long start, unsigned long en
> {
> unsigned long pfn;
>
> -#ifdef CONFIG_SPARSEMEM_EXTREME
> - if (unlikely(!mem_section)) {
> - unsigned long size, align;
> -
> - size = sizeof(struct mem_section *) * NR_SECTION_ROOTS;
> - align = 1 << (INTERNODE_CACHE_SHIFT);
> - mem_section = memblock_alloc(size, align);
> - if (!mem_section)
> - panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
> - __func__, size, align);
> - }
> -#endif
> -
> start &= PAGE_SECTION_MASK;
> mminit_validate_memmodel_limits(&start, &end);
> for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION) {
> @@ -582,6 +583,8 @@ void __init sparse_init(void)
> unsigned long pnum_end, pnum_begin, map_count = 1;
> int nid_begin;
>
> + sparse_alloc_section_roots();

nit: The newline below is unnecessary

> +
> memblocks_present();
>
> pnum_begin = first_present_section_nr();
> --
> 2.25.1
>
>

2021-05-31 16:58:04

by Liam R. Howlett

Subject: Re: [PATCH V2 2/6] mm/sparse: free section usage memory in case populate_section_memmap failed

* Dong Aisheng <[email protected]> [210531 05:20]:
> Free section usage memory in case populate_section_memmap failed.
> We use map_count to track the remain unused memory to be freed.
>
> Cc: Andrew Morton <[email protected]>
> Cc: Mike Rapoport <[email protected]>
> Signed-off-by: Dong Aisheng <[email protected]>
> ---
> ChangeLog:
> v1->v2:
> * using goto + lable according to Mike's suggestion
> ---
> mm/sparse.c | 11 +++++++----
> 1 file changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 7ac481353b6b..408b737e168e 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -533,7 +533,7 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin,
> mem_section_usage_size() * map_count);
> if (!usage) {
> pr_err("%s: node[%d] usemap allocation failed", __func__, nid);
> - goto failed;
> + goto failed1;

Please use better labels for goto statements. Perhaps usemap_failed?


> }
> sparse_buffer_init(map_count * section_map_size(), nid);
> for_each_present_section_nr(pnum_begin, pnum) {
> @@ -548,17 +548,20 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin,
> pr_err("%s: node[%d] memory map backing failed. Some memory will not be available.",
> __func__, nid);
> pnum_begin = pnum;
> - sparse_buffer_fini();
> - goto failed;
> + goto failed2;

Again, this goto label is not descriptive.

> }
> check_usemap_section_nr(nid, usage);
> sparse_init_one_section(__nr_to_section(pnum), pnum, map, usage,
> SECTION_IS_EARLY);
> usage = (void *) usage + mem_section_usage_size();
> + map_count--;
> }
> sparse_buffer_fini();
> return;
> -failed:
> +failed2:
> + sparse_buffer_fini();
> + memblock_free_early(__pa(usage), map_count * mem_section_usage_size());
> +failed1:
> /* We failed to allocate, mark all the following pnums as not present */
> for_each_present_section_nr(pnum_begin, pnum) {
> struct mem_section *ms;
> --
> 2.25.1
>
>

2021-06-01 02:40:31

by Dong Aisheng

Subject: Re: [PATCH V2 3/6] mm/sparse: move mem_sections allocation out of memory_present()

On Mon, May 31, 2021 at 10:39 PM Liam Howlett <[email protected]> wrote:
>
> * Dong Aisheng <[email protected]> [210531 05:20]:
> > The only path to call memory_present() is from memblocks_present().
> > The struct mem_section **mem_section only needs to be initialized once,
> > so no need put the initialization/allocation code in memory_present()
> > which will be called multiple times for each section.
> >
> > After moving, the 'unlikely' condition statement becomes to be
> > meaningless as it's only initialized one time, so dropped as well.
> >
> > Cc: Andrew Morton <[email protected]>
> > Cc: Mike Rapoport <[email protected]>
> > Signed-off-by: Dong Aisheng <[email protected]>
> > ---
> > ChangeLog:
> > v1->v2:
> > * split into a helper function and called directly from sparse_init
> > ---
> > mm/sparse.c | 29 ++++++++++++++++-------------
> > 1 file changed, 16 insertions(+), 13 deletions(-)
> >
> > diff --git a/mm/sparse.c b/mm/sparse.c
> > index 408b737e168e..d02ee6bb7cbc 100644
> > --- a/mm/sparse.c
> > +++ b/mm/sparse.c
> > @@ -60,6 +60,18 @@ static inline void set_section_nid(unsigned long section_nr, int nid)
> > #endif
> >
> > #ifdef CONFIG_SPARSEMEM_EXTREME
> > +static void __init sparse_alloc_section_roots(void)
> > +{
> > + unsigned long size, align;
> > +
> > + size = sizeof(struct mem_section *) * NR_SECTION_ROOTS;
> > + align = 1 << (INTERNODE_CACHE_SHIFT);
> > + mem_section = memblock_alloc(size, align);
> > + if (!mem_section)
> > + panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
> > + __func__, size, align);
> > +}
> > +
> > static noinline struct mem_section __ref *sparse_index_alloc(int nid)
> > {
> > struct mem_section *section = NULL;
> > @@ -107,6 +119,8 @@ static inline int sparse_index_init(unsigned long section_nr, int nid)
> > {
> > return 0;
> > }
> > +
> > +static inline void sparse_alloc_section_roots(void) {}
> > #endif
> >
> > #ifdef CONFIG_SPARSEMEM_EXTREME
> > @@ -254,19 +268,6 @@ static void __init memory_present(int nid, unsigned long start, unsigned long en
> > {
> > unsigned long pfn;
> >
> > -#ifdef CONFIG_SPARSEMEM_EXTREME
> > - if (unlikely(!mem_section)) {
> > - unsigned long size, align;
> > -
> > - size = sizeof(struct mem_section *) * NR_SECTION_ROOTS;
> > - align = 1 << (INTERNODE_CACHE_SHIFT);
> > - mem_section = memblock_alloc(size, align);
> > - if (!mem_section)
> > - panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
> > - __func__, size, align);
> > - }
> > -#endif
> > -
> > start &= PAGE_SECTION_MASK;
> > mminit_validate_memmodel_limits(&start, &end);
> > for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION) {
> > @@ -582,6 +583,8 @@ void __init sparse_init(void)
> > unsigned long pnum_end, pnum_begin, map_count = 1;
> > int nid_begin;
> >
> > + sparse_alloc_section_roots();
>
> nit: The newline below is unnecessary
>

Can drop it, thanks.

Regards
Aisheng

> > +
> > memblocks_present();
> >
> > pnum_begin = first_present_section_nr();
> > --
> > 2.25.1
> >
> >

2021-06-01 02:41:34

by Dong Aisheng

Subject: Re: [PATCH V2 2/6] mm/sparse: free section usage memory in case populate_section_memmap failed

On Mon, May 31, 2021 at 11:06 PM Liam Howlett <[email protected]> wrote:
>
> * Dong Aisheng <[email protected]> [210531 05:20]:
> > Free section usage memory in case populate_section_memmap failed.
> > We use map_count to track the remain unused memory to be freed.
> >
> > Cc: Andrew Morton <[email protected]>
> > Cc: Mike Rapoport <[email protected]>
> > Signed-off-by: Dong Aisheng <[email protected]>
> > ---
> > ChangeLog:
> > v1->v2:
> > * using goto + lable according to Mike's suggestion
> > ---
> > mm/sparse.c | 11 +++++++----
> > 1 file changed, 7 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/sparse.c b/mm/sparse.c
> > index 7ac481353b6b..408b737e168e 100644
> > --- a/mm/sparse.c
> > +++ b/mm/sparse.c
> > @@ -533,7 +533,7 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin,
> > mem_section_usage_size() * map_count);
> > if (!usage) {
> > pr_err("%s: node[%d] usemap allocation failed", __func__, nid);
> > - goto failed;
> > + goto failed1;
>
> Please use better labels for goto statements. Perhaps usemap_failed ?
>

Thanks, I will improve it.

Regards
Aisheng

>
> > }
> > sparse_buffer_init(map_count * section_map_size(), nid);
> > for_each_present_section_nr(pnum_begin, pnum) {
> > @@ -548,17 +548,20 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin,
> > pr_err("%s: node[%d] memory map backing failed. Some memory will not be available.",
> > __func__, nid);
> > pnum_begin = pnum;
> > - sparse_buffer_fini();
> > - goto failed;
> > + goto failed2;
>
> Again, this goto label is not descriptive.
>
> > }
> > check_usemap_section_nr(nid, usage);
> > sparse_init_one_section(__nr_to_section(pnum), pnum, map, usage,
> > SECTION_IS_EARLY);
> > usage = (void *) usage + mem_section_usage_size();
> > + map_count--;
> > }
> > sparse_buffer_fini();
> > return;
> > -failed:
> > +failed2:
> > + sparse_buffer_fini();
> > + memblock_free_early(__pa(usage), map_count * mem_section_usage_size());
> > +failed1:
> > /* We failed to allocate, mark all the following pnums as not present */
> > for_each_present_section_nr(pnum_begin, pnum) {
> > struct mem_section *ms;
> > --
> > 2.25.1
> >
> >

2021-06-01 08:23:02

by David Hildenbrand

Subject: Re: [PATCH V2 2/6] mm/sparse: free section usage memory in case populate_section_memmap failed

On 31.05.21 11:19, Dong Aisheng wrote:
> Free section usage memory in case populate_section_memmap failed.
> We use map_count to track the remain unused memory to be freed.
>
> Cc: Andrew Morton <[email protected]>
> Cc: Mike Rapoport <[email protected]>
> Signed-off-by: Dong Aisheng <[email protected]>
> ---
> ChangeLog:
> v1->v2:
> * using goto + lable according to Mike's suggestion
> ---
> mm/sparse.c | 11 +++++++----
> 1 file changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 7ac481353b6b..408b737e168e 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -533,7 +533,7 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin,
> mem_section_usage_size() * map_count);
> if (!usage) {
> pr_err("%s: node[%d] usemap allocation failed", __func__, nid);
> - goto failed;
> + goto failed1;
> }
> sparse_buffer_init(map_count * section_map_size(), nid);
> for_each_present_section_nr(pnum_begin, pnum) {
> @@ -548,17 +548,20 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin,
> pr_err("%s: node[%d] memory map backing failed. Some memory will not be available.",
> __func__, nid);
> pnum_begin = pnum;
> - sparse_buffer_fini();
> - goto failed;
> + goto failed2;
> }
> check_usemap_section_nr(nid, usage);
> sparse_init_one_section(__nr_to_section(pnum), pnum, map, usage,
> SECTION_IS_EARLY);
> usage = (void *) usage + mem_section_usage_size();
> + map_count--;
> }
> sparse_buffer_fini();
> return;
> -failed:
> +failed2:
> + sparse_buffer_fini();
> + memblock_free_early(__pa(usage), map_count * mem_section_usage_size());
> +failed1:
> /* We failed to allocate, mark all the following pnums as not present */
> for_each_present_section_nr(pnum_begin, pnum) {
> struct mem_section *ms;
>

I still don't think we need this. Did you even manage to trigger this to
test your patch?

--
Thanks,

David / dhildenb

2021-06-01 08:23:46

by David Hildenbrand

Subject: Re: [PATCH V2 3/6] mm/sparse: move mem_sections allocation out of memory_present()

On 31.05.21 11:19, Dong Aisheng wrote:
> The only path to call memory_present() is from memblocks_present().
> The struct mem_section **mem_section only needs to be initialized once,
> so no need put the initialization/allocation code in memory_present()
> which will be called multiple times for each section.
>
> After moving, the 'unlikely' condition statement becomes to be
> meaningless as it's only initialized one time, so dropped as well.
>
> Cc: Andrew Morton <[email protected]>
> Cc: Mike Rapoport <[email protected]>
> Signed-off-by: Dong Aisheng <[email protected]>
> ---
> ChangeLog:
> v1->v2:
> * split into a helper function and called directly from sparse_init
> ---
> mm/sparse.c | 29 ++++++++++++++++-------------
> 1 file changed, 16 insertions(+), 13 deletions(-)
>
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 408b737e168e..d02ee6bb7cbc 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -60,6 +60,18 @@ static inline void set_section_nid(unsigned long section_nr, int nid)
> #endif
>
> #ifdef CONFIG_SPARSEMEM_EXTREME
> +static void __init sparse_alloc_section_roots(void)
> +{
> + unsigned long size, align;
> +
> + size = sizeof(struct mem_section *) * NR_SECTION_ROOTS;
> + align = 1 << (INTERNODE_CACHE_SHIFT);
> + mem_section = memblock_alloc(size, align);
> + if (!mem_section)
> + panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
> + __func__, size, align);
> +}
> +
> static noinline struct mem_section __ref *sparse_index_alloc(int nid)
> {
> struct mem_section *section = NULL;
> @@ -107,6 +119,8 @@ static inline int sparse_index_init(unsigned long section_nr, int nid)
> {
> return 0;
> }
> +
> +static inline void sparse_alloc_section_roots(void) {}
> #endif
>
> #ifdef CONFIG_SPARSEMEM_EXTREME
> @@ -254,19 +268,6 @@ static void __init memory_present(int nid, unsigned long start, unsigned long en
> {
> unsigned long pfn;
>
> -#ifdef CONFIG_SPARSEMEM_EXTREME
> - if (unlikely(!mem_section)) {
> - unsigned long size, align;
> -
> - size = sizeof(struct mem_section *) * NR_SECTION_ROOTS;
> - align = 1 << (INTERNODE_CACHE_SHIFT);
> - mem_section = memblock_alloc(size, align);
> - if (!mem_section)
> - panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
> - __func__, size, align);
> - }
> -#endif
> -
> start &= PAGE_SECTION_MASK;
> mminit_validate_memmodel_limits(&start, &end);
> for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION) {
> @@ -582,6 +583,8 @@ void __init sparse_init(void)
> unsigned long pnum_end, pnum_begin, map_count = 1;
> int nid_begin;
>
> + sparse_alloc_section_roots();
> +
> memblocks_present();
>
> pnum_begin = first_present_section_nr();
>

Acked-by: David Hildenbrand <[email protected]>

--
Thanks,

David / dhildenb

2021-06-01 08:25:11

by David Hildenbrand

Subject: Re: [PATCH V2 5/6] mm/page_alloc: improve memmap_pages dbg msg

On 31.05.21 11:19, Dong Aisheng wrote:
> Make debug message more accurately.
>
> Cc: Andrew Morton <[email protected]>
> Cc: David Hildenbrand <[email protected]>
> Signed-off-by: Dong Aisheng <[email protected]>
> ---
> ChangeLog:
> v1->v2:
> * drop dma_reserve log changing
> ---
> mm/page_alloc.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 1786a24cdc5a..1bfbe178a9ed 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7347,7 +7347,7 @@ static void __init free_area_init_core(struct pglist_data *pgdat)
> pr_debug(" %s zone: %lu pages used for memmap\n",
> zone_names[j], memmap_pages);
> } else
> - pr_warn(" %s zone: %lu pages exceeds freesize %lu\n",
> + pr_warn(" %s zone: %lu memmap pages exceeds freesize %lu\n",
> zone_names[j], memmap_pages, freesize);

I guess it should be s/exceeds/exceed/.

Apart from that

Reviewed-by: David Hildenbrand <[email protected]>


--
Thanks,

David / dhildenb

2021-06-01 08:26:42

by David Hildenbrand

Subject: Re: [PATCH V2 4/6] mm: rename the global section array to mem_sections

On 31.05.21 11:19, Dong Aisheng wrote:
> In order to distinguish the struct mem_section for a better code
> readability and align with kernel doc [1] name below, change the
> global mem section name to 'mem_sections' from 'mem_section'.
>
> [1] Documentation/vm/memory-model.rst
> "The `mem_section` objects are arranged in a two-dimensional array
> called `mem_sections`."
>
> Cc: Andrew Morton <[email protected]>
> Cc: Dave Young <[email protected]>
> Cc: Baoquan He <[email protected]>
> Cc: Vivek Goyal <[email protected]>
> Cc: [email protected]
> Signed-off-by: Dong Aisheng <[email protected]>
> ---
> v1->v2:
> * no changes
> ---
> include/linux/mmzone.h | 10 +++++-----
> kernel/crash_core.c | 4 ++--
> mm/sparse.c | 16 ++++++++--------
> 3 files changed, 15 insertions(+), 15 deletions(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index a6bfde85ddb0..0ed61f32d898 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -1302,9 +1302,9 @@ struct mem_section {
> #define SECTION_ROOT_MASK (SECTIONS_PER_ROOT - 1)
>
> #ifdef CONFIG_SPARSEMEM_EXTREME
> -extern struct mem_section **mem_section;
> +extern struct mem_section **mem_sections;
> #else
> -extern struct mem_section mem_section[NR_SECTION_ROOTS][SECTIONS_PER_ROOT];
> +extern struct mem_section mem_sections[NR_SECTION_ROOTS][SECTIONS_PER_ROOT];
> #endif
>
> static inline unsigned long *section_to_usemap(struct mem_section *ms)
> @@ -1315,12 +1315,12 @@ static inline unsigned long *section_to_usemap(struct mem_section *ms)
> static inline struct mem_section *__nr_to_section(unsigned long nr)
> {
> #ifdef CONFIG_SPARSEMEM_EXTREME
> - if (!mem_section)
> + if (!mem_sections)
> return NULL;
> #endif
> - if (!mem_section[SECTION_NR_TO_ROOT(nr)])
> + if (!mem_sections[SECTION_NR_TO_ROOT(nr)])
> return NULL;
> - return &mem_section[SECTION_NR_TO_ROOT(nr)][nr & SECTION_ROOT_MASK];
> + return &mem_sections[SECTION_NR_TO_ROOT(nr)][nr & SECTION_ROOT_MASK];
> }
> extern unsigned long __section_nr(struct mem_section *ms);
> extern size_t mem_section_usage_size(void);
> diff --git a/kernel/crash_core.c b/kernel/crash_core.c
> index 29cc15398ee4..fb1180d81b5a 100644
> --- a/kernel/crash_core.c
> +++ b/kernel/crash_core.c
> @@ -414,8 +414,8 @@ static int __init crash_save_vmcoreinfo_init(void)
> VMCOREINFO_SYMBOL(contig_page_data);
> #endif
> #ifdef CONFIG_SPARSEMEM
> - VMCOREINFO_SYMBOL_ARRAY(mem_section);
> - VMCOREINFO_LENGTH(mem_section, NR_SECTION_ROOTS);
> + VMCOREINFO_SYMBOL_ARRAY(mem_sections);
> + VMCOREINFO_LENGTH(mem_sections, NR_SECTION_ROOTS);
> VMCOREINFO_STRUCT_SIZE(mem_section);
> VMCOREINFO_OFFSET(mem_section, section_mem_map);
> VMCOREINFO_NUMBER(MAX_PHYSMEM_BITS);
> diff --git a/mm/sparse.c b/mm/sparse.c
> index d02ee6bb7cbc..6412010478f7 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -24,12 +24,12 @@
> * 1) mem_section - memory sections, mem_map's for valid memory
> */
> #ifdef CONFIG_SPARSEMEM_EXTREME
> -struct mem_section **mem_section;
> +struct mem_section **mem_sections;
> #else
> -struct mem_section mem_section[NR_SECTION_ROOTS][SECTIONS_PER_ROOT]
> +struct mem_section mem_sections[NR_SECTION_ROOTS][SECTIONS_PER_ROOT]
> ____cacheline_internodealigned_in_smp;
> #endif
> -EXPORT_SYMBOL(mem_section);
> +EXPORT_SYMBOL(mem_sections);
>
> #ifdef NODE_NOT_IN_PAGE_FLAGS
> /*
> @@ -66,8 +66,8 @@ static void __init sparse_alloc_section_roots(void)
>
> size = sizeof(struct mem_section *) * NR_SECTION_ROOTS;
> align = 1 << (INTERNODE_CACHE_SHIFT);
> - mem_section = memblock_alloc(size, align);
> - if (!mem_section)
> + mem_sections = memblock_alloc(size, align);
> + if (!mem_sections)
> panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
> __func__, size, align);
> }
> @@ -103,14 +103,14 @@ static int __meminit sparse_index_init(unsigned long section_nr, int nid)
> *
> * The mem_hotplug_lock resolves the apparent race below.
> */
> - if (mem_section[root])
> + if (mem_sections[root])
> return 0;
>
> section = sparse_index_alloc(nid);
> if (!section)
> return -ENOMEM;
>
> - mem_section[root] = section;
> + mem_sections[root] = section;
>
> return 0;
> }
> @@ -145,7 +145,7 @@ unsigned long __section_nr(struct mem_section *ms)
> #else
> unsigned long __section_nr(struct mem_section *ms)
> {
> - return (unsigned long)(ms - mem_section[0]);
> + return (unsigned long)(ms - mem_sections[0]);
> }
> #endif
>
>

I repeat: unnecessary code churn IMHO.

--
Thanks,

David / dhildenb

2021-06-01 08:27:32

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH V2 6/6] mm/sparse: remove one duplicated #ifdef CONFIG_SPARSEMEM_EXTREME

On 31.05.21 11:19, Dong Aisheng wrote:
> The two blocks of code guarded by the #ifdef CONFIG_SPARSEMEM_EXTREME
> condition sit right next to each other; there is no need for another
> #ifdef condition.
>
> Signed-off-by: Dong Aisheng <[email protected]>
> ---
> ChangeLog:
> *new patch
> ---
> mm/sparse.c | 18 ++++++++----------
> 1 file changed, 8 insertions(+), 10 deletions(-)
>
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 6412010478f7..2905ee9fde10 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -114,16 +114,7 @@ static int __meminit sparse_index_init(unsigned long section_nr, int nid)
>
> return 0;
> }
> -#else /* !SPARSEMEM_EXTREME */
> -static inline int sparse_index_init(unsigned long section_nr, int nid)
> -{
> - return 0;
> -}
>
> -static inline void sparse_alloc_section_roots(void) {}
> -#endif
> -
> -#ifdef CONFIG_SPARSEMEM_EXTREME
> unsigned long __section_nr(struct mem_section *ms)
> {
> unsigned long root_nr;
> @@ -142,11 +133,18 @@ unsigned long __section_nr(struct mem_section *ms)
>
> return (root_nr * SECTIONS_PER_ROOT) + (ms - root);
> }
> -#else
> +#else /* !SPARSEMEM_EXTREME */
> +static inline int sparse_index_init(unsigned long section_nr, int nid)
> +{
> + return 0;
> +}
> +
> unsigned long __section_nr(struct mem_section *ms)
> {
> return (unsigned long)(ms - mem_sections[0]);
> }
> +
> +static inline void sparse_alloc_section_roots(void) {}
> #endif

Want to tag that one (endif) with /* SPARSEMEM_EXTREME */ as well while
at it?

Acked-by: David Hildenbrand <[email protected]>

--
Thanks,

David / dhildenb

2021-06-01 08:41:40

by Dong Aisheng

[permalink] [raw]
Subject: Re: [PATCH V2 4/6] mm: rename the global section array to mem_sections

On Tue, Jun 1, 2021 at 4:22 PM David Hildenbrand <[email protected]> wrote:
>
> On 31.05.21 11:19, Dong Aisheng wrote:
> > In order to distinguish it from struct mem_section, improve code
> > readability, and align with the naming in kernel doc [1] below, rename
> > the global mem section array from 'mem_section' to 'mem_sections'.
> >
> > [1] Documentation/vm/memory-model.rst
> > "The `mem_section` objects are arranged in a two-dimensional array
> > called `mem_sections`."
> >
> > Cc: Andrew Morton <[email protected]>
> > Cc: Dave Young <[email protected]>
> > Cc: Baoquan He <[email protected]>
> > Cc: Vivek Goyal <[email protected]>
> > Cc: [email protected]
> > Signed-off-by: Dong Aisheng <[email protected]>
> > ---
> > v1->v2:
> > * no changes
> > ---
> > include/linux/mmzone.h | 10 +++++-----
> > kernel/crash_core.c | 4 ++--
> > mm/sparse.c | 16 ++++++++--------
> > 3 files changed, 15 insertions(+), 15 deletions(-)
> >
> > diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> > index a6bfde85ddb0..0ed61f32d898 100644
> > --- a/include/linux/mmzone.h
> > +++ b/include/linux/mmzone.h
> > @@ -1302,9 +1302,9 @@ struct mem_section {
> > #define SECTION_ROOT_MASK (SECTIONS_PER_ROOT - 1)
> >
> > #ifdef CONFIG_SPARSEMEM_EXTREME
> > -extern struct mem_section **mem_section;
> > +extern struct mem_section **mem_sections;
> > #else
> > -extern struct mem_section mem_section[NR_SECTION_ROOTS][SECTIONS_PER_ROOT];
> > +extern struct mem_section mem_sections[NR_SECTION_ROOTS][SECTIONS_PER_ROOT];
> > #endif
> >
> > static inline unsigned long *section_to_usemap(struct mem_section *ms)
> > @@ -1315,12 +1315,12 @@ static inline unsigned long *section_to_usemap(struct mem_section *ms)
> > static inline struct mem_section *__nr_to_section(unsigned long nr)
> > {
> > #ifdef CONFIG_SPARSEMEM_EXTREME
> > - if (!mem_section)
> > + if (!mem_sections)
> > return NULL;
> > #endif
> > - if (!mem_section[SECTION_NR_TO_ROOT(nr)])
> > + if (!mem_sections[SECTION_NR_TO_ROOT(nr)])
> > return NULL;
> > - return &mem_section[SECTION_NR_TO_ROOT(nr)][nr & SECTION_ROOT_MASK];
> > + return &mem_sections[SECTION_NR_TO_ROOT(nr)][nr & SECTION_ROOT_MASK];
> > }
> > extern unsigned long __section_nr(struct mem_section *ms);
> > extern size_t mem_section_usage_size(void);
> > diff --git a/kernel/crash_core.c b/kernel/crash_core.c
> > index 29cc15398ee4..fb1180d81b5a 100644
> > --- a/kernel/crash_core.c
> > +++ b/kernel/crash_core.c
> > @@ -414,8 +414,8 @@ static int __init crash_save_vmcoreinfo_init(void)
> > VMCOREINFO_SYMBOL(contig_page_data);
> > #endif
> > #ifdef CONFIG_SPARSEMEM
> > - VMCOREINFO_SYMBOL_ARRAY(mem_section);
> > - VMCOREINFO_LENGTH(mem_section, NR_SECTION_ROOTS);
> > + VMCOREINFO_SYMBOL_ARRAY(mem_sections);
> > + VMCOREINFO_LENGTH(mem_sections, NR_SECTION_ROOTS);
> > VMCOREINFO_STRUCT_SIZE(mem_section);
> > VMCOREINFO_OFFSET(mem_section, section_mem_map);
> > VMCOREINFO_NUMBER(MAX_PHYSMEM_BITS);
> > diff --git a/mm/sparse.c b/mm/sparse.c
> > index d02ee6bb7cbc..6412010478f7 100644
> > --- a/mm/sparse.c
> > +++ b/mm/sparse.c
> > @@ -24,12 +24,12 @@
> > * 1) mem_section - memory sections, mem_map's for valid memory
> > */
> > #ifdef CONFIG_SPARSEMEM_EXTREME
> > -struct mem_section **mem_section;
> > +struct mem_section **mem_sections;
> > #else
> > -struct mem_section mem_section[NR_SECTION_ROOTS][SECTIONS_PER_ROOT]
> > +struct mem_section mem_sections[NR_SECTION_ROOTS][SECTIONS_PER_ROOT]
> > ____cacheline_internodealigned_in_smp;
> > #endif
> > -EXPORT_SYMBOL(mem_section);
> > +EXPORT_SYMBOL(mem_sections);
> >
> > #ifdef NODE_NOT_IN_PAGE_FLAGS
> > /*
> > @@ -66,8 +66,8 @@ static void __init sparse_alloc_section_roots(void)
> >
> > size = sizeof(struct mem_section *) * NR_SECTION_ROOTS;
> > align = 1 << (INTERNODE_CACHE_SHIFT);
> > - mem_section = memblock_alloc(size, align);
> > - if (!mem_section)
> > + mem_sections = memblock_alloc(size, align);
> > + if (!mem_sections)
> > panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
> > __func__, size, align);
> > }
> > @@ -103,14 +103,14 @@ static int __meminit sparse_index_init(unsigned long section_nr, int nid)
> > *
> > * The mem_hotplug_lock resolves the apparent race below.
> > */
> > - if (mem_section[root])
> > + if (mem_sections[root])
> > return 0;
> >
> > section = sparse_index_alloc(nid);
> > if (!section)
> > return -ENOMEM;
> >
> > - mem_section[root] = section;
> > + mem_sections[root] = section;
> >
> > return 0;
> > }
> > @@ -145,7 +145,7 @@ unsigned long __section_nr(struct mem_section *ms)
> > #else
> > unsigned long __section_nr(struct mem_section *ms)
> > {
> > - return (unsigned long)(ms - mem_section[0]);
> > + return (unsigned long)(ms - mem_sections[0]);
> > }
> > #endif
> >
> >
>
> I repeat: unnecessary code churn IMHO.

Hi David,

Thanks, I explained the reason in my last reply.
Andrew has already picked this patch up into the -mm tree.

Regards
Aisheng

>
> --
> Thanks,
>
> David / dhildenb
>

2021-06-01 08:42:43

by Dong Aisheng

[permalink] [raw]
Subject: Re: [PATCH V2 6/6] mm/sparse: remove one duplicated #ifdef CONFIG_SPARSEMEM_EXTREME

On Tue, Jun 1, 2021 at 4:26 PM David Hildenbrand <[email protected]> wrote:
>
> On 31.05.21 11:19, Dong Aisheng wrote:
> > The two blocks of code guarded by the #ifdef CONFIG_SPARSEMEM_EXTREME
> > condition sit right next to each other; there is no need for another
> > #ifdef condition.
> >
> > Signed-off-by: Dong Aisheng <[email protected]>
> > ---
> > ChangeLog:
> > *new patch
> > ---
> > mm/sparse.c | 18 ++++++++----------
> > 1 file changed, 8 insertions(+), 10 deletions(-)
> >
> > diff --git a/mm/sparse.c b/mm/sparse.c
> > index 6412010478f7..2905ee9fde10 100644
> > --- a/mm/sparse.c
> > +++ b/mm/sparse.c
> > @@ -114,16 +114,7 @@ static int __meminit sparse_index_init(unsigned long section_nr, int nid)
> >
> > return 0;
> > }
> > -#else /* !SPARSEMEM_EXTREME */
> > -static inline int sparse_index_init(unsigned long section_nr, int nid)
> > -{
> > - return 0;
> > -}
> >
> > -static inline void sparse_alloc_section_roots(void) {}
> > -#endif
> > -
> > -#ifdef CONFIG_SPARSEMEM_EXTREME
> > unsigned long __section_nr(struct mem_section *ms)
> > {
> > unsigned long root_nr;
> > @@ -142,11 +133,18 @@ unsigned long __section_nr(struct mem_section *ms)
> >
> > return (root_nr * SECTIONS_PER_ROOT) + (ms - root);
> > }
> > -#else
> > +#else /* !SPARSEMEM_EXTREME */
> > +static inline int sparse_index_init(unsigned long section_nr, int nid)
> > +{
> > + return 0;
> > +}
> > +
> > unsigned long __section_nr(struct mem_section *ms)
> > {
> > return (unsigned long)(ms - mem_sections[0]);
> > }
> > +
> > +static inline void sparse_alloc_section_roots(void) {}
> > #endif
>
> Want to tag that one (endif) with /* SPARSEMEM_EXTREME */ as well while
> at it?

Thanks, I can add it in v3 later with your tag.

Regards
Aisheng

>
> Acked-by: David Hildenbrand <[email protected]>
>
> --
> Thanks,
>
> David / dhildenb
>

2021-06-01 08:43:01

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH V2 4/6] mm: rename the global section array to mem_sections

On 01.06.21 10:37, Dong Aisheng wrote:
> On Tue, Jun 1, 2021 at 4:22 PM David Hildenbrand <[email protected]> wrote:
>>
>> On 31.05.21 11:19, Dong Aisheng wrote:
>>> In order to distinguish it from struct mem_section, improve code
>>> readability, and align with the naming in kernel doc [1] below, rename
>>> the global mem section array from 'mem_section' to 'mem_sections'.
>>>
>>> [1] Documentation/vm/memory-model.rst
>>> "The `mem_section` objects are arranged in a two-dimensional array
>>> called `mem_sections`."
>>>
>>> Cc: Andrew Morton <[email protected]>
>>> Cc: Dave Young <[email protected]>
>>> Cc: Baoquan He <[email protected]>
>>> Cc: Vivek Goyal <[email protected]>
>>> Cc: [email protected]
>>> Signed-off-by: Dong Aisheng <[email protected]>
>>> ---
>>> v1->v2:
>>> * no changes
>>> ---
>>> include/linux/mmzone.h | 10 +++++-----
>>> kernel/crash_core.c | 4 ++--
>>> mm/sparse.c | 16 ++++++++--------
>>> 3 files changed, 15 insertions(+), 15 deletions(-)
>>>
>>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
>>> index a6bfde85ddb0..0ed61f32d898 100644
>>> --- a/include/linux/mmzone.h
>>> +++ b/include/linux/mmzone.h
>>> @@ -1302,9 +1302,9 @@ struct mem_section {
>>> #define SECTION_ROOT_MASK (SECTIONS_PER_ROOT - 1)
>>>
>>> #ifdef CONFIG_SPARSEMEM_EXTREME
>>> -extern struct mem_section **mem_section;
>>> +extern struct mem_section **mem_sections;
>>> #else
>>> -extern struct mem_section mem_section[NR_SECTION_ROOTS][SECTIONS_PER_ROOT];
>>> +extern struct mem_section mem_sections[NR_SECTION_ROOTS][SECTIONS_PER_ROOT];
>>> #endif
>>>
>>> static inline unsigned long *section_to_usemap(struct mem_section *ms)
>>> @@ -1315,12 +1315,12 @@ static inline unsigned long *section_to_usemap(struct mem_section *ms)
>>> static inline struct mem_section *__nr_to_section(unsigned long nr)
>>> {
>>> #ifdef CONFIG_SPARSEMEM_EXTREME
>>> - if (!mem_section)
>>> + if (!mem_sections)
>>> return NULL;
>>> #endif
>>> - if (!mem_section[SECTION_NR_TO_ROOT(nr)])
>>> + if (!mem_sections[SECTION_NR_TO_ROOT(nr)])
>>> return NULL;
>>> - return &mem_section[SECTION_NR_TO_ROOT(nr)][nr & SECTION_ROOT_MASK];
>>> + return &mem_sections[SECTION_NR_TO_ROOT(nr)][nr & SECTION_ROOT_MASK];
>>> }
>>> extern unsigned long __section_nr(struct mem_section *ms);
>>> extern size_t mem_section_usage_size(void);
>>> diff --git a/kernel/crash_core.c b/kernel/crash_core.c
>>> index 29cc15398ee4..fb1180d81b5a 100644
>>> --- a/kernel/crash_core.c
>>> +++ b/kernel/crash_core.c
>>> @@ -414,8 +414,8 @@ static int __init crash_save_vmcoreinfo_init(void)
>>> VMCOREINFO_SYMBOL(contig_page_data);
>>> #endif
>>> #ifdef CONFIG_SPARSEMEM
>>> - VMCOREINFO_SYMBOL_ARRAY(mem_section);
>>> - VMCOREINFO_LENGTH(mem_section, NR_SECTION_ROOTS);
>>> + VMCOREINFO_SYMBOL_ARRAY(mem_sections);
>>> + VMCOREINFO_LENGTH(mem_sections, NR_SECTION_ROOTS);
>>> VMCOREINFO_STRUCT_SIZE(mem_section);
>>> VMCOREINFO_OFFSET(mem_section, section_mem_map);
>>> VMCOREINFO_NUMBER(MAX_PHYSMEM_BITS);
>>> diff --git a/mm/sparse.c b/mm/sparse.c
>>> index d02ee6bb7cbc..6412010478f7 100644
>>> --- a/mm/sparse.c
>>> +++ b/mm/sparse.c
>>> @@ -24,12 +24,12 @@
>>> * 1) mem_section - memory sections, mem_map's for valid memory
>>> */
>>> #ifdef CONFIG_SPARSEMEM_EXTREME
>>> -struct mem_section **mem_section;
>>> +struct mem_section **mem_sections;
>>> #else
>>> -struct mem_section mem_section[NR_SECTION_ROOTS][SECTIONS_PER_ROOT]
>>> +struct mem_section mem_sections[NR_SECTION_ROOTS][SECTIONS_PER_ROOT]
>>> ____cacheline_internodealigned_in_smp;
>>> #endif
>>> -EXPORT_SYMBOL(mem_section);
>>> +EXPORT_SYMBOL(mem_sections);
>>>
>>> #ifdef NODE_NOT_IN_PAGE_FLAGS
>>> /*
>>> @@ -66,8 +66,8 @@ static void __init sparse_alloc_section_roots(void)
>>>
>>> size = sizeof(struct mem_section *) * NR_SECTION_ROOTS;
>>> align = 1 << (INTERNODE_CACHE_SHIFT);
>>> - mem_section = memblock_alloc(size, align);
>>> - if (!mem_section)
>>> + mem_sections = memblock_alloc(size, align);
>>> + if (!mem_sections)
>>> panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
>>> __func__, size, align);
>>> }
>>> @@ -103,14 +103,14 @@ static int __meminit sparse_index_init(unsigned long section_nr, int nid)
>>> *
>>> * The mem_hotplug_lock resolves the apparent race below.
>>> */
>>> - if (mem_section[root])
>>> + if (mem_sections[root])
>>> return 0;
>>>
>>> section = sparse_index_alloc(nid);
>>> if (!section)
>>> return -ENOMEM;
>>>
>>> - mem_section[root] = section;
>>> + mem_sections[root] = section;
>>>
>>> return 0;
>>> }
>>> @@ -145,7 +145,7 @@ unsigned long __section_nr(struct mem_section *ms)
>>> #else
>>> unsigned long __section_nr(struct mem_section *ms)
>>> {
>>> - return (unsigned long)(ms - mem_section[0]);
>>> + return (unsigned long)(ms - mem_sections[0]);
>>> }
>>> #endif
>>>
>>>
>>
>> I repeat: unnecessary code churn IMHO.
>
> Hi David,
>
> Thanks, I explained the reason in my last reply.
> Andrew has already picked this patch up into the -mm tree.

Just because it's in Andrew's tree doesn't mean it will end up upstream. ;)

Anyhow, no really strong opinion, it's simply unnecessary code churn
that makes bisecting harder without real value IMHO.

--
Thanks,

David / dhildenb

2021-06-01 08:50:28

by Dong Aisheng

[permalink] [raw]
Subject: Re: [PATCH V2 4/6] mm: rename the global section array to mem_sections

On Tue, Jun 1, 2021 at 4:40 PM David Hildenbrand <[email protected]> wrote:
>
> On 01.06.21 10:37, Dong Aisheng wrote:
> > On Tue, Jun 1, 2021 at 4:22 PM David Hildenbrand <[email protected]> wrote:
> >>
> >> On 31.05.21 11:19, Dong Aisheng wrote:
> >>> In order to distinguish it from struct mem_section, improve code
> >>> readability, and align with the naming in kernel doc [1] below, rename
> >>> the global mem section array from 'mem_section' to 'mem_sections'.
> >>>
> >>> [1] Documentation/vm/memory-model.rst
> >>> "The `mem_section` objects are arranged in a two-dimensional array
> >>> called `mem_sections`."
> >>>
> >>> Cc: Andrew Morton <[email protected]>
> >>> Cc: Dave Young <[email protected]>
> >>> Cc: Baoquan He <[email protected]>
> >>> Cc: Vivek Goyal <[email protected]>
> >>> Cc: [email protected]
> >>> Signed-off-by: Dong Aisheng <[email protected]>
> >>> ---
> >>> v1->v2:
> >>> * no changes
> >>> ---
> >>> include/linux/mmzone.h | 10 +++++-----
> >>> kernel/crash_core.c | 4 ++--
> >>> mm/sparse.c | 16 ++++++++--------
> >>> 3 files changed, 15 insertions(+), 15 deletions(-)
> >>>
> >>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> >>> index a6bfde85ddb0..0ed61f32d898 100644
> >>> --- a/include/linux/mmzone.h
> >>> +++ b/include/linux/mmzone.h
> >>> @@ -1302,9 +1302,9 @@ struct mem_section {
> >>> #define SECTION_ROOT_MASK (SECTIONS_PER_ROOT - 1)
> >>>
> >>> #ifdef CONFIG_SPARSEMEM_EXTREME
> >>> -extern struct mem_section **mem_section;
> >>> +extern struct mem_section **mem_sections;
> >>> #else
> >>> -extern struct mem_section mem_section[NR_SECTION_ROOTS][SECTIONS_PER_ROOT];
> >>> +extern struct mem_section mem_sections[NR_SECTION_ROOTS][SECTIONS_PER_ROOT];
> >>> #endif
> >>>
> >>> static inline unsigned long *section_to_usemap(struct mem_section *ms)
> >>> @@ -1315,12 +1315,12 @@ static inline unsigned long *section_to_usemap(struct mem_section *ms)
> >>> static inline struct mem_section *__nr_to_section(unsigned long nr)
> >>> {
> >>> #ifdef CONFIG_SPARSEMEM_EXTREME
> >>> - if (!mem_section)
> >>> + if (!mem_sections)
> >>> return NULL;
> >>> #endif
> >>> - if (!mem_section[SECTION_NR_TO_ROOT(nr)])
> >>> + if (!mem_sections[SECTION_NR_TO_ROOT(nr)])
> >>> return NULL;
> >>> - return &mem_section[SECTION_NR_TO_ROOT(nr)][nr & SECTION_ROOT_MASK];
> >>> + return &mem_sections[SECTION_NR_TO_ROOT(nr)][nr & SECTION_ROOT_MASK];
> >>> }
> >>> extern unsigned long __section_nr(struct mem_section *ms);
> >>> extern size_t mem_section_usage_size(void);
> >>> diff --git a/kernel/crash_core.c b/kernel/crash_core.c
> >>> index 29cc15398ee4..fb1180d81b5a 100644
> >>> --- a/kernel/crash_core.c
> >>> +++ b/kernel/crash_core.c
> >>> @@ -414,8 +414,8 @@ static int __init crash_save_vmcoreinfo_init(void)
> >>> VMCOREINFO_SYMBOL(contig_page_data);
> >>> #endif
> >>> #ifdef CONFIG_SPARSEMEM
> >>> - VMCOREINFO_SYMBOL_ARRAY(mem_section);
> >>> - VMCOREINFO_LENGTH(mem_section, NR_SECTION_ROOTS);
> >>> + VMCOREINFO_SYMBOL_ARRAY(mem_sections);
> >>> + VMCOREINFO_LENGTH(mem_sections, NR_SECTION_ROOTS);
> >>> VMCOREINFO_STRUCT_SIZE(mem_section);
> >>> VMCOREINFO_OFFSET(mem_section, section_mem_map);
> >>> VMCOREINFO_NUMBER(MAX_PHYSMEM_BITS);
> >>> diff --git a/mm/sparse.c b/mm/sparse.c
> >>> index d02ee6bb7cbc..6412010478f7 100644
> >>> --- a/mm/sparse.c
> >>> +++ b/mm/sparse.c
> >>> @@ -24,12 +24,12 @@
> >>> * 1) mem_section - memory sections, mem_map's for valid memory
> >>> */
> >>> #ifdef CONFIG_SPARSEMEM_EXTREME
> >>> -struct mem_section **mem_section;
> >>> +struct mem_section **mem_sections;
> >>> #else
> >>> -struct mem_section mem_section[NR_SECTION_ROOTS][SECTIONS_PER_ROOT]
> >>> +struct mem_section mem_sections[NR_SECTION_ROOTS][SECTIONS_PER_ROOT]
> >>> ____cacheline_internodealigned_in_smp;
> >>> #endif
> >>> -EXPORT_SYMBOL(mem_section);
> >>> +EXPORT_SYMBOL(mem_sections);
> >>>
> >>> #ifdef NODE_NOT_IN_PAGE_FLAGS
> >>> /*
> >>> @@ -66,8 +66,8 @@ static void __init sparse_alloc_section_roots(void)
> >>>
> >>> size = sizeof(struct mem_section *) * NR_SECTION_ROOTS;
> >>> align = 1 << (INTERNODE_CACHE_SHIFT);
> >>> - mem_section = memblock_alloc(size, align);
> >>> - if (!mem_section)
> >>> + mem_sections = memblock_alloc(size, align);
> >>> + if (!mem_sections)
> >>> panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
> >>> __func__, size, align);
> >>> }
> >>> @@ -103,14 +103,14 @@ static int __meminit sparse_index_init(unsigned long section_nr, int nid)
> >>> *
> >>> * The mem_hotplug_lock resolves the apparent race below.
> >>> */
> >>> - if (mem_section[root])
> >>> + if (mem_sections[root])
> >>> return 0;
> >>>
> >>> section = sparse_index_alloc(nid);
> >>> if (!section)
> >>> return -ENOMEM;
> >>>
> >>> - mem_section[root] = section;
> >>> + mem_sections[root] = section;
> >>>
> >>> return 0;
> >>> }
> >>> @@ -145,7 +145,7 @@ unsigned long __section_nr(struct mem_section *ms)
> >>> #else
> >>> unsigned long __section_nr(struct mem_section *ms)
> >>> {
> >>> - return (unsigned long)(ms - mem_section[0]);
> >>> + return (unsigned long)(ms - mem_sections[0]);
> >>> }
> >>> #endif
> >>>
> >>>
> >>
> >> I repeat: unnecessary code churn IMHO.
> >
> > Hi David,
> >
> > Thanks, I explained the reason in my last reply.
> > Andrew has already picked this patch up into the -mm tree.
>
> Just because it's in Andrew's tree doesn't mean it will end up upstream. ;)
>
> Anyhow, no really strong opinion, it's simply unnecessary code churn
> that makes bisecting harder without real value IMHO.

In my practice, it helps improve code-reading efficiency with
scope and vim hotkeys.
Before the change, I really felt the mixed definition caused trouble
when reading the code efficiently.
Anyway, that's my personal experience; others may have different opinions.
Thanks for the feedback.

Regards
Aisheng

>
> --
> Thanks,
>
> David / dhildenb
>

2021-06-01 23:53:46

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH V2 4/6] mm: rename the global section array to mem_sections

On Tue, 1 Jun 2021 10:40:09 +0200 David Hildenbrand <[email protected]> wrote:

> > Thanks, I explained the reason in my last reply.
> > Andrew has already picked this patch up into the -mm tree.
>
> Just because it's in Andrew's tree doesn't mean it will end up upstream. ;)
>
> Anyhow, no really strong opinion, it's simply unnecessary code churn
> that makes bisecting harder without real value IMHO.

I think it's a good change - mem_sections refers to multiple instances
of a mem_section. Churn is a pain, but that's the price we pay for more
readable code. And for having screwed it up originally ;)

2021-06-02 02:28:11

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH V2 4/6] mm: rename the global section array to mem_sections

On Wed, 2 Jun 2021 01:11:07 +0000 HAGIO KAZUHITO(萩尾 一仁) <[email protected]> wrote:

> -----Original Message-----
> > On Tue, 1 Jun 2021 10:40:09 +0200 David Hildenbrand <[email protected]> wrote:
> >
> > > > Thanks, I explained the reason in my last reply.
> > > > Andrew has already picked this patch up into the -mm tree.
> > >
> > > Just because it's in Andrew's tree doesn't mean it will end up upstream. ;)
> > >
> > > Anyhow, no really strong opinion, it's simply unnecessary code churn
> > > that makes bisecting harder without real value IMHO.
> >
> > I think it's a good change - mem_sections refers to multiple instances
> > of a mem_section. Churn is a pain, but that's the price we pay for more
> > readable code. And for having screwed it up originally ;)
>
> From a makedumpfile/crash-utility viewpoint, I don't deny kernel improvement,
> and the change probably will not be hard for the tools to support, but I'd like
> you to remember that tool users will need to update them for the change.
>
> The situation where we need to update the tools for new kernels is usual, but
> there are not many cases where they cannot even start a session, and this change
> will cause it. Personally, I wonder whether the change is worth forcing users
> to update them.

Didn't know that. I guess I'll drop it then.

We could do an assembly-level alias, I assume.

2021-06-02 04:04:10

by Baoquan He

[permalink] [raw]
Subject: Re: [PATCH V2 4/6] mm: rename the global section array to mem_sections

On 06/02/21 at 01:11am, HAGIO KAZUHITO(萩尾 一仁) wrote:
> -----Original Message-----
> > On Tue, 1 Jun 2021 10:40:09 +0200 David Hildenbrand <[email protected]> wrote:
> >
> > > > Thanks, I explained the reason in my last reply.
> > > > Andrew has already picked this patch up into the -mm tree.
> > >
> > > Just because it's in Andrew's tree doesn't mean it will end up upstream. ;)
> > >
> > > Anyhow, no really strong opinion, it's simply unnecessary code churn
> > > that makes bisecting harder without real value IMHO.
> >
> > I think it's a good change - mem_sections refers to multiple instances
> > of a mem_section. Churn is a pain, but that's the price we pay for more
> > readable code. And for having screwed it up originally ;)
>
> From a makedumpfile/crash-utility viewpoint, I don't deny kernel improvement,
> and the change probably will not be hard for the tools to support, but I'd like
> you to remember that tool users will need to update them for the change.

As a VIM user, I can understand Aisheng's feelings about the mem_section
variable having the same symbol name as its type. Meanwhile, it does
require makedumpfile/crash to be changed accordingly.

Maybe we can carry it when an essential change is needed around it in
both the kernel and makedumpfile/crash.

>
> The situation where we need to update the tools for new kernels is usual, but
> there are not many cases where they cannot even start a session, and this change

By the way, Kazu, about the case of not being able to start a session,
could you be more specific or rephrase? I may not be getting it clearly. Thanks.

> will cause it. Personally, I wonder whether the change is worth forcing users
> to update them.
>
> Thanks,
> Kazu
>

Subject: RE: [PATCH V2 4/6] mm: rename the global section array to mem_sections

-----Original Message-----
> On Tue, 1 Jun 2021 10:40:09 +0200 David Hildenbrand <[email protected]> wrote:
>
> > > Thanks, I explained the reason in my last reply.
> > > Andrew has already picked this patch up into the -mm tree.
> >
> > Just because it's in Andrew's tree doesn't mean it will end up upstream. ;)
> >
> > Anyhow, no really strong opinion, it's simply unnecessary code churn
> > that makes bisecting harder without real value IMHO.
>
> I think it's a good change - mem_sections refers to multiple instances
> of a mem_section. Churn is a pain, but that's the price we pay for more
> readable code. And for having screwed it up originally ;)

From a makedumpfile/crash-utility viewpoint, I don't deny kernel improvement,
and the change probably will not be hard for the tools to support, but I'd like
you to remember that tool users will need to update them for the change.

The situation where we need to update the tools for new kernels is usual, but
there are not many cases where they cannot even start a session, and this change
will cause it. Personally, I wonder whether the change is worth forcing users
to update them.

Thanks,
Kazu

2021-06-02 06:42:45

by Baoquan He

[permalink] [raw]
Subject: Re: [PATCH V2 4/6] mm: rename the global section array to mem_sections

On 06/02/21 at 05:02am, HAGIO KAZUHITO(萩尾 一仁) wrote:
> -----Original Message-----
> > On 06/02/21 at 01:11am, HAGIO KAZUHITO(萩尾 一仁) wrote:
> > > -----Original Message-----
> > > > On Tue, 1 Jun 2021 10:40:09 +0200 David Hildenbrand <[email protected]> wrote:
> > > >
> > > > > > Thanks, I explained the reason in my last reply.
> > > > > > Andrew has already picked this patch up into the -mm tree.
> > > > >
> > > > > Just because it's in Andrew's tree doesn't mean it will end up upstream. ;)
> > > > >
> > > > > Anyhow, no really strong opinion, it's simply unnecessary code churn
> > > > > that makes bisecting harder without real value IMHO.
> > > >
> > > > I think it's a good change - mem_sections refers to multiple instances
> > > > of a mem_section. Churn is a pain, but that's the price we pay for more
> > > > readable code. And for having screwed it up originally ;)
> > >
> > > From a makedumpfile/crash-utility viewpoint, I don't deny kernel improvement,
> > > and the change probably will not be hard for the tools to support, but I'd like
> > > you to remember that tool users will need to update them for the change.
> >
> > As a VIM user, I can understand Aisheng's feelings about the mem_section
> > variable having the same symbol name as its type. Meanwhile, it does
> > require makedumpfile/crash to be changed accordingly.
> >
> > Maybe we can carry it when an essential change is needed around it in
> > both the kernel and makedumpfile/crash.
>
> Yes, that is a possible option.
>
> >
> > >
> > > The situation where we need to update the tools for new kernels is usual, but
> > > there are not many cases where they cannot even start a session, and this change
> >
> > By the way, Kazu, about the case of not being able to start a session,
> > could you be more specific or rephrase? I may not be getting it clearly. Thanks.
>
> As for the current crash, the "mem_section" symbol is used to determine
> which memory model is used.
>
> if (kernel_symbol_exists("mem_section"))
> vt->flags |= SPARSEMEM;
> else if (kernel_symbol_exists("mem_map")) {
> get_symbol_data("mem_map", sizeof(char *), &vt->mem_map);
> vt->flags |= FLATMEM;
> } else
> vt->flags |= DISCONTIGMEM;
>
> So without an update, crash will assume that the memory model is DISCONTIGMEM,
> fail during vm_init(), and be unable to start a session. Here is an imitation
> of the situation:
>
> - if (kernel_symbol_exists("mem_section"))
> + if (kernel_symbol_exists("mem_sectionX"))
>
> # crash
> ...
> crash: invalid structure member offset: pglist_data_node_mem_map
> FILE: memory.c LINE: 16420 FUNCTION: dump_memory_nodes()
>
> [/root/bin/crash] error trace: 465304 => 4ac2bf => 4aae19 => 57f4d7
>
> 57f4d7: OFFSET_verify+164
> 4aae19: dump_memory_nodes+5321
> 4ac2bf: vm_init+4031
> 465304: main_loop+392
>
> #
>
> Every time a kernel is released, there are some changes with which crash can
> start up but cannot run a specific command; however, a change like this one,
> with which crash cannot start up at all, does not occur often.

Ah, I see. You mean this patch will cause a startup failure of
crash/makedumpfile at an early stage of the application, which is a more
severe situation than the others. Then we may need to defer accepting the
patch to a more suitable time. Thanks for the explanation.

>
> Also as for makedumpfile, the "SYMBOL(mem_section)" vmcore entry is used
> to determine the memory model, so it will fail with the following error
> without an update.
>
> # ./makedumpfile --mem-usage /proc/kcore
> get_mem_map: Can't distinguish the memory type.
>
> makedumpfile Failed.
>
> Thanks,
> Kazu

Subject: RE: [PATCH V2 4/6] mm: rename the global section array to mem_sections

-----Original Message-----
> On 06/02/21 at 01:11am, HAGIO KAZUHITO(萩尾 一仁) wrote:
> > -----Original Message-----
> > > On Tue, 1 Jun 2021 10:40:09 +0200 David Hildenbrand <[email protected]> wrote:
> > >
> > > > > Thanks, I explained the reason in my last reply.
> > > > > Andrew has already picked this patch up into the -mm tree.
> > > >
> > > > Just because it's in Andrew's tree doesn't mean it will end up upstream. ;)
> > > >
> > > > Anyhow, no really strong opinion, it's simply unnecessary code churn
> > > > that makes bisecting harder without real value IMHO.
> > >
> > > I think it's a good change - mem_sections refers to multiple instances
> > > of a mem_section. Churn is a pain, but that's the price we pay for more
> > > readable code. And for having screwed it up originally ;)
> >
> > From a makedumpfile/crash-utility viewpoint, I don't object to kernel
> > improvements, and the change will probably not be hard for them to support,
> > but I'd like you to remember that tool users will need to update them for the change.
>
> As a VIM user, I can understand Aisheng's feeling about the mem_section
> variable, which has the same symbol name as its type. Meanwhile, it does
> force makedumpfile/crash to be changed accordingly.
>
> Maybe we can carry it when any essential change is needed in both kernel
> and makedumpfile/crash around it.

Yes, that is a possible option.

>
> >
> > The situation where we need to update the tools for new kernels is usual, but
> > there are not many cases where they cannot even start a session, and this change
>
> By the way, Kazu, about the case where a session cannot start, could you be
> more specific or rephrase? I may not be getting it clearly. Thanks.

As for the current crash, the "mem_section" symbol is used to determine
which memory model is used.

	if (kernel_symbol_exists("mem_section"))
		vt->flags |= SPARSEMEM;
	else if (kernel_symbol_exists("mem_map")) {
		get_symbol_data("mem_map", sizeof(char *), &vt->mem_map);
		vt->flags |= FLATMEM;
	} else
		vt->flags |= DISCONTIGMEM;

So without an update, crash will assume that the memory model is DISCONTIGMEM,
fail during vm_init(), and be unable to start a session. The following
simulates that situation:

- if (kernel_symbol_exists("mem_section"))
+ if (kernel_symbol_exists("mem_sectionX"))

# crash
...
crash: invalid structure member offset: pglist_data_node_mem_map
FILE: memory.c LINE: 16420 FUNCTION: dump_memory_nodes()

[/root/bin/crash] error trace: 465304 => 4ac2bf => 4aae19 => 57f4d7

57f4d7: OFFSET_verify+164
4aae19: dump_memory_nodes+5321
4ac2bf: vm_init+4031
465304: main_loop+392

#

Every time a kernel is released, there are some changes that crash can
start up with but that break a specific crash command; however, a change
like this one, where crash cannot start up at all, does not occur often.

Also as for makedumpfile, the "SYMBOL(mem_section)" vmcore entry is used
to determine the memory model, so it will fail with the following error
without an update.

# ./makedumpfile --mem-usage /proc/kcore
get_mem_map: Can't distinguish the memory type.

makedumpfile Failed.

Thanks,
Kazu