2022-10-27 13:07:31

by Huacai Chen

Subject: [PATCH V14 0/4] mm/sparse-vmemmap: Generalise helpers and enable for LoongArch

This series enables sparse-vmemmap for LoongArch. LoongArch cannot use
the generic helpers directly because MIPS and LoongArch need to call
pgd_init()/pud_init()/pmd_init() when populating page tables. So we
adjust the prototypes of p?d_init() so that the generic helpers can
call them, then enable sparse-vmemmap with the generic helpers, and
finally generalise vmemmap_populate_hugepages() for ARM64, X86 and
LoongArch.
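
For context, the heart of the generalisation is a pair of per-arch PMD
hooks with weak generic defaults, so the shared walker can defer the
huge-page details to each architecture. A sketch of the hook contract
(signatures as introduced in patch 3):

/* Install a huge-page mapping for one vmemmap PMD (arch-specific). */
void __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
			       unsigned long addr, unsigned long next);

/* Return non-zero if the PMD already maps a huge page (after calling
 * vmemmap_verify()); the weak default returns 0, so architectures that
 * keep their own populate paths are unaffected. */
int __meminit vmemmap_check_pmd(pmd_t *pmd, int node,
				unsigned long addr, unsigned long next);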

V1 -> V2:
Split ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP to a separate patch.

V2 -> V3:
1, Change the Signed-off-by order of author and committer;
2, Update commit message about the build error on LoongArch.

V3 -> V4:
Change pmd to pmdp for ARM64 for consistency.

V4 -> V5:
Add a detailed comment for no-fallback in the altmap case.

V5 -> V6:
1, Fix build error for NIOS2;
2, Fix build error for allnoconfig;
3, Update comment for no-fallback in the altmap case.

V6 -> V7:
Fix build warnings of "no previous prototype".

V7 -> V8:
Fix build error for MIPS pud_init().

V8 -> V9:
Remove redundant #include to avoid build error with latest upstream
kernel.

V9 -> V10:
Fix build error due to VMEMMAP changes in 6.0-rc1.

V10 -> V11:
Adjust context due to ARM64 changes in 6.1-rc1.

V11 -> V12:
1, Fix build error for !SPARSEMEM;
2, Simplify pagetable_init() for MIPS32.

V12 -> V13:
1, Add Acked-by and Reviewed-by tags;
2, Update commit message for the 4th patch.

V13 -> V14:
Remove the static_key.h inclusion in the 4th patch.

Huacai Chen and Feiyang Chen (4):
MIPS&LoongArch&NIOS2: Adjust prototypes of p?d_init().
LoongArch: Add sparse memory vmemmap support.
mm/sparse-vmemmap: Generalise vmemmap_populate_hugepages().
LoongArch: Enable ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP.

Signed-off-by: Huacai Chen <[email protected]>
Signed-off-by: Feiyang Chen <[email protected]>
---
arch/arm64/mm/mmu.c | 53 ++++++--------------
arch/loongarch/Kconfig | 2 +
arch/loongarch/include/asm/pgalloc.h | 13 +----
arch/loongarch/include/asm/pgtable.h | 13 +++--
arch/loongarch/include/asm/sparsemem.h | 8 +++
arch/loongarch/kernel/numa.c | 4 +-
arch/loongarch/mm/init.c | 44 +++++++++++++++-
arch/loongarch/mm/pgtable.c | 23 +++++----
arch/mips/include/asm/pgalloc.h | 8 +--
arch/mips/include/asm/pgtable-64.h | 8 +--
arch/mips/kvm/mmu.c | 3 +-
arch/mips/mm/pgtable-32.c | 10 ++--
arch/mips/mm/pgtable-64.c | 18 ++++---
arch/mips/mm/pgtable.c | 2 +-
arch/x86/mm/init_64.c | 92 ++++++++++++----------------------
include/linux/mm.h | 8 +++
include/linux/page-flags.h | 1 +
mm/sparse-vmemmap.c | 64 +++++++++++++++++++++++
18 files changed, 222 insertions(+), 152 deletions(-)
--
2.27.0



2022-10-27 13:07:40

by Huacai Chen

Subject: [PATCH V14 3/4] mm/sparse-vmemmap: Generalise vmemmap_populate_hugepages()

From: Feiyang Chen <[email protected]>

Generalise vmemmap_populate_hugepages() so that ARM64, X86 and LoongArch
can share its implementation.
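
With the walker moved into mm/sparse-vmemmap.c, an architecture's
vmemmap_populate() reduces to a thin wrapper around the generic
function plus the two PMD hooks. A minimal sketch of the common shape
(details such as base-page fallbacks vary per architecture, as the
diffs below show):

int __meminit vmemmap_populate(unsigned long start, unsigned long end,
			       int node, struct vmem_altmap *altmap)
{
	/* The generic walker calls back into this architecture's
	 * vmemmap_set_pmd()/vmemmap_check_pmd() hooks. */
	return vmemmap_populate_hugepages(start, end, node, altmap);
}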

Acked-by: Will Deacon <[email protected]>
Acked-by: Dave Hansen <[email protected]>
Signed-off-by: Feiyang Chen <[email protected]>
Signed-off-by: Huacai Chen <[email protected]>
---
arch/arm64/mm/mmu.c | 55 +++++++-----------------
arch/loongarch/mm/init.c | 59 +++++++-------------------
arch/x86/mm/init_64.c | 92 ++++++++++++++--------------------------
include/linux/mm.h | 6 +++
mm/sparse-vmemmap.c | 63 +++++++++++++++++++++++++++
5 files changed, 132 insertions(+), 143 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 9a7c38965154..e3822cb0118b 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1184,53 +1184,28 @@ static void free_empty_tables(unsigned long addr, unsigned long end,
}
#endif

+void __meminit vmemmap_set_pmd(pmd_t *pmdp, void *p, int node,
+ unsigned long addr, unsigned long next)
+{
+ pmd_set_huge(pmdp, __pa(p), __pgprot(PROT_SECT_NORMAL));
+}
+
+int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node,
+ unsigned long addr, unsigned long next)
+{
+ vmemmap_verify((pte_t *)pmdp, node, addr, next);
+ return 1;
+}
+
int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
struct vmem_altmap *altmap)
{
- unsigned long addr = start;
- unsigned long next;
- pgd_t *pgdp;
- p4d_t *p4dp;
- pud_t *pudp;
- pmd_t *pmdp;
-
WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));

if (!ARM64_KERNEL_USES_PMD_MAPS)
return vmemmap_populate_basepages(start, end, node, altmap);
-
- do {
- next = pmd_addr_end(addr, end);
-
- pgdp = vmemmap_pgd_populate(addr, node);
- if (!pgdp)
- return -ENOMEM;
-
- p4dp = vmemmap_p4d_populate(pgdp, addr, node);
- if (!p4dp)
- return -ENOMEM;
-
- pudp = vmemmap_pud_populate(p4dp, addr, node);
- if (!pudp)
- return -ENOMEM;
-
- pmdp = pmd_offset(pudp, addr);
- if (pmd_none(READ_ONCE(*pmdp))) {
- void *p = NULL;
-
- p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
- if (!p) {
- if (vmemmap_populate_basepages(addr, next, node, altmap))
- return -ENOMEM;
- continue;
- }
-
- pmd_set_huge(pmdp, __pa(p), __pgprot(PROT_SECT_NORMAL));
- } else
- vmemmap_verify((pte_t *)pmdp, node, addr, next);
- } while (addr = next, addr != end);
-
- return 0;
+ else
+ return vmemmap_populate_hugepages(start, end, node, altmap);
}

#ifdef CONFIG_MEMORY_HOTPLUG
diff --git a/arch/loongarch/mm/init.c b/arch/loongarch/mm/init.c
index 451d93667bcc..e018aed34586 100644
--- a/arch/loongarch/mm/init.c
+++ b/arch/loongarch/mm/init.c
@@ -153,52 +153,25 @@ EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
#endif

#ifdef CONFIG_SPARSEMEM_VMEMMAP
-static int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,
- int node, struct vmem_altmap *altmap)
+void __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
+ unsigned long addr, unsigned long next)
{
- unsigned long addr = start;
- unsigned long next;
- pgd_t *pgd;
- p4d_t *p4d;
- pud_t *pud;
- pmd_t *pmd;
+ pmd_t entry;

- for (addr = start; addr < end; addr = next) {
- next = pmd_addr_end(addr, end);
-
- pgd = vmemmap_pgd_populate(addr, node);
- if (!pgd)
- return -ENOMEM;
- p4d = vmemmap_p4d_populate(pgd, addr, node);
- if (!p4d)
- return -ENOMEM;
- pud = vmemmap_pud_populate(p4d, addr, node);
- if (!pud)
- return -ENOMEM;
-
- pmd = pmd_offset(pud, addr);
- if (pmd_none(*pmd)) {
- void *p = NULL;
-
- p = vmemmap_alloc_block_buf(PMD_SIZE, node, NULL);
- if (p) {
- pmd_t entry;
-
- entry = pfn_pmd(virt_to_pfn(p), PAGE_KERNEL);
- pmd_val(entry) |= _PAGE_HUGE | _PAGE_HGLOBAL;
- set_pmd_at(&init_mm, addr, pmd, entry);
-
- continue;
- }
- } else if (pmd_val(*pmd) & _PAGE_HUGE) {
- vmemmap_verify((pte_t *)pmd, node, addr, next);
- continue;
- }
- if (vmemmap_populate_basepages(addr, next, node, NULL))
- return -ENOMEM;
- }
+ entry = pfn_pmd(virt_to_pfn(p), PAGE_KERNEL);
+ pmd_val(entry) |= _PAGE_HUGE | _PAGE_HGLOBAL;
+ set_pmd_at(&init_mm, addr, pmd, entry);
+}
+
+int __meminit vmemmap_check_pmd(pmd_t *pmd, int node,
+ unsigned long addr, unsigned long next)
+{
+ int huge = pmd_val(*pmd) & _PAGE_HUGE;
+
+ if (huge)
+ vmemmap_verify((pte_t *)pmd, node, addr, next);

- return 0;
+ return huge;
}

int __meminit vmemmap_populate(unsigned long start, unsigned long end,
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 3f040c6e5d13..58febaf569f6 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1533,72 +1533,44 @@ static long __meminitdata addr_start, addr_end;
static void __meminitdata *p_start, *p_end;
static int __meminitdata node_start;

-static int __meminit vmemmap_populate_hugepages(unsigned long start,
- unsigned long end, int node, struct vmem_altmap *altmap)
+void __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
+ unsigned long addr, unsigned long next)
{
- unsigned long addr;
- unsigned long next;
- pgd_t *pgd;
- p4d_t *p4d;
- pud_t *pud;
- pmd_t *pmd;
-
- for (addr = start; addr < end; addr = next) {
- next = pmd_addr_end(addr, end);
-
- pgd = vmemmap_pgd_populate(addr, node);
- if (!pgd)
- return -ENOMEM;
-
- p4d = vmemmap_p4d_populate(pgd, addr, node);
- if (!p4d)
- return -ENOMEM;
-
- pud = vmemmap_pud_populate(p4d, addr, node);
- if (!pud)
- return -ENOMEM;
-
- pmd = pmd_offset(pud, addr);
- if (pmd_none(*pmd)) {
- void *p;
-
- p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
- if (p) {
- pte_t entry;
-
- entry = pfn_pte(__pa(p) >> PAGE_SHIFT,
- PAGE_KERNEL_LARGE);
- set_pmd(pmd, __pmd(pte_val(entry)));
+ pte_t entry;
+
+ entry = pfn_pte(__pa(p) >> PAGE_SHIFT,
+ PAGE_KERNEL_LARGE);
+ set_pmd(pmd, __pmd(pte_val(entry)));
+
+ /* check to see if we have contiguous blocks */
+ if (p_end != p || node_start != node) {
+ if (p_start)
+ pr_debug(" [%lx-%lx] PMD -> [%p-%p] on node %d\n",
+ addr_start, addr_end-1, p_start, p_end-1, node_start);
+ addr_start = addr;
+ node_start = node;
+ p_start = p;
+ }

- /* check to see if we have contiguous blocks */
- if (p_end != p || node_start != node) {
- if (p_start)
- pr_debug(" [%lx-%lx] PMD -> [%p-%p] on node %d\n",
- addr_start, addr_end-1, p_start, p_end-1, node_start);
- addr_start = addr;
- node_start = node;
- p_start = p;
- }
+ addr_end = addr + PMD_SIZE;
+ p_end = p + PMD_SIZE;

- addr_end = addr + PMD_SIZE;
- p_end = p + PMD_SIZE;
+ if (!IS_ALIGNED(addr, PMD_SIZE) ||
+ !IS_ALIGNED(next, PMD_SIZE))
+ vmemmap_use_new_sub_pmd(addr, next);
+}

- if (!IS_ALIGNED(addr, PMD_SIZE) ||
- !IS_ALIGNED(next, PMD_SIZE))
- vmemmap_use_new_sub_pmd(addr, next);
+int __meminit vmemmap_check_pmd(pmd_t *pmd, int node,
+ unsigned long addr, unsigned long next)
+{
+ int large = pmd_large(*pmd);

- continue;
- } else if (altmap)
- return -ENOMEM; /* no fallback */
- } else if (pmd_large(*pmd)) {
- vmemmap_verify((pte_t *)pmd, node, addr, next);
- vmemmap_use_sub_pmd(addr, next);
- continue;
- }
- if (vmemmap_populate_basepages(addr, next, node, NULL))
- return -ENOMEM;
+ if (pmd_large(*pmd)) {
+ vmemmap_verify((pte_t *)pmd, node, addr, next);
+ vmemmap_use_sub_pmd(addr, next);
}
- return 0;
+
+ return large;
}

int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 45eb04c43c6b..2eb88b63a0e0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3246,8 +3246,14 @@ struct vmem_altmap;
void *vmemmap_alloc_block_buf(unsigned long size, int node,
struct vmem_altmap *altmap);
void vmemmap_verify(pte_t *, int, unsigned long, unsigned long);
+void vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
+ unsigned long addr, unsigned long next);
+int vmemmap_check_pmd(pmd_t *pmd, int node,
+ unsigned long addr, unsigned long next);
int vmemmap_populate_basepages(unsigned long start, unsigned long end,
int node, struct vmem_altmap *altmap);
+int vmemmap_populate_hugepages(unsigned long start, unsigned long end,
+ int node, struct vmem_altmap *altmap);
int vmemmap_populate(unsigned long start, unsigned long end, int node,
struct vmem_altmap *altmap);
void vmemmap_populate_print_last(void);
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 797b30e9050c..c5398a5960d0 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -295,6 +295,69 @@ int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
return vmemmap_populate_range(start, end, node, altmap, NULL);
}

+void __weak __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
+ unsigned long addr, unsigned long next)
+{
+}
+
+int __weak __meminit vmemmap_check_pmd(pmd_t *pmd, int node,
+ unsigned long addr, unsigned long next)
+{
+ return 0;
+}
+
+int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,
+ int node, struct vmem_altmap *altmap)
+{
+ unsigned long addr;
+ unsigned long next;
+ pgd_t *pgd;
+ p4d_t *p4d;
+ pud_t *pud;
+ pmd_t *pmd;
+
+ for (addr = start; addr < end; addr = next) {
+ next = pmd_addr_end(addr, end);
+
+ pgd = vmemmap_pgd_populate(addr, node);
+ if (!pgd)
+ return -ENOMEM;
+
+ p4d = vmemmap_p4d_populate(pgd, addr, node);
+ if (!p4d)
+ return -ENOMEM;
+
+ pud = vmemmap_pud_populate(p4d, addr, node);
+ if (!pud)
+ return -ENOMEM;
+
+ pmd = pmd_offset(pud, addr);
+ if (pmd_none(READ_ONCE(*pmd))) {
+ void *p;
+
+ p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
+ if (p) {
+ vmemmap_set_pmd(pmd, p, node, addr, next);
+ continue;
+ } else if (altmap) {
+ /*
+ * No fallback: In any case we care about, the
+ * altmap should be reasonably sized and aligned
+ * such that vmemmap_alloc_block_buf() will always
+ * succeed. For consistency with the PTE case,
+ * return an error here as failure could indicate
+ * a configuration issue with the size of the altmap.
+ */
+ return -ENOMEM;
+ }
+ } else if (vmemmap_check_pmd(pmd, node, addr, next))
+ continue;
+ if (vmemmap_populate_basepages(addr, next, node, altmap))
+ return -ENOMEM;
+ }
+ return 0;
+}
+
/*
* For compound pages bigger than section size (e.g. x86 1G compound
* pages with 2M subsection size) fill the rest of sections as tail
--
2.31.1


2022-10-27 13:09:55

by Huacai Chen

Subject: [PATCH V14 2/4] LoongArch: Add sparse memory vmemmap support

From: Feiyang Chen <[email protected]>

Add sparse memory vmemmap support for LoongArch. SPARSEMEM_VMEMMAP
uses a virtually mapped memmap to optimise pfn_to_page and page_to_pfn
operations. This is the most efficient option when sufficient kernel
resources are available.
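
For reference, with CONFIG_SPARSEMEM_VMEMMAP the generic memory model
reduces both conversions to simple pointer arithmetic against the
virtually contiguous memmap; a sketch of the generic definitions (cf.
include/asm-generic/memory_model.h):

/* memmap is virtually contiguous, so no section lookup is needed. */
#define __pfn_to_page(pfn)	(vmemmap + (pfn))
#define __page_to_pfn(page)	(unsigned long)((page) - vmemmap)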

Signed-off-by: Min Zhou <[email protected]>
Signed-off-by: Feiyang Chen <[email protected]>
Signed-off-by: Huacai Chen <[email protected]>
---
arch/loongarch/Kconfig | 1 +
arch/loongarch/include/asm/pgtable.h | 7 ++-
arch/loongarch/include/asm/sparsemem.h | 8 +++
arch/loongarch/mm/init.c | 72 ++++++++++++++++++++++++--
include/linux/mm.h | 2 +
mm/sparse-vmemmap.c | 10 ++++
6 files changed, 96 insertions(+), 4 deletions(-)

diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index 903096bd87f8..6f7fa0c0ca08 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -487,6 +487,7 @@ config ARCH_FLATMEM_ENABLE

config ARCH_SPARSEMEM_ENABLE
def_bool y
+ select SPARSEMEM_VMEMMAP_ENABLE
help
Say Y to support efficient handling of sparse physical memory,
for architectures which are either NUMA (Non-Uniform Memory Access)
diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
index 72eb8bc352fb..2e635dca2420 100644
--- a/arch/loongarch/include/asm/pgtable.h
+++ b/arch/loongarch/include/asm/pgtable.h
@@ -11,6 +11,7 @@

#include <linux/compiler.h>
#include <asm/addrspace.h>
+#include <asm/page.h>
#include <asm/pgtable-bits.h>

#if CONFIG_PGTABLE_LEVELS == 2
@@ -59,6 +60,7 @@
#include <linux/mm_types.h>
#include <linux/mmzone.h>
#include <asm/fixmap.h>
+#include <asm/sparsemem.h>

struct mm_struct;
struct vm_area_struct;
@@ -86,7 +88,10 @@ extern unsigned long zero_page_mask;
#define VMALLOC_START MODULES_END
#define VMALLOC_END \
(vm_map_base + \
- min(PTRS_PER_PGD * PTRS_PER_PUD * PTRS_PER_PMD * PTRS_PER_PTE * PAGE_SIZE, (1UL << cpu_vabits)) - PMD_SIZE)
+ min(PTRS_PER_PGD * PTRS_PER_PUD * PTRS_PER_PMD * PTRS_PER_PTE * PAGE_SIZE, (1UL << cpu_vabits)) - PMD_SIZE - VMEMMAP_SIZE)
+
+#define vmemmap ((struct page *)((VMALLOC_END + PMD_SIZE) & PMD_MASK))
+#define VMEMMAP_END ((unsigned long)vmemmap + VMEMMAP_SIZE - 1)

#define pte_ERROR(e) \
pr_err("%s:%d: bad pte %016lx.\n", __FILE__, __LINE__, pte_val(e))
diff --git a/arch/loongarch/include/asm/sparsemem.h b/arch/loongarch/include/asm/sparsemem.h
index 3d18cdf1b069..8d4af6aff8a8 100644
--- a/arch/loongarch/include/asm/sparsemem.h
+++ b/arch/loongarch/include/asm/sparsemem.h
@@ -11,8 +11,16 @@
#define SECTION_SIZE_BITS 29 /* 2^29 = Largest Huge Page Size */
#define MAX_PHYSMEM_BITS 48

+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+#define VMEMMAP_SIZE (sizeof(struct page) * (1UL << (cpu_pabits + 1 - PAGE_SHIFT)))
+#endif
+
#endif /* CONFIG_SPARSEMEM */

+#ifndef VMEMMAP_SIZE
+#define VMEMMAP_SIZE 0 /* 1, For FLATMEM; 2, For SPARSEMEM without VMEMMAP. */
+#endif
+
#ifdef CONFIG_MEMORY_HOTPLUG
int memory_add_physaddr_to_nid(u64 addr);
#define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
diff --git a/arch/loongarch/mm/init.c b/arch/loongarch/mm/init.c
index 080061793c85..451d93667bcc 100644
--- a/arch/loongarch/mm/init.c
+++ b/arch/loongarch/mm/init.c
@@ -22,7 +22,7 @@
#include <linux/pfn.h>
#include <linux/hardirq.h>
#include <linux/gfp.h>
-#include <linux/initrd.h>
+#include <linux/hugetlb.h>
#include <linux/mmzone.h>

#include <asm/asm-offsets.h>
@@ -152,6 +152,72 @@ EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
#endif
#endif

+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+static int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,
+ int node, struct vmem_altmap *altmap)
+{
+ unsigned long addr = start;
+ unsigned long next;
+ pgd_t *pgd;
+ p4d_t *p4d;
+ pud_t *pud;
+ pmd_t *pmd;
+
+ for (addr = start; addr < end; addr = next) {
+ next = pmd_addr_end(addr, end);
+
+ pgd = vmemmap_pgd_populate(addr, node);
+ if (!pgd)
+ return -ENOMEM;
+ p4d = vmemmap_p4d_populate(pgd, addr, node);
+ if (!p4d)
+ return -ENOMEM;
+ pud = vmemmap_pud_populate(p4d, addr, node);
+ if (!pud)
+ return -ENOMEM;
+
+ pmd = pmd_offset(pud, addr);
+ if (pmd_none(*pmd)) {
+ void *p = NULL;
+
+ p = vmemmap_alloc_block_buf(PMD_SIZE, node, NULL);
+ if (p) {
+ pmd_t entry;
+
+ entry = pfn_pmd(virt_to_pfn(p), PAGE_KERNEL);
+ pmd_val(entry) |= _PAGE_HUGE | _PAGE_HGLOBAL;
+ set_pmd_at(&init_mm, addr, pmd, entry);
+
+ continue;
+ }
+ } else if (pmd_val(*pmd) & _PAGE_HUGE) {
+ vmemmap_verify((pte_t *)pmd, node, addr, next);
+ continue;
+ }
+ if (vmemmap_populate_basepages(addr, next, node, NULL))
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+int __meminit vmemmap_populate(unsigned long start, unsigned long end,
+ int node, struct vmem_altmap *altmap)
+{
+#if CONFIG_PGTABLE_LEVELS == 2
+ return vmemmap_populate_basepages(start, end, node, NULL);
+#else
+ return vmemmap_populate_hugepages(start, end, node, NULL);
+#endif
+}
+
+#ifdef CONFIG_MEMORY_HOTPLUG
+void vmemmap_free(unsigned long start, unsigned long end, struct vmem_altmap *altmap)
+{
+}
+#endif
+#endif
+
static pte_t *fixmap_pte(unsigned long addr)
{
pgd_t *pgd;
@@ -168,7 +234,7 @@ static pte_t *fixmap_pte(unsigned long addr)
new = memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
pgd_populate(&init_mm, pgd, new);
#ifndef __PAGETABLE_PUD_FOLDED
- pud_init((unsigned long)new, (unsigned long)invalid_pmd_table);
+ pud_init(new);
#endif
}

@@ -179,7 +245,7 @@ static pte_t *fixmap_pte(unsigned long addr)
new = memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
pud_populate(&init_mm, pud, new);
#ifndef __PAGETABLE_PMD_FOLDED
- pmd_init((unsigned long)new, (unsigned long)invalid_pte_table);
+ pmd_init(new);
#endif
}

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8bbcccbc5565..45eb04c43c6b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3233,6 +3233,8 @@ void *sparse_buffer_alloc(unsigned long size);
struct page * __populate_section_memmap(unsigned long pfn,
unsigned long nr_pages, int nid, struct vmem_altmap *altmap,
struct dev_pagemap *pgmap);
+void pmd_init(void *addr);
+void pud_init(void *addr);
pgd_t *vmemmap_pgd_populate(unsigned long addr, int node);
p4d_t *vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node);
pud_t *vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node);
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 46ae542118c0..797b30e9050c 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -196,6 +196,10 @@ pmd_t * __meminit vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node)
return pmd;
}

+void __weak __meminit pmd_init(void *addr)
+{
+}
+
pud_t * __meminit vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node)
{
pud_t *pud = pud_offset(p4d, addr);
@@ -203,11 +207,16 @@ pud_t * __meminit vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node)
void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
if (!p)
return NULL;
+ pmd_init(p);
pud_populate(&init_mm, pud, p);
}
return pud;
}

+void __weak __meminit pud_init(void *addr)
+{
+}
+
p4d_t * __meminit vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node)
{
p4d_t *p4d = p4d_offset(pgd, addr);
@@ -215,6 +224,7 @@ p4d_t * __meminit vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node)
void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
if (!p)
return NULL;
+ pud_init(p);
p4d_populate(&init_mm, p4d, p);
}
return p4d;
--
2.31.1


2022-10-27 14:40:22

by Huacai Chen

Subject: [PATCH V14 4/4] LoongArch: Enable ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP

From: Feiyang Chen <[email protected]>

The feature of minimizing the overhead of struct page associated with
each HugeTLB page is implemented on x86_64. However, the infrastructure
of this feature is already there, so just selecting
ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP is enough to enable this
feature for LoongArch.

Reviewed-by: Philippe Mathieu-Daudé <[email protected]>
Signed-off-by: Feiyang Chen <[email protected]>
Signed-off-by: Huacai Chen <[email protected]>
---
arch/loongarch/Kconfig | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index 6f7fa0c0ca08..0a6ef613124c 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -52,6 +52,7 @@ config LOONGARCH
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_USE_QUEUED_SPINLOCKS
select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
+ select ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
select ARCH_WANT_LD_ORPHAN_WARN
select ARCH_WANTS_NO_INSTR
select BUILDTIME_TABLE_SORT
--
2.31.1


2022-10-27 14:42:14

by Huacai Chen

Subject: [PATCH V14 1/4] MIPS&LoongArch&NIOS2: Adjust prototypes of p?d_init()

From: Feiyang Chen <[email protected]>

We are preparing to add sparse vmemmap support to LoongArch. MIPS and
LoongArch need to call pgd_init()/pud_init()/pmd_init() when populating
page tables, so adjust their prototypes so that the generic helpers can
call them.

NIOS2 declares pmd_init() but never uses it, so just remove the
declaration to avoid build errors.
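
The prototype change in a nutshell, as a before/after sketch distilled
from the diff below:

/* Before: callers had to pass the invalid-table address explicitly. */
extern void pmd_init(unsigned long page, unsigned long pagetable);

/* After: the implementation supplies invalid_pte_table itself, so a
 * generic helper such as vmemmap_pud_populate() can simply call
 * pmd_init(p) without knowing about arch-specific tables. */
extern void pmd_init(void *addr);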

Reviewed-by: Jiaxun Yang <[email protected]>
Reviewed-by: Philippe Mathieu-Daudé <[email protected]>
Signed-off-by: Feiyang Chen <[email protected]>
Signed-off-by: Huacai Chen <[email protected]>
---
arch/loongarch/include/asm/pgalloc.h | 13 ++-----------
arch/loongarch/include/asm/pgtable.h | 8 ++++----
arch/loongarch/kernel/numa.c | 4 ++--
arch/loongarch/mm/pgtable.c | 23 +++++++++++++----------
arch/mips/include/asm/pgalloc.h | 10 +++++-----
arch/mips/include/asm/pgtable-64.h | 8 ++++----
arch/mips/kvm/mmu.c | 3 +--
arch/mips/mm/pgtable-32.c | 9 ++++-----
arch/mips/mm/pgtable-64.c | 18 ++++++++++--------
arch/mips/mm/pgtable.c | 2 +-
arch/nios2/include/asm/pgalloc.h | 5 -----
11 files changed, 46 insertions(+), 57 deletions(-)

diff --git a/arch/loongarch/include/asm/pgalloc.h b/arch/loongarch/include/asm/pgalloc.h
index 4bfeb3c9c9ac..af1d1e4a6965 100644
--- a/arch/loongarch/include/asm/pgalloc.h
+++ b/arch/loongarch/include/asm/pgalloc.h
@@ -42,15 +42,6 @@ static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4d, pud_t *pud)

extern void pagetable_init(void);

-/*
- * Initialize a new pmd table with invalid pointers.
- */
-extern void pmd_init(unsigned long page, unsigned long pagetable);
-
-/*
- * Initialize a new pgd / pmd table with invalid pointers.
- */
-extern void pgd_init(unsigned long page);
extern pgd_t *pgd_alloc(struct mm_struct *mm);

#define __pte_free_tlb(tlb, pte, address) \
@@ -76,7 +67,7 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
}

pmd = (pmd_t *)page_address(pg);
- pmd_init((unsigned long)pmd, (unsigned long)invalid_pte_table);
+ pmd_init(pmd);
return pmd;
}

@@ -92,7 +83,7 @@ static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address)

pud = (pud_t *) __get_free_page(GFP_KERNEL);
if (pud)
- pud_init((unsigned long)pud, (unsigned long)invalid_pmd_table);
+ pud_init(pud);
return pud;
}

diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
index 946704bee599..72eb8bc352fb 100644
--- a/arch/loongarch/include/asm/pgtable.h
+++ b/arch/loongarch/include/asm/pgtable.h
@@ -237,11 +237,11 @@ extern void set_pmd_at(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp, pm
#define pfn_pmd(pfn, prot) __pmd(((pfn) << _PFN_SHIFT) | pgprot_val(prot))

/*
- * Initialize a new pgd / pmd table with invalid pointers.
+ * Initialize a new pgd / pud / pmd table with invalid pointers.
*/
-extern void pgd_init(unsigned long page);
-extern void pud_init(unsigned long page, unsigned long pagetable);
-extern void pmd_init(unsigned long page, unsigned long pagetable);
+extern void pgd_init(void *addr);
+extern void pud_init(void *addr);
+extern void pmd_init(void *addr);

/*
* Non-present pages: high 40 bits are offset, next 8 bits type,
diff --git a/arch/loongarch/kernel/numa.c b/arch/loongarch/kernel/numa.c
index a13f92593cfd..eb5d3a4c8a7a 100644
--- a/arch/loongarch/kernel/numa.c
+++ b/arch/loongarch/kernel/numa.c
@@ -78,7 +78,7 @@ void __init pcpu_populate_pte(unsigned long addr)
new = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
pgd_populate(&init_mm, pgd, new);
#ifndef __PAGETABLE_PUD_FOLDED
- pud_init((unsigned long)new, (unsigned long)invalid_pmd_table);
+ pud_init(new);
#endif
}

@@ -89,7 +89,7 @@ void __init pcpu_populate_pte(unsigned long addr)
new = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
pud_populate(&init_mm, pud, new);
#ifndef __PAGETABLE_PMD_FOLDED
- pmd_init((unsigned long)new, (unsigned long)invalid_pte_table);
+ pmd_init(new);
#endif
}

diff --git a/arch/loongarch/mm/pgtable.c b/arch/loongarch/mm/pgtable.c
index ee179ccd3e3f..36a6dc0148ae 100644
--- a/arch/loongarch/mm/pgtable.c
+++ b/arch/loongarch/mm/pgtable.c
@@ -16,7 +16,7 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
ret = (pgd_t *) __get_free_page(GFP_KERNEL);
if (ret) {
init = pgd_offset(&init_mm, 0UL);
- pgd_init((unsigned long)ret);
+ pgd_init(ret);
memcpy(ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD,
(PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
}
@@ -25,7 +25,7 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
}
EXPORT_SYMBOL_GPL(pgd_alloc);

-void pgd_init(unsigned long page)
+void pgd_init(void *addr)
{
unsigned long *p, *end;
unsigned long entry;
@@ -38,7 +38,7 @@ void pgd_init(unsigned long page)
entry = (unsigned long)invalid_pte_table;
#endif

- p = (unsigned long *) page;
+ p = (unsigned long *)addr;
end = p + PTRS_PER_PGD;

do {
@@ -56,11 +56,12 @@ void pgd_init(unsigned long page)
EXPORT_SYMBOL_GPL(pgd_init);

#ifndef __PAGETABLE_PMD_FOLDED
-void pmd_init(unsigned long addr, unsigned long pagetable)
+void pmd_init(void *addr)
{
unsigned long *p, *end;
+ unsigned long pagetable = (unsigned long)invalid_pte_table;

- p = (unsigned long *) addr;
+ p = (unsigned long *)addr;
end = p + PTRS_PER_PMD;

do {
@@ -79,9 +80,10 @@ EXPORT_SYMBOL_GPL(pmd_init);
#endif

#ifndef __PAGETABLE_PUD_FOLDED
-void pud_init(unsigned long addr, unsigned long pagetable)
+void pud_init(void *addr)
{
unsigned long *p, *end;
+ unsigned long pagetable = (unsigned long)invalid_pmd_table;

p = (unsigned long *)addr;
end = p + PTRS_PER_PUD;
@@ -98,6 +100,7 @@ void pud_init(unsigned long addr, unsigned long pagetable)
p[-1] = pagetable;
} while (p != end);
}
+EXPORT_SYMBOL_GPL(pud_init);
#endif

pmd_t mk_pmd(struct page *page, pgprot_t prot)
@@ -119,12 +122,12 @@ void set_pmd_at(struct mm_struct *mm, unsigned long addr,
void __init pagetable_init(void)
{
/* Initialize the entire pgd. */
- pgd_init((unsigned long)swapper_pg_dir);
- pgd_init((unsigned long)invalid_pg_dir);
+ pgd_init(swapper_pg_dir);
+ pgd_init(invalid_pg_dir);
#ifndef __PAGETABLE_PUD_FOLDED
- pud_init((unsigned long)invalid_pud_table, (unsigned long)invalid_pmd_table);
+ pud_init(invalid_pud_table);
#endif
#ifndef __PAGETABLE_PMD_FOLDED
- pmd_init((unsigned long)invalid_pmd_table, (unsigned long)invalid_pte_table);
+ pmd_init(invalid_pmd_table);
#endif
}
diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h
index 796035784c73..f72e737dda21 100644
--- a/arch/mips/include/asm/pgalloc.h
+++ b/arch/mips/include/asm/pgalloc.h
@@ -33,7 +33,7 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
/*
* Initialize a new pmd table with invalid pointers.
*/
-extern void pmd_init(unsigned long page, unsigned long pagetable);
+extern void pmd_init(void *addr);

#ifndef __PAGETABLE_PMD_FOLDED

@@ -44,9 +44,9 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
#endif

/*
- * Initialize a new pgd / pmd table with invalid pointers.
+ * Initialize a new pgd table with invalid pointers.
*/
-extern void pgd_init(unsigned long page);
+extern void pgd_init(void *addr);
extern pgd_t *pgd_alloc(struct mm_struct *mm);

static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
@@ -77,7 +77,7 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
}

pmd = (pmd_t *)page_address(pg);
- pmd_init((unsigned long)pmd, (unsigned long)invalid_pte_table);
+ pmd_init(pmd);
return pmd;
}

@@ -93,7 +93,7 @@ static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address)

pud = (pud_t *) __get_free_pages(GFP_KERNEL, PUD_TABLE_ORDER);
if (pud)
- pud_init((unsigned long)pud, (unsigned long)invalid_pmd_table);
+ pud_init(pud);
return pud;
}

diff --git a/arch/mips/include/asm/pgtable-64.h b/arch/mips/include/asm/pgtable-64.h
index 436c29d698fa..c6310192b654 100644
--- a/arch/mips/include/asm/pgtable-64.h
+++ b/arch/mips/include/asm/pgtable-64.h
@@ -313,11 +313,11 @@ static inline pmd_t *pud_pgtable(pud_t pud)
#endif

/*
- * Initialize a new pgd / pmd table with invalid pointers.
+ * Initialize a new pgd / pud / pmd table with invalid pointers.
*/
-extern void pgd_init(unsigned long page);
-extern void pud_init(unsigned long page, unsigned long pagetable);
-extern void pmd_init(unsigned long page, unsigned long pagetable);
+extern void pgd_init(void *addr);
+extern void pud_init(void *addr);
+extern void pmd_init(void *addr);

/*
* Non-present pages: high 40 bits are offset, next 8 bits type,
diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index 74cd64a24d05..e8c08988ed37 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -122,8 +122,7 @@ static pte_t *kvm_mips_walk_pgd(pgd_t *pgd, struct kvm_mmu_memory_cache *cache,
if (!cache)
return NULL;
new_pmd = kvm_mmu_memory_cache_alloc(cache);
- pmd_init((unsigned long)new_pmd,
- (unsigned long)invalid_pte_table);
+ pmd_init(new_pmd);
pud_populate(NULL, pud, new_pmd);
}
pmd = pmd_offset(pud, addr);
diff --git a/arch/mips/mm/pgtable-32.c b/arch/mips/mm/pgtable-32.c
index 61891af25019..f57fb69472f8 100644
--- a/arch/mips/mm/pgtable-32.c
+++ b/arch/mips/mm/pgtable-32.c
@@ -13,9 +13,9 @@
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>

-void pgd_init(unsigned long page)
+void pgd_init(void *addr)
{
- unsigned long *p = (unsigned long *) page;
+ unsigned long *p = (unsigned long *)addr;
int i;

for (i = 0; i < USER_PTRS_PER_PGD; i+=8) {
@@ -61,9 +61,8 @@ void __init pagetable_init(void)
#endif

/* Initialize the entire pgd. */
- pgd_init((unsigned long)swapper_pg_dir);
- pgd_init((unsigned long)swapper_pg_dir
- + sizeof(pgd_t) * USER_PTRS_PER_PGD);
+ pgd_init(swapper_pg_dir);
+ pgd_init(&swapper_pg_dir[USER_PTRS_PER_PGD]);

pgd_base = swapper_pg_dir;

diff --git a/arch/mips/mm/pgtable-64.c b/arch/mips/mm/pgtable-64.c
index 7536f7804c44..b4386a0e2ef8 100644
--- a/arch/mips/mm/pgtable-64.c
+++ b/arch/mips/mm/pgtable-64.c
@@ -13,7 +13,7 @@
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>

-void pgd_init(unsigned long page)
+void pgd_init(void *addr)
{
unsigned long *p, *end;
unsigned long entry;
@@ -26,7 +26,7 @@ void pgd_init(unsigned long page)
entry = (unsigned long)invalid_pte_table;
#endif

- p = (unsigned long *) page;
+ p = (unsigned long *) addr;
end = p + PTRS_PER_PGD;

do {
@@ -43,11 +43,12 @@ void pgd_init(unsigned long page)
}

#ifndef __PAGETABLE_PMD_FOLDED
-void pmd_init(unsigned long addr, unsigned long pagetable)
+void pmd_init(void *addr)
{
unsigned long *p, *end;
+ unsigned long pagetable = (unsigned long)invalid_pte_table;

- p = (unsigned long *) addr;
+ p = (unsigned long *)addr;
end = p + PTRS_PER_PMD;

do {
@@ -66,9 +67,10 @@ EXPORT_SYMBOL_GPL(pmd_init);
#endif

#ifndef __PAGETABLE_PUD_FOLDED
-void pud_init(unsigned long addr, unsigned long pagetable)
+void pud_init(void *addr)
{
unsigned long *p, *end;
+ unsigned long pagetable = (unsigned long)invalid_pmd_table;

p = (unsigned long *)addr;
end = p + PTRS_PER_PUD;
@@ -108,12 +110,12 @@ void __init pagetable_init(void)
pgd_t *pgd_base;

/* Initialize the entire pgd. */
- pgd_init((unsigned long)swapper_pg_dir);
+ pgd_init(swapper_pg_dir);
#ifndef __PAGETABLE_PUD_FOLDED
- pud_init((unsigned long)invalid_pud_table, (unsigned long)invalid_pmd_table);
+ pud_init(invalid_pud_table);
#endif
#ifndef __PAGETABLE_PMD_FOLDED
- pmd_init((unsigned long)invalid_pmd_table, (unsigned long)invalid_pte_table);
+ pmd_init(invalid_pmd_table);
#endif
pgd_base = swapper_pg_dir;
/*
diff --git a/arch/mips/mm/pgtable.c b/arch/mips/mm/pgtable.c
index 3b7590660a04..b13314be5d0e 100644
--- a/arch/mips/mm/pgtable.c
+++ b/arch/mips/mm/pgtable.c
@@ -15,7 +15,7 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
ret = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_TABLE_ORDER);
if (ret) {
init = pgd_offset(&init_mm, 0UL);
- pgd_init((unsigned long)ret);
+ pgd_init(ret);
memcpy(ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD,
(PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
}
diff --git a/arch/nios2/include/asm/pgalloc.h b/arch/nios2/include/asm/pgalloc.h
index 3c4ae74d5798..ecd1657bb2ce 100644
--- a/arch/nios2/include/asm/pgalloc.h
+++ b/arch/nios2/include/asm/pgalloc.h
@@ -26,11 +26,6 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
set_pmd(pmd, __pmd((unsigned long)page_address(pte)));
}

-/*
- * Initialize a new pmd table with invalid pointers.
- */
-extern void pmd_init(unsigned long page, unsigned long pagetable);
-
extern pgd_t *pgd_alloc(struct mm_struct *mm);

#define __pte_free_tlb(tlb, pte, addr) \
--
2.31.1


2022-11-03 03:43:56

by Muchun Song

Subject: Re: [PATCH V14 4/4] LoongArch: Enable ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP



> On Oct 27, 2022, at 20:52, Huacai Chen <[email protected]> wrote:
>
> From: Feiyang Chen <[email protected]>
>
> The feature of minimizing the overhead of struct page associated with
> each HugeTLB page is implemented on x86_64. However, the infrastructure of

I'd prefer to refer to this feature as HVO (HugeTLB Vmemmap Optimization), which is more concise.

> this feature is already there, so just selecting
> ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP is enough to enable this
> feature for LoongArch.
>
> Reviewed-by: Philippe Mathieu-Daudé <[email protected]>
> Signed-off-by: Feiyang Chen <[email protected]>
> Signed-off-by: Huacai Chen <[email protected]>

Acked-by: Muchun Song <[email protected]>

Thanks.

2022-11-12 10:34:57

by Huacai Chen

Subject: Re: [PATCH V14 0/4] mm/sparse-vmemmap: Generalise helpers and enable for LoongArch

Hi, Arnd,

Just a gentle ping, is this series good enough now? I think the last
problem (static-key.h inclusion) has also been solved.


Huacai

On Thu, Oct 27, 2022 at 8:54 PM Huacai Chen <[email protected]> wrote:
>
> This series enables sparse-vmemmap for LoongArch. LoongArch cannot use
> the generic helpers directly because MIPS and LoongArch need to call
> pgd_init()/pud_init()/pmd_init() when populating page tables. So we
> adjust the prototypes of p?d_init() so that the generic helpers can
> call them, then enable sparse-vmemmap with the generic helpers, and
> finally generalise vmemmap_populate_hugepages() for ARM64, X86 and
> LoongArch.
>
> V1 -> V2:
> Split ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP to a separate patch.
>
> V2 -> V3:
> 1, Change the Signed-off-by order of author and committer;
> 2, Update commit message about the build error on LoongArch.
>
> V3 -> V4:
> Change pmd to pmdp for ARM64 for consistency.
>
> V4 -> V5:
> Add a detailed comment for no-fallback in the altmap case.
>
> V5 -> V6:
> 1, Fix build error for NIOS2;
> 2, Fix build error for allnoconfig;
> 3, Update comment for no-fallback in the altmap case.
>
> V6 -> V7:
> Fix build warnings of "no previous prototype".
>
> V7 -> V8:
> Fix build error for MIPS pud_init().
>
> V8 -> V9:
> Remove redundant #include to avoid build error with latest upstream
> kernel.
>
> V9 -> V10:
> Fix build error due to VMEMMAP changes in 6.0-rc1.
>
> V10 -> V11:
> Adjust context due to ARM64 changes in 6.1-rc1.
>
> V11 -> V12:
> 1, Fix build error for !SPARSEMEM;
> 2, Simplify pagetable_init() for MIPS32.
>
> V12 -> V13:
> 1, Add Acked-by and Reviewed-by tags;
> 2, Update commit message for the 4th patch.
>
> V13 -> V14:
> Remove the static_key.h inclusion in the 4th patch.
>
> Huacai Chen and Feiyang Chen (4):
> MIPS&LoongArch&NIOS2: Adjust prototypes of p?d_init().
> LoongArch: Add sparse memory vmemmap support.
> mm/sparse-vmemmap: Generalise vmemmap_populate_hugepages().
> LoongArch: Enable ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP.
>
> Signed-off-by: Huacai Chen <[email protected]>
> Signed-off-by: Feiyang Chen <[email protected]>
> ---
> arch/arm64/mm/mmu.c | 53 ++++++--------------
> arch/loongarch/Kconfig | 2 +
> arch/loongarch/include/asm/pgalloc.h | 13 +----
> arch/loongarch/include/asm/pgtable.h | 13 +++--
> arch/loongarch/include/asm/sparsemem.h | 8 +++
> arch/loongarch/kernel/numa.c | 4 +-
> arch/loongarch/mm/init.c | 44 +++++++++++++++-
> arch/loongarch/mm/pgtable.c | 23 +++++----
> arch/mips/include/asm/pgalloc.h | 8 +--
> arch/mips/include/asm/pgtable-64.h | 8 +--
> arch/mips/kvm/mmu.c | 3 +-
> arch/mips/mm/pgtable-32.c | 10 ++--
> arch/mips/mm/pgtable-64.c | 18 ++++---
> arch/mips/mm/pgtable.c | 2 +-
> arch/x86/mm/init_64.c | 92 ++++++++++++----------------------
> include/linux/mm.h | 8 +++
> include/linux/page-flags.h | 1 +
> mm/sparse-vmemmap.c | 64 +++++++++++++++++++++++
> 18 files changed, 222 insertions(+), 152 deletions(-)
> --
> 2.27.0
>

2022-11-14 20:37:46

by Arnd Bergmann

Subject: Re: [PATCH V14 0/4] mm/sparse-vmemmap: Generalise helpers and enable for LoongArch

On Sat, Nov 12, 2022, at 11:26, Huacai Chen wrote:
> Hi, Arnd,
>
> Just a gentle ping, is this series good enough now? I think the last
> problem (static-key.h inclusion) has also been solved.

Yes, this looks fine to me. Sorry I didn't have this on my
radar any more.

Reviewed-by: Arnd Bergmann <[email protected]>

I guess the series should be merged through Andrew's linux-mm
tree. Let me know if for some reason I should pick it up into
the asm-generic tree instead.

Arnd

2022-11-27 06:15:32

by Huacai Chen

Subject: Re: [PATCH V14 0/4] mm/sparse-vmemmap: Generalise helpers and enable for LoongArch

Hi, Andrew,

On Tue, Nov 15, 2022 at 4:09 AM Arnd Bergmann <[email protected]> wrote:
>
> On Sat, Nov 12, 2022, at 11:26, Huacai Chen wrote:
> > Hi, Arnd,
> >
> > Just a gentle ping, is this series good enough now? I think the last
> > problem (static-key.h inclusion) has also been solved.
>
> Yes, this looks fine to me. Sorry I didn't have this on my
> radar any more.
>
> Reviewed-by: Arnd Bergmann <[email protected]>
>
> I guess the series should be merged through Andrew's linux-mm
> tree. Let me know if for some reason I should pick it up into
> the asm-generic tree instead.
Another gentle ping, can this series be merged to linux-mm in the 6.2 cycle?

Huacai
>
> Arnd

2022-11-28 23:39:43

by Andrew Morton

Subject: Re: [PATCH V14 0/4] mm/sparse-vmemmap: Generalise helpers and enable for LoongArch

On Sun, 27 Nov 2022 13:01:19 +0800 Huacai Chen <[email protected]> wrote:

> Hi, Andrew,
>
> On Tue, Nov 15, 2022 at 4:09 AM Arnd Bergmann <[email protected]> wrote:
> >
> > On Sat, Nov 12, 2022, at 11:26, Huacai Chen wrote:
> > > Hi, Arnd,
> > >
> > > Just a gentle ping, is this series good enough now? I think the last
> > > problem (static-key.h inclusion) has also been solved.
> >
> > Yes, this looks fine to me. Sorry I didn't have this on my
> > radar any more.
> >
> > Reviewed-by: Arnd Bergmann <[email protected]>
> >
> > I guess the series should be merged through Andrew's linux-mm
> > tree. Let me know if for some reason I should pick it up into
> > the asm-generic tree instead.
> Another gentle ping, can this series be merged to linux-mm in the 6.2 cycle?

It's a pretty large patchset, and I'm a bit concerned about the amount
of review and testing it has received from the MIPS side?

Prudence suggests that we merge this in 6.3-rc1. But I'll queue it up
for now to get a bit of testing while we consider this.

2022-11-29 04:02:53

by Huacai Chen

Subject: Re: [PATCH V14 0/4] mm/sparse-vmemmap: Generalise helpers and enable for LoongArch

On Tue, Nov 29, 2022 at 7:10 AM Andrew Morton <[email protected]> wrote:
>
> On Sun, 27 Nov 2022 13:01:19 +0800 Huacai Chen <[email protected]> wrote:
>
> > Hi, Andrew,
> >
> > On Tue, Nov 15, 2022 at 4:09 AM Arnd Bergmann <[email protected]> wrote:
> > >
> > > On Sat, Nov 12, 2022, at 11:26, Huacai Chen wrote:
> > > > Hi, Arnd,
> > > >
> > > > Just a gentle ping, is this series good enough now? I think the last
> > > > problem (static-key.h inclusion) has also been solved.
> > >
> > > Yes, this looks fine to me. Sorry I didn't have this on my
> > > radar any more.
> > >
> > > Reviewed-by: Arnd Bergmann <[email protected]>
> > >
> > > I guess the series should be merged through Andrew's linux-mm
> > > tree. Let me know if for some reason I should pick it up into
> > > the asm-generic tree instead.
> > Another gentle ping, can this series be merged to linux-mm in the 6.2 cycle?
>
> It's a pretty large patchset, and I'm a bit concerned about the amount
> of review and testing it has received from the MIPS side?
We have tested it on MIPS-based Loongson machines. :)

>
> Prudence suggests that we merge this in 6.3-rc1. But I'll queue it up
> for now to get a bit of testing while we consider this.
>
OK, thanks, you are free to decide this.

Huacai