Most architectures currently have a debugfs file for dumping the
kernel page tables. Each architecture has to implement custom functions
for walking those page tables because the generic walk_page_range()
function is unable to walk the page tables used by the kernel.
This series extends the capabilities of walk_page_range() so that it can
deal with the page tables of the kernel (which have no VMAs and can
contain larger huge pages than exist for user space). x86 and arm64 are
then converted to make use of walk_page_range() removing the custom page
table walkers.
To enable a generic page table walker to walk the unusual mappings of
the kernel we need to implement a set of functions which let us know
when the walker has reached a leaf entry. Since arm, powerpc, s390,
sparc and x86 all have p?d_large() macros, let's standardise on that
name and implement those that are missing.
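To illustrate the intended semantics, a minimal sketch of how a walker
can use pmd_large() to tell a leaf (block) entry from a table entry;
note_leaf() and walk_pte() are hypothetical helpers, not part of this
series:

static int walk_pmd(pud_t *pud, unsigned long addr, unsigned long end)
{
	pmd_t *pmd = pmd_offset(pud, addr);
	unsigned long next;

	do {
		next = pmd_addr_end(addr, end);
		if (pmd_none(*pmd))
			continue;			/* hole */
		if (pmd_large(*pmd))
			note_leaf(addr, pmd_val(*pmd));	/* final mapping */
		else
			walk_pte(pmd, addr, next);	/* descend to PTEs */
	} while (pmd++, addr = next, addr != end);

	return 0;
}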
Potentially future changes could unify the implementations of the
debugfs walkers further, moving the common functionality into common
code. This would require a common way of handling the effective
permissions (currently implemented only for x86) along with a per-arch
way of formatting the page table information for debugfs. One
immediate benefit would be getting the KASAN speed-up optimisation on
arm64 (and other arches), which is currently only implemented for x86.
Changes since v3:
* Restored the generic macros, only implement p?d_large() for
architectures that have support for large pages. This also means
adding dummy #defines for architectures that define p?d_large as
static inline to avoid picking up the generic macro.
* Drop the 'depth' argument from pte_hole
* Because we no longer have the depth for holes, we also drop support
in x86 for showing missing pages in debugfs. See discussion below:
https://lore.kernel.org/lkml/[email protected]/
* mips: only define p?d_large when _PAGE_HUGE is defined.
Changes since v2:
* Rather than attempting to provide generic macros, actually implement
p?d_large() for each architecture.
Changes since v1:
* Added p4d_large() macro
* Comments to explain p?d_large() macro semantics
* Expanded comment for pte_hole() callback to explain mapping between
depth and P?D
* Handle folded page tables at all levels, so depth from pte_hole()
ignores folding at any level (see real_depth() function in
mm/pagewalk.c)
Steven Price (19):
arc: mm: Add p?d_large() definitions
arm64: mm: Add p?d_large() definitions
mips: mm: Add p?d_large() definitions
powerpc: mm: Add p?d_large() definitions
riscv: mm: Add p?d_large() definitions
s390: mm: Add p?d_large() definitions
sparc: mm: Add p?d_large() definitions
x86: mm: Add p?d_large() definitions
mm: Add generic p?d_large() macros
mm: pagewalk: Add p4d_entry() and pgd_entry()
mm: pagewalk: Allow walking without vma
mm: pagewalk: Add test_p?d callbacks
arm64: mm: Convert mm/dump.c to use walk_page_range()
x86: mm: Don't display pages which aren't present in debugfs
x86: mm: Point to struct seq_file from struct pg_state
x86: mm+efi: Convert ptdump_walk_pgd_level() to take a mm_struct
x86: mm: Convert ptdump_walk_pgd_level_debugfs() to take an mm_struct
x86: mm: Convert ptdump_walk_pgd_level_core() to take an mm_struct
x86: mm: Convert dump_pagetables to use walk_page_range
arch/arc/include/asm/pgtable.h | 1 +
arch/arm64/include/asm/pgtable.h | 2 +
arch/arm64/mm/dump.c | 117 ++++---
arch/mips/include/asm/pgtable-64.h | 8 +
arch/powerpc/include/asm/book3s/64/pgtable.h | 30 +-
arch/powerpc/kvm/book3s_64_mmu_radix.c | 12 +-
arch/riscv/include/asm/pgtable-64.h | 7 +
arch/riscv/include/asm/pgtable.h | 7 +
arch/s390/include/asm/pgtable.h | 2 +
arch/sparc/include/asm/pgtable_64.h | 2 +
arch/x86/include/asm/pgtable.h | 10 +-
arch/x86/mm/debug_pagetables.c | 8 +-
arch/x86/mm/dump_pagetables.c | 349 ++++++++++---------
arch/x86/platform/efi/efi_32.c | 2 +-
arch/x86/platform/efi/efi_64.c | 4 +-
include/asm-generic/pgtable.h | 19 +
include/linux/mm.h | 20 +-
mm/pagewalk.c | 76 +++-
18 files changed, 404 insertions(+), 272 deletions(-)
--
2.20.1
walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_large() functions/macros.
For powerpc, pmd_large() was already implemented, so hoist it out of
the CONFIG_TRANSPARENT_HUGEPAGE condition and implement the other
levels. Also, since pmd_large() is now always implemented, we can drop
the pmd_is_leaf() function.
CC: Benjamin Herrenschmidt <[email protected]>
CC: Paul Mackerras <[email protected]>
CC: Michael Ellerman <[email protected]>
CC: [email protected]
CC: [email protected]
Signed-off-by: Steven Price <[email protected]>
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 30 ++++++++++++++------
arch/powerpc/kvm/book3s_64_mmu_radix.c | 12 ++------
2 files changed, 24 insertions(+), 18 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index c9bfe526ca9d..c4b29caf2a3b 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -907,6 +907,12 @@ static inline int pud_present(pud_t pud)
return (pud_raw(pud) & cpu_to_be64(_PAGE_PRESENT));
}
+#define pud_large pud_large
+static inline int pud_large(pud_t pud)
+{
+ return !!(pud_raw(pud) & cpu_to_be64(_PAGE_PTE));
+}
+
extern struct page *pud_page(pud_t pud);
extern struct page *pmd_page(pmd_t pmd);
static inline pte_t pud_pte(pud_t pud)
@@ -954,6 +960,12 @@ static inline int pgd_present(pgd_t pgd)
return (pgd_raw(pgd) & cpu_to_be64(_PAGE_PRESENT));
}
+#define pgd_large pgd_large
+static inline int pgd_large(pgd_t pgd)
+{
+ return !!(pgd_raw(pgd) & cpu_to_be64(_PAGE_PTE));
+}
+
static inline pte_t pgd_pte(pgd_t pgd)
{
return __pte_raw(pgd_raw(pgd));
@@ -1107,6 +1119,15 @@ static inline bool pmd_access_permitted(pmd_t pmd, bool write)
return pte_access_permitted(pmd_pte(pmd), write);
}
+#define pmd_large pmd_large
+/*
+ * returns true for pmd migration entries, THP, devmap, hugetlb
+ */
+static inline int pmd_large(pmd_t pmd)
+{
+ return !!(pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE));
+}
+
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
extern pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot);
extern pmd_t mk_pmd(struct page *page, pgprot_t pgprot);
@@ -1133,15 +1154,6 @@ pmd_hugepage_update(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp,
return hash__pmd_hugepage_update(mm, addr, pmdp, clr, set);
}
-/*
- * returns true for pmd migration entries, THP, devmap, hugetlb
- * But compile time dependent on THP config
- */
-static inline int pmd_large(pmd_t pmd)
-{
- return !!(pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE));
-}
-
static inline pmd_t pmd_mknotpresent(pmd_t pmd)
{
return __pmd(pmd_val(pmd) & ~_PAGE_PRESENT);
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 1b821c6efdef..040db20ac2ab 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -363,12 +363,6 @@ static void kvmppc_pte_free(pte_t *ptep)
kmem_cache_free(kvm_pte_cache, ptep);
}
-/* Like pmd_huge() and pmd_large(), but works regardless of config options */
-static inline int pmd_is_leaf(pmd_t pmd)
-{
- return !!(pmd_val(pmd) & _PAGE_PTE);
-}
-
static pmd_t *kvmppc_pmd_alloc(void)
{
return kmem_cache_alloc(kvm_pmd_cache, GFP_KERNEL);
@@ -455,7 +449,7 @@ static void kvmppc_unmap_free_pmd(struct kvm *kvm, pmd_t *pmd, bool full,
for (im = 0; im < PTRS_PER_PMD; ++im, ++p) {
if (!pmd_present(*p))
continue;
- if (pmd_is_leaf(*p)) {
+ if (pmd_large(*p)) {
if (full) {
pmd_clear(p);
} else {
@@ -588,7 +582,7 @@ int kvmppc_create_pte(struct kvm *kvm, pgd_t *pgtable, pte_t pte,
else if (level <= 1)
new_pmd = kvmppc_pmd_alloc();
- if (level == 0 && !(pmd && pmd_present(*pmd) && !pmd_is_leaf(*pmd)))
+ if (level == 0 && !(pmd && pmd_present(*pmd) && !pmd_large(*pmd)))
new_ptep = kvmppc_pte_alloc();
/* Check if we might have been invalidated; let the guest retry if so */
@@ -657,7 +651,7 @@ int kvmppc_create_pte(struct kvm *kvm, pgd_t *pgtable, pte_t pte,
new_pmd = NULL;
}
pmd = pmd_offset(pud, gpa);
- if (pmd_is_leaf(*pmd)) {
+ if (pmd_large(*pmd)) {
unsigned long lgpa = gpa & PMD_MASK;
/* Check if we raced and someone else has set the same thing */
--
2.20.1
To enable x86 to use the generic walk_page_range() function, the
callers of ptdump_walk_pgd_level() need to pass an mm_struct rather
than the raw pgd_t pointer. Luckily since commit 7e904a91bf60
("efi: Use efi_mm in x86 as well as ARM") we now have an mm_struct
for EFI on x86.
Signed-off-by: Steven Price <[email protected]>
---
arch/x86/include/asm/pgtable.h | 2 +-
arch/x86/mm/dump_pagetables.c | 4 ++--
arch/x86/platform/efi/efi_32.c | 2 +-
arch/x86/platform/efi/efi_64.c | 4 ++--
4 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 0dd04cf6ebeb..579959750f34 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -27,7 +27,7 @@
extern pgd_t early_top_pgt[PTRS_PER_PGD];
int __init __early_make_pgtable(unsigned long address, pmdval_t pmd);
-void ptdump_walk_pgd_level(struct seq_file *m, pgd_t *pgd);
+void ptdump_walk_pgd_level(struct seq_file *m, struct mm_struct *mm);
void ptdump_walk_pgd_level_debugfs(struct seq_file *m, pgd_t *pgd, bool user);
void ptdump_walk_pgd_level_checkwx(void);
void ptdump_walk_user_pgd_level_checkwx(void);
diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index b448546277f4..f3663c5e8c6a 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -576,9 +576,9 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd,
pr_info("x86/mm: Checked W+X mappings: passed, no W+X pages found.\n");
}
-void ptdump_walk_pgd_level(struct seq_file *m, pgd_t *pgd)
+void ptdump_walk_pgd_level(struct seq_file *m, struct mm_struct *mm)
{
- ptdump_walk_pgd_level_core(m, pgd, false, true);
+ ptdump_walk_pgd_level_core(m, mm->pgd, false, true);
}
void ptdump_walk_pgd_level_debugfs(struct seq_file *m, pgd_t *pgd, bool user)
diff --git a/arch/x86/platform/efi/efi_32.c b/arch/x86/platform/efi/efi_32.c
index 9959657127f4..9175ceaa6e72 100644
--- a/arch/x86/platform/efi/efi_32.c
+++ b/arch/x86/platform/efi/efi_32.c
@@ -49,7 +49,7 @@ void efi_sync_low_kernel_mappings(void) {}
void __init efi_dump_pagetable(void)
{
#ifdef CONFIG_EFI_PGT_DUMP
- ptdump_walk_pgd_level(NULL, swapper_pg_dir);
+ ptdump_walk_pgd_level(NULL, &init_mm);
#endif
}
diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
index cf0347f61b21..a2e0f9800190 100644
--- a/arch/x86/platform/efi/efi_64.c
+++ b/arch/x86/platform/efi/efi_64.c
@@ -611,9 +611,9 @@ void __init efi_dump_pagetable(void)
{
#ifdef CONFIG_EFI_PGT_DUMP
if (efi_enabled(EFI_OLD_MEMMAP))
- ptdump_walk_pgd_level(NULL, swapper_pg_dir);
+ ptdump_walk_pgd_level(NULL, &init_mm);
else
- ptdump_walk_pgd_level(NULL, efi_mm.pgd);
+ ptdump_walk_pgd_level(NULL, &efi_mm);
#endif
}
--
2.20.1
walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information will be provided by the
p?d_large() functions/macros.
For arc, we only have two levels, so only pmd_large() is needed.
CC: Vineet Gupta <[email protected]>
CC: [email protected]
Signed-off-by: Steven Price <[email protected]>
Acked-by: Vineet Gupta <[email protected]>
---
arch/arc/include/asm/pgtable.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h
index cf4be70d5892..0edd27bc7018 100644
--- a/arch/arc/include/asm/pgtable.h
+++ b/arch/arc/include/asm/pgtable.h
@@ -277,6 +277,7 @@ static inline void pmd_set(pmd_t *pmdp, pte_t *ptep)
#define pmd_none(x) (!pmd_val(x))
#define pmd_bad(x) ((pmd_val(x) & ~PAGE_MASK))
#define pmd_present(x) (pmd_val(x))
+#define pmd_large(pmd) (pmd_val(pmd) & _PAGE_HW_SZ)
#define pmd_clear(xp) do { pmd_val(*(xp)) = 0; } while (0)
#define pte_page(pte) pfn_to_page(pte_pfn(pte))
--
2.20.1
walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information will be provided by the
p?d_large() functions/macros.
For arm64, we already have p?d_sect() macros which we can reuse for
p?d_large().
CC: Catalin Marinas <[email protected]>
CC: Will Deacon <[email protected]>
Signed-off-by: Steven Price <[email protected]>
---
arch/arm64/include/asm/pgtable.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index de70c1eabf33..6eef345dbaf4 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -428,6 +428,7 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
PMD_TYPE_TABLE)
#define pmd_sect(pmd) ((pmd_val(pmd) & PMD_TYPE_MASK) == \
PMD_TYPE_SECT)
+#define pmd_large(pmd) pmd_sect(pmd)
#if defined(CONFIG_ARM64_64K_PAGES) || CONFIG_PGTABLE_LEVELS < 3
#define pud_sect(pud) (0)
@@ -511,6 +512,7 @@ static inline phys_addr_t pmd_page_paddr(pmd_t pmd)
#define pud_none(pud) (!pud_val(pud))
#define pud_bad(pud) (!(pud_val(pud) & PUD_TABLE_BIT))
#define pud_present(pud) pte_present(pud_pte(pud))
+#define pud_large(pud) pud_sect(pud)
#define pud_valid(pud) pte_valid(pud_pte(pud))
static inline void set_pud(pud_t *pudp, pud_t pud)
--
2.20.1
walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_large() functions/macros.
For mips, we only support large pages on 64 bit.
For 64 bit if _PAGE_HUGE is defined we can simply look for it. When not
defined we can be confident that there are no large pages in existence
and fall back on the generic implementation (added in a later patch)
which returns 0.
CC: Ralf Baechle <[email protected]>
CC: Paul Burton <[email protected]>
CC: James Hogan <[email protected]>
CC: [email protected]
Signed-off-by: Steven Price <[email protected]>
---
arch/mips/include/asm/pgtable-64.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/arch/mips/include/asm/pgtable-64.h b/arch/mips/include/asm/pgtable-64.h
index 93a9dce31f25..42162877ac62 100644
--- a/arch/mips/include/asm/pgtable-64.h
+++ b/arch/mips/include/asm/pgtable-64.h
@@ -273,6 +273,10 @@ static inline int pmd_present(pmd_t pmd)
return pmd_val(pmd) != (unsigned long) invalid_pte_table;
}
+#ifdef _PAGE_HUGE
+#define pmd_large(pmd) ((pmd_val(pmd) & _PAGE_HUGE) != 0)
+#endif
+
static inline void pmd_clear(pmd_t *pmdp)
{
pmd_val(*pmdp) = ((unsigned long) invalid_pte_table);
@@ -297,6 +301,10 @@ static inline int pud_present(pud_t pud)
return pud_val(pud) != (unsigned long) invalid_pmd_table;
}
+#ifdef _PAGE_HUGE
+#define pud_large(pud) ((pud_val(pud) & _PAGE_HUGE) != 0)
+#endif
+
static inline void pud_clear(pud_t *pudp)
{
pud_val(*pudp) = ((unsigned long) invalid_pmd_table);
--
2.20.1
walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_large() functions/macros.
For riscv a page is large when it has a read, write or execute bit
set on it.
CC: Palmer Dabbelt <[email protected]>
CC: Albert Ou <[email protected]>
CC: [email protected]
Signed-off-by: Steven Price <[email protected]>
---
arch/riscv/include/asm/pgtable-64.h | 7 +++++++
arch/riscv/include/asm/pgtable.h | 7 +++++++
2 files changed, 14 insertions(+)
diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
index 7aa0ea9bd8bb..73747d9d7c66 100644
--- a/arch/riscv/include/asm/pgtable-64.h
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -51,6 +51,13 @@ static inline int pud_bad(pud_t pud)
return !pud_present(pud);
}
+#define pud_large pud_large
+static inline int pud_large(pud_t pud)
+{
+ return pud_present(pud)
+ && (pud_val(pud) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC));
+}
+
static inline void set_pud(pud_t *pudp, pud_t pud)
{
*pudp = pud;
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 16301966d65b..c66e2a69215f 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -111,6 +111,13 @@ static inline int pmd_bad(pmd_t pmd)
return !pmd_present(pmd);
}
+#define pmd_large pmd_large
+static inline int pmd_large(pmd_t pmd)
+{
+ return pmd_present(pmd)
+ && (pmd_val(pmd) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC));
+}
+
static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
{
*pmdp = pmd;
--
2.20.1
walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_large() functions/macros.
For s390, pud_large() and pmd_large() are already implemented as static
inline functions. Add a #define so we don't pick up the generic version
introduced in a later patch.
CC: Martin Schwidefsky <[email protected]>
CC: Heiko Carstens <[email protected]>
CC: [email protected]
Signed-off-by: Steven Price <[email protected]>
---
arch/s390/include/asm/pgtable.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 063732414dfb..1f188004ba95 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -679,6 +679,7 @@ static inline int pud_none(pud_t pud)
return pud_val(pud) == _REGION3_ENTRY_EMPTY;
}
+#define pud_large pud_large
static inline int pud_large(pud_t pud)
{
if ((pud_val(pud) & _REGION_ENTRY_TYPE_MASK) != _REGION_ENTRY_TYPE_R3)
@@ -696,6 +697,7 @@ static inline unsigned long pud_pfn(pud_t pud)
return (pud_val(pud) & origin_mask) >> PAGE_SHIFT;
}
+#define pmd_large pmd_large
static inline int pmd_large(pmd_t pmd)
{
return (pmd_val(pmd) & _SEGMENT_ENTRY_LARGE) != 0;
--
2.20.1
Exposing the pud/pgd levels of the page tables to walk_page_range() means
we may come across the exotic large mappings that come with large areas
of contiguous memory (such as the kernel's linear map).
For architectures that don't provide p?d_large() macros, provide
generic do-nothing defaults.
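An architecture that implements its own version then suppresses the
generic default with a matching #define. A sketch of the pattern, with
_PAGE_MY_HUGE standing in for an arch-specific bit:

/* arch header: real implementation plus a #define to hide the generic one */
#define pmd_large pmd_large
static inline int pmd_large(pmd_t pmd)
{
	return !!(pmd_val(pmd) & _PAGE_MY_HUGE);
}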
Signed-off-by: Steven Price <[email protected]>
---
include/asm-generic/pgtable.h | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 05e61e6c843f..f0de24100ac6 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -1186,4 +1186,23 @@ static inline bool arch_has_pfn_modify_check(void)
#define mm_pmd_folded(mm) __is_defined(__PAGETABLE_PMD_FOLDED)
#endif
+/*
+ * p?d_large() - true if this entry is a final mapping to a physical address.
+ * This differs from p?d_huge() by the fact that they are always available (if
+ * the architecture supports large pages at the appropriate level) even
+ * if CONFIG_HUGETLB_PAGE is not defined.
+ */
+#ifndef pgd_large
+#define pgd_large(x) 0
+#endif
+#ifndef p4d_large
+#define p4d_large(x) 0
+#endif
+#ifndef pud_large
+#define pud_large(x) 0
+#endif
+#ifndef pmd_large
+#define pmd_large(x) 0
+#endif
+
#endif /* _ASM_GENERIC_PGTABLE_H */
--
2.20.1
Now that walk_page_range() can walk kernel page tables, we can switch
the arm64 ptdump code over to using it, simplifying the code.
Signed-off-by: Steven Price <[email protected]>
---
arch/arm64/mm/dump.c | 117 ++++++++++++++++++++++---------------------
1 file changed, 59 insertions(+), 58 deletions(-)
diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index 99bb8facb5cb..c5e936507565 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -72,7 +72,7 @@ struct pg_state {
struct seq_file *seq;
const struct addr_marker *marker;
unsigned long start_address;
- unsigned level;
+ int level;
u64 current_prot;
bool check_wx;
unsigned long wx_pages;
@@ -234,11 +234,14 @@ static void note_prot_wx(struct pg_state *st, unsigned long addr)
st->wx_pages += (addr - st->start_address) / PAGE_SIZE;
}
-static void note_page(struct pg_state *st, unsigned long addr, unsigned level,
+static void note_page(struct pg_state *st, unsigned long addr, int level,
u64 val)
{
static const char units[] = "KMGTPE";
- u64 prot = val & pg_level[level].mask;
+ u64 prot = 0;
+
+ if (level >= 0)
+ prot = val & pg_level[level].mask;
if (!st->level) {
st->level = level;
@@ -286,73 +289,71 @@ static void note_page(struct pg_state *st, unsigned long addr, unsigned level,
}
-static void walk_pte(struct pg_state *st, pmd_t *pmdp, unsigned long start,
- unsigned long end)
+static int pud_entry(pud_t *pud, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
{
- unsigned long addr = start;
- pte_t *ptep = pte_offset_kernel(pmdp, start);
+ struct pg_state *st = walk->private;
+ pud_t val = READ_ONCE(*pud);
+
+ if (pud_table(val))
+ return 0;
+
+ note_page(st, addr, 2, pud_val(val));
- do {
- note_page(st, addr, 4, READ_ONCE(pte_val(*ptep)));
- } while (ptep++, addr += PAGE_SIZE, addr != end);
+ return 0;
}
-static void walk_pmd(struct pg_state *st, pud_t *pudp, unsigned long start,
- unsigned long end)
+static int pmd_entry(pmd_t *pmd, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
{
- unsigned long next, addr = start;
- pmd_t *pmdp = pmd_offset(pudp, start);
-
- do {
- pmd_t pmd = READ_ONCE(*pmdp);
- next = pmd_addr_end(addr, end);
-
- if (pmd_none(pmd) || pmd_sect(pmd)) {
- note_page(st, addr, 3, pmd_val(pmd));
- } else {
- BUG_ON(pmd_bad(pmd));
- walk_pte(st, pmdp, addr, next);
- }
- } while (pmdp++, addr = next, addr != end);
+ struct pg_state *st = walk->private;
+ pmd_t val = READ_ONCE(*pmd);
+
+ if (pmd_table(val))
+ return 0;
+
+ note_page(st, addr, 3, pmd_val(val));
+
+ return 0;
}
-static void walk_pud(struct pg_state *st, pgd_t *pgdp, unsigned long start,
- unsigned long end)
+static int pte_entry(pte_t *pte, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
{
- unsigned long next, addr = start;
- pud_t *pudp = pud_offset(pgdp, start);
-
- do {
- pud_t pud = READ_ONCE(*pudp);
- next = pud_addr_end(addr, end);
-
- if (pud_none(pud) || pud_sect(pud)) {
- note_page(st, addr, 2, pud_val(pud));
- } else {
- BUG_ON(pud_bad(pud));
- walk_pmd(st, pudp, addr, next);
- }
- } while (pudp++, addr = next, addr != end);
+ struct pg_state *st = walk->private;
+ pte_t val = READ_ONCE(*pte);
+
+ note_page(st, addr, 4, pte_val(val));
+
+ return 0;
+}
+
+static int pte_hole(unsigned long addr, unsigned long next,
+ struct mm_walk *walk)
+{
+ struct pg_state *st = walk->private;
+
+ note_page(st, addr, -1, 0);
+
+ return 0;
}
static void walk_pgd(struct pg_state *st, struct mm_struct *mm,
- unsigned long start)
+ unsigned long start)
{
- unsigned long end = (start < TASK_SIZE_64) ? TASK_SIZE_64 : 0;
- unsigned long next, addr = start;
- pgd_t *pgdp = pgd_offset(mm, start);
-
- do {
- pgd_t pgd = READ_ONCE(*pgdp);
- next = pgd_addr_end(addr, end);
-
- if (pgd_none(pgd)) {
- note_page(st, addr, 1, pgd_val(pgd));
- } else {
- BUG_ON(pgd_bad(pgd));
- walk_pud(st, pgdp, addr, next);
- }
- } while (pgdp++, addr = next, addr != end);
+ struct mm_walk walk = {
+ .mm = mm,
+ .private = st,
+ .pud_entry = pud_entry,
+ .pmd_entry = pmd_entry,
+ .pte_entry = pte_entry,
+ .pte_hole = pte_hole
+ };
+ down_read(&mm->mmap_sem);
+ walk_page_range(start, start | (((unsigned long)PTRS_PER_PGD <<
+ PGDIR_SHIFT) - 1),
+ &walk);
+ up_read(&mm->mmap_sem);
}
void ptdump_walk_pgd(struct seq_file *m, struct ptdump_info *info)
--
2.20.1
arch/x86/mm/dump_pagetables.c passes both a struct seq_file and a
struct pg_state down the chain of walk_*_level() functions so they can
be passed to note_page(). Instead, store the struct seq_file pointer in
struct pg_state (which is private to this file) and access it from
note_page().
Signed-off-by: Steven Price <[email protected]>
---
arch/x86/mm/dump_pagetables.c | 69 ++++++++++++++++++-----------------
1 file changed, 35 insertions(+), 34 deletions(-)
diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index f9eb25dd3766..b448546277f4 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -40,6 +40,7 @@ struct pg_state {
bool to_dmesg;
bool check_wx;
unsigned long wx_pages;
+ struct seq_file *seq;
};
struct addr_marker {
@@ -268,11 +269,12 @@ static void note_wx(struct pg_state *st)
* of PTE entries; the next one is different so we need to
* print what we collected so far.
*/
-static void note_page(struct seq_file *m, struct pg_state *st,
- pgprot_t new_prot, pgprotval_t new_eff, int level)
+static void note_page(struct pg_state *st, pgprot_t new_prot,
+ pgprotval_t new_eff, int level)
{
pgprotval_t prot, cur, eff;
static const char units[] = "BKMGTPE";
+ struct seq_file *m = st->seq;
/*
* If we have a "break" in the series, we need to flush the state that
@@ -358,8 +360,8 @@ static inline pgprotval_t effective_prot(pgprotval_t prot1, pgprotval_t prot2)
((prot1 | prot2) & _PAGE_NX);
}
-static void walk_pte_level(struct seq_file *m, struct pg_state *st, pmd_t addr,
- pgprotval_t eff_in, unsigned long P)
+static void walk_pte_level(struct pg_state *st, pmd_t addr, pgprotval_t eff_in,
+ unsigned long P)
{
int i;
pte_t *pte;
@@ -370,7 +372,7 @@ static void walk_pte_level(struct seq_file *m, struct pg_state *st, pmd_t addr,
pte = pte_offset_map(&addr, st->current_address);
prot = pte_flags(*pte);
eff = effective_prot(eff_in, prot);
- note_page(m, st, __pgprot(prot), eff, 5);
+ note_page(st, __pgprot(prot), eff, 5);
pte_unmap(pte);
}
}
@@ -383,22 +385,20 @@ static void walk_pte_level(struct seq_file *m, struct pg_state *st, pmd_t addr,
* us dozens of seconds (minutes for 5-level config) while checking for
* W+X mapping or reading kernel_page_tables debugfs file.
*/
-static inline bool kasan_page_table(struct seq_file *m, struct pg_state *st,
- void *pt)
+static inline bool kasan_page_table(struct pg_state *st, void *pt)
{
if (__pa(pt) == __pa(kasan_early_shadow_pmd) ||
(pgtable_l5_enabled() &&
__pa(pt) == __pa(kasan_early_shadow_p4d)) ||
__pa(pt) == __pa(kasan_early_shadow_pud)) {
pgprotval_t prot = pte_flags(kasan_early_shadow_pte[0]);
- note_page(m, st, __pgprot(prot), 0, 5);
+ note_page(st, __pgprot(prot), 0, 5);
return true;
}
return false;
}
#else
-static inline bool kasan_page_table(struct seq_file *m, struct pg_state *st,
- void *pt)
+static inline bool kasan_page_table(struct pg_state *st, void *pt)
{
return false;
}
@@ -406,7 +406,7 @@ static inline bool kasan_page_table(struct seq_file *m, struct pg_state *st,
#if PTRS_PER_PMD > 1
-static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr,
+static void walk_pmd_level(struct pg_state *st, pud_t addr,
pgprotval_t eff_in, unsigned long P)
{
int i;
@@ -420,19 +420,19 @@ static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr,
prot = pmd_flags(*start);
eff = effective_prot(eff_in, prot);
if (pmd_large(*start) || !pmd_present(*start)) {
- note_page(m, st, __pgprot(prot), eff, 4);
- } else if (!kasan_page_table(m, st, pmd_start)) {
- walk_pte_level(m, st, *start, eff,
+ note_page(st, __pgprot(prot), eff, 4);
+ } else if (!kasan_page_table(st, pmd_start)) {
+ walk_pte_level(st, *start, eff,
P + i * PMD_LEVEL_MULT);
}
} else
- note_page(m, st, __pgprot(0), 0, 4);
+ note_page(st, __pgprot(0), 0, 4);
start++;
}
}
#else
-#define walk_pmd_level(m,s,a,e,p) walk_pte_level(m,s,__pmd(pud_val(a)),e,p)
+#define walk_pmd_level(s,a,e,p) walk_pte_level(s,__pmd(pud_val(a)),e,p)
#undef pud_large
#define pud_large(a) pmd_large(__pmd(pud_val(a)))
#define pud_none(a) pmd_none(__pmd(pud_val(a)))
@@ -440,8 +440,8 @@ static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr,
#if PTRS_PER_PUD > 1
-static void walk_pud_level(struct seq_file *m, struct pg_state *st, p4d_t addr,
- pgprotval_t eff_in, unsigned long P)
+static void walk_pud_level(struct pg_state *st, p4d_t addr, pgprotval_t eff_in,
+ unsigned long P)
{
int i;
pud_t *start, *pud_start;
@@ -456,13 +456,13 @@ static void walk_pud_level(struct seq_file *m, struct pg_state *st, p4d_t addr,
prot = pud_flags(*start);
eff = effective_prot(eff_in, prot);
if (pud_large(*start) || !pud_present(*start)) {
- note_page(m, st, __pgprot(prot), eff, 3);
- } else if (!kasan_page_table(m, st, pud_start)) {
- walk_pmd_level(m, st, *start, eff,
+ note_page(st, __pgprot(prot), eff, 3);
+ } else if (!kasan_page_table(st, pud_start)) {
+ walk_pmd_level(st, *start, eff,
P + i * PUD_LEVEL_MULT);
}
} else
- note_page(m, st, __pgprot(0), 0, 3);
+ note_page(st, __pgprot(0), 0, 3);
prev_pud = start;
start++;
@@ -470,21 +470,21 @@ static void walk_pud_level(struct seq_file *m, struct pg_state *st, p4d_t addr,
}
#else
-#define walk_pud_level(m,s,a,e,p) walk_pmd_level(m,s,__pud(p4d_val(a)),e,p)
+#define walk_pud_level(s,a,e,p) walk_pmd_level(s,__pud(p4d_val(a)),e,p)
#undef p4d_large
#define p4d_large(a) pud_large(__pud(p4d_val(a)))
#define p4d_none(a) pud_none(__pud(p4d_val(a)))
#endif
-static void walk_p4d_level(struct seq_file *m, struct pg_state *st, pgd_t addr,
- pgprotval_t eff_in, unsigned long P)
+static void walk_p4d_level(struct pg_state *st, pgd_t addr, pgprotval_t eff_in,
+ unsigned long P)
{
int i;
p4d_t *start, *p4d_start;
pgprotval_t prot, eff;
if (PTRS_PER_P4D == 1)
- return walk_pud_level(m, st, __p4d(pgd_val(addr)), eff_in, P);
+ return walk_pud_level(st, __p4d(pgd_val(addr)), eff_in, P);
p4d_start = start = (p4d_t *)pgd_page_vaddr(addr);
@@ -494,13 +494,13 @@ static void walk_p4d_level(struct seq_file *m, struct pg_state *st, pgd_t addr,
prot = p4d_flags(*start);
eff = effective_prot(eff_in, prot);
if (p4d_large(*start) || !p4d_present(*start)) {
- note_page(m, st, __pgprot(prot), eff, 2);
- } else if (!kasan_page_table(m, st, p4d_start)) {
- walk_pud_level(m, st, *start, eff,
+ note_page(st, __pgprot(prot), eff, 2);
+ } else if (!kasan_page_table(st, p4d_start)) {
+ walk_pud_level(st, *start, eff,
P + i * P4D_LEVEL_MULT);
}
} else
- note_page(m, st, __pgprot(0), 0, 2);
+ note_page(st, __pgprot(0), 0, 2);
start++;
}
@@ -538,6 +538,7 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd,
}
st.check_wx = checkwx;
+ st.seq = m;
if (checkwx)
st.wx_pages = 0;
@@ -551,13 +552,13 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd,
eff = prot;
#endif
if (pgd_large(*start) || !pgd_present(*start)) {
- note_page(m, &st, __pgprot(prot), eff, 1);
+ note_page(&st, __pgprot(prot), eff, 1);
} else {
- walk_p4d_level(m, &st, *start, eff,
+ walk_p4d_level(&st, *start, eff,
i * PGD_LEVEL_MULT);
}
} else
- note_page(m, &st, __pgprot(0), 0, 1);
+ note_page(&st, __pgprot(0), 0, 1);
cond_resched();
start++;
@@ -565,7 +566,7 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd,
/* Flush out the last page */
st.current_address = normalize_addr(PTRS_PER_PGD*PGD_LEVEL_MULT);
- note_page(m, &st, __pgprot(0), 0, 0);
+ note_page(&st, __pgprot(0), 0, 0);
if (!checkwx)
return;
if (st.wx_pages)
--
2.20.1
To enable x86 to use the generic walk_page_range() function, the
callers of ptdump_walk_pgd_level_debugfs() need to pass in the mm_struct.
This means that ptdump_walk_pgd_level_core() is now always passed a
valid pgd, so drop the support for pgd==NULL.
Signed-off-by: Steven Price <[email protected]>
---
arch/x86/include/asm/pgtable.h | 3 ++-
arch/x86/mm/debug_pagetables.c | 8 ++++----
arch/x86/mm/dump_pagetables.c | 14 ++++++--------
3 files changed, 12 insertions(+), 13 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 579959750f34..5abf693dc9b2 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -28,7 +28,8 @@ extern pgd_t early_top_pgt[PTRS_PER_PGD];
int __init __early_make_pgtable(unsigned long address, pmdval_t pmd);
void ptdump_walk_pgd_level(struct seq_file *m, struct mm_struct *mm);
-void ptdump_walk_pgd_level_debugfs(struct seq_file *m, pgd_t *pgd, bool user);
+void ptdump_walk_pgd_level_debugfs(struct seq_file *m, struct mm_struct *mm,
+ bool user);
void ptdump_walk_pgd_level_checkwx(void);
void ptdump_walk_user_pgd_level_checkwx(void);
diff --git a/arch/x86/mm/debug_pagetables.c b/arch/x86/mm/debug_pagetables.c
index cd84f067e41d..824131052574 100644
--- a/arch/x86/mm/debug_pagetables.c
+++ b/arch/x86/mm/debug_pagetables.c
@@ -6,7 +6,7 @@
static int ptdump_show(struct seq_file *m, void *v)
{
- ptdump_walk_pgd_level_debugfs(m, NULL, false);
+ ptdump_walk_pgd_level_debugfs(m, &init_mm, false);
return 0;
}
@@ -16,7 +16,7 @@ static int ptdump_curknl_show(struct seq_file *m, void *v)
{
if (current->mm->pgd) {
down_read(&current->mm->mmap_sem);
- ptdump_walk_pgd_level_debugfs(m, current->mm->pgd, false);
+ ptdump_walk_pgd_level_debugfs(m, current->mm, false);
up_read(&current->mm->mmap_sem);
}
return 0;
@@ -31,7 +31,7 @@ static int ptdump_curusr_show(struct seq_file *m, void *v)
{
if (current->mm->pgd) {
down_read(&current->mm->mmap_sem);
- ptdump_walk_pgd_level_debugfs(m, current->mm->pgd, true);
+ ptdump_walk_pgd_level_debugfs(m, current->mm, true);
up_read(&current->mm->mmap_sem);
}
return 0;
@@ -46,7 +46,7 @@ static struct dentry *pe_efi;
static int ptdump_efi_show(struct seq_file *m, void *v)
{
if (efi_mm.pgd)
- ptdump_walk_pgd_level_debugfs(m, efi_mm.pgd, false);
+ ptdump_walk_pgd_level_debugfs(m, &efi_mm, false);
return 0;
}
diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index f3663c5e8c6a..1c1b37c32787 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -527,16 +527,12 @@ static inline bool is_hypervisor_range(int idx)
static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd,
bool checkwx, bool dmesg)
{
- pgd_t *start = INIT_PGD;
+ pgd_t *start = pgd;
pgprotval_t prot, eff;
int i;
struct pg_state st = {};
- if (pgd) {
- start = pgd;
- st.to_dmesg = dmesg;
- }
-
+ st.to_dmesg = dmesg;
st.check_wx = checkwx;
st.seq = m;
if (checkwx)
@@ -581,8 +577,10 @@ void ptdump_walk_pgd_level(struct seq_file *m, struct mm_struct *mm)
ptdump_walk_pgd_level_core(m, mm->pgd, false, true);
}
-void ptdump_walk_pgd_level_debugfs(struct seq_file *m, pgd_t *pgd, bool user)
+void ptdump_walk_pgd_level_debugfs(struct seq_file *m, struct mm_struct *mm,
+ bool user)
{
+ pgd_t *pgd = mm->pgd;
#ifdef CONFIG_PAGE_TABLE_ISOLATION
if (user && static_cpu_has(X86_FEATURE_PTI))
pgd = kernel_to_user_pgdp(pgd);
@@ -608,7 +606,7 @@ void ptdump_walk_user_pgd_level_checkwx(void)
void ptdump_walk_pgd_level_checkwx(void)
{
- ptdump_walk_pgd_level_core(NULL, NULL, true, false);
+ ptdump_walk_pgd_level_core(NULL, INIT_PGD, true, false);
}
static int __init pt_dump_init(void)
--
2.20.1
An mm_struct is needed to enable x86 to make use of the generic
walk_page_range() function.
In the case of walking the user page tables (when
CONFIG_PAGE_TABLE_ISOLATION is enabled), it is necessary to create a
fake_mm structure because there isn't an mm_struct with a pointer
to the pgd of the user page tables. This fake_mm structure is
initialised with the minimum necessary for the generic page walk code.
Signed-off-by: Steven Price <[email protected]>
---
arch/x86/mm/dump_pagetables.c | 36 ++++++++++++++++++++---------------
1 file changed, 21 insertions(+), 15 deletions(-)
diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index 1c1b37c32787..b1c04ecc18cc 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -111,8 +111,6 @@ static struct addr_marker address_markers[] = {
[END_OF_SPACE_NR] = { -1, NULL }
};
-#define INIT_PGD ((pgd_t *) &init_top_pgt)
-
#else /* CONFIG_X86_64 */
enum address_markers_idx {
@@ -147,8 +145,6 @@ static struct addr_marker address_markers[] = {
[END_OF_SPACE_NR] = { -1, NULL }
};
-#define INIT_PGD (swapper_pg_dir)
-
#endif /* !CONFIG_X86_64 */
/* Multipliers for offsets within the PTEs */
@@ -524,10 +520,10 @@ static inline bool is_hypervisor_range(int idx)
#endif
}
-static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd,
+static void ptdump_walk_pgd_level_core(struct seq_file *m, struct mm_struct *mm,
bool checkwx, bool dmesg)
{
- pgd_t *start = pgd;
+ pgd_t *start = mm->pgd;
pgprotval_t prot, eff;
int i;
struct pg_state st = {};
@@ -574,39 +570,49 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd,
void ptdump_walk_pgd_level(struct seq_file *m, struct mm_struct *mm)
{
- ptdump_walk_pgd_level_core(m, mm->pgd, false, true);
+ ptdump_walk_pgd_level_core(m, mm, false, true);
}
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+static void ptdump_walk_pgd_level_user_core(struct seq_file *m,
+ struct mm_struct *mm,
+ bool checkwx, bool dmesg)
+{
+ struct mm_struct fake_mm = {
+ .pgd = kernel_to_user_pgdp(mm->pgd)
+ };
+ init_rwsem(&fake_mm.mmap_sem);
+ ptdump_walk_pgd_level_core(m, &fake_mm, checkwx, dmesg);
+}
+#endif
+
void ptdump_walk_pgd_level_debugfs(struct seq_file *m, struct mm_struct *mm,
bool user)
{
- pgd_t *pgd = mm->pgd;
#ifdef CONFIG_PAGE_TABLE_ISOLATION
if (user && static_cpu_has(X86_FEATURE_PTI))
- pgd = kernel_to_user_pgdp(pgd);
+ ptdump_walk_pgd_level_user_core(m, mm, false, false);
+ else
#endif
- ptdump_walk_pgd_level_core(m, pgd, false, false);
+ ptdump_walk_pgd_level_core(m, mm, false, false);
}
EXPORT_SYMBOL_GPL(ptdump_walk_pgd_level_debugfs);
void ptdump_walk_user_pgd_level_checkwx(void)
{
#ifdef CONFIG_PAGE_TABLE_ISOLATION
- pgd_t *pgd = INIT_PGD;
-
if (!(__supported_pte_mask & _PAGE_NX) ||
!static_cpu_has(X86_FEATURE_PTI))
return;
pr_info("x86/mm: Checking user space page tables\n");
- pgd = kernel_to_user_pgdp(pgd);
- ptdump_walk_pgd_level_core(NULL, pgd, true, false);
+ ptdump_walk_pgd_level_user_core(NULL, &init_mm, true, false);
#endif
}
void ptdump_walk_pgd_level_checkwx(void)
{
- ptdump_walk_pgd_level_core(NULL, INIT_PGD, true, false);
+ ptdump_walk_pgd_level_core(NULL, &init_mm, true, false);
}
static int __init pt_dump_init(void)
--
2.20.1
Make use of the new functionality in walk_page_range() to remove the
arch page walking code and use the generic code to walk the page
tables.
The effective permissions are passed down the chain using new fields
in struct pg_state.
The KASAN optimisation is implemented by including test_p?d callbacks
which can decide to skip an entire tree of entries.
Signed-off-by: Steven Price <[email protected]>
---
arch/x86/mm/dump_pagetables.c | 282 ++++++++++++++++++----------------
1 file changed, 146 insertions(+), 136 deletions(-)
diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index b1c04ecc18cc..f6b814aaddf7 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -33,6 +33,10 @@ struct pg_state {
int level;
pgprot_t current_prot;
pgprotval_t effective_prot;
+ pgprotval_t effective_prot_pgd;
+ pgprotval_t effective_prot_p4d;
+ pgprotval_t effective_prot_pud;
+ pgprotval_t effective_prot_pmd;
unsigned long start_address;
unsigned long current_address;
const struct addr_marker *marker;
@@ -356,22 +360,21 @@ static inline pgprotval_t effective_prot(pgprotval_t prot1, pgprotval_t prot2)
((prot1 | prot2) & _PAGE_NX);
}
-static void walk_pte_level(struct pg_state *st, pmd_t addr, pgprotval_t eff_in,
- unsigned long P)
+static int ptdump_pte_entry(pte_t *pte, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
{
- int i;
- pte_t *pte;
- pgprotval_t prot, eff;
-
- for (i = 0; i < PTRS_PER_PTE; i++) {
- st->current_address = normalize_addr(P + i * PTE_LEVEL_MULT);
- pte = pte_offset_map(&addr, st->current_address);
- prot = pte_flags(*pte);
- eff = effective_prot(eff_in, prot);
- note_page(st, __pgprot(prot), eff, 5);
- pte_unmap(pte);
- }
+ struct pg_state *st = walk->private;
+ pgprotval_t eff, prot;
+
+ st->current_address = normalize_addr(addr);
+
+ prot = pte_flags(*pte);
+ eff = effective_prot(st->effective_prot_pmd, prot);
+ note_page(st, __pgprot(prot), eff, 5);
+
+ return 0;
}
+
#ifdef CONFIG_KASAN
/*
@@ -400,133 +403,152 @@ static inline bool kasan_page_table(struct pg_state *st, void *pt)
}
#endif
-#if PTRS_PER_PMD > 1
-
-static void walk_pmd_level(struct pg_state *st, pud_t addr,
- pgprotval_t eff_in, unsigned long P)
+static int ptdump_test_pmd(unsigned long addr, unsigned long next,
+ pmd_t *pmd, struct mm_walk *walk)
{
- int i;
- pmd_t *start, *pmd_start;
- pgprotval_t prot, eff;
-
- pmd_start = start = (pmd_t *)pud_page_vaddr(addr);
- for (i = 0; i < PTRS_PER_PMD; i++) {
- st->current_address = normalize_addr(P + i * PMD_LEVEL_MULT);
- if (!pmd_none(*start)) {
- prot = pmd_flags(*start);
- eff = effective_prot(eff_in, prot);
- if (pmd_large(*start) || !pmd_present(*start)) {
- note_page(st, __pgprot(prot), eff, 4);
- } else if (!kasan_page_table(st, pmd_start)) {
- walk_pte_level(st, *start, eff,
- P + i * PMD_LEVEL_MULT);
- }
- } else
- note_page(st, __pgprot(0), 0, 4);
- start++;
- }
+ struct pg_state *st = walk->private;
+
+ st->current_address = normalize_addr(addr);
+
+ if (kasan_page_table(st, pmd))
+ return 1;
+ return 0;
}
-#else
-#define walk_pmd_level(s,a,e,p) walk_pte_level(s,__pmd(pud_val(a)),e,p)
-#undef pud_large
-#define pud_large(a) pmd_large(__pmd(pud_val(a)))
-#define pud_none(a) pmd_none(__pmd(pud_val(a)))
-#endif
+static int ptdump_pmd_entry(pmd_t *pmd, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ struct pg_state *st = walk->private;
+ pgprotval_t eff, prot;
+
+ prot = pmd_flags(*pmd);
+ eff = effective_prot(st->effective_prot_pud, prot);
+
+ st->current_address = normalize_addr(addr);
+
+ if (pmd_large(*pmd))
+ note_page(st, __pgprot(prot), eff, 4);
-#if PTRS_PER_PUD > 1
+ st->effective_prot_pmd = eff;
-static void walk_pud_level(struct pg_state *st, p4d_t addr, pgprotval_t eff_in,
- unsigned long P)
+ return 0;
+}
+
+static int ptdump_test_pud(unsigned long addr, unsigned long next,
+ pud_t *pud, struct mm_walk *walk)
{
- int i;
- pud_t *start, *pud_start;
- pgprotval_t prot, eff;
- pud_t *prev_pud = NULL;
-
- pud_start = start = (pud_t *)p4d_page_vaddr(addr);
-
- for (i = 0; i < PTRS_PER_PUD; i++) {
- st->current_address = normalize_addr(P + i * PUD_LEVEL_MULT);
- if (!pud_none(*start)) {
- prot = pud_flags(*start);
- eff = effective_prot(eff_in, prot);
- if (pud_large(*start) || !pud_present(*start)) {
- note_page(st, __pgprot(prot), eff, 3);
- } else if (!kasan_page_table(st, pud_start)) {
- walk_pmd_level(st, *start, eff,
- P + i * PUD_LEVEL_MULT);
- }
- } else
- note_page(st, __pgprot(0), 0, 3);
+ struct pg_state *st = walk->private;
- prev_pud = start;
- start++;
- }
+ st->current_address = normalize_addr(addr);
+
+ if (kasan_page_table(st, pud))
+ return 1;
+ return 0;
}
-#else
-#define walk_pud_level(s,a,e,p) walk_pmd_level(s,__pud(p4d_val(a)),e,p)
-#undef p4d_large
-#define p4d_large(a) pud_large(__pud(p4d_val(a)))
-#define p4d_none(a) pud_none(__pud(p4d_val(a)))
-#endif
+static int ptdump_pud_entry(pud_t *pud, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ struct pg_state *st = walk->private;
+ pgprotval_t eff, prot;
+
+ prot = pud_flags(*pud);
+ eff = effective_prot(st->effective_prot_p4d, prot);
+
+ st->current_address = normalize_addr(addr);
+
+ if (pud_large(*pud))
+ note_page(st, __pgprot(prot), eff, 3);
+
+ st->effective_prot_pud = eff;
-static void walk_p4d_level(struct pg_state *st, pgd_t addr, pgprotval_t eff_in,
- unsigned long P)
+ return 0;
+}
+
+static int ptdump_test_p4d(unsigned long addr, unsigned long next,
+ p4d_t *p4d, struct mm_walk *walk)
{
- int i;
- p4d_t *start, *p4d_start;
- pgprotval_t prot, eff;
-
- if (PTRS_PER_P4D == 1)
- return walk_pud_level(st, __p4d(pgd_val(addr)), eff_in, P);
-
- p4d_start = start = (p4d_t *)pgd_page_vaddr(addr);
-
- for (i = 0; i < PTRS_PER_P4D; i++) {
- st->current_address = normalize_addr(P + i * P4D_LEVEL_MULT);
- if (!p4d_none(*start)) {
- prot = p4d_flags(*start);
- eff = effective_prot(eff_in, prot);
- if (p4d_large(*start) || !p4d_present(*start)) {
- note_page(st, __pgprot(prot), eff, 2);
- } else if (!kasan_page_table(st, p4d_start)) {
- walk_pud_level(st, *start, eff,
- P + i * P4D_LEVEL_MULT);
- }
- } else
- note_page(st, __pgprot(0), 0, 2);
+ struct pg_state *st = walk->private;
- start++;
- }
+ st->current_address = normalize_addr(addr);
+
+ if (kasan_page_table(st, p4d))
+ return 1;
+ return 0;
}
-#undef pgd_large
-#define pgd_large(a) (pgtable_l5_enabled() ? pgd_large(a) : p4d_large(__p4d(pgd_val(a))))
-#define pgd_none(a) (pgtable_l5_enabled() ? pgd_none(a) : p4d_none(__p4d(pgd_val(a))))
+static int ptdump_p4d_entry(p4d_t *p4d, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ struct pg_state *st = walk->private;
+ pgprotval_t eff, prot;
+
+ prot = p4d_flags(*p4d);
+ eff = effective_prot(st->effective_prot_pgd, prot);
+
+ st->current_address = normalize_addr(addr);
+
+ if (p4d_large(*p4d))
+ note_page(st, __pgprot(prot), eff, 2);
+
+ st->effective_prot_p4d = eff;
+
+ return 0;
+}
-static inline bool is_hypervisor_range(int idx)
+static int ptdump_pgd_entry(pgd_t *pgd, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
{
-#ifdef CONFIG_X86_64
- /*
- * A hole in the beginning of kernel address space reserved
- * for a hypervisor.
- */
- return (idx >= pgd_index(GUARD_HOLE_BASE_ADDR)) &&
- (idx < pgd_index(GUARD_HOLE_END_ADDR));
+ struct pg_state *st = walk->private;
+ pgprotval_t eff, prot;
+
+ prot = pgd_flags(*pgd);
+
+#ifdef CONFIG_X86_PAE
+ eff = _PAGE_USER | _PAGE_RW;
#else
- return false;
+ eff = prot;
#endif
+
+ st->current_address = normalize_addr(addr);
+
+ if (pgd_large(*pgd))
+ note_page(st, __pgprot(prot), eff, 1);
+
+ st->effective_prot_pgd = eff;
+
+ return 0;
+}
+
+static int ptdump_hole(unsigned long addr, unsigned long next,
+ struct mm_walk *walk)
+{
+ struct pg_state *st = walk->private;
+
+ st->current_address = normalize_addr(addr);
+
+ note_page(st, __pgprot(0), 0, -1);
+
+ return 0;
}
static void ptdump_walk_pgd_level_core(struct seq_file *m, struct mm_struct *mm,
bool checkwx, bool dmesg)
{
- pgd_t *start = mm->pgd;
- pgprotval_t prot, eff;
- int i;
struct pg_state st = {};
+ struct mm_walk walk = {
+ .mm = mm,
+ .pgd_entry = ptdump_pgd_entry,
+ .p4d_entry = ptdump_p4d_entry,
+ .pud_entry = ptdump_pud_entry,
+ .pmd_entry = ptdump_pmd_entry,
+ .pte_entry = ptdump_pte_entry,
+ .test_p4d = ptdump_test_p4d,
+ .test_pud = ptdump_test_pud,
+ .test_pmd = ptdump_test_pmd,
+ .pte_hole = ptdump_hole,
+ .private = &st
+ };
st.to_dmesg = dmesg;
st.check_wx = checkwx;
@@ -534,27 +556,15 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, struct mm_struct *mm,
if (checkwx)
st.wx_pages = 0;
- for (i = 0; i < PTRS_PER_PGD; i++) {
- st.current_address = normalize_addr(i * PGD_LEVEL_MULT);
- if (!pgd_none(*start) && !is_hypervisor_range(i)) {
- prot = pgd_flags(*start);
-#ifdef CONFIG_X86_PAE
- eff = _PAGE_USER | _PAGE_RW;
+ down_read(&mm->mmap_sem);
+#ifdef CONFIG_X86_64
+ walk_page_range(0, PTRS_PER_PGD*PGD_LEVEL_MULT/2, &walk);
+ walk_page_range(normalize_addr(PTRS_PER_PGD*PGD_LEVEL_MULT/2), ~0,
+ &walk);
#else
- eff = prot;
+ walk_page_range(0, ~0, &walk);
#endif
- if (pgd_large(*start) || !pgd_present(*start)) {
- note_page(&st, __pgprot(prot), eff, 1);
- } else {
- walk_p4d_level(&st, *start, eff,
- i * PGD_LEVEL_MULT);
- }
- } else
- note_page(&st, __pgprot(0), 0, 1);
-
- cond_resched();
- start++;
- }
+ up_read(&mm->mmap_sem);
/* Flush out the last page */
st.current_address = normalize_addr(PTRS_PER_PGD*PGD_LEVEL_MULT);
--
2.20.1
It is useful to be able to skip parts of the page table tree even when
walking without VMAs. Add test_p?d callbacks similar to test_walk but
which are called just before a table at that level is walked. If the
callback returns non-zero then the entire table is skipped.
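As a sketch of the intended use (pmd_table_can_be_skipped() is a
hypothetical predicate, not part of this patch):

static int my_test_pmd(unsigned long addr, unsigned long next,
		       pmd_t *pmd_start, struct mm_walk *walk)
{
	/* Returning non-zero skips every entry in this pmd table */
	if (pmd_table_can_be_skipped(pmd_start))
		return 1;
	return 0;	/* walk the table as normal */
}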
Signed-off-by: Steven Price <[email protected]>
---
include/linux/mm.h | 11 +++++++++++
mm/pagewalk.c | 24 ++++++++++++++++++++++++
2 files changed, 35 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1a4b1615d012..4755af1779f6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1427,6 +1427,11 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
* value means "do page table walk over the current vma,"
* and a negative one means "abort current page table walk
* right now." 1 means "skip the current vma."
+ * @test_pmd: similar to test_walk(), but called for every pmd.
+ * @test_pud: similar to test_walk(), but called for every pud.
+ * @test_p4d: similar to test_walk(), but called for every p4d.
+ * Returning 0 means walk this part of the page tables,
+ * returning 1 means to skip this range.
* @mm: mm_struct representing the target process of page table walk
* @vma: vma currently walked (NULL if walking outside vmas)
* @private: private data for callbacks' usage
@@ -1451,6 +1456,12 @@ struct mm_walk {
struct mm_walk *walk);
int (*test_walk)(unsigned long addr, unsigned long next,
struct mm_walk *walk);
+ int (*test_pmd)(unsigned long addr, unsigned long next,
+ pmd_t *pmd_start, struct mm_walk *walk);
+ int (*test_pud)(unsigned long addr, unsigned long next,
+ pud_t *pud_start, struct mm_walk *walk);
+ int (*test_p4d)(unsigned long addr, unsigned long next,
+ p4d_t *p4d_start, struct mm_walk *walk);
struct mm_struct *mm;
struct vm_area_struct *vma;
void *private;
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index dac0c848b458..231655db1295 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -32,6 +32,14 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
unsigned long next;
int err = 0;
+ if (walk->test_pmd) {
+ err = walk->test_pmd(addr, end, pmd_offset(pud, 0), walk);
+ if (err < 0)
+ return err;
+ if (err > 0)
+ return 0;
+ }
+
pmd = pmd_offset(pud, addr);
do {
again:
@@ -82,6 +90,14 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
unsigned long next;
int err = 0;
+ if (walk->test_pud) {
+ err = walk->test_pud(addr, end, pud_offset(p4d, 0), walk);
+ if (err < 0)
+ return err;
+ if (err > 0)
+ return 0;
+ }
+
pud = pud_offset(p4d, addr);
do {
again:
@@ -124,6 +140,14 @@ static int walk_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
unsigned long next;
int err = 0;
+ if (walk->test_p4d) {
+ err = walk->test_p4d(addr, end, p4d_offset(pgd, 0), walk);
+ if (err < 0)
+ return err;
+ if (err > 0)
+ return 0;
+ }
+
p4d = p4d_offset(pgd, addr);
do {
next = p4d_addr_end(addr, end);
--
2.20.1
Since commit 48684a65b4e3 ("mm: pagewalk: fix misbehavior of
walk_page_range for vma(VM_PFNMAP)"), walk_page_range() reports any
kernel area as a hole, because it lacks a vma.
This means each arch has re-implemented page table walking when needed,
for example in the per-arch ptdump walker.
Remove the requirement to have a vma except when trying to split huge
pages.
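With this change a caller can walk a kernel address range along the
lines of the sketch below; my_pte_entry() and my_pte_hole() are
hypothetical callbacks and the range is only illustrative:

struct mm_walk walk = {
	.mm = &init_mm,
	.pte_entry = my_pte_entry,
	.pte_hole = my_pte_hole,
};

down_read(&init_mm.mmap_sem);
walk_page_range(VMALLOC_START, VMALLOC_END, &walk);
up_read(&init_mm.mmap_sem);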
Signed-off-by: Steven Price <[email protected]>
---
mm/pagewalk.c | 25 +++++++++++++++++--------
1 file changed, 17 insertions(+), 8 deletions(-)
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index 98373a9f88b8..dac0c848b458 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -36,7 +36,7 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
do {
again:
next = pmd_addr_end(addr, end);
- if (pmd_none(*pmd) || !walk->vma) {
+ if (pmd_none(*pmd)) {
if (walk->pte_hole)
err = walk->pte_hole(addr, next, walk);
if (err)
@@ -59,9 +59,14 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
if (!walk->pte_entry)
continue;
- split_huge_pmd(walk->vma, pmd, addr);
- if (pmd_trans_unstable(pmd))
- goto again;
+ if (walk->vma) {
+ split_huge_pmd(walk->vma, pmd, addr);
+ if (pmd_trans_unstable(pmd))
+ goto again;
+ } else if (pmd_large(*pmd)) {
+ continue;
+ }
+
err = walk_pte_range(pmd, addr, next, walk);
if (err)
break;
@@ -81,7 +86,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
do {
again:
next = pud_addr_end(addr, end);
- if (pud_none(*pud) || !walk->vma) {
+ if (pud_none(*pud)) {
if (walk->pte_hole)
err = walk->pte_hole(addr, next, walk);
if (err)
@@ -95,9 +100,13 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
break;
}
- split_huge_pud(walk->vma, pud, addr);
- if (pud_none(*pud))
- goto again;
+ if (walk->vma) {
+ split_huge_pud(walk->vma, pud, addr);
+ if (pud_none(*pud))
+ goto again;
+ } else if (pud_large(*pud)) {
+ continue;
+ }
if (walk->pmd_entry || walk->pte_entry)
err = walk_pmd_range(pud, addr, next, walk);
--
2.20.1
pgd_entry() and pud_entry() were removed by commit 0b1fbfe50006c410
("mm/pagewalk: remove pgd_entry() and pud_entry()") because there were
no users. We're about to add users so reintroduce them, along with
p4d_entry() as we now have 5 levels of tables.
Note that commit a00cc7d9dd93d66a ("mm, x86: add support for
PUD-sized transparent hugepages") already re-added pud_entry() but with
different semantics to the other callbacks. Since there have never
been upstream users of this, revert the semantics back to match the
other callbacks. This means pud_entry() is called for all entries, not
just transparent huge pages.
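A sketch of what this means for a pud_entry() callback that is only
interested in huge mappings; since it is now called for every non-empty
entry, it must filter for leaf entries itself (using the p?d_large()
helpers from earlier in this series, for example):

static int my_pud_entry(pud_t *pud, unsigned long addr,
			unsigned long next, struct mm_walk *walk)
{
	if (!pud_large(*pud))
		return 0;	/* a table entry: the walker will descend */
	/* handle the huge PUD mapping here */
	return 0;
}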
Signed-off-by: Steven Price <[email protected]>
---
include/linux/mm.h | 9 ++++++---
mm/pagewalk.c | 27 ++++++++++++++++-----------
2 files changed, 22 insertions(+), 14 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 80bb6408fe73..1a4b1615d012 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1412,10 +1412,9 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
/**
* mm_walk - callbacks for walk_page_range
+ * @pgd_entry: if set, called for each non-empty PGD (top-level) entry
+ * @p4d_entry: if set, called for each non-empty P4D (1st-level) entry
* @pud_entry: if set, called for each non-empty PUD (2nd-level) entry
- * this handler should only handle pud_trans_huge() puds.
- * the pmd_entry or pte_entry callbacks will be used for
- * regular PUDs.
* @pmd_entry: if set, called for each non-empty PMD (3rd-level) entry
* this handler is required to be able to handle
* pmd_trans_huge() pmds. They may simply choose to
@@ -1435,6 +1434,10 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
* (see the comment on walk_page_range() for more details)
*/
struct mm_walk {
+ int (*pgd_entry)(pgd_t *pgd, unsigned long addr,
+ unsigned long next, struct mm_walk *walk);
+ int (*p4d_entry)(p4d_t *p4d, unsigned long addr,
+ unsigned long next, struct mm_walk *walk);
int (*pud_entry)(pud_t *pud, unsigned long addr,
unsigned long next, struct mm_walk *walk);
int (*pmd_entry)(pmd_t *pmd, unsigned long addr,
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index c3084ff2569d..98373a9f88b8 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -90,15 +90,9 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
}
if (walk->pud_entry) {
- spinlock_t *ptl = pud_trans_huge_lock(pud, walk->vma);
-
- if (ptl) {
- err = walk->pud_entry(pud, addr, next, walk);
- spin_unlock(ptl);
- if (err)
- break;
- continue;
- }
+ err = walk->pud_entry(pud, addr, next, walk);
+ if (err)
+ break;
}
split_huge_pud(walk->vma, pud, addr);
@@ -131,7 +125,12 @@ static int walk_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
break;
continue;
}
- if (walk->pmd_entry || walk->pte_entry)
+ if (walk->p4d_entry) {
+ err = walk->p4d_entry(p4d, addr, next, walk);
+ if (err)
+ break;
+ }
+ if (walk->pud_entry || walk->pmd_entry || walk->pte_entry)
err = walk_pud_range(p4d, addr, next, walk);
if (err)
break;
@@ -157,7 +156,13 @@ static int walk_pgd_range(unsigned long addr, unsigned long end,
break;
continue;
}
- if (walk->pmd_entry || walk->pte_entry)
+ if (walk->pgd_entry) {
+ err = walk->pgd_entry(pgd, addr, next, walk);
+ if (err)
+ break;
+ }
+ if (walk->p4d_entry || walk->pud_entry || walk->pmd_entry ||
+ walk->pte_entry)
err = walk_p4d_range(pgd, addr, next, walk);
if (err)
break;
--
2.20.1
For the /sys/kernel/debug/page_tables/ files, rather than outputting a
mostly empty line when a block of memory isn't present, just skip the
line. This keeps the output shorter and will help with a future change
switching to the generic page walk code, as we no longer care about the
'level' at which the page table holes sit.
Signed-off-by: Steven Price <[email protected]>
---
arch/x86/mm/dump_pagetables.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index cf37abc0f58a..f9eb25dd3766 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -304,8 +304,8 @@ static void note_page(struct seq_file *m, struct pg_state *st,
/*
* Now print the actual finished series
*/
- if (!st->marker->max_lines ||
- st->lines < st->marker->max_lines) {
+ if ((cur & _PAGE_PRESENT) && (!st->marker->max_lines ||
+ st->lines < st->marker->max_lines)) {
pt_dump_seq_printf(m, st->to_dmesg,
"0x%0*lx-0x%0*lx ",
width, st->start_address,
@@ -321,7 +321,8 @@ static void note_page(struct seq_file *m, struct pg_state *st,
printk_prot(m, st->current_prot, st->level,
st->to_dmesg);
}
- st->lines++;
+ if (cur & _PAGE_PRESENT)
+ st->lines++;
/*
* We print markers for special areas of address space,
--
2.20.1
Hi Steven,
On Wed, Mar 06, 2019 at 03:50:15PM +0000, Steven Price wrote:
> walk_page_range() is going to be allowed to walk page tables other than
> those of user space. For this it needs to know when it has reached a
> 'leaf' entry in the page tables. This information is provided by the
> p?d_large() functions/macros.
>
> For mips, we only support large pages on 64 bit.
>
> For 64 bit if _PAGE_HUGE is defined we can simply look for it. When not
> defined we can be confident that there are no large pages in existence
> and fall back on the generic implementation (added in a later patch)
> which returns 0.
>
> CC: Ralf Baechle <[email protected]>
> CC: Paul Burton <[email protected]>
> CC: James Hogan <[email protected]>
> CC: [email protected]
> Signed-off-by: Steven Price <[email protected]>
Acked-by: Paul Burton <[email protected]>
Thanks,
Paul
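The mips patch itself is not reproduced in this thread. A sketch
consistent with the description above (assuming the change simply
tests _PAGE_HUGE in the entry, as the commit message implies) would
be:

/* arch/mips/include/asm/pgtable-64.h (sketch) */
#ifdef _PAGE_HUGE
#define pmd_large pmd_large
static inline int pmd_large(pmd_t pmd)
{
	return (pmd_val(pmd) & _PAGE_HUGE) != 0;
}

#define pud_large pud_large
static inline int pud_large(pud_t pud)
{
	return (pud_val(pud) & _PAGE_HUGE) != 0;
}
#endif /* _PAGE_HUGE */

When _PAGE_HUGE is not defined, no definitions are provided and the
generic versions (which always return 0) are picked up instead.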
walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_large() functions/macros.
For x86, we already have static inline functions, so simply add
#defines to prevent the generic versions (added in a later patch) from
being picked up.
We also need to add corresponding #undefs in dump_pagetables.c. This
code will be removed when x86 is switched over to using the generic
pagewalk code in a later patch.
Signed-off-by: Steven Price <[email protected]>
---
arch/x86/include/asm/pgtable.h | 5 +++++
arch/x86/mm/dump_pagetables.c | 3 +++
2 files changed, 8 insertions(+)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 2779ace16d23..0dd04cf6ebeb 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -222,6 +222,7 @@ static inline unsigned long pgd_pfn(pgd_t pgd)
return (pgd_val(pgd) & PTE_PFN_MASK) >> PAGE_SHIFT;
}
+#define p4d_large p4d_large
static inline int p4d_large(p4d_t p4d)
{
/* No 512 GiB pages yet */
@@ -230,6 +231,7 @@ static inline int p4d_large(p4d_t p4d)
#define pte_page(pte) pfn_to_page(pte_pfn(pte))
+#define pmd_large pmd_large
static inline int pmd_large(pmd_t pte)
{
return pmd_flags(pte) & _PAGE_PSE;
@@ -857,6 +859,7 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
}
+#define pud_large pud_large
static inline int pud_large(pud_t pud)
{
return (pud_val(pud) & (_PAGE_PSE | _PAGE_PRESENT)) ==
@@ -868,6 +871,7 @@ static inline int pud_bad(pud_t pud)
return (pud_flags(pud) & ~(_KERNPG_TABLE | _PAGE_USER)) != 0;
}
#else
+#define pud_large pud_large
static inline int pud_large(pud_t pud)
{
return 0;
@@ -1213,6 +1217,7 @@ static inline bool pgdp_maps_userspace(void *__ptr)
return (((ptr & ~PAGE_MASK) / sizeof(pgd_t)) < PGD_KERNEL_START);
}
+#define pgd_large pgd_large
static inline int pgd_large(pgd_t pgd) { return 0; }
#ifdef CONFIG_PAGE_TABLE_ISOLATION
diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index e3cdc85ce5b6..cf37abc0f58a 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -432,6 +432,7 @@ static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr,
#else
#define walk_pmd_level(m,s,a,e,p) walk_pte_level(m,s,__pmd(pud_val(a)),e,p)
+#undef pud_large
#define pud_large(a) pmd_large(__pmd(pud_val(a)))
#define pud_none(a) pmd_none(__pmd(pud_val(a)))
#endif
@@ -469,6 +470,7 @@ static void walk_pud_level(struct seq_file *m, struct pg_state *st, p4d_t addr,
#else
#define walk_pud_level(m,s,a,e,p) walk_pmd_level(m,s,__pud(p4d_val(a)),e,p)
+#undef p4d_large
#define p4d_large(a) pud_large(__pud(p4d_val(a)))
#define p4d_none(a) pud_none(__pud(p4d_val(a)))
#endif
@@ -503,6 +505,7 @@ static void walk_p4d_level(struct seq_file *m, struct pg_state *st, pgd_t addr,
}
}
+#undef pgd_large
#define pgd_large(a) (pgtable_l5_enabled() ? pgd_large(a) : p4d_large(__p4d(pgd_val(a))))
#define pgd_none(a) (pgtable_l5_enabled() ? pgd_none(a) : p4d_none(__p4d(pgd_val(a))))
--
2.20.1
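For reference, the generic fallbacks that these #defines guard against
are wrapped in #ifndef, so an architecture's own macro (or the
'#define foo foo' marker next to a static inline, as above) suppresses
them. A sketch of what the later patch adds (the exact file and
comments are assumptions):

/* include/asm-generic/pgtable.h (sketch): used only when the
 * architecture has no large pages at this level.
 */
#ifndef pmd_large
#define pmd_large(pmd)	0
#endif
#ifndef pud_large
#define pud_large(pud)	0
#endif
#ifndef p4d_large
#define p4d_large(p4d)	0
#endif
#ifndef pgd_large
#define pgd_large(pgd)	0
#endif

This is also why dump_pagetables.c needs the #undefs: it locally
redefines pud_large()/p4d_large()/pgd_large() for folded levels, and
redefining a macro that pgtable.h has already defined would otherwise
trigger a macro redefinition warning.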
walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_large() functions/macros.
For sparc 64 bit, pmd_large() and pud_large() are already provided, so
add #defines to prevent the generic versions (added in a later patch)
from being used.
CC: "David S. Miller" <[email protected]>
CC: [email protected]
Signed-off-by: Steven Price <[email protected]>
---
arch/sparc/include/asm/pgtable_64.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 1393a8ac596b..f502e937c8fe 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -713,6 +713,7 @@ static inline unsigned long pte_special(pte_t pte)
return pte_val(pte) & _PAGE_SPECIAL;
}
+#define pmd_large pmd_large
static inline unsigned long pmd_large(pmd_t pmd)
{
pte_t pte = __pte(pmd_val(pmd));
@@ -894,6 +895,7 @@ static inline unsigned long pud_page_vaddr(pud_t pud)
#define pgd_present(pgd) (pgd_val(pgd) != 0U)
#define pgd_clear(pgdp) (pgd_val(*(pgdp)) = 0UL)
+#define pud_large pud_large
static inline unsigned long pud_large(pud_t pud)
{
pte_t pte = __pte(pud_val(pud));
--
2.20.1
From: Steven Price <[email protected]>
Date: Wed, 6 Mar 2019 15:50:19 +0000
> walk_page_range() is going to be allowed to walk page tables other than
> those of user space. For this it needs to know when it has reached a
> 'leaf' entry in the page tables. This information is provided by the
> p?d_large() functions/macros.
>
> For sparc 64 bit, pmd_large() and pud_large() are already provided, so
> add #defines to prevent the generic versions (added in a later patch)
> from being used.
>
> CC: "David S. Miller" <[email protected]>
> CC: [email protected]
> Signed-off-by: Steven Price <[email protected]>
Acked-by: David S. Miller <[email protected]>
On Wed, Mar 06, 2019 at 03:50:16PM +0000, Steven Price wrote:
> walk_page_range() is going to be allowed to walk page tables other than
> those of user space. For this it needs to know when it has reached a
> 'leaf' entry in the page tables. This information is provided by the
> p?d_large() functions/macros.
>
> For powerpc pmd_large() was already implemented, so hoist it out of the
> CONFIG_TRANSPARENT_HUGEPAGE condition and implement the other levels.
>
> Also since we now have a pmd_large always implemented we can drop the
> pmd_is_leaf() function.
>
> CC: Benjamin Herrenschmidt <[email protected]>
> CC: Paul Mackerras <[email protected]>
> CC: Michael Ellerman <[email protected]>
> CC: [email protected]
> CC: [email protected]
> Signed-off-by: Steven Price <[email protected]>
> ---
> arch/powerpc/include/asm/book3s/64/pgtable.h | 30 ++++++++++++++------
There is one more definition of pmd_large() in
arch/powerpc/include/asm/pgtable.h
> arch/powerpc/kvm/book3s_64_mmu_radix.c | 12 ++------
> 2 files changed, 24 insertions(+), 18 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index c9bfe526ca9d..c4b29caf2a3b 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -907,6 +907,12 @@ static inline int pud_present(pud_t pud)
> return (pud_raw(pud) & cpu_to_be64(_PAGE_PRESENT));
> }
>
> +#define pud_large pud_large
> +static inline int pud_large(pud_t pud)
> +{
> + return !!(pud_raw(pud) & cpu_to_be64(_PAGE_PTE));
> +}
> +
> extern struct page *pud_page(pud_t pud);
> extern struct page *pmd_page(pmd_t pmd);
> static inline pte_t pud_pte(pud_t pud)
> @@ -954,6 +960,12 @@ static inline int pgd_present(pgd_t pgd)
> return (pgd_raw(pgd) & cpu_to_be64(_PAGE_PRESENT));
> }
>
> +#define pgd_large pgd_large
> +static inline int pgd_large(pgd_t pgd)
> +{
> + return !!(pgd_raw(pgd) & cpu_to_be64(_PAGE_PTE));
> +}
> +
> static inline pte_t pgd_pte(pgd_t pgd)
> {
> return __pte_raw(pgd_raw(pgd));
> @@ -1107,6 +1119,15 @@ static inline bool pmd_access_permitted(pmd_t pmd, bool write)
> return pte_access_permitted(pmd_pte(pmd), write);
> }
>
> +#define pmd_large pmd_large
> +/*
> + * returns true for pmd migration entries, THP, devmap, hugetlb
> + */
> +static inline int pmd_large(pmd_t pmd)
> +{
> + return !!(pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE));
> +}
> +
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> extern pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot);
> extern pmd_t mk_pmd(struct page *page, pgprot_t pgprot);
> @@ -1133,15 +1154,6 @@ pmd_hugepage_update(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp,
> return hash__pmd_hugepage_update(mm, addr, pmdp, clr, set);
> }
>
> -/*
> - * returns true for pmd migration entries, THP, devmap, hugetlb
> - * But compile time dependent on THP config
> - */
> -static inline int pmd_large(pmd_t pmd)
> -{
> - return !!(pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE));
> -}
> -
> static inline pmd_t pmd_mknotpresent(pmd_t pmd)
> {
> return __pmd(pmd_val(pmd) & ~_PAGE_PRESENT);
> diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> index 1b821c6efdef..040db20ac2ab 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> @@ -363,12 +363,6 @@ static void kvmppc_pte_free(pte_t *ptep)
> kmem_cache_free(kvm_pte_cache, ptep);
> }
>
> -/* Like pmd_huge() and pmd_large(), but works regardless of config options */
> -static inline int pmd_is_leaf(pmd_t pmd)
> -{
> - return !!(pmd_val(pmd) & _PAGE_PTE);
> -}
> -
> static pmd_t *kvmppc_pmd_alloc(void)
> {
> return kmem_cache_alloc(kvm_pmd_cache, GFP_KERNEL);
> @@ -455,7 +449,7 @@ static void kvmppc_unmap_free_pmd(struct kvm *kvm, pmd_t *pmd, bool full,
> for (im = 0; im < PTRS_PER_PMD; ++im, ++p) {
> if (!pmd_present(*p))
> continue;
> - if (pmd_is_leaf(*p)) {
> + if (pmd_large(*p)) {
> if (full) {
> pmd_clear(p);
> } else {
> @@ -588,7 +582,7 @@ int kvmppc_create_pte(struct kvm *kvm, pgd_t *pgtable, pte_t pte,
> else if (level <= 1)
> new_pmd = kvmppc_pmd_alloc();
>
> - if (level == 0 && !(pmd && pmd_present(*pmd) && !pmd_is_leaf(*pmd)))
> + if (level == 0 && !(pmd && pmd_present(*pmd) && !pmd_large(*pmd)))
> new_ptep = kvmppc_pte_alloc();
>
> /* Check if we might have been invalidated; let the guest retry if so */
> @@ -657,7 +651,7 @@ int kvmppc_create_pte(struct kvm *kvm, pgd_t *pgtable, pte_t pte,
> new_pmd = NULL;
> }
> pmd = pmd_offset(pud, gpa);
> - if (pmd_is_leaf(*pmd)) {
> + if (pmd_large(*pmd)) {
> unsigned long lgpa = gpa & PMD_MASK;
>
> /* Check if we raced and someone else has set the same thing */
> --
> 2.20.1
>
--
Sincerely yours,
Mike.
On 08/03/2019 08:37, Mike Rapoport wrote:
> On Wed, Mar 06, 2019 at 03:50:16PM +0000, Steven Price wrote:
>> walk_page_range() is going to be allowed to walk page tables other than
>> those of user space. For this it needs to know when it has reached a
>> 'leaf' entry in the page tables. This information is provided by the
>> p?d_large() functions/macros.
>>
>> For powerpc pmd_large() was already implemented, so hoist it out of the
>> CONFIG_TRANSPARENT_HUGEPAGE condition and implement the other levels.
>>
>> Also since we now have a pmd_large always implemented we can drop the
>> pmd_is_leaf() function.
>>
>> CC: Benjamin Herrenschmidt <[email protected]>
>> CC: Paul Mackerras <[email protected]>
>> CC: Michael Ellerman <[email protected]>
>> CC: [email protected]
>> CC: [email protected]
>> Signed-off-by: Steven Price <[email protected]>
>> ---
>> arch/powerpc/include/asm/book3s/64/pgtable.h | 30 ++++++++++++++------
>
> There is one more definition of pmd_large() in
> arch/powerpc/include/asm/pgtable.h
True. That one is a #define, so it will work correctly (it will
override the generic version). Since it is only a dummy definition
(it always returns 0), it could be removed, but that would need to be
done in a separate patch, after the asm-generic versions have been
added, to avoid breaking bisection.
Steve
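Concretely (a sketch, assuming the generic fallbacks are #ifndef
guarded as described earlier in the thread): because
arch/powerpc/include/asm/pgtable.h already contains a function-like
macro along the lines of

#define pmd_large(pmd)	0

the later asm-generic fallback is never instantiated on powerpc, so
the two definitions cannot conflict. Had it instead been a static
inline without an accompanying '#define pmd_large pmd_large', the
generic macro would have silently shadowed it.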