The main purposes of this series are to:
- reorganise huge page handling to avoid using mm_slices.
- use huge pages to map kernel memory on the 8xx.
The 8xx supports 4 page sizes: 4k, 16k, 512k and 8M.
It uses two-level page tables: the PGD has 1024 entries, each entry
covering 4M of address space, and each page table has 1024 entries.
At present, page sizes are managed in PGD entries, implying the use
of mm_slices as different page sizes can't be mixed within a single
page table.
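For reference, a minimal sketch of that two-level address split
(assuming 4k base pages; all names below are illustrative only, not
the kernel's):

/* Illustrative only: 32-bit address split for 1024 PGD entries
 * (4M each) and 1024 PTEs per page table, with 4k base pages.
 */
#include <stdint.h>

#define EX_PGDIR_SHIFT	22	/* 2^22 = 4M covered per PGD entry */
#define EX_PAGE_SHIFT	12	/* 2^12 = 4k base page */

static unsigned int ex_pgd_index(uint32_t ea)
{
	return ea >> EX_PGDIR_SHIFT;		/* bits 31..22 -> 0..1023 */
}

static unsigned int ex_pte_index(uint32_t ea)
{
	return (ea >> EX_PAGE_SHIFT) & 0x3ff;	/* bits 21..12 -> 0..1023 */
}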
The first purpose of this series is to reorganise things so that
standard page tables can also handle 512k pages. This is done by
adding a new _PAGE_HUGE flag which is copied into the Level 1
entry in the TLB miss handler. Once that is done, we have two types
of PGD entries:
- PGD entries pointing to regular page tables handling 4k/16k and
  512k pages
- PGD entries pointing to hugepd tables handling 8M pages.
There is no need to mix 8M pages with other sizes, because an 8M page
spans more than a single PGD entry covers.
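As a rough sketch of the resulting distinction (bit values and helper
names below are placeholders, not the kernel's actual _PAGE_HUGE or
hugepd definitions):

/* Illustrative only: how a software walker could classify a mapping
 * once 512k pages live in standard page tables and 8M pages stay in
 * hugepd tables.
 */
#define EX_PAGE_HUGE	0x0800UL	/* placeholder for _PAGE_HUGE */
#define EX_PGD_HUGEPD	0x0001UL	/* placeholder "entry is a hugepd" marker */

static unsigned long ex_mapping_size(unsigned long pgd_entry,
				     unsigned long pte_entry)
{
	if (pgd_entry & EX_PGD_HUGEPD)	/* hugepd table: 8M pages only */
		return 8UL << 20;
	if (pte_entry & EX_PAGE_HUGE)	/* huge PTE in a regular page table */
		return 512UL << 10;	/* 512k */
	return 4UL << 10;		/* standard 4k (or 16k) page */
}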
Then comes the second purpose of this series. Until now, the 8xx has
had special handling in the TLB miss handlers in order to
transparently map the kernel linear address space and the IMMR using
huge pages, by building the TLB entries in assembly at the time of the
exception.
As mm_slices only applies to user space pages, and because it would
anyway not be convenient to slice the kernel address space, it was not
possible to use huge pages for the kernel address space. After step
one of the series, using huge pages becomes much more flexible.
This series drops all 'just in time' assembly handling of huge pages
and uses huge pages in the page tables instead.
Once the above is done, then comes the cherry on the cake:
- Use huge pages for KASAN shadow mapping
- Allow pinned TLBs with strict kernel rwx
- Allow pinned TLBs with debug pagealloc
Then, last but not least, those modifications to the 8xx allow the
following improvements on book3s/32:
- Mapping KASAN shadow with BATs
- Allowing BATs with debug pagealloc
All this considerably simplifies the TLB miss handlers and the
associated initialisation. The overhead of reading the page tables is
negligible compared to the reduction of the miss handlers.
While touching pte_update(), some cleanup was done there too.
Tested widely on 8xx and 832x. Boot tested on QEMU MAC99.
Christophe Leroy (46):
powerpc/kasan: Fix shadow memory protection with CONFIG_KASAN_VMALLOC
powerpc/kasan: Fix error detection on memory allocation
powerpc/kasan: Fix issues by lowering KASAN_SHADOW_END
powerpc/kasan: Fix shadow pages allocation failure
powerpc/kasan: Remove unnecessary page table locking
powerpc/kasan: Refactor update of early shadow mappings
powerpc/kasan: Declare kasan_init_region() weak
powerpc/ptdump: Limit size of flags text to 1/2 chars on PPC32
powerpc/ptdump: Reorder flags
powerpc/ptdump: Add _PAGE_COHERENT flag
powerpc/ptdump: Display size of BATs
powerpc/ptdump: Standardise display of BAT flags
powerpc/ptdump: Properly handle non standard page size
powerpc/ptdump: Handle hugepd at PGD level
powerpc/32s: Don't warn when mapping RO data ROX.
powerpc/mm: Allocate static page tables for fixmap
powerpc/mm: Fix conditions to perform MMU specific management by
blocks on PPC32.
powerpc/mm: PTE_ATOMIC_UPDATES is only for 40x
powerpc/mm: Refactor pte_update() on nohash/32
powerpc/mm: Refactor pte_update() on book3s/32
powerpc/mm: Standardise __ptep_test_and_clear_young() params between
PPC32 and PPC64
powerpc/mm: Standardise pte_update() prototype between PPC32 and PPC64
powerpc/mm: Create a dedicated pte_update() for 8xx
powerpc/mm: Reduce hugepd size for 8M hugepages on 8xx
powerpc/8xx: Drop CONFIG_8xx_COPYBACK option
powerpc/8xx: Prepare handlers for _PAGE_HUGE for 512k pages.
powerpc/8xx: Manage 512k huge pages as standard pages.
powerpc/8xx: Only 8M pages are hugepte pages now
powerpc/8xx: MM_SLICE is not needed anymore
powerpc/8xx: Move PPC_PIN_TLB options into 8xx Kconfig
powerpc/8xx: Add function to update pinned TLBs
powerpc/8xx: Don't set IMMR map anymore at boot
powerpc/8xx: Always pin TLBs at startup.
powerpc/8xx: Drop special handling of Linear and IMMR mappings in I/D
TLB handlers
powerpc/8xx: Remove now unused TLB miss functions
powerpc/8xx: Move DTLB perf handling closer.
powerpc/mm: Don't be too strict with _etext alignment on PPC32
powerpc/8xx: Refactor kernel address boundary comparison
powerpc/8xx: Add a function to early map kernel via huge pages
powerpc/8xx: Map IMMR with a huge page
powerpc/8xx: Map linear memory with huge pages
powerpc/8xx: Allow STRICT_KERNEL_RwX with pinned TLB
powerpc/8xx: Allow large TLBs with DEBUG_PAGEALLOC
powerpc/8xx: Implement dedicated kasan_init_region()
powerpc/32s: Allow mapping with BATs with DEBUG_PAGEALLOC
powerpc/32s: Implement dedicated kasan_init_region()
arch/powerpc/Kconfig | 62 +---
arch/powerpc/configs/adder875_defconfig | 1 -
arch/powerpc/configs/ep88xc_defconfig | 1 -
arch/powerpc/configs/mpc866_ads_defconfig | 1 -
arch/powerpc/configs/mpc885_ads_defconfig | 1 -
arch/powerpc/configs/tqm8xx_defconfig | 1 -
arch/powerpc/include/asm/book3s/32/pgtable.h | 78 ++---
arch/powerpc/include/asm/fixmap.h | 4 +
arch/powerpc/include/asm/hugetlb.h | 6 +-
arch/powerpc/include/asm/kasan.h | 10 +-
.../include/asm/nohash/32/hugetlb-8xx.h | 32 +-
arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 75 +----
arch/powerpc/include/asm/nohash/32/pgtable.h | 104 +++----
arch/powerpc/include/asm/nohash/32/pte-8xx.h | 4 +-
arch/powerpc/include/asm/nohash/32/slice.h | 20 --
arch/powerpc/include/asm/nohash/64/pgtable.h | 28 +-
arch/powerpc/include/asm/nohash/pgtable.h | 2 +-
arch/powerpc/include/asm/pgtable.h | 2 +
arch/powerpc/include/asm/slice.h | 2 -
arch/powerpc/kernel/head_8xx.S | 292 ++++++------------
arch/powerpc/kernel/setup_32.c | 2 +-
arch/powerpc/kernel/vmlinux.lds.S | 3 +-
arch/powerpc/mm/book3s32/mmu.c | 12 +-
arch/powerpc/mm/hugetlbpage.c | 43 +--
arch/powerpc/mm/init_32.c | 12 +-
arch/powerpc/mm/kasan/8xx.c | 74 +++++
arch/powerpc/mm/kasan/Makefile | 2 +
arch/powerpc/mm/kasan/book3s_32.c | 57 ++++
arch/powerpc/mm/kasan/kasan_init_32.c | 91 +++---
arch/powerpc/mm/mmu_decl.h | 4 +
arch/powerpc/mm/nohash/8xx.c | 250 ++++++++-------
arch/powerpc/mm/pgtable.c | 34 +-
arch/powerpc/mm/pgtable_32.c | 22 +-
arch/powerpc/mm/ptdump/8xx.c | 52 ++--
arch/powerpc/mm/ptdump/bats.c | 41 ++-
arch/powerpc/mm/ptdump/ptdump.c | 72 +++--
arch/powerpc/mm/ptdump/ptdump.h | 2 +
arch/powerpc/mm/ptdump/shared.c | 58 ++--
arch/powerpc/perf/8xx-pmu.c | 10 -
arch/powerpc/platforms/8xx/Kconfig | 50 ++-
arch/powerpc/platforms/Kconfig.cputype | 2 +-
arch/powerpc/sysdev/cpm_common.c | 2 +
42 files changed, 820 insertions(+), 801 deletions(-)
delete mode 100644 arch/powerpc/include/asm/nohash/32/slice.h
create mode 100644 arch/powerpc/mm/kasan/8xx.c
create mode 100644 arch/powerpc/mm/kasan/book3s_32.c
--
2.25.0
Reorder flags in a more logical way:
- Page size (huge) first
- User
- RWX
- Present
- WIMG
- Special
- Dirty and Accessed
Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/mm/ptdump/8xx.c | 30 +++++++++++++++---------------
arch/powerpc/mm/ptdump/shared.c | 30 +++++++++++++++---------------
2 files changed, 30 insertions(+), 30 deletions(-)
diff --git a/arch/powerpc/mm/ptdump/8xx.c b/arch/powerpc/mm/ptdump/8xx.c
index ca9ce94672f5..a3169677dced 100644
--- a/arch/powerpc/mm/ptdump/8xx.c
+++ b/arch/powerpc/mm/ptdump/8xx.c
@@ -11,11 +11,6 @@
static const struct flag_info flag_array[] = {
{
- .mask = _PAGE_SH,
- .val = _PAGE_SH,
- .set = "sh",
- .clear = " ",
- }, {
.mask = _PAGE_RO | _PAGE_NA,
.val = 0,
.set = "rw",
@@ -37,11 +32,26 @@ static const struct flag_info flag_array[] = {
.val = _PAGE_PRESENT,
.set = "p",
.clear = " ",
+ }, {
+ .mask = _PAGE_NO_CACHE,
+ .val = _PAGE_NO_CACHE,
+ .set = "i",
+ .clear = " ",
}, {
.mask = _PAGE_GUARDED,
.val = _PAGE_GUARDED,
.set = "g",
.clear = " ",
+ }, {
+ .mask = _PAGE_SH,
+ .val = _PAGE_SH,
+ .set = "sh",
+ .clear = " ",
+ }, {
+ .mask = _PAGE_SPECIAL,
+ .val = _PAGE_SPECIAL,
+ .set = "s",
+ .clear = " ",
}, {
.mask = _PAGE_DIRTY,
.val = _PAGE_DIRTY,
@@ -52,16 +62,6 @@ static const struct flag_info flag_array[] = {
.val = _PAGE_ACCESSED,
.set = "a",
.clear = " ",
- }, {
- .mask = _PAGE_NO_CACHE,
- .val = _PAGE_NO_CACHE,
- .set = "i",
- .clear = " ",
- }, {
- .mask = _PAGE_SPECIAL,
- .val = _PAGE_SPECIAL,
- .set = "s",
- .clear = " ",
}
};
diff --git a/arch/powerpc/mm/ptdump/shared.c b/arch/powerpc/mm/ptdump/shared.c
index 44a8a64a664f..dab5d8028a9b 100644
--- a/arch/powerpc/mm/ptdump/shared.c
+++ b/arch/powerpc/mm/ptdump/shared.c
@@ -30,21 +30,6 @@ static const struct flag_info flag_array[] = {
.val = _PAGE_PRESENT,
.set = "p",
.clear = " ",
- }, {
- .mask = _PAGE_GUARDED,
- .val = _PAGE_GUARDED,
- .set = "g",
- .clear = " ",
- }, {
- .mask = _PAGE_DIRTY,
- .val = _PAGE_DIRTY,
- .set = "d",
- .clear = " ",
- }, {
- .mask = _PAGE_ACCESSED,
- .val = _PAGE_ACCESSED,
- .set = "a",
- .clear = " ",
}, {
.mask = _PAGE_WRITETHRU,
.val = _PAGE_WRITETHRU,
@@ -55,11 +40,26 @@ static const struct flag_info flag_array[] = {
.val = _PAGE_NO_CACHE,
.set = "i",
.clear = " ",
+ }, {
+ .mask = _PAGE_GUARDED,
+ .val = _PAGE_GUARDED,
+ .set = "g",
+ .clear = " ",
}, {
.mask = _PAGE_SPECIAL,
.val = _PAGE_SPECIAL,
.set = "s",
.clear = " ",
+ }, {
+ .mask = _PAGE_DIRTY,
+ .val = _PAGE_DIRTY,
+ .set = "d",
+ .clear = " ",
+ }, {
+ .mask = _PAGE_ACCESSED,
+ .val = _PAGE_ACCESSED,
+ .set = "a",
+ .clear = " ",
}
};
--
2.25.0
Doing KASAN page allocation in MMU_init() is too early: the kernel
doesn't yet have access to the entire memory space, and
memblock_alloc() fails when the kernel is a bit big.
Do it from kasan_init() instead.
Fixes: 2edb16efc899 ("powerpc/32: Add KASAN support")
Cc: [email protected]
Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/kasan.h | 2 --
arch/powerpc/mm/init_32.c | 2 --
arch/powerpc/mm/kasan/kasan_init_32.c | 4 +++-
3 files changed, 3 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
index fc900937f653..4769bbf7173a 100644
--- a/arch/powerpc/include/asm/kasan.h
+++ b/arch/powerpc/include/asm/kasan.h
@@ -27,12 +27,10 @@
#ifdef CONFIG_KASAN
void kasan_early_init(void);
-void kasan_mmu_init(void);
void kasan_init(void);
void kasan_late_init(void);
#else
static inline void kasan_init(void) { }
-static inline void kasan_mmu_init(void) { }
static inline void kasan_late_init(void) { }
#endif
diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index 872df48ae41b..a6991ef8727d 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -170,8 +170,6 @@ void __init MMU_init(void)
btext_unmap();
#endif
- kasan_mmu_init();
-
setup_kup();
/* Shortly after that, the entire linear mapping will be available */
diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
index 60c2acdf73a7..c41e700153da 100644
--- a/arch/powerpc/mm/kasan/kasan_init_32.c
+++ b/arch/powerpc/mm/kasan/kasan_init_32.c
@@ -131,7 +131,7 @@ static void __init kasan_unmap_early_shadow_vmalloc(void)
flush_tlb_kernel_range(k_start, k_end);
}
-void __init kasan_mmu_init(void)
+static void __init kasan_mmu_init(void)
{
int ret;
struct memblock_region *reg;
@@ -159,6 +159,8 @@ void __init kasan_mmu_init(void)
void __init kasan_init(void)
{
+ kasan_mmu_init();
+
kasan_remap_early_shadow_ro();
clear_page(kasan_early_shadow_page);
--
2.25.0
In order to fit all the flags on an 80-character wide screen,
reduce the flags to 1 char (2 where ambiguous):
- No cache is 'i'
- User is 'ur' (Supervisor would be 'sr')
- Shared (for the 8xx) becomes 'sh' (it was 'user' when not shared,
  but that was ambiguous because it is not entirely accurate)
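For instance, on book3s/32 the flag columns that previously read
'user rw  X  present guarded dirty accessed write through no cache
special' now read 'ur rw x p g d a w i s' (see the diff below).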
Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/mm/ptdump/8xx.c | 33 ++++++++++++++++---------------
arch/powerpc/mm/ptdump/shared.c | 35 +++++++++++++++++----------------
2 files changed, 35 insertions(+), 33 deletions(-)
diff --git a/arch/powerpc/mm/ptdump/8xx.c b/arch/powerpc/mm/ptdump/8xx.c
index 9e2d8e847d6e..ca9ce94672f5 100644
--- a/arch/powerpc/mm/ptdump/8xx.c
+++ b/arch/powerpc/mm/ptdump/8xx.c
@@ -12,9 +12,9 @@
static const struct flag_info flag_array[] = {
{
.mask = _PAGE_SH,
- .val = 0,
- .set = "user",
- .clear = " ",
+ .val = _PAGE_SH,
+ .set = "sh",
+ .clear = " ",
}, {
.mask = _PAGE_RO | _PAGE_NA,
.val = 0,
@@ -30,37 +30,38 @@ static const struct flag_info flag_array[] = {
}, {
.mask = _PAGE_EXEC,
.val = _PAGE_EXEC,
- .set = " X ",
- .clear = " ",
+ .set = "x",
+ .clear = " ",
}, {
.mask = _PAGE_PRESENT,
.val = _PAGE_PRESENT,
- .set = "present",
- .clear = " ",
+ .set = "p",
+ .clear = " ",
}, {
.mask = _PAGE_GUARDED,
.val = _PAGE_GUARDED,
- .set = "guarded",
- .clear = " ",
+ .set = "g",
+ .clear = " ",
}, {
.mask = _PAGE_DIRTY,
.val = _PAGE_DIRTY,
- .set = "dirty",
- .clear = " ",
+ .set = "d",
+ .clear = " ",
}, {
.mask = _PAGE_ACCESSED,
.val = _PAGE_ACCESSED,
- .set = "accessed",
- .clear = " ",
+ .set = "a",
+ .clear = " ",
}, {
.mask = _PAGE_NO_CACHE,
.val = _PAGE_NO_CACHE,
- .set = "no cache",
- .clear = " ",
+ .set = "i",
+ .clear = " ",
}, {
.mask = _PAGE_SPECIAL,
.val = _PAGE_SPECIAL,
- .set = "special",
+ .set = "s",
+ .clear = " ",
}
};
diff --git a/arch/powerpc/mm/ptdump/shared.c b/arch/powerpc/mm/ptdump/shared.c
index f7ed2f187cb0..44a8a64a664f 100644
--- a/arch/powerpc/mm/ptdump/shared.c
+++ b/arch/powerpc/mm/ptdump/shared.c
@@ -13,8 +13,8 @@ static const struct flag_info flag_array[] = {
{
.mask = _PAGE_USER,
.val = _PAGE_USER,
- .set = "user",
- .clear = " ",
+ .set = "ur",
+ .clear = " ",
}, {
.mask = _PAGE_RW,
.val = _PAGE_RW,
@@ -23,42 +23,43 @@ static const struct flag_info flag_array[] = {
}, {
.mask = _PAGE_EXEC,
.val = _PAGE_EXEC,
- .set = " X ",
- .clear = " ",
+ .set = "x",
+ .clear = " ",
}, {
.mask = _PAGE_PRESENT,
.val = _PAGE_PRESENT,
- .set = "present",
- .clear = " ",
+ .set = "p",
+ .clear = " ",
}, {
.mask = _PAGE_GUARDED,
.val = _PAGE_GUARDED,
- .set = "guarded",
- .clear = " ",
+ .set = "g",
+ .clear = " ",
}, {
.mask = _PAGE_DIRTY,
.val = _PAGE_DIRTY,
- .set = "dirty",
- .clear = " ",
+ .set = "d",
+ .clear = " ",
}, {
.mask = _PAGE_ACCESSED,
.val = _PAGE_ACCESSED,
- .set = "accessed",
- .clear = " ",
+ .set = "a",
+ .clear = " ",
}, {
.mask = _PAGE_WRITETHRU,
.val = _PAGE_WRITETHRU,
- .set = "write through",
- .clear = " ",
+ .set = "w",
+ .clear = " ",
}, {
.mask = _PAGE_NO_CACHE,
.val = _PAGE_NO_CACHE,
- .set = "no cache",
- .clear = " ",
+ .set = "i",
+ .clear = " ",
}, {
.mask = _PAGE_SPECIAL,
.val = _PAGE_SPECIAL,
- .set = "special",
+ .set = "s",
+ .clear = " ",
}
};
--
2.25.0
On PPC32, __ptep_test_and_clear_young() takes mm->context.id.
In preparation for standardising pte_update() parameters between PPC32
and PPC64, __ptep_test_and_clear_young() needs mm instead of
mm->context.id.
Replace the context parameter by mm.
Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/book3s/32/pgtable.h | 7 ++++---
arch/powerpc/include/asm/nohash/32/pgtable.h | 5 +++--
2 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
index d1108d25e2e5..8122f0b55d21 100644
--- a/arch/powerpc/include/asm/book3s/32/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
@@ -288,18 +288,19 @@ static inline pte_basic_t pte_update(pte_t *p, unsigned long clr, unsigned long
* for our hash-based implementation, we fix that up here.
*/
#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
-static inline int __ptep_test_and_clear_young(unsigned int context, unsigned long addr, pte_t *ptep)
+static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep)
{
unsigned long old;
old = pte_update(ptep, _PAGE_ACCESSED, 0);
if (old & _PAGE_HASHPTE) {
unsigned long ptephys = __pa(ptep) & PAGE_MASK;
- flush_hash_pages(context, addr, ptephys, 1);
+ flush_hash_pages(mm->context.id, addr, ptephys, 1);
}
return (old & _PAGE_ACCESSED) != 0;
}
#define ptep_test_and_clear_young(__vma, __addr, __ptep) \
- __ptep_test_and_clear_young((__vma)->vm_mm->context.id, __addr, __ptep)
+ __ptep_test_and_clear_young((__vma)->vm_mm, __addr, __ptep)
#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
diff --git a/arch/powerpc/include/asm/nohash/32/pgtable.h b/arch/powerpc/include/asm/nohash/32/pgtable.h
index 9eaf386a747b..ddf681ceb860 100644
--- a/arch/powerpc/include/asm/nohash/32/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/32/pgtable.h
@@ -256,14 +256,15 @@ static inline pte_basic_t pte_update(pte_t *p, unsigned long clr, unsigned long
}
#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
-static inline int __ptep_test_and_clear_young(unsigned int context, unsigned long addr, pte_t *ptep)
+static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep)
{
unsigned long old;
old = pte_update(ptep, _PAGE_ACCESSED, 0);
return (old & _PAGE_ACCESSED) != 0;
}
#define ptep_test_and_clear_young(__vma, __addr, __ptep) \
- __ptep_test_and_clear_young((__vma)->vm_mm->context.id, __addr, __ptep)
+ __ptep_test_and_clear_young((__vma)->vm_mm, __addr, __ptep)
#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
--
2.25.0
In order to properly display information regardless of the page size,
it is necessary to take the real page size into account.
Signed-off-by: Christophe Leroy <[email protected]>
Fixes: cabe8138b23c ("powerpc: dump as a single line areas mapping a single physical page.")
Cc: [email protected]
---
arch/powerpc/mm/ptdump/ptdump.c | 22 +++++++++++++---------
1 file changed, 13 insertions(+), 9 deletions(-)
diff --git a/arch/powerpc/mm/ptdump/ptdump.c b/arch/powerpc/mm/ptdump/ptdump.c
index 1f97668853e3..64434b66f240 100644
--- a/arch/powerpc/mm/ptdump/ptdump.c
+++ b/arch/powerpc/mm/ptdump/ptdump.c
@@ -60,6 +60,7 @@ struct pg_state {
unsigned long start_address;
unsigned long start_pa;
unsigned long last_pa;
+ unsigned long page_size;
unsigned int level;
u64 current_flags;
bool check_wx;
@@ -168,9 +169,9 @@ static void dump_addr(struct pg_state *st, unsigned long addr)
#endif
pt_dump_seq_printf(st->seq, REG "-" REG " ", st->start_address, addr - 1);
- if (st->start_pa == st->last_pa && st->start_address + PAGE_SIZE != addr) {
+ if (st->start_pa == st->last_pa && st->start_address + st->page_size != addr) {
pt_dump_seq_printf(st->seq, "[" REG "]", st->start_pa);
- delta = PAGE_SIZE >> 10;
+ delta = st->page_size >> 10;
} else {
pt_dump_seq_printf(st->seq, " " REG " ", st->start_pa);
delta = (addr - st->start_address) >> 10;
@@ -195,10 +196,11 @@ static void note_prot_wx(struct pg_state *st, unsigned long addr)
}
static void note_page(struct pg_state *st, unsigned long addr,
- unsigned int level, u64 val)
+ unsigned int level, u64 val, int shift)
{
u64 flag = val & pg_level[level].mask;
u64 pa = val & PTE_RPN_MASK;
+ unsigned long page_size = 1 << shift;
/* At first no level is set */
if (!st->level) {
@@ -207,6 +209,7 @@ static void note_page(struct pg_state *st, unsigned long addr,
st->start_address = addr;
st->start_pa = pa;
st->last_pa = pa;
+ st->page_size = page_size;
pt_dump_seq_printf(st->seq, "---[ %s ]---\n", st->marker->name);
/*
* Dump the section of virtual memory when:
@@ -218,7 +221,7 @@ static void note_page(struct pg_state *st, unsigned long addr,
*/
} else if (flag != st->current_flags || level != st->level ||
addr >= st->marker[1].start_address ||
- (pa != st->last_pa + PAGE_SIZE &&
+ (pa != st->last_pa + st->page_size &&
(pa != st->start_pa || st->start_pa != st->last_pa))) {
/* Check the PTE flags */
@@ -246,6 +249,7 @@ static void note_page(struct pg_state *st, unsigned long addr,
st->start_address = addr;
st->start_pa = pa;
st->last_pa = pa;
+ st->page_size = page_size;
st->current_flags = flag;
st->level = level;
} else {
@@ -261,7 +265,7 @@ static void walk_pte(struct pg_state *st, pmd_t *pmd, unsigned long start)
for (i = 0; i < PTRS_PER_PTE; i++, pte++) {
addr = start + i * PAGE_SIZE;
- note_page(st, addr, 4, pte_val(*pte));
+ note_page(st, addr, 4, pte_val(*pte), PAGE_SHIFT);
}
}
@@ -278,7 +282,7 @@ static void walk_pmd(struct pg_state *st, pud_t *pud, unsigned long start)
/* pmd exists */
walk_pte(st, pmd, addr);
else
- note_page(st, addr, 3, pmd_val(*pmd));
+ note_page(st, addr, 3, pmd_val(*pmd), PTE_SHIFT);
}
}
@@ -294,7 +298,7 @@ static void walk_pud(struct pg_state *st, pgd_t *pgd, unsigned long start)
/* pud exists */
walk_pmd(st, pud, addr);
else
- note_page(st, addr, 2, pud_val(*pud));
+ note_page(st, addr, 2, pud_val(*pud), PMD_SHIFT);
}
}
@@ -313,7 +317,7 @@ static void walk_pagetables(struct pg_state *st)
/* pgd exists */
walk_pud(st, pgd, addr);
else
- note_page(st, addr, 1, pgd_val(*pgd));
+ note_page(st, addr, 1, pgd_val(*pgd), PUD_SHIFT);
}
}
@@ -368,7 +372,7 @@ static int ptdump_show(struct seq_file *m, void *v)
/* Traverse kernel page tables */
walk_pagetables(&st);
- note_page(&st, 0, 0, 0);
+ note_page(&st, 0, 0, 0, 0);
return 0;
}
--
2.25.0
Commit 45ff3c559585 ("powerpc/kasan: Fix parallel loading of
modules.") added spinlocks to manage parallel module loading.
Since then, commit 47febbeeec44 ("powerpc/32: Force KASAN_VMALLOC for
modules") converted module loading to KASAN_VMALLOC.
The spinlocking has thus become unneeded and can be removed to
simplify kasan_init_shadow_page_tables().
Also remove the inclusion of linux/moduleloader.h and linux/vmalloc.h,
which are no longer needed since the removal of module management.
Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/mm/kasan/kasan_init_32.c | 19 ++++---------------
1 file changed, 4 insertions(+), 15 deletions(-)
diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
index c41e700153da..c9d053078c37 100644
--- a/arch/powerpc/mm/kasan/kasan_init_32.c
+++ b/arch/powerpc/mm/kasan/kasan_init_32.c
@@ -5,9 +5,7 @@
#include <linux/kasan.h>
#include <linux/printk.h>
#include <linux/memblock.h>
-#include <linux/moduleloader.h>
#include <linux/sched/task.h>
-#include <linux/vmalloc.h>
#include <asm/pgalloc.h>
#include <asm/code-patching.h>
#include <mm/mmu_decl.h>
@@ -34,31 +32,22 @@ static int __init kasan_init_shadow_page_tables(unsigned long k_start, unsigned
{
pmd_t *pmd;
unsigned long k_cur, k_next;
- pte_t *new = NULL;
pmd = pmd_ptr_k(k_start);
for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd++) {
+ pte_t *new;
+
k_next = pgd_addr_end(k_cur, k_end);
if ((void *)pmd_page_vaddr(*pmd) != kasan_early_shadow_pte)
continue;
- if (!new)
- new = memblock_alloc(PTE_FRAG_SIZE, PTE_FRAG_SIZE);
+ new = memblock_alloc(PTE_FRAG_SIZE, PTE_FRAG_SIZE);
if (!new)
return -ENOMEM;
kasan_populate_pte(new, PAGE_KERNEL);
-
- smp_wmb(); /* See comment in __pte_alloc */
-
- spin_lock(&init_mm.page_table_lock);
- /* Has another populated it ? */
- if (likely((void *)pmd_page_vaddr(*pmd) == kasan_early_shadow_pte)) {
- pmd_populate_kernel(&init_mm, pmd, new);
- new = NULL;
- }
- spin_unlock(&init_mm.page_table_lock);
+ pmd_populate_kernel(&init_mm, pmd, new);
}
return 0;
}
--
2.25.0
In case (k_start & PAGE_MASK) doesn't equal k_start, 'va' will never
be NULL although 'block' is NULL.
Check the return of memblock_alloc() directly instead of
the resulting address in the loop.
Fixes: 509cd3f2b473 ("powerpc/32: Simplify KASAN init")
Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/mm/kasan/kasan_init_32.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
index 750a927839d7..60c2acdf73a7 100644
--- a/arch/powerpc/mm/kasan/kasan_init_32.c
+++ b/arch/powerpc/mm/kasan/kasan_init_32.c
@@ -76,15 +76,14 @@ static int __init kasan_init_region(void *start, size_t size)
return ret;
block = memblock_alloc(k_end - k_start, PAGE_SIZE);
+ if (!block)
+ return -ENOMEM;
for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
pmd_t *pmd = pmd_ptr_k(k_cur);
void *va = block + k_cur - k_start;
pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
- if (!va)
- return -ENOMEM;
-
__set_pte_at(&init_mm, k_cur, pte_offset_kernel(pmd, k_cur), pte, 0);
}
flush_tlb_kernel_range(k_start, k_end);
--
2.25.0