2020-05-19 05:53:57

by Christophe Leroy

Subject: [PATCH v4 00/45] Use hugepages to map kernel mem on 8xx

The main purpose of this big series is to:
- reorganise huge page handling to avoid using mm_slices.
- use huge pages to map kernel memory on the 8xx.

The 8xx supports 4 page sizes: 4k, 16k, 512k and 8M.
It uses 2-level page tables: the PGD has 1024 entries, each entry
covering a 4M address space, and each page table has 1024 entries.
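
For illustration, a minimal sketch of how a virtual address splits into
indices in that layout (assuming 4k base pages; the shifts below are
derived from the sizes above, not quoted from kernel headers):

#include <stdio.h>

/* 1024 PGD entries x 4M each = 4G; 1024 PTEs x 4k each = 4M */
#define PGDIR_SHIFT	22	/* log2(4M) */
#define PAGE_SHIFT	12	/* log2(4k) */

int main(void)
{
	unsigned long addr = 0xc1234567;
	unsigned long pgd_idx = addr >> PGDIR_SHIFT;		/* 0x304 */
	unsigned long pte_idx = (addr >> PAGE_SHIFT) & 1023;	/* 0x234 */

	printf("pgd index %#lx, pte index %#lx\n", pgd_idx, pte_idx);
	return 0;
}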

At the time being, page sizes are managed in PGD entries, implying
the use of mm_slices, as pages of different sizes cannot be mixed
in one page table.

The first purpose of this series is to reorganise things so that
standard page tables can also handle 512k pages. This is done by
adding a new _PAGE_HUGE flag which will be copied into the Level 1
entry in the TLB miss handler. That done, we have two types of PGD
entries:
- PGD entries to regular page tables handling 4k/16k and 512k pages
- PGD entries to hugepd tables handling 8M pages.

There is no need to mix 8M pages with other sizes, because an 8M page
will use more than what a single PGD entry covers.
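
A rough sketch of the resulting dispatch on a PGD entry, with an
illustrative flag value (the real hugepd entries carry _PMD_PAGE_8M,
set by hugepd_populate(), see patch 27):

#include <stdbool.h>
#include <stdio.h>

#define _PMD_PAGE_8M	0x0c	/* illustrative value only */

/* A PGD entry either points to a standard page table whose PTEs
 * handle 4k/16k/512k pages, or is a hugepd entry for 8M pages. */
static bool pgd_entry_is_hugepd(unsigned long entry)
{
	return (entry & _PMD_PAGE_8M) != 0;
}

int main(void)
{
	printf("%d\n", pgd_entry_is_hugepd(0x01234c0d));	/* prints 1 */
	return 0;
}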

Then comes the second purpose of this series. At the time being, the
8xx implements special handling in the TLB miss handlers in order
to transparently map the kernel linear address space and the IMMR
using huge pages, by building the TLB entries in assembly at the time
of the exception.

As mm_slices are only for user space pages, and also because it would
anyway not be convenient to slice the kernel address space, it was not
possible to use huge pages for the kernel address space. But after
step one of the series, using huge pages there becomes flexible enough.

This series drops all assembly 'just in time' handling of huge pages
and uses huge pages in page tables instead.

Once the above is done, then comes the icing on the cake:
- Use huge pages for KASAN shadow mapping
- Allow pinned TLBs with strict kernel rwx
- Allow pinned TLBs with debug pagealloc

Then, last but not least, those modifications for the 8xx allow the
following improvements on book3s/32:
- Mapping the KASAN shadow with BATs
- Allowing BATs with debug pagealloc

All this considerably simplifies the TLB miss handlers and the
associated initialisation. The overhead of reading page tables is
negligible compared to the reduction of the miss handlers.

While touching pte_update(), some cleanup was done
there too.

Tested widely on 8xx and 832x. Boot tested on QEMU MAC99.

Changes in v4:
- Rebased on top of powerpc/next following the merge of the prefixed instructions series.

Changes in v3:
- Fixed the handling of leaf page size, which didn't build on PPC64 and was invisibly bogus on PPC32 (patch 12)

Changes in v2:
- Select HUGETLBFS instead of HUGETLB_PAGE, which led to a link failure.
- Rebased on the latest powerpc/merge branch
- Reworked the way TLBs 28 to 31 are pinned, because it was not working.

Christophe Leroy (45):
powerpc/kasan: Fix error detection on memory allocation
powerpc/kasan: Fix issues by lowering KASAN_SHADOW_END
powerpc/kasan: Fix shadow pages allocation failure
powerpc/kasan: Remove unnecessary page table locking
powerpc/kasan: Refactor update of early shadow mappings
powerpc/kasan: Declare kasan_init_region() weak
powerpc/ptdump: Limit size of flags text to 1/2 chars on PPC32
powerpc/ptdump: Reorder flags
powerpc/ptdump: Add _PAGE_COHERENT flag
powerpc/ptdump: Display size of BATs
powerpc/ptdump: Standardise display of BAT flags
powerpc/ptdump: Properly handle non standard page size
powerpc/ptdump: Handle hugepd at PGD level
powerpc/32s: Don't warn when mapping RO data ROX.
powerpc/mm: Allocate static page tables for fixmap
powerpc/mm: Fix conditions to perform MMU specific management by
blocks on PPC32.
powerpc/mm: PTE_ATOMIC_UPDATES is only for 40x
powerpc/mm: Refactor pte_update() on nohash/32
powerpc/mm: Refactor pte_update() on book3s/32
powerpc/mm: Standardise __ptep_test_and_clear_young() params between
PPC32 and PPC64
powerpc/mm: Standardise pte_update() prototype between PPC32 and PPC64
powerpc/mm: Create a dedicated pte_update() for 8xx
powerpc/mm: Reduce hugepd size for 8M hugepages on 8xx
powerpc/8xx: Drop CONFIG_8xx_COPYBACK option
powerpc/8xx: Prepare handlers for _PAGE_HUGE for 512k pages.
powerpc/8xx: Manage 512k huge pages as standard pages.
powerpc/8xx: Only 8M pages are hugepte pages now
powerpc/8xx: MM_SLICE is not needed anymore
powerpc/8xx: Move PPC_PIN_TLB options into 8xx Kconfig
powerpc/8xx: Add function to set pinned TLBs
powerpc/8xx: Don't set IMMR map anymore at boot
powerpc/8xx: Always pin TLBs at startup.
powerpc/8xx: Drop special handling of Linear and IMMR mappings in I/D
TLB handlers
powerpc/8xx: Remove now unused TLB miss functions
powerpc/8xx: Move DTLB perf handling closer.
powerpc/mm: Don't be too strict with _etext alignment on PPC32
powerpc/8xx: Refactor kernel address boundary comparison
powerpc/8xx: Add a function to early map kernel via huge pages
powerpc/8xx: Map IMMR with a huge page
powerpc/8xx: Map linear memory with huge pages
powerpc/8xx: Allow STRICT_KERNEL_RwX with pinned TLB
powerpc/8xx: Allow large TLBs with DEBUG_PAGEALLOC
powerpc/8xx: Implement dedicated kasan_init_region()
powerpc/32s: Allow mapping with BATs with DEBUG_PAGEALLOC
powerpc/32s: Implement dedicated kasan_init_region()

arch/powerpc/Kconfig | 62 +--
arch/powerpc/configs/adder875_defconfig | 1 -
arch/powerpc/configs/ep88xc_defconfig | 1 -
arch/powerpc/configs/mpc866_ads_defconfig | 1 -
arch/powerpc/configs/mpc885_ads_defconfig | 1 -
arch/powerpc/configs/tqm8xx_defconfig | 1 -
arch/powerpc/include/asm/book3s/32/pgtable.h | 78 ++--
arch/powerpc/include/asm/fixmap.h | 4 +
arch/powerpc/include/asm/hugetlb.h | 4 -
arch/powerpc/include/asm/kasan.h | 10 +-
.../include/asm/nohash/32/hugetlb-8xx.h | 32 +-
arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 74 +---
arch/powerpc/include/asm/nohash/32/pgtable.h | 104 +++--
arch/powerpc/include/asm/nohash/32/pte-8xx.h | 4 +-
arch/powerpc/include/asm/nohash/32/slice.h | 20 -
arch/powerpc/include/asm/nohash/64/pgtable.h | 28 +-
arch/powerpc/include/asm/nohash/pgtable.h | 2 +-
arch/powerpc/include/asm/pgtable.h | 2 +
arch/powerpc/include/asm/slice.h | 2 -
arch/powerpc/kernel/head_8xx.S | 354 ++++++++----------
arch/powerpc/kernel/setup_32.c | 2 +-
arch/powerpc/kernel/vmlinux.lds.S | 3 +-
arch/powerpc/mm/book3s32/mmu.c | 12 +-
arch/powerpc/mm/hugetlbpage.c | 41 +-
arch/powerpc/mm/init_32.c | 12 +-
arch/powerpc/mm/kasan/8xx.c | 74 ++++
arch/powerpc/mm/kasan/Makefile | 2 +
arch/powerpc/mm/kasan/book3s_32.c | 57 +++
arch/powerpc/mm/kasan/kasan_init_32.c | 88 ++---
arch/powerpc/mm/mmu_decl.h | 4 +
arch/powerpc/mm/nohash/8xx.c | 226 ++++++-----
arch/powerpc/mm/pgtable.c | 34 +-
arch/powerpc/mm/pgtable_32.c | 22 +-
arch/powerpc/mm/ptdump/8xx.c | 52 +--
arch/powerpc/mm/ptdump/bats.c | 41 +-
arch/powerpc/mm/ptdump/ptdump.c | 71 +++-
arch/powerpc/mm/ptdump/ptdump.h | 3 +
arch/powerpc/mm/ptdump/shared.c | 58 +--
arch/powerpc/perf/8xx-pmu.c | 10 -
arch/powerpc/platforms/8xx/Kconfig | 50 ++-
arch/powerpc/platforms/Kconfig.cputype | 2 +-
arch/powerpc/sysdev/cpm_common.c | 2 +
42 files changed, 853 insertions(+), 798 deletions(-)
delete mode 100644 arch/powerpc/include/asm/nohash/32/slice.h
create mode 100644 arch/powerpc/mm/kasan/8xx.c
create mode 100644 arch/powerpc/mm/kasan/book3s_32.c

--
2.25.0


2020-05-19 05:54:04

by Christophe Leroy

Subject: [PATCH v4 45/45] powerpc/32s: Implement dedicated kasan_init_region()

Implement a kasan_init_region() dedicated to book3s/32 that
allocates KASAN regions using BATs. As BATs can only map naturally
aligned, power-of-two sized blocks, up to two BATs are used and any
remainder is mapped with standard pages.
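
For reference, a minimal sketch of that splitting logic, matching the
ffs() arithmetic in the new file below (the 24M size is just an example):

#include <stdio.h>
#include <strings.h>	/* ffs() */

int main(void)
{
	int k_size = 24 << 20;				/* 24M of shadow */
	int k_size_base = 1 << (ffs(k_size) - 1);	/* lowest set bit: 8M */
	int k_size_more = 1 << (ffs(k_size - k_size_base) - 1);	/* 16M */

	printf("base %dM, more %dM\n", k_size_base >> 20, k_size_more >> 20);
	return 0;
}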

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/kasan.h | 1 +
arch/powerpc/mm/kasan/Makefile | 1 +
arch/powerpc/mm/kasan/book3s_32.c | 57 +++++++++++++++++++++++++++
arch/powerpc/mm/kasan/kasan_init_32.c | 2 +-
4 files changed, 60 insertions(+), 1 deletion(-)
create mode 100644 arch/powerpc/mm/kasan/book3s_32.c

diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
index 107a24c3f7b3..be85c7005fb1 100644
--- a/arch/powerpc/include/asm/kasan.h
+++ b/arch/powerpc/include/asm/kasan.h
@@ -34,6 +34,7 @@ static inline void kasan_init(void) { }
static inline void kasan_late_init(void) { }
#endif

+void kasan_update_early_region(unsigned long k_start, unsigned long k_end, pte_t pte);
int kasan_init_shadow_page_tables(unsigned long k_start, unsigned long k_end);
int kasan_init_region(void *start, size_t size);

diff --git a/arch/powerpc/mm/kasan/Makefile b/arch/powerpc/mm/kasan/Makefile
index 440038ea79f1..bb1a5408b86b 100644
--- a/arch/powerpc/mm/kasan/Makefile
+++ b/arch/powerpc/mm/kasan/Makefile
@@ -4,3 +4,4 @@ KASAN_SANITIZE := n

obj-$(CONFIG_PPC32) += kasan_init_32.o
obj-$(CONFIG_PPC_8xx) += 8xx.o
+obj-$(CONFIG_PPC_BOOK3S_32) += book3s_32.o
diff --git a/arch/powerpc/mm/kasan/book3s_32.c b/arch/powerpc/mm/kasan/book3s_32.c
new file mode 100644
index 000000000000..4bc491a4a1fd
--- /dev/null
+++ b/arch/powerpc/mm/kasan/book3s_32.c
@@ -0,0 +1,57 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/kasan.h>
+#include <linux/memblock.h>
+#include <asm/pgalloc.h>
+#include <mm/mmu_decl.h>
+
+int __init kasan_init_region(void *start, size_t size)
+{
+ unsigned long k_start = (unsigned long)kasan_mem_to_shadow(start);
+ unsigned long k_end = (unsigned long)kasan_mem_to_shadow(start + size);
+ unsigned long k_cur = k_start;
+ int k_size = k_end - k_start;
+ int k_size_base = 1 << (ffs(k_size) - 1);
+ int ret;
+ void *block;
+
+ block = memblock_alloc(k_size, k_size_base);
+
+ if (block && k_size_base >= SZ_128K && k_start == ALIGN(k_start, k_size_base)) {
+ int k_size_more = 1 << (ffs(k_size - k_size_base) - 1);
+
+ setbat(-1, k_start, __pa(block), k_size_base, PAGE_KERNEL);
+ if (k_size_more >= SZ_128K)
+ setbat(-1, k_start + k_size_base, __pa(block) + k_size_base,
+ k_size_more, PAGE_KERNEL);
+ if (v_block_mapped(k_start))
+ k_cur = k_start + k_size_base;
+ if (v_block_mapped(k_start + k_size_base))
+ k_cur = k_start + k_size_base + k_size_more;
+
+ update_bats();
+ }
+
+ if (!block)
+ block = memblock_alloc(k_size, PAGE_SIZE);
+ if (!block)
+ return -ENOMEM;
+
+ ret = kasan_init_shadow_page_tables(k_start, k_end);
+ if (ret)
+ return ret;
+
+ kasan_update_early_region(k_start, k_cur, __pte(0));
+
+ for (; k_cur < k_end; k_cur += PAGE_SIZE) {
+ pmd_t *pmd = pmd_ptr_k(k_cur);
+ void *va = block + k_cur - k_start;
+ pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
+
+ __set_pte_at(&init_mm, k_cur, pte_offset_kernel(pmd, k_cur), pte, 0);
+ }
+ flush_tlb_kernel_range(k_start, k_end);
+ return 0;
+}
diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
index 76d418af4ce8..c42085801c04 100644
--- a/arch/powerpc/mm/kasan/kasan_init_32.c
+++ b/arch/powerpc/mm/kasan/kasan_init_32.c
@@ -79,7 +79,7 @@ int __init __weak kasan_init_region(void *start, size_t size)
return 0;
}

-static void __init
+void __init
kasan_update_early_region(unsigned long k_start, unsigned long k_end, pte_t pte)
{
unsigned long k_cur;
--
2.25.0

2020-05-19 05:54:16

by Christophe Leroy

Subject: [PATCH v4 44/45] powerpc/32s: Allow mapping with BATs with DEBUG_PAGEALLOC

DEBUG_PAGEALLOC only manages RW data.

Text and RO data can still be mapped with BATs.

In order to map with BATs, also enforce data alignment. Set
it by default to 256k (DATA_SHIFT = 18, matching the new Kconfig
default below), which is a good compromise for keeping enough BATs
available for KASAN and IMMR as well.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/mm/book3s32/mmu.c | 6 ++++++
arch/powerpc/mm/init_32.c | 5 ++---
3 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index fcb0a9ae9872..752deddc9ed9 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -797,6 +797,7 @@ config DATA_SHIFT
range 17 28 if (STRICT_KERNEL_RWX || DEBUG_PAGEALLOC) && PPC_BOOK3S_32
range 19 23 if (STRICT_KERNEL_RWX || DEBUG_PAGEALLOC) && PPC_8xx
default 22 if STRICT_KERNEL_RWX && PPC_BOOK3S_32
+ default 18 if DEBUG_PAGEALLOC && PPC_BOOK3S_32
default 23 if STRICT_KERNEL_RWX && PPC_8xx
default 23 if DEBUG_PAGEALLOC && PPC_8xx && PIN_TLB_DATA
default 19 if DEBUG_PAGEALLOC && PPC_8xx
diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
index a9b2cbc74797..a6dcc708eee3 100644
--- a/arch/powerpc/mm/book3s32/mmu.c
+++ b/arch/powerpc/mm/book3s32/mmu.c
@@ -170,6 +170,12 @@ unsigned long __init mmu_mapin_ram(unsigned long base, unsigned long top)
pr_debug("RAM mapped without BATs\n");
return base;
}
+ if (debug_pagealloc_enabled()) {
+ if (base >= border)
+ return base;
+ if (top >= border)
+ top = border;
+ }

if (!strict_kernel_rwx_enabled() || base >= border || top <= border)
return __mmu_mapin_ram(base, top);
diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index 8977a7c2543d..36c39bd37256 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -99,10 +99,9 @@ static void __init MMU_setup(void)
if (IS_ENABLED(CONFIG_PPC_8xx))
return;

- if (debug_pagealloc_enabled()) {
- __map_without_bats = 1;
+ if (debug_pagealloc_enabled())
__map_without_ltlbs = 1;
- }
+
if (strict_kernel_rwx_enabled())
__map_without_ltlbs = 1;
}
--
2.25.0

2020-05-19 05:54:21

by Christophe Leroy

Subject: [PATCH v4 24/45] powerpc/8xx: Drop CONFIG_8xx_COPYBACK option

CONFIG_8xx_COPYBACK was there to help disable the copyback cache mode
for debugging hardware. But nobody will design new boards with the
8xx now.

All 8xx platforms select it, so make it the default and remove
the option.

Also remove the Mx_RESETVAL values, which are pretty useless and hide
the real value while reading the code.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/configs/adder875_defconfig | 1 -
arch/powerpc/configs/ep88xc_defconfig | 1 -
arch/powerpc/configs/mpc866_ads_defconfig | 1 -
arch/powerpc/configs/mpc885_ads_defconfig | 1 -
arch/powerpc/configs/tqm8xx_defconfig | 1 -
arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 2 --
arch/powerpc/kernel/head_8xx.S | 15 +--------------
arch/powerpc/platforms/8xx/Kconfig | 9 ---------
8 files changed, 1 insertion(+), 30 deletions(-)

diff --git a/arch/powerpc/configs/adder875_defconfig b/arch/powerpc/configs/adder875_defconfig
index f55e23cb176c..5326bc739279 100644
--- a/arch/powerpc/configs/adder875_defconfig
+++ b/arch/powerpc/configs/adder875_defconfig
@@ -10,7 +10,6 @@ CONFIG_EXPERT=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y
CONFIG_PPC_ADDER875=y
-CONFIG_8xx_COPYBACK=y
CONFIG_GEN_RTC=y
CONFIG_HZ_1000=y
# CONFIG_SECCOMP is not set
diff --git a/arch/powerpc/configs/ep88xc_defconfig b/arch/powerpc/configs/ep88xc_defconfig
index 0e2e5e81a359..f5c3e72da719 100644
--- a/arch/powerpc/configs/ep88xc_defconfig
+++ b/arch/powerpc/configs/ep88xc_defconfig
@@ -12,7 +12,6 @@ CONFIG_EXPERT=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y
CONFIG_PPC_EP88XC=y
-CONFIG_8xx_COPYBACK=y
CONFIG_GEN_RTC=y
CONFIG_HZ_100=y
# CONFIG_SECCOMP is not set
diff --git a/arch/powerpc/configs/mpc866_ads_defconfig b/arch/powerpc/configs/mpc866_ads_defconfig
index 5320735395e7..5c56d36cdfc5 100644
--- a/arch/powerpc/configs/mpc866_ads_defconfig
+++ b/arch/powerpc/configs/mpc866_ads_defconfig
@@ -12,7 +12,6 @@ CONFIG_EXPERT=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y
CONFIG_MPC86XADS=y
-CONFIG_8xx_COPYBACK=y
CONFIG_GEN_RTC=y
CONFIG_HZ_1000=y
CONFIG_MATH_EMULATION=y
diff --git a/arch/powerpc/configs/mpc885_ads_defconfig b/arch/powerpc/configs/mpc885_ads_defconfig
index 82a008c04eae..949ff9ccda5e 100644
--- a/arch/powerpc/configs/mpc885_ads_defconfig
+++ b/arch/powerpc/configs/mpc885_ads_defconfig
@@ -11,7 +11,6 @@ CONFIG_EXPERT=y
# CONFIG_VM_EVENT_COUNTERS is not set
# CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y
-CONFIG_8xx_COPYBACK=y
CONFIG_GEN_RTC=y
CONFIG_HZ_100=y
# CONFIG_SECCOMP is not set
diff --git a/arch/powerpc/configs/tqm8xx_defconfig b/arch/powerpc/configs/tqm8xx_defconfig
index eda8bfb2d0a3..77857d513022 100644
--- a/arch/powerpc/configs/tqm8xx_defconfig
+++ b/arch/powerpc/configs/tqm8xx_defconfig
@@ -15,7 +15,6 @@ CONFIG_MODULE_SRCVERSION_ALL=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y
CONFIG_TQM8XX=y
-CONFIG_8xx_COPYBACK=y
# CONFIG_8xx_CPU15 is not set
CONFIG_GEN_RTC=y
CONFIG_HZ_100=y
diff --git a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
index 76af5b0cb16e..26b7cee34dfe 100644
--- a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
@@ -19,7 +19,6 @@
#define MI_RSV4I 0x08000000 /* Reserve 4 TLB entries */
#define MI_PPCS 0x02000000 /* Use MI_RPN prob/priv state */
#define MI_IDXMASK 0x00001f00 /* TLB index to be loaded */
-#define MI_RESETVAL 0x00000000 /* Value of register at reset */

/* These are the Ks and Kp from the PowerPC books. For proper operation,
* Ks = 0, Kp = 1.
@@ -95,7 +94,6 @@
#define MD_TWAM 0x04000000 /* Use 4K page hardware assist */
#define MD_PPCS 0x02000000 /* Use MI_RPN prob/priv state */
#define MD_IDXMASK 0x00001f00 /* TLB index to be loaded */
-#define MD_RESETVAL 0x04000000 /* Value of register at reset */

#define SPRN_M_CASID 793 /* Address space ID (context) to match */
#define MC_ASIDMASK 0x0000000f /* Bits used for ASID value */
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 073a651787df..905205c79a25 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -779,10 +779,7 @@ start_here:
initial_mmu:
li r8, 0
mtspr SPRN_MI_CTR, r8 /* remove PINNED ITLB entries */
- lis r10, MD_RESETVAL@h
-#ifndef CONFIG_8xx_COPYBACK
- oris r10, r10, MD_WTDEF@h
-#endif
+ lis r10, MD_TWAM@h
mtspr SPRN_MD_CTR, r10 /* remove PINNED DTLB entries */

tlbia /* Invalidate all TLB entries */
@@ -857,17 +854,7 @@ initial_mmu:
mtspr SPRN_DC_CST, r8
lis r8, IDC_ENABLE@h
mtspr SPRN_IC_CST, r8
-#ifdef CONFIG_8xx_COPYBACK
- mtspr SPRN_DC_CST, r8
-#else
- /* For a debug option, I left this here to easily enable
- * the write through cache mode
- */
- lis r8, DC_SFWT@h
mtspr SPRN_DC_CST, r8
- lis r8, IDC_ENABLE@h
- mtspr SPRN_DC_CST, r8
-#endif
/* Disable debug mode entry on breakpoints */
mfspr r8, SPRN_DER
#ifdef CONFIG_PERF_EVENTS
diff --git a/arch/powerpc/platforms/8xx/Kconfig b/arch/powerpc/platforms/8xx/Kconfig
index e0fe670f06f6..b37de62d7e7f 100644
--- a/arch/powerpc/platforms/8xx/Kconfig
+++ b/arch/powerpc/platforms/8xx/Kconfig
@@ -98,15 +98,6 @@ menu "MPC8xx CPM Options"
# 8xx specific questions.
comment "Generic MPC8xx Options"

-config 8xx_COPYBACK
- bool "Copy-Back Data Cache (else Writethrough)"
- help
- Saying Y here will cause the cache on an MPC8xx processor to be used
- in Copy-Back mode. If you say N here, it is used in Writethrough
- mode.
-
- If in doubt, say Y here.
-
config 8xx_GPIO
bool "GPIO API Support"
select GPIOLIB
--
2.25.0

2020-05-19 05:54:29

by Christophe Leroy

Subject: [PATCH v4 27/45] powerpc/8xx: Only 8M pages are hugepte pages now

512k pages are now standard pages, so only 8M pages
are hugepte pages.

Normal page tables are no longer handled through hugepd allocation
and freeing, and the hugepte helpers can also be simplified.
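
For context, a quick check of the simplified hugepte_offset() index
arithmetic from the diff below, with example values (assuming 4k base
pages, so each PGD entry, and thus each hugepd, covers 4M):

#include <stdio.h>

int main(void)
{
	unsigned long addr = 0xc0ce3000;	/* address within an 8M page */
	unsigned long idx = (addr & ((4UL << 20) - 1)) >> 12;

	printf("idx = %lu\n", idx);	/* 227: 4k-unit offset within 4M */
	return 0;
}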

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h | 7 +++----
arch/powerpc/mm/hugetlbpage.c | 16 +++-------------
2 files changed, 6 insertions(+), 17 deletions(-)

diff --git a/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h b/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h
index 785437323576..1c7d4693a78e 100644
--- a/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h
@@ -13,13 +13,13 @@ static inline pte_t *hugepd_page(hugepd_t hpd)

static inline unsigned int hugepd_shift(hugepd_t hpd)
{
- return ((hpd_val(hpd) & _PMD_PAGE_MASK) >> 1) + 17;
+ return PAGE_SHIFT_8M;
}

static inline pte_t *hugepte_offset(hugepd_t hpd, unsigned long addr,
unsigned int pdshift)
{
- unsigned long idx = (addr & ((1UL << pdshift) - 1)) >> PAGE_SHIFT;
+ unsigned long idx = (addr & (SZ_4M - 1)) >> PAGE_SHIFT;

return hugepd_page(hpd) + idx;
}
@@ -32,8 +32,7 @@ static inline void flush_hugetlb_page(struct vm_area_struct *vma,

static inline void hugepd_populate(hugepd_t *hpdp, pte_t *new, unsigned int pshift)
{
- *hpdp = __hugepd(__pa(new) | _PMD_USER | _PMD_PRESENT |
- (pshift == PAGE_SHIFT_8M ? _PMD_PAGE_8M : _PMD_PAGE_512K));
+ *hpdp = __hugepd(__pa(new) | _PMD_USER | _PMD_PRESENT | _PMD_PAGE_8M);
}

static inline int check_and_get_huge_psize(int shift)
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 38bad839e608..cfacd364c7aa 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -54,24 +54,17 @@ static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
if (pshift >= pdshift) {
cachep = PGT_CACHE(PTE_T_ORDER);
num_hugepd = 1 << (pshift - pdshift);
- new = NULL;
- } else if (IS_ENABLED(CONFIG_PPC_8xx)) {
- cachep = NULL;
- num_hugepd = 1;
- new = pte_alloc_one(mm);
} else {
cachep = PGT_CACHE(pdshift - pshift);
num_hugepd = 1;
- new = NULL;
}

- if (!cachep && !new) {
+ if (!cachep) {
WARN_ONCE(1, "No page table cache created for hugetlb tables");
return -ENOMEM;
}

- if (cachep)
- new = kmem_cache_alloc(cachep, pgtable_gfp_flags(mm, GFP_KERNEL));
+ new = kmem_cache_alloc(cachep, pgtable_gfp_flags(mm, GFP_KERNEL));

BUG_ON(pshift > HUGEPD_SHIFT_MASK);
BUG_ON((unsigned long)new & HUGEPD_SHIFT_MASK);
@@ -102,10 +95,7 @@ static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
if (i < num_hugepd) {
for (i = i - 1 ; i >= 0; i--, hpdp--)
*hpdp = __hugepd(0);
- if (cachep)
- kmem_cache_free(cachep, new);
- else
- pte_free(mm, new);
+ kmem_cache_free(cachep, new);
} else {
kmemleak_ignore(new);
}
--
2.25.0

2020-05-19 05:54:30

by Christophe Leroy

Subject: [PATCH v4 36/45] powerpc/mm: Don't be too strict with _etext alignment on PPC32

Similar to PPC64, accept mapping RO data as ROX, as a trade-off
between security and memory usage.

Having RO data executable is not a high risk, as RO data can't be
modified to forge an exploit.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/Kconfig | 26 --------------------------
arch/powerpc/kernel/vmlinux.lds.S | 3 +--
2 files changed, 1 insertion(+), 28 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 1d4ef4f27dec..d147d379b1b9 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -778,32 +778,6 @@ config THREAD_SHIFT
Used to define the stack size. The default is almost always what you
want. Only change this if you know what you are doing.

-config ETEXT_SHIFT_BOOL
- bool "Set custom etext alignment" if STRICT_KERNEL_RWX && \
- (PPC_BOOK3S_32 || PPC_8xx)
- depends on ADVANCED_OPTIONS
- help
- This option allows you to set the kernel end of text alignment. When
- RAM is mapped by blocks, the alignment needs to fit the size and
- number of possible blocks. The default should be OK for most configs.
-
- Say N here unless you know what you are doing.
-
-config ETEXT_SHIFT
- int "_etext shift" if ETEXT_SHIFT_BOOL
- range 17 28 if STRICT_KERNEL_RWX && PPC_BOOK3S_32
- range 19 23 if STRICT_KERNEL_RWX && PPC_8xx
- default 17 if STRICT_KERNEL_RWX && PPC_BOOK3S_32
- default 19 if STRICT_KERNEL_RWX && PPC_8xx
- default PPC_PAGE_SHIFT
- help
- On Book3S 32 (603+), IBATs are used to map kernel text.
- Smaller is the alignment, greater is the number of necessary IBATs.
-
- On 8xx, large pages (512kb or 8M) are used to map kernel linear
- memory. Aligning to 8M reduces TLB misses as only 8M pages are used
- in that case.
-
config DATA_SHIFT_BOOL
bool "Set custom data alignment" if STRICT_KERNEL_RWX && \
(PPC_BOOK3S_32 || PPC_8xx)
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index 31a0f201fb6f..54f23205c2b9 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -15,7 +15,6 @@
#include <asm/thread_info.h>

#define STRICT_ALIGN_SIZE (1 << CONFIG_DATA_SHIFT)
-#define ETEXT_ALIGN_SIZE (1 << CONFIG_ETEXT_SHIFT)

ENTRY(_stext)

@@ -116,7 +115,7 @@ SECTIONS

} :text

- . = ALIGN(ETEXT_ALIGN_SIZE);
+ . = ALIGN(PAGE_SIZE);
_etext = .;
PROVIDE32 (etext = .);

--
2.25.0

2020-05-19 05:54:36

by Christophe Leroy

Subject: [PATCH v4 35/45] powerpc/8xx: Move DTLB perf handling closer.

Now that space has been freed next to the DTLB miss handler,
its associated DTLB perf handling can be brought back to
the same place.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/head_8xx.S | 23 +++++++++++------------
1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index fb5d17187772..9f3f7f3d03a7 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -344,6 +344,17 @@ DataStoreTLBMiss:
rfi
patch_site 0b, patch__dtlbmiss_exit_1

+#ifdef CONFIG_PERF_EVENTS
+ patch_site 0f, patch__dtlbmiss_perf
+0: lwz r10, (dtlb_miss_counter - PAGE_OFFSET)@l(0)
+ addi r10, r10, 1
+ stw r10, (dtlb_miss_counter - PAGE_OFFSET)@l(0)
+ mfspr r10, SPRN_DAR
+ mtspr SPRN_DAR, r11 /* Tag DAR */
+ mfspr r11, SPRN_M_TW
+ rfi
+#endif
+
/* This is an instruction TLB error on the MPC8xx. This could be due
* to many reasons, such as executing guarded memory or illegal instruction
* addresses. There is nothing to do but handle a big time error fault.
@@ -390,18 +401,6 @@ DARFixed:/* Return from dcbx instruction bug workaround */
/* 0x300 is DataAccess exception, needed by bad_page_fault() */
EXC_XFER_LITE(0x300, handle_page_fault)

-/* Called from DataStoreTLBMiss when perf TLB misses events are activated */
-#ifdef CONFIG_PERF_EVENTS
- patch_site 0f, patch__dtlbmiss_perf
-0: lwz r10, (dtlb_miss_counter - PAGE_OFFSET)@l(0)
- addi r10, r10, 1
- stw r10, (dtlb_miss_counter - PAGE_OFFSET)@l(0)
- mfspr r10, SPRN_DAR
- mtspr SPRN_DAR, r11 /* Tag DAR */
- mfspr r11, SPRN_M_TW
- rfi
-#endif
-
stack_overflow:
vmap_stack_overflow_exception

--
2.25.0

2020-05-19 05:54:50

by Christophe Leroy

Subject: [PATCH v4 34/45] powerpc/8xx: Remove now unused TLB miss functions

The code that sets up the linear and IMMR mappings via huge TLB
entries is not called anymore. Remove it.

Also remove the handling of the removed code's exit points in the
perf driver.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 8 +-
arch/powerpc/kernel/head_8xx.S | 83 --------------------
arch/powerpc/perf/8xx-pmu.c | 10 ---
3 files changed, 1 insertion(+), 100 deletions(-)

diff --git a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
index 4d3ef3841b00..e82368838416 100644
--- a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
@@ -240,13 +240,7 @@ static inline unsigned int mmu_psize_to_shift(unsigned int mmu_psize)
}

/* patch sites */
-extern s32 patch__itlbmiss_linmem_top, patch__itlbmiss_linmem_top8;
-extern s32 patch__dtlbmiss_linmem_top, patch__dtlbmiss_immr_jmp;
-extern s32 patch__fixupdar_linmem_top;
-extern s32 patch__dtlbmiss_romem_top, patch__dtlbmiss_romem_top8;
-
-extern s32 patch__itlbmiss_exit_1, patch__itlbmiss_exit_2;
-extern s32 patch__dtlbmiss_exit_1, patch__dtlbmiss_exit_2, patch__dtlbmiss_exit_3;
+extern s32 patch__itlbmiss_exit_1, patch__dtlbmiss_exit_1;
extern s32 patch__itlbmiss_perf, patch__dtlbmiss_perf;

#endif /* !__ASSEMBLY__ */
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index d1546f379757..fb5d17187772 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -278,33 +278,6 @@ InstructionTLBMiss:
rfi
#endif

-#ifndef CONFIG_PIN_TLB_TEXT
-ITLBMissLinear:
- mtcr r11
-#if defined(CONFIG_STRICT_KERNEL_RWX) && CONFIG_ETEXT_SHIFT < 23
- patch_site 0f, patch__itlbmiss_linmem_top8
-
- mfspr r10, SPRN_SRR0
-0: subis r11, r10, (PAGE_OFFSET - 0x80000000)@ha
- rlwinm r11, r11, 4, MI_PS8MEG ^ MI_PS512K
- ori r11, r11, MI_PS512K | MI_SVALID
- rlwinm r10, r10, 0, 0x0ff80000 /* 8xx supports max 256Mb RAM */
-#else
- /* Set 8M byte page and mark it valid */
- li r11, MI_PS8MEG | MI_SVALID
- rlwinm r10, r10, 20, 0x0f800000 /* 8xx supports max 256Mb RAM */
-#endif
- mtspr SPRN_MI_TWC, r11
- ori r10, r10, 0xf0 | MI_SPS16K | _PAGE_SH | _PAGE_DIRTY | \
- _PAGE_PRESENT
- mtspr SPRN_MI_RPN, r10 /* Update TLB entry */
-
-0: mfspr r10, SPRN_SPRG_SCRATCH0
- mfspr r11, SPRN_SPRG_SCRATCH1
- rfi
- patch_site 0b, patch__itlbmiss_exit_2
-#endif
-
. = 0x1200
DataStoreTLBMiss:
mtspr SPRN_DAR, r10
@@ -371,62 +344,6 @@ DataStoreTLBMiss:
rfi
patch_site 0b, patch__dtlbmiss_exit_1

-DTLBMissIMMR:
- mtcr r11
- /* Set 512k byte guarded page and mark it valid */
- li r10, MD_PS512K | MD_GUARDED | MD_SVALID
- mtspr SPRN_MD_TWC, r10
- mfspr r10, SPRN_IMMR /* Get current IMMR */
- rlwinm r10, r10, 0, 0xfff80000 /* Get 512 kbytes boundary */
- ori r10, r10, 0xf0 | MD_SPS16K | _PAGE_SH | _PAGE_DIRTY | \
- _PAGE_PRESENT | _PAGE_NO_CACHE
- mtspr SPRN_MD_RPN, r10 /* Update TLB entry */
-
- li r11, RPN_PATTERN
-
-0: mfspr r10, SPRN_DAR
- mtspr SPRN_DAR, r11 /* Tag DAR */
- mfspr r11, SPRN_M_TW
- rfi
- patch_site 0b, patch__dtlbmiss_exit_2
-
-DTLBMissLinear:
- mtcr r11
- rlwinm r10, r10, 20, 0x0f800000 /* 8xx supports max 256Mb RAM */
-#if defined(CONFIG_STRICT_KERNEL_RWX) && CONFIG_DATA_SHIFT < 23
- patch_site 0f, patch__dtlbmiss_romem_top8
-
-0: subis r11, r10, (PAGE_OFFSET - 0x80000000)@ha
- rlwinm r11, r11, 0, 0xff800000
- neg r10, r11
- or r11, r11, r10
- rlwinm r11, r11, 4, MI_PS8MEG ^ MI_PS512K
- ori r11, r11, MI_PS512K | MI_SVALID
- mfspr r10, SPRN_MD_EPN
- rlwinm r10, r10, 0, 0x0ff80000 /* 8xx supports max 256Mb RAM */
-#else
- /* Set 8M byte page and mark it valid */
- li r11, MD_PS8MEG | MD_SVALID
-#endif
- mtspr SPRN_MD_TWC, r11
-#ifdef CONFIG_STRICT_KERNEL_RWX
- patch_site 0f, patch__dtlbmiss_romem_top
-
-0: subis r11, r10, 0
- rlwimi r10, r11, 11, _PAGE_RO
-#endif
- ori r10, r10, 0xf0 | MD_SPS16K | _PAGE_SH | _PAGE_DIRTY | \
- _PAGE_PRESENT
- mtspr SPRN_MD_RPN, r10 /* Update TLB entry */
-
- li r11, RPN_PATTERN
-
-0: mfspr r10, SPRN_DAR
- mtspr SPRN_DAR, r11 /* Tag DAR */
- mfspr r11, SPRN_M_TW
- rfi
- patch_site 0b, patch__dtlbmiss_exit_3
-
/* This is an instruction TLB error on the MPC8xx. This could be due
* to many reasons, such as executing guarded memory or illegal instruction
* addresses. There is nothing to do but handle a big time error fault.
diff --git a/arch/powerpc/perf/8xx-pmu.c b/arch/powerpc/perf/8xx-pmu.c
index acc27fc63eb7..e53c3c161257 100644
--- a/arch/powerpc/perf/8xx-pmu.c
+++ b/arch/powerpc/perf/8xx-pmu.c
@@ -100,9 +100,6 @@ static int mpc8xx_pmu_add(struct perf_event *event, int flags)
unsigned long target = patch_site_addr(&patch__itlbmiss_perf);

patch_branch_site(&patch__itlbmiss_exit_1, target, 0);
-#ifndef CONFIG_PIN_TLB_TEXT
- patch_branch_site(&patch__itlbmiss_exit_2, target, 0);
-#endif
}
val = itlb_miss_counter;
break;
@@ -111,8 +108,6 @@ static int mpc8xx_pmu_add(struct perf_event *event, int flags)
unsigned long target = patch_site_addr(&patch__dtlbmiss_perf);

patch_branch_site(&patch__dtlbmiss_exit_1, target, 0);
- patch_branch_site(&patch__dtlbmiss_exit_2, target, 0);
- patch_branch_site(&patch__dtlbmiss_exit_3, target, 0);
}
val = dtlb_miss_counter;
break;
@@ -175,9 +170,6 @@ static void mpc8xx_pmu_del(struct perf_event *event, int flags)
__PPC_SPR(SPRN_SPRG_SCRATCH0));

patch_instruction_site(&patch__itlbmiss_exit_1, insn);
-#ifndef CONFIG_PIN_TLB_TEXT
- patch_instruction_site(&patch__itlbmiss_exit_2, insn);
-#endif
}
break;
case PERF_8xx_ID_DTLB_LOAD_MISS:
@@ -187,8 +179,6 @@ static void mpc8xx_pmu_del(struct perf_event *event, int flags)
__PPC_SPR(SPRN_DAR));

patch_instruction_site(&patch__dtlbmiss_exit_1, insn);
- patch_instruction_site(&patch__dtlbmiss_exit_2, insn);
- patch_instruction_site(&patch__dtlbmiss_exit_3, insn);
}
break;
}
--
2.25.0

2020-05-19 05:54:59

by Christophe Leroy

Subject: [PATCH v4 29/45] powerpc/8xx: Move PPC_PIN_TLB options into 8xx Kconfig

The PPC_PIN_TLB options are dedicated to the 8xx, so move them into
the 8xx Kconfig.

While we are at it, add some text to explain what they do.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/Kconfig | 20 ---------------
arch/powerpc/platforms/8xx/Kconfig | 41 ++++++++++++++++++++++++++++++
2 files changed, 41 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 30e2111ca15d..1d4ef4f27dec 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -1227,26 +1227,6 @@ config TASK_SIZE
hex "Size of user task space" if TASK_SIZE_BOOL
default "0x80000000" if PPC_8xx
default "0xc0000000"
-
-config PIN_TLB
- bool "Pinned Kernel TLBs (860 ONLY)"
- depends on ADVANCED_OPTIONS && PPC_8xx && \
- !DEBUG_PAGEALLOC && !STRICT_KERNEL_RWX
-
-config PIN_TLB_DATA
- bool "Pinned TLB for DATA"
- depends on PIN_TLB
- default y
-
-config PIN_TLB_IMMR
- bool "Pinned TLB for IMMR"
- depends on PIN_TLB || PPC_EARLY_DEBUG_CPM
- default y
-
-config PIN_TLB_TEXT
- bool "Pinned TLB for TEXT"
- depends on PIN_TLB
- default y
endmenu

if PPC64
diff --git a/arch/powerpc/platforms/8xx/Kconfig b/arch/powerpc/platforms/8xx/Kconfig
index b37de62d7e7f..0d036cd868ef 100644
--- a/arch/powerpc/platforms/8xx/Kconfig
+++ b/arch/powerpc/platforms/8xx/Kconfig
@@ -162,4 +162,45 @@ config UCODE_PATCH
default y
depends on !NO_UCODE_PATCH

+menu "8xx advanced setup"
+ depends on PPC_8xx
+
+config PIN_TLB
+ bool "Pinned Kernel TLBs"
+ depends on ADVANCED_OPTIONS && !DEBUG_PAGEALLOC && !STRICT_KERNEL_RWX
+ help
+ On the 8xx, we have 32 instruction TLBs and 32 data TLBs. In each
+ table 4 TLBs can be pinned.
+
+ It reduces the number of usable TLBs to 28 (i.e. by 12%). That's
+ why we make it selectable.
+
+ This option does nothing by itself; it just activates the selection
+ of what to pin.
+
+config PIN_TLB_DATA
+ bool "Pinned TLB for DATA"
+ depends on PIN_TLB
+ default y
+ help
+ This pins the first 32 Mbytes of memory with 8M pages.
+
+config PIN_TLB_IMMR
+ bool "Pinned TLB for IMMR"
+ depends on PIN_TLB || PPC_EARLY_DEBUG_CPM
+ default y
+ help
+ This pins the IMMR area with a 512kbytes page. In case
+ CONFIG_PIN_TLB_DATA is also selected, it will reduce
+ CONFIG_PIN_TLB_DATA to 24 Mbytes.
+
+config PIN_TLB_TEXT
+ bool "Pinned TLB for TEXT"
+ depends on PIN_TLB
+ default y
+ help
+ This pins kernel text with 8M pages.
+
+endmenu
+
endmenu
--
2.25.0

2020-05-19 05:55:07

by Christophe Leroy

Subject: [PATCH v4 25/45] powerpc/8xx: Prepare handlers for _PAGE_HUGE for 512k pages.

Prepare the ITLB handler to handle _PAGE_HUGE when CONFIG_HUGETLBFS
is enabled. This means that the L1 entry has to be kept in r11
until the L2 entry is read, in order to insert _PAGE_HUGE into it.

Also move the pgd_offset helpers before pte_update(), as they
will be needed there in the next patch.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/nohash/32/pgtable.h | 13 ++++++-------
arch/powerpc/kernel/head_8xx.S | 15 +++++++++------
2 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/include/asm/nohash/32/pgtable.h b/arch/powerpc/include/asm/nohash/32/pgtable.h
index ff78bf25f832..9a287a95acad 100644
--- a/arch/powerpc/include/asm/nohash/32/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/32/pgtable.h
@@ -206,6 +206,12 @@ static inline void pmd_clear(pmd_t *pmdp)
}


+/* to find an entry in a kernel page-table-directory */
+#define pgd_offset_k(address) pgd_offset(&init_mm, address)
+
+/* to find an entry in a page-table-directory */
+#define pgd_index(address) ((address) >> PGDIR_SHIFT)
+#define pgd_offset(mm, address) ((mm)->pgd + pgd_index(address))

/*
* PTE updates. This function is called whenever an existing
@@ -348,13 +354,6 @@ static inline int pte_young(pte_t pte)
pfn_to_page((__pa(pmd_val(pmd)) >> PAGE_SHIFT))
#endif

-/* to find an entry in a kernel page-table-directory */
-#define pgd_offset_k(address) pgd_offset(&init_mm, address)
-
-/* to find an entry in a page-table-directory */
-#define pgd_index(address) ((address) >> PGDIR_SHIFT)
-#define pgd_offset(mm, address) ((mm)->pgd + pgd_index(address))
-
/* Find an entry in the third-level page table.. */
#define pte_index(address) \
(((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 905205c79a25..adad8baadcf5 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -196,7 +196,7 @@ SystemCall:

InstructionTLBMiss:
mtspr SPRN_SPRG_SCRATCH0, r10
-#if defined(ITLB_MISS_KERNEL) || defined(CONFIG_SWAP)
+#if defined(ITLB_MISS_KERNEL) || defined(CONFIG_SWAP) || defined(CONFIG_HUGETLBFS)
mtspr SPRN_SPRG_SCRATCH1, r11
#endif

@@ -235,16 +235,19 @@ InstructionTLBMiss:
rlwinm r10, r10, 0, 20, 31
oris r10, r10, (swapper_pg_dir - PAGE_OFFSET)@ha
3:
+ mtcr r11
#endif
+#ifdef CONFIG_HUGETLBFS
+ lwz r11, (swapper_pg_dir-PAGE_OFFSET)@l(r10) /* Get level 1 entry */
+ mtspr SPRN_MI_TWC, r11 /* Set segment attributes */
+ mtspr SPRN_MD_TWC, r11
+#else
lwz r10, (swapper_pg_dir-PAGE_OFFSET)@l(r10) /* Get level 1 entry */
mtspr SPRN_MI_TWC, r10 /* Set segment attributes */
-
mtspr SPRN_MD_TWC, r10
+#endif
mfspr r10, SPRN_MD_TWC
lwz r10, 0(r10) /* Get the pte */
-#ifdef ITLB_MISS_KERNEL
- mtcr r11
-#endif
#ifdef CONFIG_SWAP
rlwinm r11, r10, 32-5, _PAGE_PRESENT
and r11, r11, r10
@@ -263,7 +266,7 @@ InstructionTLBMiss:

/* Restore registers */
0: mfspr r10, SPRN_SPRG_SCRATCH0
-#if defined(ITLB_MISS_KERNEL) || defined(CONFIG_SWAP)
+#if defined(ITLB_MISS_KERNEL) || defined(CONFIG_SWAP) || defined(CONFIG_HUGETLBFS)
mfspr r11, SPRN_SPRG_SCRATCH1
#endif
rfi
--
2.25.0

2020-05-19 05:55:24

by Christophe Leroy

Subject: [PATCH v4 09/45] powerpc/ptdump: Add _PAGE_COHERENT flag

For platforms using shared.c (4xx, Book3e, Book3s/32),
also handle the _PAGE_COHERENT flag, which corresponds to the
M bit of the WIMG flags (Write-through, cache Inhibited, Memory
coherent, Guarded).

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/mm/ptdump/shared.c | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/arch/powerpc/mm/ptdump/shared.c b/arch/powerpc/mm/ptdump/shared.c
index dab5d8028a9b..634b83aa3487 100644
--- a/arch/powerpc/mm/ptdump/shared.c
+++ b/arch/powerpc/mm/ptdump/shared.c
@@ -40,6 +40,11 @@ static const struct flag_info flag_array[] = {
.val = _PAGE_NO_CACHE,
.set = "i",
.clear = " ",
+ }, {
+ .mask = _PAGE_COHERENT,
+ .val = _PAGE_COHERENT,
+ .set = "m",
+ .clear = " ",
}, {
.mask = _PAGE_GUARDED,
.val = _PAGE_GUARDED,
--
2.25.0

2020-05-19 05:55:25

by Christophe Leroy

Subject: [PATCH v4 08/45] powerpc/ptdump: Reorder flags

Reorder flags in a more logical way:
- Page size (huge) first
- User
- RWX
- Present
- WIMG
- Special
- Dirty and Accessed

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/mm/ptdump/8xx.c | 30 +++++++++++++++---------------
arch/powerpc/mm/ptdump/shared.c | 30 +++++++++++++++---------------
2 files changed, 30 insertions(+), 30 deletions(-)

diff --git a/arch/powerpc/mm/ptdump/8xx.c b/arch/powerpc/mm/ptdump/8xx.c
index ca9ce94672f5..a3169677dced 100644
--- a/arch/powerpc/mm/ptdump/8xx.c
+++ b/arch/powerpc/mm/ptdump/8xx.c
@@ -11,11 +11,6 @@

static const struct flag_info flag_array[] = {
{
- .mask = _PAGE_SH,
- .val = _PAGE_SH,
- .set = "sh",
- .clear = " ",
- }, {
.mask = _PAGE_RO | _PAGE_NA,
.val = 0,
.set = "rw",
@@ -37,11 +32,26 @@ static const struct flag_info flag_array[] = {
.val = _PAGE_PRESENT,
.set = "p",
.clear = " ",
+ }, {
+ .mask = _PAGE_NO_CACHE,
+ .val = _PAGE_NO_CACHE,
+ .set = "i",
+ .clear = " ",
}, {
.mask = _PAGE_GUARDED,
.val = _PAGE_GUARDED,
.set = "g",
.clear = " ",
+ }, {
+ .mask = _PAGE_SH,
+ .val = _PAGE_SH,
+ .set = "sh",
+ .clear = " ",
+ }, {
+ .mask = _PAGE_SPECIAL,
+ .val = _PAGE_SPECIAL,
+ .set = "s",
+ .clear = " ",
}, {
.mask = _PAGE_DIRTY,
.val = _PAGE_DIRTY,
@@ -52,16 +62,6 @@ static const struct flag_info flag_array[] = {
.val = _PAGE_ACCESSED,
.set = "a",
.clear = " ",
- }, {
- .mask = _PAGE_NO_CACHE,
- .val = _PAGE_NO_CACHE,
- .set = "i",
- .clear = " ",
- }, {
- .mask = _PAGE_SPECIAL,
- .val = _PAGE_SPECIAL,
- .set = "s",
- .clear = " ",
}
};

diff --git a/arch/powerpc/mm/ptdump/shared.c b/arch/powerpc/mm/ptdump/shared.c
index 44a8a64a664f..dab5d8028a9b 100644
--- a/arch/powerpc/mm/ptdump/shared.c
+++ b/arch/powerpc/mm/ptdump/shared.c
@@ -30,21 +30,6 @@ static const struct flag_info flag_array[] = {
.val = _PAGE_PRESENT,
.set = "p",
.clear = " ",
- }, {
- .mask = _PAGE_GUARDED,
- .val = _PAGE_GUARDED,
- .set = "g",
- .clear = " ",
- }, {
- .mask = _PAGE_DIRTY,
- .val = _PAGE_DIRTY,
- .set = "d",
- .clear = " ",
- }, {
- .mask = _PAGE_ACCESSED,
- .val = _PAGE_ACCESSED,
- .set = "a",
- .clear = " ",
}, {
.mask = _PAGE_WRITETHRU,
.val = _PAGE_WRITETHRU,
@@ -55,11 +40,26 @@ static const struct flag_info flag_array[] = {
.val = _PAGE_NO_CACHE,
.set = "i",
.clear = " ",
+ }, {
+ .mask = _PAGE_GUARDED,
+ .val = _PAGE_GUARDED,
+ .set = "g",
+ .clear = " ",
}, {
.mask = _PAGE_SPECIAL,
.val = _PAGE_SPECIAL,
.set = "s",
.clear = " ",
+ }, {
+ .mask = _PAGE_DIRTY,
+ .val = _PAGE_DIRTY,
+ .set = "d",
+ .clear = " ",
+ }, {
+ .mask = _PAGE_ACCESSED,
+ .val = _PAGE_ACCESSED,
+ .set = "a",
+ .clear = " ",
}
};

--
2.25.0

2020-05-19 05:55:59

by Christophe Leroy

Subject: [PATCH v4 07/45] powerpc/ptdump: Limit size of flags text to 1/2 chars on PPC32

In order to have all flags fit on an 80-char wide screen,
reduce the flags to 1 char (2 chars where ambiguous).

No cache is 'i'
User is 'ur' (Supervisor would be 'sr')
Shared (for 8xx) becomes 'sh' (it used to be displayed as 'user' when
not shared, but that was ambiguous because it is not entirely accurate)

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/mm/ptdump/8xx.c | 33 ++++++++++++++++---------------
arch/powerpc/mm/ptdump/shared.c | 35 +++++++++++++++++----------------
2 files changed, 35 insertions(+), 33 deletions(-)

diff --git a/arch/powerpc/mm/ptdump/8xx.c b/arch/powerpc/mm/ptdump/8xx.c
index 9e2d8e847d6e..ca9ce94672f5 100644
--- a/arch/powerpc/mm/ptdump/8xx.c
+++ b/arch/powerpc/mm/ptdump/8xx.c
@@ -12,9 +12,9 @@
static const struct flag_info flag_array[] = {
{
.mask = _PAGE_SH,
- .val = 0,
- .set = "user",
- .clear = " ",
+ .val = _PAGE_SH,
+ .set = "sh",
+ .clear = " ",
}, {
.mask = _PAGE_RO | _PAGE_NA,
.val = 0,
@@ -30,37 +30,38 @@ static const struct flag_info flag_array[] = {
}, {
.mask = _PAGE_EXEC,
.val = _PAGE_EXEC,
- .set = " X ",
- .clear = " ",
+ .set = "x",
+ .clear = " ",
}, {
.mask = _PAGE_PRESENT,
.val = _PAGE_PRESENT,
- .set = "present",
- .clear = " ",
+ .set = "p",
+ .clear = " ",
}, {
.mask = _PAGE_GUARDED,
.val = _PAGE_GUARDED,
- .set = "guarded",
- .clear = " ",
+ .set = "g",
+ .clear = " ",
}, {
.mask = _PAGE_DIRTY,
.val = _PAGE_DIRTY,
- .set = "dirty",
- .clear = " ",
+ .set = "d",
+ .clear = " ",
}, {
.mask = _PAGE_ACCESSED,
.val = _PAGE_ACCESSED,
- .set = "accessed",
- .clear = " ",
+ .set = "a",
+ .clear = " ",
}, {
.mask = _PAGE_NO_CACHE,
.val = _PAGE_NO_CACHE,
- .set = "no cache",
- .clear = " ",
+ .set = "i",
+ .clear = " ",
}, {
.mask = _PAGE_SPECIAL,
.val = _PAGE_SPECIAL,
- .set = "special",
+ .set = "s",
+ .clear = " ",
}
};

diff --git a/arch/powerpc/mm/ptdump/shared.c b/arch/powerpc/mm/ptdump/shared.c
index f7ed2f187cb0..44a8a64a664f 100644
--- a/arch/powerpc/mm/ptdump/shared.c
+++ b/arch/powerpc/mm/ptdump/shared.c
@@ -13,8 +13,8 @@ static const struct flag_info flag_array[] = {
{
.mask = _PAGE_USER,
.val = _PAGE_USER,
- .set = "user",
- .clear = " ",
+ .set = "ur",
+ .clear = " ",
}, {
.mask = _PAGE_RW,
.val = _PAGE_RW,
@@ -23,42 +23,43 @@ static const struct flag_info flag_array[] = {
}, {
.mask = _PAGE_EXEC,
.val = _PAGE_EXEC,
- .set = " X ",
- .clear = " ",
+ .set = "x",
+ .clear = " ",
}, {
.mask = _PAGE_PRESENT,
.val = _PAGE_PRESENT,
- .set = "present",
- .clear = " ",
+ .set = "p",
+ .clear = " ",
}, {
.mask = _PAGE_GUARDED,
.val = _PAGE_GUARDED,
- .set = "guarded",
- .clear = " ",
+ .set = "g",
+ .clear = " ",
}, {
.mask = _PAGE_DIRTY,
.val = _PAGE_DIRTY,
- .set = "dirty",
- .clear = " ",
+ .set = "d",
+ .clear = " ",
}, {
.mask = _PAGE_ACCESSED,
.val = _PAGE_ACCESSED,
- .set = "accessed",
- .clear = " ",
+ .set = "a",
+ .clear = " ",
}, {
.mask = _PAGE_WRITETHRU,
.val = _PAGE_WRITETHRU,
- .set = "write through",
- .clear = " ",
+ .set = "w",
+ .clear = " ",
}, {
.mask = _PAGE_NO_CACHE,
.val = _PAGE_NO_CACHE,
- .set = "no cache",
- .clear = " ",
+ .set = "i",
+ .clear = " ",
}, {
.mask = _PAGE_SPECIAL,
.val = _PAGE_SPECIAL,
- .set = "special",
+ .set = "s",
+ .clear = " ",
}
};

--
2.25.0

2020-05-19 05:56:07

by Christophe Leroy

Subject: [PATCH v4 02/45] powerpc/kasan: Fix issues by lowering KASAN_SHADOW_END

At the time being, KASAN_SHADOW_END is 0x100000000, which
is 0 in 32-bit representation.

This leads to a couple of issues:
- kasan_remap_early_shadow_ro() does nothing because the comparison
k_cur < k_end is always false.
- In ptdump, the address comparison for the markers display fails, so
the marker's name is printed at the start of the KASAN area instead
of at the end.

However, there is no need to shadow the KASAN shadow area itself,
so the shadow mapping can stop at the point where it would start
shadowing itself.

With PAGE_OFFSET set to 0xc0000000, the KASAN shadow area then goes
from 0xf8000000 to 0xff000000.
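
For illustration, a quick 32-bit check of the new KASAN_SHADOW_END
formula with the values above (assuming KASAN_SHADOW_SCALE_SHIFT is 3,
the generic KASAN scale):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t start = 0xf8000000u;
	uint32_t end = -(-start >> 3);	/* -(0x08000000 >> 3) = -0x01000000 */

	printf("KASAN_SHADOW_END = 0x%08x\n", end);	/* 0xff000000 */
	return 0;
}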

Signed-off-by: Christophe Leroy <[email protected]>
Fixes: cbd18991e24f ("powerpc/mm: Fix an Oops in kasan_mmu_init()")
Cc: [email protected]
---
arch/powerpc/include/asm/kasan.h | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
index fbff9ff9032e..fc900937f653 100644
--- a/arch/powerpc/include/asm/kasan.h
+++ b/arch/powerpc/include/asm/kasan.h
@@ -23,9 +23,7 @@

#define KASAN_SHADOW_OFFSET ASM_CONST(CONFIG_KASAN_SHADOW_OFFSET)

-#define KASAN_SHADOW_END 0UL
-
-#define KASAN_SHADOW_SIZE (KASAN_SHADOW_END - KASAN_SHADOW_START)
+#define KASAN_SHADOW_END (-(-KASAN_SHADOW_START >> KASAN_SHADOW_SCALE_SHIFT))

#ifdef CONFIG_KASAN
void kasan_early_init(void);
--
2.25.0

2020-05-25 05:29:05

by Michael Ellerman

Subject: Re: [PATCH v4 07/45] powerpc/ptdump: Limit size of flags text to 1/2 chars on PPC32

Christophe Leroy <[email protected]> writes:
> In order to have all flags fit on an 80-char wide screen,
> reduce the flags to 1 char (2 chars where ambiguous).

I don't love this; the output is less readable. Is fitting on an 80-char
screen a real issue for you? I just make my terminal window bigger.

cheers


> No cache is 'i'
> User is 'ur' (Supervisor would be 'sr')
> Shared (for 8xx) becomes 'sh' (it used to be displayed as 'user' when
> not shared, but that was ambiguous because it is not entirely accurate)
>
> Signed-off-by: Christophe Leroy <[email protected]>
> ---
> arch/powerpc/mm/ptdump/8xx.c | 33 ++++++++++++++++---------------
> arch/powerpc/mm/ptdump/shared.c | 35 +++++++++++++++++----------------
> 2 files changed, 35 insertions(+), 33 deletions(-)

2020-05-25 18:12:29

by Christophe Leroy

Subject: Re: [PATCH v4 07/45] powerpc/ptdump: Limit size of flags text to 1/2 chars on PPC32



On 25/05/2020 at 07:15, Michael Ellerman wrote:
> Christophe Leroy <[email protected]> writes:
>> In order to have all flags fit on an 80-char wide screen,
>> reduce the flags to 1 char (2 chars where ambiguous).
>
> I don't love this, the output is less readable. Is fitting on an 80 char
> screen a real issue for you? I just make my terminal window bigger.

I don't have a strong opinion about that, and the terminal can be
made bigger. I just don't like how messy it is: some flags are so big
that they hide other ones, and making it more ordered and more
compact helped me during all the verifications I did with this
series. But we can leave it as is if you prefer.

Would you like a v5 without patches 7 and 8? Or I can just resend the
patches that will be impacted, that is 9 and 38?

Without the changes I get:

---[ Start of kernel VM ]---
0xc0000000-0xc0ffffff 0x00000000 16M huge shared r X present accessed
0xc1000000-0xc7ffffff 0x01000000 112M huge shared rw present dirty accessed
---[ vmalloc() Area ]---
0xc9000000-0xc9003fff 0x050e4000 16K shared rw present dirty accessed
0xc9008000-0xc900bfff 0x050ec000 16K shared rw present dirty accessed
0xc9010000-0xc9013fff 0xd0000000 16K shared rw present guarded dirty accessed no cache
0xc9018000-0xc901bfff 0x050f0000 16K shared rw present dirty accessed

---[ Fixmap start ]---
0xf7f00000-0xf7f7ffff 0xff000000 512K huge shared rw present guarded dirty accessed no cache
---[ Fixmap end ]---
---[ kasan shadow mem start ]---
0xf8000000-0xf8ffffff 0x07000000 16M huge shared rw present dirty accessed
0xf9000000-0xf91fffff [0x01288000] 16K shared r present accessed
0xf9200000-0xf9203fff 0x050e0000 16K shared rw present dirty accessed


With the change I get:

---[ Start of kernel VM ]---
0xc0000000-0xc0ffffff 0x00000000 16M h r x p sh a
0xc1000000-0xc7ffffff 0x01000000 112M h rw p sh d a
---[ vmalloc() Area ]---
0xc9000000-0xc9003fff 0x050e4000 16K rw p sh d a
0xc9008000-0xc900bfff 0x050ec000 16K rw p sh d a
0xc9010000-0xc9013fff 0xd0000000 16K rw p i g sh d a
0xc9018000-0xc901bfff 0x050f0000 16K rw p sh d a

---[ Fixmap start ]---
0xf7f00000-0xf7f7ffff 0xff000000 512K h rw p i g sh d a
---[ Fixmap end ]---
---[ kasan shadow mem start ]---
0xf8000000-0xf8ffffff 0x07000000 16M h rw p sh d a
0xf9000000-0xf91fffff [0x01288000] 16K r p sh a
0xf9200000-0xf9203fff 0x050e0000 16K rw p sh d a


Christophe

>
> cheers
>
>
>> No cache is 'i'
>> User is 'ur' (Supervisor would be 'sr')
>> Shared (for 8xx) becomes 'sh' (it used to be displayed as 'user' when
>> not shared, but that was ambiguous because it is not entirely accurate)
>>
>> Signed-off-by: Christophe Leroy <[email protected]>
>> ---
>> arch/powerpc/mm/ptdump/8xx.c | 33 ++++++++++++++++---------------
>> arch/powerpc/mm/ptdump/shared.c | 35 +++++++++++++++++----------------
>> 2 files changed, 35 insertions(+), 33 deletions(-)

2020-05-26 13:14:09

by Michael Ellerman

Subject: Re: [PATCH v4 07/45] powerpc/ptdump: Limit size of flags text to 1/2 chars on PPC32

Christophe Leroy <[email protected]> writes:
> On 25/05/2020 at 07:15, Michael Ellerman wrote:
>> Christophe Leroy <[email protected]> writes:
>>> In order to have all flags fit on an 80-char wide screen,
>>> reduce the flags to 1 char (2 chars where ambiguous).
>>
>> I don't love this; the output is less readable. Is fitting on an 80-char
>> screen a real issue for you? I just make my terminal window bigger.
>
> I don't have a strong opinion about that, and the terminal can be
> made bigger. I just don't like how messy it is: some flags are so big
> that they hide other ones, and making it more ordered and more
> compact helped me during all the verifications I did with this
> series. But we can leave it as is if you prefer.

I think I do.

> Would you like a v5 without patches 7 and 8? Or I can just resend the
> patches that will be impacted, that is 9 and 38?

I dropped 7 and 8 and then fixed up 9 and 38; it was easy enough.

I used "coherent" and "huge".

> With the change I get.
>
> ---[ Start of kernel VM ]---
> 0xc0000000-0xc0ffffff 0x00000000 16M h r x p sh a
> 0xc1000000-0xc7ffffff 0x01000000 112M h rw p sh d a
> ---[ vmalloc() Area ]---
> 0xc9000000-0xc9003fff 0x050e4000 16K rw p sh d a
> 0xc9008000-0xc900bfff 0x050ec000 16K rw p sh d a
> 0xc9010000-0xc9013fff 0xd0000000 16K rw p i g sh d a
> 0xc9018000-0xc901bfff 0x050f0000 16K rw p sh d a

It's definitely more compact :)

But I worry that no one other than you will be able to decipher it
without constantly referring back to the source code.

cheers

2020-06-09 05:31:25

by Michael Ellerman

Subject: Re: [PATCH v4 00/45] Use hugepages to map kernel mem on 8xx

On Tue, 19 May 2020 05:48:42 +0000 (UTC), Christophe Leroy wrote:
> The main purpose of this big series is to:
> - reorganise huge page handling to avoid using mm_slices.
> - use huge pages to map kernel memory on the 8xx.
>
> The 8xx supports 4 page sizes: 4k, 16k, 512k and 8M.
> It uses 2-level page tables: the PGD has 1024 entries, each entry
> covering a 4M address space, and each page table has 1024 entries.
>
> [...]

Patches 1-6 and 9-45 applied to powerpc/next.

[01/45] powerpc/kasan: Fix error detection on memory allocation
https://git.kernel.org/powerpc/c/d132443a73d7a131775df46f33000f67ed92de1e
[02/45] powerpc/kasan: Fix issues by lowering KASAN_SHADOW_END
https://git.kernel.org/powerpc/c/3a66a24f6060e6775f8c02ac52329ea0152d7e58
[03/45] powerpc/kasan: Fix shadow pages allocation failure
https://git.kernel.org/powerpc/c/d2a91cef9bbdeb87b7449fdab1a6be6000930210
[04/45] powerpc/kasan: Remove unnecessary page table locking
https://git.kernel.org/powerpc/c/7c31c05e00fc5ff2067332c5f80e525573e7269c
[05/45] powerpc/kasan: Refactor update of early shadow mappings
https://git.kernel.org/powerpc/c/7dec42ab57f2f59feba82abf0353164479bfde4c
[06/45] powerpc/kasan: Declare kasan_init_region() weak
https://git.kernel.org/powerpc/c/ec97d022f621c6c850aec46d8818b49c6aae95ad
[09/45] powerpc/ptdump: Add _PAGE_COHERENT flag
https://git.kernel.org/powerpc/c/3af4786eb429b2df76cbd7ce3bae21467ac3e4fb
[10/45] powerpc/ptdump: Display size of BATs
https://git.kernel.org/powerpc/c/6b30830e2003d9d77696084ebe2fc19dbe7d6f70
[11/45] powerpc/ptdump: Standardise display of BAT flags
https://git.kernel.org/powerpc/c/8961a2a5353cca5451f648f4838cd848a3b2354c
[12/45] powerpc/ptdump: Properly handle non standard page size
https://git.kernel.org/powerpc/c/b00ff6d8c1c3898b0f768cbb38ef722d25bd2f39
[13/45] powerpc/ptdump: Handle hugepd at PGD level
https://git.kernel.org/powerpc/c/6b789a26d7da2e0256d199da980369ef8fb49ec6
[14/45] powerpc/32s: Don't warn when mapping RO data ROX.
https://git.kernel.org/powerpc/c/4b19f96a81bceaf0bcf44d79c0855c61158065ec
[15/45] powerpc/mm: Allocate static page tables for fixmap
https://git.kernel.org/powerpc/c/925ac141d106b55acbe112a9272f970631a3c082
[16/45] powerpc/mm: Fix conditions to perform MMU specific management by blocks on PPC32.
https://git.kernel.org/powerpc/c/4e3319c23a66dabfd6c35f4d2633d64d99b68096
[17/45] powerpc/mm: PTE_ATOMIC_UPDATES is only for 40x
https://git.kernel.org/powerpc/c/fadaac67c9007cad9fc485e36dcc54460d6d5886
[18/45] powerpc/mm: Refactor pte_update() on nohash/32
https://git.kernel.org/powerpc/c/2db99aeb63dd6e8808dc054d181c4d0e8645bbe0
[19/45] powerpc/mm: Refactor pte_update() on book3s/32
https://git.kernel.org/powerpc/c/1c1bf294882bd12669e39ccd7680c4ce34b7c15c
[20/45] powerpc/mm: Standardise __ptep_test_and_clear_young() params between PPC32 and PPC64
https://git.kernel.org/powerpc/c/c7fa77016eb6093df38fdabdb7a89bb9617e7185
[21/45] powerpc/mm: Standardise pte_update() prototype between PPC32 and PPC64
https://git.kernel.org/powerpc/c/06f52524870122fb43b214d27e8f4546da36f8ba
[22/45] powerpc/mm: Create a dedicated pte_update() for 8xx
https://git.kernel.org/powerpc/c/6ad41bfbc907be0cd414f09fa5382d2133376595
[23/45] powerpc/mm: Reduce hugepd size for 8M hugepages on 8xx
https://git.kernel.org/powerpc/c/b12c07a4bb064c0a8db7554557b89d40f57c936f
[24/45] powerpc/8xx: Drop CONFIG_8xx_COPYBACK option
https://git.kernel.org/powerpc/c/d3efcd38c0b99162d889e36a30425345a18edb33
[25/45] powerpc/8xx: Prepare handlers for _PAGE_HUGE for 512k pages.
https://git.kernel.org/powerpc/c/a891c43b97d315ee5f9fe8e797d3d48fc351e053
[26/45] powerpc/8xx: Manage 512k huge pages as standard pages.
https://git.kernel.org/powerpc/c/b250c8c08c79d1eb5354c7eaa84b7505f5f2d921
[27/45] powerpc/8xx: Only 8M pages are hugepte pages now
https://git.kernel.org/powerpc/c/d4870b89acd7c362ded08f9295e8d143cf7e0024
[28/45] powerpc/8xx: MM_SLICE is not needed anymore
https://git.kernel.org/powerpc/c/555904d07eef3a2e5fc458419edf6174362c4ddd
[29/45] powerpc/8xx: Move PPC_PIN_TLB options into 8xx Kconfig
https://git.kernel.org/powerpc/c/5d4656696c30cef56b2ab506b203533c818af04d
[30/45] powerpc/8xx: Add function to set pinned TLBs
https://git.kernel.org/powerpc/c/f76c8f6d257cefda60221c83af7f97d9f74cb3ce
[31/45] powerpc/8xx: Don't set IMMR map anymore at boot
https://git.kernel.org/powerpc/c/136a9a0f74d2e0d9de5515190fe80344b86b45cf
[32/45] powerpc/8xx: Always pin TLBs at startup.
https://git.kernel.org/powerpc/c/684c1664e0de63398aceb748343541b48d398710
[33/45] powerpc/8xx: Drop special handling of Linear and IMMR mappings in I/D TLB handlers
https://git.kernel.org/powerpc/c/400dc0f86102d2ad11d3601f1948fbb02e926431
[34/45] powerpc/8xx: Remove now unused TLB miss functions
https://git.kernel.org/powerpc/c/1251288e64ba44969e1c4d59e5ee88a6e873447b
[35/45] powerpc/8xx: Move DTLB perf handling closer.
https://git.kernel.org/powerpc/c/0c8c2c9c201b44eed6c10d7c5c8d25fe5aab87ce
[36/45] powerpc/mm: Don't be too strict with _etext alignment on PPC32
https://git.kernel.org/powerpc/c/a0591b60eef965f7f5255ad4696bbba9af4b43d0
[37/45] powerpc/8xx: Refactor kernel address boundary comparison
https://git.kernel.org/powerpc/c/c8bef10a9f17b2b9549e37878b2bcd48039c136b
[38/45] powerpc/8xx: Add a function to early map kernel via huge pages
https://git.kernel.org/powerpc/c/34536d78068318def0a370462cbc3319e1ca9014
[39/45] powerpc/8xx: Map IMMR with a huge page
https://git.kernel.org/powerpc/c/a623bb5861dc442dc8de9edc9b3116f8b7c235c4
[40/45] powerpc/8xx: Map linear memory with huge pages
https://git.kernel.org/powerpc/c/cf209951fa7f2e7a8ec92f45f27ea11bc024bbfc
[41/45] powerpc/8xx: Allow STRICT_KERNEL_RwX with pinned TLB
https://git.kernel.org/powerpc/c/da1adea07576722da4597b0df7d00931f0203229
[42/45] powerpc/8xx: Allow large TLBs with DEBUG_PAGEALLOC
https://git.kernel.org/powerpc/c/fcdafd10a363cf3278ce29c6c9a92930380c6cd8
[43/45] powerpc/8xx: Implement dedicated kasan_init_region()
https://git.kernel.org/powerpc/c/a2feeb2c2ecbd9c9206d66f238ca710b760c9ef5
[44/45] powerpc/32s: Allow mapping with BATs with DEBUG_PAGEALLOC
https://git.kernel.org/powerpc/c/2b279c0348af62f42be346c1ea6d70bac98df0f9
[45/45] powerpc/32s: Implement dedicated kasan_init_region()
https://git.kernel.org/powerpc/c/7974c4732642f710b5111165ae1f7f7fed822282

cheers