2019-09-20 19:01:22

by Jason Yan

Subject: [PATCH v7 00/12] implement KASLR for powerpc/fsl_booke/32

This series implements KASLR for powerpc/fsl_booke/32, as a security
feature that deters exploit attempts relying on knowledge of the location
of kernel internals.

Since CONFIG_RELOCATABLE is already supported, all we need to do is
map or copy the kernel to a proper place and relocate. Freescale Book-E
parts expect lowmem to be mapped by fixed TLB entries (TLB1). The TLB1
entries are not suitable for mapping the kernel directly in a randomized
region, so we choose to copy the kernel to a proper place and restart to
relocate.

Entropy is derived from the banner and timer base, which will change every
build and boot. This is not entirely safe, so additionally the bootloader
may pass entropy via the /chosen/kaslr-seed node in the device tree.

We will use the first 512M of low memory to randomize the kernel
image. The memory will be split into 64M zones. We will use the lower 8
bits of the entropy to decide the index of the 64M zone. Then we choose a
16K-aligned offset inside the 64M zone to put the kernel in.

    KERNELBASE

        |-->   64M   <--|
        |               |
        +---------------+    +----------------+---------------+
        |               |....|    |kernel|    |               |
        +---------------+    +----------------+---------------+
        |                         |
        |----->   offset    <-----|

                              kernstart_virt_addr
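
In code terms, the selection boils down to roughly the following (a sketch
only: get_entropy() is a placeholder for the banner/timebase/kaslr-seed
mixing described above, and the fallback to lower zones when a candidate
overlaps something is omitted):

	unsigned long random = get_entropy();	/* hypothetical helper */

	/* low 8 bits pick one of the 64M zones in the first 512M */
	unsigned long index  = (random & 0xFF) % (linear_sz / SZ_64M);

	/* a 16K-aligned offset inside the chosen 64M zone */
	unsigned long offset = round_down(random % (SZ_64M - kernel_sz), SZ_16K);

	/* candidate physical start of the kernel image */
	kernstart_addr = memstart_addr + index * SZ_64M + offset;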

We also check if we will overlap with some areas like the dtb area, the
initrd area or the crashkernel area. If we cannot find a proper area,
KASLR will be disabled and the kernel will boot from the original location.
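
The overlap test itself is a simple interval check; at its core (condensed
from the regions_overlap() helper added in patch 07):

	/* closed intervals [s1, e1] and [s2, e2] overlap iff neither
	 * one ends before the other begins
	 */
	static bool regions_overlap(u32 s1, u32 e1, u32 s2, u32 e2)
	{
		return e1 >= s2 && e2 >= s1;
	}

Each candidate range [pa, pa + kernel_size] is tested against the dtb,
initrd, crashkernel and reserved-memory regions, stepping down in 16K
increments until a free spot is found or the zone is exhausted.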

Changes since v6:
- Rename create_tlb_entry() to create_kaslr_tlb_entry()
- Remove MAS2_VAL since there are no more users.
- Move kaslr_booke.c to arch/powerpc/mm/nohash.
- Call flush_icache_range() after copying the kernel.
- Warn if no kaslr-seed is provided by the bootloader
- Use the right physical address when checking if the new position will overlap with other regions.
- Do not clear the BSS for the second pass because some global variables will not be initialized again
- Use tabs instead of spaces between the mnemonic and the arguments (in fsl_booke_entry_mapping.S).

Changes since v5:
- Rename M_IF_NEEDED to MAS2_M_IF_NEEDED
- Define some global variable as __ro_after_init
- Replace kimage_vaddr with kernstart_virt_addr
- Depend on RELOCATABLE, not select it
- Modify the comment block below the SPDX tag
- Remove some useless headers in kaslr_booke.c and move is_second_reloc
declaration to mmu_decl.h
- Remove DBG() and use pr_debug() and rewrite comment above get_boot_seed().
- Add a patch to document the KASLR implementation.
- Split a patch from patch #10 which exports kaslr offset in VMCOREINFO ELF notes.
- Remove extra logic around finding nokaslr string in cmdline.
- Make regions static global and __initdata

Changes since v4:
- Add Reviewed-by tag from Christophe
- Remove an unnecessary cast
- Remove unnecessary parenthesis
- Fix checkpatch warning

Changes since v3:
- Add Reviewed-by and Tested-by tag from Diana
- Change the comment in fsl_booke_entry_mapping.S to be consistent
with the new code.

Changes since v2:
- Remove unnecessary #ifdef
- Use SZ_64M instead of 0x4000000
- Call early_init_dt_scan_chosen() to init boot_command_line
- Rename kaslr_second_init() to kaslr_late_init()

Changes since v1:
- Remove some useless 'extern' keyword.
- Replace EXPORT_SYMBOL with EXPORT_SYMBOL_GPL
- Improve some assembly code
- Use memzero_explicit instead of memset
- Use boot_command_line and remove early_command_line
- Do not print kaslr offset if kaslr is disabled

Jason Yan (12):
powerpc: unify definition of M_IF_NEEDED
powerpc: move memstart_addr and kernstart_addr to init-common.c
powerpc: introduce kernstart_virt_addr to store the kernel base
powerpc/fsl_booke/32: introduce create_kaslr_tlb_entry() helper
powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
powerpc/fsl_booke/32: implement KASLR infrastructure
powerpc/fsl_booke/32: randomize the kernel image offset
powerpc/fsl_booke/kaslr: clear the original kernel if randomized
powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
powerpc/fsl_booke/kaslr: dump out kernel offset information on panic
powerpc/fsl_booke/kaslr: export offset in VMCOREINFO ELF notes
powerpc/fsl_booke/32: Document KASLR implementation

 Documentation/powerpc/kaslr-booke32.rst       |  42 ++
 arch/powerpc/Kconfig                          |  11 +
 arch/powerpc/include/asm/nohash/mmu-book3e.h  |  11 +-
 arch/powerpc/include/asm/page.h               |   7 +
 arch/powerpc/kernel/early_32.c                |   5 +-
 arch/powerpc/kernel/exceptions-64e.S          |  12 +-
 arch/powerpc/kernel/fsl_booke_entry_mapping.S |  25 +-
 arch/powerpc/kernel/head_fsl_booke.S          |  61 ++-
 arch/powerpc/kernel/machine_kexec.c           |   1 +
 arch/powerpc/kernel/misc_64.S                 |   7 +-
 arch/powerpc/kernel/setup-common.c            |  20 +
 arch/powerpc/mm/init-common.c                 |   7 +
 arch/powerpc/mm/init_32.c                     |   5 -
 arch/powerpc/mm/init_64.c                     |   5 -
 arch/powerpc/mm/mmu_decl.h                    |  11 +
 arch/powerpc/mm/nohash/Makefile               |   1 +
 arch/powerpc/mm/nohash/fsl_booke.c            |   8 +-
 arch/powerpc/mm/nohash/kaslr_booke.c          | 401 ++++++++++++++++++
 18 files changed, 587 insertions(+), 53 deletions(-)
 create mode 100644 Documentation/powerpc/kaslr-booke32.rst
 create mode 100644 arch/powerpc/mm/nohash/kaslr_booke.c

--
2.17.2


2019-09-21 06:41:01

by Jason Yan

Subject: [PATCH v7 06/12] powerpc/fsl_booke/32: implement KASLR infrastructure

This patch adds support for booting the kernel from places other than
KERNELBASE. Since CONFIG_RELOCATABLE is already supported, all we need to
do is map or copy the kernel to a proper place and relocate. Freescale
Book-E parts expect lowmem to be mapped by fixed TLB entries (TLB1). The
TLB1 entries are not suitable for mapping the kernel directly in a
randomized region, so we choose to copy the kernel to a proper place and
restart to relocate.

The offset of the kernel is not randomized yet (a fixed 64M offset is
used). We will randomize it in the next patch.
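
For readers following the control flow, "restart to relocate" amounts to
two passes through relocate_init(); roughly (a simplified sketch of the
flow set up in the diff below):

	/* pass 1: running at the old address */
	relocate_init(dt_ptr, start)
	    kaslr_early_init(dt, size)       /* pick offset, set is_second_reloc */
	        create_kaslr_tlb_entry(...)  /* map the target 64M if needed */
	        memcpy(); flush_icache_range();
	        reloc_kernel_entry(...)      /* restart at the copied image */

	/* pass 2: running at the new address, is_second_reloc == 1 */
	relocate_init(dt_ptr, start)
	    /* recompute virt_phys_offset and continue booting */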

Signed-off-by: Jason Yan <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
Tested-by: Diana Craciun <[email protected]>
Reviewed-by: Christophe Leroy <[email protected]>
---
 arch/powerpc/Kconfig                          | 11 ++++
 arch/powerpc/include/asm/nohash/mmu-book3e.h  |  1 -
 arch/powerpc/kernel/early_32.c                |  5 +-
 arch/powerpc/kernel/fsl_booke_entry_mapping.S | 15 +++--
 arch/powerpc/kernel/head_fsl_booke.S          | 13 +++-
 arch/powerpc/mm/mmu_decl.h                    |  7 +++
 arch/powerpc/mm/nohash/Makefile               |  1 +
 arch/powerpc/mm/nohash/fsl_booke.c            |  7 ++-
 arch/powerpc/mm/nohash/kaslr_booke.c          | 62 +++++++++++++++++++
 9 files changed, 106 insertions(+), 16 deletions(-)
 create mode 100644 arch/powerpc/mm/nohash/kaslr_booke.c

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index d8dcd8820369..4845d572b00f 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -547,6 +547,17 @@ config RELOCATABLE
setting can still be useful to bootwrappers that need to know the
load address of the kernel (eg. u-boot/mkimage).

+config RANDOMIZE_BASE
+ bool "Randomize the address of the kernel image"
+ depends on (FSL_BOOKE && FLATMEM && PPC32)
+ depends on RELOCATABLE
+ help
+ Randomizes the virtual address at which the kernel image is
+ loaded, as a security feature that deters exploit attempts
+ relying on knowledge of the location of kernel internals.
+
+ If unsure, say Y.
+
config RELOCATABLE_TEST
bool "Test relocatable kernel"
depends on (PPC64 && RELOCATABLE)
diff --git a/arch/powerpc/include/asm/nohash/mmu-book3e.h b/arch/powerpc/include/asm/nohash/mmu-book3e.h
index fa3efc2d310f..b41004664312 100644
--- a/arch/powerpc/include/asm/nohash/mmu-book3e.h
+++ b/arch/powerpc/include/asm/nohash/mmu-book3e.h
@@ -75,7 +75,6 @@
#define MAS2_E 0x00000001
#define MAS2_WIMGE_MASK 0x0000001f
#define MAS2_EPN_MASK(size) (~0 << (size + 10))
-#define MAS2_VAL(addr, size, flags) ((addr) & MAS2_EPN_MASK(size) | (flags))

#define MAS3_RPN 0xFFFFF000
#define MAS3_U0 0x00000200
diff --git a/arch/powerpc/kernel/early_32.c b/arch/powerpc/kernel/early_32.c
index 3482118ffe76..6f8689d7ca7b 100644
--- a/arch/powerpc/kernel/early_32.c
+++ b/arch/powerpc/kernel/early_32.c
@@ -22,7 +22,8 @@ notrace unsigned long __init early_init(unsigned long dt_ptr)
unsigned long offset = reloc_offset();

/* First zero the BSS */
- memset(PTRRELOC(&__bss_start), 0, __bss_stop - __bss_start);
+ if (kernstart_virt_addr == KERNELBASE)
+ memset(PTRRELOC(&__bss_start), 0, __bss_stop - __bss_start);

/*
* Identify the CPU type and fix up code sections
@@ -32,5 +33,5 @@ notrace unsigned long __init early_init(unsigned long dt_ptr)

apply_feature_fixups();

- return KERNELBASE + offset;
+ return kernstart_virt_addr + offset;
}
diff --git a/arch/powerpc/kernel/fsl_booke_entry_mapping.S b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
index f4d3eaae54a9..8bccce6544b5 100644
--- a/arch/powerpc/kernel/fsl_booke_entry_mapping.S
+++ b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
@@ -155,23 +155,22 @@ skpinv: addi r6,r6,1 /* Increment */

#if defined(ENTRY_MAPPING_BOOT_SETUP)

-/* 6. Setup KERNELBASE mapping in TLB1[0] */
+/* 6. Setup kernstart_virt_addr mapping in TLB1[0] */
lis r6,0x1000 /* Set MAS0(TLBSEL) = TLB1(1), ESEL = 0 */
mtspr SPRN_MAS0,r6
lis r6,(MAS1_VALID|MAS1_IPROT)@h
ori r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
mtspr SPRN_MAS1,r6
- lis r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, MAS2_M_IF_NEEDED)@h
- ori r6,r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, MAS2_M_IF_NEEDED)@l
+ lis r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@h
+ ori r6,r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@l
+ and r6,r6,r20
+ ori r6,r6,MAS2_M_IF_NEEDED@l
mtspr SPRN_MAS2,r6
mtspr SPRN_MAS3,r8
tlbwe

-/* 7. Jump to KERNELBASE mapping */
- lis r6,(KERNELBASE & ~0xfff)@h
- ori r6,r6,(KERNELBASE & ~0xfff)@l
- rlwinm r7,r25,0,0x03ffffff
- add r6,r7,r6
+/* 7. Jump to kernstart_virt_addr mapping */
+ mr r6,r20

#elif defined(ENTRY_MAPPING_KEXEC_SETUP)
/*
diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index d9f599b01ff1..838d9d4650c7 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -155,6 +155,8 @@ _ENTRY(_start);
*/

_ENTRY(__early_start)
+ LOAD_REG_ADDR_PIC(r20, kernstart_virt_addr)
+ lwz r20,0(r20)

#define ENTRY_MAPPING_BOOT_SETUP
#include "fsl_booke_entry_mapping.S"
@@ -277,8 +279,8 @@ set_ivor:
ori r6, r6, swapper_pg_dir@l
lis r5, abatron_pteptrs@h
ori r5, r5, abatron_pteptrs@l
- lis r4, KERNELBASE@h
- ori r4, r4, KERNELBASE@l
+ lis r3, kernstart_virt_addr@ha
+ lwz r4, kernstart_virt_addr@l(r3)
stw r5, 0(r4) /* Save abatron_pteptrs at a fixed location */
stw r6, 0(r5)

@@ -1067,7 +1069,12 @@ __secondary_start:
mr r5,r25 /* phys kernel start */
rlwinm r5,r5,0,~0x3ffffff /* aligned 64M */
subf r4,r5,r4 /* memstart_addr - phys kernel start */
- li r5,0 /* no device tree */
+ lis r7,KERNELBASE@h
+ ori r7,r7,KERNELBASE@l
+ cmpw r20,r7 /* if kernstart_virt_addr != KERNELBASE, randomized */
+ beq 2f
+ li r4,0
+2: li r5,0 /* no device tree */
li r6,0 /* not boot cpu */
bl restore_to_as0

diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index 55e86a0bf562..a3a4937c0496 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -144,10 +144,17 @@ extern int switch_to_as1(void);
extern void restore_to_as0(int esel, int offset, void *dt_ptr, int bootcpu);
void create_kaslr_tlb_entry(int entry, unsigned long virt, phys_addr_t phys);
void reloc_kernel_entry(void *fdt, int addr);
+extern int is_second_reloc;
#endif
extern void loadcam_entry(unsigned int index);
extern void loadcam_multi(int first_idx, int num, int tmp_idx);

+#ifdef CONFIG_RANDOMIZE_BASE
+void kaslr_early_init(void *dt_ptr, phys_addr_t size);
+#else
+static inline void kaslr_early_init(void *dt_ptr, phys_addr_t size) {}
+#endif
+
struct tlbcam {
u32 MAS0;
u32 MAS1;
diff --git a/arch/powerpc/mm/nohash/Makefile b/arch/powerpc/mm/nohash/Makefile
index 33b6f6f29d3f..0424f6ce5bd8 100644
--- a/arch/powerpc/mm/nohash/Makefile
+++ b/arch/powerpc/mm/nohash/Makefile
@@ -8,6 +8,7 @@ obj-$(CONFIG_40x) += 40x.o
obj-$(CONFIG_44x) += 44x.o
obj-$(CONFIG_PPC_8xx) += 8xx.o
obj-$(CONFIG_PPC_FSL_BOOK3E) += fsl_booke.o
+obj-$(CONFIG_RANDOMIZE_BASE) += kaslr_booke.o
ifdef CONFIG_HUGETLB_PAGE
obj-$(CONFIG_PPC_FSL_BOOK3E) += book3e_hugetlbpage.o
endif
diff --git a/arch/powerpc/mm/nohash/fsl_booke.c b/arch/powerpc/mm/nohash/fsl_booke.c
index 556e3cd52a35..2dc27cf88add 100644
--- a/arch/powerpc/mm/nohash/fsl_booke.c
+++ b/arch/powerpc/mm/nohash/fsl_booke.c
@@ -263,7 +263,8 @@ void setup_initial_memory_limit(phys_addr_t first_memblock_base,
int __initdata is_second_reloc;
notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
{
- unsigned long base = KERNELBASE;
+ unsigned long base = kernstart_virt_addr;
+ phys_addr_t size;

kernstart_addr = start;
if (is_second_reloc) {
@@ -291,7 +292,7 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
start &= ~0x3ffffff;
base &= ~0x3ffffff;
virt_phys_offset = base - start;
- early_get_first_memblock_info(__va(dt_ptr), NULL);
+ early_get_first_memblock_info(__va(dt_ptr), &size);
/*
* We now get the memstart_addr, then we should check if this
* address is the same as what the PAGE_OFFSET map to now. If
@@ -316,6 +317,8 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
/* We should never reach here */
panic("Relocation error");
}
+
+ kaslr_early_init(__va(dt_ptr), size);
}
#endif
#endif
diff --git a/arch/powerpc/mm/nohash/kaslr_booke.c b/arch/powerpc/mm/nohash/kaslr_booke.c
new file mode 100644
index 000000000000..29c1567d8d40
--- /dev/null
+++ b/arch/powerpc/mm/nohash/kaslr_booke.c
@@ -0,0 +1,62 @@
+// SPDX-License-Identifier: GPL-2.0-only
+//
+// Copyright (C) 2019 Jason Yan <[email protected]>
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/mm.h>
+#include <linux/swap.h>
+#include <linux/stddef.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/memblock.h>
+#include <asm/pgalloc.h>
+#include <asm/prom.h>
+#include <mm/mmu_decl.h>
+
+static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size,
+ unsigned long kernel_sz)
+{
+ /* return a fixed offset of 64M for now */
+ return SZ_64M;
+}
+
+/*
+ * To see if we need to relocate the kernel to a random offset
+ * void *dt_ptr - address of the device tree
+ * phys_addr_t size - size of the first memory block
+ */
+notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
+{
+ unsigned long tlb_virt;
+ phys_addr_t tlb_phys;
+ unsigned long offset;
+ unsigned long kernel_sz;
+
+ kernel_sz = (unsigned long)_end - (unsigned long)_stext;
+
+ offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
+ if (offset == 0)
+ return;
+
+ kernstart_virt_addr += offset;
+ kernstart_addr += offset;
+
+ is_second_reloc = 1;
+
+ if (offset >= SZ_64M) {
+ tlb_virt = round_down(kernstart_virt_addr, SZ_64M);
+ tlb_phys = round_down(kernstart_addr, SZ_64M);
+
+ /* Create kernel map to relocate in */
+ create_kaslr_tlb_entry(1, tlb_virt, tlb_phys);
+ }
+
+ /* Copy the kernel to its new location and run */
+ memcpy((void *)kernstart_virt_addr, (void *)_stext, kernel_sz);
+ flush_icache_range(kernstart_virt_addr, kernstart_virt_addr + kernel_sz);
+
+ reloc_kernel_entry(dt_ptr, kernstart_virt_addr);
+}
--
2.17.2

2019-09-21 10:16:23

by Jason Yan

Subject: [PATCH v7 03/12] powerpc: introduce kernstart_virt_addr to store the kernel base

Now the kernel base is a fixed value - KERNELBASE. To support KASLR, we
need a variable to store the kernel base.

Signed-off-by: Jason Yan <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
Reviewed-by: Christophe Leroy <[email protected]>
Reviewed-by: Diana Craciun <[email protected]>
Tested-by: Diana Craciun <[email protected]>
---
 arch/powerpc/include/asm/page.h | 2 ++
 arch/powerpc/mm/init-common.c   | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 0d52f57fca04..4d32d1b561d6 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -315,6 +315,8 @@ void arch_free_page(struct page *page, int order);

struct vm_area_struct;

+extern unsigned long kernstart_virt_addr;
+
#include <asm-generic/memory_model.h>
#endif /* __ASSEMBLY__ */
#include <asm/slice.h>
diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index e223da482c0c..42ef7a6e6098 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -25,6 +25,8 @@ phys_addr_t memstart_addr __ro_after_init = (phys_addr_t)~0ull;
EXPORT_SYMBOL_GPL(memstart_addr);
phys_addr_t kernstart_addr __ro_after_init;
EXPORT_SYMBOL_GPL(kernstart_addr);
+unsigned long kernstart_virt_addr __ro_after_init = KERNELBASE;
+EXPORT_SYMBOL_GPL(kernstart_virt_addr);

static bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
static bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);
--
2.17.2

2019-09-21 10:57:55

by Jason Yan

Subject: [PATCH v7 07/12] powerpc/fsl_booke/32: randomize the kernel image offset

Now that we have the basic support for relocating the kernel to an
appropriate place, we can start to randomize the offset.

Entropy is derived from the banner and timer, which will change every
build and boot. This is not entirely safe, so additionally the bootloader
may pass entropy via the /chosen/kaslr-seed node in the device tree.

We will use the first 512M of low memory to randomize the kernel
image. The memory will be split into 64M zones. We will use the lower 8
bits of the entropy to decide the index of the 64M zone. Then we choose a
16K-aligned offset inside the 64M zone to put the kernel in.

We also check if we will overlap with some areas like the dtb area, the
initrd area or the crashkernel area. If we cannot find a proper area,
KASLR will be disabled and the kernel will boot from the original location.

Some pieces of code are derived from arch/x86/boot/compressed/kaslr.c or
arch/arm64/kernel/kaslr.c such as rotate_xor(). Credit goes to Kees and
Ard.

Signed-off-by: Jason Yan <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
Reviewed-by: Diana Craciun <[email protected]>
Tested-by: Diana Craciun <[email protected]>
Reviewed-by: Christophe Leroy <[email protected]>
---
arch/powerpc/mm/nohash/kaslr_booke.c | 325 ++++++++++++++++++++++++++-
1 file changed, 323 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/mm/nohash/kaslr_booke.c b/arch/powerpc/mm/nohash/kaslr_booke.c
index 29c1567d8d40..7b238fc2c8a9 100644
--- a/arch/powerpc/mm/nohash/kaslr_booke.c
+++ b/arch/powerpc/mm/nohash/kaslr_booke.c
@@ -12,15 +12,336 @@
#include <linux/init.h>
#include <linux/delay.h>
#include <linux/memblock.h>
+#include <linux/libfdt.h>
+#include <linux/crash_core.h>
#include <asm/pgalloc.h>
#include <asm/prom.h>
+#include <asm/kdump.h>
#include <mm/mmu_decl.h>
+#include <generated/compile.h>
+#include <generated/utsrelease.h>
+
+struct regions {
+ unsigned long pa_start;
+ unsigned long pa_end;
+ unsigned long kernel_size;
+ unsigned long dtb_start;
+ unsigned long dtb_end;
+ unsigned long initrd_start;
+ unsigned long initrd_end;
+ unsigned long crash_start;
+ unsigned long crash_end;
+ int reserved_mem;
+ int reserved_mem_addr_cells;
+ int reserved_mem_size_cells;
+};
+
+/* Simplified build-specific string for starting entropy. */
+static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@"
+ LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION;
+
+struct regions __initdata regions;
+
+static __init void kaslr_get_cmdline(void *fdt)
+{
+ int node = fdt_path_offset(fdt, "/chosen");
+
+ early_init_dt_scan_chosen(node, "chosen", 1, boot_command_line);
+}
+
+static unsigned long __init rotate_xor(unsigned long hash, const void *area,
+ size_t size)
+{
+ size_t i;
+ const unsigned long *ptr = area;
+
+ for (i = 0; i < size / sizeof(hash); i++) {
+ /* Rotate by odd number of bits and XOR. */
+ hash = (hash << ((sizeof(hash) * 8) - 7)) | (hash >> 7);
+ hash ^= ptr[i];
+ }
+
+ return hash;
+}
+
+/* Attempt to create a simple starting entropy. This can make it different for
+ * every build but it is still not enough. Stronger entropy should
+ * be added to make it change for every boot.
+ */
+static unsigned long __init get_boot_seed(void *fdt)
+{
+ unsigned long hash = 0;
+
+ hash = rotate_xor(hash, build_str, sizeof(build_str));
+ hash = rotate_xor(hash, fdt, fdt_totalsize(fdt));
+
+ return hash;
+}
+
+static __init u64 get_kaslr_seed(void *fdt)
+{
+ int node, len;
+ fdt64_t *prop;
+ u64 ret;
+
+ node = fdt_path_offset(fdt, "/chosen");
+ if (node < 0)
+ return 0;
+
+ prop = fdt_getprop_w(fdt, node, "kaslr-seed", &len);
+ if (!prop || len != sizeof(u64))
+ return 0;
+
+ ret = fdt64_to_cpu(*prop);
+ *prop = 0;
+ return ret;
+}
+
+static __init bool regions_overlap(u32 s1, u32 e1, u32 s2, u32 e2)
+{
+ return e1 >= s2 && e2 >= s1;
+}
+
+static __init bool overlaps_reserved_region(const void *fdt, u32 start,
+ u32 end)
+{
+ int subnode, len, i;
+ u64 base, size;
+
+ /* check for overlap with /memreserve/ entries */
+ for (i = 0; i < fdt_num_mem_rsv(fdt); i++) {
+ if (fdt_get_mem_rsv(fdt, i, &base, &size) < 0)
+ continue;
+ if (regions_overlap(start, end, base, base + size))
+ return true;
+ }
+
+ if (regions.reserved_mem < 0)
+ return false;
+
+ /* check for overlap with static reservations in /reserved-memory */
+ for (subnode = fdt_first_subnode(fdt, regions.reserved_mem);
+ subnode >= 0;
+ subnode = fdt_next_subnode(fdt, subnode)) {
+ const fdt32_t *reg;
+ u64 rsv_end;
+
+ len = 0;
+ reg = fdt_getprop(fdt, subnode, "reg", &len);
+ while (len >= (regions.reserved_mem_addr_cells +
+ regions.reserved_mem_size_cells)) {
+ base = fdt32_to_cpu(reg[0]);
+ if (regions.reserved_mem_addr_cells == 2)
+ base = (base << 32) | fdt32_to_cpu(reg[1]);
+
+ reg += regions.reserved_mem_addr_cells;
+ len -= 4 * regions.reserved_mem_addr_cells;
+
+ size = fdt32_to_cpu(reg[0]);
+ if (regions.reserved_mem_size_cells == 2)
+ size = (size << 32) | fdt32_to_cpu(reg[1]);
+
+ reg += regions.reserved_mem_size_cells;
+ len -= 4 * regions.reserved_mem_size_cells;
+
+ if (base >= regions.pa_end)
+ continue;
+
+ rsv_end = min(base + size, (u64)U32_MAX);
+
+ if (regions_overlap(start, end, base, rsv_end))
+ return true;
+ }
+ }
+ return false;
+}
+
+static __init bool overlaps_region(const void *fdt, u32 start,
+ u32 end)
+{
+ if (regions_overlap(start, end, __pa(_stext), __pa(_end)))
+ return true;
+
+ if (regions_overlap(start, end, regions.dtb_start,
+ regions.dtb_end))
+ return true;
+
+ if (regions_overlap(start, end, regions.initrd_start,
+ regions.initrd_end))
+ return true;
+
+ if (regions_overlap(start, end, regions.crash_start,
+ regions.crash_end))
+ return true;
+
+ return overlaps_reserved_region(fdt, start, end);
+}
+
+static void __init get_crash_kernel(void *fdt, unsigned long size)
+{
+#ifdef CONFIG_CRASH_CORE
+ unsigned long long crash_size, crash_base;
+ int ret;
+
+ ret = parse_crashkernel(boot_command_line, size, &crash_size,
+ &crash_base);
+ if (ret != 0 || crash_size == 0)
+ return;
+ if (crash_base == 0)
+ crash_base = KDUMP_KERNELBASE;
+
+ regions.crash_start = (unsigned long)crash_base;
+ regions.crash_end = (unsigned long)(crash_base + crash_size);
+
+ pr_debug("crash_base=0x%llx crash_size=0x%llx\n", crash_base, crash_size);
+#endif
+}
+
+static void __init get_initrd_range(void *fdt)
+{
+ u64 start, end;
+ int node, len;
+ const __be32 *prop;
+
+ node = fdt_path_offset(fdt, "/chosen");
+ if (node < 0)
+ return;
+
+ prop = fdt_getprop(fdt, node, "linux,initrd-start", &len);
+ if (!prop)
+ return;
+ start = of_read_number(prop, len / 4);
+
+ prop = fdt_getprop(fdt, node, "linux,initrd-end", &len);
+ if (!prop)
+ return;
+ end = of_read_number(prop, len / 4);
+
+ regions.initrd_start = (unsigned long)start;
+ regions.initrd_end = (unsigned long)end;
+
+ pr_debug("initrd_start=0x%llx initrd_end=0x%llx\n", start, end);
+}
+
+static __init unsigned long get_usable_address(const void *fdt,
+ unsigned long start,
+ unsigned long offset)
+{
+ unsigned long pa;
+ unsigned long pa_end;
+
+ for (pa = offset; (long)pa > (long)start; pa -= SZ_16K) {
+ pa_end = pa + regions.kernel_size;
+ if (overlaps_region(fdt, pa, pa_end))
+ continue;
+
+ return pa;
+ }
+ return 0;
+}
+
+static __init void get_cell_sizes(const void *fdt, int node, int *addr_cells,
+ int *size_cells)
+{
+ const int *prop;
+ int len;
+
+ /*
+ * Retrieve the #address-cells and #size-cells properties
+ * from the 'node', or use the default if not provided.
+ */
+ *addr_cells = *size_cells = 1;
+
+ prop = fdt_getprop(fdt, node, "#address-cells", &len);
+ if (len == 4)
+ *addr_cells = fdt32_to_cpu(*prop);
+ prop = fdt_getprop(fdt, node, "#size-cells", &len);
+ if (len == 4)
+ *size_cells = fdt32_to_cpu(*prop);
+}
+
+static unsigned long __init kaslr_legal_offset(void *dt_ptr, unsigned long index,
+ unsigned long offset)
+{
+ unsigned long koffset = 0;
+ unsigned long start;
+
+ while ((long)index >= 0) {
+ offset = memstart_addr + index * SZ_64M + offset;
+ start = memstart_addr + index * SZ_64M;
+ koffset = get_usable_address(dt_ptr, start, offset);
+ if (koffset)
+ break;
+ index--;
+ }
+
+ if (koffset != 0)
+ koffset -= memstart_addr;
+
+ return koffset;
+}

static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size,
unsigned long kernel_sz)
{
- /* return a fixed offset of 64M for now */
- return SZ_64M;
+ unsigned long offset, random;
+ unsigned long ram, linear_sz;
+ u64 seed;
+ unsigned long index;
+
+ kaslr_get_cmdline(dt_ptr);
+
+ random = get_boot_seed(dt_ptr);
+
+ seed = get_tb() << 32;
+ seed ^= get_tb();
+ random = rotate_xor(random, &seed, sizeof(seed));
+
+ /*
+ * Retrieve (and wipe) the seed from the FDT
+ */
+ seed = get_kaslr_seed(dt_ptr);
+ if (seed)
+ random = rotate_xor(random, &seed, sizeof(seed));
+ else
+ pr_warn("KASLR: No safe seed for randomizing the kernel base.\n");
+
+ ram = min_t(phys_addr_t, __max_low_memory, size);
+ ram = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM, true);
+ linear_sz = min_t(unsigned long, ram, SZ_512M);
+
+ /* If the linear size is smaller than 64M, do not randomize */
+ if (linear_sz < SZ_64M)
+ return 0;
+
+ /* check for a reserved-memory node and record its cell sizes */
+ regions.reserved_mem = fdt_path_offset(dt_ptr, "/reserved-memory");
+ if (regions.reserved_mem >= 0)
+ get_cell_sizes(dt_ptr, regions.reserved_mem,
+ &regions.reserved_mem_addr_cells,
+ &regions.reserved_mem_size_cells);
+
+ regions.pa_start = memstart_addr;
+ regions.pa_end = memstart_addr + linear_sz;
+ regions.dtb_start = __pa(dt_ptr);
+ regions.dtb_end = __pa(dt_ptr) + fdt_totalsize(dt_ptr);
+ regions.kernel_size = kernel_sz;
+
+ get_initrd_range(dt_ptr);
+ get_crash_kernel(dt_ptr, ram);
+
+ /*
+ * Decide which 64M zone we want to start in.
+ * Only use the low 8 bits of the random seed.
+ */
+ index = random & 0xFF;
+ index %= linear_sz / SZ_64M;
+
+ /* Decide offset inside 64M */
+ offset = random % (SZ_64M - kernel_sz);
+ offset = round_down(offset, SZ_16K);
+
+ return kaslr_legal_offset(dt_ptr, index, offset);
}

/*
--
2.17.2

2019-09-21 10:59:54

by Jason Yan

Subject: [PATCH v7 11/12] powerpc/fsl_booke/kaslr: export offset in VMCOREINFO ELF notes

Like other architectures such as x86 and arm64, include the KASLR offset
in the VMCOREINFO ELF notes to assist debugging. After this, we can use
the crash --kaslr option to parse a vmcore generated from a KASLR kernel.

Note: The crash tool needs to support --kaslr too.
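
For reference, kaslr_offset() used below is the helper added by the
previous patch in this series; a sketch of what it amounts to (per patch
10, which adds it to arch/powerpc/include/asm/page.h):

	static inline unsigned long kaslr_offset(void)
	{
		return kernstart_virt_addr - KERNELBASE;
	}

A vmcore from a randomized kernel can then be opened with something like
"crash --kaslr=<offset> vmlinux vmcore".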

Signed-off-by: Jason Yan <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
---
arch/powerpc/kernel/machine_kexec.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
index c4ed328a7b96..078fe3d76feb 100644
--- a/arch/powerpc/kernel/machine_kexec.c
+++ b/arch/powerpc/kernel/machine_kexec.c
@@ -86,6 +86,7 @@ void arch_crash_save_vmcoreinfo(void)
VMCOREINFO_STRUCT_SIZE(mmu_psize_def);
VMCOREINFO_OFFSET(mmu_psize_def, shift);
#endif
+ vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
}

/*
--
2.17.2

2019-09-21 11:27:29

by Jason Yan

Subject: [PATCH v7 08/12] powerpc/fsl_booke/kaslr: clear the original kernel if randomized

The original kernel image still exists in memory after relocation; clear
it now.

Signed-off-by: Jason Yan <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
Reviewed-by: Christophe Leroy <[email protected]>
Reviewed-by: Diana Craciun <[email protected]>
Tested-by: Diana Craciun <[email protected]>
---
 arch/powerpc/mm/mmu_decl.h           |  2 ++
 arch/powerpc/mm/nohash/fsl_booke.c   |  1 +
 arch/powerpc/mm/nohash/kaslr_booke.c | 11 +++++++++++
3 files changed, 14 insertions(+)

diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index a3a4937c0496..565731475a07 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -151,8 +151,10 @@ extern void loadcam_multi(int first_idx, int num, int tmp_idx);

#ifdef CONFIG_RANDOMIZE_BASE
void kaslr_early_init(void *dt_ptr, phys_addr_t size);
+void kaslr_late_init(void);
#else
static inline void kaslr_early_init(void *dt_ptr, phys_addr_t size) {}
+static inline void kaslr_late_init(void) {}
#endif

struct tlbcam {
diff --git a/arch/powerpc/mm/nohash/fsl_booke.c b/arch/powerpc/mm/nohash/fsl_booke.c
index 2dc27cf88add..b4eb06ceb189 100644
--- a/arch/powerpc/mm/nohash/fsl_booke.c
+++ b/arch/powerpc/mm/nohash/fsl_booke.c
@@ -269,6 +269,7 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
kernstart_addr = start;
if (is_second_reloc) {
virt_phys_offset = PAGE_OFFSET - memstart_addr;
+ kaslr_late_init();
return;
}

diff --git a/arch/powerpc/mm/nohash/kaslr_booke.c b/arch/powerpc/mm/nohash/kaslr_booke.c
index 7b238fc2c8a9..aa1b60c782e7 100644
--- a/arch/powerpc/mm/nohash/kaslr_booke.c
+++ b/arch/powerpc/mm/nohash/kaslr_booke.c
@@ -381,3 +381,14 @@ notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)

reloc_kernel_entry(dt_ptr, kernstart_virt_addr);
}
+
+void __init kaslr_late_init(void)
+{
+ /* If randomized, clear the original kernel */
+ if (kernstart_virt_addr != KERNELBASE) {
+ unsigned long kernel_sz;
+
+ kernel_sz = (unsigned long)_end - kernstart_virt_addr;
+ memzero_explicit((void *)KERNELBASE, kernel_sz);
+ }
+}
--
2.17.2

2019-09-26 00:41:49

by Jason Yan

Subject: Re: [PATCH v7 00/12] implement KASLR for powerpc/fsl_booke/32

Hi Scott,

Can you test v7 to see if it works to load a kernel at a non-zero address?

Thanks,

On 2019/9/20 17:45, Jason Yan wrote:
> [...]

2019-10-09 06:13:27

by Jason Yan

Subject: Re: [PATCH v7 00/12] implement KASLR for powerpc/fsl_booke/32

Hi Scott,

Would you please take some time to test this?

Thank you so much.

On 2019/9/24 13:52, Jason Yan wrote:
> Hi Scott,
>
> Can you test v7 to see if it works to load a kernel at a non-zero address?
>
> Thanks,
>
> On 2019/9/20 17:45, Jason Yan wrote:
>> [...]

2019-10-09 07:24:08

by Crystal Wood

Subject: Re: [PATCH v7 00/12] implement KASLR for powerpc/fsl_booke/32

On Wed, 2019-10-09 at 14:10 +0800, Jason Yan wrote:
> Hi Scott,
>
> Would you please take some time to test this?
>
> Thank you so much.
>
> On 2019/9/24 13:52, Jason Yan wrote:
> > Hi Scott,
> >
> > Can you test v7 to see if it works to load a kernel at a non-zero address?
> >
> > Thanks,

Sorry for the delay. Here's the output:

## Booting kernel from Legacy Image at 10000000 ...
Image Name: Linux-5.4.0-rc2-00050-g8ac2cf5b4
Image Type: PowerPC Linux Kernel Image (gzip compressed)
Data Size: 7521134 Bytes = 7.2 MiB
Load Address: 04000000
Entry Point: 04000000
Verifying Checksum ... OK
## Flattened Device Tree blob at 1fc00000
Booting using the fdt blob at 0x1fc00000
Uncompressing Kernel Image ... OK
Loading Device Tree to 07fe0000, end 07fff65c ... OK
KASLR: No safe seed for randomizing the kernel base.
OF: reserved mem: initialized node qman-fqd, compatible id fsl,qman-fqd
OF: reserved mem: initialized node qman-pfdr, compatible id fsl,qman-pfdr
OF: reserved mem: initialized node bman-fbpr, compatible id fsl,bman-fbpr
Memory CAM mapping: 64/64/64 Mb, residual: 12032Mb
Linux version 5.4.0-rc2-00050-g8ac2cf5b4e4a-dirty (scott@snotra) (gcc version 8.
1.0 (GCC)) #26 SMP Wed Oct 9 01:50:40 CDT 2019
Using CoreNet Generic machine description
printk: bootconsole [udbg0] enabled
CPU maps initialized for 1 thread per core
-----------------------------------------------------
phys_mem_size = 0x2fc000000
dcache_bsize = 0x40
icache_bsize = 0x40
cpu_features = 0x00000000000003b4
possible = 0x00000000010103bc
always = 0x0000000000000020
cpu_user_features = 0x8c008000 0x08000000
mmu_features = 0x000a0010
physical_start = 0xc7c4000
-----------------------------------------------------
CoreNet Generic board
mpc85xx_qe_init: Could not find Quicc Engine node
barrier-nospec: using isync; sync as speculation barrier
Zone ranges:
Normal [mem 0x0000000004000000-0x000000000fffffff]
HighMem [mem 0x0000000010000000-0x00000002ffffffff]
Movable zone start for each node
Early memory node ranges
node 0: [mem 0x0000000004000000-0x00000002ffffffff]
Initmem setup node 0 [mem 0x0000000004000000-0x00000002ffffffff]
Kernel panic - not syncing: Failed to allocate 125173760 bytes for node 0 memory
map
CPU: 0 PID: 0 Comm: swapper Not tainted 5.4.0-rc2-00050-g8ac2cf5b4e4a-dirty #26
Call Trace:
[c989fe10] [c924bfb0] dump_stack+0x84/0xb4 (unreliable)
[c989fe30] [c880badc] panic+0x140/0x334
[c989fe90] [c89a1144] alloc_node_mem_map.constprop.117+0xa0/0x11c
[c989feb0] [c95481c4] free_area_init_node+0x314/0x5b8
[c989ff30] [c9548b34] free_area_init_nodes+0x57c/0x5c0
[c989ff80] [c952cbb4] setup_arch+0x250/0x270
[c989ffa0] [c95278e0] start_kernel+0x74/0x4e8
[c989fff0] [c87c4478] set_ivor+0x150/0x18c
Kernel Offset: 0x87c4000 from 0xc0000000
Rebooting in 180 seconds..

-Scott


2019-10-09 08:44:58

by Jason Yan

Subject: Re: [PATCH v7 00/12] implement KASLR for powerpc/fsl_booke/32

Hi Scott,

On 2019/10/9 15:13, Scott Wood wrote:
> [...]
>
> Sorry for the delay. Here's the output:
>

Thanks for the test.

> ## Booting kernel from Legacy Image at 10000000 ...
> Image Name: Linux-5.4.0-rc2-00050-g8ac2cf5b4
> Image Type: PowerPC Linux Kernel Image (gzip compressed)
> Data Size: 7521134 Bytes = 7.2 MiB
> Load Address: 04000000
> Entry Point: 04000000
> Verifying Checksum ... OK
> ## Flattened Device Tree blob at 1fc00000
> Booting using the fdt blob at 0x1fc00000
> Uncompressing Kernel Image ... OK
> Loading Device Tree to 07fe0000, end 07fff65c ... OK
> KASLR: No safe seed for randomizing the kernel base.
> OF: reserved mem: initialized node qman-fqd, compatible id fsl,qman-fqd
> OF: reserved mem: initialized node qman-pfdr, compatible id fsl,qman-pfdr
> OF: reserved mem: initialized node bman-fbpr, compatible id fsl,bman-fbpr
> Memory CAM mapping: 64/64/64 Mb, residual: 12032Mb

When booting from 0x04000000, the max CAM size is 64M. And since you have
a board with 12G of memory, CONFIG_LOWMEM_CAM_NUM=3 means only 192M of
memory is mapped, and when the kernel is randomized in the middle of this
192M, we will not have enough contiguous memory for the node map.

Can you set CONFIG_LOWMEM_CAM_NUM=8 and see if it works?
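
(As a rough check of the arithmetic: booting at 0x04000000 limits each CAM
entry to 64M, so 8 entries map 8 * 64M = 512M of lowmem, which covers the
whole 512M window the randomization draws from.)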

Thanks.


2019-10-09 18:54:19

by Crystal Wood

Subject: Re: [PATCH v7 00/12] implement KASLR for powerpc/fsl_booke/32

On Wed, 2019-10-09 at 16:41 +0800, Jason Yan wrote:
> Hi Scott,
>
> On 2019/10/9 15:13, Scott Wood wrote:
> > [...]
>
> Can you set CONFIG_LOWMEM_CAM_NUM=8 and see if it works?

OK, that worked.

-Scott


2019-10-21 03:36:33

by Jason Yan

Subject: Re: [PATCH v7 00/12] implement KASLR for powerpc/fsl_booke/32



On 2019/10/10 2:46, Scott Wood wrote:
> On Wed, 2019-10-09 at 16:41 +0800, Jason Yan wrote:
>> [...]
>> Can you set CONFIG_LOWMEM_CAM_NUM=8 and see if it works?
>
> OK, that worked.
>

Hi Scott, are there any more cases that should be tested, or any more comments?
What else needs to be done before this feature can be merged?

Thanks,
Jason


2019-10-23 00:21:14

by Crystal Wood

Subject: Re: [PATCH v7 00/12] implement KASLR for powerpc/fsl_booke/32

On Mon, 2019-10-21 at 11:34 +0800, Jason Yan wrote:
> [...]
>
> Hi Scott, are there any more cases that should be tested, or any more comments?
> What else needs to be done before this feature can be merged?

I've just applied it and sent a pull request.

-Scott