2019-07-30 14:30:58

by Jason Yan

Subject: [PATCH v2 00/10] implement KASLR for powerpc/fsl_booke/32

This series implements KASLR for powerpc/fsl_booke/32, as a security
feature that deters exploit attempts relying on knowledge of the location
of kernel internals.

Since CONFIG_RELOCATABLE is already supported, all we need to do is
map or copy the kernel to a proper place and relocate. Freescale Book-E
parts expect lowmem to be mapped by fixed TLB entries (TLB1). The TLB1
entries are not suitable for mapping the kernel directly into a randomized
region, so we chose to copy the kernel to a proper place and restart to
relocate.

Entropy is derived from the banner and timer base, which change with every
build and boot. This is not particularly strong on its own, so additionally
the bootloader may pass entropy via the /chosen/kaslr-seed node in the
device tree.

We will use the first 512M of low memory to randomize the kernel
image. The memory is split into 64M zones. We use the lower 8 bits of
the entropy to decide the index of the 64M zone, then choose a 16K-aligned
offset inside that 64M zone to put the kernel at.

    KERNELBASE

        |-->   64M   <--|
        |               |
        +---------------+    +----------------+---------------+
        |               |....|    |kernel|    |               |
        +---------------+    +----------------+---------------+
        |                         |
        |----->   offset    <-----|

                              kimage_vaddr

We also check whether the chosen region would overlap with areas such as
the dtb area, the initrd area or the crashkernel area. If no suitable
region can be found, KASLR is disabled and the kernel boots from its
original location.

Changes since v1:
- Remove some useless 'extern' keywords.
- Replace EXPORT_SYMBOL with EXPORT_SYMBOL_GPL.
- Improve some assembly code.
- Use memzero_explicit instead of memset.
- Use boot_command_line and remove early_command_line.
- Do not print the kaslr offset if kaslr is disabled.

Jason Yan (10):
powerpc: unify definition of M_IF_NEEDED
powerpc: move memstart_addr and kernstart_addr to init-common.c
powerpc: introduce kimage_vaddr to store the kernel base
powerpc/fsl_booke/32: introduce create_tlb_entry() helper
powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
powerpc/fsl_booke/32: implement KASLR infrastructure
powerpc/fsl_booke/32: randomize the kernel image offset
powerpc/fsl_booke/kaslr: clear the original kernel if randomized
powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
powerpc/fsl_booke/kaslr: dump out kernel offset information on panic

arch/powerpc/Kconfig | 11 +
arch/powerpc/include/asm/nohash/mmu-book3e.h | 10 +
arch/powerpc/include/asm/page.h | 7 +
arch/powerpc/kernel/Makefile | 1 +
arch/powerpc/kernel/early_32.c | 2 +-
arch/powerpc/kernel/exceptions-64e.S | 10 -
arch/powerpc/kernel/fsl_booke_entry_mapping.S | 23 +-
arch/powerpc/kernel/head_fsl_booke.S | 57 ++-
arch/powerpc/kernel/kaslr_booke.c | 439 ++++++++++++++++++
arch/powerpc/kernel/machine_kexec.c | 1 +
arch/powerpc/kernel/misc_64.S | 5 -
arch/powerpc/kernel/setup-common.c | 19 +
arch/powerpc/mm/init-common.c | 7 +
arch/powerpc/mm/init_32.c | 5 -
arch/powerpc/mm/init_64.c | 5 -
arch/powerpc/mm/mmu_decl.h | 10 +
arch/powerpc/mm/nohash/fsl_booke.c | 8 +-
17 files changed, 572 insertions(+), 48 deletions(-)
create mode 100644 arch/powerpc/kernel/kaslr_booke.c

--
2.17.2


2019-07-30 14:31:39

by Jason Yan

Subject: [PATCH v2 05/10] powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper

Add a new helper reloc_kernel_entry() to jump back to the start of the
new kernel. After we put the new kernel in a randomized place, we can use
this helper to enter the kernel and start relocation again.

Signed-off-by: Jason Yan <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
---
arch/powerpc/kernel/head_fsl_booke.S | 13 +++++++++++++
arch/powerpc/mm/mmu_decl.h | 1 +
2 files changed, 14 insertions(+)

diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index 04d124fee17d..2083382dd662 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -1143,6 +1143,19 @@ _GLOBAL(create_tlb_entry)
sync
blr

+/*
+ * Return to the start of the relocated kernel and run again
+ * r3 - virtual address of fdt
+ * r4 - entry of the kernel
+ */
+_GLOBAL(reloc_kernel_entry)
+ mfmsr r7
+ rlwinm r7, r7, 0, ~(MSR_IS | MSR_DS)
+
+ mtspr SPRN_SRR0,r4
+ mtspr SPRN_SRR1,r7
+ rfi
+
/*
* Create a tlb entry with the same effective and physical address as
* the tlb entry used by the current running code. But set the TS to 1.
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index a09f89d3aa0f..804da298beb3 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -143,6 +143,7 @@ extern void adjust_total_lowmem(void);
extern int switch_to_as1(void);
extern void restore_to_as0(int esel, int offset, void *dt_ptr, int bootcpu);
void create_tlb_entry(phys_addr_t phys, unsigned long virt, int entry);
+void reloc_kernel_entry(void *fdt, int addr);
#endif
extern void loadcam_entry(unsigned int index);
extern void loadcam_multi(int first_idx, int num, int tmp_idx);
--
2.17.2

2019-07-30 14:31:42

by Jason Yan

Subject: [PATCH v2 02/10] powerpc: move memstart_addr and kernstart_addr to init-common.c

These two variables are both defined in init_32.c and init_64.c. Move
them to init-common.c.

Signed-off-by: Jason Yan <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
---
arch/powerpc/mm/init-common.c | 5 +++++
arch/powerpc/mm/init_32.c | 5 -----
arch/powerpc/mm/init_64.c | 5 -----
3 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index a84da92920f7..152ae0d21435 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -21,6 +21,11 @@
#include <asm/pgtable.h>
#include <asm/kup.h>

+phys_addr_t memstart_addr = (phys_addr_t)~0ull;
+EXPORT_SYMBOL_GPL(memstart_addr);
+phys_addr_t kernstart_addr;
+EXPORT_SYMBOL_GPL(kernstart_addr);
+
static bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
static bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);

diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index b04896a88d79..872df48ae41b 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -56,11 +56,6 @@
phys_addr_t total_memory;
phys_addr_t total_lowmem;

-phys_addr_t memstart_addr = (phys_addr_t)~0ull;
-EXPORT_SYMBOL(memstart_addr);
-phys_addr_t kernstart_addr;
-EXPORT_SYMBOL(kernstart_addr);
-
#ifdef CONFIG_RELOCATABLE
/* Used in __va()/__pa() */
long long virt_phys_offset;
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index a44f6281ca3a..c836f1269ee7 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -63,11 +63,6 @@

#include <mm/mmu_decl.h>

-phys_addr_t memstart_addr = ~0;
-EXPORT_SYMBOL_GPL(memstart_addr);
-phys_addr_t kernstart_addr;
-EXPORT_SYMBOL_GPL(kernstart_addr);
-
#ifdef CONFIG_SPARSEMEM_VMEMMAP
/*
* Given an address within the vmemmap, determine the pfn of the page that
--
2.17.2