2019-07-17 07:51:36

by Jason Yan

Subject: [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32

This series implements KASLR for powerpc/fsl_booke/32, as a security
feature that deters exploit attempts relying on knowledge of the location
of kernel internals.

Since CONFIG_RELOCATABLE is already supported, all we need to do is
map or copy the kernel to a proper place and relocate it. Freescale Book-E
parts expect lowmem to be mapped by fixed TLB entries (TLB1). The TLB1
entries are not suitable for mapping the kernel directly in a randomized
region, so we choose to copy the kernel to a proper place and restart to
relocate.

Entropy is derived from the banner and timer base, which will change every
build and boot. This is not entirely safe, so additionally the bootloader
may pass entropy via the /chosen/kaslr-seed node in the device tree.
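
For illustration, the mix could look roughly like the following (a
hypothetical sketch, not the exact code in kaslr_booke.c; get_tb() reads
the time base and linux_banner is the build banner string):

	static u64 __init mix_boot_entropy(u64 kaslr_seed)
	{
		const char *p = linux_banner;
		u64 entropy = 0;

		/* The banner string changes with every build. */
		while (*p)
			entropy = entropy * 33 + *p++;

		/* The time base changes with every boot. */
		entropy ^= get_tb();

		/* Optional seed from /chosen/kaslr-seed, if the
		 * bootloader provided one. */
		entropy ^= kaslr_seed;

		return entropy;
	}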

We will use the first 512M of the low memory to randomize the kernel
image. The memory will be split into 64M zones. We will use the lower 8
bits of the entropy to decide the index of the 64M zone. Then we choose
a 16K-aligned offset inside the 64M zone to put the kernel in.

KERNELBASE

    |-->   64M   <--|
    |               |
    +---------------+    +----------------+---------------+
    |               |....|    |kernel|    |               |
    +---------------+    +----------------+---------------+
    |                         |
    |----->   offset    <-----|

                              kimage_vaddr

We also check if we will overlap with some areas like the dtb area, the
initrd area or the crashkernel area. If we cannot find a proper area,
KASLR will be disabled and the kernel will boot from the original
KERNELBASE.
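
Put together, the selection logic amounts to roughly the following (a
simplified sketch with a hypothetical helper name; the real code must also
run position-independent and reject candidates overlapping the areas
above):

	/* Pick a 64M zone, then a 16K-aligned offset inside it,
	 * assuming the kernel fits in a single 64M zone. */
	static unsigned long __init pick_kernel_offset(u64 entropy,
						       unsigned long kernel_sz)
	{
		unsigned long zone, offset;

		/* The lower bits of the entropy pick one of the
		 * 512M / 64M = 8 zones. */
		zone = (entropy & 0xff) % (SZ_512M / SZ_64M);

		/* A 16K-aligned offset inside the zone, keeping the
		 * whole kernel within one 64M TLB1 mapping. */
		offset = ((entropy >> 8) % (SZ_64M - kernel_sz)) &
			 ~(SZ_16K - 1);

		return zone * SZ_64M + offset;
	}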

Jason Yan (10):
powerpc: unify definition of M_IF_NEEDED
powerpc: move memstart_addr and kernstart_addr to init-common.c
powerpc: introduce kimage_vaddr to store the kernel base
powerpc/fsl_booke/32: introduce create_tlb_entry() helper
powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
powerpc/fsl_booke/32: implement KASLR infrastructure
powerpc/fsl_booke/32: randomize the kernel image offset
powerpc/fsl_booke/kaslr: clear the original kernel if randomized
powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
powerpc/fsl_booke/kaslr: dump out kernel offset information on panic

arch/powerpc/Kconfig | 11 +
arch/powerpc/include/asm/nohash/mmu-book3e.h | 10 +
arch/powerpc/include/asm/page.h | 7 +
arch/powerpc/kernel/Makefile | 1 +
arch/powerpc/kernel/early_32.c | 2 +-
arch/powerpc/kernel/exceptions-64e.S | 10 -
arch/powerpc/kernel/fsl_booke_entry_mapping.S | 23 +-
arch/powerpc/kernel/head_fsl_booke.S | 61 ++-
arch/powerpc/kernel/kaslr_booke.c | 439 ++++++++++++++++++
arch/powerpc/kernel/machine_kexec.c | 1 +
arch/powerpc/kernel/misc_64.S | 5 -
arch/powerpc/kernel/setup-common.c | 23 +
arch/powerpc/mm/init-common.c | 7 +
arch/powerpc/mm/init_32.c | 5 -
arch/powerpc/mm/init_64.c | 5 -
arch/powerpc/mm/mmu_decl.h | 10 +
arch/powerpc/mm/nohash/fsl_booke.c | 8 +-
17 files changed, 580 insertions(+), 48 deletions(-)
create mode 100644 arch/powerpc/kernel/kaslr_booke.c

--
2.17.2


2019-07-17 07:52:13

by Jason Yan

Subject: [RFC PATCH 06/10] powerpc/fsl_booke/32: implement KASLR infrastructure

This patch adds support for booting the kernel from places other than
KERNELBASE. Since CONFIG_RELOCATABLE is already supported, all we need
to do is map or copy the kernel to a proper place and relocate it.
Freescale Book-E parts expect lowmem to be mapped by fixed TLB entries
(TLB1). The TLB1 entries are not suitable for mapping the kernel
directly in a randomized region, so we choose to copy the kernel to a
proper place and restart to relocate.

The offset of the kernel is not randomized yet (a fixed 64M is set). We
will randomize it in the next patch.
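
Schematically, the resulting boot flow is (a sketch of the sequence, not
literal code):

	/*
	 * 1. First boot at KERNELBASE: relocate_init() calls
	 *    kaslr_early_init(), which picks an offset, creates a 64M
	 *    TLB1 entry covering the target if needed, memcpy()s the
	 *    kernel there and jumps to it via reloc_kernel_entry().
	 * 2. Second boot at the new address: is_second_reloc is set,
	 *    so the kernel relocates itself to kimage_vaddr and
	 *    continues like a normal relocatable kernel.
	 */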

Signed-off-by: Jason Yan <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
---
arch/powerpc/Kconfig | 11 +++
arch/powerpc/kernel/Makefile | 1 +
arch/powerpc/kernel/early_32.c | 2 +-
arch/powerpc/kernel/fsl_booke_entry_mapping.S | 13 ++-
arch/powerpc/kernel/head_fsl_booke.S | 15 +++-
arch/powerpc/kernel/kaslr_booke.c | 83 +++++++++++++++++++
arch/powerpc/mm/mmu_decl.h | 6 ++
arch/powerpc/mm/nohash/fsl_booke.c | 7 +-
8 files changed, 125 insertions(+), 13 deletions(-)
create mode 100644 arch/powerpc/kernel/kaslr_booke.c

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index f516796dd819..3742df54bdc8 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -547,6 +547,17 @@ config RELOCATABLE
setting can still be useful to bootwrappers that need to know the
load address of the kernel (eg. u-boot/mkimage).

+config RANDOMIZE_BASE
+ bool "Randomize the address of the kernel image"
+ depends on (FSL_BOOKE && FLATMEM && PPC32)
+ select RELOCATABLE
+ help
+ Randomizes the virtual address at which the kernel image is
+ loaded, as a security feature that deters exploit attempts
+ relying on knowledge of the location of kernel internals.
+
+ If unsure, say N.
+
config RELOCATABLE_TEST
bool "Test relocatable kernel"
depends on (PPC64 && RELOCATABLE)
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 56dfa7a2a6f2..cf87a0921db4 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -105,6 +105,7 @@ extra-$(CONFIG_PPC_8xx) := head_8xx.o
extra-y += vmlinux.lds

obj-$(CONFIG_RELOCATABLE) += reloc_$(BITS).o
+obj-$(CONFIG_RANDOMIZE_BASE) += kaslr_booke.o

obj-$(CONFIG_PPC32) += entry_32.o setup_32.o early_32.o
obj-$(CONFIG_PPC64) += dma-iommu.o iommu.o
diff --git a/arch/powerpc/kernel/early_32.c b/arch/powerpc/kernel/early_32.c
index 3482118ffe76..fe8347cdc07d 100644
--- a/arch/powerpc/kernel/early_32.c
+++ b/arch/powerpc/kernel/early_32.c
@@ -32,5 +32,5 @@ notrace unsigned long __init early_init(unsigned long dt_ptr)

apply_feature_fixups();

- return KERNELBASE + offset;
+ return kimage_vaddr + offset;
}
diff --git a/arch/powerpc/kernel/fsl_booke_entry_mapping.S b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
index de0980945510..6d2967673ac7 100644
--- a/arch/powerpc/kernel/fsl_booke_entry_mapping.S
+++ b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
@@ -161,17 +161,16 @@ skpinv: addi r6,r6,1 /* Increment */
lis r6,(MAS1_VALID|MAS1_IPROT)@h
ori r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
mtspr SPRN_MAS1,r6
- lis r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, M_IF_NEEDED)@h
- ori r6,r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, M_IF_NEEDED)@l
- mtspr SPRN_MAS2,r6
+ lis r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@h
+ ori r6,r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@l
+ and r6,r6,r20
+ ori r6,r6,M_IF_NEEDED@l
+ mtspr SPRN_MAS2,r6
mtspr SPRN_MAS3,r8
tlbwe

/* 7. Jump to KERNELBASE mapping */
- lis r6,(KERNELBASE & ~0xfff)@h
- ori r6,r6,(KERNELBASE & ~0xfff)@l
- rlwinm r7,r25,0,0x03ffffff
- add r6,r7,r6
+ mr r6,r20

#elif defined(ENTRY_MAPPING_KEXEC_SETUP)
/*
diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index ce40f96dae20..d34933b0745a 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -155,6 +155,8 @@ _ENTRY(_start);
*/

_ENTRY(__early_start)
+ LOAD_REG_ADDR_PIC(r20, kimage_vaddr)
+ lwz r20,0(r20)

#define ENTRY_MAPPING_BOOT_SETUP
#include "fsl_booke_entry_mapping.S"
@@ -277,8 +279,8 @@ set_ivor:
ori r6, r6, swapper_pg_dir@l
lis r5, abatron_pteptrs@h
ori r5, r5, abatron_pteptrs@l
- lis r4, KERNELBASE@h
- ori r4, r4, KERNELBASE@l
+ lis r3, kimage_vaddr@ha
+ lwz r4, kimage_vaddr@l(r3)
stw r5, 0(r4) /* Save abatron_pteptrs at a fixed location */
stw r6, 0(r5)

@@ -1067,7 +1069,14 @@ __secondary_start:
mr r5,r25 /* phys kernel start */
rlwinm r5,r5,0,~0x3ffffff /* aligned 64M */
subf r4,r5,r4 /* memstart_addr - phys kernel start */
- li r5,0 /* no device tree */
+#ifdef CONFIG_RANDOMIZE_BASE
+ lis r7,KERNELBASE@h
+ ori r7,r7,KERNELBASE@l
+ cmpw r20,r7 /* if kimage_vaddr != KERNELBASE, randomized */
+ beq 2f
+ li r4,0
+#endif
+2: li r5,0 /* no device tree */
li r6,0 /* not boot cpu */
bl restore_to_as0

diff --git a/arch/powerpc/kernel/kaslr_booke.c b/arch/powerpc/kernel/kaslr_booke.c
new file mode 100644
index 000000000000..72d8e9432048
--- /dev/null
+++ b/arch/powerpc/kernel/kaslr_booke.c
@@ -0,0 +1,83 @@
+/*
+ * Copyright (C) 2019 Jason Yan <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/ptrace.h>
+#include <linux/mman.h>
+#include <linux/mm.h>
+#include <linux/swap.h>
+#include <linux/stddef.h>
+#include <linux/vmalloc.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/highmem.h>
+#include <linux/memblock.h>
+#include <asm/pgalloc.h>
+#include <asm/prom.h>
+#include <asm/io.h>
+#include <asm/mmu_context.h>
+#include <asm/pgtable.h>
+#include <asm/mmu.h>
+#include <linux/uaccess.h>
+#include <asm/smp.h>
+#include <asm/machdep.h>
+#include <asm/setup.h>
+#include <asm/paca.h>
+#include <mm/mmu_decl.h>
+
+extern int is_second_reloc;
+
+static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size,
+ unsigned long kernel_sz)
+{
+ /* return a fixed offset of 64M for now */
+ return 0x4000000;
+}
+
+/*
+ * To see if we need to relocate the kernel to a random offset
+ * void *dt_ptr - address of the device tree
+ * phys_addr_t size - size of the first memory block
+ */
+notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
+{
+ unsigned long tlb_virt;
+ phys_addr_t tlb_phys;
+ unsigned long offset;
+ unsigned long kernel_sz;
+
+ kernel_sz = (unsigned long)_end - KERNELBASE;
+
+ offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
+
+ if (offset == 0)
+ return;
+
+ kimage_vaddr += offset;
+ kernstart_addr += offset;
+
+ is_second_reloc = 1;
+
+ if (offset >= SZ_64M) {
+ tlb_virt = round_down(kimage_vaddr, SZ_64M);
+ tlb_phys = round_down(kernstart_addr, SZ_64M);
+
+ /* Create kernel map to relocate in */
+ create_tlb_entry(tlb_phys, tlb_virt, 1);
+ }
+
+ /* Copy the kernel to its new location and run */
+ memcpy((void *)kimage_vaddr, (void *)KERNELBASE, kernel_sz);
+
+ reloc_kernel_entry(dt_ptr, kimage_vaddr);
+}
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index dae8e9177574..754ae1e69f92 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -148,6 +148,12 @@ extern void reloc_kernel_entry(void *fdt, int addr);
extern void loadcam_entry(unsigned int index);
extern void loadcam_multi(int first_idx, int num, int tmp_idx);

+#ifdef CONFIG_RANDOMIZE_BASE
+extern void kaslr_early_init(void *dt_ptr, phys_addr_t size);
+#else
+static inline void kaslr_early_init(void *dt_ptr, phys_addr_t size) {}
+#endif
+
struct tlbcam {
u32 MAS0;
u32 MAS1;
diff --git a/arch/powerpc/mm/nohash/fsl_booke.c b/arch/powerpc/mm/nohash/fsl_booke.c
index 556e3cd52a35..8d25a8dc965f 100644
--- a/arch/powerpc/mm/nohash/fsl_booke.c
+++ b/arch/powerpc/mm/nohash/fsl_booke.c
@@ -263,7 +263,8 @@ void setup_initial_memory_limit(phys_addr_t first_memblock_base,
int __initdata is_second_reloc;
notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
{
- unsigned long base = KERNELBASE;
+ unsigned long base = kimage_vaddr;
+ phys_addr_t size;

kernstart_addr = start;
if (is_second_reloc) {
@@ -291,7 +292,7 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
start &= ~0x3ffffff;
base &= ~0x3ffffff;
virt_phys_offset = base - start;
- early_get_first_memblock_info(__va(dt_ptr), NULL);
+ early_get_first_memblock_info(__va(dt_ptr), &size);
/*
* We now get the memstart_addr, then we should check if this
* address is the same as what the PAGE_OFFSET map to now. If
@@ -316,6 +317,8 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
/* We should never reach here */
panic("Relocation error");
}
+
+ kaslr_early_init(__va(dt_ptr), size);
}
#endif
#endif
--
2.17.2

2019-07-17 07:52:33

by Jason Yan

Subject: [RFC PATCH 09/10] powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter

One may want to disable KASLR at boot time, so provide a cmdline parameter
'nokaslr' to support this.

Signed-off-by: Jason Yan <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
---
arch/powerpc/kernel/kaslr_booke.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)

diff --git a/arch/powerpc/kernel/kaslr_booke.c b/arch/powerpc/kernel/kaslr_booke.c
index 00339c05879f..e65a5d9d2ff1 100644
--- a/arch/powerpc/kernel/kaslr_booke.c
+++ b/arch/powerpc/kernel/kaslr_booke.c
@@ -373,6 +373,18 @@ static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size
return kaslr_offset;
}

+static inline __init bool kaslr_disabled(void)
+{
+ char *str;
+
+ str = strstr(early_command_line, "nokaslr");
+ if ((str == early_command_line) ||
+ (str > early_command_line && *(str - 1) == ' '))
+ return true;
+
+ return false;
+}
+
/*
* To see if we need to relocate the kernel to a random offset
* void *dt_ptr - address of the device tree
@@ -388,6 +400,8 @@ notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
kernel_sz = (unsigned long)_end - KERNELBASE;

kaslr_get_cmdline(dt_ptr);
+ if (kaslr_disabled())
+ return;

offset = kaslr_choose_location(dt_ptr, size, kernel_sz);

--
2.17.2

2019-07-17 07:52:35

by Jason Yan

Subject: [RFC PATCH 04/10] powerpc/fsl_booke/32: introduce create_tlb_entry() helper

Add a new helper create_tlb_entry() to create a tlb entry from the virtual
and physical addresses. This is a preparation to support booting the kernel
at a randomized address.
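
In C terms, the helper programs the Book-E MMU assist registers roughly as
follows (a sketch of the assembly below, not code from this patch; entry,
virt and phys are the helper's arguments):

	/* Rough C equivalent of create_tlb_entry() (sketch only). */
	mtspr(SPRN_MAS0, MAS0_TLBSEL(1) | MAS0_ESEL(entry));
	mtspr(SPRN_MAS1, MAS1_VALID | MAS1_IPROT |
			 MAS1_TSIZE(BOOK3E_PAGESZ_64M));
	mtspr(SPRN_MAS2, (virt & MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)) | MAS2_M);
	mtspr(SPRN_MAS3, phys | MAS3_SW | MAS3_SR | MAS3_SX);
	asm volatile("tlbwe; isync; sync");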

Signed-off-by: Jason Yan <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
---
arch/powerpc/kernel/head_fsl_booke.S | 30 ++++++++++++++++++++++++++++
arch/powerpc/mm/mmu_decl.h | 1 +
2 files changed, 31 insertions(+)

diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index adf0505dbe02..a57d44638031 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -1114,6 +1114,36 @@ __secondary_hold_acknowledge:
.long -1
#endif

+/*
+ * Create a 64M tlb by address and entry
+ * r3/r4 - physical address
+ * r5 - virtual address
+ * r6 - entry
+ */
+_GLOBAL(create_tlb_entry)
+ lis r7,0x1000 /* Set MAS0(TLBSEL) = 1 */
+ rlwimi r7,r6,16,4,15 /* Setup MAS0 = TLBSEL | ESEL(r6) */
+ mtspr SPRN_MAS0,r7 /* Write MAS0 */
+
+ lis r6,(MAS1_VALID|MAS1_IPROT)@h
+ ori r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
+ mtspr SPRN_MAS1,r6 /* Write MAS1 */
+
+ lis r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@h
+ ori r6,r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@l
+ and r6,r6,r5
+ ori r6,r6,MAS2_M@l
+ mtspr SPRN_MAS2,r6 /* Write MAS2(EPN) */
+
+ mr r8,r4
+ ori r8,r8,(MAS3_SW|MAS3_SR|MAS3_SX)
+ mtspr SPRN_MAS3,r8 /* Write MAS3(RPN) */
+
+ tlbwe /* Write TLB */
+ isync
+ sync
+ blr
+
/*
* Create a tlb entry with the same effective and physical address as
* the tlb entry used by the current running code. But set the TS to 1.
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index 32c1a191c28a..d7737cf97cee 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -142,6 +142,7 @@ extern unsigned long calc_cam_sz(unsigned long ram, unsigned long virt,
extern void adjust_total_lowmem(void);
extern int switch_to_as1(void);
extern void restore_to_as0(int esel, int offset, void *dt_ptr, int bootcpu);
+extern void create_tlb_entry(phys_addr_t phys, unsigned long virt, int entry);
#endif
extern void loadcam_entry(unsigned int index);
extern void loadcam_multi(int first_idx, int num, int tmp_idx);
--
2.17.2

2019-07-25 12:20:44

by Jason Yan

Subject: Re: [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32

Hi all, any comments?


On 2019/7/17 16:06, Jason Yan wrote:
> This series implements KASLR for powerpc/fsl_booke/32, as a security
> feature that deters exploit attempts relying on knowledge of the location
> of kernel internals.
>
> [...]


2019-07-25 19:59:39

by Kees Cook

Subject: Re: [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32

On Thu, Jul 25, 2019 at 03:16:28PM +0800, Jason Yan wrote:
> Hi all, any comments?

I'm a fan of it, but I don't know ppc internals well enough to sanely
review the code. :) Some comments below on design...

>
>
> On 2019/7/17 16:06, Jason Yan wrote:
> > This series implements KASLR for powerpc/fsl_booke/32, as a security
> > feature that deters exploit attempts relying on knowledge of the location
> > of kernel internals.
> >
> > Since CONFIG_RELOCATABLE is already supported, all we need to do is
> > map or copy the kernel to a proper place and relocate it. Freescale Book-E
> > parts expect lowmem to be mapped by fixed TLB entries (TLB1). The TLB1
> > entries are not suitable for mapping the kernel directly in a randomized
> > region, so we choose to copy the kernel to a proper place and restart to
> > relocate.
> >
> > Entropy is derived from the banner and timer base, which will change every
> > build and boot. This is not entirely safe, so additionally the bootloader
> > may pass entropy via the /chosen/kaslr-seed node in the device tree.

Good: adding kaslr-seed is a good step here. Are there any x86-like
RDRAND or RDTSC to use? (Or maybe the timer base here is similar to x86
RDTSC?)

> >
> > We will use the first 512M of the low memory to randomize the kernel
> > image. The memory will be split into 64M zones. We will use the lower 8
> > bits of the entropy to decide the index of the 64M zone. Then we choose
> > a 16K-aligned offset inside the 64M zone to put the kernel in.

Does this 16K granularity have any page table performance impact? My
understanding was that x86 needed to have 2M granularity due to its page
table layouts.

Why the 64M zones instead of just 16K granularity across the entire low
512M?

> >
> > KERNELBASE
> >
> >     |-->   64M   <--|
> >     |               |
> >     +---------------+    +----------------+---------------+
> >     |               |....|    |kernel|    |               |
> >     +---------------+    +----------------+---------------+
> >     |                         |
> >     |----->   offset    <-----|
> >
> >                               kimage_vaddr
> >
> > We also check if we will overlap with some areas like the dtb area, the
> > initrd area or the crashkernel area. If we cannot find a proper area,
> > KASLR will be disabled and the kernel will boot from the original
> > KERNELBASE.
> >
> > Jason Yan (10):
> > powerpc: unify definition of M_IF_NEEDED
> > powerpc: move memstart_addr and kernstart_addr to init-common.c
> > powerpc: introduce kimage_vaddr to store the kernel base
> > powerpc/fsl_booke/32: introduce create_tlb_entry() helper
> > powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
> > powerpc/fsl_booke/32: implement KASLR infrastructure
> > powerpc/fsl_booke/32: randomize the kernel image offset
> > powerpc/fsl_booke/kaslr: clear the original kernel if randomized
> > powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
> > powerpc/fsl_booke/kaslr: dump out kernel offset information on panic

Is there anything planned for other fixed-location things, like x86's
CONFIG_RANDOMIZE_MEMORY?

--
Kees Cook

2019-07-26 07:06:09

by Diana Madalina Craciun

Subject: Re: [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32

Hi Jason,

I briefly tested this yesterday on a P4080 board and did not see any
issues. I do not have much expertise on KASLR, but I will take a look
over the code.

Regards,
Diana

On 7/25/2019 10:16 AM, Jason Yan wrote:
> Hi all, any comments?
>
>
> On 2019/7/17 16:06, Jason Yan wrote:
>> This series implements KASLR for powerpc/fsl_booke/32, as a security
>> feature that deters exploit attempts relying on knowledge of the location
>> of kernel internals.
>>
>> [...]
>


2019-07-26 07:21:51

by Jason Yan

Subject: Re: [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32



On 2019/7/26 3:58, Kees Cook wrote:
> On Thu, Jul 25, 2019 at 03:16:28PM +0800, Jason Yan wrote:
>> Hi all, any comments?
>
> I'm a fan of it, but I don't know ppc internals well enough to sanely
> review the code. :) Some comments below on design...
>

Hi Kees, thanks for your comments.

>>
>>
>> On 2019/7/17 16:06, Jason Yan wrote:
>>> This series implements KASLR for powerpc/fsl_booke/32, as a security
>>> feature that deters exploit attempts relying on knowledge of the location
>>> of kernel internals.
>>>
>>> Since CONFIG_RELOCATABLE is already supported, all we need to do is
>>> map or copy the kernel to a proper place and relocate it. Freescale Book-E
>>> parts expect lowmem to be mapped by fixed TLB entries (TLB1). The TLB1
>>> entries are not suitable for mapping the kernel directly in a randomized
>>> region, so we choose to copy the kernel to a proper place and restart to
>>> relocate.
>>>
>>> Entropy is derived from the banner and timer base, which will change every
>>> build and boot. This is not entirely safe, so additionally the bootloader
>>> may pass entropy via the /chosen/kaslr-seed node in the device tree.
>
> Good: adding kaslr-seed is a good step here. Are there any x86-like
> RDRAND or RDTSC to use? (Or maybe the timer base here is similar to x86
> RDTSC?)
>

Yes, the time base is similar to RDTSC here.
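
For reference, on 32-bit Book-E the 64-bit time base is read as two SPR
halves with a retry loop, roughly as the kernel's get_tb() does:

	u32 hi, lo, chk;

	do {
		hi  = mfspr(SPRN_TBRU);	/* upper half */
		lo  = mfspr(SPRN_TBRL);	/* lower half */
		chk = mfspr(SPRN_TBRU);	/* re-read to detect a carry */
	} while (hi != chk);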

>>>
>>> We will use the first 512M of the low memory to randomize the kernel
>>> image. The memory will be split into 64M zones. We will use the lower 8
>>> bits of the entropy to decide the index of the 64M zone. Then we choose
>>> a 16K-aligned offset inside the 64M zone to put the kernel in.
>
> Does this 16K granularity have any page table performance impact? My
> understanding was that x86 needed to have 2M granularity due to its page
> table layouts.
>

The fsl booke TLB1 covers the whole low memory. AFAIK, there is no page
table performance impact. But if anyone knows of any regressions, please
let me know.

> Why the 64M zones instead of just 16K granularity across the entire low
> 512M?
>

The boot code only maps one 64M zone at early start. If the kernel
crosses two 64M zones, we need to map two 64M zones. Keeping the kernel
in one 64M zone saves a lot of complex code.

>>>
>>> KERNELBASE
>>>
>>>     |-->   64M   <--|
>>>     |               |
>>>     +---------------+    +----------------+---------------+
>>>     |               |....|    |kernel|    |               |
>>>     +---------------+    +----------------+---------------+
>>>     |                         |
>>>     |----->   offset    <-----|
>>>
>>>                               kimage_vaddr
>>>
>>> We also check if we will overlap with some areas like the dtb area, the
>>> initrd area or the crashkernel area. If we cannot find a proper area,
>>> KASLR will be disabled and the kernel will boot from the original
>>> KERNELBASE.
>>>
>>> Jason Yan (10):
>>> powerpc: unify definition of M_IF_NEEDED
>>> powerpc: move memstart_addr and kernstart_addr to init-common.c
>>> powerpc: introduce kimage_vaddr to store the kernel base
>>> powerpc/fsl_booke/32: introduce create_tlb_entry() helper
>>> powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
>>> powerpc/fsl_booke/32: implement KASLR infrastructure
>>> powerpc/fsl_booke/32: randomize the kernel image offset
>>> powerpc/fsl_booke/kaslr: clear the original kernel if randomized
>>> powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
>>> powerpc/fsl_booke/kaslr: dump out kernel offset information on panic
>
> Is there anything planned for other fixed-location things, like x86's
> CONFIG_RANDOMIZE_MEMORY?
>

Yes, if this feature can be accepted, I will start to work on
powerpc64 KASLR and other things like CONFIG_RANDOMIZE_MEMORY.


2019-07-26 07:27:55

by Jason Yan

Subject: Re: [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32



On 2019/7/26 15:04, Diana Madalina Craciun wrote:
> Hi Jason,
>
> I briefly tested this yesterday on a P4080 board and did not see any
> issues. I do not have much expertise on KASLR, but I will take a look
> over the code.
>

Hi Diana, thanks. Looking forward to your suggestions.

> Regards,
> Diana
>
> On 7/25/2019 10:16 AM, Jason Yan wrote:
>> Hi all, any comments?
>>
>>
>> On 2019/7/17 16:06, Jason Yan wrote:
>>> This series implements KASLR for powerpc/fsl_booke/32, as a security
>>> feature that deters exploit attempts relying on knowledge of the location
>>> of kernel internals.
>>>
>>> [...]


2019-07-26 21:47:02

by Kees Cook

Subject: Re: [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32

On Fri, Jul 26, 2019 at 03:20:26PM +0800, Jason Yan wrote:
> The boot code only maps one 64M zone at early start. If the kernel crosses
> two 64M zones, we need to map two 64M zones. Keeping the kernel in one 64M
> zone saves a lot of complex code.

Ah-ha. Gotcha. Thanks for the clarification.

> Yes, if this feature can be accepted, I will start to work on powerpc64
> KASLR and other things like CONFIG_RANDOMIZE_MEMORY.

Awesome. :)

--
Kees Cook

2019-07-29 11:07:21

by Christophe Leroy

Subject: Re: [RFC PATCH 04/10] powerpc/fsl_booke/32: introduce create_tlb_entry() helper



On 17/07/2019 at 10:06, Jason Yan wrote:
> Add a new helper create_tlb_entry() to create a tlb entry from the virtual
> and physical addresses. This is a preparation to support booting the kernel
> at a randomized address.
>
> Signed-off-by: Jason Yan <[email protected]>
> Cc: Diana Craciun <[email protected]>
> Cc: Michael Ellerman <[email protected]>
> Cc: Christophe Leroy <[email protected]>
> Cc: Benjamin Herrenschmidt <[email protected]>
> Cc: Paul Mackerras <[email protected]>
> Cc: Nicholas Piggin <[email protected]>
> Cc: Kees Cook <[email protected]>
> ---
> arch/powerpc/kernel/head_fsl_booke.S | 30 ++++++++++++++++++++++++++++
> arch/powerpc/mm/mmu_decl.h | 1 +
> 2 files changed, 31 insertions(+)
>
> diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
> index adf0505dbe02..a57d44638031 100644
> --- a/arch/powerpc/kernel/head_fsl_booke.S
> +++ b/arch/powerpc/kernel/head_fsl_booke.S
> @@ -1114,6 +1114,36 @@ __secondary_hold_acknowledge:
> .long -1
> #endif
>
> +/*
> + * Create a 64M tlb by address and entry
> + * r3/r4 - physical address
> + * r5 - virtual address
> + * r6 - entry
> + */
> +_GLOBAL(create_tlb_entry)
> + lis r7,0x1000 /* Set MAS0(TLBSEL) = 1 */
> + rlwimi r7,r6,16,4,15 /* Setup MAS0 = TLBSEL | ESEL(r6) */
> + mtspr SPRN_MAS0,r7 /* Write MAS0 */
> +
> + lis r6,(MAS1_VALID|MAS1_IPROT)@h
> + ori r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
> + mtspr SPRN_MAS1,r6 /* Write MAS1 */
> +
> + lis r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@h
> + ori r6,r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@l
> + and r6,r6,r5
> + ori r6,r6,MAS2_M@l
> + mtspr SPRN_MAS2,r6 /* Write MAS2(EPN) */
> +
> + mr r8,r4
> + ori r8,r8,(MAS3_SW|MAS3_SR|MAS3_SX)

Could drop the mr r8, r4 and do:

ori r8,r4,(MAS3_SW|MAS3_SR|MAS3_SX)

> + mtspr SPRN_MAS3,r8 /* Write MAS3(RPN) */
> +
> + tlbwe /* Write TLB */
> + isync
> + sync
> + blr
> +
> /*
> * Create a tlb entry with the same effective and physical address as
> * the tlb entry used by the current running code. But set the TS to 1.
> diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
> index 32c1a191c28a..d7737cf97cee 100644
> --- a/arch/powerpc/mm/mmu_decl.h
> +++ b/arch/powerpc/mm/mmu_decl.h
> @@ -142,6 +142,7 @@ extern unsigned long calc_cam_sz(unsigned long ram, unsigned long virt,
> extern void adjust_total_lowmem(void);
> extern int switch_to_as1(void);
> extern void restore_to_as0(int esel, int offset, void *dt_ptr, int bootcpu);
> +extern void create_tlb_entry(phys_addr_t phys, unsigned long virt, int entry);

Please please do not add new declarations with the useless 'extern'
keyword. See checkpatch report:
https://openpower.xyz/job/snowpatch/job/snowpatch-linux-checkpatch/8124//artifact/linux/checkpatch.log

> #endif
> extern void loadcam_entry(unsigned int index);
> extern void loadcam_multi(int first_idx, int num, int tmp_idx);
>

2019-07-29 11:17:54

by Christophe Leroy

Subject: Re: [RFC PATCH 06/10] powerpc/fsl_booke/32: implement KASLR infrastructure



On 17/07/2019 at 10:06, Jason Yan wrote:
> This patch adds support for booting the kernel from places other than
> KERNELBASE. Since CONFIG_RELOCATABLE is already supported, all we need
> to do is map or copy the kernel to a proper place and relocate it.
> Freescale Book-E parts expect lowmem to be mapped by fixed TLB entries
> (TLB1). The TLB1 entries are not suitable for mapping the kernel
> directly in a randomized region, so we choose to copy the kernel to a
> proper place and restart to relocate.
>
> The offset of the kernel is not randomized yet (a fixed 64M is set). We
> will randomize it in the next patch.
>
> Signed-off-by: Jason Yan <[email protected]>
> Cc: Diana Craciun <[email protected]>
> Cc: Michael Ellerman <[email protected]>
> Cc: Christophe Leroy <[email protected]>
> Cc: Benjamin Herrenschmidt <[email protected]>
> Cc: Paul Mackerras <[email protected]>
> Cc: Nicholas Piggin <[email protected]>
> Cc: Kees Cook <[email protected]>
> ---
> arch/powerpc/Kconfig | 11 +++
> arch/powerpc/kernel/Makefile | 1 +
> arch/powerpc/kernel/early_32.c | 2 +-
> arch/powerpc/kernel/fsl_booke_entry_mapping.S | 13 ++-
> arch/powerpc/kernel/head_fsl_booke.S | 15 +++-
> arch/powerpc/kernel/kaslr_booke.c | 83 +++++++++++++++++++
> arch/powerpc/mm/mmu_decl.h | 6 ++
> arch/powerpc/mm/nohash/fsl_booke.c | 7 +-
> 8 files changed, 125 insertions(+), 13 deletions(-)
> create mode 100644 arch/powerpc/kernel/kaslr_booke.c
>

[...]

> diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
> index dae8e9177574..754ae1e69f92 100644
> --- a/arch/powerpc/mm/mmu_decl.h
> +++ b/arch/powerpc/mm/mmu_decl.h
> @@ -148,6 +148,12 @@ extern void reloc_kernel_entry(void *fdt, int addr);
> extern void loadcam_entry(unsigned int index);
> extern void loadcam_multi(int first_idx, int num, int tmp_idx);
>
> +#ifdef CONFIG_RANDOMIZE_BASE
> +extern void kaslr_early_init(void *dt_ptr, phys_addr_t size);

No superfluous 'extern' keyword.

Christophe

> +#else
> +static inline void kaslr_early_init(void *dt_ptr, phys_addr_t size) {}
> +#endif
> +
> struct tlbcam {
> u32 MAS0;
> u32 MAS1;

2019-07-29 11:42:01

by Christophe Leroy

Subject: Re: [RFC PATCH 09/10] powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter



On 17/07/2019 at 10:06, Jason Yan wrote:
> One may want to disable KASLR at boot time, so provide a cmdline parameter
> 'nokaslr' to support this.
>
> Signed-off-by: Jason Yan <[email protected]>
> Cc: Diana Craciun <[email protected]>
> Cc: Michael Ellerman <[email protected]>
> Cc: Christophe Leroy <[email protected]>
> Cc: Benjamin Herrenschmidt <[email protected]>
> Cc: Paul Mackerras <[email protected]>
> Cc: Nicholas Piggin <[email protected]>
> Cc: Kees Cook <[email protected]>
> ---
> arch/powerpc/kernel/kaslr_booke.c | 14 ++++++++++++++
> 1 file changed, 14 insertions(+)
>
> diff --git a/arch/powerpc/kernel/kaslr_booke.c b/arch/powerpc/kernel/kaslr_booke.c
> index 00339c05879f..e65a5d9d2ff1 100644
> --- a/arch/powerpc/kernel/kaslr_booke.c
> +++ b/arch/powerpc/kernel/kaslr_booke.c
> @@ -373,6 +373,18 @@ static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size
> return kaslr_offset;
> }
>
> +static inline __init bool kaslr_disabled(void)
> +{
> + char *str;
> +
> + str = strstr(early_command_line, "nokaslr");

Why use early_command_line instead of boot_command_line?


> + if ((str == early_command_line) ||
> + (str > early_command_line && *(str - 1) == ' '))

Is that stuff really needed?

Why not just:

return strstr(early_command_line, "nokaslr") != NULL;

> + return true;
> +
> + return false;
> +}


> +
> /*
> * To see if we need to relocate the kernel to a random offset
> * void *dt_ptr - address of the device tree
> @@ -388,6 +400,8 @@ notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
> kernel_sz = (unsigned long)_end - KERNELBASE;
>
> kaslr_get_cmdline(dt_ptr);
> + if (kaslr_disabled())
> + return;
>
> offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
>
>

Christophe

2019-07-29 14:33:22

by Diana Madalina Craciun

Subject: Re: [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32

Reviewed-by: Diana Craciun <[email protected]>
Tested-by: Diana Craciun <[email protected]>


On 7/17/2019 10:49 AM, Jason Yan wrote:
> This series implements KASLR for powerpc/fsl_booke/32, as a security
> feature that deters exploit attempts relying on knowledge of the location
> of kernel internals.
>
> [...]
>

2019-07-29 17:26:41

by Jason Yan

Subject: Re: [RFC PATCH 04/10] powerpc/fsl_booke/32: introduce create_tlb_entry() helper


On 2019/7/29 19:05, Christophe Leroy wrote:
>
>
> On 17/07/2019 at 10:06, Jason Yan wrote:
>> Add a new helper create_tlb_entry() to create a tlb entry from the virtual
>> and physical addresses. This is a preparation to support booting the kernel
>> at a randomized address.
>>
>> Signed-off-by: Jason Yan <[email protected]>
>> Cc: Diana Craciun <[email protected]>
>> Cc: Michael Ellerman <[email protected]>
>> Cc: Christophe Leroy <[email protected]>
>> Cc: Benjamin Herrenschmidt <[email protected]>
>> Cc: Paul Mackerras <[email protected]>
>> Cc: Nicholas Piggin <[email protected]>
>> Cc: Kees Cook <[email protected]>
>> ---
>>   arch/powerpc/kernel/head_fsl_booke.S | 30 ++++++++++++++++++++++++++++
>>   arch/powerpc/mm/mmu_decl.h           |  1 +
>>   2 files changed, 31 insertions(+)
>>
>> diff --git a/arch/powerpc/kernel/head_fsl_booke.S
>> b/arch/powerpc/kernel/head_fsl_booke.S
>> index adf0505dbe02..a57d44638031 100644
>> --- a/arch/powerpc/kernel/head_fsl_booke.S
>> +++ b/arch/powerpc/kernel/head_fsl_booke.S
>> @@ -1114,6 +1114,36 @@ __secondary_hold_acknowledge:
>>       .long    -1
>>   #endif
>> +/*
>> + * Create a 64M tlb by address and entry
>> + * r3/r4 - physical address
>> + * r5 - virtual address
>> + * r6 - entry
>> + */
>> +_GLOBAL(create_tlb_entry)
>> +    lis     r7,0x1000               /* Set MAS0(TLBSEL) = 1 */
>> +    rlwimi  r7,r6,16,4,15           /* Setup MAS0 = TLBSEL | ESEL(r6) */
>> +    mtspr   SPRN_MAS0,r7            /* Write MAS0 */
>> +
>> +    lis     r6,(MAS1_VALID|MAS1_IPROT)@h
>> +    ori     r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
>> +    mtspr   SPRN_MAS1,r6            /* Write MAS1 */
>> +
>> +    lis     r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@h
>> +    ori     r6,r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@l
>> +    and     r6,r6,r5
>> +    ori    r6,r6,MAS2_M@l
>> +    mtspr   SPRN_MAS2,r6            /* Write MAS2(EPN) */
>> +
>> +    mr      r8,r4
>> +    ori     r8,r8,(MAS3_SW|MAS3_SR|MAS3_SX)
>
> Could drop the mr r8, r4 and do:
>
> ori     r8,r4,(MAS3_SW|MAS3_SR|MAS3_SX)
>

OK, thanks for the suggestion.

>> +    mtspr   SPRN_MAS3,r8            /* Write MAS3(RPN) */
>> +
>> +    tlbwe                           /* Write TLB */
>> +    isync
>> +    sync
>> +    blr
>> +
>>   /*
>>    * Create a tlb entry with the same effective and physical address as
>>    * the tlb entry used by the current running code. But set the TS to 1.
>> diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
>> index 32c1a191c28a..d7737cf97cee 100644
>> --- a/arch/powerpc/mm/mmu_decl.h
>> +++ b/arch/powerpc/mm/mmu_decl.h
>> @@ -142,6 +142,7 @@ extern unsigned long calc_cam_sz(unsigned long
>> ram, unsigned long virt,
>>   extern void adjust_total_lowmem(void);
>>   extern int switch_to_as1(void);
>>   extern void restore_to_as0(int esel, int offset, void *dt_ptr, int
>> bootcpu);
>> +extern void create_tlb_entry(phys_addr_t phys, unsigned long virt,
>> int entry);
>
> Please please do not add new declarations with the useless 'extern'
> keyword. See checkpatch report:
> https://openpower.xyz/job/snowpatch/job/snowpatch-linux-checkpatch/8124//artifact/linux/checkpatch.log
>

Will drop all useless 'extern' in this and other patches, thanks.

>
>>   #endif
>>   extern void loadcam_entry(unsigned int index);
>>   extern void loadcam_multi(int first_idx, int num, int tmp_idx);
>>
>
>

2019-07-29 17:44:12

by Jason Yan

Subject: Re: [RFC PATCH 09/10] powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter



On 2019/7/29 19:38, Christophe Leroy wrote:
>
>
> On 17/07/2019 at 10:06, Jason Yan wrote:
>> One may want to disable KASLR at boot time, so provide a cmdline parameter
>> 'nokaslr' to support this.
>>
>> Signed-off-by: Jason Yan <[email protected]>
>> Cc: Diana Craciun <[email protected]>
>> Cc: Michael Ellerman <[email protected]>
>> Cc: Christophe Leroy <[email protected]>
>> Cc: Benjamin Herrenschmidt <[email protected]>
>> Cc: Paul Mackerras <[email protected]>
>> Cc: Nicholas Piggin <[email protected]>
>> Cc: Kees Cook <[email protected]>
>> ---
>>   arch/powerpc/kernel/kaslr_booke.c | 14 ++++++++++++++
>>   1 file changed, 14 insertions(+)
>>
>> diff --git a/arch/powerpc/kernel/kaslr_booke.c
>> b/arch/powerpc/kernel/kaslr_booke.c
>> index 00339c05879f..e65a5d9d2ff1 100644
>> --- a/arch/powerpc/kernel/kaslr_booke.c
>> +++ b/arch/powerpc/kernel/kaslr_booke.c
>> @@ -373,6 +373,18 @@ static unsigned long __init
>> kaslr_choose_location(void *dt_ptr, phys_addr_t size
>>       return kaslr_offset;
>>   }
>> +static inline __init bool kaslr_disabled(void)
>> +{
>> +    char *str;
>> +
>> +    str = strstr(early_command_line, "nokaslr");
>
> Why use early_command_line instead of boot_command_line?
>

Will switch to boot_command_line.

>
>> +    if ((str == early_command_line) ||
>> +        (str > early_command_line && *(str - 1) == ' '))
>
> Is that stuff really needed?
>
> Why not just:
>
> return strstr(early_command_line, "nokaslr") != NULL;
>

This code is derived from other arches such as arm64 and mips. It's trying
to make sure that 'nokaslr' is a separate word and not part of another word
such as 'abcnokaslr'.
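
For comparison, a fully strict token match would also check the character
after the match, along these lines (a sketch with a hypothetical helper
name):

	static bool __init cmdline_has_nokaslr(const char *cmdline)
	{
		const size_t len = strlen("nokaslr");
		const char *str = cmdline;

		while ((str = strstr(str, "nokaslr")) != NULL) {
			/* Token must start the line or follow a space,
			 * and be followed by a space or the end. */
			if ((str == cmdline || str[-1] == ' ') &&
			    (str[len] == ' ' || str[len] == '\0'))
				return true;
			str += len;
		}
		return false;
	}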

>> +        return true;
>> +
>> +    return false;
>> +}
>
>
>> +
>>   /*
>>    * To see if we need to relocate the kernel to a random offset
>>    * void *dt_ptr - address of the device tree
>> @@ -388,6 +400,8 @@ notrace void __init kaslr_early_init(void *dt_ptr,
>> phys_addr_t size)
>>       kernel_sz = (unsigned long)_end - KERNELBASE;
>>       kaslr_get_cmdline(dt_ptr);
>> +    if (kaslr_disabled())
>> +        return;
>>       offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
>>
>
> Christophe
>
>