2020-03-06 06:42:29

by Jason Yan

Subject: [PATCH v4 0/6] implement KASLR for powerpc/fsl_booke/64

This is an attempt to implement KASLR for Freescale BookE64, based on
my earlier implementation for Freescale BookE32:
https://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=131718&state=*

The implementation for Freescale BookE64 is similar to BookE32. One
difference is that Freescale BookE64 sets up a 1G TLB mapping during
boot. Another difference is that ppc64 needs the kernel to be
64K-aligned. So we can randomize the kernel inside this 1G mapping and
keep it 64K-aligned, which saves the code to create another TLB mapping
at early boot. The disadvantage is that we only have about 1G/64K = 16384
slots to put the kernel in.

                    KERNELBASE

        64K                     |--> kernel <--|
         |                      |              |
        +--+--+--+    +--+--+--+--+--+--+--+--+--+    +--+--+
        |  |  |  |....|  |  |  |  |  |  |  |  |  |....|  |  |
        +--+--+--+    +--+--+--+--+--+--+--+--+--+    +--+--+
        |                         |                     1G
        |----->   offset    <-----|

                          kernstart_virt_addr
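
For the 64-bit case the slot selection boils down to the following (a
condensed sketch of what patch 3/6 does in kaslr_legal_offset();
regions.kernel_size, memstart_addr and get_usable_address() are the
existing names in kaslr_booke.c):

	/* Decide the kernel offset inside the 1G mapping, 64K-aligned */
	offset = random % (SZ_1G - regions.kernel_size);
	offset = round_down(offset, SZ_64K);

	start = memstart_addr;
	offset = memstart_addr + offset;
	koffset = get_usable_address(dt_ptr, start, offset);

which is where the 1G/64K = 16384 slot count above comes from.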

I'm not sure whether this number of slots is enough or whether the design
has any defects. If you have better ideas, I would be happy to hear them.

Thank you all.

v3->v4:
Do not define __kaslr_offset as a fixed symbol. Reference __run_at_load and
__kaslr_offset by symbol instead of magic offsets.
Use IS_ENABLED(CONFIG_PPC32) instead of #ifdef CONFIG_PPC32.
Change kaslr-booke32 to kaslr-booke in index.rst
Switch some instructions to 64-bit.
v2->v3:
Fix build error when KASLR is disabled.
v1->v2:
Add __kaslr_offset for the secondary cpu boot up.

Jason Yan (6):
powerpc/fsl_booke/kaslr: refactor kaslr_legal_offset() and
kaslr_early_init()
powerpc/fsl_booke/64: introduce reloc_kernel_entry() helper
powerpc/fsl_booke/64: implement KASLR for fsl_booke64
powerpc/fsl_booke/64: do not clear the BSS for the second pass
powerpc/fsl_booke/64: clear the original kernel if randomized
powerpc/fsl_booke/kaslr: rename kaslr-booke32.rst to kaslr-booke.rst
and add 64bit part

Documentation/powerpc/index.rst | 2 +-
.../{kaslr-booke32.rst => kaslr-booke.rst} | 35 +++++++-
arch/powerpc/Kconfig | 2 +-
arch/powerpc/kernel/exceptions-64e.S | 23 +++++
arch/powerpc/kernel/head_64.S | 13 +++
arch/powerpc/kernel/setup_64.c | 3 +
arch/powerpc/mm/mmu_decl.h | 23 ++---
arch/powerpc/mm/nohash/kaslr_booke.c | 88 +++++++++++++------
8 files changed, 144 insertions(+), 45 deletions(-)
rename Documentation/powerpc/{kaslr-booke32.rst => kaslr-booke.rst} (59%)

--
2.17.2


2020-03-06 06:42:32

by Jason Yan

Subject: [PATCH v4 1/6] powerpc/fsl_booke/kaslr: refactor kaslr_legal_offset() and kaslr_early_init()

Refactor some code in kaslr_legal_offset() and kaslr_early_init(). No
functional change. This is a preparation for KASLR on fsl_booke64.

Signed-off-by: Jason Yan <[email protected]>
Cc: Scott Wood <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
---
arch/powerpc/mm/nohash/kaslr_booke.c | 34 +++++++++++++++-------------
1 file changed, 18 insertions(+), 16 deletions(-)

diff --git a/arch/powerpc/mm/nohash/kaslr_booke.c b/arch/powerpc/mm/nohash/kaslr_booke.c
index 4a75f2d9bf0e..6ebff31fefcc 100644
--- a/arch/powerpc/mm/nohash/kaslr_booke.c
+++ b/arch/powerpc/mm/nohash/kaslr_booke.c
@@ -25,6 +25,7 @@ struct regions {
unsigned long pa_start;
unsigned long pa_end;
unsigned long kernel_size;
+ unsigned long linear_sz;
unsigned long dtb_start;
unsigned long dtb_end;
unsigned long initrd_start;
@@ -260,11 +261,23 @@ static __init void get_cell_sizes(const void *fdt, int node, int *addr_cells,
*size_cells = fdt32_to_cpu(*prop);
}

-static unsigned long __init kaslr_legal_offset(void *dt_ptr, unsigned long index,
- unsigned long offset)
+static unsigned long __init kaslr_legal_offset(void *dt_ptr, unsigned long random)
{
unsigned long koffset = 0;
unsigned long start;
+ unsigned long index;
+ unsigned long offset;
+
+ /*
+ * Decide which 64M we want to start
+ * Only use the low 8 bits of the random seed
+ */
+ index = random & 0xFF;
+ index %= regions.linear_sz / SZ_64M;
+
+ /* Decide offset inside 64M */
+ offset = random % (SZ_64M - regions.kernel_size);
+ offset = round_down(offset, SZ_16K);

while ((long)index >= 0) {
offset = memstart_addr + index * SZ_64M + offset;
@@ -289,10 +302,9 @@ static inline __init bool kaslr_disabled(void)
static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size,
unsigned long kernel_sz)
{
- unsigned long offset, random;
+ unsigned long random;
unsigned long ram, linear_sz;
u64 seed;
- unsigned long index;

kaslr_get_cmdline(dt_ptr);
if (kaslr_disabled())
@@ -333,22 +345,12 @@ static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size
regions.dtb_start = __pa(dt_ptr);
regions.dtb_end = __pa(dt_ptr) + fdt_totalsize(dt_ptr);
regions.kernel_size = kernel_sz;
+ regions.linear_sz = linear_sz;

get_initrd_range(dt_ptr);
get_crash_kernel(dt_ptr, ram);

- /*
- * Decide which 64M we want to start
- * Only use the low 8 bits of the random seed
- */
- index = random & 0xFF;
- index %= linear_sz / SZ_64M;
-
- /* Decide offset inside 64M */
- offset = random % (SZ_64M - kernel_sz);
- offset = round_down(offset, SZ_16K);
-
- return kaslr_legal_offset(dt_ptr, index, offset);
+ return kaslr_legal_offset(dt_ptr, random);
}

/*
--
2.17.2

2020-03-06 06:42:49

by Jason Yan

Subject: [PATCH v4 5/6] powerpc/fsl_booke/64: clear the original kernel if randomized

The original kernel image still exists in memory after we relocate to the
randomized location; clear it now.
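
For reference, kaslr_late_init() already exists for the 32-bit case and
does roughly the following (a sketch of the current kaslr_booke.c code,
not part of this diff):

	void __init kaslr_late_init(void)
	{
		/* If randomized, clear the original kernel image */
		if (kernstart_virt_addr != KERNELBASE) {
			unsigned long kernel_sz;

			kernel_sz = (unsigned long)_end - kernstart_virt_addr;
			memzero_explicit((void *)KERNELBASE, kernel_sz);
		}
	}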

Signed-off-by: Jason Yan <[email protected]>
Cc: Scott Wood <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
---
arch/powerpc/mm/nohash/kaslr_booke.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/nohash/kaslr_booke.c b/arch/powerpc/mm/nohash/kaslr_booke.c
index bf60f956dc91..f7ab97aa2127 100644
--- a/arch/powerpc/mm/nohash/kaslr_booke.c
+++ b/arch/powerpc/mm/nohash/kaslr_booke.c
@@ -379,8 +379,10 @@ notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
unsigned long kernel_sz;

if (IS_ENABLED(CONFIG_PPC64)) {
- if (__run_at_load == 1)
+ if (__run_at_load == 1) {
+ kaslr_late_init();
return;
+ }

/* Setup flat device-tree pointer */
initial_boot_params = dt_ptr;
--
2.17.2

2020-03-06 06:43:41

by Jason Yan

Subject: [PATCH v4 4/6] powerpc/fsl_booke/64: do not clear the BSS for the second pass

The BSS section has already been cleared out in the first pass, so there
is no need to clear it again. This saves some time when booting with
KASLR enabled.
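
In C terms the check added below is equivalent to something like this
(illustrative only; the real change is the asm hunk, and clear_bss() here
just stands in for the clearing loop after the 4f label):

	/* Only the first pass still runs at the link address KERNELBASE;
	 * the randomized second pass does not, so skip clearing the BSS. */
	if (kernstart_virt_addr == KERNELBASE)
		clear_bss();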

Signed-off-by: Jason Yan <[email protected]>
Cc: Scott Wood <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
---
arch/powerpc/kernel/head_64.S | 7 +++++++
1 file changed, 7 insertions(+)

diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index 454129a3c259..9354c292b709 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -913,6 +913,13 @@ start_here_multiplatform:
bl relative_toc
tovirt(r2,r2)

+ /* Do not clear the BSS for the second pass if randomized */
+ LOAD_REG_ADDR(r3, kernstart_virt_addr)
+ ld r3,0(r3)
+ LOAD_REG_IMMEDIATE(r4, KERNELBASE)
+ cmpd r3,r4
+ bne 4f
+
/* Clear out the BSS. It may have been done in prom_init,
* already but that's irrelevant since prom_init will soon
* be detached from the kernel completely. Besides, we need
--
2.17.2

2020-03-06 06:44:10

by Jason Yan

Subject: [PATCH v4 2/6] powerpc/fsl_booke/64: introduce reloc_kernel_entry() helper

Like the 32-bit code, introduce a reloc_kernel_entry() helper to prepare
for the 64-bit KASLR version. Also move the C declaration of this function
out of CONFIG_PPC32 and use long instead of int for the parameter 'addr'.
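
The helper mirrors the 32-bit one; the expected use, taken roughly from
how the existing 32-bit kaslr_early_init() finishes (shown here only for
context, not part of this diff), is:

	/* Copy the kernel to its new location and run from there */
	memcpy((void *)kernstart_virt_addr, (void *)_stext, kernel_sz);
	flush_icache_range(kernstart_virt_addr, kernstart_virt_addr + kernel_sz);
	reloc_kernel_entry(dt_ptr, kernstart_virt_addr);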

Signed-off-by: Jason Yan <[email protected]>
Cc: Scott Wood <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
Reviewed-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/exceptions-64e.S | 13 +++++++++++++
arch/powerpc/mm/mmu_decl.h | 3 ++-
2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index e4076e3c072d..1b9b174bee86 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -1679,3 +1679,16 @@ _GLOBAL(setup_ehv_ivors)
_GLOBAL(setup_lrat_ivor)
SET_IVOR(42, 0x340) /* LRAT Error */
blr
+
+/*
+ * Return to the start of the relocated kernel and run again
+ * r3 - virtual address of fdt
+ * r4 - entry of the kernel
+ */
+_GLOBAL(reloc_kernel_entry)
+ mfmsr r7
+ rlwinm r7, r7, 0, ~(MSR_IS | MSR_DS)
+
+ mtspr SPRN_SRR0,r4
+ mtspr SPRN_SRR1,r7
+ rfi
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index 7097e07a209a..605129b5ccdf 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -140,9 +140,10 @@ extern void adjust_total_lowmem(void);
extern int switch_to_as1(void);
extern void restore_to_as0(int esel, int offset, void *dt_ptr, int bootcpu);
void create_kaslr_tlb_entry(int entry, unsigned long virt, phys_addr_t phys);
-void reloc_kernel_entry(void *fdt, int addr);
extern int is_second_reloc;
#endif
+
+void reloc_kernel_entry(void *fdt, long addr);
extern void loadcam_entry(unsigned int index);
extern void loadcam_multi(int first_idx, int num, int tmp_idx);

--
2.17.2

2020-03-06 06:44:13

by Jason Yan

Subject: [PATCH v4 3/6] powerpc/fsl_booke/64: implement KASLR for fsl_booke64

The implementation for Freescale BookE64 is similar to BookE32. One
difference is that Freescale BookE64 sets up a 1G TLB mapping during
boot. Another difference is that ppc64 needs the kernel to be
64K-aligned. So we can randomize the kernel inside this 1G mapping and
keep it 64K-aligned, which saves the code to create another TLB mapping
at early boot. The disadvantage is that we only have about 1G/64K = 16384
slots to put the kernel in.

To support secondary cpu boot-up, a variable __kaslr_offset was added in
the first_256B section. This helps the secondary cpus get the kaslr
offset before the 1:1 mapping has been set up.
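
In short (a condensed view of the changes below): once the location is
chosen, the boot CPU publishes the offset next to __run_at_load, and
book3e_secondary_core_init() picks it up via LOAD_REG_ADDR_PIC() and adds
it to its return address after setting up the TLB:

	/* boot cpu, kaslr_early_init() */
	__kaslr_offset = kernstart_virt_addr - KERNELBASE;
	__run_at_load = 1;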

Signed-off-by: Jason Yan <[email protected]>
Cc: Scott Wood <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
---
arch/powerpc/Kconfig | 2 +-
arch/powerpc/kernel/exceptions-64e.S | 10 ++++
arch/powerpc/kernel/head_64.S | 6 +++
arch/powerpc/kernel/setup_64.c | 3 ++
arch/powerpc/mm/mmu_decl.h | 20 ++++----
arch/powerpc/mm/nohash/kaslr_booke.c | 72 +++++++++++++++++++---------
6 files changed, 80 insertions(+), 33 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 497b7d0b2d7e..0c76601fdd59 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -564,7 +564,7 @@ config RELOCATABLE

config RANDOMIZE_BASE
bool "Randomize the address of the kernel image"
- depends on (FSL_BOOKE && FLATMEM && PPC32)
+ depends on (PPC_FSL_BOOK3E && FLATMEM)
depends on RELOCATABLE
help
Randomizes the virtual address at which the kernel image is
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index 1b9b174bee86..260cf1f1e71c 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -1378,6 +1378,7 @@ skpinv: addi r6,r6,1 /* Increment */
1: mflr r6
addi r6,r6,(2f - 1b)
tovirt(r6,r6)
+ add r6,r6,r19
lis r7,MSR_KERNEL@h
ori r7,r7,MSR_KERNEL@l
mtspr SPRN_SRR0,r6
@@ -1400,6 +1401,7 @@ skpinv: addi r6,r6,1 /* Increment */

/* We translate LR and return */
tovirt(r8,r8)
+ add r8,r8,r19
mtlr r8
blr

@@ -1528,6 +1530,7 @@ a2_tlbinit_code_end:
*/
_GLOBAL(start_initialization_book3e)
mflr r28
+ li r19, 0

/* First, we need to setup some initial TLBs to map the kernel
* text, data and bss at PAGE_OFFSET. We don't have a real mode
@@ -1570,6 +1573,12 @@ _GLOBAL(book3e_secondary_core_init)
cmplwi r4,0
bne 2f

+ li r19, 0
+#ifdef CONFIG_RANDOMIZE_BASE
+ LOAD_REG_ADDR_PIC(r19, __kaslr_offset)
+ ld r19,0(r19)
+ rlwinm r19,r19,0,0,5
+#endif
/* Setup TLB for this core */
bl initial_tlb_book3e

@@ -1602,6 +1611,7 @@ _GLOBAL(book3e_secondary_core_init)
lis r3,PAGE_OFFSET@highest
sldi r3,r3,32
or r28,r28,r3
+ add r28,r28,r19
1: mtlr r28
blr

diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index ad79fddb974d..454129a3c259 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -104,6 +104,12 @@ __secondary_hold_acknowledge:
.8byte 0x0

#ifdef CONFIG_RELOCATABLE
+#ifdef CONFIG_RANDOMIZE_BASE
+ .globl __kaslr_offset
+__kaslr_offset:
+ .8byte 0x0
+#endif
+
/* This flag is set to 1 by a loader if the kernel should run
* at the loaded address instead of the linked address. This
* is used by kexec-tools to keep the the kdump kernel in the
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index e05e6dd67ae6..836e202dfd5b 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -67,6 +67,7 @@
#include <asm/kup.h>
#include <asm/early_ioremap.h>

+#include <mm/mmu_decl.h>
#include "setup.h"

int spinning_secondaries;
@@ -300,6 +301,8 @@ void __init early_setup(unsigned long dt_ptr)
/* Enable early debugging if any specified (see udbg.h) */
udbg_early_init();

+ kaslr_early_init(__va(dt_ptr), 0);
+
udbg_printf(" -> %s(), dt_ptr: 0x%lx\n", __func__, dt_ptr);

/*
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index 605129b5ccdf..6efbd7fd88a4 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -139,22 +139,16 @@ extern unsigned long calc_cam_sz(unsigned long ram, unsigned long virt,
extern void adjust_total_lowmem(void);
extern int switch_to_as1(void);
extern void restore_to_as0(int esel, int offset, void *dt_ptr, int bootcpu);
+#endif
void create_kaslr_tlb_entry(int entry, unsigned long virt, phys_addr_t phys);
extern int is_second_reloc;
-#endif
+extern unsigned long __kaslr_offset;
+extern unsigned int __run_at_load;

void reloc_kernel_entry(void *fdt, long addr);
extern void loadcam_entry(unsigned int index);
extern void loadcam_multi(int first_idx, int num, int tmp_idx);

-#ifdef CONFIG_RANDOMIZE_BASE
-void kaslr_early_init(void *dt_ptr, phys_addr_t size);
-void kaslr_late_init(void);
-#else
-static inline void kaslr_early_init(void *dt_ptr, phys_addr_t size) {}
-static inline void kaslr_late_init(void) {}
-#endif
-
struct tlbcam {
u32 MAS0;
u32 MAS1;
@@ -164,6 +158,14 @@ struct tlbcam {
};
#endif

+#ifdef CONFIG_RANDOMIZE_BASE
+void kaslr_early_init(void *dt_ptr, phys_addr_t size);
+void kaslr_late_init(void);
+#else
+static inline void kaslr_early_init(void *dt_ptr, phys_addr_t size) {}
+static inline void kaslr_late_init(void) {}
+#endif
+
#if defined(CONFIG_PPC_BOOK3S_32) || defined(CONFIG_FSL_BOOKE) || defined(CONFIG_PPC_8xx)
/* 6xx have BATS */
/* FSL_BOOKE have TLBCAM */
diff --git a/arch/powerpc/mm/nohash/kaslr_booke.c b/arch/powerpc/mm/nohash/kaslr_booke.c
index 6ebff31fefcc..bf60f956dc91 100644
--- a/arch/powerpc/mm/nohash/kaslr_booke.c
+++ b/arch/powerpc/mm/nohash/kaslr_booke.c
@@ -228,10 +228,11 @@ static __init unsigned long get_usable_address(const void *fdt,
unsigned long start,
unsigned long offset)
{
+ unsigned long unit = IS_ENABLED(CONFIG_PPC32) ? SZ_16K : SZ_64K;
unsigned long pa;
unsigned long pa_end;

- for (pa = offset; (long)pa > (long)start; pa -= SZ_16K) {
+ for (pa = offset; (long)pa > (long)start; pa -= unit) {
pa_end = pa + regions.kernel_size;
if (overlaps_region(fdt, pa, pa_end))
continue;
@@ -268,24 +269,34 @@ static unsigned long __init kaslr_legal_offset(void *dt_ptr, unsigned long rando
unsigned long index;
unsigned long offset;

- /*
- * Decide which 64M we want to start
- * Only use the low 8 bits of the random seed
- */
- index = random & 0xFF;
- index %= regions.linear_sz / SZ_64M;
-
- /* Decide offset inside 64M */
- offset = random % (SZ_64M - regions.kernel_size);
- offset = round_down(offset, SZ_16K);
+ if (IS_ENABLED(CONFIG_PPC32)) {
+ /*
+ * Decide which 64M we want to start
+ * Only use the low 8 bits of the random seed
+ */
+ index = random & 0xFF;
+ index %= regions.linear_sz / SZ_64M;
+
+ /* Decide offset inside 64M */
+ offset = random % (SZ_64M - regions.kernel_size);
+ offset = round_down(offset, SZ_16K);
+
+ while ((long)index >= 0) {
+ offset = memstart_addr + index * SZ_64M + offset;
+ start = memstart_addr + index * SZ_64M;
+ koffset = get_usable_address(dt_ptr, start, offset);
+ if (koffset)
+ break;
+ index--;
+ }
+ } else {
+ /* Decide kernel offset inside 1G */
+ offset = random % (SZ_1G - regions.kernel_size);
+ offset = round_down(offset, SZ_64K);

- while ((long)index >= 0) {
- offset = memstart_addr + index * SZ_64M + offset;
- start = memstart_addr + index * SZ_64M;
+ start = memstart_addr;
+ offset = memstart_addr + offset;
koffset = get_usable_address(dt_ptr, start, offset);
- if (koffset)
- break;
- index--;
}

if (koffset != 0)
@@ -325,6 +336,7 @@ static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size
else
pr_warn("KASLR: No safe seed for randomizing the kernel base.\n");

+#ifdef CONFIG_PPC32
ram = min_t(phys_addr_t, __max_low_memory, size);
ram = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM, true);
linear_sz = min_t(unsigned long, ram, SZ_512M);
@@ -332,6 +344,7 @@ static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size
/* If the linear size is smaller than 64M, do not randmize */
if (linear_sz < SZ_64M)
return 0;
+#endif

/* check for a reserved-memory node and record its cell sizes */
regions.reserved_mem = fdt_path_offset(dt_ptr, "/reserved-memory");
@@ -365,6 +378,14 @@ notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
unsigned long offset;
unsigned long kernel_sz;

+ if (IS_ENABLED(CONFIG_PPC64)) {
+ if (__run_at_load == 1)
+ return;
+
+ /* Setup flat device-tree pointer */
+ initial_boot_params = dt_ptr;
+ }
+
kernel_sz = (unsigned long)_end - (unsigned long)_stext;

offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
@@ -374,14 +395,19 @@ notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
kernstart_virt_addr += offset;
kernstart_addr += offset;

- is_second_reloc = 1;
+ if (IS_ENABLED(CONFIG_PPC32)) {
+ is_second_reloc = 1;

- if (offset >= SZ_64M) {
- tlb_virt = round_down(kernstart_virt_addr, SZ_64M);
- tlb_phys = round_down(kernstart_addr, SZ_64M);
+ if (offset >= SZ_64M) {
+ tlb_virt = round_down(kernstart_virt_addr, SZ_64M);
+ tlb_phys = round_down(kernstart_addr, SZ_64M);

- /* Create kernel map to relocate in */
- create_kaslr_tlb_entry(1, tlb_virt, tlb_phys);
+ /* Create kernel map to relocate in */
+ create_kaslr_tlb_entry(1, tlb_virt, tlb_phys);
+ }
+ } else {
+ __kaslr_offset = kernstart_virt_addr - KERNELBASE;
+ __run_at_load = 1;
}

/* Copy the kernel to it's new location and run */
--
2.17.2

2020-03-06 06:44:17

by Jason Yan

Subject: [PATCH v4 6/6] powerpc/fsl_booke/kaslr: rename kaslr-booke32.rst to kaslr-booke.rst and add 64bit part

Now that we support both 32-bit and 64-bit KASLR for fsl booke, add
documentation for the 64-bit part and rename kaslr-booke32.rst to
kaslr-booke.rst.

Signed-off-by: Jason Yan <[email protected]>
Cc: Scott Wood <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
---
Documentation/powerpc/index.rst | 2 +-
.../{kaslr-booke32.rst => kaslr-booke.rst} | 35 ++++++++++++++++---
2 files changed, 32 insertions(+), 5 deletions(-)
rename Documentation/powerpc/{kaslr-booke32.rst => kaslr-booke.rst} (59%)

diff --git a/Documentation/powerpc/index.rst b/Documentation/powerpc/index.rst
index 0d45f0fc8e57..3bad36943b22 100644
--- a/Documentation/powerpc/index.rst
+++ b/Documentation/powerpc/index.rst
@@ -20,7 +20,7 @@ powerpc
hvcs
imc
isa-versions
- kaslr-booke32
+ kaslr-booke
mpc52xx
papr_hcalls
pci_iov_resource_on_powernv
diff --git a/Documentation/powerpc/kaslr-booke32.rst b/Documentation/powerpc/kaslr-booke.rst
similarity index 59%
rename from Documentation/powerpc/kaslr-booke32.rst
rename to Documentation/powerpc/kaslr-booke.rst
index 8b259fdfdf03..42121fed8249 100644
--- a/Documentation/powerpc/kaslr-booke32.rst
+++ b/Documentation/powerpc/kaslr-booke.rst
@@ -1,15 +1,18 @@
.. SPDX-License-Identifier: GPL-2.0

-===========================
-KASLR for Freescale BookE32
-===========================
+=========================
+KASLR for Freescale BookE
+=========================

The word KASLR stands for Kernel Address Space Layout Randomization.

This document tries to explain the implementation of the KASLR for
-Freescale BookE32. KASLR is a security feature that deters exploit
+Freescale BookE. KASLR is a security feature that deters exploit
attempts relying on knowledge of the location of kernel internals.

+KASLR for Freescale BookE32
+---------------------------
+
Since CONFIG_RELOCATABLE has already supported, what we need to do is
map or copy kernel to a proper place and relocate. Freescale Book-E
parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
@@ -38,5 +41,29 @@ bit of the entropy to decide the index of the 64M zone. Then we chose a

kernstart_virt_addr

+
+KASLR for Freescale BookE64
+---------------------------
+
+The implementation for Freescale BookE64 is similar as BookE32. One
+difference is that Freescale BookE64 set up a TLB mapping of 1G during
+booting. Another difference is that ppc64 needs the kernel to be
+64K-aligned. So we can randomize the kernel in this 1G mapping and make
+it 64K-aligned. This can save some code to creat another TLB map at early
+boot. The disadvantage is that we only have about 1G/64K = 16384 slots to
+put the kernel in::
+
+ KERNELBASE
+
+ 64K |--> kernel <--|
+ | | |
+ +--+--+--+ +--+--+--+--+--+--+--+--+--+ +--+--+
+ | | | |....| | | | | | | | | |....| | |
+ +--+--+--+ +--+--+--+--+--+--+--+--+--+ +--+--+
+ | | 1G
+ |-----> offset <-----|
+
+ kernstart_virt_addr
+
To enable KASLR, set CONFIG_RANDOMIZE_BASE = y. If KASLR is enable and you
want to disable it at runtime, add "nokaslr" to the kernel cmdline.
--
2.17.2

2020-03-16 12:21:43

by Jason Yan

Subject: Re: [PATCH v4 0/6] implement KASLR for powerpc/fsl_booke/64

ping...

On 2020/3/6 14:40, Jason Yan wrote:
> This is a try to implement KASLR for Freescale BookE64 which is based on
> my earlier implementation for Freescale BookE32:
> https://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=131718&state=*
>
> The implementation for Freescale BookE64 is similar as BookE32. One
> difference is that Freescale BookE64 set up a TLB mapping of 1G during
> booting. Another difference is that ppc64 needs the kernel to be
> 64K-aligned. So we can randomize the kernel in this 1G mapping and make
> it 64K-aligned. This can save some code to creat another TLB map at
> early boot. The disadvantage is that we only have about 1G/64K = 16384
> slots to put the kernel in.
>
> KERNELBASE
>
> 64K |--> kernel <--|
> | | |
> +--+--+--+ +--+--+--+--+--+--+--+--+--+ +--+--+
> | | | |....| | | | | | | | | |....| | |
> +--+--+--+ +--+--+--+--+--+--+--+--+--+ +--+--+
> | | 1G
> |-----> offset <-----|
>
> kernstart_virt_addr
>
> I'm not sure if the slot numbers is enough or the design has any
> defects. If you have some better ideas, I would be happy to hear that.
>
> Thank you all.
>
> v3->v4:
> Do not define __kaslr_offset as a fixed symbol. Reference __run_at_load and
> __kaslr_offset by symbol instead of magic offsets.
> Use IS_ENABLED(CONFIG_PPC32) instead of #ifdef CONFIG_PPC32.
> Change kaslr-booke32 to kaslr-booke in index.rst
> Switch some instructions to 64-bit.
> v2->v3:
> Fix build error when KASLR is disabled.
> v1->v2:
> Add __kaslr_offset for the secondary cpu boot up.
>
> Jason Yan (6):
> powerpc/fsl_booke/kaslr: refactor kaslr_legal_offset() and
> kaslr_early_init()
> powerpc/fsl_booke/64: introduce reloc_kernel_entry() helper
> powerpc/fsl_booke/64: implement KASLR for fsl_booke64
> powerpc/fsl_booke/64: do not clear the BSS for the second pass
> powerpc/fsl_booke/64: clear the original kernel if randomized
> powerpc/fsl_booke/kaslr: rename kaslr-booke32.rst to kaslr-booke.rst
> and add 64bit part
>
> Documentation/powerpc/index.rst | 2 +-
> .../{kaslr-booke32.rst => kaslr-booke.rst} | 35 +++++++-
> arch/powerpc/Kconfig | 2 +-
> arch/powerpc/kernel/exceptions-64e.S | 23 +++++
> arch/powerpc/kernel/head_64.S | 13 +++
> arch/powerpc/kernel/setup_64.c | 3 +
> arch/powerpc/mm/mmu_decl.h | 23 ++---
> arch/powerpc/mm/nohash/kaslr_booke.c | 88 +++++++++++++------
> 8 files changed, 144 insertions(+), 45 deletions(-)
> rename Documentation/powerpc/{kaslr-booke32.rst => kaslr-booke.rst} (59%)
>

2020-03-20 03:21:12

by Daniel Axtens

Subject: Re: [PATCH v4 0/6] implement KASLR for powerpc/fsl_booke/64



> This is a try to implement KASLR for Freescale BookE64 which is based on
> my earlier implementation for Freescale BookE32:
> https://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=131718&state=*
>
> The implementation for Freescale BookE64 is similar as BookE32. One
> difference is that Freescale BookE64 set up a TLB mapping of 1G during
> booting. Another difference is that ppc64 needs the kernel to be
> 64K-aligned. So we can randomize the kernel in this 1G mapping and make
> it 64K-aligned. This can save some code to creat another TLB map at
> early boot. The disadvantage is that we only have about 1G/64K = 16384
> slots to put the kernel in.
>
> KERNELBASE
>
> 64K |--> kernel <--|
> | | |
> +--+--+--+ +--+--+--+--+--+--+--+--+--+ +--+--+
> | | | |....| | | | | | | | | |....| | |
> +--+--+--+ +--+--+--+--+--+--+--+--+--+ +--+--+
> | | 1G
> |-----> offset <-----|
>
> kernstart_virt_addr
>
> I'm not sure if the slot numbers is enough or the design has any
> defects. If you have some better ideas, I would be happy to hear that.
>
> Thank you all.
>
> v3->v4:
> Do not define __kaslr_offset as a fixed symbol. Reference __run_at_load and
> __kaslr_offset by symbol instead of magic offsets.
> Use IS_ENABLED(CONFIG_PPC32) instead of #ifdef CONFIG_PPC32.
> Change kaslr-booke32 to kaslr-booke in index.rst
> Switch some instructions to 64-bit.
> v2->v3:
> Fix build error when KASLR is disabled.
> v1->v2:
> Add __kaslr_offset for the secondary cpu boot up.
>
> Jason Yan (6):
> powerpc/fsl_booke/kaslr: refactor kaslr_legal_offset() and
> kaslr_early_init()
> powerpc/fsl_booke/64: introduce reloc_kernel_entry() helper
> powerpc/fsl_booke/64: implement KASLR for fsl_booke64
> powerpc/fsl_booke/64: do not clear the BSS for the second pass
> powerpc/fsl_booke/64: clear the original kernel if randomized
> powerpc/fsl_booke/kaslr: rename kaslr-booke32.rst to kaslr-booke.rst
> and add 64bit part
>
> Documentation/powerpc/index.rst | 2 +-
> .../{kaslr-booke32.rst => kaslr-booke.rst} | 35 +++++++-
> arch/powerpc/Kconfig | 2 +-
> arch/powerpc/kernel/exceptions-64e.S | 23 +++++
> arch/powerpc/kernel/head_64.S | 13 +++
> arch/powerpc/kernel/setup_64.c | 3 +
> arch/powerpc/mm/mmu_decl.h | 23 ++---
> arch/powerpc/mm/nohash/kaslr_booke.c | 88 +++++++++++++------
> 8 files changed, 144 insertions(+), 45 deletions(-)
> rename Documentation/powerpc/{kaslr-booke32.rst => kaslr-booke.rst} (59%)
>
> --
> 2.17.2


Attachments:
.config (91.72 kB)

2020-03-20 05:18:50

by Crystal Wood

Subject: Re: [PATCH v4 6/6] powerpc/fsl_booke/kaslr: rename kaslr-booke32.rst to kaslr-booke.rst and add 64bit part

On Fri, 2020-03-06 at 14:40 +0800, Jason Yan wrote:
> @@ -38,5 +41,29 @@ bit of the entropy to decide the index of the 64M zone.
> Then we chose a
>
> kernstart_virt_addr
>
> +
> +KASLR for Freescale BookE64
> +---------------------------
> +
> +The implementation for Freescale BookE64 is similar as BookE32. One

similar to

> +difference is that Freescale BookE64 set up a TLB mapping of 1G during
> +booting. Another difference is that ppc64 needs the kernel to be
> +64K-aligned. So we can randomize the kernel in this 1G mapping and make
> +it 64K-aligned. This can save some code to creat another TLB map at early

create

-Scott


2020-03-20 06:17:48

by Jason Yan

Subject: Re: [PATCH v4 0/6] implement KASLR for powerpc/fsl_booke/64



On 2020/3/20 11:19, Daniel Axtens wrote:
> Hi Jason,
>
> I tried to compile this series and got the following error:
>
> /home/dja/dev/linux/linux/arch/powerpc/mm/nohash/kaslr_booke.c: In function ‘kaslr_early_init’:
> /home/dja/dev/linux/linux/arch/powerpc/mm/nohash/kaslr_booke.c:357:33: error: ‘linear_sz’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
> 357 | regions.pa_end = memstart_addr + linear_sz;
> | ~~~~~~~~~~~~~~^~~~~~~~~~~
> /home/dja/dev/linux/linux/arch/powerpc/mm/nohash/kaslr_booke.c:317:21: note: ‘linear_sz’ was declared here
> 317 | unsigned long ram, linear_sz;
> | ^~~~~~~~~
> /home/dja/dev/linux/linux/arch/powerpc/mm/nohash/kaslr_booke.c:187:8: error: ‘ram’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
> 187 | ret = parse_crashkernel(boot_command_line, size, &crash_size,
> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 188 | &crash_base);
> | ~~~~~~~~~~~~
> /home/dja/dev/linux/linux/arch/powerpc/mm/nohash/kaslr_booke.c:317:16: note: ‘ram’ was declared here
> 317 | unsigned long ram, linear_sz;
> | ^~~
> cc1: all warnings being treated as errors
> make[4]: *** [/home/dja/dev/linux/linux/scripts/Makefile.build:268: arch/powerpc/mm/nohash/kaslr_booke.o] Error 1
> make[3]: *** [/home/dja/dev/linux/linux/scripts/Makefile.build:505: arch/powerpc/mm/nohash] Error 2
> make[2]: *** [/home/dja/dev/linux/linux/scripts/Makefile.build:505: arch/powerpc/mm] Error 2
> make[2]: *** Waiting for unfinished jobs....
>
> I have attached my .config file.
>

Thanks Daniel,

My config had CC_DISABLE_WARN_MAYBE_UNINITIALIZED=y enabled, so I missed
this warning. I will fix it.
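
One way to avoid it, purely as a sketch and not necessarily what the next
version will do, is to give the 64-bit path an explicit default instead of
leaving ram/linear_sz uninitialized outside the PPC32-only block:

	unsigned long ram = 0;
	unsigned long linear_sz = SZ_1G;	/* BookE64: 1G initial TLB mapping */

#ifdef CONFIG_PPC32
	ram = min_t(phys_addr_t, __max_low_memory, size);
	ram = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM, true);
	linear_sz = min_t(unsigned long, ram, SZ_512M);

	/* If the linear size is smaller than 64M, do not randomize */
	if (linear_sz < SZ_64M)
		return 0;
#endif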

Thanks again.

Jason

> I'm using
> powerpc64-linux-gnu-gcc (Ubuntu 9.2.1-9ubuntu1) 9.2.1 20191008
>
> Regards,
> Daniel
>
>
>
>
>> This is a try to implement KASLR for Freescale BookE64 which is based on
>> my earlier implementation for Freescale BookE32:
>> https://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=131718&state=*
>>
>> The implementation for Freescale BookE64 is similar as BookE32. One
>> difference is that Freescale BookE64 set up a TLB mapping of 1G during
>> booting. Another difference is that ppc64 needs the kernel to be
>> 64K-aligned. So we can randomize the kernel in this 1G mapping and make
>> it 64K-aligned. This can save some code to creat another TLB map at
>> early boot. The disadvantage is that we only have about 1G/64K = 16384
>> slots to put the kernel in.
>>
>> KERNELBASE
>>
>> 64K |--> kernel <--|
>> | | |
>> +--+--+--+ +--+--+--+--+--+--+--+--+--+ +--+--+
>> | | | |....| | | | | | | | | |....| | |
>> +--+--+--+ +--+--+--+--+--+--+--+--+--+ +--+--+
>> | | 1G
>> |-----> offset <-----|
>>
>> kernstart_virt_addr
>>
>> I'm not sure if the slot numbers is enough or the design has any
>> defects. If you have some better ideas, I would be happy to hear that.
>>
>> Thank you all.
>>
>> v3->v4:
>> Do not define __kaslr_offset as a fixed symbol. Reference __run_at_load and
>> __kaslr_offset by symbol instead of magic offsets.
>> Use IS_ENABLED(CONFIG_PPC32) instead of #ifdef CONFIG_PPC32.
>> Change kaslr-booke32 to kaslr-booke in index.rst
>> Switch some instructions to 64-bit.
>> v2->v3:
>> Fix build error when KASLR is disabled.
>> v1->v2:
>> Add __kaslr_offset for the secondary cpu boot up.
>>
>> Jason Yan (6):
>> powerpc/fsl_booke/kaslr: refactor kaslr_legal_offset() and
>> kaslr_early_init()
>> powerpc/fsl_booke/64: introduce reloc_kernel_entry() helper
>> powerpc/fsl_booke/64: implement KASLR for fsl_booke64
>> powerpc/fsl_booke/64: do not clear the BSS for the second pass
>> powerpc/fsl_booke/64: clear the original kernel if randomized
>> powerpc/fsl_booke/kaslr: rename kaslr-booke32.rst to kaslr-booke.rst
>> and add 64bit part
>>
>> Documentation/powerpc/index.rst | 2 +-
>> .../{kaslr-booke32.rst => kaslr-booke.rst} | 35 +++++++-
>> arch/powerpc/Kconfig | 2 +-
>> arch/powerpc/kernel/exceptions-64e.S | 23 +++++
>> arch/powerpc/kernel/head_64.S | 13 +++
>> arch/powerpc/kernel/setup_64.c | 3 +
>> arch/powerpc/mm/mmu_decl.h | 23 ++---
>> arch/powerpc/mm/nohash/kaslr_booke.c | 88 +++++++++++++------
>> 8 files changed, 144 insertions(+), 45 deletions(-)
>> rename Documentation/powerpc/{kaslr-booke32.rst => kaslr-booke.rst} (59%)
>>
>> --
>> 2.17.2