2020-02-05 02:58:23

by Jason Yan

Subject: [PATCH v2 0/6] implement KASLR for powerpc/fsl_booke/64

This is an attempt to implement KASLR for Freescale BookE64, based on
my earlier implementation for Freescale BookE32:
https://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=131718

The implementation for Freescale BookE64 is similar to BookE32. One
difference is that Freescale BookE64 sets up a 1G TLB mapping during
boot. Another difference is that ppc64 needs the kernel to be
64K-aligned. So we can randomize the kernel inside this 1G mapping and
keep it 64K-aligned. This saves the code that would otherwise be needed
to create another TLB map at early boot. The disadvantage is that we
only have about 1G/64K = 16384 slots to put the kernel in.

  KERNELBASE

      64K                    |--> kernel <--|
       |                     |              |
   +--+--+--+    +--+--+--+--+--+--+--+--+--+    +--+--+
   |  |  |  |....|  |  |  |  |  |  |  |  |  |....|  |  |
   +--+--+--+    +--+--+--+--+--+--+--+--+--+    +--+--+
   |                         |                       1G
   |----->   offset    <-----|

                          kernstart_virt_addr
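
Just to make the arithmetic concrete, here is a small userspace toy
(not kernel code; the seed and image size below are made-up values)
that picks a 64K-aligned offset inside the 1G mapping, mirroring the
offset selection done in kaslr_legal_offset() in patch 3 but ignoring
the reserved-region checks:

#include <stdio.h>
#include <stdint.h>

#define SZ_64K 0x10000ULL
#define SZ_1G  0x40000000ULL

int main(void)
{
        uint64_t seed = 0x123456789abcdefULL;   /* stand-in for the entropy */
        uint64_t kernel_size = 0x1400000ULL;    /* made-up image size (20M) */
        uint64_t offset;

        /* Keep the whole image inside the 1G mapping... */
        offset = seed % (SZ_1G - kernel_size);
        /* ...and round down to the required 64K alignment. */
        offset &= ~(SZ_64K - 1);

        printf("slots available: %llu\n", SZ_1G / SZ_64K);
        printf("chosen offset:   0x%llx\n", (unsigned long long)offset);
        return 0;
}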

I'm not sure whether the number of slots is enough or whether the design
has any defects. If you have better ideas, I would be happy to hear them.

Thank you all.

v1->v2:
Add __kaslr_offset for secondary CPU boot-up.

Jason Yan (6):
powerpc/fsl_booke/kaslr: refactor kaslr_legal_offset() and
kaslr_early_init()
powerpc/fsl_booke/64: introduce reloc_kernel_entry() helper
powerpc/fsl_booke/64: implement KASLR for fsl_booke64
powerpc/fsl_booke/64: do not clear the BSS for the second pass
powerpc/fsl_booke/64: clear the original kernel if randomized
powerpc/fsl_booke/kaslr: rename kaslr-booke32.rst to kaslr-booke.rst
and add 64bit part

.../{kaslr-booke32.rst => kaslr-booke.rst} | 35 +++++++--
arch/powerpc/Kconfig | 2 +-
arch/powerpc/kernel/exceptions-64e.S | 21 ++++++
arch/powerpc/kernel/head_64.S | 14 ++++
arch/powerpc/kernel/setup_64.c | 4 +-
arch/powerpc/mm/mmu_decl.h | 3 +-
arch/powerpc/mm/nohash/kaslr_booke.c | 71 +++++++++++++------
7 files changed, 122 insertions(+), 28 deletions(-)
rename Documentation/powerpc/{kaslr-booke32.rst => kaslr-booke.rst} (59%)

--
2.17.2


2020-02-05 02:58:28

by Jason Yan

Subject: [PATCH v2 4/6] powerpc/fsl_booke/64: do not clear the BSS for the second pass

The BSS section has already been cleared out in the first pass, so there
is no need to clear it again. This saves some time when booting with
KASLR enabled.

Signed-off-by: Jason Yan <[email protected]>
Cc: Scott Wood <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
---
arch/powerpc/kernel/head_64.S | 7 +++++++
1 file changed, 7 insertions(+)

diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index b4ececc4323d..9ae7fd8bbf7c 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -914,6 +914,13 @@ start_here_multiplatform:
bl relative_toc
tovirt(r2,r2)

+ /* Do not clear the BSS for the second pass if randomized */
+ LOAD_REG_ADDR(r3, kernstart_virt_addr)
+ lwz r3,0(r3)
+ LOAD_REG_IMMEDIATE(r4, KERNELBASE)
+ cmpw r3,r4
+ bne 4f
+
/* Clear out the BSS. It may have been done in prom_init,
* already but that's irrelevant since prom_init will soon
* be detached from the kernel completely. Besides, we need
--
2.17.2

2020-02-05 02:59:38

by Jason Yan

Subject: [PATCH v2 3/6] powerpc/fsl_booke/64: implement KASLR for fsl_booke64

The implementation for Freescale BookE64 is similar to BookE32. One
difference is that Freescale BookE64 sets up a 1G TLB mapping during
boot. Another difference is that ppc64 needs the kernel to be
64K-aligned. So we can randomize the kernel inside this 1G mapping and
keep it 64K-aligned. This saves the code that would otherwise be needed
to create another TLB map at early boot. The disadvantage is that we
only have about 1G/64K = 16384 slots to put the kernel in.

To support secondary CPU boot-up, a variable __kaslr_offset was added in
the first_256B section. This helps the secondary CPUs get the KASLR
offset before the 1:1 mapping has been set up.

Signed-off-by: Jason Yan <[email protected]>
Cc: Scott Wood <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
---
arch/powerpc/Kconfig | 2 +-
arch/powerpc/kernel/exceptions-64e.S | 8 +++++++
arch/powerpc/kernel/head_64.S | 7 ++++++
arch/powerpc/kernel/setup_64.c | 4 +++-
arch/powerpc/mm/nohash/kaslr_booke.c | 33 +++++++++++++++++++++++++---
5 files changed, 49 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index c150a9d49343..754aeb96bb1c 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -568,7 +568,7 @@ config RELOCATABLE

config RANDOMIZE_BASE
bool "Randomize the address of the kernel image"
- depends on (FSL_BOOKE && FLATMEM && PPC32)
+ depends on (PPC_FSL_BOOK3E && FLATMEM)
depends on RELOCATABLE
help
Randomizes the virtual address at which the kernel image is
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index 1b9b174bee86..121daeaf573d 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -1378,6 +1378,7 @@ skpinv: addi r6,r6,1 /* Increment */
1: mflr r6
addi r6,r6,(2f - 1b)
tovirt(r6,r6)
+ add r6,r6,r19
lis r7,MSR_KERNEL@h
ori r7,r7,MSR_KERNEL@l
mtspr SPRN_SRR0,r6
@@ -1400,6 +1401,7 @@ skpinv: addi r6,r6,1 /* Increment */

/* We translate LR and return */
tovirt(r8,r8)
+ add r8,r8,r19
mtlr r8
blr

@@ -1528,6 +1530,7 @@ a2_tlbinit_code_end:
*/
_GLOBAL(start_initialization_book3e)
mflr r28
+ li r19, 0

/* First, we need to setup some initial TLBs to map the kernel
* text, data and bss at PAGE_OFFSET. We don't have a real mode
@@ -1570,6 +1573,10 @@ _GLOBAL(book3e_secondary_core_init)
cmplwi r4,0
bne 2f

+ LOAD_REG_ADDR_PIC(r19, __kaslr_offset)
+ lwz r19,0(r19)
+ rlwinm r19,r19,0,0,5
+
/* Setup TLB for this core */
bl initial_tlb_book3e

@@ -1602,6 +1609,7 @@ _GLOBAL(book3e_secondary_core_init)
lis r3,PAGE_OFFSET@highest
sldi r3,r3,32
or r28,r28,r3
+ add r28,r28,r19
1: mtlr r28
blr

diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index ad79fddb974d..b4ececc4323d 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -104,6 +104,13 @@ __secondary_hold_acknowledge:
.8byte 0x0

#ifdef CONFIG_RELOCATABLE
+#ifdef CONFIG_PPC_BOOK3E
+ . = 0x58
+ .globl __kaslr_offset
+__kaslr_offset:
+DEFINE_FIXED_SYMBOL(__kaslr_offset)
+ .long 0
+#endif
/* This flag is set to 1 by a loader if the kernel should run
* at the loaded address instead of the linked address. This
* is used by kexec-tools to keep the the kdump kernel in the
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 6104917a282d..a16b970a8d1a 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -66,7 +66,7 @@
#include <asm/feature-fixups.h>
#include <asm/kup.h>
#include <asm/early_ioremap.h>
-
+#include <mm/mmu_decl.h>
#include "setup.h"

int spinning_secondaries;
@@ -300,6 +300,8 @@ void __init early_setup(unsigned long dt_ptr)
/* Enable early debugging if any specified (see udbg.h) */
udbg_early_init();

+ kaslr_early_init(__va(dt_ptr), 0);
+
udbg_printf(" -> %s(), dt_ptr: 0x%lx\n", __func__, dt_ptr);

/*
diff --git a/arch/powerpc/mm/nohash/kaslr_booke.c b/arch/powerpc/mm/nohash/kaslr_booke.c
index 07b036e98353..c6f5c1db1394 100644
--- a/arch/powerpc/mm/nohash/kaslr_booke.c
+++ b/arch/powerpc/mm/nohash/kaslr_booke.c
@@ -231,7 +231,7 @@ static __init unsigned long get_usable_address(const void *fdt,
unsigned long pa;
unsigned long pa_end;

- for (pa = offset; (long)pa > (long)start; pa -= SZ_16K) {
+ for (pa = offset; (long)pa > (long)start; pa -= SZ_64K) {
pa_end = pa + regions.kernel_size;
if (overlaps_region(fdt, pa, pa_end))
continue;
@@ -265,14 +265,14 @@ static unsigned long __init kaslr_legal_offset(void *dt_ptr, unsigned long rando
{
unsigned long koffset = 0;
unsigned long start;
- unsigned long index;
unsigned long offset;

+#ifdef CONFIG_PPC32
/*
* Decide which 64M we want to start
* Only use the low 8 bits of the random seed
*/
- index = random & 0xFF;
+ unsigned long index = random & 0xFF;
index %= regions.linear_sz / SZ_64M;

/* Decide offset inside 64M */
@@ -287,6 +287,15 @@ static unsigned long __init kaslr_legal_offset(void *dt_ptr, unsigned long rando
break;
index--;
}
+#else
+ /* Decide kernel offset inside 1G */
+ offset = random % (SZ_1G - regions.kernel_size);
+ offset = round_down(offset, SZ_64K);
+
+ start = memstart_addr;
+ offset = memstart_addr + offset;
+ koffset = get_usable_address(dt_ptr, start, offset);
+#endif

if (koffset != 0)
koffset -= memstart_addr;
@@ -325,6 +334,7 @@ static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size
else
pr_warn("KASLR: No safe seed for randomizing the kernel base.\n");

+#ifdef CONFIG_PPC32
ram = min_t(phys_addr_t, __max_low_memory, size);
ram = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM, true);
linear_sz = min_t(unsigned long, ram, SZ_512M);
@@ -332,6 +342,7 @@ static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size
/* If the linear size is smaller than 64M, do not randmize */
if (linear_sz < SZ_64M)
return 0;
+#endif

/* check for a reserved-memory node and record its cell sizes */
regions.reserved_mem = fdt_path_offset(dt_ptr, "/reserved-memory");
@@ -363,6 +374,17 @@ notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
unsigned long offset;
unsigned long kernel_sz;

+#ifdef CONFIG_PPC64
+ unsigned int *__kaslr_offset = (unsigned int *)(KERNELBASE + 0x58);
+ unsigned int *__run_at_load = (unsigned int *)(KERNELBASE + 0x5c);
+
+ if (*__run_at_load == 1)
+ return;
+
+ /* Setup flat device-tree pointer */
+ initial_boot_params = dt_ptr;
+#endif
+
kernel_sz = (unsigned long)_end - (unsigned long)_stext;

offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
@@ -372,6 +394,7 @@ notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
kernstart_virt_addr += offset;
kernstart_addr += offset;

+#ifdef CONFIG_PPC32
is_second_reloc = 1;

if (offset >= SZ_64M) {
@@ -381,6 +404,10 @@ notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
/* Create kernel map to relocate in */
create_kaslr_tlb_entry(1, tlb_virt, tlb_phys);
}
+#else
+ *__kaslr_offset = kernstart_virt_addr - KERNELBASE;
+ *__run_at_load = 1;
+#endif

/* Copy the kernel to it's new location and run */
memcpy((void *)kernstart_virt_addr, (void *)_stext, kernel_sz);
--
2.17.2

2020-02-05 02:59:51

by Jason Yan

Subject: [PATCH v2 6/6] powerpc/fsl_booke/kaslr: rename kaslr-booke32.rst to kaslr-booke.rst and add 64bit part

Now that we support both 32-bit and 64-bit KASLR for fsl booke, add
documentation for the 64-bit part and rename kaslr-booke32.rst to
kaslr-booke.rst.

Signed-off-by: Jason Yan <[email protected]>
Cc: Scott Wood <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
---
.../{kaslr-booke32.rst => kaslr-booke.rst} | 35 ++++++++++++++++---
1 file changed, 31 insertions(+), 4 deletions(-)
rename Documentation/powerpc/{kaslr-booke32.rst => kaslr-booke.rst} (59%)

diff --git a/Documentation/powerpc/kaslr-booke32.rst b/Documentation/powerpc/kaslr-booke.rst
similarity index 59%
rename from Documentation/powerpc/kaslr-booke32.rst
rename to Documentation/powerpc/kaslr-booke.rst
index 8b259fdfdf03..42121fed8249 100644
--- a/Documentation/powerpc/kaslr-booke32.rst
+++ b/Documentation/powerpc/kaslr-booke.rst
@@ -1,15 +1,18 @@
.. SPDX-License-Identifier: GPL-2.0

-===========================
-KASLR for Freescale BookE32
-===========================
+=========================
+KASLR for Freescale BookE
+=========================

The word KASLR stands for Kernel Address Space Layout Randomization.

This document tries to explain the implementation of the KASLR for
-Freescale BookE32. KASLR is a security feature that deters exploit
+Freescale BookE. KASLR is a security feature that deters exploit
attempts relying on knowledge of the location of kernel internals.

+KASLR for Freescale BookE32
+---------------------------
+
Since CONFIG_RELOCATABLE has already supported, what we need to do is
map or copy kernel to a proper place and relocate. Freescale Book-E
parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
@@ -38,5 +41,29 @@ bit of the entropy to decide the index of the 64M zone. Then we chose a

kernstart_virt_addr

+
+KASLR for Freescale BookE64
+---------------------------
+
+The implementation for Freescale BookE64 is similar to BookE32. One
+difference is that Freescale BookE64 sets up a 1G TLB mapping during
+boot. Another difference is that ppc64 needs the kernel to be
+64K-aligned. So we can randomize the kernel inside this 1G mapping and
+keep it 64K-aligned. This saves the code that would otherwise be needed
+to create another TLB map at early boot. The disadvantage is that we only
+have about 1G/64K = 16384 slots to put the kernel in::
+
+   KERNELBASE
+
+       64K                    |--> kernel <--|
+        |                     |              |
+    +--+--+--+    +--+--+--+--+--+--+--+--+--+    +--+--+
+    |  |  |  |....|  |  |  |  |  |  |  |  |  |....|  |  |
+    +--+--+--+    +--+--+--+--+--+--+--+--+--+    +--+--+
+    |                         |                       1G
+    |----->   offset    <-----|
+
+                           kernstart_virt_addr
+
To enable KASLR, set CONFIG_RANDOMIZE_BASE = y. If KASLR is enable and you
want to disable it at runtime, add "nokaslr" to the kernel cmdline.
--
2.17.2

2020-02-05 05:19:03

by kernel test robot

Subject: Re: [PATCH v2 3/6] powerpc/fsl_booke/64: implement KASLR for fsl_booke64

Hi Jason,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on powerpc/next]
[also build test ERROR on v5.5 next-20200204]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url: https://github.com/0day-ci/linux/commits/Jason-Yan/implement-KASLR-for-powerpc-fsl_booke-64/20200205-105837
base: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc-defconfig (attached as .config)
compiler: powerpc64-linux-gcc (GCC) 7.5.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# save the attached .config to linux build tree
GCC_VERSION=7.5.0 make.cross ARCH=powerpc

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <[email protected]>

All errors (new ones prefixed by >>):

arch/powerpc/kernel/setup_64.c: In function 'early_setup':
>> arch/powerpc/kernel/setup_64.c:303:2: error: implicit declaration of function 'kaslr_early_init'; did you mean 'udbg_early_init'? [-Werror=implicit-function-declaration]
kaslr_early_init(__va(dt_ptr), 0);
^~~~~~~~~~~~~~~~
udbg_early_init
cc1: all warnings being treated as errors
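
The error means that no declaration of kaslr_early_init() is visible to
setup_64.c in this configuration. As a sketch only (the prototype is the
one used in the patch; which header and guard to use is of course up to
the author), making a declaration or an empty stub visible
unconditionally would avoid the implicit-declaration error:

#ifdef CONFIG_RANDOMIZE_BASE
void kaslr_early_init(void *dt_ptr, phys_addr_t size);
#else
static inline void kaslr_early_init(void *dt_ptr, phys_addr_t size) {}
#endif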

vim +303 arch/powerpc/kernel/setup_64.c

262
263 /*
264 * Early initialization entry point. This is called by head.S
265 * with MMU translation disabled. We rely on the "feature" of
266 * the CPU that ignores the top 2 bits of the address in real
267 * mode so we can access kernel globals normally provided we
268 * only toy with things in the RMO region. From here, we do
269 * some early parsing of the device-tree to setup out MEMBLOCK
270 * data structures, and allocate & initialize the hash table
271 * and segment tables so we can start running with translation
272 * enabled.
273 *
274 * It is this function which will call the probe() callback of
275 * the various platform types and copy the matching one to the
276 * global ppc_md structure. Your platform can eventually do
277 * some very early initializations from the probe() routine, but
278 * this is not recommended, be very careful as, for example, the
279 * device-tree is not accessible via normal means at this point.
280 */
281
282 void __init early_setup(unsigned long dt_ptr)
283 {
284 static __initdata struct paca_struct boot_paca;
285
286 /* -------- printk is _NOT_ safe to use here ! ------- */
287
288 /* Try new device tree based feature discovery ... */
289 if (!dt_cpu_ftrs_init(__va(dt_ptr)))
290 /* Otherwise use the old style CPU table */
291 identify_cpu(0, mfspr(SPRN_PVR));
292
293 /* Assume we're on cpu 0 for now. Don't write to the paca yet! */
294 initialise_paca(&boot_paca, 0);
295 setup_paca(&boot_paca);
296 fixup_boot_paca();
297
298 /* -------- printk is now safe to use ------- */
299
300 /* Enable early debugging if any specified (see udbg.h) */
301 udbg_early_init();
302
> 303 kaslr_early_init(__va(dt_ptr), 0);
304
305 udbg_printf(" -> %s(), dt_ptr: 0x%lx\n", __func__, dt_ptr);
306
307 /*
308 * Do early initialization using the flattened device
309 * tree, such as retrieving the physical memory map or
310 * calculating/retrieving the hash table size.
311 */
312 early_init_devtree(__va(dt_ptr));
313
314 /* Now we know the logical id of our boot cpu, setup the paca. */
315 if (boot_cpuid != 0) {
316 /* Poison paca_ptrs[0] again if it's not the boot cpu */
317 memset(&paca_ptrs[0], 0x88, sizeof(paca_ptrs[0]));
318 }
319 setup_paca(paca_ptrs[boot_cpuid]);
320 fixup_boot_paca();
321
322 /*
323 * Configure exception handlers. This include setting up trampolines
324 * if needed, setting exception endian mode, etc...
325 */
326 configure_exceptions();
327
328 /*
329 * Configure Kernel Userspace Protection. This needs to happen before
330 * feature fixups for platforms that implement this using features.
331 */
332 setup_kup();
333
334 /* Apply all the dynamic patching */
335 apply_feature_fixups();
336 setup_feature_keys();
337
338 early_ioremap_setup();
339
340 /* Initialize the hash table or TLB handling */
341 early_init_mmu();
342
343 /*
344 * After firmware and early platform setup code has set things up,
345 * we note the SPR values for configurable control/performance
346 * registers, and use those as initial defaults.
347 */
348 record_spr_defaults();
349
350 /*
351 * At this point, we can let interrupts switch to virtual mode
352 * (the MMU has been setup), so adjust the MSR in the PACA to
353 * have IR and DR set and enable AIL if it exists
354 */
355 cpu_ready_for_interrupts();
356
357 /*
358 * We enable ftrace here, but since we only support DYNAMIC_FTRACE, it
359 * will only actually get enabled on the boot cpu much later once
360 * ftrace itself has been initialized.
361 */
362 this_cpu_enable_ftrace();
363
364 udbg_printf(" <- %s()\n", __func__);
365

---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation

