2019-07-31 10:22:21

by Jason Yan

Subject: [PATCH v3 00/10] implement KASLR for powerpc/fsl_booke/32

This series implements KASLR for powerpc/fsl_booke/32, as a security
feature that deters exploit attempts relying on knowledge of the location
of kernel internals.

Since CONFIG_RELOCATABLE is already supported, all we need to do is
map or copy the kernel to a proper place and relocate it. Freescale
Book-E parts expect lowmem to be mapped by fixed TLB entries (TLB1).
The TLB1 entries are not suitable for mapping the kernel directly in a
randomized region, so we choose to copy the kernel to a proper place
and restart to relocate.

Entropy is derived from the banner and timer base, which will change
every build and boot. This is not entirely secure, so additionally the
bootloader may pass entropy via the /chosen/kaslr-seed node in the
device tree.
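
For illustration, the mixing boils down to folding each entropy source
through a rotate-and-XOR hash. Below is a simplified, user-space sketch
of the helpers introduced in patch 07; boot_seed() and its parameter
names are illustrative, not the patch's API:

#include <stddef.h>

/* rotate_xor() as introduced in patch 07: rotate the running hash by
 * an odd number of bits, then XOR in the next word of the source. */
static unsigned long rotate_xor(unsigned long hash, const void *area,
				size_t size)
{
	const unsigned long *ptr = area;
	size_t i;

	for (i = 0; i < size / sizeof(hash); i++) {
		hash = (hash << (sizeof(hash) * 8 - 7)) | (hash >> 7);
		hash ^= ptr[i];
	}
	return hash;
}

/* Fold the three sources described above into one seed. */
unsigned long boot_seed(const char *banner, size_t banner_len,
			unsigned long long timebase,
			unsigned long long fdt_seed)
{
	unsigned long hash = 0;

	hash = rotate_xor(hash, banner, banner_len);
	hash = rotate_xor(hash, &timebase, sizeof(timebase));
	if (fdt_seed)	/* optional /chosen/kaslr-seed */
		hash = rotate_xor(hash, &fdt_seed, sizeof(fdt_seed));
	return hash;
}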

We will use the first 512M of the low memory to randomize the kernel
image. The memory will be split into 64M zones. We will use the lower 8
bits of the entropy to decide the index of the 64M zone. Then we choose
a 16K-aligned offset inside the 64M zone to put the kernel in.

KERNELBASE

    |-->   64M   <--|
    |               |
    +---------------+    +----------------+---------------+
    |               |....|    |kernel|    |               |
    +---------------+    +----------------+---------------+
    |                         |
    |----->   offset    <-----|

                              kimage_vaddr

We also check whether we would overlap some areas like the dtb area,
the initrd area or the crashkernel area. If we cannot find a proper
area, kaslr will be disabled and the kernel will boot from its original
location.
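
As a rough sketch of the placement policy described above (illustrative
code, not taken verbatim from the series; the zone-0 special case in
patch 07 is omitted for brevity):

#define SZ_16K	0x4000UL
#define SZ_64M	0x4000000UL

/* Pick a 16K-aligned candidate inside one of the 64M zones.
 * The caller guarantees linear_sz >= SZ_64M and kernel_sz < SZ_64M. */
unsigned long pick_candidate(unsigned long random, unsigned long linear_sz,
			     unsigned long kernel_sz)
{
	unsigned long index, offset;

	/* The low 8 bits of the entropy select one of the 64M zones. */
	index = (random & 0xFF) % (linear_sz / SZ_64M);

	/* The rest selects a 16K-aligned offset inside that zone. */
	offset = random % (SZ_64M - kernel_sz);
	offset &= ~(SZ_16K - 1);	/* round down to 16K */

	return index * SZ_64M + offset;
}

Candidates that overlap a reserved area are then stepped past in 16K
decrements until a usable one is found, as get_usable_offset() does in
patch 07.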

Changes since v2:
- Remove unnecessary #ifdef
- Use SZ_64M instead of 0x4000000
- Call early_init_dt_scan_chosen() to init boot_command_line
- Rename kaslr_second_init() to kaslr_late_init()

Changes since v1:
- Remove some useless 'extern' keywords.
- Replace EXPORT_SYMBOL with EXPORT_SYMBOL_GPL
- Improve some assembly code
- Use memzero_explicit instead of memset
- Use boot_command_line and remove early_command_line
- Do not print kaslr offset if kaslr is disabled

Jason Yan (10):
powerpc: unify definition of M_IF_NEEDED
powerpc: move memstart_addr and kernstart_addr to init-common.c
powerpc: introduce kimage_vaddr to store the kernel base
powerpc/fsl_booke/32: introduce create_tlb_entry() helper
powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
powerpc/fsl_booke/32: implement KASLR infrastructure
powerpc/fsl_booke/32: randomize the kernel image offset
powerpc/fsl_booke/kaslr: clear the original kernel if randomized
powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
powerpc/fsl_booke/kaslr: dump out kernel offset information on panic

arch/powerpc/Kconfig | 11 +
arch/powerpc/include/asm/nohash/mmu-book3e.h | 10 +
arch/powerpc/include/asm/page.h | 7 +
arch/powerpc/kernel/Makefile | 1 +
arch/powerpc/kernel/early_32.c | 2 +-
arch/powerpc/kernel/exceptions-64e.S | 10 -
arch/powerpc/kernel/fsl_booke_entry_mapping.S | 23 +-
arch/powerpc/kernel/head_fsl_booke.S | 55 ++-
arch/powerpc/kernel/kaslr_booke.c | 427 ++++++++++++++++++
arch/powerpc/kernel/machine_kexec.c | 1 +
arch/powerpc/kernel/misc_64.S | 5 -
arch/powerpc/kernel/setup-common.c | 19 +
arch/powerpc/mm/init-common.c | 7 +
arch/powerpc/mm/init_32.c | 5 -
arch/powerpc/mm/init_64.c | 5 -
arch/powerpc/mm/mmu_decl.h | 10 +
arch/powerpc/mm/nohash/fsl_booke.c | 8 +-
17 files changed, 558 insertions(+), 48 deletions(-)
create mode 100644 arch/powerpc/kernel/kaslr_booke.c

--
2.17.2


2019-07-31 10:22:23

by Jason Yan

Subject: [PATCH v3 03/10] powerpc: introduce kimage_vaddr to store the kernel base

At present the kernel base is a fixed value, KERNELBASE. To support
KASLR, we need a variable to store the actual kernel base.
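
For example, patch 10 of this series builds on this variable to expose
the randomization offset:

static inline unsigned long kaslr_offset(void)
{
	return kimage_vaddr - KERNELBASE;
}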

Signed-off-by: Jason Yan <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
Reviewed-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/page.h | 2 ++
arch/powerpc/mm/init-common.c | 2 ++
2 files changed, 4 insertions(+)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 0d52f57fca04..60a68d3a54b1 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -315,6 +315,8 @@ void arch_free_page(struct page *page, int order);

struct vm_area_struct;

+extern unsigned long kimage_vaddr;
+
#include <asm-generic/memory_model.h>
#endif /* __ASSEMBLY__ */
#include <asm/slice.h>
diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index 152ae0d21435..d4801ce48dc5 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -25,6 +25,8 @@ phys_addr_t memstart_addr = (phys_addr_t)~0ull;
EXPORT_SYMBOL_GPL(memstart_addr);
phys_addr_t kernstart_addr;
EXPORT_SYMBOL_GPL(kernstart_addr);
+unsigned long kimage_vaddr = KERNELBASE;
+EXPORT_SYMBOL_GPL(kimage_vaddr);

static bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
static bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);
--
2.17.2

2019-07-31 10:51:05

by Jason Yan

Subject: [PATCH v3 07/10] powerpc/fsl_booke/32: randomize the kernel image offset

Now that we have basic support for relocating the kernel to an
appropriate place, we can start to randomize the offset.

Entropy is derived from the banner and timer, which will change every
build and boot. This is not entirely secure, so additionally the
bootloader may pass entropy via the /chosen/kaslr-seed node in the
device tree.

We will use the first 512M of the low memory to randomize the kernel
image. The memory will be split into 64M zones. We will use the lower 8
bits of the entropy to decide the index of the 64M zone. Then we choose
a 16K-aligned offset inside the 64M zone to put the kernel in.

KERNELBASE

    |-->   64M   <--|
    |               |
    +---------------+    +----------------+---------------+
    |               |....|    |kernel|    |               |
    +---------------+    +----------------+---------------+
    |                         |
    |----->   offset    <-----|

                              kimage_vaddr

We also check whether we would overlap some areas like the dtb area,
the initrd area or the crashkernel area. If we cannot find a proper
area, kaslr will be disabled and the kernel will boot from its original
location.

Signed-off-by: Jason Yan <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
---
arch/powerpc/kernel/kaslr_booke.c | 322 +++++++++++++++++++++++++++++-
1 file changed, 320 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/kaslr_booke.c b/arch/powerpc/kernel/kaslr_booke.c
index 30f84c0321b2..97250cad71de 100644
--- a/arch/powerpc/kernel/kaslr_booke.c
+++ b/arch/powerpc/kernel/kaslr_booke.c
@@ -23,6 +23,8 @@
#include <linux/delay.h>
#include <linux/highmem.h>
#include <linux/memblock.h>
+#include <linux/libfdt.h>
+#include <linux/crash_core.h>
#include <asm/pgalloc.h>
#include <asm/prom.h>
#include <asm/io.h>
@@ -34,15 +36,329 @@
#include <asm/machdep.h>
#include <asm/setup.h>
#include <asm/paca.h>
+#include <asm/kdump.h>
#include <mm/mmu_decl.h>
+#include <generated/compile.h>
+#include <generated/utsrelease.h>
+
+#ifdef DEBUG
+#define DBG(fmt...) printk(KERN_ERR fmt)
+#else
+#define DBG(fmt...)
+#endif
+
+struct regions {
+ unsigned long pa_start;
+ unsigned long pa_end;
+ unsigned long kernel_size;
+ unsigned long dtb_start;
+ unsigned long dtb_end;
+ unsigned long initrd_start;
+ unsigned long initrd_end;
+ unsigned long crash_start;
+ unsigned long crash_end;
+ int reserved_mem;
+ int reserved_mem_addr_cells;
+ int reserved_mem_size_cells;
+};

extern int is_second_reloc;

+/* Simplified build-specific string for starting entropy. */
+static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@"
+ LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION;
+
+static __init void kaslr_get_cmdline(void *fdt)
+{
+ int node = fdt_path_offset(fdt, "/chosen");
+
+ early_init_dt_scan_chosen(node, "chosen", 1, boot_command_line);
+}
+
+static unsigned long __init rotate_xor(unsigned long hash, const void *area,
+ size_t size)
+{
+ size_t i;
+ unsigned long *ptr = (unsigned long *)area;
+
+ for (i = 0; i < size / sizeof(hash); i++) {
+ /* Rotate by odd number of bits and XOR. */
+ hash = (hash << ((sizeof(hash) * 8) - 7)) | (hash >> 7);
+ hash ^= ptr[i];
+ }
+
+ return hash;
+}
+
+/* Attempt to create a simple but unpredictable starting entropy. */
+static unsigned long __init get_boot_seed(void *fdt)
+{
+ unsigned long hash = 0;
+
+ hash = rotate_xor(hash, build_str, sizeof(build_str));
+ hash = rotate_xor(hash, fdt, fdt_totalsize(fdt));
+
+ return hash;
+}
+
+static __init u64 get_kaslr_seed(void *fdt)
+{
+ int node, len;
+ fdt64_t *prop;
+ u64 ret;
+
+ node = fdt_path_offset(fdt, "/chosen");
+ if (node < 0)
+ return 0;
+
+ prop = fdt_getprop_w(fdt, node, "kaslr-seed", &len);
+ if (!prop || len != sizeof(u64))
+ return 0;
+
+ ret = fdt64_to_cpu(*prop);
+ *prop = 0;
+ return ret;
+}
+
+static __init bool regions_overlap(u32 s1, u32 e1, u32 s2, u32 e2)
+{
+ return e1 >= s2 && e2 >= s1;
+}
+
+static __init bool overlaps_reserved_region(const void *fdt, u32 start,
+ u32 end, struct regions *regions)
+{
+ int subnode, len, i;
+ u64 base, size;
+
+ /* check for overlap with /memreserve/ entries */
+ for (i = 0; i < fdt_num_mem_rsv(fdt); i++) {
+ if (fdt_get_mem_rsv(fdt, i, &base, &size) < 0)
+ continue;
+ if (regions_overlap(start, end, base, base + size))
+ return true;
+ }
+
+ if (regions->reserved_mem < 0)
+ return false;
+
+ /* check for overlap with static reservations in /reserved-memory */
+ for (subnode = fdt_first_subnode(fdt, regions->reserved_mem);
+ subnode >= 0;
+ subnode = fdt_next_subnode(fdt, subnode)) {
+ const fdt32_t *reg;
+ u64 rsv_end;
+
+ len = 0;
+ reg = fdt_getprop(fdt, subnode, "reg", &len);
+ while (len >= (regions->reserved_mem_addr_cells +
+ regions->reserved_mem_size_cells)) {
+ base = fdt32_to_cpu(reg[0]);
+ if (regions->reserved_mem_addr_cells == 2)
+ base = (base << 32) | fdt32_to_cpu(reg[1]);
+
+ reg += regions->reserved_mem_addr_cells;
+ len -= 4 * regions->reserved_mem_addr_cells;
+
+ size = fdt32_to_cpu(reg[0]);
+ if (regions->reserved_mem_size_cells == 2)
+ size = (size << 32) | fdt32_to_cpu(reg[1]);
+
+ reg += regions->reserved_mem_size_cells;
+ len -= 4 * regions->reserved_mem_size_cells;
+
+ if (base >= regions->pa_end)
+ continue;
+
+ rsv_end = min(base + size, (u64)U32_MAX);
+
+ if (regions_overlap(start, end, base, rsv_end))
+ return true;
+ }
+ }
+ return false;
+}
+
+static __init bool overlaps_region(const void *fdt, u32 start,
+ u32 end, struct regions *regions)
+{
+ if (regions_overlap(start, end, regions->dtb_start,
+ regions->dtb_end))
+ return true;
+
+ if (regions_overlap(start, end, regions->initrd_start,
+ regions->initrd_end))
+ return true;
+
+ if (regions_overlap(start, end, regions->crash_start,
+ regions->crash_end))
+ return true;
+
+ return overlaps_reserved_region(fdt, start, end, regions);
+}
+
+static void __init get_crash_kernel(void *fdt, unsigned long size,
+ struct regions *regions)
+{
+#ifdef CONFIG_CRASH_CORE
+ unsigned long long crash_size, crash_base;
+ int ret;
+
+ ret = parse_crashkernel(boot_command_line, size, &crash_size,
+ &crash_base);
+ if (ret != 0 || crash_size == 0)
+ return;
+ if (crash_base == 0)
+ crash_base = KDUMP_KERNELBASE;
+
+ regions->crash_start = (unsigned long)crash_base;
+ regions->crash_end = (unsigned long)(crash_base + crash_size);
+
+ DBG("crash_base=0x%llx crash_size=0x%llx\n", crash_base, crash_size);
+#endif
+}
+
+static void __init get_initrd_range(void *fdt, struct regions *regions)
+{
+ u64 start, end;
+ int node, len;
+ const __be32 *prop;
+
+ node = fdt_path_offset(fdt, "/chosen");
+ if (node < 0)
+ return;
+
+ prop = fdt_getprop(fdt, node, "linux,initrd-start", &len);
+ if (!prop)
+ return;
+ start = of_read_number(prop, len / 4);
+
+ prop = fdt_getprop(fdt, node, "linux,initrd-end", &len);
+ if (!prop)
+ return;
+ end = of_read_number(prop, len / 4);
+
+ regions->initrd_start = (unsigned long)start;
+ regions->initrd_end = (unsigned long)end;
+
+ DBG("initrd_start=0x%llx initrd_end=0x%llx\n", start, end);
+}
+
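+/*
+ * Walk down from 'start' in SZ_16K steps and return the first address
+ * where the kernel (regions->kernel_size bytes) would not overlap any
+ * reserved area, or 0 if none is found.
+ */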
+static __init unsigned long get_usable_offset(const void *fdt, struct regions *regions,
+ unsigned long start)
+{
+ unsigned long pa;
+ unsigned long pa_end;
+
+ for (pa = start; pa > regions->pa_start; pa -= SZ_16K) {
+ pa_end = pa + regions->kernel_size;
+ if (overlaps_region(fdt, pa, pa_end, regions))
+ continue;
+
+ return pa;
+ }
+ return 0;
+}
+
+static __init void get_cell_sizes(const void *fdt, int node, int *addr_cells,
+ int *size_cells)
+{
+ const int *prop;
+ int len;
+
+ /*
+ * Retrieve the #address-cells and #size-cells properties
+ * from the 'node', or use the default if not provided.
+ */
+ *addr_cells = *size_cells = 1;
+
+ prop = fdt_getprop(fdt, node, "#address-cells", &len);
+ if (len == 4)
+ *addr_cells = fdt32_to_cpu(*prop);
+ prop = fdt_getprop(fdt, node, "#size-cells", &len);
+ if (len == 4)
+ *size_cells = fdt32_to_cpu(*prop);
+}
+
static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size,
unsigned long kernel_sz)
{
- /* return a fixed offset of 64M for now */
- return SZ_64M;
+ unsigned long offset, random;
+ unsigned long ram, linear_sz;
+ unsigned long kaslr_offset;
+ u64 seed;
+ struct regions regions;
+ long index;
+
+ random = get_boot_seed(dt_ptr);
+
+ seed = get_tb() << 32;
+ seed ^= get_tb();
+ random = rotate_xor(random, &seed, sizeof(seed));
+
+ /*
+ * Retrieve (and wipe) the seed from the FDT
+ */
+ seed = get_kaslr_seed(dt_ptr);
+ if (seed)
+ random = rotate_xor(random, &seed, sizeof(seed));
+
+ ram = min((phys_addr_t)__max_low_memory, size);
+ ram = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM, true);
+ linear_sz = min(ram, (unsigned long)SZ_512M);
+
+ /* If the linear size is smaller than 64M, do not randomize */
+ if (linear_sz < SZ_64M)
+ return 0;
+
+ memset(&regions, 0, sizeof(regions));
+
+ /* check for a reserved-memory node and record its cell sizes */
+ regions.reserved_mem = fdt_path_offset(dt_ptr, "/reserved-memory");
+ if (regions.reserved_mem >= 0)
+ get_cell_sizes(dt_ptr, regions.reserved_mem,
+ &regions.reserved_mem_addr_cells,
+ &regions.reserved_mem_size_cells);
+
+ regions.pa_start = 0;
+ regions.pa_end = linear_sz;
+ regions.dtb_start = __pa(dt_ptr);
+ regions.dtb_end = __pa(dt_ptr) + fdt_totalsize(dt_ptr);
+ regions.kernel_size = kernel_sz;
+
+ get_initrd_range(dt_ptr, &regions);
+ get_crash_kernel(dt_ptr, ram, &regions);
+
+ /*
+ * Decide which 64M zone we want to start from.
+ * Only the low 8 bits of the random seed are used.
+ */
+ index = random & 0xFF;
+ index %= linear_sz / SZ_64M;
+
+ /* Decide offset inside 64M */
+ if (index == 0) {
+ offset = random % (SZ_64M - round_up(kernel_sz, SZ_16K) * 2);
+ offset += round_up(kernel_sz, SZ_16K);
+ offset = round_up(offset, SZ_16K);
+ } else {
+ offset = random % (SZ_64M - kernel_sz);
+ offset = round_down(offset, SZ_16K);
+ }
+
+ while (index >= 0) {
+ unsigned long pa = offset + index * SZ_64M;
+ kaslr_offset = get_usable_offset(dt_ptr, &regions, pa);
+ if (kaslr_offset)
+ break;
+ index--;
+ }
+
+ /* Did not find any usable region? Give up randomization */
+ if (index < 0)
+ kaslr_offset = 0;
+
+ return kaslr_offset;
}

/*
@@ -59,6 +375,8 @@ notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)

kernel_sz = (unsigned long)_end - KERNELBASE;

+ kaslr_get_cmdline(dt_ptr);
+
offset = kaslr_choose_location(dt_ptr, size, kernel_sz);

if (offset == 0)
--
2.17.2

2019-07-31 10:51:07

by Jason Yan

Subject: [PATCH v3 10/10] powerpc/fsl_booke/kaslr: dump out kernel offset information on panic

When kaslr is enabled, the kernel offset is different for every boot.
This makes the kernel harder to debug. Dump out the kernel offset on
panic so that we can easily debug the kernel.
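
With the dumped offset, addresses from a randomized boot can be
translated back to vmlinux addresses. A hypothetical helper to
illustrate (not part of this patch):

/* Illustrative only: subtracting the dumped offset recovers the
 * address a symbol has in the unrandomized vmlinux. */
static inline unsigned long unslide(unsigned long addr)
{
	return addr - kaslr_offset();
}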

Signed-off-by: Jason Yan <[email protected]>
Cc: Diana Craciun <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Kees Cook <[email protected]>
Reviewed-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/page.h | 5 +++++
arch/powerpc/kernel/machine_kexec.c | 1 +
arch/powerpc/kernel/setup-common.c | 19 +++++++++++++++++++
3 files changed, 25 insertions(+)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 60a68d3a54b1..cd3ac530e58d 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -317,6 +317,11 @@ struct vm_area_struct;

extern unsigned long kimage_vaddr;

+static inline unsigned long kaslr_offset(void)
+{
+ return kimage_vaddr - KERNELBASE;
+}
+
#include <asm-generic/memory_model.h>
#endif /* __ASSEMBLY__ */
#include <asm/slice.h>
diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
index c4ed328a7b96..078fe3d76feb 100644
--- a/arch/powerpc/kernel/machine_kexec.c
+++ b/arch/powerpc/kernel/machine_kexec.c
@@ -86,6 +86,7 @@ void arch_crash_save_vmcoreinfo(void)
VMCOREINFO_STRUCT_SIZE(mmu_psize_def);
VMCOREINFO_OFFSET(mmu_psize_def, shift);
#endif
+ vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
}

/*
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index 1f8db666468d..064075f02837 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -715,12 +715,31 @@ static struct notifier_block ppc_panic_block = {
.priority = INT_MIN /* may not return; must be done last */
};

+/*
+ * Dump out kernel offset information on panic.
+ */
+static int dump_kernel_offset(struct notifier_block *self, unsigned long v,
+ void *p)
+{
+ pr_emerg("Kernel Offset: 0x%lx from 0x%lx\n",
+ kaslr_offset(), KERNELBASE);
+
+ return 0;
+}
+
+static struct notifier_block kernel_offset_notifier = {
+ .notifier_call = dump_kernel_offset
+};
+
void __init setup_panic(void)
{
/* PPC64 always does a hard irq disable in its panic handler */
if (!IS_ENABLED(CONFIG_PPC64) && !ppc_md.panic)
return;
atomic_notifier_chain_register(&panic_notifier_list, &ppc_panic_block);
+ if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0)
+ atomic_notifier_chain_register(&panic_notifier_list,
+ &kernel_offset_notifier);
}

#ifdef CONFIG_CHECK_CACHE_COHERENCY
--
2.17.2

2019-08-01 15:29:37

by Diana Madalina Craciun

Subject: Re: [PATCH v3 00/10] implement KASLR for powerpc/fsl_booke/32

Hi Jason,

I have tested this series on a P4080 platform.

Regards,
Diana


On 7/31/2019 12:26 PM, Jason Yan wrote:
> This series implements KASLR for powerpc/fsl_booke/32, as a security
> feature that deters exploit attempts relying on knowledge of the location
> of kernel internals.
>
> [...]

2019-08-02 02:30:18

by Jason Yan

Subject: Re: [PATCH v3 00/10] implement KASLR for powerpc/fsl_booke/32



On 2019/8/1 22:36, Diana Madalina Craciun wrote:
> Hi Jason,
>
> I have tested this series on a P4080 platform.
>
> Regards,
> Diana

Diana, thank you so much.

So can you take a look at the code of this version and give a
Reviewed-by or Tested-by?

Thanks,
Jason

2019-08-02 13:22:43

by Diana Madalina Craciun

Subject: Re: [PATCH v3 00/10] implement KASLR for powerpc/fsl_booke/32

Except for one comment in patch 06/10:

Reviewed-by: Diana Craciun <[email protected]>
Tested-by: Diana Craciun <[email protected]>

Regards,
Diana

On 7/31/2019 12:26 PM, Jason Yan wrote:
> This series implements KASLR for powerpc/fsl_booke/32, as a security
> feature that deters exploit attempts relying on knowledge of the location
> of kernel internals.
>
> [...]