2023-02-17 07:26:41

by Youling Tang

Subject: [PATCH v5 0/5] LoongArch: Add kernel relocation and KASLR support

This patch series adds support for kernel relocation and KASLR (64-bit only).

Tested kernel images built with the new toolchain (Binutils-2.40 + patched
GCC-12.2) and the old toolchain (kernel.org cross toolchain [1]) on a
3A5000-7A2000-EVB.

With CONFIG_RANDOMIZE_BASE=y, the results are:

1. first boot, new toolchain:

$ sudo cat /proc/iomem | grep Kernel
01080000-0189ffff : Kernel code
018a0000-01deb5ff : Kernel data
01deb600-01ef6e9f : Kernel bss

2. second boot, new toolchain:

$ sudo cat /proc/iomem | grep Kernel
012f0000-01b0ffff : Kernel code
01b10000-0205b5ff : Kernel data
0205b600-02166e9f : Kernel bss

3. first boot, old toolchain:
010e0000-018fffff : Kernel code
01900000-01e591ff : Kernel data
01e59200-01f56dcf : Kernel bss

4. second boot, old toolchain:
010b0000-018cffff : Kernel code
018d0000-01e291ff : Kernel data
01e29200-01f26dcf : Kernel bss

Changes from v4:
- Add la_abs macro implementation.
- Remove patch2 (LoongArch: Use la.pcrel instead of la.abs for exception
handlers).
- Remove SYS_SUPPORTS_RELOCATABLE.
- Fix do_kaslr.
- Fix compiler warnings.
- Move some declarations and struct definitions to setup.h.

Changes from v3:

- Rename JUMP_LINK_ADDR to JUMP_VIRT_ADDR, and pass the temporary
registers to it as macro parameters.
- Reimplement kernel relocation so that the kernel relocates itself
automatically whenever the link address and the load address differ
(one usage scenario is kdump).
- Reimplement KASLR.

Changes from v2:

- Correctly fix up the pcaddi12i/ori/lu32i.d/lu52i.d sequence generated by
GNU as <= 2.39 for la.pcrel.

Changes from v1 to v2:

- Relocate the handlers instead of using a trampoline, to avoid
performance issue on NUMA systems.
- Fix compiler warnings.

Xi Ruoyao (1):
LoongArch: Use la.pcrel instead of la.abs when it's trivially possible

Youling Tang (4):
LoongArch: Add JUMP_VIRT_ADDR macro implementation to avoid using
la.abs
LoongArch: Add la_abs macro implementation
LoongArch: Add support for kernel relocation
LoongArch: Add support for kernel address space layout randomization
(KASLR)

arch/loongarch/Kconfig | 31 +++
arch/loongarch/Makefile | 5 +
arch/loongarch/include/asm/asmmacro.h | 17 ++
arch/loongarch/include/asm/setup.h | 14 ++
arch/loongarch/include/asm/stackframe.h | 13 +-
arch/loongarch/include/asm/uaccess.h | 1 -
arch/loongarch/kernel/Makefile | 2 +
arch/loongarch/kernel/entry.S | 2 +-
arch/loongarch/kernel/genex.S | 8 +-
arch/loongarch/kernel/head.S | 31 ++-
arch/loongarch/kernel/relocate.c | 240 ++++++++++++++++++++++++
arch/loongarch/kernel/vmlinux.lds.S | 20 +-
arch/loongarch/mm/tlbex.S | 17 +-
arch/loongarch/power/suspend_asm.S | 5 +-
14 files changed, 375 insertions(+), 31 deletions(-)
create mode 100644 arch/loongarch/kernel/relocate.c

--
2.37.3



2023-02-17 07:26:54

by Youling Tang

Subject: [PATCH v5 1/5] LoongArch: Use la.pcrel instead of la.abs when it's trivially possible

From: Xi Ruoyao <[email protected]>

Let's start to kill la.abs in preparation for the subsequent support of
the PIE kernel.

Signed-off-by: Xi Ruoyao <[email protected]>
---
arch/loongarch/include/asm/stackframe.h | 2 +-
arch/loongarch/include/asm/uaccess.h | 1 -
arch/loongarch/kernel/entry.S | 2 +-
arch/loongarch/kernel/head.S | 2 +-
arch/loongarch/mm/tlbex.S | 3 +--
5 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/loongarch/include/asm/stackframe.h b/arch/loongarch/include/asm/stackframe.h
index 4ca953062b5b..7deb043ce387 100644
--- a/arch/loongarch/include/asm/stackframe.h
+++ b/arch/loongarch/include/asm/stackframe.h
@@ -90,7 +90,7 @@
.endm

.macro set_saved_sp stackp temp temp2
- la.abs \temp, kernelsp
+ la.pcrel \temp, kernelsp
#ifdef CONFIG_SMP
LONG_ADD \temp, \temp, u0
#endif
diff --git a/arch/loongarch/include/asm/uaccess.h b/arch/loongarch/include/asm/uaccess.h
index 255899d4a7c3..0d22991ae430 100644
--- a/arch/loongarch/include/asm/uaccess.h
+++ b/arch/loongarch/include/asm/uaccess.h
@@ -22,7 +22,6 @@
extern u64 __ua_limit;

#define __UA_ADDR ".dword"
-#define __UA_LA "la.abs"
#define __UA_LIMIT __ua_limit

/*
diff --git a/arch/loongarch/kernel/entry.S b/arch/loongarch/kernel/entry.S
index 55e23b10219f..b940687f27a6 100644
--- a/arch/loongarch/kernel/entry.S
+++ b/arch/loongarch/kernel/entry.S
@@ -20,7 +20,7 @@
.align 5
SYM_FUNC_START(handle_syscall)
csrrd t0, PERCPU_BASE_KS
- la.abs t1, kernelsp
+ la.pcrel t1, kernelsp
add.d t1, t1, t0
move t2, sp
ld.d sp, t1, 0
diff --git a/arch/loongarch/kernel/head.S b/arch/loongarch/kernel/head.S
index 57bada6b4e93..aa6181714ec3 100644
--- a/arch/loongarch/kernel/head.S
+++ b/arch/loongarch/kernel/head.S
@@ -117,7 +117,7 @@ SYM_CODE_START(smpboot_entry)
li.w t0, 0x00 # FPE=0, SXE=0, ASXE=0, BTE=0
csrwr t0, LOONGARCH_CSR_EUEN

- la.abs t0, cpuboot_data
+ la.pcrel t0, cpuboot_data
ld.d sp, t0, CPU_BOOT_STACK
ld.d tp, t0, CPU_BOOT_TINFO

diff --git a/arch/loongarch/mm/tlbex.S b/arch/loongarch/mm/tlbex.S
index 58781c6e4191..3dd2a9615cd9 100644
--- a/arch/loongarch/mm/tlbex.S
+++ b/arch/loongarch/mm/tlbex.S
@@ -24,8 +24,7 @@
move a0, sp
REG_S a2, sp, PT_BVADDR
li.w a1, \write
- la.abs t0, do_page_fault
- jirl ra, t0, 0
+ bl do_page_fault
RESTORE_ALL_AND_RET
SYM_FUNC_END(tlb_do_page_fault_\write)
.endm
--
2.37.3


2023-02-17 07:26:58

by Youling Tang

Subject: [PATCH v5 2/5] LoongArch: Add JUMP_VIRT_ADDR macro implementation to avoid using la.abs

Add JUMP_VIRT_ADDR macro implementation to avoid using la.abs.

Signed-off-by: Youling Tang <[email protected]>
---
arch/loongarch/include/asm/stackframe.h | 9 +++++++++
arch/loongarch/kernel/head.S | 12 ++++--------
arch/loongarch/power/suspend_asm.S | 5 ++---
3 files changed, 15 insertions(+), 11 deletions(-)

diff --git a/arch/loongarch/include/asm/stackframe.h b/arch/loongarch/include/asm/stackframe.h
index 7deb043ce387..855c9d9e3210 100644
--- a/arch/loongarch/include/asm/stackframe.h
+++ b/arch/loongarch/include/asm/stackframe.h
@@ -10,6 +10,7 @@
#include <asm/asm.h>
#include <asm/asmmacro.h>
#include <asm/asm-offsets.h>
+#include <asm/addrspace.h>
#include <asm/loongarch.h>
#include <asm/thread_info.h>

@@ -216,4 +217,12 @@
RESTORE_SP_AND_RET \docfi
.endm

+/* Jump to the virtual address. */
+ .macro JUMP_VIRT_ADDR temp1 temp2
+ li.d \temp1, CACHE_BASE
+ pcaddi \temp2, 0
+ or \temp1, \temp1, \temp2
+ jirl zero, \temp1, 0xc
+ .endm
+
#endif /* _ASM_STACKFRAME_H */
diff --git a/arch/loongarch/kernel/head.S b/arch/loongarch/kernel/head.S
index aa6181714ec3..d2ac26b5b22b 100644
--- a/arch/loongarch/kernel/head.S
+++ b/arch/loongarch/kernel/head.S
@@ -50,11 +50,8 @@ SYM_CODE_START(kernel_entry) # kernel entry point
li.d t0, CSR_DMW1_INIT # CA, PLV0, 0x9000 xxxx xxxx xxxx
csrwr t0, LOONGARCH_CSR_DMWIN1

- /* We might not get launched at the address the kernel is linked to,
- so we jump there. */
- la.abs t0, 0f
- jr t0
-0:
+ JUMP_VIRT_ADDR t0, t1
+
/* Enable PG */
li.w t0, 0xb0 # PLV=0, IE=0, PG=1
csrwr t0, LOONGARCH_CSR_CRMD
@@ -106,9 +103,8 @@ SYM_CODE_START(smpboot_entry)
li.d t0, CSR_DMW1_INIT # CA, PLV0
csrwr t0, LOONGARCH_CSR_DMWIN1

- la.abs t0, 0f
- jr t0
-0:
+ JUMP_VIRT_ADDR t0, t1
+
/* Enable PG */
li.w t0, 0xb0 # PLV=0, IE=0, PG=1
csrwr t0, LOONGARCH_CSR_CRMD
diff --git a/arch/loongarch/power/suspend_asm.S b/arch/loongarch/power/suspend_asm.S
index eb2675642f9f..90da899c06a1 100644
--- a/arch/loongarch/power/suspend_asm.S
+++ b/arch/loongarch/power/suspend_asm.S
@@ -78,9 +78,8 @@ SYM_INNER_LABEL(loongarch_wakeup_start, SYM_L_GLOBAL)
li.d t0, CSR_DMW1_INIT # CA, PLV0
csrwr t0, LOONGARCH_CSR_DMWIN1

- la.abs t0, 0f
- jr t0
-0:
+ JUMP_VIRT_ADDR t0, t1
+
la.pcrel t0, acpi_saved_sp
ld.d sp, t0, 0
SETUP_WAKEUP
--
2.37.3


2023-02-17 07:27:00

by Youling Tang

Subject: [PATCH v5 3/5] LoongArch: Add la_abs macro implementation

Use the "la_abs macro" instead of the "la.abs pseudo instruction" to prepare
for the subsequent PIE kernel. When PIE is not enabled, la_abs is equivalent
to la.abs.

Signed-off-by: Youling Tang <[email protected]>
---
arch/loongarch/include/asm/asmmacro.h | 4 ++++
arch/loongarch/include/asm/stackframe.h | 2 +-
arch/loongarch/kernel/genex.S | 8 ++++----
arch/loongarch/mm/tlbex.S | 14 +++++++-------
4 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/arch/loongarch/include/asm/asmmacro.h b/arch/loongarch/include/asm/asmmacro.h
index 328bb956f241..adc92464afa6 100644
--- a/arch/loongarch/include/asm/asmmacro.h
+++ b/arch/loongarch/include/asm/asmmacro.h
@@ -667,4 +667,8 @@
nor \dst, \src, zero
.endm

+.macro la_abs reg, sym
+ la.abs \reg, \sym
+.endm
+
#endif /* _ASM_ASMMACRO_H */
diff --git a/arch/loongarch/include/asm/stackframe.h b/arch/loongarch/include/asm/stackframe.h
index 855c9d9e3210..3e1cf97f56e6 100644
--- a/arch/loongarch/include/asm/stackframe.h
+++ b/arch/loongarch/include/asm/stackframe.h
@@ -78,7 +78,7 @@
* new value in sp.
*/
.macro get_saved_sp docfi=0
- la.abs t1, kernelsp
+ la_abs t1, kernelsp
#ifdef CONFIG_SMP
csrrd t0, PERCPU_BASE_KS
LONG_ADD t1, t1, t0
diff --git a/arch/loongarch/kernel/genex.S b/arch/loongarch/kernel/genex.S
index 7e5c293ed89f..44ff1ff64260 100644
--- a/arch/loongarch/kernel/genex.S
+++ b/arch/loongarch/kernel/genex.S
@@ -34,7 +34,7 @@ SYM_FUNC_END(__arch_cpu_idle)
SYM_FUNC_START(handle_vint)
BACKUP_T0T1
SAVE_ALL
- la.abs t1, __arch_cpu_idle
+ la_abs t1, __arch_cpu_idle
LONG_L t0, sp, PT_ERA
/* 32 byte rollback region */
ori t0, t0, 0x1f
@@ -43,7 +43,7 @@ SYM_FUNC_START(handle_vint)
LONG_S t0, sp, PT_ERA
1: move a0, sp
move a1, sp
- la.abs t0, do_vint
+ la_abs t0, do_vint
jirl ra, t0, 0
RESTORE_ALL_AND_RET
SYM_FUNC_END(handle_vint)
@@ -72,7 +72,7 @@ SYM_FUNC_END(except_vec_cex)
SAVE_ALL
build_prep_\prep
move a0, sp
- la.abs t0, do_\handler
+ la_abs t0, do_\handler
jirl ra, t0, 0
668:
RESTORE_ALL_AND_RET
@@ -93,6 +93,6 @@ SYM_FUNC_END(except_vec_cex)
BUILD_HANDLER reserved reserved none /* others */

SYM_FUNC_START(handle_sys)
- la.abs t0, handle_syscall
+ la_abs t0, handle_syscall
jr t0
SYM_FUNC_END(handle_sys)
diff --git a/arch/loongarch/mm/tlbex.S b/arch/loongarch/mm/tlbex.S
index 3dd2a9615cd9..244e2f5aeee5 100644
--- a/arch/loongarch/mm/tlbex.S
+++ b/arch/loongarch/mm/tlbex.S
@@ -39,7 +39,7 @@ SYM_FUNC_START(handle_tlb_protect)
move a1, zero
csrrd a2, LOONGARCH_CSR_BADV
REG_S a2, sp, PT_BVADDR
- la.abs t0, do_page_fault
+ la_abs t0, do_page_fault
jirl ra, t0, 0
RESTORE_ALL_AND_RET
SYM_FUNC_END(handle_tlb_protect)
@@ -115,7 +115,7 @@ smp_pgtable_change_load:

#ifdef CONFIG_64BIT
vmalloc_load:
- la.abs t1, swapper_pg_dir
+ la_abs t1, swapper_pg_dir
b vmalloc_done_load
#endif

@@ -186,7 +186,7 @@ tlb_huge_update_load:
nopage_tlb_load:
dbar 0
csrrd ra, EXCEPTION_KS2
- la.abs t0, tlb_do_page_fault_0
+ la_abs t0, tlb_do_page_fault_0
jr t0
SYM_FUNC_END(handle_tlb_load)

@@ -262,7 +262,7 @@ smp_pgtable_change_store:

#ifdef CONFIG_64BIT
vmalloc_store:
- la.abs t1, swapper_pg_dir
+ la_abs t1, swapper_pg_dir
b vmalloc_done_store
#endif

@@ -335,7 +335,7 @@ tlb_huge_update_store:
nopage_tlb_store:
dbar 0
csrrd ra, EXCEPTION_KS2
- la.abs t0, tlb_do_page_fault_1
+ la_abs t0, tlb_do_page_fault_1
jr t0
SYM_FUNC_END(handle_tlb_store)

@@ -410,7 +410,7 @@ smp_pgtable_change_modify:

#ifdef CONFIG_64BIT
vmalloc_modify:
- la.abs t1, swapper_pg_dir
+ la_abs t1, swapper_pg_dir
b vmalloc_done_modify
#endif

@@ -482,7 +482,7 @@ tlb_huge_update_modify:
nopage_tlb_modify:
dbar 0
csrrd ra, EXCEPTION_KS2
- la.abs t0, tlb_do_page_fault_1
+ la_abs t0, tlb_do_page_fault_1
jr t0
SYM_FUNC_END(handle_tlb_modify)

--
2.37.3


2023-02-17 07:27:09

by Youling Tang

Subject: [PATCH v5 4/5] LoongArch: Add support for kernel relocation

This config allows the kernel to be compiled as a PIE (Position
Independent Executable) and relocated to any virtual address at
runtime: this paves the way for KASLR. Runtime relocation is possible
because the relocation metadata is embedded into the kernel image.

Signed-off-by: Youling Tang <[email protected]>
Signed-off-by: Xi Ruoyao <[email protected]> # Use arch_initcall
Signed-off-by: Jinyang He <[email protected]> # Provide la_abs idea
---
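Note (illustration only, not part of the patch): relocate_relative() below
adjusts the addend of every R_LARCH_RELATIVE entry by the load-vs-link
offset and writes it back, while relocate_la_abs() re-encodes the immediates
of each lu12i.w/ori/lu32i.d/lu52i.d sequence recorded in the .la_abs section.
A stand-alone user-space sketch of that immediate split (the address value
is made up):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Example: a relocated kernel virtual address (made-up value). */
	uint64_t v = 0x90000000012f0000ULL + 0x1234;

	uint32_t lu12iw = (v >> 12) & 0xfffff;	/* bits 12..31 */
	uint32_t ori    = v & 0xfff;		/* bits  0..11 */
	uint32_t lu32id = (v >> 32) & 0xfffff;	/* bits 32..51 */
	uint32_t lu52id = v >> 52;		/* bits 52..63 */

	/* The patched instruction sequence rebuilds the address field by field. */
	uint64_t rebuilt = ((uint64_t)lu52id << 52) | ((uint64_t)lu32id << 32) |
			   ((uint64_t)lu12iw << 12) | ori;

	assert(rebuilt == v);
	printf("lu12i.w=%#x ori=%#x lu32i.d=%#x lu52i.d=%#x -> %#llx\n",
	       lu12iw, ori, lu32id, lu52id, (unsigned long long)rebuilt);
	return 0;
}
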
arch/loongarch/Kconfig | 8 ++
arch/loongarch/Makefile | 5 ++
arch/loongarch/include/asm/asmmacro.h | 13 ++++
arch/loongarch/include/asm/setup.h | 14 ++++
arch/loongarch/kernel/Makefile | 2 +
arch/loongarch/kernel/head.S | 5 ++
arch/loongarch/kernel/relocate.c | 104 ++++++++++++++++++++++++++
arch/loongarch/kernel/vmlinux.lds.S | 20 ++++-
8 files changed, 169 insertions(+), 2 deletions(-)
create mode 100644 arch/loongarch/kernel/relocate.c

diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index 54bd3dbde1f2..406a28758a52 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -522,6 +522,14 @@ config PHYSICAL_START
specified in the "crashkernel=YM@XM" command line boot parameter
passed to the panic-ed kernel).

+config RELOCATABLE
+ bool "Relocatable kernel"
+ help
+ This builds the kernel as a Position Independent Executable (PIE),
+ which retains all relocation metadata required to relocate the
+ kernel binary at runtime to a different virtual address than the
+ address it was linked at.
+
config SECCOMP
bool "Enable seccomp to safely compute untrusted bytecode"
depends on PROC_FS
diff --git a/arch/loongarch/Makefile b/arch/loongarch/Makefile
index ccfb52700237..2aba6ff30436 100644
--- a/arch/loongarch/Makefile
+++ b/arch/loongarch/Makefile
@@ -71,6 +71,11 @@ KBUILD_AFLAGS_MODULE += -Wa,-mla-global-with-abs
KBUILD_CFLAGS_MODULE += -fplt -Wa,-mla-global-with-abs,-mla-local-with-abs
endif

+ifeq ($(CONFIG_RELOCATABLE),y)
+LDFLAGS_vmlinux += -static -pie --no-dynamic-linker -z notext
+KBUILD_CFLAGS_KERNEL += -fPIE
+endif
+
cflags-y += -ffreestanding
cflags-y += $(call cc-option, -mno-check-zero-division)

diff --git a/arch/loongarch/include/asm/asmmacro.h b/arch/loongarch/include/asm/asmmacro.h
index adc92464afa6..ba99908d7d10 100644
--- a/arch/loongarch/include/asm/asmmacro.h
+++ b/arch/loongarch/include/asm/asmmacro.h
@@ -668,7 +668,20 @@
.endm

.macro la_abs reg, sym
+#ifndef CONFIG_RELOCATABLE
la.abs \reg, \sym
+#else
+766:
+ lu12i.w \reg, 0
+ ori \reg, \reg, 0
+ lu32i.d \reg, 0
+ lu52i.d \reg, \reg, 0
+ .pushsection ".la_abs", "aw", %progbits
+768:
+ .dword 768b-766b
+ .dword \sym
+ .popsection
+#endif
.endm

#endif /* _ASM_ASMMACRO_H */
diff --git a/arch/loongarch/include/asm/setup.h b/arch/loongarch/include/asm/setup.h
index 72ead58039f3..efbd8d7b15df 100644
--- a/arch/loongarch/include/asm/setup.h
+++ b/arch/loongarch/include/asm/setup.h
@@ -21,4 +21,18 @@ extern void per_cpu_trap_init(int cpu);
extern void set_handler(unsigned long offset, void *addr, unsigned long len);
extern void set_merr_handler(unsigned long offset, void *addr, unsigned long len);

+#ifdef CONFIG_RELOCATABLE
+struct rela_la_abs {
+ long offset;
+ long symvalue;
+};
+
+extern long __rela_dyn_start;
+extern long __rela_dyn_end;
+extern void *__la_abs_begin;
+extern void *__la_abs_end;
+
+extern void __init relocate_kernel(void);
+#endif
+
#endif /* __SETUP_H */
diff --git a/arch/loongarch/kernel/Makefile b/arch/loongarch/kernel/Makefile
index 45c78aea63ce..3288854dbe36 100644
--- a/arch/loongarch/kernel/Makefile
+++ b/arch/loongarch/kernel/Makefile
@@ -31,6 +31,8 @@ endif
obj-$(CONFIG_MODULES) += module.o module-sections.o
obj-$(CONFIG_STACKTRACE) += stacktrace.o

+obj-$(CONFIG_RELOCATABLE) += relocate.o
+
obj-$(CONFIG_PROC_FS) += proc.o

obj-$(CONFIG_SMP) += smp.o
diff --git a/arch/loongarch/kernel/head.S b/arch/loongarch/kernel/head.S
index d2ac26b5b22b..499edc80d8ab 100644
--- a/arch/loongarch/kernel/head.S
+++ b/arch/loongarch/kernel/head.S
@@ -86,6 +86,11 @@ SYM_CODE_START(kernel_entry) # kernel entry point
PTR_ADD sp, sp, tp
set_saved_sp sp, t0, t1

+#ifdef CONFIG_RELOCATABLE
+ /* Apply the relocations */
+ bl relocate_kernel
+#endif
+
bl start_kernel
ASM_BUG()

diff --git a/arch/loongarch/kernel/relocate.c b/arch/loongarch/kernel/relocate.c
new file mode 100644
index 000000000000..6f31753be1da
--- /dev/null
+++ b/arch/loongarch/kernel/relocate.c
@@ -0,0 +1,104 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Support for Kernel relocation at boot time
+ *
+ * Copyright (C) 2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/elf.h>
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/panic_notifier.h>
+#include <asm/inst.h>
+#include <asm/sections.h>
+#include <asm/setup.h>
+
+#define RELOCATED(x) ((void *)((long)x + reloc_offset))
+
+static unsigned long reloc_offset;
+
+static inline __init void relocate_relative(void)
+{
+ Elf64_Rela *rela, *rela_end;
+ rela = (Elf64_Rela *)&__rela_dyn_start;
+ rela_end = (Elf64_Rela *)&__rela_dyn_end;
+
+ for ( ; rela < rela_end; rela++) {
+ Elf64_Addr addr = rela->r_offset;
+ Elf64_Addr relocated_addr = rela->r_addend;
+
+ if (rela->r_info != R_LARCH_RELATIVE)
+ continue;
+
+ if (relocated_addr >= VMLINUX_LOAD_ADDRESS)
+ relocated_addr =
+ (Elf64_Addr)RELOCATED(relocated_addr);
+
+ *(Elf64_Addr *)RELOCATED(addr) = relocated_addr;
+ }
+}
+
+static inline void __init relocate_la_abs(void)
+{
+ struct rela_la_abs *p;
+
+ for (p = (void *)&__la_abs_begin; (void *)p < (void *)&__la_abs_end; p++) {
+ long v = p->symvalue + reloc_offset;
+ union loongarch_instruction *insn = (void *)p - p->offset;
+ u32 lu12iw, ori, lu32id, lu52id;
+
+ lu12iw = (v >> 12) & 0xfffff;
+ ori = v & 0xfff;
+ lu32id = (v >> 32) & 0xfffff;
+ lu52id = v >> 52;
+
+ insn[0].reg1i20_format.immediate = lu12iw;
+ insn[1].reg2i12_format.immediate = ori;
+ insn[2].reg1i20_format.immediate = lu32id;
+ insn[3].reg2i12_format.immediate = lu52id;
+ }
+}
+
+void __init relocate_kernel(void)
+{
+ reloc_offset = (unsigned long)_text - VMLINUX_LOAD_ADDRESS;
+
+ if (reloc_offset)
+ relocate_relative();
+
+ relocate_la_abs();
+}
+
+/*
+ * Show relocation information on panic.
+ */
+static void show_kernel_relocation(const char *level)
+{
+ if (reloc_offset > 0) {
+ printk(level);
+ pr_cont("Kernel relocated offset @ 0x%lx\n", reloc_offset);
+ pr_cont(" .text @ 0x%lx\n", (unsigned long)&_text);
+ pr_cont(" .data @ 0x%lx\n", (unsigned long)&_sdata);
+ pr_cont(" .bss @ 0x%lx\n", (unsigned long)&__bss_start);
+ }
+}
+
+static int kernel_location_notifier_fn(struct notifier_block *self,
+ unsigned long v, void *p)
+{
+ show_kernel_relocation(KERN_EMERG);
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block kernel_location_notifier = {
+ .notifier_call = kernel_location_notifier_fn
+};
+
+static int __init register_kernel_offset_dumper(void)
+{
+ atomic_notifier_chain_register(&panic_notifier_list,
+ &kernel_location_notifier);
+ return 0;
+}
+
+arch_initcall(register_kernel_offset_dumper);
diff --git a/arch/loongarch/kernel/vmlinux.lds.S b/arch/loongarch/kernel/vmlinux.lds.S
index 733b16e8d55d..5ec0c008f01d 100644
--- a/arch/loongarch/kernel/vmlinux.lds.S
+++ b/arch/loongarch/kernel/vmlinux.lds.S
@@ -66,10 +66,21 @@ SECTIONS
__alt_instructions_end = .;
}

+#ifdef CONFIG_RELOCATABLE
+ . = ALIGN(8);
+ .la_abs : AT(ADDR(.la_abs) - LOAD_OFFSET) {
+ __la_abs_begin = .;
+ *(.la_abs)
+ __la_abs_end = .;
+ }
+#endif
+
.got : ALIGN(16) { *(.got) }
.plt : ALIGN(16) { *(.plt) }
.got.plt : ALIGN(16) { *(.got.plt) }

+ .data.rel : { *(.data.rel*) }
+
. = ALIGN(PECOFF_SEGMENT_ALIGN);
__init_begin = .;
__inittext_begin = .;
@@ -93,8 +104,6 @@ SECTIONS
PERCPU_SECTION(1 << CONFIG_L1_CACHE_SHIFT)
#endif

- .rela.dyn : ALIGN(8) { *(.rela.dyn) *(.rela*) }
-
.init.bss : {
*(.init.bss)
}
@@ -107,6 +116,12 @@ SECTIONS
RO_DATA(4096)
RW_DATA(1 << CONFIG_L1_CACHE_SHIFT, PAGE_SIZE, THREAD_SIZE)

+ .rela.dyn : ALIGN(8) {
+ __rela_dyn_start = .;
+ *(.rela.dyn) *(.rela*)
+ __rela_dyn_end = .;
+ }
+
.sdata : {
*(.sdata)
}
@@ -133,6 +148,7 @@ SECTIONS

DISCARDS
/DISCARD/ : {
+ *(.dynamic .dynsym .dynstr .hash .gnu.hash)
*(.gnu.attributes)
*(.options)
*(.eh_frame)
--
2.37.3


2023-02-17 07:27:15

by Youling Tang

Subject: [PATCH v5 5/5] LoongArch: Add support for kernel address space layout randomization (KASLR)

This patch adds support for relocating the kernel to a random address.

Entropy is derived from the kernel banner, which changes with every build,
and from random_get_entropy(), which should provide additional runtime
entropy.

The kernel is relocated by up to RANDOMIZE_BASE_MAX_OFFSET bytes from
its link address. Because relocation happens so early in the kernel boot,
the amount of physical memory has not yet been determined. This means
the only way to limit relocation within the available memory is via
Kconfig. Limit the maximum value of RANDOMIZE_BASE_MAX_OFFSET to
256M (0x10000000) because our memory layout has many holes.

Signed-off-by: Youling Tang <[email protected]>
Signed-off-by: Xi Ruoyao <[email protected]> # Fix compiler warnings
---
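Note (illustration only, not the exact kernel code): a simplified user-space
model of how determine_relocation_address() bounds the random offset. The
entropy hash is shifted left by 16 bits so the relocated base stays 64KiB
aligned, masked into [0, RANDOMIZE_BASE_MAX_OFFSET), and pushed past the
image size so the copied kernel cannot overlap the running one. The hash and
image size below are made-up values.

#include <stdint.h>
#include <stdio.h>

#define RANDOMIZE_BASE_MAX_OFFSET	0x01000000UL	/* Kconfig default */

static unsigned long pick_offset(unsigned long hash, unsigned long kernel_length)
{
	unsigned long offset;

	offset = hash << 16;				/* keep 64KiB alignment */
	offset &= RANDOMIZE_BASE_MAX_OFFSET - 1;	/* bound the slide */
	if (offset < kernel_length)			/* avoid overlapping _text.._end */
		offset += (kernel_length + 0xffff) & ~0xffffUL;

	return offset;
}

int main(void)
{
	printf("offset = %#lx\n", pick_offset(0xdeadbeefUL, 0x800000UL));
	return 0;
}
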
arch/loongarch/Kconfig | 23 +++++
arch/loongarch/kernel/head.S | 14 ++-
arch/loongarch/kernel/relocate.c | 142 ++++++++++++++++++++++++++++++-
3 files changed, 175 insertions(+), 4 deletions(-)

diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index 406a28758a52..ab4c2ab146ab 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -530,6 +530,29 @@ config RELOCATABLE
kernel binary at runtime to a different virtual address than the
address it was linked at.

+config RANDOMIZE_BASE
+ bool "Randomize the address of the kernel image (KASLR)"
+ depends on RELOCATABLE
+ help
+ Randomizes the physical and virtual address at which the
+ kernel image is loaded, as a security feature that
+ deters exploit attempts relying on knowledge of the location
+ of kernel internals.
+
+ The kernel will be offset by up to RANDOMIZE_BASE_MAX_OFFSET.
+
+ If unsure, say N.
+
+config RANDOMIZE_BASE_MAX_OFFSET
+ hex "Maximum KASLR offset" if EXPERT
+ depends on RANDOMIZE_BASE
+ range 0x0 0x10000000 if 64BIT
+ default "0x01000000"
+ help
+ When KASLR is active, this provides the maximum offset that will
+ be applied to the kernel image.
+
+
config SECCOMP
bool "Enable seccomp to safely compute untrusted bytecode"
depends on PROC_FS
diff --git a/arch/loongarch/kernel/head.S b/arch/loongarch/kernel/head.S
index 499edc80d8ab..b12f459ad73a 100644
--- a/arch/loongarch/kernel/head.S
+++ b/arch/loongarch/kernel/head.S
@@ -87,10 +87,22 @@ SYM_CODE_START(kernel_entry) # kernel entry point
set_saved_sp sp, t0, t1

#ifdef CONFIG_RELOCATABLE
+#ifdef CONFIG_RANDOMIZE_BASE
+ bl do_kaslr
+
+ /* Repoint the sp into the new kernel image */
+ PTR_LI sp, (_THREAD_SIZE - PT_SIZE)
+ PTR_ADD sp, sp, tp
+ set_saved_sp sp, t0, t1
+
+ /* do_kaslr returns the new kernel image entry point */
+ jr a0
+ ASM_BUG()
+#else
/* Apply the relocations */
bl relocate_kernel
#endif
-
+#endif
bl start_kernel
ASM_BUG()

diff --git a/arch/loongarch/kernel/relocate.c b/arch/loongarch/kernel/relocate.c
index 6f31753be1da..acdac7380e4a 100644
--- a/arch/loongarch/kernel/relocate.c
+++ b/arch/loongarch/kernel/relocate.c
@@ -9,11 +9,15 @@
#include <linux/kernel.h>
#include <linux/printk.h>
#include <linux/panic_notifier.h>
+#include <linux/start_kernel.h>
+#include <asm/bootinfo.h>
+#include <asm/early_ioremap.h>
#include <asm/inst.h>
#include <asm/sections.h>
#include <asm/setup.h>

#define RELOCATED(x) ((void *)((long)x + reloc_offset))
+#define RELOCATED_KASLR(x) ((void *)((long)x + offset))

static unsigned long reloc_offset;

@@ -38,13 +42,13 @@ static inline __init void relocate_relative(void)
}
}

-static inline void __init relocate_la_abs(void)
+static inline void __init relocate_la_abs(long offset)
{
struct rela_la_abs *p;

for (p = (void *)&__la_abs_begin; (void *)p < (void *)&__la_abs_end; p++) {
long v = p->symvalue + reloc_offset;
- union loongarch_instruction *insn = (void *)p - p->offset;
+ union loongarch_instruction *insn = (void *)p - p->offset + offset;
u32 lu12iw, ori, lu32id, lu52id;

lu12iw = (v >> 12) & 0xfffff;
@@ -59,6 +63,138 @@ static inline void __init relocate_la_abs(void)
}
}

+#ifdef CONFIG_RANDOMIZE_BASE
+static inline __init unsigned long rotate_xor(unsigned long hash,
+ const void *area, size_t size)
+{
+ size_t i;
+ unsigned long *ptr = (unsigned long *)area;
+
+ for (i = 0; i < size / sizeof(hash); i++) {
+ /* Rotate by odd number of bits and XOR. */
+ hash = (hash << ((sizeof(hash) * 8) - 7)) | (hash >> 7);
+ hash ^= ptr[i];
+ }
+
+ return hash;
+}
+
+static inline __init unsigned long get_random_boot(void)
+{
+ unsigned long entropy = random_get_entropy();
+ unsigned long hash = 0;
+
+ /* Attempt to create a simple but unpredictable starting entropy. */
+ hash = rotate_xor(hash, linux_banner, strlen(linux_banner));
+
+ /* Add in any runtime entropy we can get */
+ hash = rotate_xor(hash, &entropy, sizeof(entropy));
+
+ return hash;
+}
+
+static inline __init bool kaslr_disabled(void)
+{
+ char *str;
+
+ str = strstr(boot_command_line, "nokaslr");
+ if (str == boot_command_line || (str > boot_command_line && *(str - 1) == ' '))
+ return true;
+
+ return false;
+}
+
+/* Choose a new address for the kernel */
+static inline void __init *determine_relocation_address(void)
+{
+ unsigned long kernel_length;
+ void *dest = _text;
+ unsigned long offset;
+
+ if (kaslr_disabled())
+ return dest;
+
+ kernel_length = (long)_end - (long)_text;
+
+ offset = get_random_boot() << 16;
+ offset &= (CONFIG_RANDOMIZE_BASE_MAX_OFFSET - 1);
+ if (offset < kernel_length)
+ offset += ALIGN(kernel_length, 0xffff);
+
+ return RELOCATED_KASLR(dest);
+}
+
+static inline int __init relocation_addr_valid(void *loc_new)
+{
+ if ((unsigned long)loc_new & 0x00000ffff) {
+ /* Inappropriately aligned new location */
+ return 0;
+ }
+ if ((unsigned long)loc_new < (unsigned long)_end) {
+ /* New location overlaps original kernel */
+ return 0;
+ }
+ return 1;
+}
+
+static inline void __init update_reloc_offset(unsigned long *addr, long offset)
+{
+ unsigned long *new_addr = (unsigned long *)RELOCATED_KASLR(addr);
+
+ *new_addr = (unsigned long)offset;
+}
+
+void *__init do_kaslr(void)
+{
+ void *loc_new;
+ unsigned long kernel_length;
+ long offset = 0;
+ /* Default to original kernel entry point */
+ void *kernel_entry = start_kernel;
+ char *cmdline = early_ioremap(fw_arg1, COMMAND_LINE_SIZE);
+
+ /* Boot command line was passed in fw_arg1 */
+ strscpy(boot_command_line, cmdline, COMMAND_LINE_SIZE);
+
+ kernel_length = (long)(_end) - (long)(_text);
+
+ loc_new = determine_relocation_address();
+
+ /* Sanity check relocation address */
+ if (relocation_addr_valid(loc_new))
+ offset = (unsigned long)loc_new - (unsigned long)(_text);
+
+ reloc_offset = (unsigned long)_text - VMLINUX_LOAD_ADDRESS;
+
+ if (offset) {
+ /* Copy the kernel to it's new location */
+ memcpy(loc_new, _text, kernel_length);
+
+ /* Sync the caches ready for execution of new kernel */
+ __asm__ __volatile__ (
+ "ibar 0 \t\n"
+ "dbar 0 \t\n");
+
+ reloc_offset += offset;
+
+ /* The current thread is now within the relocated image */
+ __current_thread_info = RELOCATED_KASLR(__current_thread_info);
+
+ /* Return the new kernel's entry point */
+ kernel_entry = RELOCATED_KASLR(start_kernel);
+
+ update_reloc_offset(&reloc_offset, offset);
+ }
+
+ if (reloc_offset)
+ relocate_relative();
+
+ relocate_la_abs(offset);
+
+ return kernel_entry;
+}
+#endif
+
void __init relocate_kernel(void)
{
reloc_offset = (unsigned long)_text - VMLINUX_LOAD_ADDRESS;
@@ -66,7 +202,7 @@ void __init relocate_kernel(void)
if (reloc_offset)
relocate_relative();

- relocate_la_abs();
+ relocate_la_abs(0);
}

/*
--
2.37.3


2023-02-17 08:03:12

by Xi Ruoyao

Subject: Re: [PATCH v5 4/5] LoongArch: Add support for kernel relocation

On Fri, 2023-02-17 at 15:26 +0800, Youling Tang wrote:

> This config allows the kernel to be compiled as a PIE (Position
> Independent Executable) and relocated to any virtual address at
> runtime: this paves the way for KASLR. Runtime relocation is possible
> because the relocation metadata is embedded into the kernel image.
>
> Signed-off-by: Youling Tang <[email protected]>
> Signed-off-by: Xi Ruoyao <[email protected]> # Use arch_initcall
> Signed-off-by: Jinyang He <[email protected]> # Provide la_abs idea

Use Suggested-by if only "idea" is provided.

> +struct rela_la_abs {
> +               long offset;
> +               long symvalue;
> +};

Use one tab instead of two for the indent.

Otherwise LGTM.

--
Xi Ruoyao <[email protected]>
School of Aerospace Science and Technology, Xidian University

2023-02-17 08:13:49

by Youling Tang

Subject: Re: [PATCH v5 4/5] LoongArch: Add support for kernel relocation



On 02/17/2023 04:02 PM, Xi Ruoyao wrote:
> On Fri, 2023-02-17 at 15:26 +0800, Youling Tang wrote:
>
>> This config allows the kernel to be compiled as a PIE (Position
>> Independent Executable) and relocated to any virtual address at
>> runtime: this paves the way for KASLR. Runtime relocation is possible
>> because the relocation metadata is embedded into the kernel image.
>>
>> Signed-off-by: Youling Tang <[email protected]>
>> Signed-off-by: Xi Ruoyao <[email protected]> # Use arch_initcall
>> Signed-off-by: Jinyang He <[email protected]> # Provide la_abs idea
>
> Use Suggested-by if only "idea" is provided.
In addition to the idea, the first draft of the la_abs code was also
provided.

Maybe change it to the following:
Signed-off-by: Jinyang He <[email protected]> # Provide la_abs
relocation draft code

>
>> +struct rela_la_abs {
>> + long offset;
>> + long symvalue;
>> +};
>
> Use one tab instead of two for the indent.
My mistake.

I will also add the missing do_kaslr declaration.

Youling.
> Otherwise LGTM.
>