After multiple attempts, this patchset is now based on the fact that the
64b kernel mapping was moved outside the linear mapping.
The first patch allows building relocatable kernels, but the option is not
selected by default. That patch should make a KASLR implementation much easier.
The second and third patches take advantage of the already existing powerpc
script that checks relocations at compile time, and reuse it for riscv.
This patchset was tested on:
* qemu riscv64 defconfig: OK
* Unmatched ubuntu config: OK
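For reference, the qemu test above amounts to something like the following
(the toolchain prefix and exact QEMU invocation are assumptions, not taken
from this cover letter; QEMU's bundled OpenSBI firmware is used as the
default -bios, and CONFIG_RELOCATABLE has no prompt in this version so it
has to be selected by another Kconfig option):

  make ARCH=riscv CROSS_COMPILE=riscv64-linux-gnu- defconfig
  make ARCH=riscv CROSS_COMPILE=riscv64-linux-gnu- -j$(nproc)
  qemu-system-riscv64 -M virt -m 2G -nographic \
        -kernel arch/riscv/boot/Image -append "console=ttyS0"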
Changes in v7:
* Rebase on top of v5.15
* Fix LDFLAGS_vmlinux, which was overridden when CONFIG_DYNAMIC_FTRACE
  was set
* Make relocate_kernel static
* Add Ack from Michael
Changes in v6:
* Remove the kernel move to vmalloc zone
* Rebased on top of for-next
* Remove the relocatable property from the 32b kernel, as the kernel is
  mapped in the linear mapping and would then need to be physically copied too
* CONFIG_RELOCATABLE depends on !XIP_KERNEL
* Remove Reviewed-by from first patch as it changed a bit
Changes in v5:
* Add "static __init" to create_kernel_page_table function as reported by
Kbuild test robot
* Add reviewed-by from Zong
* Rebase onto v5.7
Changes in v4:
* Fix the BPF region that overlapped with the kernel's, as suggested by Zong
* Fix the end of the module region, which could be larger than 2GB, as
  suggested by Zong
* Fix the size of the vm area reserved for the kernel as we could lose
PMD_SIZE if the size was already aligned on PMD_SIZE
* Split compile time relocations check patch into 2 patches as suggested by Anup
* Applied Reviewed-by from Zong and Anup
Changes in v3:
* Move kernel mapping to vmalloc
Changes in v2:
* Make RELOCATABLE depend on MMU as suggested by Anup
* Rename kernel_load_addr into kernel_virt_addr as suggested by Anup
* Use __pa_symbol instead of __pa, as suggested by Zong
* Rebased on top of v5.6-rc3
* Tested with sv48 patchset
* Add Reviewed/Tested-by from Zong and Anup
Alexandre Ghiti (3):
riscv: Introduce CONFIG_RELOCATABLE
powerpc: Move script to check relocations at compile time in scripts/
riscv: Check relocations at compile time
arch/powerpc/tools/relocs_check.sh | 18 ++--------
arch/riscv/Kconfig | 12 +++++++
arch/riscv/Makefile | 7 ++--
arch/riscv/Makefile.postlink | 36 ++++++++++++++++++++
arch/riscv/kernel/vmlinux.lds.S | 6 ++++
arch/riscv/mm/Makefile | 4 +++
arch/riscv/mm/init.c | 54 +++++++++++++++++++++++++++++-
arch/riscv/tools/relocs_check.sh | 26 ++++++++++++++
scripts/relocs_check.sh | 20 +++++++++++
9 files changed, 164 insertions(+), 19 deletions(-)
create mode 100644 arch/riscv/Makefile.postlink
create mode 100755 arch/riscv/tools/relocs_check.sh
create mode 100755 scripts/relocs_check.sh
--
2.30.2
From: Alexandre Ghiti <[email protected]>
Relocating the kernel at runtime is done very early in the boot process,
so it is not convenient to check for relocations there and react if an
unexpected relocation is found.
The powerpc architecture has a script that checks for such unexpected
relocations at compile time: extract the common logic to scripts/ so that
other architectures can take advantage of it.
Signed-off-by: Alexandre Ghiti <[email protected]>
Reviewed-by: Anup Patel <[email protected]>
Acked-by: Michael Ellerman <[email protected]> (powerpc)
---
arch/powerpc/tools/relocs_check.sh | 18 ++----------------
scripts/relocs_check.sh | 20 ++++++++++++++++++++
2 files changed, 22 insertions(+), 16 deletions(-)
create mode 100755 scripts/relocs_check.sh
diff --git a/arch/powerpc/tools/relocs_check.sh b/arch/powerpc/tools/relocs_check.sh
index 014e00e74d2b..e367895941ae 100755
--- a/arch/powerpc/tools/relocs_check.sh
+++ b/arch/powerpc/tools/relocs_check.sh
@@ -15,21 +15,8 @@ if [ $# -lt 3 ]; then
exit 1
fi
-# Have Kbuild supply the path to objdump and nm so we handle cross compilation.
-objdump="$1"
-nm="$2"
-vmlinux="$3"
-
-# Remove from the bad relocations those that match an undefined weak symbol
-# which will result in an absolute relocation to 0.
-# Weak unresolved symbols are of that form in nm output:
-# " w _binary__btf_vmlinux_bin_end"
-undef_weak_symbols=$($nm "$vmlinux" | awk '$1 ~ /w/ { print $2 }')
-
bad_relocs=$(
-$objdump -R "$vmlinux" |
- # Only look at relocation lines.
- grep -E '\<R_' |
+${srctree}/scripts/relocs_check.sh "$@" |
# These relocations are okay
# On PPC64:
# R_PPC64_RELATIVE, R_PPC64_NONE
@@ -43,8 +30,7 @@ R_PPC_ADDR16_LO
R_PPC_ADDR16_HI
R_PPC_ADDR16_HA
R_PPC_RELATIVE
-R_PPC_NONE' |
- ([ "$undef_weak_symbols" ] && grep -F -w -v "$undef_weak_symbols" || cat)
+R_PPC_NONE'
)
if [ -z "$bad_relocs" ]; then
diff --git a/scripts/relocs_check.sh b/scripts/relocs_check.sh
new file mode 100755
index 000000000000..137c660499f3
--- /dev/null
+++ b/scripts/relocs_check.sh
@@ -0,0 +1,20 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0-or-later
+
+# Get a list of all the relocations, remove from it the relocations
+# that are known to be legitimate and return this list to arch specific
+# script that will look for suspicious relocations.
+
+objdump="$1"
+nm="$2"
+vmlinux="$3"
+
+# Remove from the possible bad relocations those that match an undefined
+# weak symbol which will result in an absolute relocation to 0.
+# Weak unresolved symbols are of that form in nm output:
+# " w _binary__btf_vmlinux_bin_end"
+undef_weak_symbols=$($nm "$vmlinux" | awk '$1 ~ /w/ { print $2 }')
+
+$objdump -R "$vmlinux" |
+ grep -E '\<R_' |
+ ([ "$undef_weak_symbols" ] && grep -F -w -v "$undef_weak_symbols" || cat)
--
2.30.2
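For illustration, the shared helper can also be run by hand: it dumps the
dynamic relocations of vmlinux (via objdump -R), keeps only the R_* lines
and drops those matching undefined weak symbols, leaving it to the arch
script to decide which relocation types are acceptable. The cross-toolchain
binary names below are assumptions, not part of this patch:

  $ sh scripts/relocs_check.sh \
        riscv64-linux-gnu-objdump riscv64-linux-gnu-nm vmlinux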
From: Alexandre Ghiti <[email protected]>
Relocating the kernel at runtime is done very early in the boot process,
so it is not convenient to check for relocations there and react if an
unexpected relocation is found.
There is now a script in scripts/ that extracts the relocations from
vmlinux: use it at postlink time to check the riscv relocations.
Signed-off-by: Alexandre Ghiti <[email protected]>
Reviewed-by: Anup Patel <[email protected]>
---
arch/riscv/Makefile.postlink | 36 ++++++++++++++++++++++++++++++++
arch/riscv/tools/relocs_check.sh | 26 +++++++++++++++++++++++
2 files changed, 62 insertions(+)
create mode 100644 arch/riscv/Makefile.postlink
create mode 100755 arch/riscv/tools/relocs_check.sh
diff --git a/arch/riscv/Makefile.postlink b/arch/riscv/Makefile.postlink
new file mode 100644
index 000000000000..bf2b2bca1845
--- /dev/null
+++ b/arch/riscv/Makefile.postlink
@@ -0,0 +1,36 @@
+# SPDX-License-Identifier: GPL-2.0
+# ===========================================================================
+# Post-link riscv pass
+# ===========================================================================
+#
+# Check that vmlinux relocations look sane
+
+PHONY := __archpost
+__archpost:
+
+-include include/config/auto.conf
+include scripts/Kbuild.include
+
+quiet_cmd_relocs_check = CHKREL $@
+cmd_relocs_check = \
+ $(CONFIG_SHELL) $(srctree)/arch/riscv/tools/relocs_check.sh "$(OBJDUMP)" "$(NM)" "$@"
+
+# `@true` prevents complaint when there is nothing to be done
+
+vmlinux: FORCE
+ @true
+ifdef CONFIG_RELOCATABLE
+ $(call if_changed,relocs_check)
+endif
+
+%.ko: FORCE
+ @true
+
+clean:
+ @true
+
+PHONY += FORCE clean
+
+FORCE:
+
+.PHONY: $(PHONY)
diff --git a/arch/riscv/tools/relocs_check.sh b/arch/riscv/tools/relocs_check.sh
new file mode 100755
index 000000000000..baeb2e7b2290
--- /dev/null
+++ b/arch/riscv/tools/relocs_check.sh
@@ -0,0 +1,26 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0-or-later
+# Based on powerpc relocs_check.sh
+
+# This script checks the relocations of a vmlinux for "suspicious"
+# relocations.
+
+if [ $# -lt 3 ]; then
+ echo "$0 [path to objdump] [path to nm] [path to vmlinux]" 1>&2
+ exit 1
+fi
+
+bad_relocs=$(
+${srctree}/scripts/relocs_check.sh "$@" |
+ # These relocations are okay
+ # R_RISCV_RELATIVE
+ grep -F -w -v 'R_RISCV_RELATIVE'
+)
+
+if [ -z "$bad_relocs" ]; then
+ exit 0
+fi
+
+num_bad=$(echo "$bad_relocs" | wc -l)
+echo "WARNING: $num_bad bad relocations"
+echo "$bad_relocs"
--
2.30.2
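As a usage sketch (binary names assumed, not from this patch), the riscv
checker can be invoked outside the build the same way Makefile.postlink
does, provided srctree points at the kernel source tree:

  $ srctree=$PWD sh arch/riscv/tools/relocs_check.sh \
        riscv64-linux-gnu-objdump riscv64-linux-gnu-nm vmlinux

It exits silently when only R_RISCV_RELATIVE entries remain, and otherwise
prints "WARNING: <n> bad relocations" followed by the offending objdump
lines.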
From: Alexandre Ghiti <[email protected]>
This config allows the 64b kernel to be compiled as PIE and relocated at
any virtual address at runtime: this paves the way for KASLR.
Runtime relocation is possible since the relocation metadata are embedded
into the kernel.
Note that relocating at runtime introduces an overhead even if the kernel
is loaded at the same address it was linked at, and that the compiler
options are those used by arm64, which uses the same RELA relocation
format.
Signed-off-by: Alexandre Ghiti <[email protected]>
---
arch/riscv/Kconfig | 12 ++++++++
arch/riscv/Makefile | 7 +++--
arch/riscv/kernel/vmlinux.lds.S | 6 ++++
arch/riscv/mm/Makefile | 4 +++
arch/riscv/mm/init.c | 54 ++++++++++++++++++++++++++++++++-
5 files changed, 80 insertions(+), 3 deletions(-)
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index ea16fa2dd768..043ba92559fa 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -213,6 +213,18 @@ config PGTABLE_LEVELS
config LOCKDEP_SUPPORT
def_bool y
+config RELOCATABLE
+ bool
+ depends on MMU && 64BIT && !XIP_KERNEL
+ help
+ This builds a kernel as a Position Independent Executable (PIE),
+ which retains all relocation metadata required to relocate the
+ kernel binary at runtime to a different virtual address than the
+ address it was linked at.
+ Since RISCV uses the RELA relocation format, this requires a
+ relocation pass at runtime even if the kernel is loaded at the
+ same address it was linked at.
+
source "arch/riscv/Kconfig.socs"
source "arch/riscv/Kconfig.erratas"
diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
index 0eb4568fbd29..2f509915f246 100644
--- a/arch/riscv/Makefile
+++ b/arch/riscv/Makefile
@@ -9,9 +9,12 @@
#
OBJCOPYFLAGS := -O binary
-LDFLAGS_vmlinux :=
+ifeq ($(CONFIG_RELOCATABLE),y)
+ LDFLAGS_vmlinux += -shared -Bsymbolic -z notext -z norelro
+ KBUILD_CFLAGS += -fPIE
+endif
ifeq ($(CONFIG_DYNAMIC_FTRACE),y)
- LDFLAGS_vmlinux := --no-relax
+ LDFLAGS_vmlinux += --no-relax
KBUILD_CPPFLAGS += -DCC_USING_PATCHABLE_FUNCTION_ENTRY
CC_FLAGS_FTRACE := -fpatchable-function-entry=8
endif
diff --git a/arch/riscv/kernel/vmlinux.lds.S b/arch/riscv/kernel/vmlinux.lds.S
index 5104f3a871e3..862a8c09723c 100644
--- a/arch/riscv/kernel/vmlinux.lds.S
+++ b/arch/riscv/kernel/vmlinux.lds.S
@@ -133,6 +133,12 @@ SECTIONS
BSS_SECTION(PAGE_SIZE, PAGE_SIZE, 0)
+ .rela.dyn : ALIGN(8) {
+ __rela_dyn_start = .;
+ *(.rela .rela*)
+ __rela_dyn_end = .;
+ }
+
#ifdef CONFIG_EFI
. = ALIGN(PECOFF_SECTION_ALIGNMENT);
__pecoff_data_virt_size = ABSOLUTE(. - __pecoff_text_end);
diff --git a/arch/riscv/mm/Makefile b/arch/riscv/mm/Makefile
index 7ebaef10ea1b..2d33ec574bbb 100644
--- a/arch/riscv/mm/Makefile
+++ b/arch/riscv/mm/Makefile
@@ -1,6 +1,10 @@
# SPDX-License-Identifier: GPL-2.0-only
CFLAGS_init.o := -mcmodel=medany
+ifdef CONFIG_RELOCATABLE
+CFLAGS_init.o += -fno-pie
+endif
+
ifdef CONFIG_FTRACE
CFLAGS_REMOVE_init.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_cacheflush.o = $(CC_FLAGS_FTRACE)
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index c0cddf0fc22d..42041c12d496 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -20,6 +20,9 @@
#include <linux/dma-map-ops.h>
#include <linux/crash_dump.h>
#include <linux/hugetlb.h>
+#ifdef CONFIG_RELOCATABLE
+#include <linux/elf.h>
+#endif
#include <asm/fixmap.h>
#include <asm/tlbflush.h>
@@ -103,7 +106,7 @@ static void __init print_vm_layout(void)
print_mlm("lowmem", (unsigned long)PAGE_OFFSET,
(unsigned long)high_memory);
#ifdef CONFIG_64BIT
- print_mlm("kernel", (unsigned long)KERNEL_LINK_ADDR,
+ print_mlm("kernel", (unsigned long)kernel_map.virt_addr,
(unsigned long)ADDRESS_SPACE_END);
#endif
}
@@ -518,6 +521,44 @@ static __init pgprot_t pgprot_from_va(uintptr_t va)
#error "setup_vm() is called from head.S before relocate so it should not use absolute addressing."
#endif
+#ifdef CONFIG_RELOCATABLE
+extern unsigned long __rela_dyn_start, __rela_dyn_end;
+
+static void __init relocate_kernel(void)
+{
+ Elf64_Rela *rela = (Elf64_Rela *)&__rela_dyn_start;
+ /*
+ * This holds the offset between the linked virtual address and the
+ * relocated virtual address.
+ */
+ uintptr_t reloc_offset = kernel_map.virt_addr - KERNEL_LINK_ADDR;
+ /*
+ * This holds the offset between kernel linked virtual address and
+ * physical address.
+ */
+ uintptr_t va_kernel_link_pa_offset = KERNEL_LINK_ADDR - kernel_map.phys_addr;
+
+ for ( ; rela < (Elf64_Rela *)&__rela_dyn_end; rela++) {
+ Elf64_Addr addr = (rela->r_offset - va_kernel_link_pa_offset);
+ Elf64_Addr relocated_addr = rela->r_addend;
+
+ if (rela->r_info != R_RISCV_RELATIVE)
+ continue;
+
+ /*
+ * Make sure to not relocate vdso symbols like rt_sigreturn
+ * which are linked from the address 0 in vmlinux since
+ * vdso symbol addresses are actually used as an offset from
+ * mm->context.vdso in VDSO_OFFSET macro.
+ */
+ if (relocated_addr >= KERNEL_LINK_ADDR)
+ relocated_addr += reloc_offset;
+
+ *(Elf64_Addr *)addr = relocated_addr;
+ }
+}
+#endif /* CONFIG_RELOCATABLE */
+
#ifdef CONFIG_XIP_KERNEL
static void __init create_kernel_page_table(pgd_t *pgdir,
__always_unused bool early)
@@ -625,6 +666,17 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
BUG_ON((kernel_map.virt_addr + kernel_map.size) > ADDRESS_SPACE_END - SZ_4K);
#endif
+#ifdef CONFIG_RELOCATABLE
+ /*
+ * Early page table uses only one PGDIR, which makes it possible
+ * to map PGDIR_SIZE aligned on PGDIR_SIZE: if the relocation offset
+ * makes the kernel cross over a PGDIR_SIZE boundary, raise a bug
+ * since a part of the kernel would not get mapped.
+ */
+ BUG_ON(PGDIR_SIZE - (kernel_map.virt_addr & (PGDIR_SIZE - 1)) < kernel_map.size);
+ relocate_kernel();
+#endif
+
pt_ops.alloc_pte = alloc_pte_early;
pt_ops.get_pte_virt = get_pte_virt_early;
#ifndef __PAGETABLE_PMD_FOLDED
--
2.30.2
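To make the relocation pass above more concrete, here is a minimal
userspace sketch of how an R_RISCV_RELATIVE entry is consumed: the value
written at r_offset is simply r_addend plus the offset between the runtime
and link-time base addresses. The addresses are made up for illustration
and the vDSO special case of relocate_kernel() is omitted; this is not the
kernel code itself.

  #include <elf.h>
  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint64_t image[4] = { 0 };      /* stand-in for the mapped kernel image */
          uintptr_t link_addr = 0xffffffff80000000UL;  /* hypothetical link address */
          uintptr_t load_addr = 0xffffffffa0000000UL;  /* hypothetical runtime address */
          uintptr_t reloc_offset = load_addr - link_addr;

          /* One fake R_RISCV_RELATIVE entry: the slot at link_addr + 16 must
           * end up holding a pointer to what was link_addr + 0x100 at link time. */
          Elf64_Rela rela = {
                  .r_offset = link_addr + 16,
                  .r_info   = R_RISCV_RELATIVE,
                  .r_addend = (Elf64_Sxword)(link_addr + 0x100),
          };

          /* Translate the link-time r_offset into something we can write to:
           * the kernel uses its VA->PA offset for this, the sketch uses the array. */
          uint64_t *where = &image[(rela.r_offset - link_addr) / sizeof(uint64_t)];

          if (ELF64_R_TYPE(rela.r_info) == R_RISCV_RELATIVE)
                  *where = rela.r_addend + reloc_offset;

          printf("slot patched to 0x%" PRIx64 "\n", *where);
          return 0;
  }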
Arf, I have sent this patchset with the wrong email address. @Palmer
tell me if you want me to resend it correctly.
Thanks,
Alex
On Sat, 09 Oct 2021 10:20:20 PDT (-0700), [email protected] wrote:
> Arf, I have sent this patchset with the wrong email address. @Palmer
> tell me if you want me to resend it correctly.
Sorry for being kind of slow here. It's fine: there's a "From:" in the
patch, and git picks those up so it'll match the signed-off-by line. I
send pretty much all my patches that way, as I never managed to get my
Google address working correctly.
>
> Thanks,
>
> Alex
>
> On 10/9/21 7:12 PM, Alexandre Ghiti wrote:
>> From: Alexandre Ghiti <[email protected]>
>>
>> This config allows to compile 64b kernel as PIE and to relocate it at
>> any virtual address at runtime: this paves the way to KASLR.
>> Runtime relocation is possible since relocation metadata are embedded into
>> the kernel.
IMO this should really be user selectable, at a bare minimum so it's testable.
I just sent along a patch to do that (my power's off at home, so email is a bit
wacky right now).
I haven't put this on for-next yet as I'm not sure if you had a fix for the
kasan issue (which IIUC would conflict with this).
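For illustration, making the option user selectable is mostly a matter of
giving the bool a prompt; a minimal sketch (the prompt text is hypothetical
and Palmer's actual follow-up patch may differ):

  config RELOCATABLE
          bool "Build a relocatable kernel"
          depends on MMU && 64BIT && !XIP_KERNEL
          help
            Same help text as in the patch above; the prompt string is the
            only change and makes the option visible in menuconfig instead
            of relying on another option to select it.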
Hi Palmer,
On 10/26/21 11:29 PM, Palmer Dabbelt wrote:
> On Sat, 09 Oct 2021 10:20:20 PDT (-0700), [email protected] wrote:
>> Arf, I have sent this patchset with the wrong email address. @Palmer
>> tell me if you want me to resend it correctly.
>
> Sorry for being kind of slow here. It's fine: there's a "From:" in
> the patch, and git picks those up so it'll match the signed-off-by
> line. I send pretty much all my patches that way, as I never managed
> to get my Google address working correctly.
>
>>
>> Thanks,
>>
>> Alex
>>
>> On 10/9/21 7:12 PM, Alexandre Ghiti wrote:
>>> From: Alexandre Ghiti <[email protected]>
>>>
>>> This config allows to compile 64b kernel as PIE and to relocate it at
>>> any virtual address at runtime: this paves the way to KASLR.
>>> Runtime relocation is possible since relocation metadata are
>>> embedded into
>>> the kernel.
>
> IMO this should really be user selectable, at a bare minimum so it's
> testable.
> I just sent along a patch to do that (my power's off at home, so email
> is a bit
> wacky right now).
>
> I haven't put this on for-next yet as I'm not sure if you had a fix
> for the
> kasan issue (which IIUC would conflict with this).
The kasan issue only revealed that I need to move the kasan shadow
memory around with sv48 support; it is not related to the relocatable
kernel.
Thanks,
Alex
@Palmer, can I do anything for that to be pulled in 5.17?
Thanks,
Alex
Hi Palmer,
Do you think this could go in for-next?
Thanks,
Alex
Hi Palmer,
Do you intend to pull that in for-next or not yet? Can I do something to help?
Thanks,
Alex