2018-07-02 18:16:19

by Ard Biesheuvel

Subject: [PATCH v2 0/8] add support for relative references in jump tables

This series implements support for emitting the data structures associated
with jump tables as 32-bit relative references instead of absolute
references, which take up more space on builds that target 64-bit
architectures or implement self-relocation [or both].

This series enables it for arm64 and x86, although other architectures
might benefit as well.
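
To make the trade-off concrete, here is a minimal userspace sketch of
the two layouts (types simplified; the real definitions are introduced
in patch #2). An absolute entry stores three full pointers, each of
which needs a RELA record on relocatable builds, while a relative entry
stores self-relative offsets that survive relocation unchanged:

  #include <stdint.h>

  struct jump_entry_abs {            /* absolute: 24 bytes on 64-bit */
          uint64_t code, target, key;
  };

  struct jump_entry_rel {            /* relative: 16 bytes */
          int32_t code;              /* patch site  - &entry->code   */
          int32_t target;            /* jump target - &entry->target */
          int64_t key;               /* static key  - &entry->key    */
  };

  /* an offset taken from the field's own address resolves back to the
   * same absolute address no matter where the image is loaded */
  static inline uintptr_t rel_code(const struct jump_entry_rel *e)
  {
          return (uintptr_t)&e->code + e->code;
  }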

Patch #1 does some preparatory refactoring before patch #2 introduces the
generic pieces required for using relative references.

Patch #3 wires everything up for arm64.

Patch #4 introduces support for handling 64-bit place relative relocations
on x86_64 (see 'Changes since v1' below).

For x86, patch #5 applies some preparatory changes to the arch-specific
jump label C code, which is considerably more involved than on arm64;
that is why it is split off here. Patch #6 then wires it up for x86 as well.

Patches #7 and #8 move the jump_entry arrays into ro_after_init memory
rather than leaving them fully writable all of the time.

Changes since v1:
- change the relative reference to the static key to a 64-bit wide one on
64-bit architectures; this is necessary on arm64, which allows modules to
reside anywhere within a 4 GB window covering the core kernel text, so a
32-bit signed quantity with its +/- 2 GB range is insufficient.
Note that changes that widen the relocation range are in preparation
for x86_64 as well (using the PIE linker), so I assumed the same change
is appropriate for x86 too.
- add patch #4 to handle the relocations emitted by the compiler as a result
of the change above
- add patches to move the jump_entry arrays to ro_after_init memory, so
that they are not as easily corrupted or manipulated.
- add Will's ack to patch #3

Cc: Arnd Bergmann <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Jessica Yu <[email protected]>
Cc: Peter Zijlstra <[email protected]>

Ard Biesheuvel (8):
kernel/jump_label: abstract jump_entry member accessors
kernel/jump_label: implement generic support for relative references
arm64/kernel: jump_label: switch to relative references
x86: add support for 64-bit place relative relocations
x86: jump_label: switch to jump_entry accessors
x86/kernel: jump_table: use relative references
jump_label: annotate entries that operate on __init code earlier
jump_table: move entries into ro_after_init region

 arch/Kconfig                        |   3 +
 arch/arm/kernel/vmlinux-xip.lds.S   |   1 +
 arch/arm64/Kconfig                  |   1 +
 arch/arm64/include/asm/jump_label.h |  36 ++++---
 arch/arm64/kernel/jump_label.c      |   6 +-
 arch/s390/kernel/vmlinux.lds.S      |   1 +
 arch/x86/Kconfig                    |   1 +
 arch/x86/include/asm/elf.h          |   1 +
 arch/x86/include/asm/jump_label.h   |  24 ++---
 arch/x86/kernel/jump_label.c        |  62 +++++-------
 arch/x86/kernel/machine_kexec_64.c  |   4 +
 arch/x86/kernel/module.c            |   6 ++
 arch/x86/tools/relocs.c             |  10 ++
 include/asm-generic/vmlinux.lds.h   |  11 ++-
 include/linux/jump_label.h          |  65 ++++++++++++-
 init/main.c                         |   1 -
 kernel/jump_label.c                 | 100 +++++++++-----------
 kernel/module.c                     |   9 ++
 tools/objtool/special.c             |   4 +-
 19 files changed, 205 insertions(+), 141 deletions(-)

--
2.17.1



2018-07-02 18:14:53

by Ard Biesheuvel

Subject: [PATCH v2 4/8] x86: add support for 64-bit place relative relocations

Add support for R_X86_64_PC64 relocations, which operate on 64-bit
quantities holding a relative symbol reference. This allows jump
table entries to be emitted in a way that makes them invariant under
runtime relocation, which means that no metadata needs to be emitted
into the kernel image to describe such data structures, resulting in
a size reduction.
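
In ELF terms, a place-relative relocation resolves as S + A - P: the
symbol value plus the addend, minus the address of the location being
patched. A minimal sketch of applying one, using standard ELF64 types
(this mirrors the module loader hunk below; since the result is stored
in 64 bits, no overflow check is needed):

  #include <elf.h>
  #include <stdint.h>

  static void apply_pc64(const Elf64_Sym *sym, const Elf64_Rela *rel,
                         void *loc)
  {
          uint64_t val = sym->st_value + rel->r_addend;  /* S + A */

          val -= (uint64_t)loc;                          /* - P */
          *(uint64_t *)loc = val;
  }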

Signed-off-by: Ard Biesheuvel <[email protected]>
---
 arch/x86/include/asm/elf.h         |  1 +
 arch/x86/kernel/machine_kexec_64.c |  4 ++++
 arch/x86/kernel/module.c           |  6 ++++++
 arch/x86/tools/relocs.c            | 10 ++++++++++
 4 files changed, 21 insertions(+)

diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
index 0d157d2a1e2a..d3925d684296 100644
--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -62,6 +62,7 @@ typedef struct user_fxsr_struct elf_fpxregset_t;
#define R_X86_64_PC16 13 /* 16 bit sign extended pc relative */
#define R_X86_64_8 14 /* Direct 8 bit sign extended */
#define R_X86_64_PC8 15 /* 8 bit sign extended pc relative */
+#define R_X86_64_PC64 24 /* Place relative 64-bit signed */

#define R_X86_64_NUM 16

diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index 4c8acdfdc5a7..6638d1edb2be 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -496,6 +496,10 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
value -= (u64)address;
*(u32 *)location = value;
break;
+ case R_X86_64_PC64:
+ value -= (u64)address;
+ *(u64 *)location = value;
+ break;
default:
pr_err("Unknown rela relocation: %llu\n",
ELF64_R_TYPE(rel[i].r_info));
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index f58336af095c..b052e883dd8c 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -201,6 +201,12 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
goto overflow;
#endif
break;
+ case R_X86_64_PC64:
+ if (*(u64 *)loc != 0)
+ goto invalid_relocation;
+ val -= (u64)loc;
+ *(u64 *)loc = val;
+ break;
default:
pr_err("%s: Unknown rela relocation: %llu\n",
me->name, ELF64_R_TYPE(rel[i].r_info));
diff --git a/arch/x86/tools/relocs.c b/arch/x86/tools/relocs.c
index 220e97841e49..a4075bc37e8f 100644
--- a/arch/x86/tools/relocs.c
+++ b/arch/x86/tools/relocs.c
@@ -195,6 +195,7 @@ static const char *rel_type(unsigned type)
#if ELF_BITS == 64
REL_TYPE(R_X86_64_NONE),
REL_TYPE(R_X86_64_64),
+ REL_TYPE(R_X86_64_PC64),
REL_TYPE(R_X86_64_PC32),
REL_TYPE(R_X86_64_GOT32),
REL_TYPE(R_X86_64_PLT32),
@@ -781,6 +782,15 @@ static int do_reloc64(struct section *sec, Elf_Rel *rel, ElfW(Sym) *sym,
add_reloc(&relocs32neg, offset);
break;

+ case R_X86_64_PC64:
+ /*
+ * Only used by jump labels
+ */
+ if (is_percpu_sym(sym, symname))
+ die("Invalid R_X86_64_PC64 relocation against per-CPU symbol %s\n",
+ symname);
+ break;
+
case R_X86_64_32:
case R_X86_64_32S:
case R_X86_64_64:
--
2.17.1


2018-07-02 18:14:53

by Ard Biesheuvel

Subject: [PATCH v2 7/8] jump_label: annotate entries that operate on __init code earlier

Jump table entries are mostly read-only, with the exception of the
init and module loader code that defuses entries that point into init
code when the code being referred to is freed.

For robustness, it would be better to move these entries into the
ro_after_init section, but clearing the 'code' member of each jump
table entry referring to init code at module load time races with the
module_enable_ro() call that remaps the ro_after_init section read-only,
so we'd like to do it earlier.

Given that whether such an entry refers to init code can be decided
much earlier, we can pull this check forward. Since we may still need
the entry's code field at this point, let's switch to setting a low bit
in the 'key' member, just like we do to annotate the default state of a
jump table entry.
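
This works because struct static_key is word aligned, so the two low
bits of any reference to it are known to be zero and are free to carry
flags. A minimal sketch of the resulting tagging scheme (the macro and
helper names here are made up for illustration; the patch open-codes
the masks):

  #define JUMP_ENTRY_BRANCH  1UL  /* bit 0: branch/default state    */
  #define JUMP_ENTRY_INIT    2UL  /* bit 1: entry targets init code */

  /* strip both flag bits to recover the actual key reference */
  static inline void *tagged_key_ptr(unsigned long key)
  {
          return (void *)(key & ~3UL);
  }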

Signed-off-by: Ard Biesheuvel <[email protected]>
---
 include/linux/jump_label.h | 11 ++---
 init/main.c                |  1 -
 kernel/jump_label.c        | 48 ++++++--------------
 3 files changed, 18 insertions(+), 42 deletions(-)

diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
index 871826fd0c3b..feee8abc96be 100644
--- a/include/linux/jump_label.h
+++ b/include/linux/jump_label.h
@@ -141,7 +141,7 @@ static inline unsigned long jump_entry_target(const struct jump_entry *entry)

static inline struct static_key *jump_entry_key(const struct jump_entry *entry)
{
- long offset = entry->key & ~1L;
+ long offset = entry->key & ~3L;

return (struct static_key *)((unsigned long)&entry->key + offset);
}
@@ -160,7 +160,7 @@ static inline unsigned long jump_entry_target(const struct jump_entry *entry)

static inline struct static_key *jump_entry_key(const struct jump_entry *entry)
{
- return (struct static_key *)((unsigned long)entry->key & ~1UL);
+ return (struct static_key *)((unsigned long)entry->key & ~3UL);
}

#endif
@@ -172,12 +172,12 @@ static inline bool jump_entry_is_branch(const struct jump_entry *entry)

static inline bool jump_entry_is_init(const struct jump_entry *entry)
{
- return entry->code == 0;
+ return (unsigned long)entry->key & 2UL;
}

static inline void jump_entry_set_init(struct jump_entry *entry)
{
- entry->code = 0;
+ entry->key |= 2;
}

#endif
@@ -213,7 +213,6 @@ extern struct jump_entry __start___jump_table[];
extern struct jump_entry __stop___jump_table[];

extern void jump_label_init(void);
-extern void jump_label_invalidate_initmem(void);
extern void jump_label_lock(void);
extern void jump_label_unlock(void);
extern void arch_jump_label_transform(struct jump_entry *entry,
@@ -261,8 +260,6 @@ static __always_inline void jump_label_init(void)
static_key_initialized = true;
}

-static inline void jump_label_invalidate_initmem(void) {}
-
static __always_inline bool static_key_false(struct static_key *key)
{
if (unlikely(static_key_count(key) > 0))
diff --git a/init/main.c b/init/main.c
index e59a01f163d6..d1a6b8a896e5 100644
--- a/init/main.c
+++ b/init/main.c
@@ -1062,7 +1062,6 @@ static int __ref kernel_init(void *unused)
/* need to finish all async __init code before freeing the memory */
async_synchronize_full();
ftrace_free_init_mem();
- jump_label_invalidate_initmem();
free_initmem();
mark_readonly();
system_state = SYSTEM_RUNNING;
diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index d424e1d22d63..7cdd49aeaf6a 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -373,14 +373,15 @@ static enum jump_label_type jump_label_type(struct jump_entry *entry)

static void __jump_label_update(struct static_key *key,
struct jump_entry *entry,
- struct jump_entry *stop)
+ struct jump_entry *stop,
+ bool init)
{
for (; (entry < stop) && (jump_entry_key(entry) == key); entry++) {
/*
* An entry->code of 0 indicates an entry which has been
* disabled because it was in an init text area.
*/
- if (!jump_entry_is_init(entry)) {
+ if (init || !jump_entry_is_init(entry)) {
if (kernel_text_address(jump_entry_code(entry)))
arch_jump_label_transform(entry, jump_label_type(entry));
else
@@ -420,6 +421,9 @@ void __init jump_label_init(void)
if (jump_label_type(iter) == JUMP_LABEL_NOP)
arch_jump_label_transform_static(iter, JUMP_LABEL_NOP);

+ if (init_section_contains((void *)jump_entry_code(iter), 1))
+ jump_entry_set_init(iter);
+
iterk = jump_entry_key(iter);
if (iterk == key)
continue;
@@ -432,19 +436,6 @@ void __init jump_label_init(void)
cpus_read_unlock();
}

-/* Disable any jump label entries in __init/__exit code */
-void __init jump_label_invalidate_initmem(void)
-{
- struct jump_entry *iter_start = __start___jump_table;
- struct jump_entry *iter_stop = __stop___jump_table;
- struct jump_entry *iter;
-
- for (iter = iter_start; iter < iter_stop; iter++) {
- if (init_section_contains((void *)jump_entry_code(iter), 1))
- jump_entry_set_init(iter);
- }
-}
-
#ifdef CONFIG_MODULES

static enum jump_label_type jump_label_init_type(struct jump_entry *entry)
@@ -524,7 +515,8 @@ static void __jump_label_mod_update(struct static_key *key)
stop = __stop___jump_table;
else
stop = m->jump_entries + m->num_jump_entries;
- __jump_label_update(key, mod->entries, stop);
+ __jump_label_update(key, mod->entries, stop,
+ m->state == MODULE_STATE_COMING);
}
}

@@ -570,6 +562,9 @@ static int jump_label_add_module(struct module *mod)
for (iter = iter_start; iter < iter_stop; iter++) {
struct static_key *iterk;

+ if (within_module_init(jump_entry_code(iter), mod))
+ jump_entry_set_init(iter);
+
iterk = jump_entry_key(iter);
if (iterk == key)
continue;
@@ -605,7 +600,7 @@ static int jump_label_add_module(struct module *mod)

/* Only update if we've changed from our initial state */
if (jump_label_type(iter) != jump_label_init_type(iter))
- __jump_label_update(key, iter, iter_stop);
+ __jump_label_update(key, iter, iter_stop, true);
}

return 0;
@@ -661,19 +656,6 @@ static void jump_label_del_module(struct module *mod)
}
}

-/* Disable any jump label entries in module init code */
-static void jump_label_invalidate_module_init(struct module *mod)
-{
- struct jump_entry *iter_start = mod->jump_entries;
- struct jump_entry *iter_stop = iter_start + mod->num_jump_entries;
- struct jump_entry *iter;
-
- for (iter = iter_start; iter < iter_stop; iter++) {
- if (within_module_init(jump_entry_code(iter), mod))
- jump_entry_set_init(iter);
- }
-}
-
static int
jump_label_module_notify(struct notifier_block *self, unsigned long val,
void *data)
@@ -695,9 +677,6 @@ jump_label_module_notify(struct notifier_block *self, unsigned long val,
case MODULE_STATE_GOING:
jump_label_del_module(mod);
break;
- case MODULE_STATE_LIVE:
- jump_label_invalidate_module_init(mod);
- break;
}

jump_label_unlock();
@@ -767,7 +746,8 @@ static void jump_label_update(struct static_key *key)
entry = static_key_entries(key);
/* if there are no users, entry can be NULL */
if (entry)
- __jump_label_update(key, entry, stop);
+ __jump_label_update(key, entry, stop,
+ system_state < SYSTEM_RUNNING);
}

#ifdef CONFIG_STATIC_KEYS_SELFTEST
--
2.17.1


2018-07-02 18:15:12

by Ard Biesheuvel

Subject: [PATCH v2 6/8] x86/kernel: jump_table: use relative references

Similar to the arm64 case, 64-bit x86 can benefit from using relative
references rather than absolute ones when emitting struct jump_entry
instances. Not only does this reduce the memory footprint of the entries
themselves by 33%, it also removes the need to carry relocation
metadata on relocatable builds (i.e., for KASLR), which saves a fair
chunk of .init space as well (although the savings are not as dramatic
as on arm64).
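
The 33% follows directly from the layout change: two 8-byte absolute
pointers become two 4-byte offsets, and the 8-byte key pointer becomes
an 8-byte offset, shrinking each entry from 24 to 16 bytes. This is
also why the objtool hunk below changes JUMP_ENTRY_SIZE from 24 to 16
and JUMP_NEW_OFFSET from 8 to 4.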

Signed-off-by: Ard Biesheuvel <[email protected]>
---
 arch/x86/Kconfig                  |  1 +
 arch/x86/include/asm/jump_label.h | 24 +++++++-------------
 tools/objtool/special.c           |  4 ++--
 3 files changed, 11 insertions(+), 18 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index e10a3542db7e..dd71258ec1cc 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -118,6 +118,7 @@ config X86
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_HUGE_VMAP if X86_64 || X86_PAE
select HAVE_ARCH_JUMP_LABEL
+ select HAVE_ARCH_JUMP_LABEL_RELATIVE
select HAVE_ARCH_KASAN if X86_64
select HAVE_ARCH_KGDB
select HAVE_ARCH_MMAP_RND_BITS if MMU
diff --git a/arch/x86/include/asm/jump_label.h b/arch/x86/include/asm/jump_label.h
index 8c0de4282659..21efc9d07ed9 100644
--- a/arch/x86/include/asm/jump_label.h
+++ b/arch/x86/include/asm/jump_label.h
@@ -37,7 +37,8 @@ static __always_inline bool arch_static_branch(struct static_key *key, bool bran
".byte " __stringify(STATIC_KEY_INIT_NOP) "\n\t"
".pushsection __jump_table, \"aw\" \n\t"
_ASM_ALIGN "\n\t"
- _ASM_PTR "1b, %l[l_yes], %c0 + %c1 \n\t"
+ ".long 1b - ., %l[l_yes] - . \n\t"
+ _ASM_PTR "%c0 + %c1 - .\n\t"
".popsection \n\t"
: : "i" (key), "i" (branch) : : l_yes);

@@ -53,7 +54,8 @@ static __always_inline bool arch_static_branch_jump(struct static_key *key, bool
"2:\n\t"
".pushsection __jump_table, \"aw\" \n\t"
_ASM_ALIGN "\n\t"
- _ASM_PTR "1b, %l[l_yes], %c0 + %c1 \n\t"
+ ".long 1b - ., %l[l_yes] - . \n\t"
+ _ASM_PTR "%c0 + %c1 - .\n\t"
".popsection \n\t"
: : "i" (key), "i" (branch) : : l_yes);

@@ -62,18 +64,6 @@ static __always_inline bool arch_static_branch_jump(struct static_key *key, bool
return true;
}

-#ifdef CONFIG_X86_64
-typedef u64 jump_label_t;
-#else
-typedef u32 jump_label_t;
-#endif
-
-struct jump_entry {
- jump_label_t code;
- jump_label_t target;
- jump_label_t key;
-};
-
#else /* __ASSEMBLY__ */

.macro STATIC_JUMP_IF_TRUE target, key, def
@@ -88,7 +78,8 @@ struct jump_entry {
.endif
.pushsection __jump_table, "aw"
_ASM_ALIGN
- _ASM_PTR .Lstatic_jump_\@, \target, \key
+ .long .Lstatic_jump_\@ - ., \target - .
+ _ASM_PTR \key - .
.popsection
.endm

@@ -104,7 +95,8 @@ struct jump_entry {
.endif
.pushsection __jump_table, "aw"
_ASM_ALIGN
- _ASM_PTR .Lstatic_jump_\@, \target, \key + 1
+ .long .Lstatic_jump_\@ - ., \target - .
+ _ASM_PTR \key + 1 - .
.popsection
.endm

diff --git a/tools/objtool/special.c b/tools/objtool/special.c
index 84f001d52322..50af4e1274b3 100644
--- a/tools/objtool/special.c
+++ b/tools/objtool/special.c
@@ -30,9 +30,9 @@
#define EX_ORIG_OFFSET 0
#define EX_NEW_OFFSET 4

-#define JUMP_ENTRY_SIZE 24
+#define JUMP_ENTRY_SIZE 16
#define JUMP_ORIG_OFFSET 0
-#define JUMP_NEW_OFFSET 8
+#define JUMP_NEW_OFFSET 4

#define ALT_ENTRY_SIZE 13
#define ALT_ORIG_OFFSET 0
--
2.17.1


2018-07-02 18:15:39

by Ard Biesheuvel

Subject: [PATCH v2 8/8] jump_table: move entries into ro_after_init region

The __jump_table sections emitted into the core kernel and into
each module consist of statically initialized references into
other parts of the code. With the exception of entries that
point into init code, which are defused at post-init time, these
data structures are never modified.

So let's move them into the ro_after_init section, to prevent them
from being corrupted inadvertently by buggy code, or deliberately
by an attacker.
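
For comparison, individual C-level variables already get this treatment
via the __ro_after_init attribute (defined in <linux/cache.h>), which
places them in .data..ro_after_init; a minimal usage sketch, with a
made-up variable name:

  #include <linux/cache.h>

  /* writable while the kernel boots, remapped read-only once
   * init completes */
  static unsigned int my_threshold __ro_after_init = 16;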

Signed-off-by: Ard Biesheuvel <[email protected]>
---
 arch/arm/kernel/vmlinux-xip.lds.S |  1 +
 arch/s390/kernel/vmlinux.lds.S    |  1 +
 include/asm-generic/vmlinux.lds.h | 11 +++++++----
 kernel/module.c                   |  9 +++++++++
 4 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/arch/arm/kernel/vmlinux-xip.lds.S b/arch/arm/kernel/vmlinux-xip.lds.S
index 3593d5c1acd2..763c41068ecc 100644
--- a/arch/arm/kernel/vmlinux-xip.lds.S
+++ b/arch/arm/kernel/vmlinux-xip.lds.S
@@ -118,6 +118,7 @@ SECTIONS
RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
.data.ro_after_init : AT(ADDR(.data.ro_after_init) - LOAD_OFFSET) {
*(.data..ro_after_init)
+ JUMP_TABLE_DATA
}
_edata = .;

diff --git a/arch/s390/kernel/vmlinux.lds.S b/arch/s390/kernel/vmlinux.lds.S
index f0414f52817b..a7cf61e46f88 100644
--- a/arch/s390/kernel/vmlinux.lds.S
+++ b/arch/s390/kernel/vmlinux.lds.S
@@ -67,6 +67,7 @@ SECTIONS
__start_ro_after_init = .;
.data..ro_after_init : {
*(.data..ro_after_init)
+ JUMP_TABLE_DATA
}
EXCEPTION_TABLE(16)
. = ALIGN(PAGE_SIZE);
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index e373e2e10f6a..ed6befa4c47b 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -256,10 +256,6 @@
STRUCT_ALIGN(); \
*(__tracepoints) \
/* implement dynamic printk debug */ \
- . = ALIGN(8); \
- __start___jump_table = .; \
- KEEP(*(__jump_table)) \
- __stop___jump_table = .; \
. = ALIGN(8); \
__start___verbose = .; \
KEEP(*(__verbose)) \
@@ -303,6 +299,12 @@
. = __start_init_task + THREAD_SIZE; \
__end_init_task = .;

+#define JUMP_TABLE_DATA \
+ . = ALIGN(8); \
+ __start___jump_table = .; \
+ KEEP(*(__jump_table)) \
+ __stop___jump_table = .;
+
/*
* Allow architectures to handle ro_after_init data on their
* own by defining an empty RO_AFTER_INIT_DATA.
@@ -311,6 +313,7 @@
#define RO_AFTER_INIT_DATA \
__start_ro_after_init = .; \
*(.data..ro_after_init) \
+ JUMP_TABLE_DATA \
__end_ro_after_init = .;
#endif

diff --git a/kernel/module.c b/kernel/module.c
index 7cb82e0fcac0..0d4e320e41cd 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -3349,6 +3349,15 @@ static struct module *layout_and_allocate(struct load_info *info, int flags)
* Note: ro_after_init sections also have SHF_{WRITE,ALLOC} set.
*/
ndx = find_sec(info, ".data..ro_after_init");
+ if (ndx)
+ info->sechdrs[ndx].sh_flags |= SHF_RO_AFTER_INIT;
+ /*
+ * Mark the __jump_table section as ro_after_init as well: these data
+ * structures are never modified, with the exception of entries that
+ * refer to code in the __init section, which are annotated as such
+ * at module load time.
+ */
+ ndx = find_sec(info, "__jump_table");
if (ndx)
info->sechdrs[ndx].sh_flags |= SHF_RO_AFTER_INIT;

--
2.17.1


2018-07-02 18:15:43

by Ard Biesheuvel

Subject: [PATCH v2 5/8] x86: jump_label: switch to jump_entry accessors

In preparation for switching x86 to use place-relative references for
the code, target and key members of struct jump_entry, replace direct
references to the struct members with invocations of the new accessors.
This will allow us to make the switch by modifying the accessors only.

This incorporates a cleanup of __jump_label_transform() proposed by
Peter.
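
The cleanup folds four separate memcmp()/bug_at() call sites into one:
each branch now just records which byte sequence is expected at the
patch site (and the source line to report on mismatch), and a single
check runs afterwards. A condensed sketch of the resulting flow, with
the line-number bookkeeping omitted:

  if (type == JUMP_LABEL_JMP)
          expect = init ? default_nop : ideal_nop;
  else
          expect = init ? default_nop : &jmp.code;

  if (memcmp((void *)jump_entry_code(entry), expect, JUMP_LABEL_NOP_SIZE))
          bug_at((void *)jump_entry_code(entry), line);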

Cc: Peter Zijlstra <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
---
 arch/x86/kernel/jump_label.c | 62 ++++++++------------
 1 file changed, 25 insertions(+), 37 deletions(-)

diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
index e56c95be2808..023768222a94 100644
--- a/arch/x86/kernel/jump_label.c
+++ b/arch/x86/kernel/jump_label.c
@@ -42,52 +42,37 @@ static void __jump_label_transform(struct jump_entry *entry,
void *(*poker)(void *, const void *, size_t),
int init)
{
- union jump_code_union code;
+ union jump_code_union jmp;
const unsigned char default_nop[] = { STATIC_KEY_INIT_NOP };
const unsigned char *ideal_nop = ideal_nops[NOP_ATOMIC5];
+ const void *expect, *code;
+ int line;
+
+ jmp.jump = 0xe9;
+ jmp.offset = jump_entry_target(entry) -
+ (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);

if (type == JUMP_LABEL_JMP) {
if (init) {
- /*
- * Jump label is enabled for the first time.
- * So we expect a default_nop...
- */
- if (unlikely(memcmp((void *)entry->code, default_nop, 5)
- != 0))
- bug_at((void *)entry->code, __LINE__);
+ expect = default_nop; line = __LINE__;
} else {
- /*
- * ...otherwise expect an ideal_nop. Otherwise
- * something went horribly wrong.
- */
- if (unlikely(memcmp((void *)entry->code, ideal_nop, 5)
- != 0))
- bug_at((void *)entry->code, __LINE__);
+ expect = ideal_nop; line = __LINE__;
}

- code.jump = 0xe9;
- code.offset = entry->target -
- (entry->code + JUMP_LABEL_NOP_SIZE);
+ code = &jmp.code;
} else {
- /*
- * We are disabling this jump label. If it is not what
- * we think it is, then something must have gone wrong.
- * If this is the first initialization call, then we
- * are converting the default nop to the ideal nop.
- */
if (init) {
- if (unlikely(memcmp((void *)entry->code, default_nop, 5) != 0))
- bug_at((void *)entry->code, __LINE__);
+ expect = default_nop; line = __LINE__;
} else {
- code.jump = 0xe9;
- code.offset = entry->target -
- (entry->code + JUMP_LABEL_NOP_SIZE);
- if (unlikely(memcmp((void *)entry->code, &code, 5) != 0))
- bug_at((void *)entry->code, __LINE__);
+ expect = &jmp.code; line = __LINE__;
}
- memcpy(&code, ideal_nops[NOP_ATOMIC5], JUMP_LABEL_NOP_SIZE);
+
+ code = ideal_nop;
}

+ if (memcmp((void *)jump_entry_code(entry), expect, JUMP_LABEL_NOP_SIZE))
+ bug_at((void *)jump_entry_code(entry), line);
+
/*
* Make text_poke_bp() a default fallback poker.
*
@@ -96,11 +81,14 @@ static void __jump_label_transform(struct jump_entry *entry,
* always nop being the 'currently valid' instruction
*
*/
- if (poker)
- (*poker)((void *)entry->code, &code, JUMP_LABEL_NOP_SIZE);
- else
- text_poke_bp((void *)entry->code, &code, JUMP_LABEL_NOP_SIZE,
- (void *)entry->code + JUMP_LABEL_NOP_SIZE);
+ if (poker) {
+ (*poker)((void *)jump_entry_code(entry), code,
+ JUMP_LABEL_NOP_SIZE);
+ return;
+ }
+
+ text_poke_bp((void *)jump_entry_code(entry), code, JUMP_LABEL_NOP_SIZE,
+ (void *)jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
}

void arch_jump_label_transform(struct jump_entry *entry,
--
2.17.1


2018-07-02 18:16:01

by Ard Biesheuvel

Subject: [PATCH v2 2/8] kernel/jump_label: implement generic support for relative references

To reduce the size taken up by absolute references in jump label
entries themselves and the associated relocation records in the
.init segment, add support for emitting them as relative references
instead.

Note that this requires some extra care in the sorting routine, given
that the offsets change when entries are moved around in the jump_entry
table.
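
Concretely, if an entry stores a self-relative offset off such that
addr == &field + off, and the entry then moves by delta bytes, the
stored offset must become off - delta, because
addr == (&field + delta) + (off - delta). This is exactly the
adjustment that jump_label_swap() below applies to all three members
whenever the sort routine exchanges two entries.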

Signed-off-by: Ard Biesheuvel <[email protected]>
---
 arch/Kconfig               |  3 +++
 include/linux/jump_label.h | 28 ++++++++++++++++++++
 kernel/jump_label.c        | 22 ++++++++++++++-
 3 files changed, 52 insertions(+), 1 deletion(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 2b8b70820002..22fa3792626e 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -348,6 +348,9 @@ config HAVE_PERF_USER_STACK_DUMP
config HAVE_ARCH_JUMP_LABEL
bool

+config HAVE_ARCH_JUMP_LABEL_RELATIVE
+ bool
+
config HAVE_RCU_TABLE_FREE
bool

diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
index 4603a1c88e48..871826fd0c3b 100644
--- a/include/linux/jump_label.h
+++ b/include/linux/jump_label.h
@@ -121,6 +121,32 @@ struct static_key {
#include <asm/jump_label.h>

#ifndef __ASSEMBLY__
+#ifdef CONFIG_HAVE_ARCH_JUMP_LABEL_RELATIVE
+
+struct jump_entry {
+ s32 code;
+ s32 target;
+ long key; // key may be far away from the core kernel under KASLR
+};
+
+static inline unsigned long jump_entry_code(const struct jump_entry *entry)
+{
+ return (unsigned long)&entry->code + entry->code;
+}
+
+static inline unsigned long jump_entry_target(const struct jump_entry *entry)
+{
+ return (unsigned long)&entry->target + entry->target;
+}
+
+static inline struct static_key *jump_entry_key(const struct jump_entry *entry)
+{
+ long offset = entry->key & ~1L;
+
+ return (struct static_key *)((unsigned long)&entry->key + offset);
+}
+
+#else

static inline unsigned long jump_entry_code(const struct jump_entry *entry)
{
@@ -137,6 +163,8 @@ static inline struct static_key *jump_entry_key(const struct jump_entry *entry)
return (struct static_key *)((unsigned long)entry->key & ~1UL);
}

+#endif
+
static inline bool jump_entry_is_branch(const struct jump_entry *entry)
{
return (unsigned long)entry->key & 1UL;
diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index 8a3ac4f5f490..d424e1d22d63 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -47,14 +47,34 @@ static int jump_label_cmp(const void *a, const void *b)
return 0;
}

+static void jump_label_swap(void *a, void *b, int size)
+{
+ long delta = (unsigned long)a - (unsigned long)b;
+ struct jump_entry *jea = a;
+ struct jump_entry *jeb = b;
+ struct jump_entry tmp = *jea;
+
+ jea->code = jeb->code - delta;
+ jea->target = jeb->target - delta;
+ jea->key = jeb->key - delta;
+
+ jeb->code = tmp.code + delta;
+ jeb->target = tmp.target + delta;
+ jeb->key = tmp.key + delta;
+}
+
static void
jump_label_sort_entries(struct jump_entry *start, struct jump_entry *stop)
{
unsigned long size;
+ void *swapfn = NULL;
+
+ if (IS_ENABLED(CONFIG_HAVE_ARCH_JUMP_LABEL_RELATIVE))
+ swapfn = jump_label_swap;

size = (((unsigned long)stop - (unsigned long)start)
/ sizeof(struct jump_entry));
- sort(start, size, sizeof(struct jump_entry), jump_label_cmp, NULL);
+ sort(start, size, sizeof(struct jump_entry), jump_label_cmp, swapfn);
}

static void jump_label_update(struct static_key *key);
--
2.17.1


2018-07-02 18:16:20

by Ard Biesheuvel

Subject: [PATCH v2 3/8] arm64/kernel: jump_label: switch to relative references

On a randomly chosen distro kernel build for arm64, vmlinux.o shows the
following sections, containing jump label entries, and the associated
RELA relocation records, respectively:

...
[38088] __jump_table PROGBITS 0000000000000000 00e19f30
000000000002ea10 0000000000000000 WA 0 0 8
[38089] .rela__jump_table RELA 0000000000000000 01fd8bb0
000000000008be30 0000000000000018 I 38178 38088 8
...

In other words, we have 190 KB worth of 'struct jump_entry' instances,
and 573 KB worth of RELA entries to relocate each entry's code, target
and key members. This means the RELA section occupies 10% of the .init
segment, and the two sections combined represent 5% of vmlinux's entire
memory footprint.
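
The numbers are self-consistent: 0x2ea10 = 190,992 bytes corresponds to
7,958 jump_entry instances of 24 bytes each, and relocating the three
members of every entry takes three 24-byte Elf64_Rela records, i.e.
7,958 x 72 = 572,976 = 0x8be30 bytes, which matches the size of the
.rela__jump_table section above.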

So let's switch from 64-bit absolute references to 32-bit relative
references for the code and target fields, and a 64-bit relative
reference for the 'key' field (which may reside in another module or
the core kernel, which may be more than 4 GB away on arm64 when
running with KASLR enabled): this reduces the size of the __jump_table
by 33%, and gets rid of the RELA section entirely.

Acked-by: Will Deacon <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
---
 arch/arm64/Kconfig                  |  1 +
 arch/arm64/include/asm/jump_label.h | 36 +++++++++-----------
 arch/arm64/kernel/jump_label.c      |  6 ++--
 3 files changed, 20 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1940c6405d04..d17aa9614e69 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -91,6 +91,7 @@ config ARM64
select HAVE_ARCH_BITREVERSE
select HAVE_ARCH_HUGE_VMAP
select HAVE_ARCH_JUMP_LABEL
+ select HAVE_ARCH_JUMP_LABEL_RELATIVE
select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
select HAVE_ARCH_KGDB
select HAVE_ARCH_MMAP_RND_BITS
diff --git a/arch/arm64/include/asm/jump_label.h b/arch/arm64/include/asm/jump_label.h
index 1b5e0e843c3a..bb6d15b34668 100644
--- a/arch/arm64/include/asm/jump_label.h
+++ b/arch/arm64/include/asm/jump_label.h
@@ -26,13 +26,15 @@

#define JUMP_LABEL_NOP_SIZE AARCH64_INSN_SIZE

-static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
+static __always_inline bool arch_static_branch(struct static_key *key,
+ bool branch)
{
- asm goto("1: nop\n\t"
- ".pushsection __jump_table, \"aw\"\n\t"
- ".align 3\n\t"
- ".quad 1b, %l[l_yes], %c0\n\t"
- ".popsection\n\t"
+ asm goto("1: nop \n\t"
+ " .pushsection __jump_table, \"aw\" \n\t"
+ " .align 3 \n\t"
+ " .long 1b - ., %l[l_yes] - . \n\t"
+ " .quad %c0 - . \n\t"
+ " .popsection \n\t"
: : "i"(&((char *)key)[branch]) : : l_yes);

return false;
@@ -40,13 +42,15 @@ static __always_inline bool arch_static_branch(struct static_key *key, bool bran
return true;
}

-static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch)
+static __always_inline bool arch_static_branch_jump(struct static_key *key,
+ bool branch)
{
- asm goto("1: b %l[l_yes]\n\t"
- ".pushsection __jump_table, \"aw\"\n\t"
- ".align 3\n\t"
- ".quad 1b, %l[l_yes], %c0\n\t"
- ".popsection\n\t"
+ asm goto("1: b %l[l_yes] \n\t"
+ " .pushsection __jump_table, \"aw\" \n\t"
+ " .align 3 \n\t"
+ " .long 1b - ., %l[l_yes] - . \n\t"
+ " .quad %c0 - . \n\t"
+ " .popsection \n\t"
: : "i"(&((char *)key)[branch]) : : l_yes);

return false;
@@ -54,13 +58,5 @@ static __always_inline bool arch_static_branch_jump(struct static_key *key, bool
return true;
}

-typedef u64 jump_label_t;
-
-struct jump_entry {
- jump_label_t code;
- jump_label_t target;
- jump_label_t key;
-};
-
#endif /* __ASSEMBLY__ */
#endif /* __ASM_JUMP_LABEL_H */
diff --git a/arch/arm64/kernel/jump_label.c b/arch/arm64/kernel/jump_label.c
index c2dd1ad3e648..903d17023d77 100644
--- a/arch/arm64/kernel/jump_label.c
+++ b/arch/arm64/kernel/jump_label.c
@@ -25,12 +25,12 @@
void arch_jump_label_transform(struct jump_entry *entry,
enum jump_label_type type)
{
- void *addr = (void *)entry->code;
+ void *addr = (void *)jump_entry_code(entry);
u32 insn;

if (type == JUMP_LABEL_JMP) {
- insn = aarch64_insn_gen_branch_imm(entry->code,
- entry->target,
+ insn = aarch64_insn_gen_branch_imm(jump_entry_code(entry),
+ jump_entry_target(entry),
AARCH64_INSN_BRANCH_NOLINK);
} else {
insn = aarch64_insn_gen_nop();
--
2.17.1


2018-07-02 18:17:54

by Ard Biesheuvel

Subject: [PATCH v2 1/8] kernel/jump_label: abstract jump_entry member accessors

In preparation for allowing architectures to use relative references
in jump_label entries [which can dramatically reduce the memory
footprint], introduce abstractions for references to the 'code',
'target' and 'key' members of struct jump_entry.
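
With the accessors in place, generic code never touches the struct
members directly, so a later patch can change the representation behind
the same interface. A minimal sketch of a converted call site (taken
from the __jump_label_update() hunk below):

  if (kernel_text_address(jump_entry_code(entry)))
          arch_jump_label_transform(entry, jump_label_type(entry));
  else
          WARN_ONCE(1, "can't patch jump_label at %pS",
                    (void *)jump_entry_code(entry));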

Signed-off-by: Ard Biesheuvel <[email protected]>
---
 include/linux/jump_label.h | 34 +++++++++++++++++
 kernel/jump_label.c        | 40 ++++++++------------
 2 files changed, 49 insertions(+), 25 deletions(-)

diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
index b46b541c67c4..4603a1c88e48 100644
--- a/include/linux/jump_label.h
+++ b/include/linux/jump_label.h
@@ -119,6 +119,40 @@ struct static_key {

#ifdef HAVE_JUMP_LABEL
#include <asm/jump_label.h>
+
+#ifndef __ASSEMBLY__
+
+static inline unsigned long jump_entry_code(const struct jump_entry *entry)
+{
+ return entry->code;
+}
+
+static inline unsigned long jump_entry_target(const struct jump_entry *entry)
+{
+ return entry->target;
+}
+
+static inline struct static_key *jump_entry_key(const struct jump_entry *entry)
+{
+ return (struct static_key *)((unsigned long)entry->key & ~1UL);
+}
+
+static inline bool jump_entry_is_branch(const struct jump_entry *entry)
+{
+ return (unsigned long)entry->key & 1UL;
+}
+
+static inline bool jump_entry_is_init(const struct jump_entry *entry)
+{
+ return entry->code == 0;
+}
+
+static inline void jump_entry_set_init(struct jump_entry *entry)
+{
+ entry->code = 0;
+}
+
+#endif
#endif

#ifndef __ASSEMBLY__
diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index 01ebdf1f9f40..8a3ac4f5f490 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -38,10 +38,10 @@ static int jump_label_cmp(const void *a, const void *b)
const struct jump_entry *jea = a;
const struct jump_entry *jeb = b;

- if (jea->key < jeb->key)
+ if (jump_entry_key(jea) < jump_entry_key(jeb))
return -1;

- if (jea->key > jeb->key)
+ if (jump_entry_key(jea) > jump_entry_key(jeb))
return 1;

return 0;
@@ -261,8 +261,8 @@ EXPORT_SYMBOL_GPL(jump_label_rate_limit);

static int addr_conflict(struct jump_entry *entry, void *start, void *end)
{
- if (entry->code <= (unsigned long)end &&
- entry->code + JUMP_LABEL_NOP_SIZE > (unsigned long)start)
+ if (jump_entry_code(entry) <= (unsigned long)end &&
+ jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE > (unsigned long)start)
return 1;

return 0;
@@ -321,16 +321,6 @@ static inline void static_key_set_linked(struct static_key *key)
key->type |= JUMP_TYPE_LINKED;
}

-static inline struct static_key *jump_entry_key(struct jump_entry *entry)
-{
- return (struct static_key *)((unsigned long)entry->key & ~1UL);
-}
-
-static bool jump_entry_branch(struct jump_entry *entry)
-{
- return (unsigned long)entry->key & 1UL;
-}
-
/***
* A 'struct static_key' uses a union such that it either points directly
* to a table of 'struct jump_entry' or to a linked list of modules which in
@@ -355,7 +345,7 @@ static enum jump_label_type jump_label_type(struct jump_entry *entry)
{
struct static_key *key = jump_entry_key(entry);
bool enabled = static_key_enabled(key);
- bool branch = jump_entry_branch(entry);
+ bool branch = jump_entry_is_branch(entry);

/* See the comment in linux/jump_label.h */
return enabled ^ branch;
@@ -370,12 +360,12 @@ static void __jump_label_update(struct static_key *key,
* An entry->code of 0 indicates an entry which has been
* disabled because it was in an init text area.
*/
- if (entry->code) {
- if (kernel_text_address(entry->code))
+ if (!jump_entry_is_init(entry)) {
+ if (kernel_text_address(jump_entry_code(entry)))
arch_jump_label_transform(entry, jump_label_type(entry));
else
WARN_ONCE(1, "can't patch jump_label at %pS",
- (void *)(unsigned long)entry->code);
+ (void *)jump_entry_code(entry));
}
}
}
@@ -430,8 +420,8 @@ void __init jump_label_invalidate_initmem(void)
struct jump_entry *iter;

for (iter = iter_start; iter < iter_stop; iter++) {
- if (init_section_contains((void *)(unsigned long)iter->code, 1))
- iter->code = 0;
+ if (init_section_contains((void *)jump_entry_code(iter), 1))
+ jump_entry_set_init(iter);
}
}

@@ -441,7 +431,7 @@ static enum jump_label_type jump_label_init_type(struct jump_entry *entry)
{
struct static_key *key = jump_entry_key(entry);
bool type = static_key_type(key);
- bool branch = jump_entry_branch(entry);
+ bool branch = jump_entry_is_branch(entry);

/* See the comment in linux/jump_label.h */
return type ^ branch;
@@ -565,7 +555,7 @@ static int jump_label_add_module(struct module *mod)
continue;

key = iterk;
- if (within_module(iter->key, mod)) {
+ if (within_module((unsigned long)key, mod)) {
static_key_set_entries(key, iter);
continue;
}
@@ -615,7 +605,7 @@ static void jump_label_del_module(struct module *mod)

key = jump_entry_key(iter);

- if (within_module(iter->key, mod))
+ if (within_module((unsigned long)key, mod))
continue;

/* No memory during module load */
@@ -659,8 +649,8 @@ static void jump_label_invalidate_module_init(struct module *mod)
struct jump_entry *iter;

for (iter = iter_start; iter < iter_stop; iter++) {
- if (within_module_init(iter->code, mod))
- iter->code = 0;
+ if (within_module_init(jump_entry_code(iter), mod))
+ jump_entry_set_init(iter);
}
}

--
2.17.1


2018-07-02 18:30:37

by Kees Cook

Subject: Re: [PATCH v2 7/8] jump_label: annotate entries that operate on __init code earlier

On Mon, Jul 2, 2018 at 11:11 AM, Ard Biesheuvel
<[email protected]> wrote:
> Jump table entries are mostly read-only, with the exception of the
> init and module loader code that defuses entries that point into init
> code when the code being referred to is freed.
>
> For robustness, it would be better to move these entries into the
> ro_after_init section, but clearing the 'code' member of each jump
> table entry referring to init code at module load time races with the
> module_enable_ro() call that remaps the ro_after_init section read
> only, so we'd like to do it earlier.
>
> So given that whether such an entry refers to init code can be decided
> much earlier, we can pull this check forward. Since we may still need
> the code entry at this point, let's switch to setting a low bit in the
> 'key' member just like we do to annotate the default state of a jump
> table entry.
>
> Signed-off-by: Ard Biesheuvel <[email protected]>
> ---
> include/linux/jump_label.h | 11 ++---
> init/main.c | 1 -
> kernel/jump_label.c | 48 ++++++--------------
> 3 files changed, 18 insertions(+), 42 deletions(-)

Net reduction in code, too! :)

Reviewed-by: Kees Cook <[email protected]>

-Kees

--
Kees Cook
Pixel Security

2018-07-02 18:31:16

by Kees Cook

Subject: Re: [PATCH v2 8/8] jump_table: move entries into ro_after_init region

On Mon, Jul 2, 2018 at 11:11 AM, Ard Biesheuvel
<[email protected]> wrote:
> The __jump_table sections emitted into the core kernel and into
> each module consist of statically initialized references into
> other parts of the code, and with the exception of entries that
> point into init code, which are defused at post-init time, these
> data structures are never modified.
>
> So let's move them into the ro_after_init section, to prevent them
> from being corrupted inadvertently by buggy code, or deliberately
> by an attacker.
>
> Signed-off-by: Ard Biesheuvel <[email protected]>

Yay more things read-only! :)

Reviewed-by: Kees Cook <[email protected]>

--
Kees Cook
Pixel Security

2018-07-03 07:33:20

by kernel test robot

Subject: Re: [PATCH v2 4/8] x86: add support for 64-bit place relative relocations

Hi Ard,

I love your patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.18-rc3 next-20180702]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url: https://github.com/0day-ci/linux/commits/Ard-Biesheuvel/add-support-for-relative-references-in-jump-tables/20180703-031712
config: um-x86_64_defconfig (attached as .config)
compiler: gcc-7 (Debian 7.3.0-16) 7.3.0
reproduce:
# save the attached .config to linux build tree
make ARCH=um SUBARCH=x86_64

All errors (new ones prefixed by >>):

arch/x86/um/../kernel/module.c: In function 'apply_relocate_add':
>> arch/x86/um/../kernel/module.c:204:8: error: 'R_X86_64_PC64' undeclared (first use in this function); did you mean 'R_X86_64_PC16'?
     case R_X86_64_PC64:
          ^~~~~~~~~~~~~
          R_X86_64_PC16
arch/x86/um/../kernel/module.c:204:8: note: each undeclared identifier is reported only once for each function it appears in

vim +204 arch/x86/um/../kernel/module.c

    99
   100  #ifdef CONFIG_X86_32
   101  int apply_relocate(Elf32_Shdr *sechdrs,
   102                     const char *strtab,
   103                     unsigned int symindex,
   104                     unsigned int relsec,
   105                     struct module *me)
   106  {
   107          unsigned int i;
   108          Elf32_Rel *rel = (void *)sechdrs[relsec].sh_addr;
   109          Elf32_Sym *sym;
   110          uint32_t *location;
   111
   112          DEBUGP("Applying relocate section %u to %u\n",
   113                 relsec, sechdrs[relsec].sh_info);
   114          for (i = 0; i < sechdrs[relsec].sh_size / sizeof(*rel); i++) {
   115                  /* This is where to make the change */
   116                  location = (void *)sechdrs[sechdrs[relsec].sh_info].sh_addr
   117                          + rel[i].r_offset;
   118                  /* This is the symbol it is referring to.  Note that all
   119                     undefined symbols have been resolved. */
   120                  sym = (Elf32_Sym *)sechdrs[symindex].sh_addr
   121                          + ELF32_R_SYM(rel[i].r_info);
   122
   123                  switch (ELF32_R_TYPE(rel[i].r_info)) {
   124                  case R_386_32:
   125                          /* We add the value into the location given */
   126                          *location += sym->st_value;
   127                          break;
   128                  case R_386_PC32:
   129                          /* Add the value, subtract its position */
   130                          *location += sym->st_value - (uint32_t)location;
   131                          break;
   132                  default:
   133                          pr_err("%s: Unknown relocation: %u\n",
   134                                 me->name, ELF32_R_TYPE(rel[i].r_info));
   135                          return -ENOEXEC;
   136                  }
   137          }
   138          return 0;
   139  }
   140  #else /*X86_64*/
   141  int apply_relocate_add(Elf64_Shdr *sechdrs,
   142                         const char *strtab,
   143                         unsigned int symindex,
   144                         unsigned int relsec,
   145                         struct module *me)
   146  {
   147          unsigned int i;
   148          Elf64_Rela *rel = (void *)sechdrs[relsec].sh_addr;
   149          Elf64_Sym *sym;
   150          void *loc;
   151          u64 val;
   152
   153          DEBUGP("Applying relocate section %u to %u\n",
   154                 relsec, sechdrs[relsec].sh_info);
   155          for (i = 0; i < sechdrs[relsec].sh_size / sizeof(*rel); i++) {
   156                  /* This is where to make the change */
   157                  loc = (void *)sechdrs[sechdrs[relsec].sh_info].sh_addr
   158                          + rel[i].r_offset;
   159
   160                  /* This is the symbol it is referring to.  Note that all
   161                     undefined symbols have been resolved. */
   162                  sym = (Elf64_Sym *)sechdrs[symindex].sh_addr
   163                          + ELF64_R_SYM(rel[i].r_info);
   164
   165                  DEBUGP("type %d st_value %Lx r_addend %Lx loc %Lx\n",
   166                         (int)ELF64_R_TYPE(rel[i].r_info),
   167                         sym->st_value, rel[i].r_addend, (u64)loc);
   168
   169                  val = sym->st_value + rel[i].r_addend;
   170
   171                  switch (ELF64_R_TYPE(rel[i].r_info)) {
   172                  case R_X86_64_NONE:
   173                          break;
   174                  case R_X86_64_64:
   175                          if (*(u64 *)loc != 0)
   176                                  goto invalid_relocation;
   177                          *(u64 *)loc = val;
   178                          break;
   179                  case R_X86_64_32:
   180                          if (*(u32 *)loc != 0)
   181                                  goto invalid_relocation;
   182                          *(u32 *)loc = val;
   183                          if (val != *(u32 *)loc)
   184                                  goto overflow;
   185                          break;
   186                  case R_X86_64_32S:
   187                          if (*(s32 *)loc != 0)
   188                                  goto invalid_relocation;
   189                          *(s32 *)loc = val;
   190                          if ((s64)val != *(s32 *)loc)
   191                                  goto overflow;
   192                          break;
   193                  case R_X86_64_PC32:
   194                  case R_X86_64_PLT32:
   195                          if (*(u32 *)loc != 0)
   196                                  goto invalid_relocation;
   197                          val -= (u64)loc;
   198                          *(u32 *)loc = val;
   199  #if 0
   200                          if ((s64)val != *(s32 *)loc)
   201                                  goto overflow;
   202  #endif
   203                          break;
 > 204                  case R_X86_64_PC64:
   205                          if (*(u64 *)loc != 0)
   206                                  goto invalid_relocation;
   207                          val -= (u64)loc;
   208                          *(u64 *)loc = val;
   209                          break;
   210                  default:
   211                          pr_err("%s: Unknown rela relocation: %llu\n",
   212                                 me->name, ELF64_R_TYPE(rel[i].r_info));
   213                          return -ENOEXEC;
   214                  }
   215          }
   216          return 0;
   217
   218  invalid_relocation:
   219          pr_err("x86/modules: Skipping invalid relocation target, existing value is nonzero for type %d, loc %p, val %Lx\n",
   220                 (int)ELF64_R_TYPE(rel[i].r_info), loc, val);
   221          return -ENOEXEC;
   222
   223  overflow:
   224          pr_err("overflow in relocation type %d val %Lx\n",
   225                 (int)ELF64_R_TYPE(rel[i].r_info), val);
   226          pr_err("`%s' likely not compiled with -mcmodel=kernel\n",
   227                 me->name);
   228          return -ENOEXEC;
   229  }
   230  #endif
   231

---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation


Attachments:
(No filename) (5.67 kB)
.config.gz (7.38 kB)

2018-07-03 07:52:56

by Jessica Yu

Subject: Re: [PATCH v2 8/8] jump_table: move entries into ro_after_init region

+++ Ard Biesheuvel [02/07/18 20:11 +0200]:
>The __jump_table sections emitted into the core kernel and into
>each module consist of statically initialized references into
>other parts of the code, and with the exception of entries that
>point into init code, which are defused at post-init time, these
>data structures are never modified.
>
>So let's move them into the ro_after_init section, to prevent them
>from being corrupted inadvertently by buggy code, or deliberately
>by an attacker.
>
>Signed-off-by: Ard Biesheuvel <[email protected]>

For module parts:

Acked-by: Jessica Yu <[email protected]>


2018-07-04 08:01:56

by Heiko Carstens

[permalink] [raw]
Subject: Re: [PATCH v2 0/8] add support for relative references in jump tables

On Mon, Jul 02, 2018 at 08:11:37PM +0200, Ard Biesheuvel wrote:
> This series implements support for emitting the data structures associated
> with jump tables as 32-bit relative references instead of absolute
> references, which take up more space on builds that target 64-bit
> architectures, or implement self relocation [or both].
>
> This series enables it for arm64 and x86, although other architectures
> might benefit as well.

Hello Ard,

feel free to add the patch below, which adds s390 support, to your series.

> Changes since v1:
> - change the relative reference to the static key to a 64-bit wide one on 64
> bit architectures; this is necessary on arm64, which allows modules to
> reside anywhere within a 4 GB window covering the core kernel text, which
> means a 32-bit signed quantity with its +/- 2 GB range is insufficient.
> Note that x86_64 changes are in preparation that widen the relocation
> range as well (using the PIE linker), so I assumed that the same change
> is appropriate for x86 as well.

FWIW, kernel modules on s390 have been located more than 2GB away from the
core kernel text for ages. So this is required for s390 as well.

From 77d87236f3d5474f33c25534d8ba2c7c54c88c55 Mon Sep 17 00:00:00 2001
From: Heiko Carstens <[email protected]>
Date: Wed, 4 Jul 2018 09:13:37 +0200
Subject: [PATCH] s390/jump_label: switch to relative references

Signed-off-by: Heiko Carstens <[email protected]>
---
arch/s390/Kconfig | 1 +
arch/s390/include/asm/jump_label.h | 40 +++++++++++++++-----------------------
arch/s390/kernel/jump_label.c | 11 ++++++-----
3 files changed, 23 insertions(+), 29 deletions(-)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index baed39772c84..0349fb06a2ec 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -121,6 +121,7 @@ config S390
select HAVE_ALIGNED_STRUCT_PAGE if SLUB
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_JUMP_LABEL
+ select HAVE_ARCH_JUMP_LABEL_RELATIVE
select CPU_NO_EFFICIENT_FFS if !HAVE_MARCH_Z9_109_FEATURES
select HAVE_ARCH_SECCOMP_FILTER
select HAVE_ARCH_SOFT_DIRTY
diff --git a/arch/s390/include/asm/jump_label.h b/arch/s390/include/asm/jump_label.h
index 40f651292aa7..e2d3e6c43395 100644
--- a/arch/s390/include/asm/jump_label.h
+++ b/arch/s390/include/asm/jump_label.h
@@ -14,41 +14,33 @@
* We use a brcl 0,2 instruction for jump labels at compile time so it
* can be easily distinguished from a hotpatch generated instruction.
*/
-static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
+static inline bool arch_static_branch(struct static_key *key, bool branch)
{
- asm_volatile_goto("0: brcl 0,"__stringify(JUMP_LABEL_NOP_OFFSET)"\n"
- ".pushsection __jump_table, \"aw\"\n"
- ".balign 8\n"
- ".quad 0b, %l[label], %0\n"
- ".popsection\n"
- : : "X" (&((char *)key)[branch]) : : label);
-
+ asm_volatile_goto("0: brcl 0,"__stringify(JUMP_LABEL_NOP_OFFSET)"\n"
+ ".pushsection __jump_table,\"aw\"\n"
+ ".balign 8\n"
+ ".long 0b-.,%l[label]-.\n"
+ ".quad %0-.\n"
+ ".popsection\n"
+ : : "X" (&((char *)key)[branch]) : : label);
return false;
label:
return true;
}

-static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch)
+static inline bool arch_static_branch_jump(struct static_key *key, bool branch)
{
- asm_volatile_goto("0: brcl 15, %l[label]\n"
- ".pushsection __jump_table, \"aw\"\n"
- ".balign 8\n"
- ".quad 0b, %l[label], %0\n"
- ".popsection\n"
- : : "X" (&((char *)key)[branch]) : : label);
-
+ asm_volatile_goto("0: brcl 15,%l[label]\n"
+ ".pushsection __jump_table,\"aw\"\n"
+ ".balign 8\n"
+ ".long 0b-.,%l[label]-.\n"
+ ".quad %0-.\n"
+ ".popsection\n"
+ : : "X" (&((char *)key)[branch]) : : label);
return false;
label:
return true;
}

-typedef unsigned long jump_label_t;
-
-struct jump_entry {
- jump_label_t code;
- jump_label_t target;
- jump_label_t key;
-};
-
#endif /* __ASSEMBLY__ */
#endif
diff --git a/arch/s390/kernel/jump_label.c b/arch/s390/kernel/jump_label.c
index 43f8430fb67d..50a1798604a8 100644
--- a/arch/s390/kernel/jump_label.c
+++ b/arch/s390/kernel/jump_label.c
@@ -33,13 +33,13 @@ static void jump_label_make_branch(struct jump_entry *entry, struct insn *insn)
{
/* brcl 15,offset */
insn->opcode = 0xc0f4;
- insn->offset = (entry->target - entry->code) >> 1;
+ insn->offset = (jump_entry_target(entry) - jump_entry_code(entry)) >> 1;
}

static void jump_label_bug(struct jump_entry *entry, struct insn *expected,
struct insn *new)
{
- unsigned char *ipc = (unsigned char *)entry->code;
+ unsigned char *ipc = (unsigned char *)jump_entry_code(entry);
unsigned char *ipe = (unsigned char *)expected;
unsigned char *ipn = (unsigned char *)new;

@@ -59,6 +59,7 @@ static void __jump_label_transform(struct jump_entry *entry,
enum jump_label_type type,
int init)
{
+ void *code = (void *)jump_entry_code(entry);
struct insn old, new;

if (type == JUMP_LABEL_JMP) {
@@ -69,13 +70,13 @@ static void __jump_label_transform(struct jump_entry *entry,
jump_label_make_nop(entry, &new);
}
if (init) {
- if (memcmp((void *)entry->code, &orignop, sizeof(orignop)))
+ if (memcmp(code, &orignop, sizeof(orignop)))
jump_label_bug(entry, &orignop, &new);
} else {
- if (memcmp((void *)entry->code, &old, sizeof(old)))
+ if (memcmp(code, &old, sizeof(old)))
jump_label_bug(entry, &old, &new);
}
- s390_kernel_write((void *)entry->code, &new, sizeof(new));
+ s390_kernel_write(code, &new, sizeof(new));
}

static int __sm_arch_jump_label_transform(void *data)
--
2.16.4
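
The ".long 0b-.,%l[label]-." and ".quad %0-." directives above store each
address as an offset from the field's own location, so the absolute address
is recovered by adding the field's address back in. A minimal sketch of the
layout and accessors the series introduces (not the exact kernel code):

        struct jump_entry {
                s32 code;   /* patched instruction, as offset from &code */
                s32 target; /* branch target, as offset from &target     */
                long key;   /* static key, as offset from &key; 64 bits
                               wide so keys more than +/- 2 GB away, as
                               with s390 modules, remain reachable */
        };

        static inline unsigned long jump_entry_code(const struct jump_entry *entry)
        {
                return (unsigned long)&entry->code + entry->code;
        }

        static inline unsigned long jump_entry_target(const struct jump_entry *entry)
        {
                return (unsigned long)&entry->target + entry->target;
        }

This is also why jump_label_make_branch() computes the displacement as
(jump_entry_target(entry) - jump_entry_code(entry)) >> 1: brcl encodes its
offset in halfwords relative to the instruction address.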


2018-07-04 08:51:06

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [PATCH v2 0/8] add support for relative references in jump tables

On 4 July 2018 at 09:59, Heiko Carstens <[email protected]> wrote:
> On Mon, Jul 02, 2018 at 08:11:37PM +0200, Ard Biesheuvel wrote:
>> This series implements support for emitting the data structures associated
>> with jump tables as 32-bit relative references instead of absolute
>> references, which take up more space on builds that target 64-bit
>> architectures, or implement self relocation [or both].
>>
>> This series enables it for arm64 and x86, although other architectures
>> might benefit as well.
>
> Hello Ard,
>
> feel free to add the patch below, which adds s390 support, to your series.
>

Thanks, I will incorporate it into v3 and beyond.

>> Changes since v1:
>> - change the relative reference to the static key to a 64-bit wide one on 64
>> bit architectures; this is necessary on arm64, which allows modules to
>> reside anywhere within a 4 GB window covering the core kernel text, which
>> means a 32-bit signed quantity with its +/- 2 GB range is insufficient.
>> Note that x86_64 changes are in preparation that widen the relocation
>> range as well (using the PIE linker), so I assumed that the same change
>> is appropriate for x86 as well.
>
> FWIW, kernel modules on s390 have been located more than 2GB away from the
> core kernel text for ages. So this is required for s390 as well.
>

OK, good to know.

--
Ard.


2018-09-13 21:40:57

by Kees Cook

[permalink] [raw]
Subject: Re: [PATCH v2 0/8] add support for relative references in jump tables

On Wed, Jul 4, 2018 at 1:50 AM, Ard Biesheuvel
<[email protected]> wrote:
> On 4 July 2018 at 09:59, Heiko Carstens <[email protected]> wrote:
>> On Mon, Jul 02, 2018 at 08:11:37PM +0200, Ard Biesheuvel wrote:
>>> This series implements support for emitting the data structures associated
>>> with jump tables as 32-bit relative references instead of absolute
>>> references, which take up more space on builds that target 64-bit
>>> architectures, or implement self relocation [or both].
>>>
>>> This series enables it for arm64 and x86, although other architectures
>>> might benefit as well.
>>
>> Hello Ard,
>>
>> feel free to add the patch below which adds support for s390 to your series.
>>
>
> Thanks, I will incorporate it into v3 and beyond.

Just a quick status check! :) What're your plans for this series? I'd
still love to see it.

-Kees

--
Kees Cook
Pixel Security