2019-06-14 01:08:55

by Josh Poimboeuf

Subject: [PATCH 0/3] module: Livepatch/ftrace fixes

Patch 1 fixes a module loading race between livepatch and ftrace.

Patch 2 adds lockdep assertions associated with patch 1.

Patch 3 fixes a theoretical bug in the module __ro_after_init section
handling.

Josh Poimboeuf (3):
module: Fix livepatch/ftrace module text permissions race
module: Add text_mutex lockdep assertions for page attribute changes
module: Improve module __ro_after_init handling

arch/arm64/kernel/ftrace.c | 2 +-
include/linux/module.h | 4 ++--
kernel/livepatch/core.c | 10 ++++++++--
kernel/module.c | 29 ++++++++++++++++++++++-------
kernel/trace/ftrace.c | 10 +++++++++-
5 files changed, 42 insertions(+), 13 deletions(-)

--
2.20.1


2019-06-14 01:09:05

by Josh Poimboeuf

Subject: [PATCH 2/3] module: Add text_mutex lockdep assertions for page attribute changes

External callers of the module page attribute change functions now need
to have the text_mutex. Enforce that with lockdep assertions.
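
For reference, an external caller is then expected to follow the pattern
that patch 1 introduces in klp_init_object_loaded(), roughly (illustrative
sketch only, not part of the diff below):

        mutex_lock(&text_mutex);

        module_disable_ro(mod);
        /* ... modify module text or data ... */
        module_enable_ro(mod, true);

        mutex_unlock(&text_mutex);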

Signed-off-by: Josh Poimboeuf <[email protected]>
---
kernel/module.c | 27 +++++++++++++++++++++------
1 file changed, 21 insertions(+), 6 deletions(-)

diff --git a/kernel/module.c b/kernel/module.c
index 6e6712b3aaf5..e43a90ee2d23 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -64,6 +64,7 @@
#include <linux/bsearch.h>
#include <linux/dynamic_debug.h>
#include <linux/audit.h>
+#include <linux/memory.h>
#include <uapi/linux/module.h>
#include "module-internal.h"

@@ -1943,6 +1944,8 @@ static void frob_writable_data(const struct module_layout *layout,
/* livepatching wants to disable read-only so it can frob module. */
void module_disable_ro(const struct module *mod)
{
+ lockdep_assert_held(&text_mutex);
+
if (!rodata_enabled)
return;

@@ -1953,7 +1956,7 @@ void module_disable_ro(const struct module *mod)
frob_rodata(&mod->init_layout, set_memory_rw);
}

-void module_enable_ro(const struct module *mod, bool after_init)
+void __module_enable_ro(const struct module *mod, bool after_init)
{
if (!rodata_enabled)
return;
@@ -1974,7 +1977,14 @@ void module_enable_ro(const struct module *mod, bool after_init)
frob_ro_after_init(&mod->core_layout, set_memory_ro);
}

-static void module_enable_nx(const struct module *mod)
+void module_enable_ro(const struct module *mod, bool after_init)
+{
+ lockdep_assert_held(&text_mutex);
+
+ __module_enable_ro(mod, after_init);
+}
+
+static void __module_enable_nx(const struct module *mod)
{
frob_rodata(&mod->core_layout, set_memory_nx);
frob_ro_after_init(&mod->core_layout, set_memory_nx);
@@ -1988,6 +1998,8 @@ void set_all_modules_text_rw(void)
{
struct module *mod;

+ lockdep_assert_held(&text_mutex);
+
if (!rodata_enabled)
return;

@@ -2007,6 +2019,8 @@ void set_all_modules_text_ro(void)
{
struct module *mod;

+ lockdep_assert_held(&text_mutex);
+
if (!rodata_enabled)
return;

@@ -2027,7 +2041,8 @@ void set_all_modules_text_ro(void)
mutex_unlock(&module_mutex);
}
#else
-static void module_enable_nx(const struct module *mod) { }
+static void __module_enable_ro(const struct module *mod, bool after_init) { }
+static void __module_enable_nx(const struct module *mod) { }
#endif

#ifdef CONFIG_LIVEPATCH
@@ -3519,7 +3534,7 @@ static noinline int do_init_module(struct module *mod)
/* Switch to core kallsyms now init is done: kallsyms may be walking! */
rcu_assign_pointer(mod->kallsyms, &mod->core_kallsyms);
#endif
- module_enable_ro(mod, true);
+ __module_enable_ro(mod, true);
mod_tree_remove_init(mod);
module_arch_freeing_init(mod);
mod->init_layout.base = NULL;
@@ -3626,8 +3641,8 @@ static int complete_formation(struct module *mod, struct load_info *info)
/* This relies on module_mutex for list integrity. */
module_bug_finalize(info->hdr, info->sechdrs, mod);

- module_enable_ro(mod, false);
- module_enable_nx(mod);
+ __module_enable_ro(mod, false);
+ __module_enable_nx(mod);

/* Mark state as coming so strong_try_module_get() ignores us,
* but kallsyms etc. can see us. */
--
2.20.1

2019-06-14 01:09:39

by Josh Poimboeuf

Subject: [PATCH 3/3] module: Improve module __ro_after_init handling

module_enable_ro() can be called in the following scenario:

[load livepatch module]
initcall
klp_enable_patch()
klp_init_patch()
klp_init_object()
klp_init_object_loaded()
module_enable_ro(pmod, true)

In this case, module_enable_ro()'s 'after_init' argument is true, which
prematurely changes the module's __ro_after_init area to read-only.

If, theoretically, a registrant of the MODULE_STATE_LIVE module notifier
tried to write to the livepatch module's __ro_after_init section, an
oops would occur.

Remove the 'after_init' argument and instead make __module_enable_ro()
smart enough to only frob the __ro_after_init section after the module
has gone live.

Reported-by: Petr Mladek <[email protected]>
Signed-off-by: Josh Poimboeuf <[email protected]>
---
arch/arm64/kernel/ftrace.c | 2 +-
include/linux/module.h | 4 ++--
kernel/livepatch/core.c | 4 ++--
kernel/module.c | 14 +++++++-------
4 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
index 65a51331088e..c17d98aafc93 100644
--- a/arch/arm64/kernel/ftrace.c
+++ b/arch/arm64/kernel/ftrace.c
@@ -120,7 +120,7 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
/* point the trampoline to our ftrace entry point */
module_disable_ro(mod);
*mod->arch.ftrace_trampoline = trampoline;
- module_enable_ro(mod, true);
+ module_enable_ro(mod);

/* update trampoline before patching in the branch */
smp_wmb();
diff --git a/include/linux/module.h b/include/linux/module.h
index 188998d3dca9..4d6922f3760e 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -844,12 +844,12 @@ extern int module_sysfs_initialized;
#ifdef CONFIG_STRICT_MODULE_RWX
extern void set_all_modules_text_rw(void);
extern void set_all_modules_text_ro(void);
-extern void module_enable_ro(const struct module *mod, bool after_init);
+extern void module_enable_ro(const struct module *mod);
extern void module_disable_ro(const struct module *mod);
#else
static inline void set_all_modules_text_rw(void) { }
static inline void set_all_modules_text_ro(void) { }
-static inline void module_enable_ro(const struct module *mod, bool after_init) { }
+static inline void module_enable_ro(const struct module *mod) { }
static inline void module_disable_ro(const struct module *mod) { }
#endif

diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index c4ce08f43bd6..f9882ffa2f44 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -724,13 +724,13 @@ static int klp_init_object_loaded(struct klp_patch *patch,
module_disable_ro(patch->mod);
ret = klp_write_object_relocations(patch->mod, obj);
if (ret) {
- module_enable_ro(patch->mod, true);
+ module_enable_ro(patch->mod);
mutex_unlock(&text_mutex);
return ret;
}

arch_klp_init_object_loaded(patch, obj);
- module_enable_ro(patch->mod, true);
+ module_enable_ro(patch->mod);

mutex_unlock(&text_mutex);

diff --git a/kernel/module.c b/kernel/module.c
index e43a90ee2d23..fb3561e0c5b0 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -1956,7 +1956,7 @@ void module_disable_ro(const struct module *mod)
frob_rodata(&mod->init_layout, set_memory_rw);
}

-void __module_enable_ro(const struct module *mod, bool after_init)
+static void __module_enable_ro(const struct module *mod)
{
if (!rodata_enabled)
return;
@@ -1973,15 +1973,15 @@ void __module_enable_ro(const struct module *mod, bool after_init)

frob_rodata(&mod->init_layout, set_memory_ro);

- if (after_init)
+ if (mod->state == MODULE_STATE_LIVE)
frob_ro_after_init(&mod->core_layout, set_memory_ro);
}

-void module_enable_ro(const struct module *mod, bool after_init)
+void module_enable_ro(const struct module *mod)
{
lockdep_assert_held(&text_mutex);

- __module_enable_ro(mod, after_init);
+ __module_enable_ro(mod);
}

static void __module_enable_nx(const struct module *mod)
@@ -2041,7 +2041,7 @@ void set_all_modules_text_ro(void)
mutex_unlock(&module_mutex);
}
#else
-static void __module_enable_ro(const struct module *mod, bool after_init) { }
+static void __module_enable_ro(const struct module *mod) { }
static void __module_enable_nx(const struct module *mod) { }
#endif

@@ -3534,7 +3534,7 @@ static noinline int do_init_module(struct module *mod)
/* Switch to core kallsyms now init is done: kallsyms may be walking! */
rcu_assign_pointer(mod->kallsyms, &mod->core_kallsyms);
#endif
- __module_enable_ro(mod, true);
+ __module_enable_ro(mod);
mod_tree_remove_init(mod);
module_arch_freeing_init(mod);
mod->init_layout.base = NULL;
@@ -3641,7 +3641,7 @@ static int complete_formation(struct module *mod, struct load_info *info)
/* This relies on module_mutex for list integrity. */
module_bug_finalize(info->hdr, info->sechdrs, mod);

- __module_enable_ro(mod, false);
+ __module_enable_ro(mod);
__module_enable_nx(mod);

/* Mark state as coming so strong_try_module_get() ignores us,
--
2.20.1

2019-06-14 01:10:08

by Josh Poimboeuf

Subject: [PATCH 1/3] module: Fix livepatch/ftrace module text permissions race

It's possible for livepatch and ftrace to be toggling a module's text
permissions at the same time, resulting in the following panic:

BUG: unable to handle page fault for address: ffffffffc005b1d9
#PF: supervisor write access in kernel mode
#PF: error_code(0x0003) - permissions violation
PGD 3ea0c067 P4D 3ea0c067 PUD 3ea0e067 PMD 3cc13067 PTE 3b8a1061
Oops: 0003 [#1] PREEMPT SMP PTI
CPU: 1 PID: 453 Comm: insmod Tainted: G O K 5.2.0-rc1-a188339ca5 #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-20181126_142135-anatol 04/01/2014
RIP: 0010:apply_relocate_add+0xbe/0x14c
Code: fa 0b 74 21 48 83 fa 18 74 38 48 83 fa 0a 75 40 eb 08 48 83 38 00 74 33 eb 53 83 38 00 75 4e 89 08 89 c8 eb 0a 83 38 00 75 43 <89> 08 48 63 c1 48 39 c8 74 2e eb 48 83 38 00 75 32 48 29 c1 89 08
RSP: 0018:ffffb223c00dbb10 EFLAGS: 00010246
RAX: ffffffffc005b1d9 RBX: 0000000000000000 RCX: ffffffff8b200060
RDX: 000000000000000b RSI: 0000004b0000000b RDI: ffff96bdfcd33000
RBP: ffffb223c00dbb38 R08: ffffffffc005d040 R09: ffffffffc005c1f0
R10: ffff96bdfcd33c40 R11: ffff96bdfcd33b80 R12: 0000000000000018
R13: ffffffffc005c1f0 R14: ffffffffc005e708 R15: ffffffff8b2fbc74
FS: 00007f5f447beba8(0000) GS:ffff96bdff900000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffffffffc005b1d9 CR3: 000000003cedc002 CR4: 0000000000360ea0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
klp_init_object_loaded+0x10f/0x219
? preempt_latency_start+0x21/0x57
klp_enable_patch+0x662/0x809
? virt_to_head_page+0x3a/0x3c
? kfree+0x8c/0x126
patch_init+0x2ed/0x1000 [livepatch_test02]
? 0xffffffffc0060000
do_one_initcall+0x9f/0x1c5
? kmem_cache_alloc_trace+0xc4/0xd4
? do_init_module+0x27/0x210
do_init_module+0x5f/0x210
load_module+0x1c41/0x2290
? fsnotify_path+0x3b/0x42
? strstarts+0x2b/0x2b
? kernel_read+0x58/0x65
__do_sys_finit_module+0x9f/0xc3
? __do_sys_finit_module+0x9f/0xc3
__x64_sys_finit_module+0x1a/0x1c
do_syscall_64+0x52/0x61
entry_SYSCALL_64_after_hwframe+0x44/0xa9

The above panic occurs when loading two modules at the same time with
ftrace enabled, where at least one of the modules is a livepatch module:

  CPU0                            CPU1
  klp_enable_patch()
    klp_init_object_loaded()
      module_disable_ro()
                                  ftrace_module_enable()
                                    ftrace_arch_code_modify_post_process()
                                      set_all_modules_text_ro()
      klp_write_object_relocations()
        apply_relocate_add()
          *patches read-only code* - BOOM

A similar race exists when toggling ftrace while loading a livepatch
module.

Fix it by ensuring that the livepatch and ftrace code patching
operations -- and their respective permissions changes -- are protected
by the text_mutex.

Reported-by: Johannes Erdfelt <[email protected]>
Fixes: 444d13ff10fb ("modules: add ro_after_init support")
Signed-off-by: Josh Poimboeuf <[email protected]>
Acked-by: Jessica Yu <[email protected]>
Reviewed-by: Petr Mladek <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
---
kernel/livepatch/core.c | 6 ++++++
kernel/trace/ftrace.c | 10 +++++++++-
2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 2398832947c6..c4ce08f43bd6 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -18,6 +18,7 @@
#include <linux/elf.h>
#include <linux/moduleloader.h>
#include <linux/completion.h>
+#include <linux/memory.h>
#include <asm/cacheflush.h>
#include "core.h"
#include "patch.h"
@@ -718,16 +719,21 @@ static int klp_init_object_loaded(struct klp_patch *patch,
struct klp_func *func;
int ret;

+ mutex_lock(&text_mutex);
+
module_disable_ro(patch->mod);
ret = klp_write_object_relocations(patch->mod, obj);
if (ret) {
module_enable_ro(patch->mod, true);
+ mutex_unlock(&text_mutex);
return ret;
}

arch_klp_init_object_loaded(patch, obj);
module_enable_ro(patch->mod, true);

+ mutex_unlock(&text_mutex);
+
klp_for_each_func(obj, func) {
ret = klp_find_object_symbol(obj->name, func->old_name,
func->old_sympos,
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 5c3eadb143ed..01d383c05346 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -34,6 +34,7 @@
#include <linux/hash.h>
#include <linux/rcupdate.h>
#include <linux/kprobes.h>
+#include <linux/memory.h>

#include <trace/events/sched.h>

@@ -2612,10 +2613,12 @@ static void ftrace_run_update_code(int command)
{
int ret;

+ mutex_lock(&text_mutex);
+
ret = ftrace_arch_code_modify_prepare();
FTRACE_WARN_ON(ret);
if (ret)
- return;
+ goto out_unlock;

/*
* By default we use stop_machine() to modify the code.
@@ -2627,6 +2630,9 @@ static void ftrace_run_update_code(int command)

ret = ftrace_arch_code_modify_post_process();
FTRACE_WARN_ON(ret);
+
+out_unlock:
+ mutex_unlock(&text_mutex);
}

static void ftrace_run_modify_code(struct ftrace_ops *ops, int command,
@@ -5778,6 +5784,7 @@ void ftrace_module_enable(struct module *mod)
struct ftrace_page *pg;

mutex_lock(&ftrace_lock);
+ mutex_lock(&text_mutex);

if (ftrace_disabled)
goto out_unlock;
@@ -5839,6 +5846,7 @@ void ftrace_module_enable(struct module *mod)
ftrace_arch_code_modify_post_process();

out_unlock:
+ mutex_unlock(&text_mutex);
mutex_unlock(&ftrace_lock);

process_cached_mods(mod->name);
--
2.20.1

2019-06-14 14:05:45

by Petr Mladek

Subject: Re: [PATCH 2/3] module: Add text_mutex lockdep assertions for page attribute changes

On Thu 2019-06-13 20:07:23, Josh Poimboeuf wrote:
> External callers of the module page attribute change functions now need
> to have the text_mutex. Enforce that with lockdep assertions.
>
> diff --git a/kernel/module.c b/kernel/module.c
> index 6e6712b3aaf5..e43a90ee2d23 100644
> --- a/kernel/module.c
> +++ b/kernel/module.c
> @@ -3519,7 +3534,7 @@ static noinline int do_init_module(struct module *mod)
> /* Switch to core kallsyms now init is done: kallsyms may be walking! */
> rcu_assign_pointer(mod->kallsyms, &mod->core_kallsyms);
> #endif
> - module_enable_ro(mod, true);
> + __module_enable_ro(mod, true);

This one must be called under text_mutex. Otherwise it might get
called when ftrace is in the middle of modifying the functions.

It should be enough to take text_mutex right around this call.
It will prevent making the code ro when ftrace is doing
the modification. It is also safe the other way around.
set_all_modules_text_ro() does not call frob_ro_after_init().
Therefore ftrace could not make the after_init section RO prematurely.
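
Something like this (just a sketch of the idea, not a tested patch):

        /* in do_init_module() */
        mutex_lock(&text_mutex);
        module_enable_ro(mod, true);    /* the wrapper asserts text_mutex is held */
        mutex_unlock(&text_mutex);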

> mod_tree_remove_init(mod);
> module_arch_freeing_init(mod);
> mod->init_layout.base = NULL;
> @@ -3626,8 +3641,8 @@ static int complete_formation(struct module *mod, struct load_info *info)
> /* This relies on module_mutex for list integrity. */
> module_bug_finalize(info->hdr, info->sechdrs, mod);
>
> - module_enable_ro(mod, false);
> - module_enable_nx(mod);
> + __module_enable_ro(mod, false);
> + __module_enable_nx(mod);

This one is OK. It is called when the module is in
MODULE_STATE_UNFORMED. Therefore it is ignored by ftrace.
The module state is manipulated and checked under module_mutex.

Best Regards,
Petr

2019-06-14 14:16:55

by Petr Mladek

Subject: Re: [PATCH 3/3] module: Improve module __ro_after_init handling

On Thu 2019-06-13 20:07:24, Josh Poimboeuf wrote:
> module_enable_ro() can be called in the following scenario:
>
> [load livepatch module]
> initcall
> klp_enable_patch()
> klp_init_patch()
> klp_init_object()
> klp_init_object_loaded()
> module_enable_ro(pmod, true)
>
> In this case, module_enable_ro()'s 'after_init' argument is true, which
> prematurely changes the module's __ro_after_init area to read-only.
>
> If, theoretically, a registrant of the MODULE_STATE_LIVE module notifier
> tried to write to the livepatch module's __ro_after_init section, an
> oops would occur.
>
> Remove the 'after_init' argument and instead make __module_enable_ro()
> smart enough to only frob the __ro_after_init section after the module
> has gone live.
>
> Reported-by: Petr Mladek <[email protected]>
> Signed-off-by: Josh Poimboeuf <[email protected]>
> ---
> arch/arm64/kernel/ftrace.c | 2 +-
> include/linux/module.h | 4 ++--
> kernel/livepatch/core.c | 4 ++--
> kernel/module.c | 14 +++++++-------
> 4 files changed, 12 insertions(+), 12 deletions(-)
>
> diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
> index 65a51331088e..c17d98aafc93 100644
> --- a/arch/arm64/kernel/ftrace.c
> +++ b/arch/arm64/kernel/ftrace.c
> @@ -120,7 +120,7 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
> /* point the trampoline to our ftrace entry point */
> module_disable_ro(mod);
> *mod->arch.ftrace_trampoline = trampoline;
> - module_enable_ro(mod, true);
> + module_enable_ro(mod);
>
> /* update trampoline before patching in the branch */
> smp_wmb();
> diff --git a/include/linux/module.h b/include/linux/module.h
> index 188998d3dca9..4d6922f3760e 100644
> --- a/include/linux/module.h
> +++ b/include/linux/module.h
> @@ -844,12 +844,12 @@ extern int module_sysfs_initialized;
> #ifdef CONFIG_STRICT_MODULE_RWX
> extern void set_all_modules_text_rw(void);
> extern void set_all_modules_text_ro(void);
> -extern void module_enable_ro(const struct module *mod, bool after_init);
> +extern void module_enable_ro(const struct module *mod);
> extern void module_disable_ro(const struct module *mod);
> #else
> static inline void set_all_modules_text_rw(void) { }
> static inline void set_all_modules_text_ro(void) { }
> -static inline void module_enable_ro(const struct module *mod, bool after_init) { }
> +static inline void module_enable_ro(const struct module *mod) { }
> static inline void module_disable_ro(const struct module *mod) { }
> #endif
>
> diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> index c4ce08f43bd6..f9882ffa2f44 100644
> --- a/kernel/livepatch/core.c
> +++ b/kernel/livepatch/core.c
> @@ -724,13 +724,13 @@ static int klp_init_object_loaded(struct klp_patch *patch,
> module_disable_ro(patch->mod);
> ret = klp_write_object_relocations(patch->mod, obj);
> if (ret) {
> - module_enable_ro(patch->mod, true);
> + module_enable_ro(patch->mod);
> mutex_unlock(&text_mutex);
> return ret;
> }
>
> arch_klp_init_object_loaded(patch, obj);
> - module_enable_ro(patch->mod, true);
> + module_enable_ro(patch->mod);
>
> mutex_unlock(&text_mutex);
>
> diff --git a/kernel/module.c b/kernel/module.c
> index e43a90ee2d23..fb3561e0c5b0 100644
> --- a/kernel/module.c
> +++ b/kernel/module.c
> @@ -1956,7 +1956,7 @@ void module_disable_ro(const struct module *mod)
> frob_rodata(&mod->init_layout, set_memory_rw);
> }
>
> -void __module_enable_ro(const struct module *mod, bool after_init)
> +static void __module_enable_ro(const struct module *mod)
> {
> if (!rodata_enabled)
> return;
> @@ -1973,15 +1973,15 @@ void __module_enable_ro(const struct module *mod, bool after_init)
>
> frob_rodata(&mod->init_layout, set_memory_ro);
>
> - if (after_init)
> + if (mod->state == MODULE_STATE_LIVE)
> frob_ro_after_init(&mod->core_layout, set_memory_ro);

This works only now because __module_enable_ro() is called only from
three locations (klp_init_object_loaded(), complete_formation(),
and do_init_module()). And they are all called in a well-defined order
from load_module().

Only the final call in do_init_module() should touch the after_init
section.

IMHO, the cleanest solution would be to call frob_ro_after_init()
from an extra __module_after_init_enable_ro() or so. This should be
called only from a single place.
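
For illustration, such a helper could look like this (just a sketch built
from the functions visible in the patch above; the helper name is the one
suggested here):

        static void __module_after_init_enable_ro(const struct module *mod)
        {
                if (!rodata_enabled)
                        return;

                frob_ro_after_init(&mod->core_layout, set_memory_ro);
        }

and it would then be called only from do_init_module().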

Best Regards,
Petr

2019-06-14 18:22:12

by Josh Poimboeuf

Subject: Re: [PATCH 2/3] module: Add text_mutex lockdep assertions for page attribute changes

On Fri, Jun 14, 2019 at 04:04:57PM +0200, Petr Mladek wrote:
> On Thu 2019-06-13 20:07:23, Josh Poimboeuf wrote:
> > External callers of the module page attribute change functions now need
> > to have the text_mutex. Enforce that with lockdep assertions.
> >
> > diff --git a/kernel/module.c b/kernel/module.c
> > index 6e6712b3aaf5..e43a90ee2d23 100644
> > --- a/kernel/module.c
> > +++ b/kernel/module.c
> > @@ -3519,7 +3534,7 @@ static noinline int do_init_module(struct module *mod)
> > /* Switch to core kallsyms now init is done: kallsyms may be walking! */
> > rcu_assign_pointer(mod->kallsyms, &mod->core_kallsyms);
> > #endif
> > - module_enable_ro(mod, true);
> > + __module_enable_ro(mod, true);
>
> This one must be called under text_mutex. Otherwise it might get
> called when ftrace is in the middle of modifying the functions.
>
> It should be enough to take text_mutex right around this call.
> It will prevent making the code ro when ftrace is doing
> the modification. It is also safe the other way around.
> set_all_modules_text_ro() does not call frob_ro_after_init().
> Therefore ftrace could not make the after_init section RO prematurely.
>
> > mod_tree_remove_init(mod);
> > module_arch_freeing_init(mod);
> > mod->init_layout.base = NULL;
> > @@ -3626,8 +3641,8 @@ static int complete_formation(struct module *mod, struct load_info *info)
> > /* This relies on module_mutex for list integrity. */
> > module_bug_finalize(info->hdr, info->sechdrs, mod);
> >
> > - module_enable_ro(mod, false);
> > - module_enable_nx(mod);
> > + __module_enable_ro(mod, false);
> > + __module_enable_nx(mod);
>
> This one is OK. It is called when the module is in
> MODULE_STATE_UNFORMED. Therefore it is ignored by ftrace.
> The module state is manipulated and checked under module_mutex.

Yes, good catch. Thanks.

--
Josh

2019-06-14 18:24:50

by Josh Poimboeuf

Subject: Re: [PATCH 3/3] module: Improve module __ro_after_init handling

On Fri, Jun 14, 2019 at 04:14:53PM +0200, Petr Mladek wrote:
> > -void __module_enable_ro(const struct module *mod, bool after_init)
> > +static void __module_enable_ro(const struct module *mod)
> > {
> > if (!rodata_enabled)
> > return;
> > @@ -1973,15 +1973,15 @@ void __module_enable_ro(const struct module *mod, bool after_init)
> >
> > frob_rodata(&mod->init_layout, set_memory_ro);
> >
> > - if (after_init)
> > + if (mod->state == MODULE_STATE_LIVE)
> > frob_ro_after_init(&mod->core_layout, set_memory_ro);
>
> This works only now because __module_enable_ro() is called only from
> three locations (klp_init_object_loaded(), complete_formation(),
> and do_init_module()). And they are all called in a well-defined order
> from load_module().
>
> Only the final call in do_init_module() should touch the after_init
> section.
>
> IMHO, the cleanest solution would be to call frob_ro_after_init()
> from an extra __module_after_init_enable_ro() or so. This should be
> called only from a single place.

Agreed, that would be better. I'll be gone for a week but I'll make
these changes when I get back. Thanks.

--
Josh

2019-06-14 21:04:32

by Steven Rostedt

Subject: Re: [PATCH 1/3] module: Fix livepatch/ftrace module text permissions race

On Thu, 13 Jun 2019 20:07:22 -0500
Josh Poimboeuf <[email protected]> wrote:

> It's possible for livepatch and ftrace to be toggling a module's text
> permissions at the same time, resulting in the following panic:
>

[..]

> The above panic occurs when loading two modules at the same time with
> ftrace enabled, where at least one of the modules is a livepatch module:
>
>   CPU0                            CPU1
>   klp_enable_patch()
>     klp_init_object_loaded()
>       module_disable_ro()
>                                   ftrace_module_enable()
>                                     ftrace_arch_code_modify_post_process()
>                                       set_all_modules_text_ro()
>       klp_write_object_relocations()
>         apply_relocate_add()
>           *patches read-only code* - BOOM
>
> A similar race exists when toggling ftrace while loading a livepatch
> module.
>
> Fix it by ensuring that the livepatch and ftrace code patching
> operations -- and their respective permissions changes -- are protected
> by the text_mutex.
>
> Reported-by: Johannes Erdfelt <[email protected]>
> Fixes: 444d13ff10fb ("modules: add ro_after_init support")
> Signed-off-by: Josh Poimboeuf <[email protected]>
> Acked-by: Jessica Yu <[email protected]>
> Reviewed-by: Petr Mladek <[email protected]>
> Reviewed-by: Miroslav Benes <[email protected]>

This patch looks uncontroversial. I'm going to pull this one in and
start testing it. And if it works, I'll push to Linus.

-- Steve

2019-06-26 08:24:08

by Miroslav Benes

Subject: Re: [PATCH 1/3] module: Fix livepatch/ftrace module text permissions race

On Fri, 14 Jun 2019, Steven Rostedt wrote:

> On Thu, 13 Jun 2019 20:07:22 -0500
> Josh Poimboeuf <[email protected]> wrote:
>
> > It's possible for livepatch and ftrace to be toggling a module's text
> > permissions at the same time, resulting in the following panic:
> >
>
> [..]
>
> > The above panic occurs when loading two modules at the same time with
> > ftrace enabled, where at least one of the modules is a livepatch module:
> >
> >   CPU0                            CPU1
> >   klp_enable_patch()
> >     klp_init_object_loaded()
> >       module_disable_ro()
> >                                   ftrace_module_enable()
> >                                     ftrace_arch_code_modify_post_process()
> >                                       set_all_modules_text_ro()
> >       klp_write_object_relocations()
> >         apply_relocate_add()
> >           *patches read-only code* - BOOM
> >
> > A similar race exists when toggling ftrace while loading a livepatch
> > module.
> >
> > Fix it by ensuring that the livepatch and ftrace code patching
> > operations -- and their respective permissions changes -- are protected
> > by the text_mutex.
> >
> > Reported-by: Johannes Erdfelt <[email protected]>
> > Fixes: 444d13ff10fb ("modules: add ro_after_init support")
> > Signed-off-by: Josh Poimboeuf <[email protected]>
> > Acked-by: Jessica Yu <[email protected]>
> > Reviewed-by: Petr Mladek <[email protected]>
> > Reviewed-by: Miroslav Benes <[email protected]>
>
> This patch looks uncontroversial. I'm going to pull this one in and
> start testing it. And if it works, I'll push to Linus.

Triggered this on s390x. Masami CCed and Linus as well, because the patch
is in master branch and we are after -rc6. Thomas CCed because of commit
2d1e38f56622 ("kprobes: Cure hotplug lock ordering issues").

======================================================
WARNING: possible circular locking dependency detected
5.2.0-rc6 #1 Tainted: G O K
------------------------------------------------------
insmod/1393 is trying to acquire lock:
000000002fdee887 (cpu_hotplug_lock.rw_sem){++++}, at: stop_machine+0x2e/0x60

but task is already holding lock:
000000005b22fb82 (text_mutex){+.+.}, at: ftrace_run_update_code+0x2a/0xa0

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (text_mutex){+.+.}:
validate_chain.isra.21+0xb32/0xd70
__lock_acquire+0x4b8/0x928
lock_acquire+0x102/0x230
__mutex_lock+0x88/0x908
mutex_lock_nested+0x32/0x40
register_kprobe+0x254/0x658
init_kprobes+0x11a/0x168
do_one_initcall+0x70/0x318
kernel_init_freeable+0x456/0x508
kernel_init+0x22/0x150
ret_from_fork+0x30/0x34
kernel_thread_starter+0x0/0xc

-> #0 (cpu_hotplug_lock.rw_sem){++++}:
check_prev_add+0x90c/0xde0
validate_chain.isra.21+0xb32/0xd70
__lock_acquire+0x4b8/0x928
lock_acquire+0x102/0x230
cpus_read_lock+0x62/0xd0
stop_machine+0x2e/0x60
arch_ftrace_update_code+0x2e/0x40
ftrace_run_update_code+0x40/0xa0
ftrace_startup+0xb2/0x168
register_ftrace_function+0x64/0x88
klp_patch_object+0x1a2/0x290
klp_enable_patch+0x554/0x980
do_one_initcall+0x70/0x318
do_init_module+0x6e/0x250
load_module+0x1782/0x1990
__s390x_sys_finit_module+0xaa/0xf0
system_call+0xd8/0x2d0

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(text_mutex);
                               lock(cpu_hotplug_lock.rw_sem);
                               lock(text_mutex);
  lock(cpu_hotplug_lock.rw_sem);

*** DEADLOCK ***

3 locks held by insmod/1393:
#0: 00000000a9723159 (klp_mutex){+.+.}, at: klp_enable_patch+0x62/0x980
#1: 00000000bd173ffc (ftrace_lock){+.+.}, at: register_ftrace_function+0x56/0x88
#2: 000000005b22fb82 (text_mutex){+.+.}, at: ftrace_run_update_code+0x2a/0xa0

stack backtrace:
CPU: 0 PID: 1393 Comm: insmod Tainted: G O K 5.2.0-rc6 #1
Hardware name: IBM 2827 H43 400 (KVM/Linux)
Call Trace:
([<00000000682100b4>] show_stack+0xb4/0x130)
[<0000000068adeb8c>] dump_stack+0x94/0xd8
[<00000000682b563c>] print_circular_bug+0x1f4/0x328
[<00000000682b7264>] check_prev_add+0x90c/0xde0
[<00000000682b826a>] validate_chain.isra.21+0xb32/0xd70
[<00000000682ba018>] __lock_acquire+0x4b8/0x928
[<00000000682ba952>] lock_acquire+0x102/0x230
[<0000000068243c12>] cpus_read_lock+0x62/0xd0
[<0000000068336bf6>] stop_machine+0x2e/0x60
[<0000000068355b3e>] arch_ftrace_update_code+0x2e/0x40
[<0000000068355b90>] ftrace_run_update_code+0x40/0xa0
[<000000006835971a>] ftrace_startup+0xb2/0x168
[<0000000068359834>] register_ftrace_function+0x64/0x88
[<00000000682e8a9a>] klp_patch_object+0x1a2/0x290
[<00000000682e7d64>] klp_enable_patch+0x554/0x980
[<00000000681fcaa0>] do_one_initcall+0x70/0x318
[<000000006831449e>] do_init_module+0x6e/0x250
[<0000000068312b52>] load_module+0x1782/0x1990
[<0000000068313002>] __s390x_sys_finit_module+0xaa/0xf0
[<0000000068b03f30>] system_call+0xd8/0x2d0
INFO: lockdep is turned off.

If I am reading the code correctly, ftrace_run_update_code() takes
text_mutex now and then calls stop_machine(), which grabs
cpu_hotplug_lock for reading. do_optimize_kprobes() (see the comment
there) expects cpu_hotplug_lock to be held and takes text_mutex. Whoops.

Maybe there is a simple fix, but reverting the commit at this stage seems
warranted.

Miroslav

2019-06-26 13:39:37

by Petr Mladek

Subject: Re: [PATCH 1/3] module: Fix livepatch/ftrace module text permissions race

On Wed 2019-06-26 10:22:45, Miroslav Benes wrote:
> On Fri, 14 Jun 2019, Steven Rostedt wrote:
>
> > On Thu, 13 Jun 2019 20:07:22 -0500
> > Josh Poimboeuf <[email protected]> wrote:
> >
> > > It's possible for livepatch and ftrace to be toggling a module's text
> > > permissions at the same time, resulting in the following panic:
> > >
> >
> > [..]
> >
> > > The above panic occurs when loading two modules at the same time with
> > > ftrace enabled, where at least one of the modules is a livepatch module:
> > >
> > >   CPU0                            CPU1
> > >   klp_enable_patch()
> > >     klp_init_object_loaded()
> > >       module_disable_ro()
> > >                                   ftrace_module_enable()
> > >                                     ftrace_arch_code_modify_post_process()
> > >                                       set_all_modules_text_ro()
> > >       klp_write_object_relocations()
> > >         apply_relocate_add()
> > >           *patches read-only code* - BOOM
> > >
> > > A similar race exists when toggling ftrace while loading a livepatch
> > > module.
> > >
> > > Fix it by ensuring that the livepatch and ftrace code patching
> > > operations -- and their respective permissions changes -- are protected
> > > by the text_mutex.
> > >
> > > Reported-by: Johannes Erdfelt <[email protected]>
> > > Fixes: 444d13ff10fb ("modules: add ro_after_init support")
> > > Signed-off-by: Josh Poimboeuf <[email protected]>
> > > Acked-by: Jessica Yu <[email protected]>
> > > Reviewed-by: Petr Mladek <[email protected]>
> > > Reviewed-by: Miroslav Benes <[email protected]>
> >
> > This patch looks uncontroversial. I'm going to pull this one in and
> > start testing it. And if it works, I'll push to Linus.
>
> Triggered this on s390x. Masami CCed and Linus as well, because the patch
> is in master branch and we are after -rc6. Thomas CCed because of commit
> 2d1e38f56622 ("kprobes: Cure hotplug lock ordering issues").
>
> ======================================================
> WARNING: possible circular locking dependency detected
> 5.2.0-rc6 #1 Tainted: G O K
> ------------------------------------------------------
> insmod/1393 is trying to acquire lock:
> 000000002fdee887 (cpu_hotplug_lock.rw_sem){++++}, at: stop_machine+0x2e/0x60
>
> but task is already holding lock:
> 000000005b22fb82 (text_mutex){+.+.}, at: ftrace_run_update_code+0x2a/0xa0
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
>
> -> #1 (text_mutex){+.+.}:
> validate_chain.isra.21+0xb32/0xd70
> __lock_acquire+0x4b8/0x928
> lock_acquire+0x102/0x230
> __mutex_lock+0x88/0x908
> mutex_lock_nested+0x32/0x40
> register_kprobe+0x254/0x658
> init_kprobes+0x11a/0x168
> do_one_initcall+0x70/0x318
> kernel_init_freeable+0x456/0x508
> kernel_init+0x22/0x150
> ret_from_fork+0x30/0x34
> kernel_thread_starter+0x0/0xc
>
> -> #0 (cpu_hotplug_lock.rw_sem){++++}:
> check_prev_add+0x90c/0xde0
> validate_chain.isra.21+0xb32/0xd70
> __lock_acquire+0x4b8/0x928
> lock_acquire+0x102/0x230
> cpus_read_lock+0x62/0xd0
> stop_machine+0x2e/0x60
> arch_ftrace_update_code+0x2e/0x40
> ftrace_run_update_code+0x40/0xa0
> ftrace_startup+0xb2/0x168
> register_ftrace_function+0x64/0x88
> klp_patch_object+0x1a2/0x290
> klp_enable_patch+0x554/0x980
> do_one_initcall+0x70/0x318
> do_init_module+0x6e/0x250
> load_module+0x1782/0x1990
> __s390x_sys_finit_module+0xaa/0xf0
> system_call+0xd8/0x2d0
>
> other info that might help us debug this:
>
> Possible unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(text_mutex);
>                                lock(cpu_hotplug_lock.rw_sem);
>                                lock(text_mutex);
>   lock(cpu_hotplug_lock.rw_sem);

It is a similar problem to the one that was solved by 2d1e38f56622b9bb5af8
("kprobes: Cure hotplug lock ordering issues"). That commit solved
it by always taking cpu_hotplug_lock.rw_sem before text_mutex.
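
i.e. any path that needs both has to nest them like this (sketch):

        cpus_read_lock();               /* cpu_hotplug_lock */
        mutex_lock(&text_mutex);
        /* ... modify text ... */
        mutex_unlock(&text_mutex);
        cpus_read_unlock();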

If we follow the lock ordering then ftrace has to take text_mutex
only when stop_machine() is not called or from code called via
stop_machine() parameter.

This is not easy with the current design. For example, arm calls
set_all_modules_text_rw() already in ftrace_arch_code_modify_prepare(),
see arch/arm/kernel/ftrace.c. And it is called:

+ outside stop_machine() from ftrace_run_update_code()
+ without stop_machine() from ftrace_module_enable()

A conservative solution for 5.2 release would be to move text_mutex
locking from the generic kernel/trace/ftrace.c into
arch/x86/kernel/ftrace.c:

ftrace_arch_code_modify_prepare()
ftrace_arch_code_modify_post_process()

It should be enough to fix the original problem because
x86 is the only architecture that calls set_all_modules_text_rw()
in ftrace path and supports livepatching at the same time.

We would need to do some refactoring when livepatching is enabled
for arm or nds32.

The conservative solution:

diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 0927bb158ffc..33786044d5ac 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -22,6 +22,7 @@
#include <linux/init.h>
#include <linux/list.h>
#include <linux/module.h>
+#include <linux/memory.h>

#include <trace/syscall.h>

@@ -35,6 +36,7 @@

int ftrace_arch_code_modify_prepare(void)
{
+ mutex_lock(&text_mutex);
set_kernel_text_rw();
set_all_modules_text_rw();
return 0;
@@ -44,6 +46,7 @@ int ftrace_arch_code_modify_post_process(void)
{
set_all_modules_text_ro();
set_kernel_text_ro();
+ mutex_unlock(&text_mutex);
return 0;
}

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 38277af44f5c..d3034a4a3fcc 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -34,7 +34,6 @@
#include <linux/hash.h>
#include <linux/rcupdate.h>
#include <linux/kprobes.h>
-#include <linux/memory.h>

#include <trace/events/sched.h>

@@ -2611,12 +2610,10 @@ static void ftrace_run_update_code(int command)
{
int ret;

- mutex_lock(&text_mutex);
-
ret = ftrace_arch_code_modify_prepare();
FTRACE_WARN_ON(ret);
if (ret)
- goto out_unlock;
+ return ret;

/*
* By default we use stop_machine() to modify the code.
@@ -2628,9 +2625,6 @@ static void ftrace_run_update_code(int command)

ret = ftrace_arch_code_modify_post_process();
FTRACE_WARN_ON(ret);
-
-out_unlock:
- mutex_unlock(&text_mutex);
}

static void ftrace_run_modify_code(struct ftrace_ops *ops, int command,
@@ -5784,7 +5778,6 @@ void ftrace_module_enable(struct module *mod)
struct ftrace_page *pg;

mutex_lock(&ftrace_lock);
- mutex_lock(&text_mutex);

if (ftrace_disabled)
goto out_unlock;
@@ -5846,7 +5839,6 @@ void ftrace_module_enable(struct module *mod)
ftrace_arch_code_modify_post_process();

out_unlock:
- mutex_unlock(&text_mutex);
mutex_unlock(&ftrace_lock);

process_cached_mods(mod->name);

2019-06-26 14:46:29

by Thomas Gleixner

Subject: Re: [PATCH 1/3] module: Fix livepatch/ftrace module text permissions race

On Wed, 26 Jun 2019, Petr Mladek wrote:
> On Wed 2019-06-26 10:22:45, Miroslav Benes wrote:
> It is a similar problem to the one that was solved by 2d1e38f56622b9bb5af8
> ("kprobes: Cure hotplug lock ordering issues"). That commit solved
> it by always taking cpu_hotplug_lock.rw_sem before text_mutex.
>
> If we follow the lock ordering then ftrace has to take text_mutex
> only when stop_machine() is not called or from code called via
> stop_machine() parameter.
>
> This is not easy with the current design. For example, arm calls
> set_all_modules_text_rw() already in ftrace_arch_code_modify_prepare(),
> see arch/arm/kernel/ftrace.c. And it is called:
>
> + outside stop_machine() from ftrace_run_update_code()
> + without stop_machine() from ftrace_module_enable()
>
> A conservative solution for 5.2 release would be to move text_mutex
> locking from the generic kernel/trace/ftrace.c into
> arch/x86/kernel/ftrace.c:
>
> ftrace_arch_code_modify_prepare()
> ftrace_arch_code_modify_post_process()
>
> It should be enough to fix the original problem because
> x86 is the only architecture that calls set_all_modules_text_rw()
> in ftrace path and supports livepatching at the same time.

Looks correct, but I've paged out all the gory details vs. lock ordering in
that area.

Thanks,

tglx

2019-06-26 14:53:12

by Josh Poimboeuf

Subject: Re: [PATCH 1/3] module: Fix livepatch/ftrace module text permissions race

On Wed, Jun 26, 2019 at 04:44:45PM +0200, Thomas Gleixner wrote:
> On Wed, 26 Jun 2019, Petr Mladek wrote:
> > On Wed 2019-06-26 10:22:45, Miroslav Benes wrote:
> > It is a similar problem to the one that was solved by 2d1e38f56622b9bb5af8
> > ("kprobes: Cure hotplug lock ordering issues"). That commit solved
> > it by always taking cpu_hotplug_lock.rw_sem before text_mutex.
> >
> > If we follow the lock ordering then ftrace has to take text_mutex
> > only when stop_machine() is not called or from code called via
> > stop_machine() parameter.
> >
> > This is not easy with the current design. For example, arm calls
> > set_all_modules_text_rw() already in ftrace_arch_code_modify_prepare(),
> > see arch/arm/kernel/ftrace.c. And it is called:
> >
> > + outside stop_machine() from ftrace_run_update_code()
> > + without stop_machine() from ftrace_module_enable()
> >
> > A conservative solution for 5.2 release would be to move text_mutex
> > locking from the generic kernel/trace/ftrace.c into
> > arch/x86/kernel/ftrace.c:
> >
> > ftrace_arch_code_modify_prepare()
> > ftrace_arch_code_modify_post_process()
> >
> > It should be enough to fix the original problem because
> > x86 is the only architecture that calls set_all_modules_text_rw()
> > in ftrace path and supports livepatching at the same time.
>
> Looks correct, but I've paged out all the gory details vs. lock ordering in
> that area.

Looks good to me as well. Petr, can you post a proper patch?

--
Josh

2019-06-26 14:59:48

by Steven Rostedt

Subject: Re: [PATCH 1/3] module: Fix livepatch/ftrace module text permissions race

On Wed, 26 Jun 2019 16:44:45 +0200 (CEST)
Thomas Gleixner <[email protected]> wrote:

> >
> > It should be enough to fix the original problem because
> > x86 is the only architecture that calls set_all_modules_text_rw()
> > in ftrace path and supports livepatching at the same time.
>
> Looks correct, but I've paged out all the gory details vs. lock ordering in
> that area.

I don't believe ftrace_lock and text_mutex had an order before Petr's
initial patches. Reversing them shouldn't be an issue here. They were
basically both "leaf" mutexes (not grabbing any mutexes when they are
held).

-- Steve