2018-01-10 10:01:35

by Miroslav Benes

Subject: [PATCH v2] livepatch: Remove immediate feature

The immediate flag has been used to disable per-task consistency and to
patch all tasks immediately. It could be useful if the patch doesn't
change any function or data semantics.

However, it causes problems of its own. The consistency model is
currently broken with respect to immediate patches.

  func      a
  patches   1i
            2i
            3

When patch 3 is applied, only the 2i function is checked (by the stack
checking facility). There might be a task sleeping in 1i, though. Such
a task is migrated to 3, because we do not check 1i in
klp_check_stack_func() at all.
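
For illustration, the check at the core of the problem currently looks
roughly like this (an abridged sketch of klp_check_stack_func() in
kernel/livepatch/transition.c, not the complete function body). When
patching, only the function on top of the func_stack (2i in the example
above) is compared against the stack entries, and immediate funcs skip
the walk entirely:

	/* Abridged sketch -- declarations, error paths etc. elided. */
	if (func->immediate)
		return 0;	/* immediate funcs are never stack-checked */

	for (i = 0; i < trace->nr_entries; i++) {
		address = trace->entries[i];

		if (klp_target_state == KLP_PATCHED) {
			ops = klp_find_ops(func->old_addr);
			if (list_is_singular(&ops->func_stack)) {
				/*
				 * Patched for the first time: check the
				 * original function.
				 */
				func_addr = func->old_addr;
				func_size = func->old_size;
			} else {
				/*
				 * Otherwise only the previously patched
				 * func (2i) is checked, never 1i.
				 */
				struct klp_func *prev;

				prev = list_next_entry(func, stack_node);
				func_addr = (unsigned long)prev->new_func;
				func_size = prev->new_size;
			}
		}
		/* KLP_UNPATCHED case and the address comparison elided. */
	}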

The upcoming atomic replace feature would be easier to implement and
more reliable without immediate.

Thus, remove the immediate feature completely and save ourselves from
these problems.

Note that the force feature has a similar problem. However, it is
considered a last resort. If used, the administrator should not apply
any new live patches and should plan to reboot into an updated kernel.
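
One consequence that stays visible in the code (see the transition.c
hunk below): once a transition has been forced, the patch module's
reference is never dropped again, so the module cannot be unloaded:

	/*
	 * klp_forced set implies unbounded increase of module's ref count
	 * if the module is disabled/enabled in a loop.
	 */
	if (!klp_forced && klp_target_state == KLP_UNPATCHED)
		module_put(klp_transition_patch->mod);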

Architectures now need to provide HAVE_RELIABLE_STACKTRACE to fully
support livepatch.
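
With immediate gone, klp_register_patch() simply refuses to register a
patch on such architectures (see the core.c hunk below). For reference,
klp_have_reliable_stack() is a small helper along these lines in
include/linux/livepatch.h:

	static inline bool klp_have_reliable_stack(void)
	{
		return IS_ENABLED(CONFIG_STACKTRACE) &&
		       IS_ENABLED(CONFIG_HAVE_RELIABLE_STACKTRACE);
	}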

Signed-off-by: Miroslav Benes <[email protected]>
---
v2 changes:
- 2/2 from v1 dropped
- documentation

 Documentation/livepatch/livepatch.txt        | 89 +++++++++-------------------
 include/linux/livepatch.h                    |  4 --
 kernel/livepatch/core.c                      | 12 +---
 kernel/livepatch/transition.c                | 49 ++-------------
 samples/livepatch/livepatch-callbacks-demo.c | 15 -----
 samples/livepatch/livepatch-sample.c         | 15 -----
 samples/livepatch/livepatch-shadow-fix1.c    | 15 -----
 samples/livepatch/livepatch-shadow-fix2.c    | 15 -----
 8 files changed, 33 insertions(+), 181 deletions(-)

diff --git a/Documentation/livepatch/livepatch.txt b/Documentation/livepatch/livepatch.txt
index 896ba8941702..1ae2de758c08 100644
--- a/Documentation/livepatch/livepatch.txt
+++ b/Documentation/livepatch/livepatch.txt
@@ -72,8 +72,7 @@ example, they add a NULL pointer or a boundary check, fix a race by adding
a missing memory barrier, or add some locking around a critical section.
Most of these changes are self contained and the function presents itself
the same way to the rest of the system. In this case, the functions might
-be updated independently one by one. (This can be done by setting the
-'immediate' flag in the klp_patch struct.)
+be updated independently one by one.

But there are more complex fixes. For example, a patch might change
ordering of locking in multiple functions at the same time. Or a patch
@@ -125,12 +124,6 @@ Livepatch uses several complementary approaches to determine when it's
b) Patching CPU-bound user tasks. If the task is highly CPU-bound
then it will get patched the next time it gets interrupted by an
IRQ.
- c) In the future it could be useful for applying patches for
- architectures which don't yet have HAVE_RELIABLE_STACKTRACE. In
- this case you would have to signal most of the tasks on the
- system. However this isn't supported yet because there's
- currently no way to patch kthreads without
- HAVE_RELIABLE_STACKTRACE.

3. For idle "swapper" tasks, since they don't ever exit the kernel, they
instead have a klp_update_patch_state() call in the idle loop which
@@ -138,27 +131,16 @@ Livepatch uses several complementary approaches to determine when it's

(Note there's not yet such an approach for kthreads.)

-All the above approaches may be skipped by setting the 'immediate' flag
-in the 'klp_patch' struct, which will disable per-task consistency and
-patch all tasks immediately. This can be useful if the patch doesn't
-change any function or data semantics. Note that, even with this flag
-set, it's possible that some tasks may still be running with an old
-version of the function, until that function returns.
+Architectures which don't have HAVE_RELIABLE_STACKTRACE solely rely on
+the second approach. It's highly likely that some tasks may still be
+running with an old version of the function, until that function
+returns. In this case you would have to signal the tasks. This
+especially applies to kthreads. They may not be woken up and would need
+to be forced. See below for more information.

-There's also an 'immediate' flag in the 'klp_func' struct which allows
-you to specify that certain functions in the patch can be applied
-without per-task consistency. This might be useful if you want to patch
-a common function like schedule(), and the function change doesn't need
-consistency but the rest of the patch does.
-
-For architectures which don't have HAVE_RELIABLE_STACKTRACE, the user
-must set patch->immediate which causes all tasks to be patched
-immediately. This option should be used with care, only when the patch
-doesn't change any function or data semantics.
-
-In the future, architectures which don't have HAVE_RELIABLE_STACKTRACE
-may be allowed to use per-task consistency if we can come up with
-another way to patch kthreads.
+Unless we can come up with another way to patch kthreads, architectures
+without HAVE_RELIABLE_STACKTRACE are not considered fully supported by
+kernel livepatching.

The /sys/kernel/livepatch/<patch>/transition file shows whether a patch
is in transition. Only a single patch (the topmost patch on the stack)
@@ -197,6 +179,11 @@ modules is permanently disabled when the force feature is used. It cannot be
guaranteed there is no task sleeping in such module. It implies unbounded
reference count if a patch module is disabled and enabled in a loop.

+Moreover, using force may also affect future applications of live patches
+and cause even more harm to the system. The administrator should first
+consider simply cancelling a transition (see above). If force is used, a
+reboot should be planned and no more live patches applied.
+
3.1 Adding consistency model support to new architectures
---------------------------------------------------------

@@ -234,13 +221,6 @@ For adding consistency model support to new architectures, there are a
a good backup option for those architectures which don't have
reliable stack traces yet.

-In the meantime, patches for such architectures can bypass the
-consistency model by setting klp_patch.immediate to true. This option
-is perfectly fine for patches which don't change the semantics of the
-patched functions. In practice, this is usable for ~90% of security
-fixes. Use of this option also means the patch can't be unloaded after
-it has been disabled.
-

4. Livepatch module
===================
@@ -296,9 +276,6 @@ The patch is described by several structures that split the information
only for a particular object ( vmlinux or a kernel module ). Note that
kallsyms allows for searching symbols according to the object name.

- There's also an 'immediate' flag which, when set, patches the
- function immediately, bypassing the consistency model safety checks.
-
+ struct klp_object defines an array of patched functions (struct
klp_func) in the same object. Where the object is either vmlinux
(NULL) or a module name.
@@ -317,9 +294,6 @@ The patch is described by several structures that split the information
symbols are found. The only exception are symbols from objects
(kernel modules) that have not been loaded yet.

- Setting the 'immediate' flag applies the patch to all tasks
- immediately, bypassing the consistency model safety checks.
-
For more details on how the patch is applied on a per-task basis,
see the "Consistency model" section.

@@ -334,14 +308,12 @@ section "Livepatch life-cycle" below for more details about these
two operations.

Module removal is only safe when there are no users of the underlying
-functions. The immediate consistency model is not able to detect this. The
-code just redirects the functions at the very beginning and it does not
-check if the functions are in use. In other words, it knows when the
-functions get called but it does not know when the functions return.
-Therefore it cannot be decided when the livepatch module can be safely
-removed. This is solved by a hybrid consistency model. When the system is
-transitioned to a new patch state (patched/unpatched) it is guaranteed that
-no task sleeps or runs in the old code.
+functions. This is the reason why the force feature permanently disables
+the removal. Forced tasks may have entered the patched functions, but we
+cannot be sure they have returned. Therefore it cannot be decided when
+the livepatch module can be safely removed. When the system is successfully
+transitioned to a new patch state (patched/unpatched) without being
+forced, it is guaranteed that no task sleeps or runs in the old code.


5. Livepatch life-cycle
@@ -355,19 +327,12 @@ First, the patch is applied only when all patched symbols for already
loaded objects are found. The error handling is much easier if this
check is done before particular functions get redirected.

-Second, the immediate consistency model does not guarantee that anyone is not
-sleeping in the new code after the patch is reverted. This means that the new
-code needs to stay around "forever". If the code is there, one could apply it
-again. Therefore it makes sense to separate the operations that might be done
-once and those that need to be repeated when the patch is enabled (applied)
-again.
-
-Third, it might take some time until the entire system is migrated
-when a more complex consistency model is used. The patch revert might
-block the livepatch module removal for too long. Therefore it is useful
-to revert the patch using a separate operation that might be called
-explicitly. But it does not make sense to remove all information
-until the livepatch module is really removed.
+Second, it might take some time until the entire system is migrated with
+the hybrid consistency model being used. The patch revert might block
+the livepatch module removal for too long. Therefore it is useful to
+revert the patch using a separate operation that might be called
+explicitly. But it does not make sense to remove all information until
+the livepatch module is really removed.


5.1. Registration
diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index fc5c1be3f6f4..4754f01c1abb 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -40,7 +40,6 @@
* @new_func: pointer to the patched function code
* @old_sympos: a hint indicating which symbol position the old function
* can be found (optional)
- * @immediate: patch the func immediately, bypassing safety mechanisms
* @old_addr: the address of the function being patched
* @kobj: kobject for sysfs resources
* @stack_node: list node for klp_ops func_stack list
@@ -76,7 +75,6 @@ struct klp_func {
* in kallsyms for the given object is used.
*/
unsigned long old_sympos;
- bool immediate;

/* internal */
unsigned long old_addr;
@@ -137,7 +135,6 @@ struct klp_object {
* struct klp_patch - patch structure for live patching
* @mod: reference to the live patch module
* @objs: object entries for kernel objects to be patched
- * @immediate: patch all funcs immediately, bypassing safety mechanisms
* @list: list node for global list of registered patches
* @kobj: kobject for sysfs resources
* @enabled: the patch is enabled (but operation may be incomplete)
@@ -147,7 +144,6 @@ struct klp_patch {
/* external */
struct module *mod;
struct klp_object *objs;
- bool immediate;

/* internal */
struct list_head list;
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 1c3c9b27c916..41be6061b92f 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -366,11 +366,6 @@ static int __klp_enable_patch(struct klp_patch *patch)
/*
* A reference is taken on the patch module to prevent it from being
* unloaded.
- *
- * Note: For immediate (no consistency model) patches we don't allow
- * patch modules to unload since there is no safe/sane method to
- * determine if a thread is still running in the patched code contained
- * in the patch module once the ftrace registration is successful.
*/
if (!try_module_get(patch->mod))
return -ENODEV;
@@ -890,12 +885,7 @@ int klp_register_patch(struct klp_patch *patch)
if (!klp_initialized())
return -ENODEV;

- /*
- * Architectures without reliable stack traces have to set
- * patch->immediate because there's currently no way to patch kthreads
- * with the consistency model.
- */
- if (!klp_have_reliable_stack() && !patch->immediate) {
+ if (!klp_have_reliable_stack()) {
pr_err("This architecture doesn't have support for the livepatch consistency model.\n");
return -ENOSYS;
}
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index be5bfa533ee8..7c6631e693bc 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -82,7 +82,6 @@ static void klp_complete_transition(void)
struct klp_func *func;
struct task_struct *g, *task;
unsigned int cpu;
- bool immediate_func = false;

pr_debug("'%s': completing %s transition\n",
klp_transition_patch->mod->name,
@@ -104,16 +103,9 @@ static void klp_complete_transition(void)
klp_synchronize_transition();
}

- if (klp_transition_patch->immediate)
- goto done;
-
- klp_for_each_object(klp_transition_patch, obj) {
- klp_for_each_func(obj, func) {
+ klp_for_each_object(klp_transition_patch, obj)
+ klp_for_each_func(obj, func)
func->transition = false;
- if (func->immediate)
- immediate_func = true;
- }
- }

/* Prevent klp_ftrace_handler() from seeing KLP_UNDEFINED state */
if (klp_target_state == KLP_PATCHED)
@@ -132,7 +124,6 @@ static void klp_complete_transition(void)
task->patch_state = KLP_UNDEFINED;
}

-done:
klp_for_each_object(klp_transition_patch, obj) {
if (!klp_is_object_loaded(obj))
continue;
@@ -146,16 +137,11 @@ static void klp_complete_transition(void)
klp_target_state == KLP_PATCHED ? "patching" : "unpatching");

/*
- * See complementary comment in __klp_enable_patch() for why we
- * keep the module reference for immediate patches.
- *
- * klp_forced or immediate_func set implies unbounded increase of
- * module's ref count if the module is disabled/enabled in a loop.
+ * klp_forced set implies unbounded increase of module's ref count if
+ * the module is disabled/enabled in a loop.
*/
- if (!klp_forced && !klp_transition_patch->immediate &&
- !immediate_func && klp_target_state == KLP_UNPATCHED) {
+ if (!klp_forced && klp_target_state == KLP_UNPATCHED)
module_put(klp_transition_patch->mod);
- }

klp_target_state = KLP_UNDEFINED;
klp_transition_patch = NULL;
@@ -223,9 +209,6 @@ static int klp_check_stack_func(struct klp_func *func,
struct klp_ops *ops;
int i;

- if (func->immediate)
- return 0;
-
for (i = 0; i < trace->nr_entries; i++) {
address = trace->entries[i];

@@ -387,13 +370,6 @@ void klp_try_complete_transition(void)

WARN_ON_ONCE(klp_target_state == KLP_UNDEFINED);

- /*
- * If the patch can be applied or reverted immediately, skip the
- * per-task transitions.
- */
- if (klp_transition_patch->immediate)
- goto success;
-
/*
* Try to switch the tasks to the target patch state by walking their
* stacks and looking for any to-be-patched or to-be-unpatched
@@ -437,7 +413,6 @@ void klp_try_complete_transition(void)
return;
}

-success:
/* we're done, now cleanup the data structures */
klp_complete_transition();
}
@@ -457,13 +432,6 @@ void klp_start_transition(void)
klp_transition_patch->mod->name,
klp_target_state == KLP_PATCHED ? "patching" : "unpatching");

- /*
- * If the patch can be applied or reverted immediately, skip the
- * per-task transitions.
- */
- if (klp_transition_patch->immediate)
- return;
-
/*
* Mark all normal tasks as needing a patch state update. They'll
* switch either in klp_try_complete_transition() or as they exit the
@@ -513,13 +481,6 @@ void klp_init_transition(struct klp_patch *patch, int state)
pr_debug("'%s': initializing %s transition\n", patch->mod->name,
klp_target_state == KLP_PATCHED ? "patching" : "unpatching");

- /*
- * If the patch can be applied or reverted immediately, skip the
- * per-task transitions.
- */
- if (patch->immediate)
- return;
-
/*
* Initialize all tasks to the initial patch state to prepare them for
* switching to the target state.
diff --git a/samples/livepatch/livepatch-callbacks-demo.c b/samples/livepatch/livepatch-callbacks-demo.c
index 3d115bd68442..72f9e6d1387b 100644
--- a/samples/livepatch/livepatch-callbacks-demo.c
+++ b/samples/livepatch/livepatch-callbacks-demo.c
@@ -197,21 +197,6 @@ static int livepatch_callbacks_demo_init(void)
{
int ret;

- if (!klp_have_reliable_stack() && !patch.immediate) {
- /*
- * WARNING: Be very careful when using 'patch.immediate' in
- * your patches. It's ok to use it for simple patches like
- * this, but for more complex patches which change function
- * semantics, locking semantics, or data structures, it may not
- * be safe. Use of this option will also prevent removal of
- * the patch.
- *
- * See Documentation/livepatch/livepatch.txt for more details.
- */
- patch.immediate = true;
- pr_notice("The consistency model isn't supported for your architecture. Bypassing safety mechanisms and applying the patch immediately.\n");
- }
-
ret = klp_register_patch(&patch);
if (ret)
return ret;
diff --git a/samples/livepatch/livepatch-sample.c b/samples/livepatch/livepatch-sample.c
index 84795223f15f..2d554dd930e2 100644
--- a/samples/livepatch/livepatch-sample.c
+++ b/samples/livepatch/livepatch-sample.c
@@ -71,21 +71,6 @@ static int livepatch_init(void)
{
int ret;

- if (!klp_have_reliable_stack() && !patch.immediate) {
- /*
- * WARNING: Be very careful when using 'patch.immediate' in
- * your patches. It's ok to use it for simple patches like
- * this, but for more complex patches which change function
- * semantics, locking semantics, or data structures, it may not
- * be safe. Use of this option will also prevent removal of
- * the patch.
- *
- * See Documentation/livepatch/livepatch.txt for more details.
- */
- patch.immediate = true;
- pr_notice("The consistency model isn't supported for your architecture. Bypassing safety mechanisms and applying the patch immediately.\n");
- }
-
ret = klp_register_patch(&patch);
if (ret)
return ret;
diff --git a/samples/livepatch/livepatch-shadow-fix1.c b/samples/livepatch/livepatch-shadow-fix1.c
index fbe0a1f3d99b..830c55514f9f 100644
--- a/samples/livepatch/livepatch-shadow-fix1.c
+++ b/samples/livepatch/livepatch-shadow-fix1.c
@@ -133,21 +133,6 @@ static int livepatch_shadow_fix1_init(void)
{
int ret;

- if (!klp_have_reliable_stack() && !patch.immediate) {
- /*
- * WARNING: Be very careful when using 'patch.immediate' in
- * your patches. It's ok to use it for simple patches like
- * this, but for more complex patches which change function
- * semantics, locking semantics, or data structures, it may not
- * be safe. Use of this option will also prevent removal of
- * the patch.
- *
- * See Documentation/livepatch/livepatch.txt for more details.
- */
- patch.immediate = true;
- pr_notice("The consistency model isn't supported for your architecture. Bypassing safety mechanisms and applying the patch immediately.\n");
- }
-
ret = klp_register_patch(&patch);
if (ret)
return ret;
diff --git a/samples/livepatch/livepatch-shadow-fix2.c b/samples/livepatch/livepatch-shadow-fix2.c
index 53c1794bdc5f..ff9948f0ec00 100644
--- a/samples/livepatch/livepatch-shadow-fix2.c
+++ b/samples/livepatch/livepatch-shadow-fix2.c
@@ -128,21 +128,6 @@ static int livepatch_shadow_fix2_init(void)
{
int ret;

- if (!klp_have_reliable_stack() && !patch.immediate) {
- /*
- * WARNING: Be very careful when using 'patch.immediate' in
- * your patches. It's ok to use it for simple patches like
- * this, but for more complex patches which change function
- * semantics, locking semantics, or data structures, it may not
- * be safe. Use of this option will also prevent removal of
- * the patch.
- *
- * See Documentation/livepatch/livepatch.txt for more details.
- */
- patch.immediate = true;
- pr_notice("The consistency model isn't supported for your architecture. Bypassing safety mechanisms and applying the patch immediately.\n");
- }
-
ret = klp_register_patch(&patch);
if (ret)
return ret;
--
2.15.1


2018-01-11 04:35:45

by Josh Poimboeuf

Subject: Re: [PATCH v2] livepatch: Remove immediate feature

On Wed, Jan 10, 2018 at 11:01:28AM +0100, Miroslav Benes wrote:
> The immediate flag has been used to disable per-task consistency and to
> patch all tasks immediately. It could be useful if the patch doesn't
> change any function or data semantics.
>
> However, it causes problems of its own. The consistency model is
> currently broken with respect to immediate patches.
>
>   func      a
>   patches   1i
>             2i
>             3
>
> When patch 3 is applied, only the 2i function is checked (by the stack
> checking facility). There might be a task sleeping in 1i, though. Such
> a task is migrated to 3, because we do not check 1i in
> klp_check_stack_func() at all.
>
> The upcoming atomic replace feature would be easier to implement and
> more reliable without immediate.
>
> Thus, remove the immediate feature completely and save ourselves from
> these problems.
>
> Note that the force feature has a similar problem. However, it is
> considered a last resort. If used, the administrator should not apply
> any new live patches and should plan to reboot into an updated kernel.
>
> Architectures now need to provide HAVE_RELIABLE_STACKTRACE to fully
> support livepatch.
>
> Signed-off-by: Miroslav Benes <[email protected]>
> ---
> v2 changes:
> - 2/2 from v1 dropped
> - documentation
>
> Documentation/livepatch/livepatch.txt        | 89 +++++++++-------------------
> include/linux/livepatch.h                    |  4 --
> kernel/livepatch/core.c                      | 12 +---
> kernel/livepatch/transition.c                | 49 ++-------------
> samples/livepatch/livepatch-callbacks-demo.c | 15 -----
> samples/livepatch/livepatch-sample.c         | 15 -----
> samples/livepatch/livepatch-shadow-fix1.c    | 15 -----
> samples/livepatch/livepatch-shadow-fix2.c    | 15 -----
> 8 files changed, 33 insertions(+), 181 deletions(-)

Nice diffstat.

Acked-by: Josh Poimboeuf <[email protected]>

--
Josh

2018-01-11 10:00:46

by Jiri Kosina

Subject: Re: [PATCH v2] livepatch: Remove immediate feature

On Wed, 10 Jan 2018, Miroslav Benes wrote:

> The immediate flag has been used to disable per-task consistency and to
> patch all tasks immediately. It could be useful if the patch doesn't
> change any function or data semantics.
>
> However, it causes problems of its own. The consistency model is
> currently broken with respect to immediate patches.
>
>   func      a
>   patches   1i
>             2i
>             3
>
> When patch 3 is applied, only the 2i function is checked (by the stack
> checking facility). There might be a task sleeping in 1i, though. Such
> a task is migrated to 3, because we do not check 1i in
> klp_check_stack_func() at all.
>
> The upcoming atomic replace feature would be easier to implement and
> more reliable without immediate.
>
> Thus, remove the immediate feature completely and save ourselves from
> these problems.
>
> Note that the force feature has a similar problem. However, it is
> considered a last resort. If used, the administrator should not apply
> any new live patches and should plan to reboot into an updated kernel.
>
> Architectures now need to provide HAVE_RELIABLE_STACKTRACE to fully
> support livepatch.
>
> Signed-off-by: Miroslav Benes <[email protected]>
> ---
> v2 changes:
> - 2/2 from v1 dropped
> - documentation

It's very refreshing to finally handle something non-toxic (no PTI, no
retpolines) :)

Applied to for-4.16/remove-immediate, thanks.

--
Jiri Kosina
SUSE Labs